Aerospace Systems Engineer

2010

Tools are worthless

Ultimately, any analysis technique, tool set, etc. will work. But just spending thousands on a tool set won’t fix a philosophical problem, either. What matters is commitment that SE is valuable and must be done. It’s “pay me now or pay me later”.







Test Development

Much of what we test is transitions from one state/mode/condition to another. These have several items in common that must be considered in crafting a valid test. In the following I use “state” generically; it could be the state of a component (active/inactive, enabled/disabled) or a system or sub-system mode (standby, operate, maintenance).

Steps toward developing a valid test (a minimal code sketch follows the list):

1. Define the state.
   1. What are the salient characteristics of the state? What defines being in the state?
   2. What are the monitors, and what are their values when in the state?

2. Define any behavioral constraints regarding entering the state. These may be transition dependencies (a state diagram may be necessary).

3. Validation has two aspects:
   1. Is the state achieved? Defined by the monitors for the state.
   2. Is the transition behavior executed correctly? Defined by the behavior constraints.

4. Define the test procedure:
   1. Preconditions (these may need to be set, to confirm a prior function executed successfully).
   2. Action (trigger).
   3. Result (see validation).
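
A minimal sketch of these steps in Python. The device interface (read_monitor, send_command), the monitor names, and the state definitions are all hypothetical; a trivial stub stands in for the real unit under test so the sketch runs as-is:

```python
# Step 1: define each state by its monitors and their expected values.
STATES = {
    "standby": {"power_mode": "LOW", "heater": "OFF"},
    "operate": {"power_mode": "FULL", "heater": "ON"},
}

class StubDevice:
    """Stand-in for the real unit under test (hypothetical interface)."""
    def __init__(self):
        self.monitors = dict(STATES["standby"])

    def read_monitor(self, name):
        return self.monitors[name]

    def send_command(self, cmd):
        # Step 2: behavioral constraint -- only standby may transition
        # to operate (a state diagram would capture such dependencies).
        if cmd == "ENTER_OPERATE" and self.monitors["power_mode"] == "LOW":
            self.monitors.update(STATES["operate"])

def in_state(device, state):
    # Validation aspect 3.1: is the state achieved, per its monitors?
    return all(device.read_monitor(m) == v for m, v in STATES[state].items())

def test_standby_to_operate():
    device = StubDevice()
    # Step 4.1: preconditions -- confirm the starting state.
    assert in_state(device, "standby"), "precondition failed: not in standby"
    # Step 4.2: action (trigger).
    device.send_command("ENTER_OPERATE")
    # Step 4.3: result -- validate against the target state's monitors.
    assert in_state(device, "operate"), "operate state not achieved"

test_standby_to_operate()
```

In a real test the transition behavior would also be checked against its constraints (e.g., timing), per step 3.2.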




Interface Validation

ICD Verification and Audits
ICDs define the interface between two systems. ICDs address the Physical, Protocol, and Application level details of the interface. ICDs provide design information to the designers implementing the system functionality. A hypothetical way to catalogue these elements is sketched after the list.
1.  Where there are no options there is no design latitude and no ambiguity (e.g., the physical envelope of a device).
2.  Where design latitude provides implementation options, the ICD documents the design agreements (e.g., device B configured as 1553 RT 5).
3.  Some systems may have utilization options (e.g., either control directive 123 or control directive ABC will turn on the device; typically there will be factors to consider in the choice of these control directives). IRSs are sometimes used to enforce a particular implementation.
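
A minimal sketch of such a catalogue in Python; the levels, element descriptions, and latitude category names are illustrative assumptions, not drawn from any real ICD:

```python
from dataclasses import dataclass
from enum import Enum

class Latitude(Enum):
    NONE = 1         # no options: physical fact, no design latitude
    AGREEMENT = 2    # latitude resolved by a documented design agreement
    UTILIZATION = 3  # utilization options left open to the implementer

@dataclass
class IcdElement:
    level: str        # Physical, Protocol, or Application
    description: str
    latitude: Latitude

# Hypothetical entries, one per category above.
elements = [
    IcdElement("Physical", "Device envelope: 10 x 12 x 4 cm", Latitude.NONE),
    IcdElement("Protocol", "Device B configured as 1553 RT 5", Latitude.AGREEMENT),
    IcdElement("Application", "Turn-on via directive 123 or ABC", Latitude.UTILIZATION),
]

for e in elements:
    print(f"{e.level:12} {e.latitude.name:12} {e.description}")
```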

Verification

ICDs

ICDs need not be verified. We do, of course, want to demonstrate and convince ourselves that the ICD is, in fact, correct! That is different from verification.

The design required to achieve the functionality defined by the requirements (Spec A, Spec B, and their parent) demands that the ICD be adhered to; if it is not, the system won't work. Therefore the successful verification of the requirement indicates that the ICD was adhered to. There is no way for the function to execute successfully unless the interface is adhered to. Where the interface provides implementation latitude (#3 above) and a specific implementation is not identified in the requirement, the designer chooses; verification doesn't care which choice is made.

IRS

An IRS contains requirements: "shall" statements. If the IRS approach is chosen, the IRS-imposed design implementation (the IRS requirement) is verified.

Managing ICD Development and Implementation

What are the salient items of the ICD?

Differentiating between descriptive material and essential interface design elements in the ICD is important. The spec-writing convention of using "will" helps identify these necessary interface design elements; the "wills" can then be tracked and managed (a tracking sketch follows). "Wills" are not verified, but we do want to assure that they are correct.
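
A minimal tracking sketch in Python, assuming plain-text ICD content laid out as a section number followed by a sentence; the sample statements are invented for illustration:

```python
import re

icd_text = """
3.2.1 The remote terminal will respond within 12 ms.
3.2.2 This section describes the message set.
3.2.3 Device B will be configured as 1553 RT 5.
"""

# Capture lines containing "will", along with their section numbers,
# so the "wills" can be listed and managed as a tracking set.
wills = re.findall(r"^([\d.]+)\s+(.*\bwill\b.*)$", icd_text,
                   re.MULTILINE | re.IGNORECASE)

for section, statement in wills:
    print(f"{section}: {statement}")
```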

How do we ensure ICD items are correct?
This is often what is meant by "verifying" the ICD. It is really a method of confirming that the interface is correctly documented. It is NOT verification in the sense that we verify requirements.
*      Peer reviews by subject matter experts and Systems Engineering provide a level of assurance of the correctness of the data.
*      Placing the ICD under configuration control, and the process of "boarding" the ICD, add another layer.
*      Component acceptance testing provides an opportunity to confirm that the expected interface behavior agrees with the documented behavior. This also provides risk reduction prior to assembly.
*      These things assure us, and the customer, that we are managing our interface risk.

How do we ensure both sides know what "truth" is?
Achieving successful functionality requires that both sides of the interface have the same understanding of it. There should be only one interface document between interfacing entities. Auditing the respective requirements specifications of the interfacing products assures that each references the appropriate section of the same, correct ICD for the same functions.

ICD validation occurs when the interfacing products' requirements are verified.

A validation plan, mapping the interfacing products' functional requirements to the associated ICD "wills", provides a mechanism to track validation of adherence to the ICD. A minimal sketch of such a mapping follows.
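
A minimal sketch of such a mapping in Python; the requirement and "will" identifiers are hypothetical:

```python
# Validation plan: which ICD "wills" each functional requirement exercises.
req_to_wills = {
    "SpecA-REQ-101": ["ICD-3.2.1", "ICD-3.2.3"],
    "SpecA-REQ-102": ["ICD-3.2.1"],
}

verified_reqs = {"SpecA-REQ-101"}  # requirements verified so far

all_wills = {"ICD-3.2.1", "ICD-3.2.3", "ICD-3.4.2"}
covered = {w for r in verified_reqs for w in req_to_wills.get(r, [])}

print("validated wills: ", sorted(covered))
print("not yet validated:", sorted(all_wills - covered))
```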

One should NEVER SAY we will have a "validation" product for ICDs. That would just open us up to mindless tracking of a meaningless effort. The true value is in the engineering exercise of incrementally defining, coordinating, documenting, communicating, and evaluating the interface agreements so that there are NO SURPRISES when the two things interface for the first time.

Analysis
Analysis follows the standard SE approach; the tools and implementations will vary.







Systems Engineering - Where’d we go wrong?

We haven’t expected Systems Engineers to do Systems Engineering in quite a while.
  1. We don’t derive the system, we assemble it.
  2. Artificial work partitions have developed within Systems Engineering: we don’t document the system, we write disconnected specs, ICDs (and, maybe, conops).
  3. Tools, such as DOORS or modeling tools, have become the ends, not the means.
We have lost the understanding that Systems Engineering is the process of deriving a viable system design and producing the products that communicate that design.
We focus on symptoms rather than the problem
  1. we see an integration problem when we really have a design problem
  2. we see requirement problems when we really have a lack of an operations concept
  3. we see a verification problem when we really have a requirement problem
We have abdicated Systems Engineering to others: Architects, Product Engineers, SW Engineers, and “ivory tower” experts.
  1. SEs just develop isolated documentation devoid of responsibility for, or understanding of, the system
  2. SW by its nature must integrate and be functional – the best SEs are currently found in SW engineering
  3. Esoteric discussions of architectural frameworks (yes, there is more than DoDAF) and of modeling techniques and tools are interesting but impractical to the program manager, especially when they must be implemented by a staff that doesn’t understand the Systems Engineering problem space.
  4. Good program managers justifiably resist expensive new tools, techniques or methods especially when wielded by, or foisted on, a staff ill-equipped to evaluate their efficacy or apply them productively.
We have lost technical management
  • We manage schedules and people, but we don’t manage the technical process. Technical management (usually the CSE) and program management need to work hand in glove.


A Practical Recovery Approach
A practical way forward is needed in the short term. Ultimately we need a more comprehensive reintroduction of, and appreciation for, Systems Engineering practices.
Two key things are needed, and competency in them both is readily achievable.
Write requirements well!

  • The 80% solution: “Upon x, the system shall do y, within z.” (A hypothetical pattern check is sketched below.)
  • If this were the only improvement implemented, we would be greatly improved. And it’s easy.
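
A hypothetical pattern check in Python. The regular expression is a rough stand-in for the “Upon x, the system shall do y, within z” template, not a full requirements grammar, and the sample statements are invented:

```python
import re

# Trigger, action, and tolerance, in the order the template prescribes.
PATTERN = re.compile(r"^Upon .+, the system shall .+, within .+\.$")

requirements = [
    "Upon receipt of the arm command, the system shall enable the actuator, within 50 ms.",
    "The system should be fast.",  # vague: no trigger, no tolerance
]

for req in requirements:
    verdict = "ok" if PATTERN.match(req) else "REWORK"
    print(f"{verdict}: {req}")
```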
Consciously, deliberately, knowingly - derive the system design
  • This is a team effort! Commitment to this Systems Engineering responsibility would have tremendous rewards.
  • We have talented young engineers; they will figure out (or reinvent) how to do it, but first they have to know it’s our responsibility. Deriving the system means starting from the system’s objectives and developing the necessary architecture and behavior, which are the requirements to fulfill the objectives.
  • I suggest employing Use Case elaboration to derive and document Requirements (specifications), Architecture (ICDs), and Behavior (OCDs) holistically. But don’t go overboard on cutting-edge (bloated) tools and techniques.