# The Hitchhiker's Guide to Systems Engineering in Startups
It was a gray and wet November of 2015 when I visited Helsinki for the first time. Perhaps not the greatest time of the year to visit this beautiful city, and it was a puzzling experience. I spent three days visiting the company and the people willing to hire me, in a building that was a shared office space with other startups, right at the border between Helsinki and Espoo (on the Espoo side). Three intense days in which I presented myself, what I knew, what I thought I could contribute to the project, and so on. I must admit, I had never considered myself an adventurous person, but there I was, “selling” myself and thinking about quitting my (good) job in a well-respected company back in a beautiful city in Patagonia. Taking this offer would mean getting rid of all my stuff and moving to the other side of the world in more or less two months. During those three days in Finland, I confirmed there was a totally different way of doing space systems; I had heard and researched about NewSpace for some time, and I was aware of what it was, at least from a theoretical perspective. But this looked different: like a university project, but with a strong and unusual (for me) commercial orientation that university projects never have. It felt like a mixture of corner-cutting, naivety, and creativity; a combo that looked very appealing to me. Having worked in “classic” space for quite some years, where time tends to go very slowly, things are tested multiple times, and documents are the main _produce_ of projects, this opportunity looked like a chance to bring back the joy of doing engineering. And here probably lies the key: time pace. I can say without a single doubt that I am an anxious person; I continuously question why things have to take so long, whatever it is. I consider time a precious resource: not because my time is particularly precious, but because life is just too short.
I never liked the idea of having to wait five years to see a project launched into space. I didn’t want to wait that long anymore. NewSpace was the answer to my engineering anxiety; it moved at speeds too interesting not to be part of.
> [!note]
> One of the most shocking things for me about space startups was how risk, a (buzz)word you hear a lot in classic space, meant nothing. Risk-taking was so extensive that the term was seldom mentioned anywhere; it was taken for granted that risks were being taken all over the place. It was a bit of a “shame” to talk about risk; people would silently giggle.
Notably, the project I was joining had *no* customers. Zero. Nobody. Nadie. This was probably the biggest mental adjustment I had to make; the startup I was about to sign a contract with was planning to build a spacecraft nobody asked for. Quite some money and several tens of thousands of man-hours were about to be poured into something that had no customer, at least just yet. My classic “prime contractor” space brain was short-circuiting, because it was expecting the customer to exist and do, you know, what customers do: place endlessly changing and conflicting requirements, complain about the schedule, ask for eternal meetings and eternal reviews, be the pain in the neck they usually are. All of a sudden, most of my Systems Engineering bibliography turned pointless. Each one of those books would start with: “Identify customer needs”. These books were taking customers for granted and did not provide any hint on where to get customers from if you didn’t have them.
When there are no customers, you really miss them. But even though customers were still not there, there were lots of ideas. There were scribbles on whiteboards. Some breadboard experiments here and there. There was this strong belief that the project, if successful, would unlock great applications of Earth observation data and grant years of dominance in an untapped market; there was a vision and passion. So, sails were set towards designing, building, and flying a proof-of-concept to validate the overall idea; if it did not work, things would turn more than ugly. It was all or nothing.
Many startups around the world work the same way. There is vision, there is commitment, excitement, and some (limited) amount of funds, but there are no revenues nor customers, so their runways are finite, meaning that time is very short and the abyss is always visible on the near horizon. In startups, time to market is king; being late to deliver can mean the end of it. Being slow to hit the market can mean a competitor taking a bite of the cake; i.e., you’re out. This is why startups are fascinating: as if it were not challenging enough to build a complex thing like a satellite, it has to be built as fast as possible, as cheaply as possible, and as performing as possible; i.e., a delicate balance inside the death triangle of cost, schedule, and performance. And the most awesome part of it (which now I know, but my 2015 self was constantly wondering about) is that it is possible. But it has to be done the _NewSpace way_, otherwise you are cooked. During those years, while I had the privilege of being at the core of the NewSpace enterprise of transforming a scribble into an orbiting spacecraft, I took enough time to look at the whole process from a distance to try to capture the essence of it, for others to be able to succeed under similar circumstances. I wanted to understand what the NewSpace _way_ is. I wanted to comprehend which shortcuts to take, where to stop, and what to discard when it was time to discard it. I feel many other industries can benefit from the NewSpace way as well. Even _classic_ space can. The historic space industry has a perfectionist mindset in which everything needs to be perfect before flight. NewSpace challenges all that. In NewSpace, you polish things as you fly.
In hindsight, one factor I did not realize until much later in the process is how much my attention was centered on the system under design (the spacecraft) and how much I (luckily) overlooked the real complexity of what I was about to embark on; I say luckily because, had I realized this back in that November of 2015, I would have never taken the job offer. ==All my thinking was put on the thing that was supposed to be launched; but the reality is we ended up building a much broader system: a tech organization==. A highly creative and sometimes chaotic organization with dependencies and couplings I was initially blissfully ignorant of.
I have interviewed hundreds of people for many different positions throughout these years while leading an engineering team. In most cases, the interviewee was coming from a bigger, more established organization. Knowing this, I liked to ask them whether they were actually ready to leave their comfortable and mature organization to come to a more fluid and dynamic (to say the least) environment. In many cases, interviewees would try to convince me that they were working in a particularly “fluid” department of their multinational corporation and that this department behaved “pretty much like a startup”. A Skunkworks group is not a startup; an R&D department may be highly dynamic, but it’s not a startup either. The main difference lies in the _big system_, which we will describe next.
## The Big System
Years ago, I was reading a magazine and there was an advertisement with a picture of a Formula 1 car without its tires, resting on top of four bricks. The ad asked: “How much horsepower again?”. The ad was simple but spot on: a racing car without tires cannot race; it goes nowhere. Connecting that to what we do: system dependencies are as strong as they are frequently taken for granted. In racing cars, engines and aerodynamics usually capture all the attention; tires never get the spotlight, although they are essential for the car to perform. In the same way, in NewSpace we tend to focus on the thing that goes spectacularly up in a rocket with lots of flames and smoke. But a true space system is way more than that. A spacecraft is nothing without the complex ecosystem of other _things_ it needs around it to be able to perform. This is a common pitfall in NewSpace orgs: partial awareness of what the _big system_ looks like. I guess it is a defense mechanism as well. Complexity can be daunting, so we tend to look away; the problem is when we never look back at it. I will revisit the _big system_ concept several times throughout the book. I have seen many NewSpace companies fall into the same trap, finding out too late in the process that they have not included important building blocks to make their System/Product feasible. Some references call these components enabling systems. Enabling systems are systems that facilitate the activities towards developing the system-of-interest. They provide services that are needed by the system-of-interest during one or more life cycle stages, although they are not a direct element of the operational environment (INCOSE, 2015). Examples of enabling systems include development environments, production systems, logistics, etc. They enable the progress of the main system under design.
The relationship between the enabling system and the system-of-interest may be one where there is interaction between both systems, or one where the system-of-interest simply receives the services it needs when it needs them. Enabling systems strongly depend on perspective: what is an enabling system for me might be the system-of-interest for the engineer working on that system. Perspective, or point of view, is critical in any complex system; things (architecture, requirements, etc.) look very different depending on where you look from.
> [!info]
> Years ago, an early-stage space startup was a few weeks away from launching their first proof-of-concept spacecraft, only to realize that they had not properly arranged a ground segment to receive the spacecraft's UHF beacon and telemetry. Very last-minute arrangements with the Ham Radio community around the world (partially) solved the problem. A spacecraft without a ground segment is like the racing car on top of the bricks from the ad.
To complete the _big system_ picture, we must add the organization itself to the “bill” as well; the interaction between the organization and the system under design should not go unnoticed. Isn’t the organization an _enabling system_ after all? If we refer to the definition above (systems that provide services to the system-of-interest), it fits the bill. In the big system, one key thing to analyze and understand is how the way people group together affects the things we design, and also how the technical artifacts we create influence the way we group as people. Engineering witchcraft has two clearly defined sides of the same coin: the technical and the social. Engineering, at the end of the day, is a social activity. But a social activity that needs to happen in natural, spontaneous, non-artificial ways.
If the organization is one more part of the _big_ _system_ we must come up with, what are the laws that govern its behavior? Are organizations deterministic and/or predictable? How does the organization couple with the machines it spawns?
There has been a bit of an obsession with treating organizations as mechanistic boxes that transform inputs into outputs by applying some sort of “processes” or “laws”. Although naive and over-simplistic, this reasoning is still thought-provoking. Organizations do have inputs and outputs. What happens inside the box?
## Are Startups Steam Machines?
I wouldn’t be totally off by saying that an organization is _kind of_ a system. You can assign a “systemic” entity to it, i.e. _systemize_ it, regardless of whether you know how it internally works or not. If you step out of your office and watch it from a distance, that assembly of people, computers, meeting rooms, coffee machines, and information is kind of a thing on its own: let’s call it a system. It takes (consumes) some stuff as input and produces something else as an output. Some scholars understood the same, so there has been historical attention and insistence from them on applying quasi-mechanistic methods to analyze and understand social structures like organizations, as well as their management. One example is a bit of an eccentric concept called cybernetics. Cybernetics is a term coined by Norbert Wiener more than seventy years ago and adapted to management by Anthony Stafford Beer (1929-2002). Cybernetics emphasizes the existence of feedback loops, or circular cause-effect relationships, and the central role information plays in achieving control or self-regulation. At its core, cybernetics is a once-glorified (although not so much anymore) attempt to adapt to organizations the principles that control theory has historically employed to control deterministic artificial devices such as steam machines or thermostats. Maxwell (who else?) was the first to define a mathematical foundation of control theory in his famous paper _On Governors_ (Maxwell, 1867). Before Maxwell's paper, closed-loop control was more or less an act of witchcraft; machines would go unstable and oscillate without explanation. Work on understanding closed-loop control was mostly heuristic until Maxwell came along; his was mostly a mathematical work. All in all, successful closed-loop control of machines heavily relies on the intrinsic “determinism” of the laws of physics, at least at the macro level (a quantum physicist would have a stroke just reading that).
The macro “model” of the world is deterministic, even though it is just a simplification. Wiener's cybernetics concept came to emphasize that feedback control is an act of communication between entities exchanging information (Wiener, 1985). Cybernetic management (as proposed by Beer) suggests that self-regulation can be obtained in organizations the same way a thermostat keeps the temperature in a room. Strange, isn’t it? As if people could be controlled like a thermostat. It seems everyone was so fascinated by closed-loop control back in the day that they thought it could be applied anywhere, including social systems.
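To ground the thermostat analogy, here is a minimal sketch (my own illustration; the function name and all constants are invented for the example, not taken from any source cited here) of the kind of deterministic feedback loop cybernetics generalized from: a proportional controller heating a leaky room.

```python
# A proportional thermostat: the archetypal deterministic feedback loop.
# All constants here are invented for illustration.
def simulate_thermostat(setpoint, initial_temp, gain=0.3, leak=0.05, steps=100):
    """Each step: measure the error, apply heat proportional to it,
    while the room leaks heat towards a 10 C exterior."""
    temp = initial_temp
    history = []
    for _ in range(steps):
        error = setpoint - temp              # feedback: compare output to goal
        heat = gain * error                  # control action proportional to error
        temp += heat - leak * (temp - 10.0)  # plant dynamics: heating minus losses
        history.append(temp)
    return history

history = simulate_thermostat(setpoint=21.0, initial_temp=15.0)
# The loop converges to a steady state slightly below the setpoint
# (pure proportional control leaves a residual error).
```

The point of the sketch is its determinism: run it twice and you get exactly the same trajectory, which is precisely the property organizations lack.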
Systems Theory was also around at the time, adding its own take on organizational analysis. Systems Theory uses a set of concepts to analyze a wide range of phenomena in the physical, biological, and behavioral sciences. It proposes that any system, ranging from the atom to the galaxy, from the cell to the organism, and from the individual to society, can be treated the same way. General Systems Theory is an attempt to find common laws in virtually every scientific field. According to it, a system is an assembly of interdependent parts (subsystems), whose interaction determines its _survival_. Interdependence means that a change in one part affects other parts and thus the whole system. Such a statement is true, according to its views, for atoms, molecules, people, plants, formal organizations, and planetary systems. In Systems Theory, the behavior of the whole (at any level) cannot be predicted solely by knowledge of the behavior of its subparts. According to it, an industrial organization (like a startup) is an open system, since it engages in transactions with larger systems: society and markets. There are inputs in the form of people, materials, and money, and in the form of political and economic forces arising in the larger system. There are outputs in the form of products, services, and rewards to its members. Similarly, subsystems within the organization, down to the individual, are open systems. What is more, Systems Theory states that an industrial organization is a sociotechnical system, which means it is not merely an assembly of buildings, manpower, money, machines, and processes. The system consists of the organization of people around technology. This means, among other things, that human relations are not an optional feature of the organization: they are a built-in property. The system exists by virtue of the motivated behavior of people.
Their relationships and behavior determine the inputs, the transformation, and the outputs of the system (McGregor, 1967).
Systems Theory is a great analysis tool for understanding systems and their inner cause-effect relationships, yet it blissfully overgeneralizes by equating complex social constructs such as an organization to inanimate physical objects. Both Cybernetics and Systems Theory fall victim to the so-called “envy of physics”: we seem to feel the urge to explain everything around us by means of math and physics. Society demands this scientific standard, even as it turns around and criticizes these studies as too abstract and removed from the “real world” (Churchill & Bygrave, 1989). Why must we imitate physics to explain social things? Doing so involves an uncritical application of habits of thought to fields different from those in which they were formed. Such practice finds its roots in the so-called Newtonian revolution and later in the scientific method. When we entered the industrial era, the lens of Newtonian science led us to look at organizational success in terms of maintaining a stable system. If nature or crisis upset this state, the leader’s role was to reestablish equilibrium. Not to do so constituted failure. With stability as the sign of success, the paradigm implied that order should be imposed from above (leading to top-down, command-and-control leadership) and structures should be designed to support the decision makers (leading to bureaucracies and hierarchies). The reigning organizational model, scientific management, was wholly consistent with ensuring regularity, predictability, and efficiency (Tenembaum, 1998). Management theories in the nineteenth and early twentieth centuries also held reductionism, determinism, and equilibrium as core principles. In fact, all of social science was influenced by this paradigm (Hayles, 1991).
The general premise was that if the behavior of stars and planets could be accurately predicted with a set of elegant equations, then any system’s behavior should be capturable in a similar way, including social systems such as organizations. For Systems Theory, after all, stars and brains are still composed of the same type of highly deterministic matter. Systems Theory pursues theoretical models of any system in order to achieve basically three things: prediction, explanation, and control. In social sciences, the symmetry between prediction and explanation is destroyed because the future in social sciences is genuinely uncertain, and therefore cannot be predicted with the same degree of certainty as it can be explained in retrospect. In organizations, we have as yet only a very imperfect ability to tell what has happened in our managerial "experiments", much less to ensure their reproducibility (Simon, 1997).
When we deal with an organization of people, we deal with true uncertainty and true uniqueness. No two organizations are identical, just as no two persons are truly identical. It may sound obvious, but it is worth stopping for a moment to think about it. Too often, small organizations imitate practices from other organizations. Imitative practices come in many forms: among others, firms expend substantial resources to identify and imitate best practices; firms hire consultants and experts to gain access to good ideas and practices that have worked in other firms; firms invest in trade associations to share information; young firms join business incubators and seek access to well-connected venture capitalists, in part hoping to gain access to good practices used by others. On the prescriptive side, firms are exhorted to invest in capabilities that allow them to more quickly and extensively imitate others, benchmark their practices, implement “best practices,” and invest in absorptive capacity. The underlying rationale for these activities and prescriptions is that firms benefit from imitation. Yet a variety of reasons exist why the act of imitating a particular practice may fail; in other words, a firm tries to copy a particular practice but is unable to do so. Reasons include, for example, cultural distance or a bad relationship with the imitated firm (Csaszar & Siggelkow, 2010).
In any case, the hype about Systems Science in organizations clearly dates from the pre-startup era, but it remains more present than we think. Douglas McGregor, a respected management author who is still relevant nowadays due to his Theory X and Theory Y of management[[1]](#_ftn1), dedicates a chapter of his “The Professional Manager” to “managerial controls”, which he finds analogous to machine control. He states:
“_... application [of feedback loops] to machines is so well understood that extended discussion of them is [un]necessary. In a managerial control system the same principle may be applied to human performance_". But, aware that such a remark could reach hyperbole levels, he adds: “_There is a fundamental difference between the engineering and the human application of the principle of information feedback control loop. Machines and physical processes are docile; they are passive with respect to the information fed to them. They can respond, but only to the extent that the alternative forms of response have been designed into the machine or the process. A more fundamental distinction is that emotions are not involved in the feedback process with machines or technological systems. This distinction, often ignored, is critical._” (McGregor, 1967).
And I am not even totally sure about the statement that machines are “docile”; I believe McGregor didn’t have to deal with printers, for example, in his time. He then aptly acknowledges that feedback control with machines lacks emotions, which is precisely why machine-like control could never work on social groups. Our social feedback loops are permeated by emotions and shaped by them. All in all, this outdated school of thought proposes treating organizations as mere machines composed of interconnected boxes with inputs, outputs, clear boundaries, and predictable behavior. Cybernetics, being highly influenced by such ideas, inherits the mechanistic mindset and extends it further. Self-regulation, ubiquitous in the cybernetic perspective, suggests that equilibrium is the norm. Is equilibrium a word that would describe a startup? Anyone who has spent even a few hours in a startup would quickly state that equilibrium is not the norm. Startups behave in unstable ways, more like complex, adaptive, and chaotic _things_. This instability and chaotic nature is what most often sparks innovation, and it is also one of their main threats. As ridiculous as it may sound to believe organizations can be treated as machines, current management practice is still deeply rooted in this overly mechanistic approach (Dooley, 1997).
If control theory applied to organizations as it does to machines, decision-making in organizations could be controlled by computers. This would include the design process, meaning that a system/product could be programmatically designed by an artificial intelligence algorithm from a set of rules or specifications, or from measured market fit. This is what’s called algorithm-driven design. How long will design engineers be needed in order to create technical things? How far are we from _the machine that designs the machine_? If organizations were machine-like deterministic systems, algorithm-driven operations would also be possible: i.e. a company being run automatically by a control loop. This algorithm could, for example, analyze sales, competitors, and market share, and decide when to make changes automatically to meet preset revenue targets; for instance, by launching a new product.
No matter how advanced computers are today, running a company remains a very human-centered activity. Computers help, and they do help a lot, by doing fast and efficiently the repetitive and computing-intensive tasks we don’t want or need to do ourselves, but computers are still not decision-makers. When we discuss [[Knowledge Management]], we talk about how capturing and codifying knowledge in ways computers can understand could pave the way for AI-driven decisions at the organizational level in some uncertain future.
It is usually said, without great rigor, that startups are _chaotic_. Chaos is always associated with disorder. Startups can also be considered something like organized anarchies. In an organized anarchy, many things happen at once; technologies (or tasks) are uncertain and poorly understood; preferences and identities change and are indeterminate; problems, solutions, opportunities, ideas, situations, people, and outcomes are mixed together in ways that make their interpretation uncertain and their connections unclear; decisions at one time and place have loose relevance to others; solutions have only modest connection to problems; policies often go unimplemented; and decision-makers wander in and out of decision arenas saying one thing and doing another (McFarland & Gomez, 2016). Well, that pretty much sums up almost every early-stage NewSpace organization out there.
A more mathematical definition says that chaos is the property of dynamical systems whose apparently random states are in fact governed by deterministic laws that are very sensitive to initial conditions. We saw in the previous section that organizations do not follow deterministic laws, as determinism is not found in social systems. In any case, organizations are systems with extremely high numbers of variables and internal states. Those variables are coupled in so many different ways that their coupling defines local rules, meaning there is no reasonable higher instruction to define the various possible interactions, culminating in an emergent order greater than the sum of the parts. The study of these complex relationships at various scales is the main goal of the Complex Systems framework. Think of any organization, which is ultimately a collective of people with feelings, egos, and insecurities, with a great variety of past experiences, fears, and strengths. At the same time, most of the interactions in an organization are not coordinated nor puppeteered by some sort of grandmaster (although some CEOs would love to be that grandmaster...), meaning that there are plenty of spontaneous interactions among the actors. Organizations fit the complex adaptive systems (CAS) bill very well, and luckily for us, there is a reasonable body of research around applying the CAS framework to organizations, not in order to predict and control them, but to better understand how they work. A CAS is both self-organizing and learning (Dooley, 1997). What is more, the observer is part of the analysis; our flawed and biased perceptions can influence our decisions to alter the scenario in a way that reinforces our own beliefs.
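The mathematical sense of “sensitive to initial conditions” fits in a few lines of code. The sketch below (my own illustration, not from the sources cited here) iterates the logistic map, a textbook example of deterministic chaos: two trajectories starting a millionth apart soon bear no resemblance to each other.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); r = 4.0 is the chaotic regime.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # initial conditions differ by only 1e-6
# Early on the trajectories are nearly identical; within a few dozen
# deterministic iterations the tiny difference is amplified until the
# two trajectories diverge completely.
```

Note that the rule itself is perfectly lawful; the unpredictability comes from the amplification of any measurement error, which is the narrow technical meaning of chaos invoked above.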
## Systems Thinking
We might still be far from being able to predict and control organizations in an automated manner. As said, managing them remains a very human-centered activity. We can, however, comprehend them better if we think about them as systems, i.e. collections of entities connected together. When we think systemically, we gain a perspective that usually leads to better clarity on what the boundaries are and what the interfaces look like. Essentially, the properties or functions that define any system are functions of the whole which none of its parts have.
There are many different definitions of "system" in the literature, but they are more alike than different. The one that follows tries to capture the core of agreement. A system is a whole consisting of two or more parts that satisfy the following three conditions:
1. The whole has one or more defining properties or functions.
For example, the defining function of an automobile is to transport people on land; one of the defining functions of a corporation is to produce and distribute wealth for shareholders and employees; the defining function of a hospital is to provide care for the sick and disabled. Note that the fact that a system has one or more functions implies that it may be a part of one or more larger (containing) systems, its functions being the roles it plays in these larger systems.
2. Each part in the set can affect the behavior or properties of the whole.
For example, the behavior of parts such as the engine, the brakes, or the steering wheel can affect the performance and properties of the whole. The manuals, maps, and tools usually found in the glove compartment are examples of accessories rather than parts of the car. They are not essential for the performance of its defining function; rather, they are yet another example of outputs from an engineering process that enable the system or product of interest.
3. There is a subset of parts that is sufficient in one or more environments for carrying out the defining function of the whole; each of these parts is necessary but insufficient for carrying out this defining function.
These parts are essential parts of the system; without any one of them, the system cannot carry out its defining function. An automobile's engine, fuel injector, steering wheel, and battery are essential—without them the automobile cannot transport people. Most systems also contain nonessential parts that affect its functioning but not its defining function. An automobile's radio, floor mats, and clock are nonessential, but they do affect automobile users and usage in other ways, for example, by entertaining or informing passengers while they are in transit. A system that requires certain environmental conditions in order to carry out its defining function is an open system. This is why the set of parts that form an open system cannot be sufficient for performing its function in every environment. A system that could carry out its function in every environment would be completely independent of its environment and, therefore, be closed. A system is a whole whose essential properties, its defining functions, are not shared by any of its parts.
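Condition 3 can be restated as a tiny executable model. This is my own toy construction (the part names are taken from the automobile example above): the defining function survives the removal of nonessential parts, but not of essential ones.

```python
# Toy model of essential vs. nonessential parts of a system (an automobile).
ESSENTIAL = {"engine", "fuel injector", "steering wheel", "battery"}
NONESSENTIAL = {"radio", "floor mats", "clock"}

def can_transport(parts):
    """The defining function holds only if every essential part is present;
    nonessential parts do not affect it."""
    return ESSENTIAL <= set(parts)

car = ESSENTIAL | NONESSENTIAL
assert can_transport(car)
assert can_transport(car - {"radio"})       # still performs its defining function
assert not can_transport(car - {"engine"})  # defining function lost
```

The set-based model deliberately captures only necessity: it says nothing about how the parts interact, which is exactly the dimension conditions 1 and 2 insist on and that no parts list can express.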
> [!attention]
> A system is a whole that cannot be divided into independent parts without loss of its essential properties or functions.
This hardly seems revolutionary, but its implications are considerable. An organization is nothing but a system with parts and interfaces, whose success is not a function of the sum of its parts but of the product of their interaction. Because the properties of a system derive from the interactions of its parts rather than their actions taken separately, when the performances of the parts of a system, considered separately, are improved, the performance of the whole may not be (and usually is not) improved. In fact, the system involved may be destroyed or made to function less well. For example, suppose we were to bring together one each of every automobile currently available and, for each essential part, determine which automobile had the best one. We might find that the Rolls-Royce had the best motor, the Mercedes the best transmission, the Buick the best brakes, and so on. Then suppose we removed these parts from the automobiles of which they were part and tried to assemble them into an automobile that would consist of all the best available parts. We would not even get an automobile, let alone the best one, because the parts don't fit together. The performance of a system depends on how its parts interact, not on how they act taken separately. If we try to put a Rolls-Royce motor in a Hyundai, we do not get a better automobile. Chances are we could not get it in, and if we did, the car would not operate well (Ackoff, 1999).
Something that stands out from Ackoff’s remarks is that he seems to assign some sort of negative meaning to the word _analysis_, considering it reductionistic. In fact, the nuance is that Ackoff considers that applying analytic techniques to a working, existing, synthesized system is reductionistic. In that context, his observation is correct: if you want to analyze a watch by its parts, you will not be able to, since the watch is only a watch with all its parts assembled together. But analysis, as we will discuss later on, is still an essential part of our problem-solving “toolbox” when we design technical objects: we must analyze things so we can then synthesize them. It is true, though, that the process is not easily reversible in that sense. What has already been synthesized cannot be (easily) analyzed without losing its global function. Reverse engineering is a case of post-synthesis analysis, and that is the reason why reverse engineering is so hard to do for complex systems.
Thinking in systems provides a perspective that can have great impact in the early stages of young and small organizations. It helps visualize and analyze the _big system_, which encompasses the organization itself and its business activities: how it interfaces with the market, what its dependencies are, the unknowns, the value chain. Identifying blocks and parts while at the same time understanding how they function together is of great use when the level of uncertainty and fuzziness is high. Organizations may not be clockwork machinery, but they do show an internal composition that Systems Thinking can help bring to the forefront.
### Habits of a Systems Thinker
Systems Thinking is a habit that can be taught. But it requires active training and sponsorship from all directions. A young organization has the great advantage that the complexity of the _big system_ is still reasonably low when things start; hence, disseminating systemic thinking early becomes an asset further down the line.
The INCOSE Systems Engineering handbook summarizes the essential properties of a systems thinker (INCOSE, 2015):
- Seeks to understand the big picture.
- Observes how elements within the system change over time, generating patterns and trends.
- Recognizes that a system’s structure (elements and their interactions) generates behavior.
- Identifies the circular nature of complex cause‐and‐effect relationships.
- Surfaces and tests assumptions.
- Changes perspective to increase understanding.
- Considers an issue fully and resists the urge to come to a quick conclusion.
- Considers how mental models affect current reality and the future.
- Uses an understanding of system structure to identify possible leverage actions.
- Considers both short‐ and long‐term consequences of actions.
- Finds where unintended consequences emerge.
- Recognizes the impact of time delays when exploring cause‐and‐effect relationships.
- Checks results and changes actions if needed: “successive approximation”.
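Two of these habits (structure generates behavior; time delays matter in cause-and-effect) can be made concrete with a toy simulation. The sketch below, with entirely made-up numbers, shows how adding a delay to a simple corrective feedback loop turns smooth convergence into overshoot and oscillation, even though the decision rule never changes:

```python
# Toy corrective feedback loop (all numbers are illustrative).
# Each step we decide a correction based on the *current* stock level,
# but the correction only takes effect `delay_steps` later -- think of
# ordering parts with a long lead time.

def simulate(delay_steps, target=100.0, gain=0.5, steps=40):
    """Drive a stock toward `target`; corrections arrive `delay_steps` late."""
    stock = 0.0
    pipeline = [0.0] * delay_steps   # corrections decided but not yet applied
    history = []
    for _ in range(steps):
        pipeline.append(gain * (target - stock))  # decide from what we see now
        stock += pipeline.pop(0)                  # only the delayed one lands
        history.append(stock)
    return history

# With delay_steps=0 the stock converges smoothly to the target;
# with delay_steps=3 the very same policy overshoots and oscillates.
```

The structure (the delay), not the decision rule, generates the oscillating behavior; this is the kind of insight the habits above are meant to surface.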
### Engineering Systems Thinking
At times, Systems Thinking can appear too abstract to engineers, too linked to _soft sciences_, or flagged as yet another “thought framework” among the myriad of thought frameworks out there. Systems Engineering is vastly influenced by Systems Thinking, yet this influence remains somewhat implicit. To make the connection between the two more explicit, we can now discuss Engineering Systems Thinking.
Dr. Moti Frank was one of the first to identify systems thinking within engineering as a concept distinct from systems thinking within systems science. Russell Ackoff’s Systems Thinking approach is highly based on systems science. Through examination of literature and interviews with engineers, Frank derived 30 _laws_ of engineering systems thinking (Frank, 2000) (reproduced here with permission).
1. In all the project phases/stages and along the system life, the systems engineer has to take into account:
- The customer organization's vision, goals, and tasks.
- The customer requirements and preferences.
- The problems to be solved by the system and the customer's needs.
2. The whole has to be seen, as well as the interactions between the system's elements. For this purpose, circular thinking has to be developed to replace traditional linear thinking. A problem should not be solved by just dismantling it into parts; all its implications have to be taken into account. Each activity in a certain element of a system affects the other elements and the whole.
3. Consider that every action could have implications also in another place or at another time. Cause and effect are not closely related in time and space (Senge, 1990, 63).
4. One should always look for the synergy and the relative advantages stemming from the integration of subsystems.
5. The solution is not always only an engineering one. The systems engineer also has to take into account:
- Cost considerations: business and economic considerations (in the development stages the production costs have also to be taken into account).
- Reuse or utilization of products and infrastructures already developed and proven (reuse in order to reduce risks and costs).
- Organizational, managerial, political and personal considerations.
6. The systems engineer should take as many different perspectives as possible on every subject or problem, reviewing all aspects from all points of view.
7. Always take into account:
- Electrical considerations.
- Mechanical considerations.
- Environmental conditions constraints.
- [[The Quality of Quality|Quality]] assurance considerations.
- Benefit indices, such as reliability, availability, maintainability, testability, and producibility.
8. In all development phases, the future logistic requirements have to be taken into account (spare parts, maintenance infrastructures, support, service, maintenance levels, worksheets, technical documentation, and various manuals).
9. When a need arises to carry out a modification in the system, take into account:
- The engineering and non-engineering implications in any place and at any time.
- The effects on the form, fit, and function.
- The delays and the time durations of the modification incorporation.
- The system's response time to the changes.
- The needs, difficulties, and attitudes of those supposed to live with the modification.
- That the change could bring short-term benefits but long-term damage.
10. Each problem may have more than one possible working solution. All possible alternatives should be examined and compared to each other by quantitative and qualitative measurements. The optimal alternative should be chosen.
11. Engineering design is not necessarily maximal: one should not always aspire to achieve maximum performance. At every stage, engineering trade-offs and cost-effectiveness should be considered. One could always improve more; one has to know when to cut and freeze a configuration for production. Excessive pressure at a certain point could cause a collapse at another point. Overstressing one part of the system could weaken another part and thus the entire system. Maximum-performance design is expensive and does not always maximize the performance of the entire system. The harder you push, the harder the system pushes back (Senge, 1990, 58). Faster is slower (Senge, 1990, 62).
12. In case of system malfunction, problem, or failure, repeated structures and patterns should be looked for and analyzed, and lessons drawn accordingly (a repeated failure is a failure that keeps returning after repairs, until the true malfunction is found and repaired; a repeated non-verified failure is a failure that the user complained about, that the technician inspected and could not verify, and that reappeared in the next operation).
13. Always look for the leverage points: changes that might introduce significant improvements with minimum effort. Small changes can produce big results, but the areas of highest leverage are often the least obvious (Senge, 1990, 63).
14. Pay attention to and take into account slow or gradual processes.
15. Avoid adapting a known solution to the current problem when it might not be suitable. The easy way out usually leads back in. Today's problems come from yesterday's solutions (Senge, 1990, 57).
16. Take into account development risks. In each project, uncertainty prevails in scheduling, cost, resources, scope, environmental conditions, and technology. Therefore, a strategy for eliminating uncertainties has to be adopted, e.g., experiments, tests, verifications, analyses, comparisons, simulations, awareness of possible risks, planning ways of retreat, and risk deployment among the partners.
17. It is impossible to run a project without control, configuration management, milestones, and management and scheduling methods. Possible bottlenecks and potential critical paths have to be examined constantly.
18. The operator/user person must be considered as a major part of the system. Hence at each stage, the human element has to be considered. The engineering design should include HMI (Human Machine Interface) considerations.
19. The engineering design is a top-down design (excluding certain open systems, for which the bottom-up approach is preferable). The integration and tests are bottom-up.
20. At every stage, systemic design considerations should be used (such as decentralized or centralized design, minimum dependency between subsystems, etc.). The systems engineer should be familiar with system malfunction analysis methods and tools.
21. Engineering systems thinking requires the use of simulations. The simulation limitations should be taken into account.
22. Engineering systems thinking requires the integration of expertise from different disciplines. As systems become more complex and dynamic, one person, however competent, is inadequate to understand and see it all. Systems thinking, by its nature, requires the examination of different perspectives, calling for teamwork to cover them. When setting up the team, proper representation has to be given to all the system's functions. Control and status discussions and meetings, as well as brainstorming, may have to be more frequent.
23. Try to anticipate the future at every stage. Take into account anticipated technological developments, future market needs, difficulties, problems, and expected changes in the project. (For example: The life expectancy of complex platforms, such as fighter aircraft could reach decades, but the life expectancy of the avionics system is around 10 years on the average. What will be required after 10 years?)
24. Selecting partners and subcontractors could be critical. Before signing agreements, refer to considerations such as the engineering/economic history of the potential partner, the manpower (quality, stability, human capital) they are capable of putting at the project's disposal, division of work and responsibility, and proper arrangements for status meetings, integration tests, and experiments of all kinds.
25. When selecting the software language or software development tools and platforms, make sure that they are usable and supportable, or changeable, throughout the system's life.
26. When selecting components for production, take into account their shelf life. Select components whose supply is guaranteed throughout the system's life. In case of likely obsolescence of components, make sure to have sufficient stock.
27. In order to win a tender, many companies reduce the development price in their offer, assuming that they will be compensated by the serial production and by the cost of modifications (if required). Therefore, in engineering systems thinking, it is recommended not to start development at all, if the serial production budgets are not guaranteed in advance.
28. Always examine the external threats against the system (for example, electromagnetic interference, environmental conditions, etc.).
29. Engineering systems thinking resorts to probability and statistical terms, both when defining the system specifications and when determining the project targets (costs, performance, time, and scope).
30. In engineering systems thinking it is advisable to limit the responsibility assigned to an external factor (such as external consultants), since this increases the dependency on it: shifting the burden to the intervenor. Teach people to fish, rather than giving them fish (Senge, 1990, 382).
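Law 10 is often operationalized as a weighted trade study: score each alternative against weighted criteria, rank them, and let the numbers inform (not replace) the qualitative judgment. A minimal sketch, where the alternatives, criteria, weights, and scores are all hypothetical:

```python
# Hypothetical trade study: alternatives scored 1-5 against weighted criteria.
# All names, weights, and scores are invented for illustration.

criteria = {"cost": 0.40, "reliability": 0.35, "schedule": 0.25}  # weights sum to 1

alternatives = {
    "COTS radio":   {"cost": 5, "reliability": 3, "schedule": 5},
    "Custom radio": {"cost": 2, "reliability": 5, "schedule": 2},
}

def weighted_score(scores):
    """Quantitative part of the comparison; qualitative factors still apply."""
    return sum(weight * scores[name] for name, weight in criteria.items())

# Rank alternatives from best to worst weighted score.
ranked = sorted(alternatives,
                key=lambda a: weighted_score(alternatives[a]),
                reverse=True)
```

The ranking is only as good as the weights and scores fed into it, which is why the law insists on both quantitative and qualitative measurements before choosing the optimal alternative.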
## Out-of-Context Systems Engineering
Systems Engineering is a largely misinterpreted discipline. Or perhaps the context is misinterpreted, not the discipline itself. Or maybe both. For a start, there is no single, widely accepted definition of what Systems Engineering is; any two “systems engineers” are likely to provide different definitions. Tech organizations, big or small, have been forcing themselves to apply Systems Engineering “by the book” just to find out it gave little to no return on investment (investment here understood not only as money but also as pain and sweat versus actual tangible results). In cases I experienced first-hand, it only generated heavyweight processes and extensive documents that made work slower and more cumbersome. I had this conversation with one of the sponsors of that SE process introduction, who was quite puzzled (only in private; in public he would still defend it): “If Systems Engineering is the way to go, why does it make everything more difficult than it used to be?”
Let’s recap what Systems Engineering “by the book” is, starting with a quick look at its history. Systems Engineering is the result of necessity, a reaction to two main problems: complexity and multidisciplinarity, which are two sides of the same coin. As semiconductor and computer technology evolved during the 20th century, computing became so pervasive that devices stopped being single-disciplined (fully mechanical, chemical, hydraulic, etc.) and became a combination of many disciplines such as electronics, software, and so on. This gave birth to the concept of a technical _system_, where the global function is the product of different components interacting together to accomplish the system’s goal, each of those components itself combining multiple disciplines. Miniaturization helped squeeze more and more functionality into system architectures, which started to contain more and more constituent components. And here is when the top-bottom vs bottom-up (analysis vs synthesis) gap became more like a fissure: it became clear that designing complex systems required a thorough top-bottom approach, whereas engineers were still “produced” (educated by universities) and trained to be the ones synthesizing the stuff, i.e., to go bottom-up. Universities were providing masons where the industry was lacking architects. So the top-bottom branch needed some method, and that is what Systems Engineering came to propose. It did so in perhaps a bit of an uncoordinated way, since Systems Engineering never materialized as a formal educational track like other engineering disciplines, leaving it to a host of authors to define its practice and methodology, creating a heterogeneous Systems Engineering body of knowledge currently composed of hundreds and hundreds of books with similar yet slightly different opinions, approaches, and terminologies.
Systems Engineering is an industry in itself as well, and a lucrative one for whoever manages to engage with the “big fishes”.
Then, World War II and the Cold War acted as accelerating factors for many critical military technologies such as weapon systems, radars, spacecraft, missiles, and rockets. In the latter part of the 20th century, the defense sector became the main driving force behind the need for more integration among systems that had never been integrated before, giving way to the very complex military systems known today, such as C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance). These kinds of “Systems of Systems” are reaching levels of integration no one could have imagined a few decades ago. The military background of Systems Engineering (the military is still one of its strongest sponsors) cannot be overlooked, and it explains most of the content found in the materials. The United States Department of Defense (DoD) has been facing incredibly difficult challenges acquiring those systems, which are without a doubt the most complex technical systems humanity has ever made. What is more, the DoD grew increasingly willing to acquire systems that can interoperate. To give an indication of the importance of system acquisition, the DoD has even created a university dedicated to teaching how to acquire defense systems. This is the Systems Engineering you will find in most of the books out there: a discipline whose audience is usually a gigantic, multi-billion-dollar governmental entity requesting a highly complex, mission-critical System (or System of Systems) on programs with lengthy schedules, where requirements specification, conceptual design of the architectures, production, and deployment are well-defined stages that often span several years or even decades. Just to give an idea, the DoD's proposed budget for FY2020 was ~750 billion dollars.
How is this context in any way related to small and medium enterprises? The answer is simple: it is not related at all. There is almost nothing in those books and handbooks that you can apply in a 10-, 20-, 50-, or 250-person company or department. Systems Engineering practitioners around the world have noticed the disconnect between historic Systems Engineering for giant organizations and the need for Systems Engineering in startups and small organizations. There have been some attempts to create tailored Systems Engineering standards for small organizations, for example ISO/IEC/TR 29110. But when a methodology or process needs _tailoring_ or _scaling_, that usually indicates the original is just overhead.
But if we forget for a moment the background story of Systems Engineering and think about how we still do technical things, we quickly see that we deal with the same _top-bottom meets bottom-up_, or analysis vs synthesis, problem-solving scheme, regardless of whether we are designing a CubeSat or an intercontinental ballistic missile. Systems Engineering, in a nutshell, is about shedding light on the top-bottom decomposition and then the bottom-up integration of all the bits and parts, which is ultimately common sense: we cannot start picking up bricks and hope to end up with a house if we don’t first think about what we _need_ to build.
Consuming “old school” Systems Engineering material is absolutely fine, as long as proper care is taken to understand that our context can be very different from the context that material refers to. The good news is that we can still take useful things from the “old” Systems Engineering school of thought, mostly about the life cycle of complex things. The bad news is that every organization, unique as it is, needs its own flavor of Systems Engineering. No book can capture what your organization needs, because those needs are unique. We discussed tailoring before; tailoring is a good practice, but it must be done while embedded in the right context. No external entity, consultant, or standard can tailor things for us.
> [!info]
> Here’s a presentation I gave to the Latin America Chapter of INCOSE about Systems Engineering in startups (in Spanish).

# References
Ackoff, R. (1999). _Re-Creating the Corporation, A Design of Organizations for the 21st Century_. Oxford University Press.
Churchill, N., & Bygrave, W. D. (1989). The Entrepreneurship Paradigm (I): A Philosophical Look at Its Research Methodologies. _Entrepreneurship Theory and Practice_, _14_(1), 7-26. 10.1177/104225878901400102
Csaszar, F. A., & Siggelkow, N. (2010). How Much to Copy? Determinants of Effective Imitation Breadth. _Organization Science_, _21_(3), 661-676. http://dx.doi.org/10.1287/orsc.1090.0477
Daly, H. E. (1990). _Steady-state economics_ (Second ed.).
Dooley, K. J. (1997). A Complex Adaptive Systems Model of Organization Change. _Nonlinear Dynamics, Psychology, and Life Sciences_, _1_(1), 69-97.
Frank, M. (2000). Engineering systems thinking and systems thinking. _INCOSE Journal of Systems Engineering_, _3_(3), 163-168.
Hayles, N. K. (1991). Introduction: Complex dynamics in science and literature. In _Chaos and order: Complex dynamics in literature and science_ (pp. 1-36). University of Chicago Press.
INCOSE. (2015). _INCOSE Handbook_ (4th ed.). Wiley. ISBN:978-1-118-99940-0
ITU. (2015). _Collection of the basic texts adopted by the Plenipotentiary Conference_. [http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/5.21.61.en.100.pdf](http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/5.21.61.en.100.pdf)
Maxwell, J. C. (1867). On Governors. _Proceedings of the Royal Society of London_, _16_(-), 270-283. [www.jstor.org/stable/112510](http://www.jstor.org/stable/112510)
McFarland, D. A., & Gomez, C. J. (2016). _Organizational Analysis_.
McGregor, D. (1967). _The professional manager_. McGraw-Hill. ISBN:0070450935
Meadows, D., Meadows, D., Randers, J., & Behrens III, W. W. (1970). _The Limits to Growth: A Report for The Club of Rome's Project on the Predicament of Mankind_. Potomac Associates.
Reuters. (2020, March 20). _Apple to pay users $25 an iPhone to settle claims it slowed old handsets_. The Guardian. https://www.theguardian.com/technology/2020/mar/02/apple-iphone-slow-throttling-lawsuit-settlement
Senge, P. (1990). _The Fifth Discipline: The art and practice of the learning organization_. Doubleday, New York.
Simon, H. (1997). _Administrative Behavior: A Study of Decision Making Processes in Administrative Organizations_ (4th ed.). The Free Press.
Tenembaum, T. J. (1998). Shifting paradigms: From Newton to Chaos. _Organizational Dynamics_, _26_(24), 21-32. [https://doi.org/10.1016/S0090-2616(98)90003-1](https://doi.org/10.1016/S0090-2616\(98\)90003-1)
Vallero, D., & Brasier, C. (2008). _Sustainable Design The Science of Sustainability and Green Engineering_. Wiley. ISBN: 9780470130629
Wiener, N. (1985). _Cybernetics, or control and communication in the animal and the machine_ (Second ed.). The MIT Press. ISBN: 0-262-23007-0