# From Whiteboard to Market
Eagerness to jump into details too soon is one of the toughest urges to tame for a system design engineer. Learning to _hold our horses_ and take the time to think before acting is a skill that takes time to develop; it perhaps comes from experience. Engineering is an evolving, iterative craft; sometimes a slow craft. Time-dependent. This chapter is about understanding how things evolve from pure nothingness, including the triad of factors that are critical for any engineering adventure: the business factor, the social factor, and the technical factor. These factors are connected to each other and greatly shaped by one valuable (and scarce) resource: time.
## System Life Cycles
All things we create with our ingenuity go through a life cycle, even if it is not formally defined. This cycle is a mixture of divergence and convergence, construction and destruction, order and disorder, in cycles or waves that are visited and revisited over and over. Nothing engineered is automatically or magically created; it is incrementally realized by the execution of a variety of activities that bring objects from abstract ideas to operative devices that perform a job to fulfill a need. For some strange reason, the literature tends to carve into engineers’ brains a discretized image of this life cycle: a sequence of well-defined stages with their own names, lengths, and activities. And here lies one of the main misconceptions about life cycles, despite its obviousness: the life cycle of what we create flows in continuous time; we only discretize it for practical reasons. And the way to discretize it varies greatly depending on who you ask, or what book you read. Think about your own life cycle as you grow older: it is a continuous progression. We tend to discretize the typical human life cycle into stages like infancy, adolescence, adulthood, and old age, and we expect certain things to happen at certain stages; for example, we expect someone in her infancy stage to go to daycare. Discretizing a life cycle is a technique to make it more manageable, but system maturity evolves in a continuous manner.

There is, though, a clear difference between an artificial system life cycle and a human life cycle, and that difference stems from the fact that we humans develop as a single entity (our body parts develop together), whereas systems can actually be developed separately in constituent parts that have their own life cycles (which should maintain some level of synchronism with the overall system), in a sort of puzzle. These parts of the puzzle are made to fit and work together through several iterations. Each constituent part can also have an internal composition with parts developed separately, so the nature of life cycles in a system is highly recursive.

Discretizing a life cycle into stages is necessary as a way of organizing work better, but you can choose to discretize it in many different ways. Whatever the life cycle staging method chosen, it is quite important to maintain consistency across the organization that designs and builds systems. Life cycle stage names and scopes become very important as a planning and communication tool, so commonality on this matter is a great advantage.
During the different stages of the engineering design process, many key decisions are taken and key tasks are performed. Another hidden fact about life cycles and their iterative, evolving nature is that we visit life cycle stages several times throughout the lifetime of the project, which slowly increases system maturity. Let’s state it again: we don’t visit a life cycle stage only once; we visit it multiple times. It is just that the weight or intensity of the activities changes as the maturity of the work progresses. We may believe we have abandoned a stage for good only to find out at some later stage that an assumption was wrongly made, forcing us to move the project “pointer” back to the beginning. Here’s perhaps an interesting thing to point out: the way time progresses and the way the project “pointer” progresses are related, but they don’t always follow each other. Time only progresses forward, whereas the pointer can move back and forth, come and go, depending on the Systems Engineering game; like jumping spaces in a board game depending on the gameplay.
A great deal of information is generated as the life cycle evolves. It is of paramount importance to feed this information back into the other stages involved so they can make better decisions, in a virtuous circle that must be ensured. Although life cycles are artificially split into chunks, the flow between the chunks must be kept as smooth and continuous as possible. Isolated life cycle stages mean a broken project that is only headed to failure.
When does the maturity of the design reach a level where we can consider the job done? Probably never; later on, we will see how subjective this can be. The best we can hope for is to define meaningful _snapshots_ that produce outcomes matching some specific goal. For example, a representative prototype or proof-of-concept is a snapshot of a life cycle iteration. Prototypes provide invaluable insight into the real-world fitness of the product, but they can be costly and complex (sometimes as complex as the real thing). Reaching production grade is also a _freeze_ of the life cycle work; it means the maturity is such that we can reliably replicate the design and those instances will perform as initially planned. Life cycles are pervasive; everything we create needs time to be done, and we have the freedom to organize the work in whatever way suits us and declare it done whenever it suits us; no framework or methodology can dictate this for us. We, as designers, are in full control of how to do it. But with great power comes great responsibility: proper life cycle management requires coordination, communication, and control.
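To make the recursion and the moving “pointer” a bit more tangible, here is a minimal sketch in Python (illustrative only; the class, stage names, and maturity metric are invented for this chapter, not a prescribed framework):

```python
from dataclasses import dataclass, field

STAGES = ["concept", "development", "production", "utilization", "support", "retirement"]

@dataclass
class Part:
    """A system element. Parts recursively contain other parts,
    and each one carries its own life cycle 'pointer'."""
    name: str
    stage: int = 0                      # index into STAGES; time moves on, this may not
    parts: list["Part"] = field(default_factory=list)

    def advance(self) -> None:
        self.stage = min(self.stage + 1, len(STAGES) - 1)

    def revisit(self, stage_name: str) -> None:
        # Time only moves forward, but the pointer can move back:
        # a wrong assumption sends us to an earlier stage.
        self.stage = STAGES.index(stage_name)

    def maturity(self) -> float:
        """Crude maturity metric: a part is only as mature as its
        least mature constituent (the recursion in action)."""
        own = self.stage / (len(STAGES) - 1)
        return min([own] + [p.maturity() for p in self.parts])

sat = Part("satellite", parts=[Part("avionics"), Part("payload")])
sat.parts[0].advance()              # avionics moves on to development...
sat.parts[1].revisit("concept")     # ...while a payload assumption failed
print(f"system maturity: {sat.maturity():.2f}")
```

The point of the sketch is purely structural: every constituent part carries its own pointer, pointers can move backward while time cannot, and the maturity of the whole is dragged down by its least mature part.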
## The Business Factor
If the business model is not totally fluid, an organization probably cannot be called a startup. Fluidity and startups are practically synonyms. That is of course a bit of a tongue-in-cheek way of saying that, in early-stage organizations, the business strategy is yet one more unknown to figure out as the whole project evolves. The technical side of space is not the only side of it, of course. Space is a business, like any other business, and as such it requires the same thinking and planning as restaurants or lemonade stands do. There are differences between those, for sure, but to turn an idea or project into a self-sustainable endeavor (as in, making people willing to pay for what you do), the Business Factor must be included as another important piece of the mechanism of growing a startup. There are myriad books about this, and it is far beyond the scope of this book to dive into business topics, but I consider it important to mention before diving into more technical matters.
NewSpace altered the way of doing space in different ways. It changed not only how to design and build the actual things that go to orbit in a cheaper, simpler, smaller, and faster way; it also changed the paradigm of doing business with space; it created a new type of market. Before NewSpace, making revenue with space systems was reserved for a very small group of very heavyweight actors, which were building spacecraft for a limited set of applications: mostly science and (largely) defense and surveillance, with governmental entities as the sole customers. NewSpace created new business cases, allowing data to reach a broader audience, including smaller private entities and even individual customers, to serve their various needs in applications such as agriculture, mining, flood monitoring, Internet-of-Things, broadband communications, etcetera. Even though new business models appeared, the way NewSpace companies are stepping up to offer their products to broader audiences remains largely under discussion.
There is something about space that still amazes people, even though the first satellite was launched more than sixty years ago. There is still this sci-fi halo about it; it probably has to do with the fact that satellites go up in such a spectacular way on board those fancy rockets, which makes it very hard for the layman or laywoman to take it as something usual or normal. So, the space business still does not look like selling shoes or insurance. But space _is_ a business just like selling shoes or insurance. The sci-fi halo is a bit of a problem for NewSpace, or maybe a distracting factor more than a problem. Everyone wants to go to orbit, everyone wants to be involved in something related to going to space: investors, politicians, engineers, accountants, lawyers, you name it. This is great since it brings the industry a great deal of attention without any great effort, something other industries would only envy. But the _space halo_ can cloud important business questions that any new company planning to be profitable needs to keep in mind, from the very beginning:
● What is the product?
● Why would people pay for it?
● Is this company offering a service or what exactly?
● How is it different from others doing the same?
The interesting bit is that business propositions (or _models_, even though that’s a word I refuse to use; later, readers will get to see why) totally evolve as the organization and the technical development evolve. It is too naive to believe we will manage to figure out all those questions when the company is founded and that the answers will remain unchanged forever. The best we can hope for is to define a baseline; then reality will calibrate us, in many different ways: the technical complexity of what we are aiming for may call for a change that impacts our business proposition, or our funding may change and force us to change the business strategy, or the market may take a turn and make us rethink the whole thing. In any case, the business questions are as important as the technical questions. The thing is, tech startups are typically founded by engineers, who are fonder of discussing architectures and design than value propositions. As the project matures, the business questions will surface on and on, and an immature understanding of the business proposition will eventually become a problem. An immature understanding of the business also means a varying business vision within company walls. There must be a common vision inside those walls. Nothing is worse than no one really knowing what the product is. That being said, the Product can (and will) change: what the product is today might not be what it is going to be in six months. Things change, but it is good to communicate and brainstorm the business proposition properly.
### Prototype the Business
There are lots of assumptions made at the early stages of a business. It is essential to test those assumptions as quickly as possible since this can impact the decision-making on the technical side. Many iterations are needed for this to converge. One way to get started is to make sure there is an internal alignment on what the business proposition is. When you are four, five, or seven people overall in the organization, it is a great moment to make sure you are all on the same page about what you are doing. After all, making space systems is a lot of effort and sweat, so better to make sure everybody is aware of where all that sweat and tears (of joy, of course...) are going. This can spark a lot of discussions, and very healthy ones. It is critical to keep the business proposition very closely attached to the technology development; they will feed each other indispensable insight.
It is key to iterate, on and on, and adjust the business approach as you go. But when we have literally zero customers, it is not simple to prototype much, since there’s nobody out there to test the business strategy prototypes with. One way is to do some intelligence, identify a few prospective customers we would love to have in our clientele, and reach out. When the project is too early, chances are rejection will be the norm, but that’s fine. The intertwined nature of business and technology indicates that, as we gain momentum on the technology side, we increase our chances of engaging someone as an early adopter of our proposition. But this must be done with some care: the chances of ending up creating a product tailored to one specific customer increase. If early adopters align well with the overall strategy, all is good. In any case, it is important to make sure any early adopters do not influence the business and the technology roadmap in such a way that we end up serving them with a highly customized thing no one else will want. For very early-stage organizations, this is a tough situation because making revenue is every founder’s dream.
### The Business Blueprint
Just as blueprints serve as schematic clues for engineers to orient themselves and communicate with each other while they design, the business strategy needs similar design and development. At very early stages, this takes the form of simple questions and concise answers, in a sort of internal questionnaire usually called a Business Blueprint. The Business Blueprint provides clues about customers and audience, about how revenue will be made, and about partnerships needed for the project to succeed. It raises awareness and helps us see the bits and parts required to create, capture, and deliver value. Questions may vary a lot for different businesses; here we will tailor them for a NewSpace enterprise.
#### Customers
1. Question: Who are we serving?
#### Offer (Product proposition)
1. Question: What product or service will we make and deliver?
#### Revenue Model (and Price)
1. Question: How will we make money?
2. Question: How and what will we charge our customers for?
Note: Thinking about pricing at very early stages can be challenging since costs are still not fully understood and not properly broken down. One way to go is to assess what might be needed to cover the most important parts of your business and look at what competitors are charging for similar offers. Again, the revenue model will most likely change dozens of times as the company grows; still, it is important to revisit it over and over.
#### Channels
1. Question: How will we get our product to customers?
#### Value Proposition
1. Question: What customer need(s) are we fulfilling?
2. Question: What is the unique promise our offer provides?
#### Costs
1. Question: How much does it cost to create and deliver our offer?
#### Partners
1. Question: What key relationships will help us create and deliver our offer?
2. Question: What key technologies are out there that we could add to our value chain?
3. Question: What big players or other NewSpace actors in the market are worth tapping?
Note: This point is important and often underrated. Bad partnerships at early stages can be very damaging; partnerships for the sake of PR or marketing are very bad ideas. That being said, good partnerships at early stages can really boost the path ahead. One effect NewSpace companies suffer from is that they are so small they can fall far down the priority list of established suppliers. For example, if a small company needs an onboard computer from an established supplier X, supplier X will always pay more attention to the bigger customers in its clientele and leave the small guy at the end of the queue when it comes to attention, support, etc. To overcome this issue, NewSpace organizations can partner with other similarly small startups and help each other towards a partnership that works for both. There are risks involved in this, of course, since we are talking about two fragile organizations with perhaps immature products. The key is to select partners with clear technical competence and clear strategies, and to avoid falling into the trap of cheap prices from PR-oriented startups which mostly care about piling up letters of intent and firing off press releases.
#### Team, Talent & Resources
1. Question: What skills/capabilities/resources will we need?
Young organizations should thoroughly brainstorm the business blueprint frequently, perhaps weekly or so. The questions the blueprint asks revolve around a certain number of concepts or topics to be figured out (Fig. 2.1). A more graphical take can be to lay out all these topics from the blueprint in a sort of cloud or list and populate short answers next to the boxes, according to the organization’s vision and strategy. This can be displayed on a whiteboard in a common hallway to make sure everyone will see it every morning; next to the coffee machine (if there is any) can be a good spot.
![[Pasted image 20250215161430.png]]
Figure 2.1 - There must be a clear alignment of what goes next to each one of these boxes
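For teams that want something more versionable than the hallway whiteboard, the same exercise can live as a tiny, plain data structure kept under version control. A minimal sketch follows (the topics mirror the questionnaire above; the answers are invented placeholders to be brainstormed and revised, not recommendations):

```python
# Illustrative only: the blueprint topics from the questionnaire above,
# with placeholder answers to be argued about and revised weekly.
business_blueprint = {
    "customers":         "Earth-observation data buyers in agriculture",
    "offer":             "Hourly revisit imagery as a subscription",
    "revenue_model":     "Tiered subscriptions; per-scene pricing later",
    "channels":          "Direct sales plus a self-service API",
    "value_proposition": "Fresh imagery of any point on the globe, every hour",
    "costs":             "Launch, ground segment, cloud processing, payroll",
    "partners":          "Launch broker; ground-station network startup",
    "team":              "Generalist engineers; one business developer",
}

for topic, answer in business_blueprint.items():
    print(f"{topic:>18}: {answer}")
```

The value is not in the format but in the habit: short answers, visible to everyone, revisited over and over.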
## The Social Factor - Connecting Brains Together
The last question of the business blueprint makes a nice connection to this section. Tom DeMarco stated decades ago in his evergreen _Peopleware_: “The major problems of our work are not so much technological as sociological in nature" (DeMarco & Lister, 1999). Understandably, when an early-stage small company is running on paper-thin resources and short runways, one of the last things it will think about is sociology. The focus will naturally be on coding, architecture, supply chain, schedule, budget, and so on. Still, even if priorities may seem distant from minding sociological factors, the group of people an organization has assembled is a network of brains that need to cooperate to achieve a goal. Teams can be considered, at the end of the day, a cooperative nervous system. The Social Factor is that intangible kind of _magic_ that makes a team overachieve even if they have limitations, gaps, or constraints. This social factor, when properly addressed, is what glues a team together and converts it into something else, usually driving startups to achieve what others have failed to achieve. What needs to be done for that alchemy to take place? It is not exactly about finding technical geniuses, but more fundamentally about nurturing some drivers which can definitely increase the probability of making a team bond: equality, diversity, trust, open communication, humor, and a strict aversion to bullshit. All these, mixed together, make up a potion that can spark a great sense of determination towards a grand goal, no matter how bold the goal might be. Let’s quickly elaborate on these drivers:
● **Equality**: The structure is flat. The team has similar compensation and titles. No elites are created in the core team; they are all equals, and they feel like equals.
● **Diversity:** Heterogeneity is a team asset and is linked to better team performance (Hackman & Oldham, 1980). Diversity also limits the risks of groupthink (Hart, 1991), which is a condition where the pressure to keep harmony hinders the ability to evaluate alternatives. Nationality, race, gender, and political diversity can enrich the workplace and provide different perspectives for problem-solving.
● **Trust**: It means dependability among team members. It is grown with time and effort, by keeping promises and leading by example, in such a way that team members can feel they have their backs covered. Managers must turn trust into a religion. Managers need to provide enough autonomy to team members and avoid micromanaging and overshadowing. A manager who is too visible is never a good sign.
● **Open Communication**: No secrecy, no compartmenting of information, no hiding. Feedback is always welcome, even if you won’t like it.
● **Shared Aversion to Bullshit:** A team-wide, management-sponsored appreciation for the truth, and rejection of half-truths. Prevalence of evidence versus speculation. Prevalence of emails/messages versus long pointless meetings without clear agendas. It is important to note that the fight against bullshit is a tiring one and requires an active mindset against it, otherwise, it will eventually take over. We will cover bullshit in more detail at the end of the book.
● **Humor**: This could sound irrelevant in this list, but it is one of the most relevant drivers, if not the most relevant. The startup world can be cruel. At times you can feel all you get from the outside world is hostile disbelief and discredit. Tough times are always around the corner, and a pessimistic attitude will not make things any better. In the darkest hours, cheerfulness and a great sense of humor are assets, something that should be fostered and encouraged.
Other factors such as a quiet office space, amenities, good food, and free drinks are always welcome, but those are not team bonding factors. A team that can concentrate in peace (capable of reaching _flow_) is always a good thing, but a truly jelled team will perform even without all that comfort.
In early-stage organizations, once the dust from the founding _Big Bang_ has settled, the core team is called to duty. This core team is usually small, of at most five or six individuals. The challenge with the core team is not (only) about finding educated and experienced engineers and putting them in a room; the real challenge is to find people with the right personality and culture fit for the challenge ahead and make them play like a well-tuned orchestra. Bonded, healthy core teams are yet another practical example of the Systems Thinking “mantra”: the whole is greater than the sum of its parts.
With flat structures come some challenges related to the decision-making process. A core team that lacks a clear single point of authority (for example, a Chief Systems Engineer) will have a hard time deciding and will eventually get itself into analysis paralysis. A single tech manager is instrumental at this stage in deciding the steps forward. When the technology maturity is low, every path will look almost equally reasonable to take, and every methodology and/or framework will look like the best one to adopt, so it is the task of the more experienced manager to guide the team while taking those steps, and often this means showing the way by means of real [[The Struggling Startup Quick Guide#Learn to Tell a Story |storytelling]] from experience, or some proof-of-concept, prototype, analysis, or similar. The manager in charge of a core team who cannot roll up his or her sleeves and go fight in the trenches, or gather the team to share a story and elaborate on why a particular path is wrong, should probably find something else to do. But fighting from the trenches does not mean becoming just one more member of the core team. The manager is the captain, and as such she must remember and remind everyone (by actions, not by words) of her position of authority, and for that it is necessary to keep a reasonable emotional distance from the team. This is fundamental for the team members to perceive the manager as the shock absorber and the solver of otherwise endless deliberations. During those times of uncertainty, common sense and simplicity should be policed by everyone in the team, including the manager (more about that later). If common sense is jeopardized in any way, there should be a healthy environment in place for anyone to be able to call it out.

The whole being greater than the sum of its parts is no magic: it is achieved by making the team members love doing what they do. Easy to say, but not so simple to do. How do you make team members not only like it but actually _love_ it? Engineers are often challenge-thirsty; the feeling of working on things no one else would dare embark on can be very motivating. The good thing is that tech startups easily get in trouble, so challenges are probably the only thing that is not scarce around them. The manager should quickly define bold goals to keep the team engaged. For example: “_In two years, we will be a major player in the global Debris Removal industry_”. Or, more quantitatively: _“In two years’ time we will have 20% of the Debris Removal market share”_. Clear, concise, bold, and reasonably long-term goals are good ways of making a team create a common vision towards the future. Such a vision, for _jelled_ teams (to use Tom DeMarco’s term; DeMarco & Lister, 1999), creates strong identities; the team perceives itself as a pack and will defend and protect its members, come what may. The core team is a very special collective, and it should be composed of people who in general can do many things. Most of these team members will later become IPT leaders or Systems Engineers or similar once the company grows.
Early-stage design is more suited for generalists than specialists. This is no wonder: the design is too coarse at the beginning for any specialist to be of relevance; at this stage, generalist minds are of more importance. If a generalist also has a more specialized side where they can do some detailed design, that’s a jackpot for the organization.
Core teams can be prone to burnout and turnover if not properly managed. At times, working for this type of company can get tough: team members wear too many different hats; everybody does a lot of different things; change is the norm; budgets are thin; stress levels can run high. It can very easily turn into a self-reinforcing feedback loop where budgets, schedule, and stress combine in a way that makes people stop enjoying what they do and reach psychological saturation. It is important for managers to understand that, and to make sure that the engineers enjoy waking up in the morning and commuting to work. Managers should continuously scan the team’s humor and body language to gauge overall morale. How people walk speaks about their mood; how people behave during lunch provides hints of their mental state.

Managers need to remain close to the team, but probably not that close. Team members can be friends and go out together, and it would be great if they do. The manager, on the other hand, should keep a reasonably safe emotional distance from the team, to avoid breaking the trust balance (of course the boss can join the outings, but not all of them), but also to have the right autonomy to make tough calls. A team member perceived by the rest of the team as too close to the manager could be interpreted as a member with influence over the boss. Mistrust spreads quickly, and trust is very hard to rebuild. Team trust towards the boss and close friendship are not as connected as it may seem. Team trust towards the boss is more a function of crystal-clear communication, honesty, full disclosure, and a sense of equality, despite the hierarchical position. Bosses who show elitist behaviors, for example by receiving too many benefits the team members (who ultimately are the ones actually doing the work) do not get, end up creating barriers which can significantly hinder the team’s buy-in in difficult times. Managers’ arses are the ones permanently on the line, and they deserve the right compensation for it, but too thick a gap with the team can severely affect their leadership.

A manager’s biggest goal is to get the team to jell. Overachieving tech teams never jell by having corporate goals tossed at them; they’ll go: meh. Competent tech teams jell when given challenges and a sense of autonomy and freedom to solve intricate technical problems; that’s what drives them. Managers who think they will get tech teams to jell by creating cringey social happenings or sending the team go-kart racing miss the point and are just naive. Tech teams jell by doing tech: by coding, designing, figuring things out. They do not jell by sitting in a video call being lectured. Managers should give them a challenge impossible to solve, or a lost cause. They will die trying to sort it out and become a pack in the process.
As the organization grows, most of the core team members will start leading people (probably IPTs) on their own and gain more autonomy. Eventually, the core team will disband in favor of a different team structure that will be put in place to support the company’s growth and the system architecture’s evolution. The manager of the core team has the unique possibility of transitioning into an integrating role between the spawned groups; the fact that he or she has worked for many years, and in tough times, with the members of the core team (who now lead their own teams) creates a very strong leadership fabric which is a great asset for the organization. This is an instrumental moment for the organization to avoid the typical lonely middle-manager role: someone who leads other subsystem managers but has no team of his or her own (just a layer between executives and the IPTs). The chance to spawn the management fabric as the architecture evolves brings the unique advantage that the trust and bond between the different levels are preserved.
Last but definitely not least, a word on sense of humor. As said, young NewSpace projects are tough environments. An illustrative analogy that has proven quite useful is to think of early-stage NewSpace startups as a submarine on a mission. Few environments are tougher than a submarine. Readers may be surprised how much these military machines run on ‘soft’ leadership skills. In submarines, teams are small and constrained, living in uncomfortable quarters for long stretches. To make it easier for the crew, there is no substitute for cheerfulness and effective storytelling. In fact, naval training is predicated on the notion that when two groups with equal resources attempt the same thing, the successful group will be the one whose leaders better understand how to use the softer skills to maintain effort and motivate. Cheerfulness counts. No one wants to follow a pessimist when things are looking ugly out there. And things don’t look good quite often in startups, and in submarines. Navies assiduously record how cheerfulness counts in operations. For example, in 2002 a Royal Navy ship ran aground off Australia, triggering the largest and most dangerous flooding incident in recent years. The Royal Navy’s investigating board of inquiry found that ‘morale remained high’ throughout demanding hours of damage control and that ‘teams were cheerful and enthusiastic,’ focusing on their tasks; ‘sailors commented that the presence, leadership, and good humor of senior officers gave reassurance and confidence that the ship would survive.’ (St. George, 2013). Turning up and being cheerful, in other words, had a practical benefit.
It has long been understood that cheerfulness can influence happiness at work and therefore productivity (Oswald et al., 2009). A cheerful leader in any environment broadcasts confidence and capability. In a submarine it is the captain, invariably, who sets the mood of the vessel; a gloomy captain means a gloomy ship. And mood travels fast. Cheerfulness affects how people behave: you can see its absence when heads are buried in hands and eye contact is missing (St. George, 2013). And you cannot expect to remediate this by taking your team to laser tag or an escape room. You set the tone by showing humor and cheerfulness when things are looking bad and prospects are not nice. Conversely, empty optimism or false cheer can hurt morale. If you choose to be always uber-optimistic, then the effect of your optimism, over time, is reduced. Shit will happen, no matter what; a good sense of humor will not solve it but will make it easier to bear. A manager’s true nature is seen during storms, not while sailing calm waters.
### The Engineering Culture
>[!cite] _“It is a great profession. There is the fascination of watching a figment of the imagination emerge through the aid of science to a plan on paper. Then it moves to realization in stone or metal or energy. Then it brings jobs and homes to men. Then it elevates the standards of living and adds to the comforts of life. That is the engineer’s high privilege. The great liability of the engineer compared to men of other professions is that his works are out in the open where all can see them. His acts, step by step, are in hard substance. He cannot bury his mistakes in the grave like the doctors. He cannot argue them into thin air or blame the judge like the lawyers. He cannot, like the architects, cover his failures with trees and vines. He cannot, like the politicians, screen his shortcomings by blaming his opponents and hope the people will forget. The engineer simply cannot deny he did it. If his works do not work, he is damned… On the other hand, unlike the doctor, this is not a life among the weak. Unlike the soldier, destruction is not his purpose. Unlike the lawyer, quarrels are not his daily bread. To the engineer falls the job of clothing the bare bones of science with life, comfort, and hope. No doubt as years go by the people forget which engineer did it, even if they ever knew. Or some politician puts his name on it. Or they credit it to some promoter who used other people’s money... But the engineer himself looks back at the unending stream of goodness which flows from his successes with satisfactions that few professions may know. And the verdict of his fellow professionals is all the accolade he wants.”_ Herbert Hoover
Engineering is much more than a set of skills. It consists of shared values and norms, a special vocabulary and humor, status and prestige ordering, and a differentiation of members from non-members. In short, it is a culture. Engineering culture has been well studied by both social scientists and engineers themselves. Researchers agree that there are several distinguishing features of the culture that separate it from the cultures of specific workplaces and other occupational communities. Although engineering cannot sustain itself without teamwork, it can be at the same time an individual endeavor. Engineers routinely spend long hours at a workstation trying to figure things out on their own, forming intimate relations with the technologies they are tasked to create. Because work is so often done individually, engineers engage in seemingly strange rituals to protect the integrity of their work time. For example, it is typical for an engineer to stay at the office or laboratory late into the night, or to bring work home, just to work in solitude and without distraction (Perlow, 1999).
But what is culture?
Culture is a property of groups, and an abstraction for explaining group behavior (Schein, 2004). ‘Culture’ is one of those terms (like ‘quality’) that everybody uses in slightly different ways. The following are commonly used definitions of culture.
1. Culture is a set of expressive symbols, codes, values and beliefs. These are supported by information and cognitive schemas and expressed through artifacts and practices (Detert et al., 2000).
2. Culture is a shared pattern of basic assumptions learned by a group by interacting with its environment and working through internal group issues. These shared assumptions are validated by the group’s success and are taught to new members as the “correct way to perceive, think, and feel in relation” to problems the group encounters (Schein, 2004).
3. Culture is in the interpersonal interactions, shared cognitions, and the tangible artifacts shared by a group (DiMaggio, 1997).
These definitions share the common features of identifying culture through properties, tangible and intangible, that represent shared thoughts or assumptions within a group, inform group member behavior, and result in some type of artifact visible to members outside the group (Twomey Lamb, 2009). These features are influenced by a group’s history, are socially constructed, and impact a wide range of group behavior at many levels (e.g. national, regional, organizational, and inter-organizational) (Detert et al., 2000). Culture can also be considered at smaller levels. For instance, a team may have its own subculture within an organization: heavily influenced by the overall organizational culture but nuanced by the individuals and experiences on a given team. Within a team, much of the tone is set by the team leader and those who have been with the team the longest. Once established, a group’s culture is tempered by shared experiences and by the past experiences of those who later join the group, bringing with them new beliefs, values, and assumptions (Schein, 2004).

In an engineering context, this means a team’s culture impacts its creativity, problem solving, and ability to generate new concepts (Harris, 2001). In fact, group norms, one of the characteristics of culture, are key to group performance (Hackman & Oldham, 1980). However, efforts to alter group norms can be confounded by culture. New behaviors or processes introduced to a group will fail to catch on if they go against the prevailing culture (Hackman & Oldham, 1980). This is because one characteristic of culture is its stability within a group (Schein, 2004). The formation of culture begins with the formation of a group, and mirrors the stages of group formation: forming, storming, norming, and performing (Tuckman & Jensen, 1977). Once a group is formed, group norms begin to develop through conflicts, attempts to achieve harmony, and the eventual focus on a mutual goal, throughout the execution of which the team matures, adapts, and innovates, constantly testing and updating its behaviors, assumptions, and artifacts (Schein, 2004).

While culture is a powerful predictor of group behavior, it can also be a barrier to the introduction of new methods, tools, and processes (Belie, 2002). However, culture can also be a motivator for change. So-called ‘cultures of change’ empower members to seek out new methods and ideas to solve problems (Twomey Lamb, 2009). Organizational culture is a contributor to team success. Because trust is at the base of successful interactions, organizations can emphasize positive team norms and create a cultural context that supports team success by fostering and sustaining intellectual curiosity, effective communications, and the keeping of thorough documentation (Goodman, 2005).
## The Technical Factor
We design in order to solve a problem. But there are multiple ways of solving a problem, as there are many ways _to skin a cat_. Regardless of the paths we choose, everything we engineer boils down to a combination of analysis and synthesis, decomposition and integration; this is the true nature of Systems Engineering: facilitating the decomposition of the problem into bits that we realize on their own, and then integrate all together. The terms analysis and synthesis come from (classical) Greek and mean literally "to loosen up" and "to put together", respectively. These terms are used within most modern scientific disciplines to denote similar investigative procedures. In general, analysis is defined as the procedure by which we break down an intellectual or substantial whole into parts or components. Synthesis is defined as the opposite procedure: to combine separate elements or components in order to form a coherent whole. Careless interpretation of these definitions has sometimes led to quite misleading statements: for instance, that synthesis is "good" because it creates wholes, whereas analysis is "bad" because it reduces wholes to alienated parts. According to this view, the analytic method is regarded as belonging to an outdated, reductionist tradition in science, while synthesis is seen as leading the "new way" to a holistic perspective. Analysis and synthesis, as scientific methods, always go hand in hand; they complement one another. Every synthesis is built upon the results of a preceding analysis, and every analysis requires a subsequent synthesis in order to verify and correct its results. In this context, to regard one method as being inherently better than the other is meaningless (Ritchey, 1991). There cannot be synthesis without analysis, and we can never validate our analyses without synthesis.
The tasks we must perform while we design have to be formulated before they are executed. There is an old adage that goes: _it is not the idea that counts – it's the execution_. We can poorly execute an excellently conceived idea, just as we can poorly conceive something we execute perfectly. For anything we want to achieve, it is always about thinking before doing. We cannot expect proper results (proper as in, close to the desired goal) out of no planning or no thinking. It’s like taking a bus before you know where you want to go.
Whatever the problem is that we are trying to solve, we must take time to analyze what needs to be done and describe to others involved in the endeavor the _whys_ and _hows_. Time is always a constraining factor: we cannot take too long performing analysis; we always need to move to execution, otherwise everything just halts. To create technical devices from scratch, like spacecraft, we must start by identifying a need, or an idea, which triggers a set of activities. More knowledgeable and mature companies usually work from an identified need (using techniques such as market research, focus groups, customer feedback, etc.), but startups usually work solely on abstract ideas, since they don’t have the luxury of a clientele, often not even a single customer. For whatever _thing_ we need to do in our lives, be it taking a shower, taking a train, or having lunch, we go through a sequence of steps where we first define a goal and then perform a set of activities to achieve that goal; we do this all the time. Most of the time subconsciously, some other times consciously.
The analysis-synthesis perspective is probably the highest level of abstraction we can place ourselves at in order to observe how we design things. Both sides offer a lot of internal content to discuss. As engineers, we are put in a quest where we must turn an abstract concept into a device or set of devices that will perform a joint function which will hopefully realize the idea or fulfill the need that triggered all that. Think about it again: something that is a thought in our brains needs to turn into an actual thing which must perform a job in a given environment. What type of device or system will fulfill the need is our decision. For example, picture someone having the idea of providing Earth Observation data every hour for every single point of the globe: does that mean the system to develop needs to be a spacecraft? It could be drones, or balloons. Most likely the answer will be satellites, because of operational or cost factors, but still it is a choice we make from a variety of other technically feasible options. The solution space offers different types of alternatives; it is up to us to pick the best one. Having many options on the table tends to make the decision-making process harder compared to having a single option. For example, think about the idea of selling a service for orbital debris removal. In this case, the solution space of artifacts to choose from shrinks dramatically; it must be a spacecraft. It is important, during the analysis phase, not to _jump the gun_ into defining technical solutions for the problems we have. Keeping our minds open can lead us to alternatives that are better suited than what past experience could dictate.
Once we decide what type of artifact we must realize for the problem we must solve, be it a spacecraft, a piece of software, or a nuclear submarine, it becomes our design subject; at this stage it is more of a wish than anything else (a collective wish, to be more precise). We must then dismember it into fictional bits, chunks, or parts that we declare our design subject is made of, even without being anywhere near sure that those bits and chunks will effectively work together in the way we initially wished for. Designing requires a dose of betting as well. Eventually, the moment of bringing the bits together will come, and that’s precisely the objective of the synthesis part: realizing and connecting the pieces of the puzzle together and checking (verifying) that the whole works as the analytical phase thought it would. In any project, the time comes when analysis meets synthesis: a.k.a. reality kicks in, or what we planned versus what we came up with.
Here’s one key: analysis works on _fictions_ without operative behavior (requirements, models, diagrams, schematics), whereas synthesis works with a more concrete or behavioral reality (either literally tangible objects, or more intangible ones such as software, etc.). Analysis and synthesis belong to different domains: the former to the intangible, the latter to the tangible and executable which can be operated by a user. During the process, analysis and synthesis feed each other with lots of information. This feedback helps us adjust our analytical and synthetic work by perceiving, interpreting, and comparing the results against our initial wishes and needs. Is the new whole performing as expected? If not, do more analysis, do more synthesis, verify, repeat. If yes, enjoy. The circular nature of the design process clearly shows a “trial and error” essence at its core. We think, we try, we measure, we compare, we adjust (with more thinking), we try again. Engineering activity requires a constant _osmosis_ between analysis and synthesis, or, if we want to continue stealing terms from biology, a symbiosis. Both need each other and complement each other. In more classic engineering, the analysis and synthesis activities can run long distances on their own. Extended periods of analysis can take place before thinking about putting anything real together. NewSpace changes that paradigm by considerably shortening this cycle and coupling both activities closer together. In NewSpace, you cannot afford to analyze things too much; you need to put the puzzle together as fast as you can, and if it does not look right, you try until it does.
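The loop just described can be summarized in a few lines of illustrative Python (the function names are placeholders for whatever analysis, synthesis, and measurement mean in a given project; this is a sketch of the cycle, not a real process engine):

```python
def design_loop(wish, analyze, synthesize, measure, good_enough, max_iter=100):
    """Toy rendering of the cycle: think, try, measure, compare, adjust, retry."""
    model = analyze(wish)                  # fictions: requirements, models, diagrams
    artifact = None
    for _ in range(max_iter):
        artifact = synthesize(model)       # the concrete, operable whole
        result = measure(artifact)         # reality kicks in
        if good_enough(result, wish):      # what we planned vs. what we came up with
            return artifact                # enjoy (a snapshot, never truly "done")
        model = analyze(result)            # feed findings back and adjust
    return artifact                        # out of time: ship the best iteration
```

The NewSpace twist, in these terms, is simply a much smaller `max_iter` budget per cycle and many more cycles: synthesize early, measure early, and let the results drive the next round of analysis.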
Now, revisiting the “think before doing” bit from a few paragraphs back, it is fair to say we perform many tasks without consciously analyzing or planning them. The thinking bit happens without us realizing much. You don’t write a plan or a Gantt chart for taking a shower or walking the dog. For such things, the stages of planning and acting are subconscious. We can do many things, repeatedly cycling through the stages while being blissfully unaware that we are doing so. Pouring water into a glass requires all the stages described above, yet we never consciously think about them. It is only when something disrupts the normal flow of activity that conscious attention is required. Most behavior does not even require going through all stages in sequence; however, most activities will not be satisfied by single actions. Why do we have to plan so much when we design and engineer things, and why do we _not_ have to plan so much when we do our everyday stuff? When is it that analytical planning crosses to the conscious side? Here is when we need to start talking about a few concepts that are central to the way we execute tasks: collectivity, uncertainty, and atomicity (of actions). In short:
● Collectivity: how many actors are involved in doing something dictates the need for coordination and explicit planning. We cannot read minds just yet, so we need to capture our ideas and plans in some shareable format. The smaller the number of actors, the simpler the coordination needed.
● Uncertainty: how much we know about the task we are planning to do. The less we know about an activity, the more conscious planning needs to be done.
● Atomicity: how indivisible a task is. The more complex the composition, the more effort is needed to understand and manage it.
When all these factors combine, planning must be very explicit and clear, otherwise the probability of reaching the desired goal drops considerably. Take a very simple task: you want to take a walk in the park; just you, alone. If this action involves only you and nobody else, all the planning will be mostly introspective; yes, your wish to take a walk could be reasonably conscious, but you will plan it without really thinking a lot about it, nor leaving a very substantial trace behind. Think about the last time you decided to go for a walk by yourself, and what evidence that action left behind.

Think now about a walk in the park with a friend. The action is the same, nothing has changed, but now you need to coordinate with your friend to meet at the right park at the right time. Your friend might or might not know where the park is; you will have to explicitly send messages, directions, a timeline, etc. Now think about the last time you went for a walk with a friend and the trace it left behind: at the very least, some messages, and hopefully memories. Collective actions require conscious analysis and formulation, execution, and measurement, in ways all actors involved can understand, so that everyone knows what’s to be done and converges towards a common goal. Even for tasks we know perfectly well how to perform!

Certainty (or the lack of it) about an action is the other key factor which imposes the need to make the formulation and execution explicit and coordinated. If you are trying out a new park to walk in, you will have to check the maps and write down or print directions, even if you are going alone. Our lack of knowledge about something increases the need for more detailed planning. As for atomicity, an action is atomic if you cannot divide it any further. Taking a walk in the park by yourself is fairly atomic, if you want. But think about, for example, going to the supermarket: there is nothing “unknown” about it (you know how to do it), there is no collectivity (it’s just you), but it is not an _atomic_ action, since you probably need to fetch plenty of items from the shop. You might be able to remember four, five, or seven items tops, but for anything more than that you will have to make a list.
When we engineer things such as space systems, there is a combination of all that: collectivity, uncertainty (lots of it), and of course non-atomicity: tasks are composed of subtasks, sub-subtasks, and so forth, in a flow-down structure. Hence, whatever we plan to design for space, to increase its probability of success we will have to tread carefully through a very explicit life cycle, from idea to realization, mindfully organizing and planning our activities by means of stages and breakdown structures.
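As a toy illustration only (the thresholds and scaling below are invented, not a real planning formula), the three drivers can be read as inputs to a crude “how explicit must the plan be” heuristic:

```python
def planning_explicitness(actors: int, uncertainty: float, subtasks: int) -> float:
    """Toy heuristic: 0 means 'just do it', 1 means 'write everything down'.
    uncertainty is 0..1 (0 = fully routine, 1 = never done before)."""
    collectivity = min(actors - 1, 5) / 5        # solo tasks need no coordination
    atomicity    = min(subtasks - 1, 10) / 10    # atomic tasks need no list
    # Any single driver is enough to force explicit planning of its kind:
    return max(collectivity, uncertainty, atomicity)

print(planning_explicitness(actors=1,  uncertainty=0.0, subtasks=1))    # solo walk: 0.0
print(planning_explicitness(actors=2,  uncertainty=0.2, subtasks=1))    # walk with a friend: 0.2
print(planning_explicitness(actors=1,  uncertainty=0.1, subtasks=12))   # supermarket run: needs a list
print(planning_explicitness(actors=50, uncertainty=0.9, subtasks=500))  # space system: maxed out
```

The examples from the text land where expected: the solo walk needs no explicit plan, the walk with a friend needs some coordination, the supermarket run needs a written list, and a space system maxes out every driver at once.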
Engineering literature is known for calling the same thing many ways, and life cycle stages are no different. In the space industry, NASA has popularized the seven-phase life cycle model (Pre-Phase A to Phase F) (NASA, 2007). Here, we generically define a set of six stages:
● **Conceptualization**
○ Identify needs
○ Define problem space
○ Characterize solution space
○ Explore ideas and technologies
○ Explore feasible concepts
○ Identify knowledge blind spots
● **Development**
○ Capture/Invent requirements
○ Decompose and break down:
■ Capture architecture / system decomposition
■ Cost Analysis
■ Work decomposition
■ Org decomposition
■ Align all above under the Work Breakdown Structure
○ Capture knowledge
○ Integrate, prototype, test.
● **Production (Construction)**
○ Replicate the design into multiple instances which can reliably perform
● **Utilization**
○ Operate the instances created
● **Support**
○ Ensure Product’s capabilities are sustained
● **Retirement**
○ Archive or dispose of the Product
There are multiple ways of subdividing the way things are done; choosing how to partition the schedule is an arbitrary decision. It was discussed before that maturity milestones in system design are basically “snapshots” we take of the work whenever we consider it has met the criteria for a milestone. Reaching production readiness follows the same principle: we iterate the work, we gate the schedule with a review, and if the review is passed, then the snapshot can be considered production ready. The word “production” in consumer markets usually means mass replication of the design specification, i.e. creating multiple instances or copies of the design which will operate according to specifications. In NewSpace, there is no such thing as mass production. Spacecraft are usually manufactured at small scale, due to their complexity and handcrafted nature. For NewSpace projects, reaching production readiness can be misleading, depending on what the company is offering. If the startup offers, for example, off-the-shelf spacecraft platforms, then production readiness concerns mostly the space segment. But more complex offerings require making sure that the _big system_ (the spacecraft plus all the peripheral systems that must accompany the space segment to provide the global function) is production ready as well. The “concept” and “development” stages are where NewSpace companies spend most of their time. The development stage can produce an operable system as an output, but this system should still not be qualified as “production” ready.
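Continuing the illustrative `Part` sketch from the System Life Cycles section (same invented names; the criteria below are deliberately simplistic and hypothetical, not a real review checklist), a production-readiness snapshot is just the state of the work frozen once a gate review passes:

```python
def gate_review(part, criteria) -> bool:
    """Toy gate: every criterion must hold for the snapshot to 'freeze'."""
    return all(check(part) for check in criteria)

# Invented, deliberately simplistic production-readiness criteria:
production = STAGES.index("production")
criteria = [
    lambda p: p.stage >= production,                        # the whole...
    lambda p: all(c.stage >= production for c in p.parts),  # ...and every part
]

if gate_review(sat, criteria):
    print("snapshot frozen: this iteration is production ready")
else:
    print("keep iterating; move the pointer back where needed")
```

The real content of a gate review is of course the criteria themselves, and those are the organization's to define; the point here is only that a "freeze" is a decision applied to a moving, recursive thing.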
## Work Breakdown Structure
The WBS is probably THE one thing that tech startups must take, or borrow, from classic space. A project without a WBS is a loose project. The WBS is probably the mother of all breakdown structures during the lifetime of a project. When a team needs to develop anything, be it a smartphone, a spacecraft, or a nuclear submarine, one of the first steps after collecting requirements is to come up with the hierarchical discretization of how the work will be done. The WBS is a structure which contains other structures, like the PBS/SBS (Product/System Breakdown Structure), the CBS (Cost Breakdown Structure), and even the OBS (Organizational Breakdown Structure). The WBS glues all those structures together, and it must be kept this way throughout the life cycle of the project. The WBS also acts as a bridge between Systems Engineering and Project Management. As a measure of the importance of the WBS in devising space systems, NASA has a specific handbook about Work Breakdown Structures. The handbook states (Terrell, 2018, 2):
A WBS is a product-oriented family tree that identifies the hardware, software, services, and all other deliverables required to achieve an end project objective. The purpose of a WBS is to subdivide the project’s work content into more tractable segments to facilitate planning and control of cost, schedule, and technical content. A WBS is developed early in the project development cycle. It identifies the total project work to be performed, which includes not only all in-house work, but also all work content to be performed by contractors, international partners, universities, or any other performing entities. Work scope not contained in the project WBS should not be considered part of the project.
The WBS includes not only the technical aspects of the work to be done to develop the space system, but all the rest as well, including logistics, facilities, etc.
In short: Work not specified in the WBS does not exist.
To organize work better, the NASA WBS handbook proposes a numbering convention for each level in the hierarchy. Even though this is just a suggestion (you can choose your own), it is a good start for defining work items unambiguously. The handbook also limits the number of levels in the WBS to seven. The reason for these numbers is not pure magic, but the fact that research indicates our capacity for processing hierarchical information beyond depths of seven plus or minus two is limited (Miller, 1994). Equally important is the naming convention for the system subdivisions: this avoids confusion and solidifies a strong common terminology across the organization. There are numerous terms used to define the different levels of the WBS below the topmost system level. An example the reader can use is: subsystem, subassembly, component, equipment, and part. It should be noted that these entities are not necessarily physical (i.e. allocated to a specific instance); they can also be functional. It is recommended that beyond system level (L2), all entities are referred to in functional terms, also known as functional chains. This way, a functional chain gets its entity in the WBS without being locked to any specific physical allocation. For example, say two functional chains are assigned to a spacecraft mission: Attitude Management and On-board Data Handling. A block is assigned to each of these chains in the WBS, with its own internal structure. Design activities could either decide to allocate both functional chains into one physical Computing Unit (for example, using a hypervisor, or simply two different threads), or allocate them to two different physical computing units. Using functional chains in the WBS from L3 and below ensures this decision, whenever taken, will not greatly affect the organization of the work overall. That decision will impact how the pSBS (Physical System Breakdown Structure) or the As-Built or Bill of Materials maps to the WBS, but not much more than that. The Work Breakdown Structure must ensure that the work is done for all functions needed for the System to be successful, not dictate exactly how those functions are finally implemented.
Project management and other enabling organizational support products should use the subdivisions and terms that most effectively and accurately depict the hierarchical breakdown of project work into meaningful products. A properly structured WBS will readily allow complete aggregation of cost, schedule, and performance data from lower elements up to the project or program level without allocating a single element of work scope to two or more WBS elements. WBS elements should be identified by a clear, descriptive title and by a numbering scheme, defined by the project, that performs the following functions (sketched in code right after the list):
● Identifies the level of the WBS element.
● Identifies the higher-level element into which the element will be integrated.
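A hedged sketch of those two functions, assuming the dotted numbering convention the handbook suggests (the helper names are hypothetical):

```python
# Two things a WBS code tells you under a dotted numbering convention:
# the element's level, and the higher-level element it integrates into.
def wbs_level(code):
    """'1.2.3' sits at level 3 of the hierarchy."""
    return len(code.split("."))

def wbs_parent(code):
    """'1.2.3' integrates into '1.2'; the root ('1') has no parent."""
    parts = code.split(".")
    return ".".join(parts[:-1]) if len(parts) > 1 else None

assert wbs_level("1.2.3") == 3
assert wbs_parent("1.2.3") == "1.2"
```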
One important thing to consider while creating the WBS (which will be addressed in the following chapter) is to keep it strictly mapped to the system under design, reconciling the WBS with the recursive nature of the system. The work grows bottom-up, meaning that lower levels dictate the work needed for the system to be realized. Put differently, avoid adding at higher levels of the WBS chunks of work that will still be needed at lower levels, for example Project Management. If your system is composed of subsystems, those subsystems will still need Project Management work, so it does not make a lot of sense to add PM at level 2. Graphically:
![[Pasted image 20250215161459.png]]
Figure 2.2 - WBS example
In the example above (Fig. 2.2), both “Project Management” and “Systems Engineering” are placed at L2, but this could be read as there being no PM or SE presence at lower levels, which is inaccurate. A clearer way of stating the work breakdown is to make sure every level contains its PM/SE effort correctly mapped into it. Ideally, higher levels of the hierarchy should be treated as “abstract classes”: they merely exist as containers of lower-level entities. But what about the early stages of development, when the lower-level entities do not exist just yet? This brings up a very important factor: the Work Breakdown Structure evolves with system maturity, and system maturity evolves with time, hence the WBS is a function of time. The way to tackle this is as follows: start with the topmost block (the project) and add to the next level the work needed to create the following level. Let’s start the process from scratch. Fig. 2.3 depicts the initial WBS:
![[Pasted image 20250215161522.png]]
Figure 2.3 - Initial WBS
Your “Debris Removal” project will need PM and SE to understand how to realize it. Then, PM will work for a while and come up with the idea (an obvious one, but bear with me for the sake of the exercise) that a spacecraft is needed; hence, a new block is added to the WBS:
![[Pasted image 20250215161542.png]]
Figure 2.4 - A new block added to the WBS
The creation of a new block at L2 (Spacecraft) will need PM/SE work of its own. It is important that this effort is contained and integrated by the Spacecraft block itself. Of course, level 1 PM/SE will talk and collaborate with level 2 PM/SE and so on, but they have different scopes, hence they should be kept separate.
![[Pasted image 20250215161559.png]]
Figure 2.5 - Adding a subsystem to the WBS
Readers may ask (and rightfully so) how early-stage NewSpace projects can afford an army of project managers and systems engineers at every level of the WBS. The short answer is: there is no need for such a crowd. Any startup below 30 or 40 people does not need an actual full-time PM or SE. The long answer is that PM and SE are understood as “efforts”, which can be carried by people wearing multiple hats in the organization, and not necessarily as a specific person or a team allocated just for it.
In any case, no matter how small the team, it is very useful to have a person defined as the PM and another as the SE of a specific level of the WBS. In extreme cases, one person can still wear these two hats, but the efforts must be differentiated. Assigned PMs and SEs can be responsible for the activities of a specific level of the WBS and discuss integration activities with the PM/SEs of the levels below. Systems Engineering at every level is capable of spawning, or flowing down, components for the next level. In the debris removal example, Systems Engineering at level 2, after running analyses along with Flight Dynamics, is entitled to spawn a Propulsion subsystem. This subsystem will require its own PM and SE, who will flow things down further and collaborate on integration upstream. It is important to mention that there is an authority flow going downstream as you navigate the WBS. The PM/SE effort at project level (L1) should keep oversight of all the components at that level and below. The PM/SE at L2 (Spacecraft) keeps oversight of that level and below but reports to L1, and so on. The Systems Engineering team at each level is responsible for ensuring its components are properly integrated and verified to become part of the SBS. A short sketch of this spawning logic follows Fig. 2.7.
![[Pasted image 20250215161619.png]]
Figure 2.6 - Efforts required to spawn a new element in the WBS
As you can see, this is nothing other than analytical work (i.e., flow-down). Work gets dissected from requirements and user needs all the way down to components and parts, and the system gets realized (integrated) bottom-up. Despite its “classic” smell, the WBS is a great glue to use and to perfect in NewSpace, since it helps organize the work and clearly define responsibilities, whatever the team size. Let’s redraw it in a more NewSpace-friendly way:
![[Pasted image 20250215161635.png]]
Figure 2.7 - A more startup-friendly version of a WBS - Every level needs cost/schedule + big picture
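The growth shown in Figs. 2.3 to 2.7 can be captured in a few lines. Below is a sketch, under the assumption (mine, for illustration) that every spawned block automatically receives its own, level-scoped PM/SE effort:

```python
# Every block spawned into the WBS carries its own, level-scoped PM/SE
# effort. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    children: list = field(default_factory=list)

def spawn(parent, name):
    """Add a new WBS block; it immediately gets its own PM and SE effort."""
    block = Block(name, children=[Block("Project Management"),
                                  Block("Systems Engineering")])
    parent.children.append(block)
    return block

project = Block("Debris Removal",
                children=[Block("Project Management"), Block("Systems Engineering")])
spacecraft = spawn(project, "Spacecraft")      # Fig. 2.4-2.5
propulsion = spawn(spacecraft, "Propulsion")   # the subsystem spawned by L2 SE
```

Note that `spawn` never adds PM or SE at the parent's level on behalf of a child; each block carries its own, which is exactly the point of Fig. 2.7.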
The WBS is just a collection of the work and parts needed to realize a system; how the life cycle stages are broken down inside the different levels of the WBS is totally open. For example, the PM team in charge of the L2 element “Spacecraft” could use a more classic methodology, whereas the L3 team in charge of Propulsion could use Scrum. Even though it would be advisable to follow similar life cycle approaches across the WBS, using the same one is not mandatory.
We will discuss further ahead how to allocate the integrated teams (IPTs) to realize the different blocks of the WBS.
## Requirements
Requirements are sacred in classic space. Years are spent capturing, analyzing and reviewing them. Thousands are collected in sophisticated and expensive tools. Specific languages have been created or adapted for writing and handling requirements in diagrammatic and hierarchical ways. There surely exists a lucrative requirements industry, one that will make it feel like we _must_ do requirements, and that would like us to do nothing else if possible. But requirements have a different meaning and status in NewSpace. For a start, to have requirements at all, some sort of customer figure must exist in the project: a customer whose wishes we could mine and interpret, yielding a great deal of information to help streamline our design efforts. Small space startups usually do not have the luxury of a customer to interrogate; so, what happens with requirements in a no-customer situation?
Some possible scenarios: (1) totally ignoring them, or (2) faking requirements to make their absence feel less uncomfortable, with very little practical use because those fake requirements are usually shallow. For example, I once had the chance to skim through a “Requirements Specification Document” of a very early-stage NewSpace project, and it had requirements like “The spacecraft shall be able to take images”. What is the use of documenting such obvious things? Writing that kind of requirement is pointless, but jumping directly to design with zero requirements is dangerous: the organization’s scarce resources could be employed to synthesize a totally wrong solution, or to overlook some important factor, feature or performance measure that could come back later to bite them. For example, “minimum time between acquisitions” in an Earth-observation spacecraft might not be a requirement at the proof-of-concept stage but become a critical technical performance measure later. So, it is healthy to repress the instinct to start designing out of gut feeling and take some time to do thorough analysis, in the form of open-source intelligence, or by observing and collecting what competitors do, and how they do it. Unfortunately, requirements analysis cannot take forever, otherwise the system never gets to market. Anything helps at this stage to capture what the system ought to do; even role playing, where someone on the team acts as a _pretend_ customer. In summary:
● Understand the market: it is money well spent to reach out to potential customers and extract the kinds of needs they have: price targets, scope, what they are currently not getting from existing players, etc. Get figures about the market you are trying to address: its growth, size, etc.
● Do some competitive intelligence and benchmarking: Run your own intelligence gathering exercise where you scan the market you’re planning to join, choose three or four of your closest competitors, and compile a few requirements based on their offers. Reach out to big players who are doing similar things; chances are they will be willing to help.
● Identify key technical performance measures your system must meet and create a shortlist of the most meaningful requirements which can fulfill those technical performance metrics.
It is enough to capture the topmost ten or fifteen technical performance measures and requirements for a design to kick off. These master requirements can still evolve (and will), because things are still very fluid. Requirements spawn other requirements, and requirements have their own attributes and kinds.
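As a sketch of what “attributes and kinds” can look like in practice, here is a hypothetical shortlist entry tying a master requirement to a technical performance measure; the field names, identifiers and figures are invented, not a prescribed schema:

```python
# A minimal sketch of requirement attributes and kinds.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    req_id: str
    statement: str
    kind: str                     # e.g. "must-be", "performance", "attractive"
    tpm: Optional[str] = None     # the technical performance measure it serves
    target: Optional[str] = None  # quantified target for that TPM

master_requirements = [
    Requirement(
        req_id="REQ-001",
        statement="The system shall bound the time between image acquisitions.",
        kind="performance",
        tpm="minimum time between acquisitions",
        target="< 12 h",  # invented figure, for illustration only
    ),
]
```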
What other types of requirements are there? There are many, and many ways of categorizing them, depending on which book you have in your hands. One reasonably simple categorization is Kano’s model, a theory of product development and [customer satisfaction](https://en.wikipedia.org/wiki/Customer_satisfaction) developed in the 1980s by Professor [Noriaki Kano](https://en.wikipedia.org/wiki/Noriaki_Kano), which classifies customer preferences or requirements into three major groups:
● Must-be requirements
● Performance requirements
● Attractive requirements
![[Pasted image 20250215161659.png]]
Figure 2.8 - The Kano model
### Must-Be Requirements
The "must-be" requirements are the basic needs for a product. They constitute the main reason that the customer needs the product. That is, they fulfill the abstract need of the customer. The customer expects these requirements, and the manufacturer gets no credit if they are there. On the other hand, if these requirements are not fulfilled, the customer will be extremely dissatisfied (the customer will have no reason to buy the product). Fulfilling the "must-be" requirements alone will not satisfy the customer; they will lead only to a case of a "not dissatisfied" customer. Must-be requirements are unspoken and non-measurable; they are either satisfied or not satisfied. They are shown in Kano's model by the lower right curve of the figure.
### Performance Requirements
Performance requirements are spoken needs. They can be measured for importance as well as for range of fulfillment levels. The customers explicitly tell us or answer our question about what they want. Performance requirements include written or spoken requirements and are easily identified and expected to be met. Usually, customer satisfaction increases with respect to the degree to which these requirements are fulfilled. Also, performance requirements serve as differentiating needs and differentiating features that can be used to benchmark product performance. The diagonal line in Kano's model depicts these requirements.
### Attractive Requirements
Attractive requirements are future-oriented and usually high-tech innovations. These requirements are unspoken and unexpected, because the customer does not know they exist. Usually, they result from the creativity of the research and development effort in the organization. Because they are unexpected, these creative ideas often excite and delight the customer and lead to high customer satisfaction. These requirements are shown by the curved line in the upper left corner of the figure. It should be mentioned that attractive requirements quickly become expected ones.
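To wrap up the three categories, here is a hedged sketch of the classic Kano questionnaire evaluation, where each requirement is probed with a functional question (“how would you feel if the feature is present?”) and a dysfunctional one (“…if it is absent?”). The answer scale and mapping below follow one common form of Kano’s evaluation table; variants exist:

```python
# Kano classification from a pair of survey answers.
# Answer scale: "like", "expect", "neutral", "live-with", "dislike".
def kano_category(functional, dysfunctional):
    middle = {"expect", "neutral", "live-with"}
    if functional == "like" and dysfunctional == "dislike":
        return "performance"
    if functional == "like" and dysfunctional in middle:
        return "attractive"
    if functional in middle and dysfunctional == "dislike":
        return "must-be"
    if functional in middle and dysfunctional in middle:
        return "indifferent"
    if (functional == "like" and dysfunctional == "like") or \
       (functional == "dislike" and dysfunctional == "dislike"):
        return "questionable"  # contradictory answers
    return "reverse"           # answers favor the feature's absence

# Customers expect imaging to just work, and hate its absence:
assert kano_category("expect", "dislike") == "must-be"
```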
But once requirements, either artificially created or realistically captured from a real customer, are in reasonably good shape, the breakdown activity starts: the team needs to subdivide work into smaller, more manageable chunks in order to solve the problem more effectively. NewSpace projects usually start with a partial understanding of the problem and/or the market. It is normal for them to assume or simplify many things that will prove to be different as the project matures. The design needs to be done in a way that can easily accommodate new requirements found along the way, or changes in the current set of requirements.
## Life Cycle Cost
How can we possibly know the cost of what doesn’t exist? Clearly, we cannot. Still, when we develop technical things, we are often asked to reckon precisely the total cost of what we have just started to design. And we try, frequently failing miserably; those off-target estimations fire back later in the process. In engineering practice, there is a recurrent need for a crystal ball. But, without wanting to challenge what has been part of the job for ages, let’s try to understand how cost predictions can be made; better yet, let’s stop using the term prediction and start using estimation instead. Too often, we become fixated on the technical performance required to meet the customer’s expectations without worrying about the downstream costs that contribute to the total life cycle cost of a system (Faber & Farr, 2019).
Multiple factors can affect our cost estimations:
● Constantly changing requirements
● Uncertain Funding
● Changing technology
● Change in regulations
● Changes in competition
● Global markets uncertainties
For very early-stage cost analysis, scope is king. A company embarking on a new project must evaluate what to include and what not to include in its cost estimations, and this is not a simple task.
Let’s go by example: a company has an idea for selling a debris-removal service. It is a LEO spacecraft with the capability of physically coupling to a target satellite and either moving it to a more harmless orbit or forcing both to re-enter the atmosphere together. The company is seeking funds; hence it engages in the process of understanding how much it would cost to develop such a thing. What is the scope of the cost analysis? Is it:
(we’re excluding facilities costs here for simplicity)
● Just the spacecraft (material costs)?
But wait, someone needs to decide what to buy
● Spacecraft + R&D labor?
But wait, once it’s designed, someone needs to assemble it and verify it
● Spacecraft + R&D labor + AIT (Assembly, Integration and Test) labor?
But wait, once it’s assembled, it needs to be launched into space
● Spacecraft + R&D labor + AIT labor + Launch?
But wait, once it’s in space, someone needs to operate it
● Spacecraft + R&D labor + AIT labor + Launch + Operations?
But wait, to operate it, tools and systems are needed
● Spacecraft + R&D labor + AIT labor + Launch + Operations + Ground Systems?
You can see how the scope only creeps. Each of these items is most likely non-atomic, so they have internal composition, up to several levels of depth. Take the spacecraft itself: its functional and physical architectures are poorly understood or defined at early stages. The _make vs buy_ strategy hasn’t been properly analyzed. And yet, on top of this very shallow level of understanding, engineers are supposed to define budgets which also impact, for example, funding. The risks here are:
● Overestimating the cost: this will scare investors, funders and potential customers away, and will make the margins look too thin.
● Underestimating the cost: this risks being unable to deliver the project without cost overruns, putting the project at risk of cancellation and creating a bad reputation.
What to do then?
There are three generally accepted methods for determining LCC (Faber & Farr, 2019, 13) and (NASA, 2015):
1. Engineering build-up: Sometimes referred to as “grassroots” or “bottom-up” estimating, the engineering build-up methodology produces a detailed project cost estimate that is computed by estimating the cost of every activity in the Work Breakdown Structure, summing these estimates, and adding appropriate overheads.
2. Analogy: an estimate using historical results from similar products or components. This method is highly heuristic, and its accuracy depends on the accuracy of cost estimations for those other similar products. The analogy system approach places heavy emphasis on the opinions of "experts" to modify the comparable system data to approximate the new system and is therefore increasingly untenable as greater adjustments are made.
3. Parametric: based on mathematical relationships between costs and some product- and process-related parameters.
The methodology chosen for cost estimation cannot remain rigid throughout the project lifecycle. The analogy and parametric methodologies are better suited to the early stages, when uncertainty is at its maximum, whereas the build-up methodology becomes more accurate as the WBS matures and more insight about the overall system composition is gained.
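A small sketch contrasting two of the three methods, using the scope items from the debris-removal example above. Every figure and coefficient below is invented for illustration; real CERs are fitted to calibrated historical datasets (NASA, 2015):

```python
# 1. Engineering build-up: sum every WBS leaf cost, then add overheads.
wbs_leaf_costs = {            # USD, hypothetical figures
    "Spacecraft materials": 1_200_000,
    "R&D labor":            2_500_000,
    "AIT labor":              800_000,
    "Launch":               3_000_000,
    "Operations":           1_000_000,
    "Ground systems":         600_000,
}
overhead_rate = 0.20          # assumed flat overhead
build_up = sum(wbs_leaf_costs.values()) * (1 + overhead_rate)

# 2. Parametric: a power-law cost estimating relationship (CER), e.g.
#    cost = a * mass^b, with a and b fitted to past missions (made up here).
a, b = 95_000, 0.7
parametric = a * (150 ** b)   # hypothetical 150 kg spacecraft

print(f"build-up: ${build_up:,.0f} | parametric: ${parametric:,.0f}")
```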
## Proof-of-Concepts
While we iterate through the life cycle of our designs, it is important to get things tangible as fast as possible and test our assumptions early. This is usually done by means of prototypes, or proofs of concept. In its most essential form, a prototype is a tangible expression of an idea. It can be as simple as a cardboard sketch or as complex as a fully functioning system like an aircraft. Prototypes can show different levels of fidelity. Low-fidelity prototypes provide direction on early ideas, and they can happen fast (sometimes taking just minutes to produce). High-fidelity prototypes can help fine-tune architectures and assess areas such as manufacturability, assembly tooling, testability, etc.; these are more typical in engineering, and they can be costly and complex. High-fidelity prototypes can in some cases show almost the full subsystem composition and, in some cases, the same physical properties and functionalities as the real thing.

For example, the aerospace industry reserves big hangars to host full-size aircraft functional test beds, called “iron birds” or “aircraft zero”. These test beds are composed of most of the subsystems an aircraft is made of, laid out matching the general body shape and physical lengths, but without the fuselage. By means of closed-loop simulation environments connected to the setup, manufacturers can test a large share of the complex subsystems present in aircraft, such as fuel systems, hydraulics, electrical systems, harness, etc. An iron bird is basically an aircraft that cannot fly: a framework in which major working components are installed in the relative locations found on the actual airframe, arranged in the skeletal shape of the aircraft being tested. With the components out in the open, they can be easily accessed and analyzed. In the years leading up to a new aircraft’s first flight, changes made during the development phase can be tested and validated using this valuable tool. Aircraft components that function well in isolated evaluations may react differently when operating in concert with other systems. With the aircraft’s primary system components operational on the iron bird (except the jet engines, which are simulated), they are put through their paces from an adjacent control room. There, within a mini cockpit, a pilot “flies” the testbed on a simulator through varied environmental conditions while engineers rack up data on the flight.

Interestingly, an iron bird’s work isn’t done once the aircraft is certified and deployed. These testbeds are still in operation after aircraft reach production, where they can be used to provide insights into specific issues that arise or to test new enhancements before they are introduced on in-service aircraft. Even in the age of advanced computer simulations, iron birds (or aircraft zeros) maintain a vital role in commercial aerospace testing protocols. They may never fly, but each iron bird is the precursor to an aircraft that does.
At the same time, aerospace builds prototypes that _do_ fly, but are not full replicas of the production specimens. For example, flying prototypes are not equipped with cabin subsystems but instead equipped with test equipment to evaluate the in-flight performance.
Even though digital mockups have gained a lot of traction in recent years, laying out subsystems in real physical space, with real harnesses and real time constants, still provides invaluable insight that computer simulations cannot accurately replicate. Big aerospace companies can afford such facilities, but NewSpace usually cannot: NewSpace companies cannot afford to spend money and resources building expensive replicas that will not fly. Hence, the typical NewSpace approach is to build two setups: one usually called a flatsat (or tablesat), and the actual setup that will see orbit. The latter still cannot be called a “production” thing, but is rather a prototype, since most of its components are intermediate iterations. The former (the flatsat) is a functionally similar setup that can sit on top of a table (hence its name). A flatsat is usually composed of non-flight-ready pieces of hardware, development kits and even breadboards. Its main purpose is to give the software team an environment to grow their software on that comes reasonably close to the real one.
Recalling the _enabling systems_ concept from early chapters, a flatsat is an enabling system for the prototype. What is more, there are a myriad of other bits and things surrounding the prototype which are also prototypes on their own: telemetry parsers, viewing tools, scripting engines, command checkouts, and a long etcetera. A collection of smaller prototypes combined into a big aggregated prototype.
A prototype that successfully works in orbit is a massive hit for a space project. But it is also a bit of a curse: if things work great, chances are the prototype will be quickly [[Engineering is Broken (but we can fix it)#Prototypes are not Products|dressed as a product]]. Moreover, a product _fog_ will soon cover all the other bits and parts that were rudimentary minimum-viable things, suddenly pushing these early-stage components to ‘production grade’ when they are not ready for that.
Nailing a working prototype does not mean development is over. In fact, development may very well continue on the flying thing. NewSpace development environments reach all the way out to orbit: things are fleshed out in orbit and debugged in orbit. For early-stage space organizations, it is of paramount importance to get to space as quickly as possible, to show they can deal with what it takes to handle a space system end to end. Unlike classic space, the system does not have to be fully complete to get to orbit and be able to work. The important bit during the development stage is to pay attention to the must-haves; the rest can be done in space.
## Time Leakage
In his classic _The Mythical Man-Month_, Fred Brooks asks (Brooks, 1995): “How does a project get to be a year late? One day at a time.” On the same note, Hofstadter’s Law states: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.” An entire book could be written about how massively error-prone we are when we estimate the time our tasks will take. Task estimation is deeply affected by multiple psychological factors: we are usually overoptimistic (we think things will take less time than they will) and reductionist (we overlook dependencies our task will need to progress). Most of us view the world as more benign than it really is, our own attributes as more favorable than they truly are, and the goals we adopt as more achievable than they are likely to be (Kahneman, 2011). We underestimate uncertainty by shrinking the range of possible uncertain states (by reducing the space of the unknown); the unexpected always pushes in a single direction, higher costs and a longer time to completion (Taleb, 2007), never the other way around. One way to reduce uncertainty in task estimation is to rely on heuristics and data: if you have done something similar in the past, you can use that experience as a calibration factor for your estimate of how long something else will or could take; we rarely do things for the first time ever, except in the very early stages of our professional careers. Time is a central part of what we do. Designing technical systems is organized around projects, whose life cycles take a fair amount of time to go through their stages, from infancy to retirement. The duration of those stages is never fixed; they seem to continuously stretch due to delays.
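The calibration idea admits a very simple sketch: derive a correction factor from the ratio of past actuals to past estimates for similar tasks, and apply it to new raw estimates. All figures below are invented:

```python
# Calibrate a new estimate against past (estimate, actual) pairs.
past_tasks = [(5, 8), (10, 13), (3, 6), (8, 12)]  # (estimated, actual) days

calibration = (sum(actual for _, actual in past_tasks)
               / sum(estimate for estimate, _ in past_tasks))  # here: 1.5

raw_estimate_days = 10
calibrated = raw_estimate_days * calibration
print(f"raw: {raw_estimate_days} d -> calibrated: {calibrated:.1f} d "
      f"(factor {calibration:.2f})")
```

Unsurprisingly, the factor almost always turns out to be greater than one.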
It is because of the highly collaborative and interdependent nature of what we do that delays are frequent. But delays are not indivisible units.
Delays are the aggregation of small, tiny ones that are usually taken as acceptable or inoffensive when they are not. Just as a million microseconds must elapse to make a second on your watch, a million project micro-ticks must elapse for one project tick. When one hears of a schedule slippage in a project, it is hard not to imagine that a series of major calamities must have taken place. Usually, however, the disaster is due to ants, not tornadoes; the schedule has slipped imperceptibly but inexorably. Indeed, major calamities are easier to handle: one responds with major force, radical reorganization, or the introduction of new approaches, and the whole team rises to the occasion. But the day-by-day slippage is harder to recognize, harder to prevent, and harder to make up (Brooks, 1995). Yesterday a key actor was unavailable, and a meeting couldn’t be held. Today the machines are all down, because of network maintenance. Tomorrow the latest software patches will not be deployed, because deployments are not allowed on that specific day. Each event only postpones some activity by half a day or a day. And the schedule slips, one day at a time.
## System Failure
Failure is seldom atomic, i.e. it rarely happens in a unitary, indivisible action, even in extremely catastrophic events. Failure is, more generally, a trajectory: a sequence of events towards an outcome that represents some type of sensible loss. Just as a particle resting on a frictionless surface at point A needs a driving action (force) and an unobstructed path to reach point B, failure requires a driving action and a cleared path to reach the point where loss occurs. And just as the particle needs time to travel from A to B, failure needs time to build up; it is never instantaneous nor, as said, a single-factor phenomenon.

Think about football, for example, where team 1 plays against team 2. Any goal scored by team 1 is a failure for team 2, and vice versa. From this perspective, the forcing action is the offensive pressure one team exerts on the other; team 1 continuously pushes team 2 to fail, until it eventually happens. For the defending team, preventing failure means adding as many barriers as possible: creating the right obstacles for the force the attacking team is exerting. This requires making decisions about how to place and move such impedances, and those decisions heavily rely on information; in sports, mostly visual information. In sports, offensive and defensive roles can also easily change places. Not so when we operate systems such as organizations: we can only adopt defensive roles against failure. It is the external environment against the system, and we cannot take any offensive action against the environment hoping to put the forcing action to a stop; an aircraft cannot do anything to modify the laws of physics that govern its flight, nor can an organization do much to alter the behavior of the markets it is embedded in. In these cases, we can attack the effects, but we cannot touch the causes.
Failure prevention is a game where we can only play defense; failures insistently want to happen, and we need to work against them. In tennis, there is something called an unforced error: a mistake supposedly not forced by a good shot from the opponent. The people in charge of recording the statistics try to determine, when a player makes a mistake, whether that mistake was forced by a good play of the opponent or not. If the player seems to have been balanced for the shot and had enough time to execute a normal stroke (not an abbreviated one) and still made a mistake, that counts as an unforced error. As can be seen, the definition of an unforced error is purely technical, and wrong: there is no mention of physical fatigue, tactical decision-making, or the _mental game_. Every tennis player is continuously under the exertion of a forcing action pushing him or her to fail. Every error is, therefore, forced (Tennis Mind Game, n.d.). In tennis, whoever makes the fewest errors wins, not whoever makes the most spectacular shots.
When we deal with complex systems, failure-driving _forces_ are multiple, persistent, and adaptive. When we operate those systems, we rely on measurements to construct the understanding with which we assess faults and failures. But what if the measures are wrong? Then our understanding is flawed, detaching from actual reality and tainting our decisions, eventually opening an opportunity for failure forces to align and find a trajectory towards loss. Errors and failures are troublesome: they can cause tragic accidents, destroy value, waste resources, and damage reputations.
An organization like a NewSpace company, at the highest level of abstraction, operates what we have called before the _big system_, which is an aggregation of systems. All in all, operating this system involves an intricate interaction of complex technical systems and humans. Reliability can only be understood as the holistic sum of reliable design of the technical systems and reliable operation of those systems; any partial approach to reliability will be insufficient. Organizations, particularly those for whom reliability is crucial, develop routines to protect themselves from failure. But even highly reliable organizations are not immune to disaster, and prolonged periods of safe operation are punctuated by occasional major failures (Oliver et al., 2017). Scholars of safety science label this the “paradox of almost totally safe systems,” noting that systems that are very safe under normal conditions may be vulnerable under unusual ones (Weick et al., 1999).

Organizations must put in place different defenses to protect themselves from failure. Such defenses can take the form of automatic safeguards on the technical side, or procedures and well-documented, respected processes on the human side. The idea that all organizational defenses have holes and that accidents occur when these holes line up, often following a triggering event, is well known as the Swiss cheese model of failure. In this model, an organization’s defenses against failure are modeled as a series of barriers, represented as slices of cheese. The holes in the slices represent weaknesses in individual parts of the system, continually varying in size and position across the slices. The system produces failures when a hole in each slice momentarily aligns, permitting a trajectory of accident opportunity, so that a hazard passes through holes in all the slices, leading to a failure.
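The model's arithmetic is worth a quick sketch: assuming independent barriers, the probability of all holes lining up is the product of the per-barrier hole probabilities, small but never zero. A Monte Carlo version, with invented probabilities:

```python
# Swiss cheese model: N independent barriers, each with some probability of
# having a "hole" at any given moment; an accident trajectory exists only
# when all holes line up simultaneously.
import random

def accident_occurs(hole_probs):
    """True if every barrier happens to have a hole at the same time."""
    return all(random.random() < p for p in hole_probs)

barriers = [0.05, 0.10, 0.02, 0.08]   # per-barrier hole probability (invented)
trials = 1_000_000
hits = sum(accident_occurs(barriers) for _ in range(trials))

# For independent barriers, P(accident) is simply the product:
# 0.05 * 0.10 * 0.02 * 0.08 = 8e-6 -- rare, but never zero.
print(f"simulated: {hits / trials:.2e} | analytical: {0.05*0.10*0.02*0.08:.2e}")
```

The sketch also shows why long periods of safe operation prove little: at 8e-6 per trial, you need on the order of a million opportunities before the alignment shows up.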
There is a 'classic' view of automation which holds that its main purpose is to replace human manual control, planning and problem-solving with automatic devices and computers. However, even highly automated systems, such as electric power networks, need human beings for supervision, adjustment, maintenance, expansion, and improvement. One can therefore draw the paradoxical conclusion that automated systems are still human-machine systems, for which both technical and human factors are important. Quite some research has been done on human factors in engineering, which only reflects the irony that the more advanced a control system is, the more crucial the contribution of the human operator may be (Bainbridge, 1983). It might also seem that the role of the operator is solely to monitor numbers and call an 'expert' when something falls outside some safe boundary. This is not true. Operators gain crucial knowledge when they operate in real conditions that no designer can gain during the development stage. Such knowledge must be captured and fed back to the designers, continuously.
Automation never comes without challenges. An operator overseeing a highly automated system might start to lose the ability to react manually when the system requires intervention. Because of this, a human operator of a space system needs to remain well trained in manual operations in case automation disengages for whatever reason. If the operator blindly relies on automation and the automation stops working, the probability of making the situation even worse increases considerably (Oliver et al., 2017). The right approach is to have human operators focus more on decision-making than on systems management. In summary:
● Reliability can only be achieved holistically, by combining reliably designed systems with reliable operations through human-machine interfaces that consider the particularities of such systems.
● All defenses built to prevent failures may have flaws. It is crucial to avoid those flaws (the holes in the cheese) aligning and providing a trajectory to failure.
● Automation is beneficial for optimizing data-driven repetitive tasks and reducing human error, but the possibility of running things manually must always remain an alternative, and operators must be trained to remember how to do so.
# References
Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775-779. 10.1016/0005-1098(83)90046-8
Belie, R. (2002). Non-Technical Barriers to Multidisciplinary Optimization in the Aerospace Community. In Proceedings. 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta, GA.
Brooks, F. P. (1995). The mythical man-month: Essays on software engineering. (20th anniversary ed.). Addison-Wesley.
DeMarco, T., & Lister, T. R. (1999). Peopleware: Productive Projects and Teams (2nd ed.). Dorset House Publishing Company.
Detert, J., Schroeder, R., & Mauriel, J. (2000). A Framework for Linking Culture and Improvement Initiatives in Organizations. Academy of Management Review, 25. 10.2307/259210
DiMaggio, P. (1997). Culture and Cognition. Annual Review of Sociology, 23(1), 263-287.
Faber, I., & Farr, J. V. (2019). Engineering Economics of Life Cycle Cost Analysis (1st ed.). CRC Press.
Goodman, J. (2005). Knowledge Capture and Management: Key to ensuring flight safety and mission success. In Proceedings AIAA Space 2005, Long Beach, CA.
Hackman, J., & Oldham, G. (1980). Work Redesign. Addison-Wesley.
Harris, D. (2001). Supporting Human Communication in Network-Based Systems Engineering. Systems Engineering, 4(3), 213–221.
Hart, P. (1991). Irving L. Janis' Victims of Groupthink. Political Psychology, 12(2), 247. 10.2307/3791464
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Miller, G. A. (1994). The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review, 101(2), 343-352.
NASA. (2007). Systems Engineering Handbook. NASA.
NASA. (2015). NASA Cost Estimating Handbook, v4.0.
Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House.
Oliver, N., Calvard, T., & Potočnik, K. (2017). Cognition, Technology, and Organizational Limits: Lessons from the Air France 447 Disaster. Organization Science, 28(4), 597-780. https://doi.org/10.1287/orsc.2017.1138
Oswald, A., Proto, E., & Sgroi, D. (2009). A new happiness equation: Wonder + happiness = improved productivity. Bulletin of the Warwick Economics Research Institute, 10(3).
Perlow, L. A. (1999). The Time Famine: Toward a Sociology of Work Time. Administrative Science Quarterly, 44-57. 10.2307/2667031
Ritchey, T. (1991). Analysis and synthesis: On scientific method – based on a study by bernhard riemann. Systems Research, 8(4), 21-41. https://doi.org/10.1002/sres.3850080402
Schein, E. H. (2004). Organizational Culture and Leadership (4th ed.). Jossey-Bass, San Francisco.
St. George, A. (2013, June). Leadership lessons from the Royal Navy: the Nelson touch. Naval Historical Society of Australia. https://www.navyhistory.org.au/leadership-lessons-from-the-royal-navy-the-nelson-touch/
Tennis Mind Game. (n.d.). Unforced Errors in Tennis - Are They Really Not Forced? Tennis Mind Game. https://www.tennismindgame.com/unforced-errors.html
Terrell, S. M. (2018). NASA Work Breakdown Structure (WBS) Handbook. NASA Marshall Space Flight Center Huntsville, AL, United States. 20180000844
Tuckman, B., & Jensen, M. (1977). Stages of Small-Group Development Revisited. Group and Organization Management, 2(4), 419–427.
Twomey Lamb, C. M. (2009). Collaborative Systems Thinking: An exploration of the mechanisms enabling team systems thinking. PhD Thesis at the Massachusetts Institute of Technology.
Weick, K. E., Sutcliffe, K. M., Obstfeld, D., & Straw, B. M. (1999). Organizing for high reliability: Processes of collective mindfulness (R. I. Sutton, Ed.). Research in organizational behavior, Elsevier Science/JAI Press., 21, 81-123.