# Knowledge Management in Systems Engineering

> [!cite]
> "The greatest enemy of knowledge is not ignorance; it is the illusion of knowledge." (Daniel J. Boorstin)

Organizations have the unique opportunity to define a Knowledge Management (KM) strategy early in their lifetime and avoid the pain of realizing its importance too late down the road. Small companies like startups grow a body of knowledge to gain the understanding needed to create successful technical systems, and usually such knowledge grows largely unstructured. Multidisciplinary projects are composed of people from different backgrounds, with different terminologies and different past experiences, which imposes challenges when it comes to sharing information throughout the design process. This chapter addresses methods and tools such as diagrammatic reasoning, Model-Based Systems Engineering, concept maps, knowledge graphs, and hierarchical breakdown structures, and how they can facilitate or complicate the design of space systems.

You may ask yourself: why a knowledge management chapter in a Systems Engineering book? Nobody has time for that! I happen to disagree, or at least I would like to contribute to changing that mindset. There is always time for knowledge management. But the main reason I included this chapter is that startups have the unique blessing (and curse) of being early-stage. Being early-stage means standing right at the _big bang_ of many things in the organization, including information and knowledge creation. Things only expand and get more complex with time, and so do information and knowledge. The main goal of this chapter is to raise awareness of how organizational knowledge is created, where it resides, and how to protect it. In the beginning, practically everything is a fuzzy cloud of loose concepts and words that need to be glued together according to some strategy.
Such a strategy may mutate multiple times, and this mutation will most of the time respond to better insight gained during the journey. Early-stage organizations must deal with the massive challenge of jointly building a product of reasonable complexity while growing the knowledge to achieve the _understanding_ needed to create the products they want to create. New product development always requires overcoming many learning curves. Creating a product does not guarantee market success _per se_. Since life is too short to develop products nobody wants, we must also learn how the audience of our products reacts to them and calibrate our actions towards maximizing their satisfaction (and, as a consequence, our profits). ==Selling anything is ultimately an endeavor of learning and adjusting.== The need for knowledge management is both tactical and strategic, and it covers a very wide spectrum that goes well beyond the technical knowledge needed to create a system: from understanding business models to creating the right culture, dealing with daily operations, and doing the accounting, there are learning curves everywhere, intertwined and highly coupled. Knowledge evolves with the life of an organization, and its creation and capture are not trivial. Fostering knowledge sharing through the interactions between its actors is essential to any practical implementation of knowledge management (Pasher & Ronen, 2011).

But what is knowledge? What is the difference between understanding something and knowledge about something? It is easy to assume that these two words have the same meaning, but there is some nuance distinguishing them. Understanding refers to a state beyond simply knowing a concept. For example, we can know about spacecraft, yet not have a comprehensive understanding of how they work. It is possible to know a concept without truly understanding it.
To design successful systems, it is not enough just to know a concept; we need the understanding that allows the organization to apply concepts and skills in a collaborative way to solve even the most unfamiliar problems. Gathering knowledge and building understanding takes time, and knowledge becomes a strategic asset for small organizations. Learning and understanding are not just the accumulation of knowledge but the capacity to discover new insights in changing contexts and environments. What is more, knowledge accumulated in unstructured ways across the organization creates risks because it becomes difficult to access. ==Organizational knowledge does not sit in files or documents. It resides in people's brains and, more importantly, in the social networks across the organization. Perhaps the biggest tragedy of tech startups going bankrupt (after the obvious job losses) is how the organization's knowledge network vanishes.== There are vast amounts of ontologically unstructured information in organizations, even in mature ones. For startups, understandably, knowledge management comes much later in their hierarchy of needs; when you are burning money, your problems are elsewhere. This is a pity, since startups have the unique chance of defining a knowledge management strategy at probably the best time possible: when things are still small and complexity has not yet skyrocketed. Minding knowledge management when there are 4 people is a different story than when there are 400.

Knowledge must be constructed, i.e., grown over time, from basic terminology up to complex concepts; a solid common "language" across the organization can act as a foundation to organize and structure knowledge. Different engineering disciplines manage different jargons and vocabularies. During meetings, far more often than it should happen, you have probably gotten the impression that two colleagues were using the same word with different meanings.
The same words mean different things to different people. We all have our internal dictionaries, so to speak, and we carry them everywhere we go; such dictionaries are populated with words we have collected throughout our lives and are shaped by our education and, largely, our life experiences. When we engage in exchanges during our professional practice, we have our dictionaries open in our heads while we try to follow what is being discussed. This mechanism is quite automatic, but the mechanism to double-check whether our dictionary states the same as our colleague's is not so automatic, it seems. Frequently, decisions are taken based on these types of misunderstandings or, no less seriously, on partial understandings.

Using dictionaries as an analogy requires a bit more elaboration to illustrate the problem I am trying to describe here. A dictionary is an alphabetically ordered list of words with some listed meanings. How are those meanings defined, and who defines them? How is it decreed that a word means something and not something else? For a start, dictionaries are created by historians. To add a word to a dictionary, the historian (in the role of the editor) spends a fair amount of time reading literature from the period in which the word seems to have been most frequently used. After this extensive reading, all the occurrences of the word are collected, and a context around them is built. The multiple occurrences are classified into a small subset of different meanings, and then the editor proceeds to write down those definitions. The editor cannot be influenced by what he thinks a given word ought to mean. The writing of a dictionary, therefore, is not a task of setting up authoritative statements about the "true meanings" of words, but a task of recording, to the best of one's ability, what various words have meant to authors in the distant or immediate past.
Realizing, then, that a dictionary is a historical work, we should read the dictionary thus: "The word _mother_ has most frequently been used in the past among English-speaking people to indicate a female parent." From this, we can safely infer, "If that is how it has been used, that is what it _probably_ means in the sentence I am trying to understand." This is what we normally do, of course; after we look up a word in the dictionary, we re-examine the context to see if the definition fits. A dictionary definition, therefore, is an invaluable guide to interpretation. Words do not have a single "correct meaning"; they apply to groups of similar situations, which might be called areas of meaning. It is for definition in terms of areas of meaning that a dictionary is useful. In each use of any word, we examine the context and the extensional events denoted (if possible) to discover the point intended within the area of meaning (Hayakawa, 1948).

While we grow our internal _dictionaries_, we assign meaning to words according to different abstract situations, contexts, actions, and feelings, but we also establish relationships between them. A dictionary says nothing about how words relate to each other; it is just an array of loose terms, if you will. So what we carry is not, strictly speaking, only a dictionary, but a sort of _schema_ (Pankin, 2003); schemas are like recipes we form in our brains for interpreting information, and they are not only affected by our own past experiences but also shaped by culture. Self-schema is a term used to describe the knowledge we accumulate about ourselves by interacting with the natural world and with other human beings, which in turn influences our behavior towards others and our motivations. Because information about the self is continually coming into the system as a result of experience and social interaction, the self-schema is constantly evolving over the life span (Lemme, 2006).
The schema concept was introduced by British psychologist Sir Frederic Bartlett (1886–1969). Bartlett studied how people remembered folktales and observed that when recall was inaccurate, the missing information was replaced by familiar information. He observed that people included inferences that went beyond the information given in the original tale. Bartlett proposed that people have _schemas_, unconscious mental structures or recipes that represent an individual's generic knowledge about the world. It is through schemas that old knowledge influences new information. Bartlett demonstrated the uniqueness of our schemas as well: no two people will repeat a story they have heard in exactly the same way. The way we create those recipes is different for each one of us, even under the exact same stimuli. When we communicate with someone, we are telling each other a combination of what we have felt and learned in the past. Even with a high degree of present-moment awareness, how we communicate is often a reflection of the information we have previously consumed.

The entities that are members of a schema are not tightly attached to it. For example, a chair is a concept in our minds that we know very well (because we happened to understand a while ago what chairs are, their function, and so on). But the chair concept is a member of many different schemas. If you go to a restaurant, you expect the tables to have chairs (your restaurant schema tells you so). In the same way, you do not expect to find a chair in your refrigerator (your fridge schema tells you so). As you can see, the chair concept is "on the loose", and we connect it to different schemas depending on the context or the situation; membership is fluid. On the other hand, the systems we design and build are quite rigid containers, like boxes inside boxes or Russian dolls. What resides inside one box cannot be inside some other box; membership is rather strict.
The way our brains work and the things we build are structured differently. But how do our inner schemas connect and relate to the schemas of others? Can organizations grow their collective schemas? Can schemas affect the way an organization collectively learns? They play a part:

- Schemas influence what we pay attention to: we are more likely to pay attention to things that fit our current schemas.
- Schemas impact how fast we can learn: we are better prepared to incorporate new knowledge if it fits with our existing worldviews.
- Schemas affect the way we assimilate new information: we tend to alter or distort new information to make it fit our existing schemas. We tend to bend facts in our own service.

## Ontologies

As engineers, we all tend to work around a domain or area, according to our education or what our professional career has dictated for us. Each domain, whatever it is, revolves around a certain set of information that makes the domain understandable and tractable: concepts, properties, categories, relationships. If you want to perform in that domain, you need to have at least some grasp of them. The technical term for this is ontology. It is a fancy, somewhat philosophical term; if you google it you can easily end up reading about metaphysics, but make sure you turn back if you do: when we say "ontology" here, we mean a computational ontology rather than a philosophical one. Traditionally, the term ontology has been defined as the philosophical study of what things exist, but in recent years it has been used as a computational artifact in computer-based applications where knowledge representation and management are required. In that sense, it has the meaning of a standardized terminological framework in terms of which information is organized.
In short, an ontology encompasses a representation, formal naming, and definition of the categories, properties, and relations between the concepts, data, and entities that substantiate one, many, or all domains of discourse. Ontologies are all around us. Product management is an ontological activity by definition: think about how products are usually listed in trees, showing how features and attributes define the way such products relate, or how subfamilies are described. Another concrete example is software engineering. The object-oriented programming paradigm has deep ontological roots, since every object-oriented program models reality by means of classes and the way they relate, along with the attributes, properties, features, characteristics, or parameters that objects and classes can have. It often goes unnoticed, but the object-oriented paradigm is very ontological; it must capture and ultimately execute the behavior of the underlying concepts and entities involved with a particular domain, problem, or need. Engineering in general involves and connects multiple fields and domains, and every field creates its own schemas and ontologies to tame complexity and organize information into data and knowledge. As new schemas are made, their use hopefully improves problem-solving within that domain. This means that every field tends to create its own terminology and jargon. Organizations easily end up dealing with a network of dissimilar domain-specific schemas around a project, and often those go undocumented. You have surely witnessed a meeting where, for example, a software person presents something to a technically heterogeneous audience. While she uses very specific software jargon and acronyms, you can feel how nobody outside software understands a thing about what is going on, but they are probably too shy to ask.
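To make the ontological nature of object-oriented code concrete, here is a minimal sketch; all class names and attributes are hypothetical, chosen only to show how classes encode categories, inheritance encodes "is-a" relations, and attributes encode properties:

```python
# A minimal, hypothetical sketch: classes as categories, inheritance as
# an "is-a" relation, attributes as properties of the concept.

class Spacecraft:
    """A general category in our domain of discourse."""
    def __init__(self, name: str, mass_kg: float):
        self.name = name            # property of the concept
        self.mass_kg = mass_kg      # property of the concept
        self.payloads = []          # compositional ("has-a") relation

class EarthObservationSatellite(Spacecraft):
    """A specialization: "is-a" Spacecraft, with extra properties."""
    def __init__(self, name: str, mass_kg: float, resolution_m: float):
        super().__init__(name, mass_kg)
        self.resolution_m = resolution_m

sat = EarthObservationSatellite("Demo-1", 120.0, 4.5)
assert isinstance(sat, Spacecraft)  # the categorical relation is machine-checkable
```

The point is not this particular hierarchy but that the program itself carries a small ontology: the categories, their properties, and their relations are explicit and checkable.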
Years ago, I was in one of those meetings, and after an hour of slides about software architecture and whatnot, one person from the audience raised his hand and asked:

— What does API stand for?

## Concept Maps

A concept map, or conceptual diagram, is a diagram that depicts suggested relationships between concepts. It is a graphical tool used to organize and structure knowledge. A concept map typically represents ideas and information as boxes or circles, which connect with labeled arrows in a downward-branching hierarchical structure. The relationships between concepts are articulated in _linking words_ or _linking phrases_ written on the connecting lines, such as "causes", "requires", or "contributes to". We define a concept as a perceived regularity in events or objects, or records of events or objects, designated by a label. The label for most concepts is a word, although sometimes symbols such as + or % are used, and sometimes more than one word. Propositions are statements about some object or event in the universe, either naturally occurring or constructed. Propositions contain two or more concepts connected using linking words or phrases to form a meaningful statement; sometimes these are called semantic units, or units of meaning. Another characteristic of concept maps is that the concepts are represented in a hierarchical fashion, with the most inclusive, most general concepts at the top of the map and the more specific, less general concepts arranged hierarchically below. The hierarchical structure for a particular domain of knowledge also depends on the context in which that knowledge is being applied or considered. Therefore, it is best to construct concept maps with reference to some question we seek to answer, which we have called a focus question.
The concept map may pertain to some situation or event that we are trying to understand through the organization of knowledge in the form of a concept map, thus providing the context for the concept map (Novak & Cañas, 2008). Structuring large bodies of knowledge requires an orderly sequence of iterations between working memory and long-term memory as new knowledge is being received and processed (Anderson, 1992). One of the reasons concept mapping is so useful for the facilitation of meaningful learning is that it serves as a kind of template or scaffold to help to organize knowledge and to structure it, even though the structure must be built up piece by piece with small units of interacting concept and propositional frameworks (Novak & Cañas, 2008). In learning to construct a concept map, it is important to begin with a domain of knowledge that is very familiar to the person constructing the map. Since concept map structures are dependent on the context in which they will be used, it is best to identify a segment of a text, a laboratory or field activity, or a problem or question that one is trying to understand. This creates a context that will help to determine the hierarchical structure of the concept map. It is also helpful to select a limited domain of knowledge for the first concept maps. A good way to define the context for a concept map is to construct a Focus Question, that is, a question that clearly specifies the problem or issue the concept map should help to resolve. Every concept map responds to a focus question, and a good focus question can lead to a much richer concept map. When learning to construct concept maps, learners tend to deviate from the focus question and build a concept map that may be related to the domain, but which does not answer the question. It is often stated that the first step to learning about something is to ask the right questions. 
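The propositions described above (concept, linking phrase, concept) lend themselves to a very simple data representation. A small sketch, with hypothetical concepts and linking phrases, shows how a concept map can be stored and queried as a set of triples:

```python
# A concept map stored as concept–linking-phrase–concept triples.
# All concepts and linking phrases here are hypothetical examples.

propositions = [
    ("Organization", "builds", "Spacecraft"),
    ("Spacecraft", "generates", "Raw data"),
    ("Services", "are built on", "Raw data"),
    ("Customers", "buy", "Services"),
]

def outgoing(concept):
    """All propositions that start at the given concept."""
    return [(s, link, o) for (s, link, o) in propositions if s == concept]

for s, link, o in outgoing("Spacecraft"):
    print(f"{s} --[{link}]--> {o}")   # Spacecraft --[generates]--> Raw data
```

This is the same subject-predicate-object shape that knowledge graphs use, which is why concept maps translate so naturally into them later on.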
To start a concept map, it is usually useful to lay out the loose concepts in what is called a "parking lot". This way, we can observe the boxes without links and start analyzing how they relate according to the context we want to describe. For example, in Fig. 3.5, we laid out the four typical concepts in a space organization:

![[Pasted image 20250215213013.png]]

Figure 3.5 - Concept maps start with loose concepts in a _parking lot_

The way these four things connect greatly depends on the business proposition. One try could be:

![[Pasted image 20250215213031.png]]

Figure 3.6 - A first attempt to relate the concepts laid out before

A different approach could be:

![[Pasted image 20250215213049.png]]

Figure 3.7 - Another way of linking the four concepts

It quickly appears that the number of concepts laid out might not be enough for a meaningful explanation of what we are trying to describe (the description is too coarse). So, let's add some more concepts while keeping the original ones in a distinctive green color for awareness.

![[Pasted image 20250215213116.png]]

Figure 3.8 - Extending the concept map for a richer description

From the figure above, some knowledge can already be extracted:

- In this organization, there seem to be three main products: spacecraft, raw data, and services on top of the raw data. This is not explicitly said, but one can infer those are products since that is what the organization sells.
- Raw data and services can only exist if a spacecraft exists, since the spacecraft generates the raw data.
- There is no mention of whether the organization designs and builds the spacecraft; it is left unspecified.
- There is no mention of whether the organization operates the spacecraft; it is left unspecified.

It will be discussed further on, with Knowledge Graphs, how the missing information can be interpreted in different ways.
Note how a single link changes the scenario considerably:

![[Pasted image 20250215213144.png]]

Figure 3.9 - Concept map where customers operate the spacecraft

The red link in Fig. 3.9 states that "Customers operate the Spacecraft". See the difference if we move the link to the Organization (Fig. 3.10):

![[Pasted image 20250215213158.png]]

Figure 3.10 - Concept map where the organization operates the spacecraft

There are multiple ways of explaining the same thing, and concept maps are no exception. For example, we could add some more entities for the sake of clarity, like a more explicit "Product" entity, and make the link between Customers and Products explicit with the relationship "consume". Diagrams must be readable and tidy. A good rule of thumb is not to add "common sense" to the layout, i.e., concepts or relationships that are in some way obvious.

![[Pasted image 20250215213218.png]]

Figure 3.11 - Rearranging the concept map and making the Product entity explicit

Concept maps are great diagrammatic tools for early-stage organizations where ideas are still fluid. Something to consider is that concept maps are not explicitly hierarchical; some hierarchy can be added to them by means of the right links, for example "is a" or "is composed of" links. Concept maps help solidify concepts that might not yet be fully understood across the organization; they turn implicit assumptions into explicit statements. Concept maps can spark discussions and debate: if they do, it surely indicates a concept needs rework. At the very early stages of an organization, the documentation should mostly be composed of concept maps!

> [!info]
> Concept maps in this section were produced using CmapTools version 6.04. CmapTools is free software from the Institute for Human and Machine Cognition ([http://cmap.ihmc.us/](http://cmap.ihmc.us/)), and it runs on Linux, Windows and Mac.
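The single-link difference between Fig. 3.9 and Fig. 3.10 can also be made explicit if each map is treated as a set of triples, as sketched below (concept and link names are illustrative): the two maps differ by exactly one proposition each way.

```python
# Two concept maps as sets of triples; they share everything except
# who operates the spacecraft (cf. Fig. 3.9 vs. Fig. 3.10).

base = {
    ("Organization", "builds", "Spacecraft"),
    ("Spacecraft", "generates", "Raw data"),
    ("Customers", "buy", "Raw data"),
}
map_fig_3_9 = base | {("Customers", "operate", "Spacecraft")}
map_fig_3_10 = base | {("Organization", "operates", "Spacecraft")}

print(map_fig_3_9 - map_fig_3_10)   # {('Customers', 'operate', 'Spacecraft')}
print(map_fig_3_10 - map_fig_3_9)   # {('Organization', 'operates', 'Spacecraft')}
```

A diff of two concept maps is a compact way to review exactly what changed in the shared understanding between one iteration and the next.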
## Hierarchical Structures

Breakdown structures and hierarchies are related concepts, but they are not synonyms. When we break things down (work, a system, a product, an organization), we define a set of elements or components a certain object is made of, but deciding how those components relate to each other is another story. In hierarchies lies one of the main "secrets" of successful (or dreadful) engineering. Hierarchies are all around us, and they are a trap in some way. Any design activity (such as engineering) requires arranging the constituent elements of a system together, but it also requires arranging the data and information generated in the process. Hierarchies are inescapable when it comes to arranging anything non-atomic. It is not enough to identify the building blocks; we also need to identify how those building blocks will interact in order to provide the overall function of the object. Hierarchies are always shaped by context. System design is about multiple hierarchical arrangements interacting together, notably the system-of-interest's internal arrangement and the organization's. This has been the subject of research under the name of the _mirroring hypothesis_. In its simplest form, the mirroring hypothesis suggests that the organizational patterns of a development project, such as communication links, geographic collocation, and team and firm membership, correspond to the technical patterns of dependency in the system under development (Colfer & Baldwin, 2010). According to the hypothesis, independent, dispersed contributors develop largely modular designs (more on modularity later), while richly interacting, collocated contributors develop highly integral designs. Yet many development projects do not conform to the mirroring hypothesis.
The relevant literature about this mirroring effect is scattered across several fields in management, economics, and engineering, and there are significant differences in how the hypothesis is interpreted in the different streams of literature. The effect was also captured by Conway's law (an adage more than a law): "Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations" (Conway, 1968). Other research notes that organizations are boundedly rational and, hence, their knowledge and information-processing structure comes to mirror the internal structure of the product they are designing (Henderson & Clark, 1990). These theories propose that the way people group and interact while creating a technical artifact is reflected not only in the design and architecture synthesized but also in how information and data are structured. This implicitly suggests a unidirectional influence from the team structure to the system under design. But what about the opposite direction? Can the system architecture also exert forces that reshape the way designers group together? In any case, we are talking about two hierarchies (the group's and the system's) interacting with each other.

When it comes to assigning relationships between entities in a hierarchical structure, there are several options. Let's use a typical example: folders on a hard disk drive. Every time we start a new folder structure on an external drive, we probably struggle with the same problem:

- Should you create a parent folder called Software and then create a folder called Flight Computer inside it?
- Or should you create a folder called Flight Computer and then create a folder called Software inside it?

Both seem to work. But which is right? Which is best?
The idea of categorical hierarchies was that there were "parent" types that were more general, and that child types were more specialized versions of those types, growing even more specialized as we make our way down the chain. But if a parent and child can arbitrarily switch places, then clearly something is odd with this approach. The reason both approaches can actually work depends on perspective, or what some Systems Engineering literature calls viewpoints. Systemic thinking, at the analytical level, is a construction we use to understand something that otherwise is too complex for us. We can choose to observe a complex object using different systemic lenses when we analyze it, depending on our needs. But once we synthesize the system, the structure becomes more rigid and less malleable. Following the example above, once you choose the folder structure, that structure will impose constraints on the way you can use it. A file in a specific folder is right there in that folder and not somewhere else. The file sits there because of the structure chosen, and so does the way that file relates to the rest.

Let's continue the discussion on what types of hierarchies there are before we elaborate on how this impacts knowledge management. There are two main types of relationships in hierarchies: classification/categorical and compositional/containment. Categorical hierarchies are very much related to knowledge management, relating ideas, concepts, or things into types or classes. Categorical hierarchies express specialization from a broader type to more specialized sub-types (or generalization from the sub-types upstream). Another way of relating things or ideas together is in a containment hierarchy, by identifying their parts or aspects: a containment hierarchy is an ordering of the parts that make up a system; the system is "composed" of these parts. In a nutshell: containment hierarchies describe structural arrangement (what's inside what).
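The two relationship types map directly onto familiar programming constructs: inheritance for categorical ("is-a") hierarchies and composition for containment ("is-part-of") hierarchies. A minimal sketch, with hypothetical names:

```python
# Categorical vs. containment hierarchies in code. Names are illustrative.

class Computer:
    pass

class FlightComputer(Computer):
    """Categorical: a FlightComputer IS A (more specialized) Computer."""
    pass

class Satellite:
    """Containment: a Satellite is COMPOSED OF its parts."""
    def __init__(self):
        self.parts = [FlightComputer(), "power subsystem", "radio"]

sat = Satellite()
assert isinstance(sat.parts[0], Computer)   # a categorical fact about one part
assert not isinstance(sat, Computer)        # the container is not its part's type
```

Note how the two relations do not compete: the same object participates in a categorical hierarchy (through its type) and in a containment hierarchy (through the whole it belongs to) at the same time.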
Containment hierarchies also exhibit what are called emergent properties: behaviors and functions that are not seen at the subpart level. An aircraft, for example, can only show its emergent properties when assembled altogether; you can't fly home sitting on a turbofan engine. If you look at the real world, you'll find containment hierarchies everywhere, while categorical hierarchies have few real-world analogies. All artificial systems out there are basically containers: your car, your laptop, a chip, a smartphone, or a spacecraft; if you look around, you will quickly realize that all the things we engineer are just different types of containers. This is the main reason why categorical hierarchies seldom work for representing physical reality; things around us are mostly boxes inside boxes. But containment relationships are somewhat rigid: if something belongs to one box, it cannot belong to some other box, because the structure dictates it so. System structure does not allow many "perspectives" about it once it's established. The main challenge lies in the fact that structuring our work, information, and thinking around system design is a dynamic activity; things can group differently according to the situation and the context. Imagine we want to run a preflight subsystem review of the Flight Computer, so we want to list everything related to it for the review. In that case, we need the "Software" folder to appear under "Flight Computer", along with anything else the Flight Computer contains, since we want to review it as a whole. But what if the review is a System-Level Software Review? In that case, we want to list all the software bits and parts of the system, and in that context "Flight Computer" appears under "Software". The way engineering teams organize is frequently discipline-oriented.
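One way around the rigidity of a single folder tree is to stop encoding context into the tree at all: tag each artifact with several facets and generate each review's "folder" as a query. A hypothetical sketch (file names and tags are invented):

```python
# Faceted tagging instead of one rigid folder tree: the same artifact can
# appear in the Flight Computer review AND in the software review.
# All artifact names and tags are hypothetical.

artifacts = {
    "fc_firmware.c":    {"flight computer", "software"},
    "fc_schematic.pdf": {"flight computer", "electrical"},
    "gs_pipeline.py":   {"ground segment", "software"},
}

def view(facet):
    """Everything relevant to one review, regardless of any folder layout."""
    return sorted(name for name, tags in artifacts.items() if facet in tags)

print(view("flight computer"))  # the preflight subsystem review
print(view("software"))         # the system-level software review
```

Each review gets its own listing from the same underlying data, so no single hierarchy has to be declared the "right" one.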
We as professionals specialize in things or topics by means of our education: we know engineering, or accounting, or law, or medicine, and so on. And inside those professions, we also tend to group with people of our same "flavor" or specialty. So, when companies are small, it is understandable that software engineers hang around together and mechanical engineers do the same. These are _categories_ in the taxonomical sense of the word. It should then be no surprise that young organizations, in the absence of an active command, naturally group categorically. Ironically, this is the seed of what is called functional organizational structure. It is very typical to find R&D groups structured in such categorical (functional) ways, keeping these reservoirs of people subdivided by what they know how to do, with one functional leader supervising them. Categorical hierarchies and function are at odds, yet for some reason these structures are called functional.

There is an even more interesting caveat. We analyzed in a previous section that the WBS, like all the other breakdown structures, is a function of time. The system has a totally unknown structure (architecture) in the beginning; the design process is supposed to come up with it. The architecture evolves from very coarse blocks into more detailed subsystems and parts as the design progresses. Until the component structure and requirements are well understood, it can be challenging to allocate a team to a component, since the overall understanding is poor. System components will require attention from many different functional areas (a flight computer will require software resources, electrical, mechanical, thermal, and so on), and if the functional resources remain scattered at their home bases, the mapping between component and team is broken, since the design work for such a component will undoubtedly require collaboration spanning multiple groups, boundaries, and jurisdictions. Responsibility and authority are severely affected.
A way to tackle this is to define a matrix organization. This approach is not new: many companies in the aerospace industry have been moving or co-locating specialized resources to work on specific subsystems and projects, in an attempt to ease the friction between system and group hierarchies. Specialized/functional engineers move from their home bases to different projects or subsystems, and once these projects are done, the engineers come back to their functional homeland until the cycle restarts. This approach has several known drawbacks and some hidden ones. Duality of authority is probably the classic one, where practitioners report to both their functional and project/subsystem leads.
![[Pasted image 20250215213408.png]]
Figure 3.12 - Dual authority dilemma of matrix orgs
Lack of org design consistency creates hybrid organizations. In these situations, a particular department chooses one org layout (say, functional), whereas another one chooses matrix. This is perhaps the worst-case scenario possible, since subsystems and components will most likely require input from across the organization one way or another. Without “big system” level harmonization of org philosophies, the friction between hierarchies increases and becomes a source of constant maintenance. For multidisciplinary endeavors, a sound approach is to implement Integrated Product Teams; these differ from the matrix organization in that they eliminate the “functional” home base. The engineering department has a pool of _capabilities_ (which are managed and assured by executive leadership), and those capabilities are allocated to components and subsystem projects as needs arise and as the architecture evolves. More about IPTs below.
## Projects, Products, Systems and Titles
A classic dilemma for organizations that develop systems is how to better organize the teams as the system evolves from whiteboard to production.
The fact that the engineering world is prone to both evangelism and terminology vagueness does not make things much easier. One of the latest confusions involves the words _product_ and _ownership_, which are used loosely and for a wide variety of different things, often sparking antagonisms. One of those antagonisms is “function vs product”. This one is not up for debate in this book, as I have probably insisted enough: to avoid damaging friction between the social structure and the system structure, the work breakdown structure must dictate how teams are formed according to the system’s hierarchy. More accurately, the system’s hierarchy dictates how the WBS forms, and from there it spills over to the teams’ structures. Failing to do so will reveal the _hierarchy problem_ in full power and make everything go downhill from there. Functions must only exist to be fed to multi-disciplinary groups which own some element of the WBS. Another antagonism is “product versus project”, or “product versus system”. Hours and hours are spent arguing which one is better, projects or products. Just google “product manager vs project manager” to see a big dog chasing its own tail. If you ask a project evangelist, of course, “everything is a project”; ask a product guy, and he will say everything is a product. For some reason, projects have lately been flagged as old-fashioned and too _waterfall-ey_. This only reveals an alarming level of ignorance, since the term project, by itself, does not specify in which way tasks are handled over time. But there are more infectious misconceptions about projects that are parroted by many. One of the most surprising definitions found about projects is that they are “temporary, one-off endeavors” with a “clear beginning and an end”, whereas products are more _permanent_ endeavors that constantly _evolve_ to fulfill customers’ needs.
Another hilarious one: “A project manager’s job is to ensure that the project is completed on time and on budget. The product manager’s job is to ensure that the right product gets built.” As if a project manager would go and build the wrong thing as long as it is on time and within budget. Again, the dictionary says:
●      **Product**: _“a thing that is the result of an action or process.”_
●      **Project**: _“a set of coordination activities to organize and plan a collaborative enterprise to realize a system or product.”_
In fact, as soon as we are dragged into the discussion, it quickly becomes clear that the whole argument is more about role naming and role evangelism than about the actual definitions of project and product. The definitions are pretty clear: a product is an outcome; a project is a process. We stated before that the design process expands to infinity if we allow it to. Products are, then, arbitrary snapshots of our designs, at arbitrary times, which are interesting enough for a customer to pay for. The concept is straightforward: to create a good product we must organize the multi-disciplinary design process into a project which will handle dependencies, budget and scope, following a system architecture defined by systems engineering. The budget is overseen by the project but also acts as a constraint (or feedback loop) for systems engineering, constraining the solution space. From this budget, a cost for the product is obtained, which in turn defines the price the customer will end up paying (adding profit, margins, etc.). Graphically:
![[Pasted image 20250215213534.png]]
Fig 3.13 - Relationship between Product, Project, System and the multi-disciplinary design process
By Product, in this context, we refer to _something_ technical that a customer needs; this customer is usually an external entity willing to engage in a commercial relationship in order to acquire (pay for) such a product.
In contrast, any technical artifact the organization needs internally for its own use is more precisely a system, not a product. Calling it a product is misleading, since it strips away the technical and multidisciplinary nature it surely carries. A pizza and a Boeing 787 are both products, but only one is a system. A server for ingesting and storing a company's spacecraft payload data is a subsystem, not a product; it is a product from the manufacturer’s perspective, but not from the end-user company’s perspective. There is no connection between the term product and the internal complexity of the object it represents. The term _system_, on the other hand, quickly brings to mind internal composition and some assumed minimum level of complexity. Systems Engineering oversees specifying and prescribing the design process, while the project, as an overarching process, coordinates the collaborative effort toward a desired outcome. The technical products we create always need an underlying project. On the other hand, projects do not always create products as outputs. In the technology domain, a project can generate a disparate set of outcomes: systems, studies, investigations, etc. And in life in general, even more disparate outcomes: I recently half-destroyed a wall at home to hang a bookshelf. It was a project for me, from end to end. A long, painful project. I had to measure things, cut the wood, get the fixation hardware, and execute. I cannot call that bookshelf hanging from the wall a product, can I? In short, the term product is just an abstraction that makes the outcomes of a project a bit easier to manage, but like any abstraction it hides details which might be good to keep explicit when it comes to creating complex systems. Picture a fancy dish out of a kitchen in a luxurious restaurant, where the _Chef_ (the Systems Engineer) works with the _Sous chef_ (project manager) to coordinate the _Chefs de Partie_ (mechanics, electronics, software) to realize her recipe.
Restaurants don’t usually call their fancy dishes _products_; such a term can obscure the delicate handcrafted nature of cuisine. The term Product carries connotations that can be misleading if not properly used. Having said this, and taking into account that products could be called _plumbuses_ and nothing would really change, all the rest of the Product vs Project dilemma (which is not a dilemma, as we just discussed) is irrelevant _title engineering_: the urge to pay more attention to titles and roles than to the actual work needed. These types of pointless arguments are a function of startup size, simply because the probability of an evangelist coming through the door increases with headcount. Such arguments turn simple discussions into long philosophical debates and steer the whole point away from what is important (such as minimizing time to market, improving market fit, increasing reliability, etc.) toward peripheral questions about what people are called and who _owns_ what (there seems to be a growing interest in assigning _ownership_ lately in engineering design). The product vs project argument is nothing else than the result of proliferating _role engineering_, which stems from role-centered methodologies. These methodologies propose an inverted approach: a. define the roles first; b. then discuss how these roles fit the work; c. if the work does not fit the roles, change the work to make it fit the roles. Roles are so rigidly specified that the main objective becomes complying with the role scope. This is one long-lasting effect of transplanting methods and roles from single-discipline domains (such as software, e.g. Scrum) into multi-disciplinary enterprises. The fact that multiple disciplines need to be orchestrated to create reliable technical products is what differentiates such enterprises from single-discipline engineering such as software. This is not, per se, a critique of Scrum.
The only observation here is that the system architecture and the work breakdown must always drive the way teams are formed, selecting the best from each discipline and creating tightly knit integrated teams. The task of assigning roles, and the names we put on our badges and email signatures, must be an effect, not a cause. We must know what to do before knowing who will do it and what role this person will have, and not vice versa. Product and Project are complementary concepts, not antagonistic; even symbiotic, if you want: you cannot expect any results (i.e., a good product as an outcome) if you do not engage in a collaborative enterprise to make it happen (i.e., a project). Can projects and products coexist then? Mind-blowing answer: yes, they can, and they must. To consider the word “project” outdated or “too waterfall” is just evangelistic parroting, or just blissful ignorance.
## Product Identification, Management and Modularization
We may be witnessing a sort of product _beatlemania_ these days, which perhaps stems from the fact that product is a very slippery term in tech. Those industries where the product is easily identifiable and has clear boundaries, like pizza, enjoy the great advantage of clarity and unambiguity. Many other industries, such as space, have a hard time identifying what the product _is_ and what the product _is not_. Everything can be a product if you are vague enough. The definition of a product and its boundaries can change dozens of times throughout the life of an early-stage enterprise. If we stay away from the hysterical “freestyle” use of the word _Product_ discussed in the previous section and stick to its actual meaning (_Product_ as something a customer wants to pay for and an instrument to make profits), then Product Management is an extremely necessary activity, also for early-stage organizations where products may still be fluid.
Having a solid understanding across the board of what the product is, and iterating through different options and attributes, is nothing less than essential from day one. Once prototypes or proofs of concept reach the minimum-viable-product (MVP) stage and can be sold to customers, even as early adopters, the organization needs to align on what to call it, what variants to offer, what attributes are open for customers to modify, and so on. Product Management in tech is a combinatorial and experimental activity where multiple combinations of constituent parts can lead to multiple different “flavors” or variants of a product. We said before that pizza is perhaps the paradigm of simple product management, so we can use it as an example in a concept map (Fig. 3.14). It is hard to imagine the owners of a pizza place sitting down to brainstorm what the product is. The product is pizza. People are willing to give their money to put that in their stomachs and feel satisfied. There is no product identification needed; things are crystal clear.
![[Pasted image 20250215213717.png]]
Figure 3.14 - Product Management for pizza
Product Management for a pizza place is about combinations. The pizza people must sit down and define what variants will be offered and what will not. As depicted in the concept map above, different variants of pizza are defined by combining a finite set of ingredients. The variants must be distinctively named (as an overarching abstraction over the ingredient list). The variants modify or impact product attributes, such as cost and weight. Product Management is also about constraining what will *not* be offered. For example, the pizza place could decide not to deliver pizzas and only work with table service. Some attributes can be “optimized” according to customer needs, and Product Management must define which degrees of freedom the products will have to adjust those attributes.
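The combinatorial nature of the pizza example can be made concrete with a short sketch. All ingredient names, costs and weights below are invented for illustration; the point is only that variants are enumerated combinations, each combination drives attributes (cost, weight), and Product Management constrains which combinations are actually offered.

```python
# Hypothetical sketch of Product Management as a combinatorial activity.
# Figures are made up for illustration.
from itertools import combinations

# ingredient -> (cost in currency units, weight in grams)
ingredients = {
    "mozzarella": (1.5, 120),
    "mushrooms":  (1.0, 80),
    "ham":        (2.0, 100),
    "olives":     (0.8, 40),
}

BASE_COST, BASE_WEIGHT = 4.0, 300  # dough and sauce

def attributes(combo):
    """Each variant (combination of ingredients) impacts the product attributes."""
    cost = BASE_COST + sum(ingredients[i][0] for i in combo)
    weight = BASE_WEIGHT + sum(ingredients[i][1] for i in combo)
    return round(cost, 2), weight

# Enumerate every two-ingredient variant the pizza place *could* offer...
catalog = {combo: attributes(combo) for combo in combinations(ingredients, 2)}

# ...then constrain the offering: Product Management also decides what
# will NOT be offered (here, anything heavier than 480 g).
offered = {combo: attrs for combo, attrs in catalog.items() if attrs[1] <= 480}
```

Even with four ingredients and pairs only, six candidate variants appear; with real product lines the space explodes, which is exactly why constraining it is part of the job.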
Different customers will have different optimization wishes. Some customers may want to get the cheapest pizza possible (optimizing for minimum cost), whereas other customers may have different criteria. Product Management must define constraints and limits for all these combinations. The pizza concept map is pretty much universal for any other product, and it clearly depicts the highly combinatorial nature of Product Management, which runs the risk of getting lost in the sea of combinations of “ingredients” and variants, going in circles and getting awfully abstract. The product management activity must go incrementally and iterate, testing products with early adopters and adjusting the product line accordingly. Leaving the pizza analogy behind, the following concept map captures the Product Management relationships for a company which sells systems. In orange are attributes which can be altered depending on the selection of different on-board elements.
![[Pasted image 20250215213741.png]]
Fig 3.15 - Concept map depicting the relationship between Product, attributes and variants
From the figure above, some resemblance to the pizza concept map can be seen; in fact, this concept map is the pizza one, modified. Some things to extract from the graph: spacecraft attributes to be optimized can be many. A spacecraft can be optimized for a specific application (optical, radar, etc.), for mass (to make the launch cheaper), etc. One of the key decisions that must be considered in Product Management, in close collaboration with Systems Engineering, is modularization. Modularization is the process of subdividing a product into a collection of distinct modules or building blocks with well-defined interfaces. A modular architecture is a collection of building blocks that are used to deliver an evolving family of products.
A modular system architecture divides the whole system into individual modules that can be developed independently and then plugged together. New products can emerge from adding or re-combining modules, as when adjusting to new market demands, new technological developments, or the phasing out of declining product options. A modular system can be imagined as the opposite of an all-in-one, one-size-fits-all, or “uniquely designed for a specific situation, but not adjustable” approach to product design (Fuchs & Golenhofen, 2019).
## Ownership
Ownership is another buzzword that has climbed quite high in the rankings of the most parroted words around the workplace. Scrum greatly contributed to the spread of this _ownership_ fashion by explicitly introducing the term “owner” in a role: the Product Owner. The product owner represents the product's stakeholders and is the voice of the customer. The Product Owner is responsible for delivering good business results (Rubin, 2013). The product owner is accountable for the product’s progress and for maximizing the value that the team delivers (McGreal & Jocham, 2018). The product owner defines the product in customer-centric terms (typically [user stories](https://en.wikipedia.org/wiki/User_story)), adds them to the Product Backlog, and prioritizes them based on importance and dependencies (Morris, 2017). The product owner is a multifaceted role that unites the authority and responsibility traditionally scattered across separate roles, including the customer, the product manager, and the project manager. Its specific shape is context-sensitive: it depends on the nature of the product, the stage of the product life cycle, and the size of the project, among other factors. For example, the product owner responsible for a new product consisting of software, hardware, and mechanics will need different competencies than one who is leading the effort to enhance a web application.
Similarly, a product owner working with a large Scrum project will require different skills than one collaborating with only one or two teams (Pichler, 2010). The Product Owner has reached deity status in Scrum circles, with sort of magical superpowers; nowadays, the Product Owner is usually the answer to every question. The thing is, the highly centralized nature of the Product Owner role collides with the naturally collaborative DNA of multi-disciplinary engineering design. (Rubin, 2013), for instance, states that “the Product Owner is the empowered central point of product leadership. He is the single authority responsible for deciding which features and functionality to build and the order in which to build them. The product owner maintains and communicates to all other participants a clear vision of what the Scrum team is trying to achieve. As such, the product owner is responsible for the overall success of the solution being developed or maintained.” There seem to be as many Product Owner definitions as there are authors. Not a lot is written about the relationship between Product Owners and the system hierarchy. (Pichler, 2010), for example, refers to “Product owner hierarchies”, and states that these vary from a small team of product owners with a chief product owner to a complex structure with several levels of collaborating product owners. This somehow indicates that there can be a group or team of Product Owners coordinated by a chief; how this hierarchy maps to the system hierarchy is not specified. Having reviewed the software-originated Product Owner concept: how is ownership interpreted in Systems Engineering and multidisciplinary design contexts? As we engineer our systems, we become somewhat entangled with them. Since systems are the result of our intellects, our ideas and thinking are reflected in them, and our intellectual liability grows as the system’s life cycle progresses.
We might not own the _property_ the systems we design are made of (that is owned by the company), but we _own_ the intellectual backstory behind them. All the decision-making, the trade-offs, the mistakes: all of it is wired in our brains. Perhaps not in a very vivid manner, but that information we do possess (i.e., own), and it creates ties between us and the system. That is the sense of ownership in Systems Engineering. As perpetrators of our designs, we become the ultimate source of truth for them throughout their life cycle. This intellectual liability makes us responsible for what we devise with our ingenuity: we know the design best; probably nobody else does at the same depth. The knowledge about the design resides so deeply in us that even though some of it can be transferred, the fact that we designed it puts us in a privileged position in terms of understanding. This intellectual advantage does not come for free: we are the ones who must respond if the component or element, for whatever reason, does not work as expected. In short:
1.     Responsibility for the space systems we design as engineers ends when they re-enter the atmosphere and evaporate.
This does not change if we work on systems that do not go to space. Still, the things that sit on the ground are enabling and/or supporting assets in orbit, and because of that the statement above remains valid, but can be slightly re-written as:
2.     Responsibility for the things we create ends when the systems our work supports re-enter the atmosphere and evaporate.
3.     Put slightly differently: design responsibility ends when the system or object we have created has officially reached its irreversible end-of-life.
And the same goes if we work on something which goes on top of something that enables or supports something else that goes to space.
That’s the beauty of the systemic approach: our work is a building block in a bigger architecture whose total output is more than the sum of its parts. This means that our work, wherever it sits in the hierarchy, is no less important than anything else; if our (sub)system or component does not work as expected, the whole system underperforms. Responsibility and accountability only fully decay and disappear when the system is completely retired.
## Integrated Product Teams (IPT)
Perhaps the bridge between the Product Owner concept referred to before and the sense of _ownership_ in Systems Engineering is the Integrated Product Team. Here, the term “Product” refers to anything multidisciplinary that must be created, either for external or internal use. As the architecture of the system forms and blooms, and the work is broken down accordingly, small multidisciplinary teams can be incrementally allocated to synthesize different blocks of the architecture. This is what is called an Integrated Product Team (IPT). IPTs are small multidisciplinary groups populated with resources from different disciplines/functional areas, who are tasked to run studies, investigate, or design and build. Although IPTs originate in the “big space” or “classic” Systems Engineering we described some chapters above, that does not mean the concept cannot be adapted to small organizations. IPTs do not have to be created up front, and they probably cannot be when the organization is very small, but they can be incrementally allocated as the system architecture matures. In fact, every org begins with an IPT, which we can call the “stem” IPT, and this mother-of-all-IPTs spawns children IPTs as the architecture spawns its children subsystems. This way, as the organization grows in both design and social complexity, the way people group and the system architecture remain aligned, because the system architecture drives it.
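The stem-IPT idea can be sketched as a simple tree traversal. This is a minimal, hypothetical illustration (the subsystem names are invented): the team structure is *derived* from the system architecture, so the two hierarchies cannot drift apart.

```python
# Hypothetical sketch: deriving the IPT structure from the system
# architecture, so children IPTs appear as children subsystems appear.
# Subsystem names are invented for illustration.

architecture = {
    "Spacecraft": {
        "Flight Computer": {},
        "Power Subsystem": {"Solar Array": {}, "Battery": {}},
    }
}

def spawn_ipts(node, children, parent=None):
    """Mirror the architecture tree into (IPT, parent IPT) pairs."""
    ipts = [(f"IPT {node}", f"IPT {parent}" if parent else None)]
    for child, grandchildren in children.items():
        ipts += spawn_ipts(child, grandchildren, parent=node)
    return ipts

# The "stem" IPT is the root; every block of the hierarchy gets its team.
teams = spawn_ipts("Spacecraft", architecture["Spacecraft"])
```

Re-running the derivation as the architecture matures keeps the social structure following the system structure, rather than the other way around.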
Conway’s law (again, not a law, but just an observation) has been largely misinterpreted as something we have to live with, as if the system architecture were hopelessly a victim of how people group. What is more, Mel Conway’s observation is more than fifty years old: it dates from 1968! The context of systems and software design in 1968 was different than today; today, teams communicate in ways that were unimaginable back then. Perhaps teams were sending letters to communicate decisions in 1968; no wonder things were growing apart and architectures were reflecting these “insular” team effects. A software architecture from 1968 probably has little to do with a software architecture from 2020. This is something every organization must understand very well: there are degrees of freedom on both ends. Companies must stop relying on very outdated observations as ultimate truths. There is coupling between org and system, that is beyond discussion (since one creates the other, the coupling is more than obvious), but both ends can be adjusted accordingly. It can be true that some group of people have been working together for some time and it would make sense to keep them together since they seem to bond well. The IPT definition process must take all these topics into account. An IPT is not just an ensemble of disciplines, no matter the affinity between them. An IPT needs to work as a solid unit or, as (DeMarco & Lister, 1999) put it, to _jell_, and for that, all the sociological factors a healthy team needs must be taken into account. Systems Engineering must ensure there is integrity not only between IPTs across the organization but also within each IPT itself. But IPTs must be autonomous and self-contained, and Systems Engineering should refrain from micromanaging them. Hence, the leadership of an IPT must be carefully populated with individuals who are systems thinkers but also great communicators, coaches and leaders.
IPTs must be teams and not cliques (DeMarco & Lister, 1999), and this also includes the stem, precursor IPT. Given that some types of organizations (like startups) are usually populated by young people, it is of great importance that the members of the stem IPT are trained in leadership and systems thinking and refine their soft skills, since they will become the IPT leaders of tomorrow. The IPT is fully responsible for a specific subsystem or component of the system hierarchy, throughout the component’s entire life cycle. For a spacecraft, this also means assembly, integration and verification (AIV). The IPT is therefore responsible not only for designing the subsystem, but also for putting it together and making sure it works as specified before integrating it into the system. Once the component is integrated, the system-level AIV team takes over and remains in close contact with the IPT for a seamless integration process. To ensure the verification of the subsystem is reasonably independent (an IPT verifying its own work could be prone to biases), the system AIV team runs its own independent verifications and ensures the results match what the IPT stated. Making the IPT intellectually responsible for the design process and life cycle of its component of the hierarchy decentralizes decision-making. Of course, the IPT cannot go great distances without collaborating and agreeing with other IPTs and with the Systems Engineer. By being self-contained, IPTs help break the dual-command problem typically found in matrix organizations, where leadership is split between the functions and the “verticals” (i.e., components/subsystems). The IPT maps directly to a block in the system hierarchy and straight to the WBS, creating a unit of work that Project Management can gauge and supervise as a solid entity with clear boundaries and faces, streamlining monitoring and resource allocation.
Ideally, an IPT exists as long as the subsystem exists, up to operations and retirement (remember, the intellectual responsibility burns in the atmosphere with the re-entering spacecraft). In small organizations like startups, keeping an IPT together once the subsystem has been launched can be prohibitively expensive, since the IPT would basically turn into a support team. Here, two things are important: a. the IPT has surely accumulated a great deal of knowledge about the subsystem delivered, and the organization must ensure this knowledge is secured before disbanding the team; b. if the subsystem delivered will be re-spun, or a new iteration or evolution is planned in the foreseeable future, it is better to keep the IPT formed instead of eventually forming a new one. Systems Engineering is essential in IPT reuse as the system design evolves. IPTs can also be allocated to research & development, advanced concepts or innovation projects by creating _skunkworks_-flavored IPTs with loose requirements and an approach closer to what some research calls _improvisation_ principles. Improvisation (understood as improvisation in music and art) is defined as a creative act composed without deep prior thought; it includes creative collaboration, supports spontaneity, and learns through trial and error (Gerber, 2007). As mentioned before, autonomy is key for IPTs. They should not be looking to higher management for permission in their decision-making. They should, however, properly communicate their actions and decisions to others: mainly the other IPTs directly interfacing with them, but also the system-level integration team and, of course, project management. Autonomy must not be mistaken for a license for isolation. An isolated IPT is a red flag that Systems Engineering must be able to detect in time. The value of integrated teams is that they can focus on their local problems, but the whole must always remain greater than the sum of the parts.
This means the true value of IPTs lies in their interactions, not in their individual contributions. A point deserving special attention is that system complexity can eventually spawn too many IPTs running in parallel if the architecture has too many components; for small R&D teams, this can make tracking all the concurrent work challenging and create conflicts, with people multiplexed across multiple IPTs at the same time, breaking their self-contained nature and mission. As can be seen, the interaction between the system under design and the way we group people around it is unavoidable: overly granular architectures can be very difficult to handle from a project perspective. IPTs are also meant to be aligned with the system life cycle and structure, so they need to keep the interfaces to other subsystems healthy and streamline communication. Finally, a word on IPTs and multitasking. We have said _ad nauseam_ that IPTs must be responsible for a block of the architecture and realize that block end-to-end, from the whiteboard to operations. The truth is that small orgs cannot afford to have one multidisciplinary team fully dedicated to one block of the architecture and absolutely nothing else. So, IPT members usually time-multiplex their tasks (they contribute to more than one subsystem, project or component at a time), breaking the boundaries or “self-containment” IPTs are supposed to have. This problem is inherent to small organizations that start to take on more tasks than their current teams can support; a natural reaction for every growing business. In startups and even in scaleups, the federated “one team does one and only one thing” approach is a great idea which makes management much leaner, but it requires a critical mass of workforce that is seldom available. In these situations, technical complexity directly drives headcount; in simple words: it does not easily scale.
Managers react to this reality in two ways: a. they acknowledge that ballooning the org is a problem in itself, and they find ways of organizing the time-multiplexing of tasks for the time being while they incrementally grow the team to meet the needs; b. they ignore it and start bloating the org with people to reach the critical mass needed for the federated approach to work, which creates not only a problematic transient (lots of people coming in and trying to get up to speed, draining resources from the existing staff), but also a steady-state issue: huge teams, lots of overhead, and a cultural shake-up. Moreover, massive spikes in team size and an uneven distribution of management for that growth foster the creation of small empires of power inside the organization, which can become problematic in their own right; i.e., small organizations spawn inside the organization.
## The Pitfalls of Model Based Systems Engineering (MBSE)
“All models are wrong, but some are useful,” as the saying goes. All thinking is model based. Thinking involves not only using models to make predictions, but also creating models about the world. The word _model_ is probably one of the least accurately used terms in engineering, so let’s start by precisely defining it. What are models after all? It is our need to understand the world around us that drives us to create simpler representations of a reality that appears too complex from the outset. A model is always a simplified version of something whose complexity we do not need to fully grasp in order to achieve something with it or to reasonably understand it. We devise scaled versions of things we cannot fully evaluate (like scaled physical mockups of buildings), visual representations of things we cannot see (atoms, electrons, neutrons), or a simplified equivalent of anything that we don’t need to represent to its full extent, which helps us solve a problem. All models lack some level of fidelity with respect to the real thing.
A model stops being a model if it is as detailed as the thing it represents. One of the foundations of modeling is abstraction. I'll restrain myself from the urge to paste Wikipedia's definition of abstraction here. In short, we all abstract ourselves from _something_ in our everyday lives, at all times, without even thinking about it: the world around us is too complex to be taken literally, so we simplify it ferociously. This enables us to focus on the most important aspects of a problem or situation. This is not new; we all knew it, but yes, it has a name. Modeling and abstraction go hand in hand. Okay, but what is it to _model_? To model is to create simpler versions of things for the sake of gaining understanding. But are _modeling_ and _abstraction_ synonyms, then? Not quite. Modeling is an activity which usually yields a conceptual representation we can use for a specific goal, whereas abstraction is the mechanism we use to decide which details to set aside and which to include. Abstraction is also a design decision to hide things from users that we consider they do not need to see. Drawing the line on where to stop modeling is not trivial, as one can easily end up with a model as complex as the real thing. This is depicted in Bonini's paradox[[4]](#_ftn4): overly detailed models are as useful as a 1:1 scale map of a territory. Look up Jorge Luis Borges' "On the Exactitude of Science" to see Bonini's paradox written in a beautiful way: _... In that Empire, the Art of Cartography reached such Perfection that the map of a single Province occupied a whole City, and the map of the Empire a whole Province. In the course of time, these Disproportionate Maps were found wanting, and the Colleges of Cartographers elevated a Map of the Empire that was of the same scale as the Empire and coincided with it point for point.
Less Fond of the Study of Cartography, Subsequent Generations understood that such an expanded Map was Useless, and not without Irreverence they abandoned it to the Inclemencies of the Sun and of Winters. In the deserts of the West, tattered Ruins of the Map still abide, inhabited by Animals and Beggars; in the whole Country there is no other relic of the Disciplines of Geography._

We all create models, every single day of our lives. We cannot escape models, since our brains are modeling machines. When we write a grocery list, we have in our hands or on our phones an artifact which is a simplification of the actual collection of real items we will end up carrying in our basket. We leave many details out of this artifact: colors of packaging, weights, even brands for many items (an apple is just an apple, after all), and so on. Many things we don't remotely care about. The grocery list is a model in the strict sense of the word. Now, **is** the list the model? It is not. Just as the word “apple” on your grocery list does not mean the real thing, the models we create, either in our heads or in some sort of diagram, are symbols that represent the real thing. And this representation is an incomplete description of the real thing. So, let’s get to the point on this: 1.     Everybody models, every day. Then, engineers also model. No matter the discipline, engineers cope with complexity by creating simpler symbolic representations of whatever they're trying to create. These symbolic representations might or might not correlate physically to the actual thing. A CAD model or a solid for a mechanical engineer is a physically representative model: the shapes in the representation will match the shapes on the real thing. A schematic diagram for an electrical engineer is a model with loose physical correspondence, but a PCB design is a physically representative model. Plain block diagrams drawn on whiteboards are also models, with or without physical resemblance.
Block diagrams are the communication tool of choice when ideas need to be conveyed rapidly. In all these cases (mechanical, electrical, etc.), the modeling activity is fully integrated into the design process, and the practice is adopted without question: nobody remotely sane would design a PCB without knowing what the building blocks are, or decide that on the go, in the same way nobody would start stacking bricks for a house without a prior idea of what needs to be built. In all these cases, the models are part of the design _ritual_ and you can't circumvent them to get to production; I call this _single path_ design: the way things are done follows a very clear line among designers, regardless of experience or taste. They will go from A to B the same way, tooling aside. The implementation effort progresses hand in hand with this modeling activity; the more you model, the closer you get to production. Modeling is an inescapable part of the design process. If you ask electrical or mechanical engineers whether they are aware they're modeling, they'll probably say no. It is so embedded in the way they work that they don't think about it; they have internalized it. For software, the situation seems to be different. The software industry has a very distinctive approach to modeling, where the modeling activity stands out, with its own modeling languages, UML being by far the most popular. But there is continuous debate in the software world about the need for modeling as a pre-step before implementation, and also about the language itself. The main criticism of UML is that it does not provide the right vocabulary for expressing ideas about software architecture. On the one hand, it is too vague to describe ideas. On the other hand, it is so excessively meticulous that it is often easier to implement something directly in an object-oriented language than to draw a UML diagram for it (Frolov, 2018).
Try now to find any discussion about whether schematic diagrams are necessary for electronics to be implemented. Something is different in software. Unlike other disciplines, software chose to make modeling very explicit in the design process and tried to formalize its practice. This seems to have created an alternative path for getting from A to B, which breaks the _single path_ design paradigm I described before. And we engineers take the path of least impedance, just as current does. In software, implementation does not progress as formal (UML) modeling progresses; implementation starts once formal modeling ends. Someone could argue that explicit modeling with diagrams should be considered part of implementation, but that again is open to interpretation. Strictly speaking, you can implement software without drawing a single diagram, while you cannot implement electronics without drawing a single diagram. The key is the formality of the software modeling. Since we cannot help but model, being the humans we are, software engineers also model (if you can call them humans), regardless of whether they use a formal language or not. Is source code a _model_ then? Well, it turns out it is. Models do not need to be boxes and arrows. There’s this phrase that goes: “the diagram is not the model”, which I have always found confusing. The way I understand it, the phrase should be: 1.      Models do not need to be diagrams A model _can_ be a diagram but does not necessarily _need_ to be. A piece of code is a model, and its syntax provides the artifacts we have at hand to model what we want to model. Someone could argue UML is easier to understand and follow than source code. But is it? If you really, really want to know how a piece of software works, you will sooner or later end up browsing source code.
The UML diagrams can be out of sync with the source code, since their integrity could have been broken a long time ago without anything crashing; software will continue running, totally unaware of the status of the diagrams. In an ideal world, only source code completely generated from UML models would preserve that integrity. But experience shows almost the inverse situation: most of the UML diagrams I have witnessed were created _from_ existing source code. Modeling with UML can reach the point where it becomes easier to implement something directly in an object-oriented language than to draw a UML diagram for it (Frolov, 2018). If it is easier to describe the problem in source code and the diagram proves to be “optional”, what is the value of spending time drawing the diagram? Years ago, I had the (bad) luck of attending a course on UML for embedded systems. The course started okay, but suddenly the instructors began mixing diagrams together (adding, for example, state machines to sequence diagrams, while also taking many liberties in some class diagrams). I remember pointing that out and even getting a bit triggered about it. The whole point of having consistent semantics in diagrams IS being strict about it. If we are not strict, then it just becomes a regular block diagram, and all the postulates I listed above apply. So far, this has been mostly about software, but how does all this relate to Systems Engineering? Systems Engineering deals with multidisciplinary problems, encompassing all the disciplines needed to design and build a complex technical system. Explicit modeling is no stranger to Systems Engineering. MBSE (Model-Based Systems Engineering) is the umbrella term for all-things-modeling in the SE world. MBSE was the hype maybe around 2010 or 2012; everybody was trying to do it. You could feel that if you were not doing MBSE, you were just an outsider; you were old-fashioned.
Small and medium-sized companies were trying to imitate big companies and adopt MBSE methodologies; they were buying training and books and trying to be cool. MBSE promised a paradigm change: from a document-centric approach to a model-centric one, where engineers could revolve around "the model" and generate documents from it if needed. In the MBSE context, the model is a collection of diagrammatic (graphical) descriptions of both the structure and the behavior of a system, expressed in some language. For creating this model, MBSE chose a graphical language, and the choice was, interestingly, SysML, which is sort of a systemic flavor of UML.

First problem: SysML _inherits_ (pun intended) all the problems UML carries: a semantics that is accessible only to a few.

Second problem: a multidisciplinary system is made by multidisciplinary teams, which are forced to understand a language that may be largely unfamiliar to them; hard for non-software engineers. It is at least arguable how MBSE can claim to streamline communication by choosing a language only a small subset of actors will be able to interpret.

Third problem: tooling. Most MBSE tools for creating nice diagrams are proprietary. There are some open-source options here and there, but they don't seem to match their commercial counterparts. There are books, courses, training; there is quite an industry around MBSE.

Fourth problem: formalizing modeling in Systems Engineering creates an alternative, optional path which teams might (and will) avoid taking; engineers will always choose the path of least impedance. If they can go on and design without MBSE, then why bother? Because management asks for it?

Fifth problem: MBSE incentivizes a sort of analysis paralysis[[5]](#_ftn5): you could model forever before starting to get anything done.
There are multiple cases around of MBSE initiatives where people have been modeling for years, piling up dozens of complicated diagrams, without anything real ever being created, coming very close to the paradoxical Bonini's boundary in terms of detail. The MBSE loop goes like this: the longer you model, the higher the fidelity of the model, which (in theory) should increase understanding for all stakeholders involved. In practice, the longer you model, the more complex the model gets, the less understandable it becomes, and the longer it takes for actual engineering implementation to start.

Sixth problem: MBSE diagrams are seldom “digestible” by external tools that could extract knowledge from them. They become isolated files in a folder, requiring human eyes and a brain to interpret them. Hence, their contribution to knowledge management is limited[[6]](#_ftn6).

MBSE is a formal modeling activity which belongs to a path engineers will not take unless they naturally feel they must. Perhaps the important thing to notice about MBSE is this: all Systems Engineering is model-based, regardless of what diagrams or syntax we use. We are modeling machines from the moment we wake up in the morning. Modeling is inescapable; we all do it and will continue doing it at the engineering domain levels in very established ways, without explicitly thinking about it. Systems Engineering in any organization should be aware of this and adopt an approach of accompanying and facilitating this activity, without trying to force a methodology which might only create an optional, suboptimal path in the design process, adding overhead and increasing costs. In short:

1.     All Engineering is Model-Based, thus:

2.     All Systems Engineering is Model-Based Systems Engineering

## Knowledge Graphs

We discussed concept maps a few sections back. But concept maps have to be drawn manually and kept manually updated as concepts and knowledge evolve.
Concept maps are born and meant to be consumed by our human brains. Startups must capture and share knowledge at all costs; a manual approach such as a concept map is great compared to nothing. But the best way of capturing knowledge is in formats that computers can process, so they can help us connect dots we might not be able to see with our bare eyes. This section discusses more computer-“friendly” ways of capturing knowledge. A concept map is nothing but a graph, and graphs are representations computers can work with more easily. In a knowledge graph, nodes represent entities and edges represent relationships between entities. A knowledge graph is a representation of data and data relationships, used to model which entities and concepts are present in a particular domain and how these entities relate to each other. A knowledge graph can be built manually or by using automatic information-extraction methods on some source text (for example, Wikipedia). Given a knowledge graph, statistical methods can be used to expand and complete it by inferring missing facts. Graphs are a diagrammatic representation of ontologies. Every node of such a graph stands for a concept. A concept is an entity or unit one can think about. Essentially, two nodes connected by a relationship form a fact. Ontologies can be represented using directed graphs; some say, even more specifically, that they are Directed Acyclic Graphs (DAGs). If you need a concrete example of DAGs, note that a [PERT](https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique) diagram is just another case of one. So, existing edges indicate known facts. What about missing edges? There are two possibilities:

- Closed World Assumption (CWA): non-existing edges indicate false relationships. For example, since there is no isA edge from Chair to Animal, we infer that a chair is not an animal.
- Open World Assumption (OWA): non-existing edges simply represent unknowns. Since there is no _starredIn_ edge from Joaquin Phoenix to Joker, we _don’t know_ whether Joaquin Phoenix starred in Joker or not.

KGs typically include information in hierarchical ways (“_Joaquin Phoenix is an actor, which is a person, which is a living thing_”) and constraints (“_a person can only marry another person, not a thing_”). Details and hierarchies can be left implicit; i.e., you must have a proper amount of insight in order to discard concepts that are “obvious”. But what is obvious to you may not be obvious to someone else. Bonini’s paradox kicks in again: you can describe a concept, or a set of concepts and how they relate, to ridiculous levels of detail, but soon you will realize the description is as complex as the actual thing you’re describing; it stops making sense. A knowledge graph is also a simplification, and as such it leaves certain details aside for clarity. The thing is, again: what is obvious to you, and discardable, can be a key concept for someone else, and vice versa. And here lies one of the main problems with knowledge management in organizations: if concepts are not captured and formalized in some way, a great deal of meaning is left to interpretation. There are two sides to this issue. One relates to concepts that are somehow personal to me, for example my own understanding of the world as such, which might (and will) differ from my colleagues’, and that is fine. But when it comes to concepts that are collective and needed for the organization to progress, for example something as simple as “what is the company’s main product?”, meaning cannot be left to interpretation. The nodes of a knowledge graph (or ontology) are connected by different kinds of links. One important kind of link is called the IS-A or isA link. The nodes and isA links together form a Rooted Directed Acyclic Graph (Rooted DAG).
Acyclic means that if you start at one node and move away from it following an IS-A link, you can never return to that node, even if you follow many IS-A links. _Rooted_ means that there is one single "highest node" called the Root. All other nodes are connected to the Root by one IS-A link or a chain of several IS-A links. In the image above, the node "hardware" has many inbound "isA" links, meaning that this node is a root, or a parent category. In software terms, a root is usually an abstract class, meaning that there is no "instance" of that concept; it serves as a parent category for real instances of things. You can feel how similar all this is to an object-oriented approach; even the diagrams are quite similar. But here we're talking about conceptualization of knowledge as a result, not about software. Now, if an isA link or edge points from a concept X to a concept Y, that means that every real-world thing that can be called an X can also be called a Y. In other words, every X isA Y. Let's try it: every flight computer isA [piece of] hardware, and every hardware isA[n] entity (square brackets added for grammar). Makes sense. To be fully consistent with my own writing, I mentioned [previously](https://www.linkedin.com/pulse/hierarchies-conflict-ignacio-chechile/) that categorical hierarchies are a sort of "anti-pattern" and of little usefulness for describing reality, and I stand by my point when it comes to System Design because, as I said, it is better to keep hierarchies mirrored with the physical System hierarchy (a system description is, in effect, an ontology). But when it comes to specifying knowledge graphs, categorical hierarchies are somewhat inescapable, because that’s how our brains construct knowledge; categorizing helps us gain knowledge about something.
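The rooted isA hierarchy just described can be sketched in a few lines of plain Python. This is only a minimal illustration: the concept names follow the flight-computer example from the text, while the extra nodes (`obc_enclosure`, `software`) and the helper functions are my own hypothetical additions, not part of any real ontology tool.

```python
# Each concept maps to its isA parent; the root ("entity") has no entry.
# Node names beyond flight_computer/hardware/entity are illustrative.
ISA = {
    "flight_computer": "hardware",   # every flight computer isA hardware
    "obc_enclosure":   "hardware",   # hypothetical extra hardware item
    "hardware":        "entity",     # every hardware isA entity
    "software":        "entity",
}

def ancestors(concept):
    """Walk the isA chain upward; acyclicity guarantees termination."""
    chain = []
    while concept in ISA:
        concept = ISA[concept]
        chain.append(concept)
    return chain

def root(concept):
    """The 'rooted' property: every chain of isA links ends at the Root."""
    chain = ancestors(concept)
    return chain[-1] if chain else concept

def is_a(x, y):
    """True if x isA y, directly or through a chain of isA links."""
    return y in ancestors(x)

print(ancestors("flight_computer"))       # ['hardware', 'entity']
print(is_a("flight_computer", "entity"))  # True
print(root("software"))                   # entity
```

Because the graph is acyclic, the upward walk always terminates, and because it is rooted, every walk ends at the same node: the root category.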
Taking the case of the flight computer again, we kind of *know* (because of the naming chosen and because of our past experiences) that it is a piece of hardware, and we could just skip specifying the "hardware" root/parent; but for someone who lacks context or experience, that relationship can be useful. When it comes to defining ontologies, there has to be some reasonable tailoring of the hierarchy according to the potential users/audience, and some over-specification (if it doesn't clutter the graph) does not do much harm. Nodes can also contain information attached to them. This information includes attributes, relationships, and rules (or axioms); axioms express universal truths about concepts and are attached to concepts (example: if X is the husband of Y, then Y is the wife of X). An attribute is like a simple number or variable that contains additional information about a concept. I chose, though, a slightly different approach and defined an _attribute_ as an entity as well; others choose attributes to be just numbers attached to a node. For me, an ontology must be explicit enough about attributes and their hierarchies too. In the example above, _mass_ is an attribute _had_ by both the hardware and the vehicle. There is some disagreement about whether only the attributes are inherited or also their values. But if values are inherited, they may be "overridden" by attributes at lower nodes. Example: I could assign the value "1" to the attribute _mass_ at the entity _Hardware_ in my ontology. However, the child entity called Flight Computer (which isA Hardware) could specify the value "2" for its mass, and this would be the value used. Most researchers say that semantic relationships are inherited downward. Some ontologies have "blocking mechanisms" to stop inheritance. I will come back to this when we see the code. A relationship (or an edge) is a link (arrow) that points from one concept to another concept.
It expresses how the two concepts relate to each other. The name of a relationship is normally written next to the relationship link. Other links/edges (relationships) are commonly called "semantic relationships", to avoid confusion with isA relationships. Our human judgment in defining ontologies is important because data can be described in an infinite number of ways. Machines are still unable to consider the broader context and make the appropriate decisions. Taxonomies and ontologies provide a perspective from which to observe and manipulate the data. If the element of interest is not being considered, then the knowledge graph won’t provide any insight. Choosing the right perspective is how value is created. Organizations do not spend much time analyzing and capturing what they know. And the key here is that what organizations know is not a monolithic body of knowledge but a mixture of tacit and explicit knowledge. The great deal of “tacit” knowledge is very difficult to capture in frames or schemata. If an organization relies on a considerable amount of manual work (integrating a spacecraft is still a handcrafted endeavor), capturing and codifying that type of knowledge in any format is very troublesome; for example, go try to learn to play guitar by reading a book. Tacit knowledge can be transferred by mentoring and first-hand practice. I had a job many (many) years ago which consisted of repairing fax machines. The job was not great, but it was good pocket money while I was studying at university. Repairing is maybe too big a word for it. The task was rather simple: faxes could get dirty after some use, and dark marks could start showing up in the sent documents, so a clean-up was needed to fix the issue. I had never seen the inside of a fax machine before, so when I opened it (or rather, my mentor opened it for me) I was astonished: an intricate system composed of mirrors, plastic gears, and tiny motors revealed itself in front of my eyes.
The challenge was that one particular mirror was the cause of 99% of the problems, and to access that mirror and clean it with a cloth, half of the machine had to be torn apart. During my “training”, my mentor quickly disassembled it in front of me, expecting that I would remember all the screws, along with what was supposed to come out first and what was supposed to go next. Then, he wishfully sent me to the field to make customers happy. I still remember the first customer I visited: it was the private home of a journalist (or similar), and the guy was in a rush to have the fax fixed so he could send his work; there was (apparently) some deadline he needed to meet. Of course, he never met the deadline, because I never figured out how to reach the damn mirror. The guy hated me, and rightfully so. So, after that traumatic experience, I went back to my employer’s office, took a defective fax which had been lying around, sat down one afternoon, and assembled and disassembled the thing probably a hundred times (no joke). While I was doing that, I was taking notes (phones with cameras were not a thing yet), with rudimentary diagrams and flowcharts. I never failed to clean that mirror again. I had trained myself into it. After a short while, I didn’t need the notes anymore. Training is exactly that: knowledge development versus time. The fax machine experience made me think: what would it take to pick a random person off the street and get them to successfully clean that mirror right the first time, and all subsequent times? If I prepared the best instructions ever, and this random person were able to do it right the first time, would that mean the person _knows_ how to do it? I figured that because disassembling the fax basically implied unscrewing screws and de-snapping snap-fits (two things we probably already know how to do), disassembling and reassembling something like a fax could be turned into a sequence of steps.
If that sequence of steps were properly accompanied by photos and visual cues, it would not be crazy to think someone totally unaware of the task could nail it. Assembling and integrating a system involves a great deal of fax-machine-like parts and bits. These assemblies are aggregations of activities people (with the right background) already know how to do: screwing, gluing, torquing, snapping. It is possible, then, to capture and describe the way of tearing assemblies apart and/or putting them back together using step-by-step sequences, provided they are composed of familiar activities. Is this knowledge? Not quite. Being able to perform a task by following instructions is not knowledge. After all, that’s how the GPS in our phones works, with its “turn right / turn left” approach. Provided we know how to drive a car or walk, we can get to places we had no previous clue how to reach, just by following those simple instructions. In the same way, the famous Swedish furniture supplier can get you to assemble a cupboard even if you have never assembled one before. This is no different from how computers work: they execute instructions specified in some way (a language), while the underlying operations (adding, subtracting, multiplying) have been previously wired into them, in the sense that specific underlying mechanisms will be triggered to execute the operations according to those instructions. Does this mean computers “know” how to add numbers? Not really, since what the computer is doing is applying basic pre-established rules on binary numbers to achieve the result. When activities are composed of tasks which are unfamiliar or require precise physical operations, training takes considerably longer. Think about riding a bicycle or playing guitar; you cannot learn it from a book, or from a podcast. But robots can ride bikes and play guitar (there are plenty of example videos around). Do robots know all that? What is it to know something, then?
_Knowing_ involves comprehending in a way that allows inferences to be made, in order to adapt and apply that information in contexts different from the original one. Example: you could practice for two or three months and learn one song on the piano by furiously watching a tutorial. You could eventually get good at that one song. Does that mean you learned to play the piano? If someone invited you to a jam session, you would probably be the worst guest ever. Computers are great at doing specific things in specific contexts, but they are incapable of adapting to other situations. The bike-riding robot will surely fall on a bumpy road, unless its control system takes that into account by design; the guitar-playing robot couldn’t jam either. We only know something when we can adapt that knowledge to varying contexts and still obtain satisfactory results; otherwise, we are just following a recipe or executing instructions, as computers and robots do. We as engineers must adapt what we know from previous experiences and projects to the new contexts we encounter in our careers. We cannot apply rigid “fetch and retrieve” rules, for they will not work. Every new design scenario is different from the previous one. No two cases are equal. Even though computers cannot know and understand in the human sense of the word, they can be used to process the relationships, attributes, and properties of different types of entities, and to find patterns, unknown contradictions, and circular loops. Computers are great at rule-checking, so their value there is very high. It is surprising that organizations haven’t pervasively adopted the practice of capturing their concepts in graphs; new insight can be discovered by using the right set of tools. Early-stage projects, having the advantage of being young and small, should not let this opportunity pass.
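As a minimal sketch of the kind of mechanical rule-checking described above, the snippet below (assuming the illustrative flight-computer ontology from earlier; the function names and values are hypothetical, not from any real tool) detects circular isA loops and resolves inherited, overridable attributes such as the _mass_ example:

```python
# Illustrative isA hierarchy and attributes; values mirror the "mass"
# override example in the text (Hardware: 1, Flight Computer: 2).
ISA = {"flight_computer": "hardware", "hardware": "entity"}
ATTRS = {
    "hardware":        {"mass": 1},  # default mass for any hardware
    "flight_computer": {"mass": 2},  # overrides the inherited value
}

def has_cycle(isa):
    """Flag graphs where following isA links can return to a visited node."""
    for start in isa:
        seen, node = set(), start
        while node in isa:
            if node in seen:
                return True
            seen.add(node)
            node = isa[node]
    return False

def lookup(concept, attr):
    """Inherit attribute values down the isA chain; lower nodes win."""
    while concept is not None:
        if attr in ATTRS.get(concept, {}):
            return ATTRS[concept][attr]
        concept = ISA.get(concept)  # climb to the parent
    return None                     # attribute unknown anywhere in the chain

print(has_cycle(ISA))                     # False: a valid rooted DAG
print(lookup("flight_computer", "mass"))  # 2, the overridden value
print(lookup("hardware", "mass"))         # 1, the inherited default
```

Checks like these are tedious and error-prone for humans but trivial for a computer, which is exactly the point: once concepts live in a machine-readable graph, consistency can be verified automatically.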
Establishing the practice of eliciting concept maps and knowledge graphs from the main concepts related to what the organization is trying to accomplish, along with their relationships, and capturing all that in formats computers can process, is a great step toward growing a semantic information system. This way, the organization’s documentation can be semantically consistent across the board.

## The Brain of the Organization

While researching the topic that gives this section its name, I came across Stafford Beer again, whom I introduced before. I found he had written a book called “The Brain of the Firm”[[11]](#_ftn11). I have criticized cybernetics in earlier chapters for oversimplifying through its bias toward mechanistic metaphors (and I stand by that), so it was a bit disappointing to find a book on the topic treated in such a way. “The Brain of the Firm” created some impact a few decades ago, mostly in the seventies, including some bizarre ramifications[[12]](#_ftn12). Despite the eccentric and intricate approach taken by Beer in his theory (to this day, I still do not fully understand large parts of it; his diagrams are nothing short of cryptic), it is still thought-provoking. Is there such a thing as a brain-like function in the organizations we create? Is there any analogy between how our brains work and how organizations work? What is our function as individuals in a hypothetical “brain of the organization”? Beer’s work subdivides the brain-like functions into layers: from very low-level actions to high-level, more “intelligent” decision-making stages. Let’s try to use this framework, but in a slightly different way. Our brain, as a sensemaking unit, can only unleash its full functionality if it is connected to a set of peripherals that: a. feed it with information (sensory/sensors), and b. allow it to execute actions to modify (to some extent) the environment according to wishes or needs (motor/actuators).
This is a nervous system; or, more precisely, a sensorimotor nervous system. The main function of our nervous system is to arm us with the tools to navigate the world around us, come what may, and to provide us with an introspective assessment of ourselves. Now let’s use this analogy for an organization. The organization needs to navigate the reality it is immersed in (markets, competitors, financial context), and needs to be aware of its internal state (team dynamics, team commitment, goals, etc.). Organizations have to execute actions to alter those contexts according to their wishes and desires (policy-making, product lines, product variants, acquisitions, mergers, etc.). Then, it does not seem too far off to say organizations can functionally be represented as a collective sensorimotor nervous system. But analogies need some care. Since we are talking about people and not exactly cells (neurons), this analogy can sound a bit oversimplifying. Think about this for a moment: I am stating that a brain, as an analogy, is an oversimplification of an organization! In a brain, neurons communicate by a firing mechanism that is rather simple, at least compared to how we “fire” as members of a social network. Neurons fire according to thresholds set by chemical reactions that are triggered depending on certain stimuli. We, as “neurons” in this hypothetical organizational brain, have very complex firing mechanisms compared to a cell; we communicate in far richer ways. Our “firing thresholds” are defined by very intricate factors such as affinity, competition, insecurities, egos, and a long etcetera. Also, in our own individual nervous systems, decision-making is more or less centralized in our brain and all information flows into it, whereas in organizations decision-making can be more distributed across the “network of brains”; this also means the information channels are not all routed to one single decision-maker.
As in any network, a network of brains can show different topologies. This network is recursive, meaning that it is a network of networks of networks.

![[Pasted image 20250215213900.png]]

Figure 3.19 - Network topologies (credit: public domain)

It is naive to believe organizations follow only one topology, for example the “fully connected” topology (Fig. 3.19). In reality, a mixture of all topologies can be found, i.e., a hybrid topology. For example, organizationally, a “bus” topology represents a shared channel or forum where all actors can exchange information openly; nothing goes from node to node. Great for communication and awareness, but somewhat problematic for decision-making: who calls the shots? The star topology works in roughly the opposite way: one actor acts as a concentrator, and all other nodes connect to it; probably better for decision-making, but suboptimal for awareness and communication. The way sub-networks connect is not to be overlooked. For example, some individuals can act as _gateways_ between sub-networks (they connect two groups or departments, for example), and others act as “firewalls”, in the sense that everything to or from a group needs to go through them, and they filter out whatever information they consider not relevant for the collaboration. If the brain analogy holds, the question is: can the organizational sensorimotor nervous system show disorders, as our sensorimotor nervous systems can? Can organizations have Alzheimer’s, for example? Or amnesia? Or PTSD?

# References

Anderson, O. R. (1992). Some interrelationships between constructivist models of learning and current neurobiological theory, with implications for science education. _Journal of Research in Science Teaching_, _29_(10), 1037-1058.

Colfer, L., & Baldwin, C. (2010, February 18). _The Mirroring Hypothesis: Theory, Evidence and Exceptions_. HBS Working Knowledge.
[https://hbswk.hbs.edu/item/the-mirroring-hypothesis-theory-evidence-and-exceptions](https://hbswk.hbs.edu/item/the-mirroring-hypothesis-theory-evidence-and-exceptions)

Conway, M. (1968). _How Do Committees Invent?_ Mel Conway’s Home Page. [http://www.melconway.com/Home/Committees_Paper.html](http://www.melconway.com/Home/Committees_Paper.html)

DeMarco, T., & Lister, T. R. (1999). _Peopleware: Productive Projects and Teams_ (2nd ed.). Dorset House Publishing Company.

Frolov, V. (2018, June 20). _NoUML_. Medium. [https://medium.com/@volodymyrfrolov/nouml-afbb7f07f369](https://medium.com/@volodymyrfrolov/nouml-afbb7f07f369)

Fuchs, C., & Golenhofen, F. (2019). _Mastering Disruption and Innovation in Product Management: Connecting the Dots_. Springer. [https://doi.org/10.1007/978-3-319-93512-6](https://doi.org/10.1007/978-3-319-93512-6)

Gerber, E. (2007). Improvisation principles and techniques for design. _Conference on Human Factors in Computing Systems - Proceedings_, 1069-1072. https://doi.org/10.1145/1240624.1240786

Hayakawa, S. I. (1948). _Language in Thought and Action_. George Allen & Unwin Ltd, London.

Henderson, R., & Clark, K. (1990). Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms. _Administrative Science Quarterly_, _35_(1), 9-30.

Lemme, B. H. (2006). _Development in Adulthood_. Boston, MA: Pearson Education, Inc.

McGreal, D., & Jocham, R. (2018). _The Professional Product Owner: Leveraging Scrum as a Competitive Advantage_. Addison-Wesley Professional. ISBN 9780134686653

Morris, D. (2017). _Scrum: An Ideal Framework for Agile Projects_. In Easy Steps, Leamington Spa. ISBN 9781840787313

Novak, J. D., & Cañas, A. J. (2008). The Theory Underlying Concept Maps and How to Construct and Use Them. _Technical Report IHMC CmapTools 2006-01 Rev 01-2008, Florida Institute for Human and Machine Cognition_.
Available at: [http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMaps.pdf](http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMaps.pdf)

Pankin, J. (2003). _Schema Theory_. MIT. [http://web.mit.edu/pankin/www/Schema_Theory_and_Concept_Formation.pdf](http://web.mit.edu/pankin/www/Schema_Theory_and_Concept_Formation.pdf)

Pasher, E., & Ronen, T. (2011). _The Complete Guide to Knowledge Management: A Strategic Plan to Leverage Your Company’s Intellectual Capital_. Wiley. ISBN 9781118001400

Pichler, R. (2010). _Agile Product Management with Scrum: Creating Products that Customers Love_. Addison-Wesley. ISBN 0-321-60578-0

Rubin, K. (2013). _Essential Scrum: A Practical Guide to the Most Popular Agile Process_. Addison-Wesley. ISBN 978-0-13-704329-3

---

[[1]](#_ftnref1) Plumbus: Everyone has one at home, so there is no need to explain what a plumbus is.

[[2]](#_ftnref2) Here I borrow the word from botany. It is the main trunk of a plant, which eventually develops buds and shoots.

[[3]](#_ftnref3) A skunkworks project is a project developed by a relatively small and loosely structured group of people who research and develop a project primarily for the sake of radical innovation. The term originated with Lockheed’s World War II _Skunk Works_ project.

[[4]](#_ftnref4) https://en.wikipedia.org/wiki/Bonini%27s_paradox

[[5]](#_ftnref5) https://en.wikipedia.org/wiki/Analysis_paralysis

[[6]](#_ftnref6) Some initiatives, such as ESA’s Open Concurrent Design Tool (OCDT, https://ocdt.esa.int/), are trying to tackle this.

[[7]](#_ftnref7) https://networkx.github.io/

[[8]](#_ftnref8) https://matplotlib.org/

[[9]](#_ftnref9) https://www.pydev.org/

[[10]](#_ftnref10) Link to NetworkX repo

[[11]](#_ftnref11) Beer, S. (1995). _The Brain of the Firm_, 2nd Ed.

[[12]](#_ftnref12) https://en.wikipedia.org/wiki/Project_Cybersyn