# Modularity
A module is commonly defined as an independent *chunk* that is tightly coupled internally but only loosely coupled to the rest of the system.
We spoke before about how designing involves discretizing a need into more manageable *chunks* for better organizing and managing the task of synthesizing a system. Discretization is an essential part of engineering design. As we break down the system into its constituent parts, we create a network of entities whose relationships define, among other things, ownership and composition relationships. By creating this network, we provide a path for the global function to materialize.
The relevant functions needed for the system to perform well can be grouped into functional components. Functional components are, as the name indicates, function providers; they provide at least one function. Functional components are seldom atomic; they have their own internal structure as well. For a functional component to be able to provide its function, an interface needs to exist. An interface is, by definition, the mechanism for coupling functions together. An interface allows a component to convey, contribute, and/or connect its internal function across boundaries to other components and, depending on the network topology, to receive the same from them.
Component boundaries define where its functional entity begins and where it ends. ==An interface is a contract between functional component boundaries. This contract specifies how the function will be correctly supplied and consumed.==
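To make the contract metaphor concrete in code, here is a minimal sketch in Python (the `PowerSource`, `Battery`, and `run_motor` names are hypothetical, not taken from any real library): the interface states how the function is supplied and consumed, and both sides only need to honor that statement.

```python
from typing import Protocol


class PowerSource(Protocol):
    """The contract: whoever claims to supply power must honor this signature."""

    def supply(self, watts: float) -> float:
        """Return the power actually delivered, in watts."""
        ...


class Battery:
    """One possible supplier; it satisfies the contract structurally."""

    def __init__(self, capacity_w: float) -> None:
        self.capacity_w = capacity_w

    def supply(self, watts: float) -> float:
        # Deliver what was asked for, capped by what this battery can give.
        return min(watts, self.capacity_w)


def run_motor(source: PowerSource, demand_w: float) -> bool:
    """The consumer knows only the contract, never the concrete supplier."""
    return source.supply(demand_w) >= demand_w


print(run_motor(Battery(capacity_w=100.0), demand_w=60.0))  # True
```

Any other supplier that honors `supply()` could replace the battery without touching `run_motor`; the contract, not the component, is what keeps the coupling stable.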
Often, the coupling of function between components requires transferring something (information, matter, energy), in which case the interface acts as a vessel or channel for that *something* to flow. Think about a car as an object: the door is a component with an interface that couples to the car body by means of hinges and other hardware. Car doors have mechanical interfaces for coupling with the parent system (the car body), but at the same time, from a different perspective, car doors **are** interfaces themselves, as they are the mechanism we use to get in.
Think about a bridge connecting two roads: the bridge itself is an object of reasonable complexity, but from the road system perspective, the bridge is just an interface. It is the point of view in the system hierarchy from where we observe the bridge which defines it as a component or an interface. Interfaces can be complex although when seen from the system lens, they only appear as "mere" vessels interfacing two parts of said system.
When does a *thing* become modular? It is modular if its relationship to the parent system is not "monogamous", if such a term can be used as an analogy. This means that a modular component can provide its function to some other parent system without the need to introduce changes to it. Are car doors modular? Certainly not. We cannot take a door from a Hyundai, put it in a Mercedes, and expect it to work, because the coupling depends on the mechanical form factor; the door was designed specifically to fit the Hyundai.
On the other hand, plenty of car parts are modular, like tires, windshield wipers, lightbulbs, and so on. We could take a tire from a Hyundai and put it in a Mercedes, provided the cars are of a roughly similar type. Performance might be somewhat impacted, but the function would be reasonably coupled because the interface to the axle is standardized across vendors.
If a software library can be used in different applications, it is modular. Software is a great example of how modularity can enable great ideas. The whole open-source community sits on modular thinking. Software lends itself well to modular approaches because its interfaces are extremely malleable and development cycles are shorter.
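A toy sketch of the same idea in Python, blending the tire and software examples (all class names and the bolt-pattern value are invented for illustration): what makes the module transferable between parent systems is the shared, standardized interface, not the module itself.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class WheelInterface:
    """The standardized coupling: bolt pattern and rim diameter."""
    bolt_pattern: str   # e.g. "5x114.3"
    rim_diameter_in: int


@dataclass
class Tire:
    vendor: str
    fits: WheelInterface


@dataclass
class Car:
    model: str
    hub: WheelInterface

    def mount(self, tire: Tire) -> bool:
        # The tire couples to any car exposing the same interface,
        # regardless of who built either of them.
        return tire.fits == self.hub


standard = WheelInterface(bolt_pattern="5x114.3", rim_diameter_in=17)
hyundai_tire = Tire(vendor="Hyundai OEM", fits=standard)
mercedes = Car(model="Mercedes", hub=standard)

print(mercedes.mount(hyundai_tire))  # True: the shared interface is what lets the module travel
```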
Modularity is a strategic decision, for it can impact factors such as cost, schedule, and risk. When system integrators own the whole architecture and also own the life cycle of some of the components in the architecture, modularity can be difficult to justify, unless components are meant to be used in another system across the product line.
One may think that there is no real reason ***not*** to go modular, but it can be a tricky analysis to make.
Modularity can also be unattractive from a business perspective. When an object is truly modular, there is an inherent risk of losing identity and being treated as a commodity. If a customer can quickly replace a module with a competitor's, they might. The automotive industry has very selective criteria for modularity, and the reason is partly cost but largely differentiation and brand image. Automakers go modular on things that do not impact their brand image or the aesthetic integrity of their products, but do not go modular on things that do impact such integrity.

Modularity is not a function of size, either. Large and complex objects can be designed in a modular way if it is to their benefit; for instance, in the aerospace industry, turbofan engines are quite complex devices but also modular. Their principal attraction lies in the fact that they can work across different airframes from different aircraft manufacturers.
==It is only by having a deep understanding of what things are composed of that we can have the power to combine the constituent elements in the best way possible.==
## Principles of Modularity
A **higher degree of modularity** is usually desirable from a customer and business perspective (more variety, more reuse, etc.). **Challenges** to achieving a high degree of modularity include:
- **Power levels** can be determinative in limiting modularity choices at the physical design level, due to the need for impedance matching.
- **Packaging and light-weighting** constraints can lead to more integral architectures. Strict modularity in the physical domain carries some inefficiencies (often on the order of ~10-30%).
Software modularity is usually achievable at little or no physical penalty. The exception is some real-time embedded systems.
Advances in **technology and miniaturization** have shifted the old constraints and have the potential to enable more modular mobile devices today.
## Wholes and Parts
Mereology is the study of parts and the wholes said parts form. Mereology emphasizes the—wait, here comes a fancy term—*meronomic* relation between entities. Say what? Allow me to add some more sophisticated jargon. A meronomy or partonomy is a type of hierarchy that deals with part-whole relationships, in contrast to a taxonomy, whose categorization is based on discrete sets. The study of meronomy is then known as mereology and, for example, in linguistics a _meronym_ is the name given to a constituent part of, or a member of, something. In short: "X" is a _meronym_ of "Y" if an X is a part of a Y. Enough of complicated words.
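Before moving on, a tiny partonomy sketched in Python for the code-minded (the `Part` class and the example hierarchy are hypothetical): a meronym check is just a walk down the part-whole tree.

```python
from dataclasses import dataclass, field


@dataclass
class Part:
    """A node in a partonomy: a thing and the parts it is composed of."""
    name: str
    parts: list["Part"] = field(default_factory=list)

    def is_meronym_of(self, whole: "Part") -> bool:
        """True if self appears anywhere in the part-whole hierarchy of `whole`."""
        return any(p is self or self.is_meronym_of(p) for p in whole.parts)


wheel = Part("wheel")
engine = Part("engine", parts=[Part("turbine"), Part("compressor")])
aircraft = Part("aircraft", parts=[engine, wheel])

print(wheel.is_meronym_of(aircraft))   # True: the wheel is a part of the aircraft
print(aircraft.is_meronym_of(engine))  # False: the whole is not a part of its part
```

A node with an empty `parts` list plays the role of what mereologists call a *simple*; everything else has internal structure.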
Why do we even care? Can we just sit back, relax, and not think about parts and wholes at all?
Not really. Poor, half-baked part-whole definitions cause a lot of trouble if left unchecked. This is an incredibly underrated problem, and perhaps one of the most pervasive pitfalls in everything we do. We structure things badly, simply because we think about them badly.
Think of a software engineer being given the power to decide what object or data structure contains what other objects or structures, and how those are supposed to communicate with other composite objects. In short, the engineer is tasked with providing a “box” or “crate” that is supposed to manifest a certain functionality for a customer and then she is happily left alone to her own devices to decide not only the contents of the _crates_ but also how to arrange all objects inside the crate. Who checks that the software engineer has her mereology together?
>[!attention]
>An uncomfortable truth: in cyber-physical systems, [[Software|software]] is never a whole but a PART. Like bolts are.
And you don’t need to be Aristotle to get this, you just need a functioning brain. Software is always a part of something else. That _something else_ can be an aircraft, a smartphone, a pacemaker, a refrigerator, or an airline ticketing system. It’s always amusing to see someone being christened as “Head of Software” or similar in an organization that sells systems. Good luck resolving the friction of leading a bit of everything that contains software and nothing _wholly_ at the same time. It's like being promoted to “Head of Screws And Washers”.
Similar challenges await someone starting a documentation tree. Or a work breakdown structure (WBS), or a product tree for an ERP tool. Or a relatively straightforward task that still contains subtasks. We suck at it: we coarsen things in favor of abstraction, but in the end reality proves to be far more granular than we expected. No wonder everything takes longer than planned and costs more than estimated.
At any given time, someone, somewhere around the world is screwing up the parts and the wholes of something, or inverting a category, fully convinced that a feline is a type of tiger.
Mereology has words for this: *gunk* and *simples*. Gunk is stuff with complex composition, whose parts have parts of their own. Simples are, well, simple: no internal composition, a hierarchy of one single level.
When we mistake _gunk_ for _simples_ because we simply ignore the underlying granularity, we are kind of forgiven. But when we do know, or worse, when someone repeatedly tells us that what we think are _simples_ are in fact as _gunky_ as they can be, well, we deserve to be slapped by a reality check. Imagine Niels Bohr telling you ‘Hey, I think there’s more than just molecules’ and you disregarding poor Niels over and over.
Parts and wholes are embedded in everything we own and operate, and how we think. And it is not just about piling up parts and believing they magically become wholes. It’s also fundamentally about how we arrange the parts during the design process.

# Setting the Boundaries
When designing complex systems, the product tree is reasonably open for us to define what goes inside what, but the reality is that not everything in this containment dance is under our control.
Take for instance an FPGA device. An FPGA can:
- Sit inside a chip package (like in the [Zynq Ultrascale+ MPSoC](https://www.xilinx.com/products/silicon-devices/soc/zynq-ultrascale-mpsoc.html), where an FPGA is integrated along with several CPU cores and other resources, all inside one single chip).
- Sit on a mezzanine (for instance on an [[Backplanes and Standard Form Factors#Mezzanines|XMC]] board)
- Sit on a [[Printed Circuit Boards|PCB]]
So, the way this FPGA will connect with other elements, for instance utilizing PCI Express links...
> [!Warning]
> The section is under #development
# Can We Dream of Building Complex Systems Like Legos?
Imagine a world where building high-speed, complex digital systems is just about picking up components from a shelf or using some hypothetical "model-based" tool where we choose them from a library, and we connect them by some "drag & drop". Then, this magical tool would ensure everything is just perfect and pops up a message box saying something like "design complete".
As much as this scenario is desired by many, it remains as distant as can be. Designing and building complex systems is far from being a "drag and drop" activity. It is far from picking up components from a shelf and assembling them like we do with gaming PCs.
When a systems engineer says that something is "plug and play", most likely this person is blatantly lying.
Early computer peripheral devices required the end user to physically cut wires and solder others together to make configuration changes. Such changes were intended to be largely permanent for the life of the hardware.
Making computers more accessible to the general public—that is, making them more marketable to broader audiences—required making the configuration of peripherals as intuitive as possible. The idea was straightforward: there are people out there who can barely manage to turn on the TV, let alone handle a soldering iron at more than 300 degrees Celsius. How do we sell stuff to these clueless neophytes?
The first attempts were the good old jumpers and DIP switches. The 14400-baud Zoltrix modem I used for connecting to BBSs thirty years ago had plenty of jumpers and DIP switches.
All this still required reading manuals thoroughly, so decent text comprehension and even some language skills were necessary. What is more, adding peripherals required unscrewing and opening the lid of your computer and mechanically connecting things into PCI or ISA slots on the motherboard, after first cleaning out industrial loads of dust and dead flies. Also, any new hardware device landing in a computer needed a driver, that is, a piece of software to make sure that the inner workings of the new "thing" would couple in a non-traumatic way with the rest of the existing system.
The motivation to make computers more accessible was understandable. Why would an accountant want to shove boards into a PCI slot, tweak a DIP switch with a screwdriver, and deal with drivers when all they wanted was to calculate tax returns in a spreadsheet?
A relationship was increasingly coming under the spotlight: the relationship between the end user of a computing system and the knowledge needed to operate it. This tension was resolved in a couple of directions. First, computers became more functionally "integral": laptops, for example, gained traction by being one cohesive piece altogether, requiring less tampering and screwdriver dexterity. And second, the variety of interfaces left available to the user decreased: the fewer holes left open for a user to mess things up, the better. In time, parallel ports, serial ports, and VGA ports all disappeared. I write these lines from an anodyne laptop that has only 4 USB-C connectors to the outside world, plus a lonely 3.5mm audio jack that stoically resists as the last relic of a bygone age.

Thus, things evolved to the point where users can plug in a tiny flash drive and the computer manages to recognize it regardless of its manufacturer, chipset, or internal design. Which makes lots of sense. Now, this technological feat was the result of massive market pressure on vendors of all kinds to consolidate and adopt standardized interfaces and protocols so they could keep on doing business. No one in their sane mind would stubbornly stick to RS-232 or the parallel port.
The downside of the plug-and-play concept is that it spreads a mindset of "black boxes magically mating to other black boxes". Plug-and-play was possible only after a wild, ruthless standardization and consolidation of too many things happened, and happened fast. Thus, the plug-and-play mantra colonized the minds of many in the engineering scene, radiating a wave of blissful oversimplification across the board. All in the name of that somewhat bastardized term: abstraction. That is, hiding complexity.
==Hide it as much as you want, but complexity is not going anywhere, anytime soon.==
The evergreen observational law of "conservation of complexity" coined by [Larry Tesler](https://en.wikipedia.org/wiki/Larry_Tesler) captures the idea:
>_"The total complexity of a system is constant. If you make a person's interaction simpler, the hidden complexity behind the scene increases. Make one part of the system simpler, and the rest of the system gets more complex."_
Make interfacing simpler here, and you will pay for that decrease in complexity somewhere else.
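A small illustration of Tesler's point in Python (the function names and the USB-like scenario are invented for the sketch): the user-facing call is one line, but the steps it hides are exactly the ones earlier generations performed by hand.

```python
def connect(device_id: str) -> str:
    """What the user sees: one call, one line, 'it just works'."""
    return _negotiate(_load_driver(_identify(device_id)))


# What the user does not see: every step the old manuals used to push onto them.
def _identify(device_id: str) -> dict:
    # Probe descriptors, vendor IDs, device class... (collapsed to a stub here)
    return {"id": device_id, "class": "mass-storage"}


def _load_driver(descriptor: dict) -> dict:
    # Match against a driver database, resolve versions, allocate resources.
    return {**descriptor, "driver": "generic-msd"}


def _negotiate(bound_device: dict) -> str:
    # Agree on speeds, power budgets, endpoints; handle the unhappy paths.
    return f"{bound_device['id']} ready via {bound_device['driver']}"


print(connect("usb-0001"))  # the complexity did not vanish; it moved below this line
```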
Design and development of complex and mission-critical systems rarely benefit from naively thinking that higher simplicity in interfaces goes unpaid. Unless said elements went to great lengths to agree beforehand on the way to talk to each other—in a way that just hooking them together would be enough for the magic to happen—making them work properly will require high doses of RTFM (reading the freaking manual).
## Physical Limits of Modularity, or Why Designing a Gas Turbine is Different than Designing a System-on-Chip
Architecture, specifically the definition of modules and their interconnections, is a central concern of engineering systems theory. The freedom to choose modules is often taken for granted as an essential design decision. However, physical phenomena intervene in many cases, with the result that:
1) Designers do not have the freedom to choose the modules, or
2) They will prefer not to subdivide their system into units as small as possible.
A distinction that separates systems with module freedom from those without seems to be the absolute level of power needed to operate the system. Integrated circuits exemplify the former while mechanical items like gas turbines are examples of the latter. It has even been argued that the modularity of chips should be extended to mechanical systems. There are fundamental reasons, that is, reasons based on natural phenomena, that keep mechanical systems from approaching the ideal modularity of microchips.
Many important military and commercial systems fall into the class of “complex electro-mechanical-optical” (CEMO) items, examples of which include missile seeker heads and cameras. Each of these contains motors, sensors, gears, and control systems.
The distinction between mechanical systems and chips has gained new relevance as attention has turned to developing a theory of engineering systems. Key to that theory is the concept of architecture, the scheme by which functions are allocated to physical objects and the scheme by which those objects interact. Architectures are often characterized by the degree to which they are “integral” or “modular,” and many arguments are advanced in favor of modular architectures. Designers of mechanical systems do not have as much freedom to define modules or to choose the degree of modularity as designers of low-power systems like integrated circuits do. To the extent that this is true, the theory of engineering systems will have to take account of such fundamentals while evolving metrics for evaluating architectures and defining system design techniques.
It is widely agreed that design methods, and especially computer support of design, are generally more mature in electronics than they are in CEMO products. This realization has given rise to speculation that integrated circuit digital design and manufacturing methods might be applied to CEMO products with good results. The question is whether there are fundamental blockages to such a transfer of method, or whether the transfer has not taken place simply because of inertia or lack of appreciation of the potential benefits. Claimed benefits of the VLSI design paradigm include:
- Design benefits: systems-on-chip are extremely complex, small, and efficient, and can be designed by relatively few people empowered by well-integrated design tools; a microprocessor with 3 million "parts" can be designed and brought to full production in three years by about 300 people, whereas a car with about 10,000 parts requires the efforts of 750 to 1,500 people over about four years, and an airplane with 3 million parts may require 5,000 people for five years. Furthermore, the different SoC modules can be designed relatively independently and thus in parallel, saving time. SoC modules can be given standard interfaces, permitting plug-and-play design and opening up whole industries to new kinds of competition and innovation.
- Manufacturing benefits: the "same" manufacturing processes or even the same manufacturing equipment can be used to make an endless variety of chip items; by contrast, especially at the most efficient high volumes, CEMO production facilities are dedicated to one design or at most a few variations of limited scope.
- “Business” benefits: Product architectures can be tailored to the way a product will be sold or distributed. A more modular architecture permits modules to be identified as the differentiators that will be customized for different purchasers. Differentiation can occur at attractive points in the product delivery process, such as at the very end of the assembly line, at the distributor’s place of business, or even by the customer. Modular architectures lend themselves to outsourcing, permitting companies to share risk or gain access to knowledge and capabilities not available in-house. It has even been argued that modularity is a fundamental source of value in systems because it affords opportunities for innovation, provided that certain “design rules” are followed.
Are these benefits transferable from SoCs to CEMO items? To begin the discussion it is necessary to classify CEMO items and choose one class for further discussion. CEMO products can be classified roughly as follows:
- Those that are primarily signal processors
- Those that process and transmit significant power
The distinction is not merely academic, for two reasons. A major trend in recent decades has been the replacement of mechanical signal processors first by analog electronics and more recently by digital electronics. Signal processing behavior is generally carried out more economically, accurately, and reliably by electronics. The replacement is physically possible because signal processing is, or can be, accomplished at very low power levels because the power is merely the manifestation of a fundamentally logical behavior. The power itself is not required to perform any physical function, such as motion. However, the replacement has not occurred where significant power is the basis for the system's behavior and the main expression of its basic functions. The discussion that follows focuses on such power-level CEMOs. The presence of significant power in CEMOs and its absence in chips is the root of the reasoning in this section.
A generic, approximate list of the steps comprising the design of a microprocessor is as follows:
- **Stage 1:** Elementary devices are created, validated, and entered into a library along with design rules and associated analysis tools that reasonably guarantee successful fabrication.
- **Stage 2:** Complex systems are created as designers draw standard validated components from the library and hook them into systems.
- **Stage 3:** The item is manufactured.
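A toy model of that two-stage separation, sketched in Python (the cell names and the `compose` check are illustrative, not an actual EDA flow): Stage 1 yields a library of validated cells, and Stage 2 accepts only designs built entirely from them.

```python
# Stage 1 output: the library of validated, proven cells (names are illustrative).
VALIDATED_CELLS = {"NAND2", "NOR2", "INV", "DFF"}


def compose(system_netlist: dict[str, str]) -> bool:
    """Stage 2: a design is acceptable only if every instance maps to a library cell."""
    return all(cell in VALIDATED_CELLS for cell in system_netlist.values())


adder_bit = {"u1": "NAND2", "u2": "NAND2", "u3": "INV", "u4": "DFF"}
print(compose(adder_bit))                   # True: built entirely from proven cells
print(compose({"u1": "CUSTOM_SENSE_AMP"}))  # False: not in the library, back to Stage 1
```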
Few items in all of technology can be designed so automatically by proceeding from step to step, algorithmically converting requirements and symbolic representations of behavior into specific geometry without intervention by a person.
The situation in CEMO design is quite different from that of systems-on-chip. The Boeing 777 has, by various estimates, between 2.5 million and 7.5 million parts. The design took about 5 years and involved about 5,000 engineers at Boeing plus some thousands of others at avionics, engine, and other subcontractors. In CEMO design, there is nothing comparable to Stage 1 in chips, and there is no cell library from which parts can be drawn, with a few exceptions. These exceptions are mainly such items as fasteners, motors, valves, pipe fittings, and finishes like paint. They are typically catalog items supplied by subcontractors and are not often designed to suit the CEMO product.
In CEMOs, the designer puts most of the effort into:
- Converting an elaborate set of requirements on function, size, space, power, longevity, cost, field repair, recurring maintenance, and user interface into a geometric layout
- Identifying subsystems that will carry out the functions
- Allocating functions and space to the subsystems within the allowed space
- Breaking the subsystems into individual parts
- Designing those parts and fitting them into the allocated space
- Determining allowable variations in part and system parameters (tolerances on geometry, voltage, pressure, temperature, hardness, surface finish, etc.)
- Predicting off-nominal behaviors and failure modes and designing mitigators into the parts and systems
- Identifying fabrication and assembly methods, their costs, and yields
- Identifying design verification plans (simulations and prototypes of both parts and systems at various levels of fidelity)
- Revisiting many of the initial decisions up to the system level if their consequences, as discovered in later steps, result in technical or financial inviability.
While this list sounds superficially like the tasks of chip design, the process is profoundly different because each part and subsystem is an individual on which all the above steps must be applied separately. Each part will typically participate in or contribute to several functions and will perform in several media (gas, solid, electricity, heat...). Put another way, CEMO and SoC items differ in how one designs the "main function carriers," the parts that carry out the product's desired functions:
- In chips these parts are made up by combining library devices; a few device types are leveraged into systems with millions of parts; a modular approach to system design works, in which parts can be designed and operated independently
- In CEMO, these parts are designed specifically for the product, although they may be variants of past parts designed for similar products; thousands of distinct parts must be designed to create a product with a similar total number of parts, and many must be verified first individually and again in assemblies by simulation and/or prototype testing; a modular approach works sometimes, but not in systems subjected to severe weight, space, or energy constraints; in constrained systems, parts must be designed to share functions or do multiple jobs; design and performance of these parts are therefore highly coupled.
CEMO systems carry significant power, from kilowatts to gigawatts. A characteristic of all engineering systems is that the main functions are accompanied by side effects or off-nominal behaviors. In microchips, the main function consists of switching between voltage levels, and side effects include capacitance, heat, wave reflections, and crosstalk. In mechanical systems, typical side effects include imbalance of rotating elements, crack growth, fatigue, vibration, friction, wear, heat, and corrosion. The most dangerous of mechanical systems' side effects occur at power levels comparable to the power in the main function. In general, there is no way to "design out" these side effects. A chip will interpret anything between 0 and 0.5 volts as 0, or between 4.5 and 5 volts as 5. There is no mechanical system of interest that operates with 10% tolerances. A jet engine rotor must be balanced to within 0.1% or better or else it will simply explode. Multiple side effects at high power levels are a fundamental characteristic of mechanical systems. One result of this fact is that mechanical system designers often spend more time anticipating and mitigating a wide array of side effects than they do assembling and satisfying the system's main functions. This dilution of design focus is one reason why mechanical systems require so much design effort for apparently so little complexity of output compared to chip design. But this judgment is mistaken. A correct accounting of "complexity of output" must include the side effects, which are also "outputs" that cannot be ignored during design and are usually quite complex.
Systems that operate by processing power are subject to a variety of scaling laws that drive the number and size of components. For example, research shows that as steamships got larger, it was necessary to increase the number of boilers rather than simply build one larger boiler.

Chips are signal processors. Their operating power level is very low, and only the logical implications of this power matter (a result of the equivalence of digital logic and Boolean algebra). Side effects can be overpowered by the correct formulation of design rules: the power level in cross-talk can be eliminated by making the lines farther apart; bungled bits can be fixed by error-correcting codes. Thus, in effect, erroneous information can be halted in its tracks because its power is so low, something that cannot be done with typical side effects in power-dominated CEMO systems.

Furthermore, SoCs do not backload each other. That is, they do not draw significant power from each other but instead pass information or control in one direction only. Chips don't backload each other because they maintain a large ratio of input impedance to output impedance, perhaps 6 or 7 orders of magnitude. If one tried to obtain such a ratio between, say, a turbine and a propeller, the turbine would be the size of a house and the propeller the size of a muffin fan. Such a system would be impractical. Instead, mechanical system designers must always match impedances and accept backloading. This need to match is essentially a statement that the elements cannot be designed independently of each other.

An enormously fundamental consequence is that a chip element's behavior is essentially unchanged almost no matter how it is hooked to other elements or how many it is hooked to. That is, once the behavior of an element is understood, its behavior can be depended on to remain unchanged when it is placed into a system, regardless of that system's complexity. This is why chip design can proceed in two essentially independent stages, module design and system design, as described above. Furthermore, due to the mathematical nature of digital logic and its long-understood relation to Boolean algebra, the performance of SoCs can often be proven correct, not simply simulated to test correctness. But even the ability to simulate to correctness is unavailable to mechanical system designers. Why is this so?
An important reason why is that mechanical components themselves are fundamentally different from chip components. Mechanical components perform multiple functions, and logic is usually not one of them. This multi-function character is partly due to basic physics (rotating elements transmit shear loads and store rotational energy; both are useful as well as unavoidable) and partly due to design economy. Chip elements perform exactly one function, namely logic. They do not have to support loads, or damp vibrations, contain liquids, rotate, slide, or act as fasteners or locators for other elements.
Furthermore, each kind of element performs exactly one logical function. Designers can build up systems bit by bit, adding elements as functions are required. A kind of cumulative design and design reuse can be practiced, allowing whole functional blocks, such as arithmetic logic units, to be reused en bloc. The absence of backloading aids this process. However, a kind of resource conservation dominates mechanical design: if one element were selected for each identified function, such systems would inevitably be too big, too heavy, or too wasteful of energy. For example, the outer case of an automatic transmission for a car carries the drive load, contains fluids, reduces noise, maintains geometric positioning for multitudes of internal gears, shafts, and clutches, and provides the base for the output drive shafts and suspension system. Not only is there no other way to design such a case, but mechanical designers would not have it any other way. They depend on the multi-function nature of their parts to obtain efficient designs. Building block designs are inevitably either breadboards or kludges. However, the multi-function nature of mechanical parts forces designers to redesign them each time to tailor them to the current need, again sapping the effort that should or could be devoted to system design. Chip designers, by contrast, depend on the single-function nature of their components to overcome the logical complexity challenges of their designs. One can observe the consequences of this fundamental difference by observing that in VLSI the "main function carriers" are standard proven library elements while in mechanical systems only support elements like fasteners are proven library elements; everything else is designed to suit.
The existence of multiple behaviors in CEMO systems means that no analysis based on a single physical phenomenon will suffice to describe the element's behavior; engineering knowledge is simply not that far advanced, and multi-behavior simulations are similarly lacking. Even single-behavior simulations are poor approximations, especially in the all-important arena of time- and scale-dependent side effects like fatigue, crack growth, and corrosion, which is where designers worry most. In these areas, geometric details too small to model or even detect are decisive in determining if (or when, since many are inevitable) the effect will occur. And where component models are lacking, there is an even worse lack of system models and verification methods.
==The fundamental consequence of backloading is that mechanical elements hooked into systems no longer behave the way they did in isolation.== (Automotive transmissions are always tested with a dynamometer applying a load; so are engines.) Furthermore, these elements are more complex than chip elements due to their multi-function behavior. This makes them harder to understand even in isolation, much less in their new role as part of systems. VLSI elements are in some sense the creations of their designers and can be tailored to perform their function, which is easy in principle to understand. Mechanical elements are not completely free creations of their designers unless, like car fenders, they carry no loads or transmit no power. ==The fact that mechanical components change behavior when connected to systems means that systems must be designed together with components, and designs of components must be rechecked at the system level.== No such second check is required in chip design, as long as the design rules are obeyed. For this reason, CEMO items cannot be designed by the strict top-down Stage 1 - Stage 2 process described above for SoCs.
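A back-of-the-envelope sketch of the backloading argument, using the ordinary voltage-divider relation (the impedance values are illustrative, and the mechanical analogy is deliberately loose): when the load's input impedance dwarfs the driver's output impedance, connecting the load barely changes the driver's behavior; when impedances must be matched, the connected pair behaves visibly differently from either element in isolation.

```python
def delivered_fraction(z_out: float, z_in: float) -> float:
    """Fraction of the source signal that reaches the load (simple voltage divider)."""
    return z_in / (z_out + z_in)


# Chip-like case: driver output impedance ~100 ohms, CMOS gate input ~100 Mohms.
print(delivered_fraction(z_out=1e2, z_in=1e8))  # ~0.999999: the load barely backloads the driver

# Mechanical-like case: source and load impedances matched, as turbines and propellers must be.
print(delivered_fraction(z_out=1.0, z_in=1.0))  # 0.5: connecting the load visibly changes the behavior
```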
Moreover, SoC modules and elements transmit so little power that their interfaces can be designed based on other criteria. The interfaces are much bigger, for example, than they need to be to carry such small amounts of power. The conducting pins on electrical connectors that link disk drives to motherboards are subjected to more loads during plugging and unplugging than during normal operation. Their size, shape, and strength are much larger than needed to carry out their main function of transferring information. This excess shape can be standardized for interchangeability without compromising the main function. No such excess design scope is available in high-power systems. Interfaces take up space and weight and must be designed specifically for their application.
(Adapted from #ref/Whitney )
### Conclusions
Modularity manifests itself in three domains:
- Modularity in design
- Modularity in manufacturing
- Modularity in use
In each of these areas, CEMO systems will not be as modular as systems-on-chip are. Furthermore, the extreme of modularity may not be the best choice for some CEMO systems in at least some of these domains. In design, we have seen that CEMO systems cannot be designed in a feed-forward way, with modules designed first followed by system design using the modules. Integrated CEMO designs are often called "refined," indicating that great effort was invested in combining elements, capitalizing on multiple behaviors to achieve design objectives efficiently, and so on.
The ideal of modularity permits one to simulate the system and test or prototype only the modules. Under these conditions, the cost of a system grows essentially linearly with the number of modules. In more integral systems, testing requires building a system, and the substitution of one module for another requires another whole system to be built and tested to uncover any emergent interactions between the new module and the reused ones.
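A toy cost model of that contrast (the unit costs are arbitrary placeholders, chosen only to show the scaling): in the ideal modular case, verification cost grows linearly with the number of modules, while in the integral case every module substitution drags a whole-system build and test along with it.

```python
def modular_test_cost(n_modules: int, module_test: float = 1.0) -> float:
    """Ideal modular case: simulate the system, test only the modules -> linear growth."""
    return n_modules * module_test


def integral_test_cost(n_variants_tried: int, system_build_and_test: float = 50.0) -> float:
    """Integral case: each module substitution means building and testing another whole system."""
    return n_variants_tried * system_build_and_test


print(modular_test_cost(n_modules=20))         # 20.0: twenty modules, twenty module tests
print(integral_test_cost(n_variants_tried=5))  # 250.0: five variants, five full system builds
```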
Design is easier in VLSI than in CEMO systems because, in VLSI systems, the information at the system level is entirely logical and connective. This information is transformed and augmented from stage to stage in the design process but its essential logical/connective identity is preserved to the masks. This is not possible in mechanical systems, where the abstractions are not logical homologues (much less homomorphs) of the embodiments and likely never will be. Instead, tremendous conversion is needed, with enormous additional information required at each stage. A block diagram of an automatic transmission captures only the logic of the gear arrangements and shifting strategy. It fails to capture torques, deflections, heat, wear, noise, shifting smoothness, and so on, all of which are essential behaviors. Function-sharing is not a matter of choice in CEMO systems, and side effects cannot be eliminated.

In manufacturing, the same issues can arise. If the system is to some degree integral, then several advantages of modular systems will be unavailable. These include the omission of final system tests at the end of the production line as well as the easy substitution of suppliers that build “the same” module. Upgrades and engineering change orders will similarly have to be verified at the system level and cannot be counted on to follow plug-and-play expectations. Interestingly, much progress has been made in CEMO systems in creating even more integrated parts using advanced injection molding, die casting, and rapid prototyping techniques.

The reason why "an enormous variety of VLSI products can be built" from the same process is that the variety is embodied at the system level. At the component level, only one item can be made by each process. VLSI escapes the consequences of the process dependence on components because VLSI systems can be designed independently of component design. On the mechanical side, this separation does not exist.
In summary:
- System design methods based on extensions of the VLSI model will greatly underestimate design and debugging time.
- Methods of evaluating the excellence of a design that derive from the VLSI model will value the wrong things and fail to value the right things about good CEMO designs.
- Theories based on the VLSI model aimed at evaluating architectures will not properly value CEMO integrality.
- CEMO systems will not become more modular in the future.
- The design of CEMO systems will not evolve toward the two-stage separation method applicable to VLSI.
- Yet, technical and “business” pressures will pull in opposite ways in the CEMO domain, creating ongoing tension.
# Good Architecture is Simple Architecture
The saying goes: "Perfection is attained not when there is nothing left to add, but when there is nothing left to remove". Making things simple can be, ironically, quite complicated. Another way of putting it (which comes from the original phrasing of Ockham's razor[^9]):
>*Entities should not be multiplied beyond necessity*
As engineers and architects, we are great at fabricating necessities where there are none. It takes effort and energy to prevent complexity from increasing.
As we commented in the previous section, Tesler described it as a tradeoff: making things easier somewhere means making them more difficult somewhere else. According to this, every device has an inherent amount of irreducible complexity. The only question is who will have to deal with it: the user or the engineer.
This observation has a corollary about the relationship between abstraction and simplicity. Consider the automatic transmission in the automobile: a complex combination of mechanical gears, hydraulic fluids, electronic controls, and sensors. Abstracting the driver from having to shift gears manually is accompanied by more complexity in the underlying machinery. Abstraction does not mean system simplicity. ==Good abstractions only cleverly disguise and hide complexity under the hood, but complexity is very much there.== What Tesler observes is that what is simple on the surface can be incredibly complex inside, and what is simple inside can end up incredibly complex on the surface. The "constant complexity" observation from Tesler connects to the initial definition of complexity in this section: the number of "options" or "states" the system has is an inherent property of it. How that variety of states reaches the user is a design decision. Making things simpler for the user (and here a user can also be another subsystem in the architecture) requires encapsulating such variety in some way. Back to [[Engineering is Broken (but we can fix it)#Rams' Tend Commandments|Rams' ten commandments]], and rephrasing principle number 10 slightly: Good Architecture Is as Little Architecture as Possible.