# Complexity Creep
> [!cite]
> "Everything simple is false; everything complex is unusable" Paul Valéry
The life cycle of a system flows in continuous time, and we, as systems engineers, decide to divide such a cycle into chunks for practical reasons. Those *chunks* are visited several times (iterations) until maturity is acceptable. This section discusses how to define when such maturity is “acceptable”. When does the moment come when we can declare the design done, so we can move ahead and produce the system? Quick answer: it never comes. Complete designs just do not exist and have never existed. I’ll elaborate.
Oftentimes mistaken for a mechanistic, almost recipe-oriented activity, Engineering is, on the contrary, a highly creative endeavor. Creativity lies not in making something out of nothing, but in integrating distinct materials and parts into a new whole. In the typical lifetime of anything engineered, engineers spend quite some time creatively iterating through many different options to find the best solution possible. The process is known: a need enters one side of the box, magic happens, and a working system or solution comes out the other side. It is a problem-solving endeavor. Now, the magic is what this chapter tries to describe, along with how we engineers sometimes overdo it. I will start with an observational postulate after years of doing technology development from the depths of the trenches:
1. Provided infinite resources and isolation, an engineer will design forever.
A corollary of this, probably a bit fairer to engineers since it names no one in particular, could be:
1. Without constraints, features expand to infinity.
Overengineering everything is a disease of the modern engineer. Let me explain. If you leave an engineer, no matter the discipline, with enough resources and in total open loop (without anybody or anything to limit his/her degrees of freedom), he or she will enter an endless cycle of never being satisfied with the current design and continuously thinking it can be improved. It does look like we engineers are _baroque_ at heart; the ornamenting just needs to keep going; more is more. A software engineer will hold the release of that piece of software to profile some functions a bit more and maybe make that algorithm faster to gain some precious milliseconds. Or he will add some web interface, because why not? Or add a nifty RESTful API in case someone wants to integrate this software with some other software: everything needs a RESTful API nowadays anyway. The electrical engineer will go for that new power regulator with an incredibly nice quiescent current, and it's a 1:1 swap with the current one, so why not! Or hey, this microcontroller has more RAM and more ADC channels, and it is a lower-power device in the same package, so, easy-peasy, more ADC channels are always welcome. Or, heck, let’s redo the layout for better signal integrity on that DDR bus. The mechanical engineer will seek to make the system easier to assemble: instead of 400 screws, he can bring it down to 350, or use a new alloy to save 2% mass.

But design, like many other things in life, has to come to an end; this seems difficult to grasp for many of us. It wants to go on forever. Is it a trait of engineers’ personalities, or is it the true nature of the design activity? It happens to be a bit of both. In some other activities, for example cooking, you are forced to declare something ready, otherwise you will burn it and it will have to be trashed. In medical practice, a surgery is clearly declared finished at some point, otherwise risks would increase. Engineering design does not really have such a clear boundary between ready and not ready. All we can do is take “snapshots” of the work and artificially flag them as ready for the sake of the project. It is a decision to freeze things, put the pencils down at some point, and declare something done. This can be challenging for engineers, since they need to frame their design decisions against time; a big architectural change at some point of the development stage could be too risky for the delivery date, hence it cannot be done, or, if it must be done, then the deadline will be missed.

In tennis, there is a tendency for some players to lose matches after having a clear lead. This difficulty in closing out the match is a psychological response to having the win “around the corner”. Some relax too much, expecting the opponent to just give up and close the game for them. Others start listening to that inner critic telling them “you’d better not lose this one”, which ends up being counterproductive, in a self-fulfilling prophecy of sorts. Engineers need to close out their design matches as tennis players do. Thing is, designing systems requires multidisciplinary teams, so no design engineer gets to live alone and isolated. There are multiple other design engineers, from the same or different disciplines, pursuing their own design challenges, playing their own “design matches” and fighting their inner demons telling them to add more and more. Imagine multiple designers in an uncoordinated and silent featuritis.
A perfect disaster. Still, this basically describes every design team out there. It does appear like engineering design just does not reach steady state; it is in a state of constant expansion and growth. The cyclic nature of the design process seems to conspire with feature creep, opening the possibility of adding new things at every pass.
John Gall, in his always-relevant _Systemantics_ _(Gall, 1975)_, states:
> [!cite]
> “Systems Tend to Expand to Fill the Known Universe”
And then he adds:
>[!cite]
>“A larger system produced by expanding a smaller system does not behave like the smaller system”
Why do we keep on adding features nobody requested when we are left alone with our own souls? You would imagine engineers being reluctant to increase complexity, yet reality seems to show they are more than fine with it.
Without diving into cumbersome definitions, we’ll stick to a very straightforward notion of complexity:
1. Complexity is a direct measure of the possible _options, paths_ or _states_ the system contains.
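To make this notion concrete, here is a toy sketch (in Python, with made-up subsystem names and mode counts, not data from any real system) of how the state space of an integrated system balloons as options are added: the combined states are the product of the per-subsystem modes, and the potential pairwise interactions grow roughly quadratically with the number of subsystems.

```python
# Toy illustration of the "states" view of complexity (hypothetical numbers).
# The states the integrated system can be in is the product of the per-subsystem
# modes, and the number of potential pairwise links grows roughly as n*(n-1)/2.
from math import prod

subsystem_modes = {
    "power": 3,        # off / nominal / safe
    "comms": 4,        # off / rx-only / tx-rx / beacon
    "payload": 5,
    "thermal": 2,
}

system_states = prod(subsystem_modes.values())
n = len(subsystem_modes)
pairwise_links = n * (n - 1) // 2

print(f"subsystems: {n}, potential pairwise links: {pairwise_links}")
print(f"combined states: {system_states}")

# One "free" extra mode in a single subsystem multiplies the whole state space:
subsystem_modes["comms"] += 1
print(f"after one extra comms mode: {prod(subsystem_modes.values())} states")
```

The point of the toy numbers is only this: a feature added locally is never paid for locally; it multiplies the states the whole system can be in.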
Complexity creep is surely multi-factor. We, as engineers, do seem to want to exceed expectations and deliver more “value” by adding unrequested, supposedly _free_ functionality, while the truth is we are just making things less reliable and more difficult to operate, one small feature at a time. It is also true that not all complexity _inflation_ is the engineers’ fault. A product can suffer two different types of creep: internal and external. The external type comes from the customers/stakeholders, who can easily get into a feature-request frenzy ("I want this, this and that"), with the expected defense barriers (Project Managers, Systems Engineers) being too porous to contain the spontaneous wishes, permitting them to filter directly into the design team. And then there is the internal creep from within the design team, radiating spontaneously from its very core outwards, again with the Systems Engineering process failing to measure the complexity growth. It can be clearly seen how products are prone to complexity escalation: it comes from every direction. Systems Engineering plays a key role as a firewall against _featuritis_, and failing in that role greatly impacts the work. The external creep (from customers) flows top-down towards the design engineers, while the internal creep flows all the way from the designers up to the customers.
It cannot go unmentioned that when the engineering design activity is immersed in a surrounding atmosphere of complexity growth, like the one you can find in basically any startup environment, such growth, seen as something desired and even expected, will be pervasive and will influence decision making, including design. The verb ‘to grow’ has become so burdened with positive value connotations that we have forgotten its first literal dictionary definition, namely, ‘to spring up and develop to maturity’. In that reading, growth cannot continue perpetually; it requires maturing and stabilization. On the same note, the word "feature" happens to carry too much of a positive meaning compared to the word "complexity". _Feature growth_ does not really sound that bad if you put it on a board-meeting slide; it can even be a great buzz phrase to impress your growth-addicted investors. “Complexity growth”, on the other hand, can be a bit more difficult to sell. At the same time, “complexity” is a collective measure, whereas a “feature” refers to a single thing. Hence, when we talk about features, we refer to the individual, and we easily forget its impact on the collective.
But to design is to introduce change. When you have a baseline and you want to make it better in some way, for one reason or another, you introduce changes to it: you alter it. Change is inevitable. Change can be good as well, but when thrown into the wild of a multidisciplinary endeavor, it should happen under some sort of agreement and with reasonable awareness.
Picture yourself watching a group of people dancing. There is nothing like watching a nice choreography where everybody flows in beautiful synchronicity. Synchronicity is expected; otherwise we just cannot call it a choreography. It is easily spotted when one of the dancers screws up and loses sync with the rest. It is unpleasant to the eye, and somewhat cringy as well. Now, imagine a ballet where all the dancers individually decide to just add a bit of extra movement here, an extra jump there, because the audience will love it. You can see how we engineers would look if we danced ballet the way we design. Imagine an orchestra where the guy behind the cymbals decides to ornament a move to make it cooler, on the go. A technical system, which is composed of many subsystems integrated together (a spacecraft, a UAV, an aircraft, a gas turbine, you name it), amplifies change in ways that are very difficult to predict. Integration acts as an amplification factor. Any change in an integrated system, no matter how small, creates ripple waves in all directions.
Example: an electrical engineer in charge of a power distribution and measurement board decided to use a more efficient DC-DC converter. There was just a small, tiny, insignificant (from his point of view, of course) detail: the enable pin for the power distribution chip used inverse logic compared to its predecessor, active low instead of active high (a logical 0 was now needed to enable it). The chosen device was 1:1 pin compatible with the existing design, so it's a no-brainer! Who would oppose more efficiency, right? Now, he didn’t exactly communicate or document this in any meaningful way; he just passed it low-key as a change in the bill of materials, as a different part number. Changes in bills of materials are somewhat common: capacitors go obsolete and new ones appear, but those are passive components which usually have little impact on performance. Anyway, when the new board arrived, the integration team flashed the firmware, everything went reasonably well, and the board got integrated (let's assume here that board-level verification procedure coverage was permeable enough to let this go unnoticed during tabletop verification). That's when the fun began: a puzzled operator during rehearsal operations saw that the subsystem hooked to the new power converter was **ON** when it was supposed to be **OFF**. It was draining current and affecting the thermal behavior of some room inside the spacecraft. But how? After many (many more than the reader would think!) hours of debugging, the issue was found. So the firmware guy (me, in this case…) had to go and patch the firmware for the new hardware design. A piece of source code which was reasonably robust and trusted had to be edited and tampered with, increasing the chances of introducing new bugs due to schedule pressure. But that's not all, unfortunately. The ground software guy had to re-sync the telemetry databases and charts to switch the logic (now **OFF** means **ON** and **ON** means **OFF**; probably red means green and vice versa; a semantic nightmare), the changes had to be reflected in the operations manuals and procedures, and all operators had to be informed and re-trained. It can be seen how a small, probably good-hearted modification created ripple waves all over the place, well beyond the original area where the change was proposed. And this was such a small change, so picture how bigger changes can go.
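As a side note, this is the kind of ripple that an explicit abstraction can at least confine. The sketch below is hypothetical (it is not the actual flight firmware; the `PowerRail` class and the `write_gpio` helper are invented for illustration), but it shows how making enable-pin polarity a declared, reviewable parameter turns a converter swap into a one-line change instead of inverted logic scattered across the codebase.

```python
# A minimal sketch (not the real firmware) of confining enable-pin polarity
# to one explicit, reviewable parameter.
from dataclasses import dataclass

def write_gpio(pin: int, level: bool) -> None:
    # Placeholder for the real hardware-abstraction call.
    print(f"GPIO{pin} <- {'HIGH' if level else 'LOW'}")

@dataclass
class PowerRail:
    name: str
    enable_pin: int
    active_low: bool = False   # the swapped DC-DC converter would flip this flag

    def set_enabled(self, enabled: bool) -> None:
        # Translate the *intent* (ON/OFF) into the pin level the part expects.
        level = (not enabled) if self.active_low else enabled
        write_gpio(self.enable_pin, level)

# Old part: active-high enable. New part: same pin, inverted logic.
old_rail = PowerRail("payload_5V", enable_pin=17)
new_rail = PowerRail("payload_5V", enable_pin=17, active_low=True)
old_rail.set_enabled(True)   # GPIO17 <- HIGH
new_rail.set_enabled(True)   # GPIO17 <- LOW
```

Of course, no abstraction removes the need to communicate the change; it only limits how far the edit has to travel once the change is known.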
==The challenge with change is: every time we introduce one, no matter how small, we depart from what we know and we set sail for the unknown.== Then we have to build trust again in the new version/iteration, which takes time and effort and needs verification. Changing means stepping into the unknown. This is not a problem per se, but when we are dealing with highly integrated, mission-critical systems, well, the unknown is usually not your friend. We will never be able to completely foresee how our changes propagate. Systemic risk can build up in a system as new elements are introduced, risk that is not obvious until after something goes wrong (and sometimes not even then). Isn't all this enough pressure to think twice before adding an unrequested feature? The issue is that designers' visibility often doesn't span wide enough to consider all the other parts of the system where their unsolicited changes can (and will) have an impact; they are often simply unable to see the consequences of their acts.
Complexity creep and change are two sides of the same coin. Is the solution then just to avoid change? Not at all: we can't escape from it, and innovation blooms in change-friendly environments. But complexity creep in a system has implications; it always impacts somewhere, and that somewhere is often usability. Complexity creep kills, whichever direction it creeps from. When resources are scarce, the last thing to look for is unnecessary functionality nobody required (internal creep). In the same way, being early stage should not mean signing off on any random complexity requested by external parties, unless that external party is a customer. Reviewers, board members, or advisors should not have direct access to request complexity growth, at least not without some controls in place to make sure the leap in complexity is justified in some way. In any case, the question that surfaces now is: how do we measure all this in order to prevent it? Unrequested features finding their way into the product can be hard to catch, after all.
We started this chapter with a thesis that an engineer in “open loop” will creep complexity _ad infinitum_. The key part of this is the “open loop” remark, and it is the subject of the next section.
## Quantitative Systems Engineering
> [!cite]
> "We can't control what we can't measure"
> [!cite]
> "Measure twice, cut once"
Systems Engineering is a closed-loop activity, like many others. Feedback is an essential part of any engineering effort if it is to sustain its life. And here we are not referring to feedback in the technical machines we create, but to feedback in the design process itself. Designing and implementing technical systems means creating multiple disparate outcomes which require some level of control, or regulation; this means keeping those outcomes within desired margins or bands, aligned with an overarching strategy. For that, the outcomes to be controlled are fed to regulating processes where they are compared to a desired set point, and actions (ultimately, decisions) are taken if deviations are perceived; the intensity of the action is expected to be proportional to the magnitude of the deviation from the goal. Feedback loops are ubiquitous in Engineering practice; loops can be balancing or reinforcing, depending on the decision strategy around the difference between outcome and set point. Reinforcing loops tend to increase outcomes indefinitely, or until saturation. Balancing loops apply corrective action in order to minimize the difference between what's measured and what's desired.
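As a rough illustration of the two loop types (illustrative dynamics only, not a model of any particular engineering process), the following sketch contrasts a balancing loop, where the corrective action is proportional to the deviation from the set point, with a reinforcing loop that grows until it saturates.

```python
# Minimal sketch of balancing vs. reinforcing feedback loops.
def balancing(setpoint: float, x: float, gain: float, steps: int) -> list[float]:
    """Corrective action proportional to the deviation from the set point."""
    history = [x]
    for _ in range(steps):
        x += gain * (setpoint - x)   # act against the error
        history.append(x)
    return history

def reinforcing(x: float, rate: float, steps: int, saturation: float) -> list[float]:
    """Outcome feeds its own growth until it hits saturation."""
    history = [x]
    for _ in range(steps):
        x = min(x * (1 + rate), saturation)
        history.append(x)
    return history

print(balancing(setpoint=100.0, x=10.0, gain=0.5, steps=8))    # converges toward 100
print(reinforcing(x=10.0, rate=0.5, steps=8, saturation=500))  # grows until capped
```

Unchecked feature creep behaves like the second function; a working Systems Engineering process is supposed to behave like the first.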
The Systems Engineering practice is composed of a collection of processes that are executed on a System-of-Interest throughout its life cycle. Keeping the outcomes of these processes within desired margins relies inevitably on measurement. Outcomes from system design processes are typically of two types:
- Data and Information: blueprints, schematics, source code, requirements, manuals, etc.
- Realized/tangible/physical: subsystems, modules, boards, software executables, libraries, etc.
Now, how do we quantify these things if we need to keep them bound to some margins? The variables we need to measure are often very hard to quantify. How do we extract a descriptive quantity from a pile of documents or operation manuals? What metrics can be mined from the process of integrating or verifying a system? How can we quantify the complexity of, say, software, to gauge whether it has grown outside desirable margins? Imagine I have a fever, but instead of a thermometer, I only have a report about my general condition, where most of the information consists of non-numerical observations like the way my face looks. What I need is a number, a quantification (temperature, provided by a sensor), which turns out to be a measure of my health condition at that moment. What I get instead is a more indirect and abstract thing: a few pages of verbal text. Another classic example is a résumé: a sheet of paper stating a person's education and experience. How can we extract a “suitability for the job” metric which we could use for comparing and benchmarking candidates and which would help us decide better? Such a quantification just refuses to be known. Eventually some sort of suitability valuation will have to take place for us to decide. But such an assessment will be highly qualitative, based on our own and most likely highly biased analysis and interpretation.
This is a classic pattern: when we lack metrics, we rely on our qualitative gut feelings, which are far from objective since they are steeped in our own beliefs and past experiences. Problems appear when qualitative methods are forced to provide metrics, and then those numbers are fed into feedback loops and used for decision making. Qualitative methods have an inevitable coating of human factors. Our estimations are highly biased and greatly affected by our past experience, among many other cognitive factors. Many activities around Engineering and Project Management are carried out using information infested with biases and cognitive offsets. One way of overcoming this issue is by means of _operationalization_, the process of defining the measurement of a phenomenon that is not directly measurable, though its existence is inferred from other phenomena. Project management (acknowledging its highly qualitative nature as well) has explored quantitative approaches, for example Quantitative Project Management. The phrase which opens this section, "we can't control what we can't measure", captures, in short, that decision-making on the grounds of poor quantitative information is a dangerous game. It could be rewritten as: we can't correctly decide upon what we cannot quantify. We still, though, decide "without numbers" every single day of our lives, from observation, interpretation, and operationalization. But we cannot easily benchmark or compare without quantification, we cannot easily visualize without quantification, nor can we properly predict, forecast, identify trends, or extrapolate without some level of quantification.
Systems Engineering needs to be as quantitative as reasonably possible, and steer away from forcing qualitative methods to look quantitative, unless qualitative decision-making is fed with good heuristics (experience). The highly recursive and iterative nature of Systems Engineering needs clear metrics we can supervise to understand if the processes are working correctly, and ultimately if we are building the right system, the right way. But how do we capture those numbers? How should we hook our "sensors" around it? There are two different places to _sense_: the product itself and/or the processes.
**Product or System measures**:
What is the group of critical numbers and figures we could gather from our system in terms of performance? What are the numbers that matter the most? These are usually called Technical Performance Measures (TPMs). In a nutshell, a TPM set is a collection of meaningful numbers describing the way our product is expected to perform under certain conditions, in the right context. These TPMs can be, for example, fed to pilot customers, through marketing or sales, to get feedback from them and potentially adjust the design if the TPMs don't seem to match a solid business or use case. TPMs are a very important player in "closing the loop" with the market, and essential for small organizations building complex systems.
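A minimal sketch of what tracking TPMs against margins could look like follows; the `TPM` record, the names, and the numbers are hypothetical, not taken from any real program.

```python
# Sketch: TPMs tracked against a target and a threshold (hypothetical values).
from dataclasses import dataclass

@dataclass
class TPM:
    name: str
    current: float        # current best estimate from analysis or test
    target: float         # design-to value
    threshold: float      # value beyond which the business or use case breaks
    lower_is_better: bool = True

    def margin(self) -> float:
        # Positive margin means we are still on the safe side of the threshold.
        sign = 1 if self.lower_is_better else -1
        return sign * (self.threshold - self.current)

tpms = [
    TPM("mass_kg", current=11.8, target=11.0, threshold=12.0),
    TPM("downlink_Mbps", current=140, target=150, threshold=100, lower_is_better=False),
]
for t in tpms:
    status = "OK" if t.margin() >= 0 else "BREACH"
    print(f"{t.name:>15}: current={t.current} margin={t.margin():.1f} [{status}]")
```

Reviewed periodically, even a list this short already behaves like a set point in a balancing loop: deviations trigger decisions.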
**Process measures**:
The multiple processes we need to execute upon our system-of-interest are a goldmine of information, and often we let that information go down the drain. For the design phase, for example, we could define metrics on complexity to prevent it from creeping as it usually does; complexity could be inferred from the number of lines of source code, the number of levels in the system hierarchy, scoring from design reviews, bug reports, etc. For the verification process, we could monitor the number of defects found per unit of time, which would let us understand the effectiveness of the process itself as well as our improvement over time; or collect the time a particular test takes in order to assess learning curves and predict how long it is expected to take in the future, for better planning. Or we could quantify defects found in critical third-party components from different suppliers, in order to assess their comparative performance or obtain a metric of how bad our vendor lock-in is. The information is there; it requires a reasonable level of creativity and a data-mining mindset to acquire the figures.
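For instance, a defect-discovery-rate metric can be mined from something as mundane as the verification log. The sketch below uses invented data and a crude least-squares trend, just to show how little machinery is needed to start closing the loop on a process.

```python
# Sketch: defects found per week during verification, plus a crude trend
# (negative slope suggests the discovery rate is decaying). Data is invented.
from collections import Counter
from datetime import date

defect_log = [                       # (date found, component) - illustrative
    (date(2024, 3, 4), "obc_fw"), (date(2024, 3, 6), "eps"),
    (date(2024, 3, 12), "obc_fw"), (date(2024, 3, 13), "comms"),
    (date(2024, 3, 21), "eps"),
]

per_week = Counter(d.isocalendar().week for d, _ in defect_log)
weeks = sorted(per_week)
counts = [per_week[w] for w in weeks]
print(dict(zip(weeks, counts)))      # e.g. {10: 2, 11: 2, 12: 1}

n = len(weeks)
mean_w, mean_c = sum(weeks) / n, sum(counts) / n
slope = (
    sum((w - mean_w) * (c - mean_c) for w, c in zip(weeks, counts))
    / sum((w - mean_w) ** 2 for w in weeks)
)
print(f"trend: {slope:+.2f} defects/week per week")
```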
Engineering technical things is an information-intensive activity. Reasonable resources should be allocated to defining a methodology for sensing, storing, and retrieving the critical metrics around it. This method should dictate what numbers will be mined and how, making sure metrics of proper quality are obtained. The importance of any measure is that it will most likely feed a loop, or serve as an indicator, early warning, or flag that something has changed in the process; that change may be good, detrimental, or neutral to our system development, and knowing about it augments our decision-making.
- Feedback loops are ubiquitous in Systems Engineering due to their recursive and iterative nature.
- Systems Engineering practice relies heavily on qualitative methods due to the difficulty of obtaining quantifiable numbers in many of its processes and activities.
- A more quantitative Systems Engineering approach requires identifying key metrics, and defining a consistent method of collection, storing, and retrieval of such metrics for the sake of comparison, prediction, extrapolation, and better estimation.
- A more quantitative Systems Engineering approach needs to be aware that when a measure becomes a hard target, it ceases to be a good measure, as Goodhart’s law observes. This can occur when individuals or organizations try to anticipate the effect of a policy or a change by taking actions that alter its outcome, in a sort of self-fulfilling prophecy.
## Change Management
Iteration is at the very heart of engineering design. Things evolve, and evolution cannot take place without change. Change is unavoidable and constant. The world around us changes continuously. In startups, this is even truer; things are fluid and in a week's time, many things can look different. This is another effect of the learning process while young organizations grow; as things are better understood, changes are applied to correct previous partial understandings. Feedback loops require changing and adjusting to meet some goal. An early-stage organization that doesn’t change and adapt to the ever-changing context is condemned to disappear.
But change needs coordination and alignment with an overarching strategy. Uncoordinated, disparate changes are as damaging as not changing at all. At the same time, the way change is handled speaks, and loudly. For example, constant changes in design decisions can spread the message that “the sky's the limit”, ramping up complexity and causing project overruns. This can put the Systems Engineer's or Project Manager's reputation and competence on the line as well, since they are expected to act as “shock absorbers” or “firewalls” preventing noise from filtering into the design team.
Changes can take place in the system under design, in the organization, or both (the big system). Organizational changes include, for example, altering the structures, strategies, procedures, or cultures of organizations (Quattrone & Hopper, 2001). The term encompasses both the process by which this happens (i.e., “how”) and the content of what is being altered (i.e., “what”). By definition, change implies a shift in the organization from one state to another. This shift may be deliberate, with the aim of gaining or losing specific features of the organization to attain a defined goal, or it may be less deliberate, perhaps occurring as a consequence of developments outside the control of the organization. Moreover, during the change process, additional parts of the organization may be unintentionally affected, particularly when change is experienced as excessive (Stensaker et al., 2001). When does change become _excessive_? Situations that can result in excessive change include: 1) the organization changes when the environment or organizational contingencies do not suggest the need to change; 2) the organization changes for change's sake; and 3) the organization changes one element but fails to change other dimensions accordingly (Zajac et al., 2000). The third situation is especially relevant. Changing particular elements while leaving others unchanged can result in imbalances. This does not imply the whole organization must change at once; it just means organizational change must be structure-aware and seek balance.
In engineering, change is no stranger. Managing change has been part of the engineering practice since the early days. The discipline that combines the management of changes and of the system's configuration in a structured way is called Configuration Management. It is the management discipline that:
- Applies technical and administrative direction and surveillance to identify and document the functional and physical characteristics of the system under design;
- Controls changes to those characteristics;
- Records and reports change processing and implementation status.
The purpose of Configuration Management is therefore to establish and maintain consistency of the system's performance, functional and physical attributes with its requirements, design and operational information throughout its life. It is typical to distinguish four different Configuration Management disciplines (Altfeld, 2010).
- Configuration Identification: assigns and applies unique identifiers to a system, its components, and associated documents, and maintains document revision relationships to the system configurations.
- Configuration Control: involves controlling the release of and changes to baselined products throughout their life cycle. It is the systematic process to manage the proposal, preparation, justification, evaluation, coordination and disposition of proposed changes, approved by an appropriate level of authority.
- Configuration Auditing: verifies that the system is built according to the requirements, standards or contractual agreements, that the system design is accurately documented, and that all change requests have been resolved according to Configuration Management processes and procedures.
- Configuration Tracking: involves the recording and reporting of the change control process. It includes a continuously updated listing of the status of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of approved changes. This configuration information must be available to decision-makers over the entire life cycle of the system. In an environment where a large number of multi-functional teams operate concurrently, it is particularly important to understand how Configuration Control works.
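To make this less abstract, here is a toy sketch of the kind of record a lightweight configuration-control and tracking process could keep per change request; the fields, states, and the `ChangeRequest` class are illustrative, not taken from any standard or tool.

```python
# Toy sketch of a change-request record for lightweight configuration control.
from dataclasses import dataclass, field
from enum import Enum

class CRState(Enum):
    PROPOSED = "proposed"
    EVALUATED = "evaluated"      # impact analysis done
    APPROVED = "approved"        # disposed by the change board
    IMPLEMENTED = "implemented"
    VERIFIED = "verified"

@dataclass
class ChangeRequest:
    cr_id: str
    title: str
    affected_items: list[str]    # configuration items touched (docs, firmware, boards...)
    justification: str
    state: CRState = CRState.PROPOSED
    history: list[str] = field(default_factory=list)

    def advance(self, new_state: CRState, note: str) -> None:
        # Status accounting: every transition is recorded, not just the outcome.
        self.history.append(f"{self.state.value} -> {new_state.value}: {note}")
        self.state = new_state

cr = ChangeRequest(
    cr_id="CR-042",
    title="Swap DC-DC converter (enable polarity now active-low)",
    affected_items=["PDU schematic", "OBC firmware", "TM/TC database", "Ops manual"],
    justification="Higher efficiency; impacts enable logic in firmware and operations.",
)
cr.advance(CRState.EVALUATED, "impact analysis: 4 configuration items affected")
print(cr.state, cr.history)
```

Had the DC-DC converter swap from the earlier example passed through a record like this, the ripple would have been visible before the board reached the integration bench.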
In smaller organizations, change and configuration management are usually not enforced in a very rigid way. During the proof-of-concept stages, this may be an intentional decision to keep things fluid and moving. Eventually, a compromise must be reached between openness to new functionality and maturing a system or product design. This is a bit like tightrope walking, if you want: change management that is too rigid can kill quick turnarounds and innovation; an approach that is too loose opens the door to complexity creep. A lightweight, tool-agnostic approach to avoid feature creep could be:
- We must be aware of the impact we make when we introduce even the slightest improvements to a subsystem which is integrated in a bigger system: think systemically. A small change can and will impact even the farthest areas.
- We must _close the loop_ with other designers through periodic reviews and processes, to make sure our perception of improvement is aligned with the organization's grand plan and to tame our own cravings to add "value" nobody requested.
- Put safeguards in place to catch uncoordinated changes.
- Do not prefer an awesome unknown over a suboptimal known: i.e., don’t believe new designs will be better than old designs. A corollary: beware of making projects depend on unknown functionalities or features that are yet to be created from the ground up.
- Always prioritize _value_ as the criterion for deciding about introducing changes and unknowns, but agree on what _value_ means.
- We are fallible and fully capable of making mistakes. We can overestimate our own capacity to assess complexity and its implications. What is more, we tend to hold overly favorable views of our own abilities in many social and intellectual domains; our own incompetence can rob us of the ability to realize we are incompetent (Kruger & Dunning, 1999).
- Go incremental. Small steps will get us more buy-in from stakeholders and ease our way. Show ideas with proof-of-concepts and improve from there. Never try to go from zero to everything. John Gall observes that “a complex system that works is invariably found to have evolved from a simple system that works” (Gall, 1975).
- Design systems which can provide their function using the least variety of options and states as reasonably possible.
- Change, when handled, fuels innovation and differentiation.
In many industries, time to market is essential. Failing to release a product to the market and losing an opportunity to a competitor can be fatal. Sometimes, having a product in the market, even a prototype far from perfect, might be the winning card to secure a contract, or to be first and capture essential market share. Market forces dictate that _better is the enemy of good enough_, which may be counterintuitive for our somewhat extravagant, technologically _baroque_ minds, but is essential for product and company success.
## Complexity Can Be Lucrative
If only complexity were just a technical problem. But complexity creeps into every single aspect of an organization. As the organization evolves, everything in it gets more complex: logging in to corporate online services gets more complex, door access controls get more complex, devices for video calls get more complex, coffee machines get more complex, even the hand-drying equipment in the toilets gets more complex. More people, more departments, more managers, more everything. More is more. As can be seen, the Universe does not really help us engineers tame our complexity issues: everything around us gets more complex. As more souls join the organization, the division of labor gets extremely granular, communication complexity skyrockets combinatorially, and cross-department coordination becomes a game of chess. Departments start to feel they have difficulties correlating their information with other departments’ information. There is an increasing sense of “lack of”: “lack of control”, “lack of tracking”, “lack of awareness”.
The result: an explosive ecosystem of tools and methods. Which specific tools or methods will depend, perhaps, on what company the managers are looking up to at the moment (Google and Apple always rank at the top), or on what idea the new senior VP has just brought from his or her previous employer: OKRs, Six Sigma, Lean, SAFe, MBSE, or worse, a hybrid combination of all of those together. With more and more elements appearing in the org hierarchy, gray areas bloom. Nothing like the ambiguity of something which fits perfectly into two or more different narratives, depending on who you ask. All of a sudden, spreadsheets fall into disgrace and everybody wants to replace them with something else, and a frenzy for automating everything kicks in. The automation, tooling, and methods _skirmish_ can be tricky because it cuts across the organization. Tooling then grows detached from the realities and needs of particular projects or departments. Tools become ‘global variables’ in the software sense of the word: you change one and the implications are ugly in many places. This is a sort of paradox: tools are usually used by the practitioners, but for some strange reason the practitioners rarely get to choose them; managers do (they are defined top-down). As the big system evolves, practitioners seek tools more specialized for their roles. For example, a tool that tracks customer orders and focuses on service-level agreements (SLAs) differs considerably from one that tracks issues in a software component, or one targeted at business analysts for modeling customer use cases and workflows. If in the past the organization could use a single tool for those different tasks, as it grows that stops being the case. The one-size-fits-all approach eventually stops working, but in the process a lot of energy is spent stretching tools to use cases they were not designed for. In all this confusion, a great deal of revenue is made by the tooling world. This is the moment when the organization's defenses against vendor lock-in are at their lowest.
On the other hand, allowing every team or project to define its own tooling and methodologies would be impractical. Hence, as the tech organization expands, a key part of the tech leadership needs to look after cross-cutting integration, with a very strong system orientation, with the goal of harmonizing tools and methods so they work as well as possible across the _multiverse_ of projects and systems.
## Switching Barriers
Vendor lock-in is just one _switching barrier_ among the many switching barriers that organizations grow over time. Switching barriers are not only the result of greedy suppliers wanting us to stay captive to what they sell; we create our own internal, domestic switching barriers as well. Switching barriers can take many forms, but in tech organizations they frequently materialize in the system architecture and in the ecosystem of ad-hoc, homegrown tools that have expanded and encroached beyond initial plans, where all of a sudden changing them becomes (or feels) impossible, since the tool or subsystem now affects or influences too many places and processes. You can feel the horror when thinking of swapping those things for something else; usually a shiver goes down your spine, and it feels just impossible or prohibitively costly. This brings up a sort of “paradox of the bad component”: the apparent impossibility of changing them makes organizations decide to keep extending the conflicting components, only making the problem worse, under the illusion that they will eventually become good at some uncertain point in the future, provided enough tampering and refactoring is performed. By adding more complexity to poorly architected things, we just make them worse.
This is one of the main beauties of modular design (to be addressed in the next chapter): it brings back the power of choice to designers and architects. Modular architectures lower switching barriers, which is an asset when products are in the maturation process. Modular thinking must be applied at all levels of the _big system_, and not just to the things that fly.
# References
Altfeld, H.-H. (2010). _Commercial Aircraft Projects, Managing the Development of Highly Complex Products_. Routledge.
Bar-Yam, Y. (2018, September 16). _Emergence of simplicity and complexity_. New England Complex Systems Institute. https://necsi.edu/emergence-of-simplicity-and-complexity
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. _Journal of Personality and Social Psychology_, _77_(6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
Gall, J. (1975). _Systemantics: How Systems Work and Especially How They Fail_. Quadrangle/The New York Times Book Co.
Norman, D. (2011). _Living with Complexity_. The MIT Press.
Quattrone, P., & Hopper, T. (2001). What does organizational change mean? Speculations on a taken for granted category. _Management Accounting Research_, _12_(4), 403-435. [https://doi.org/10.1006/mare.2001.0176](https://doi.org/10.1006/mare.2001.0176)
Saffer, D. (2009). _Designing for Interaction: Creating Innovative Applications and Devices_. New Riders.
Stensaker, I., Meyer, C., Falkenberg, J., & Haueng, A.-C. (2001). Excessive change: Unintended consequences of strategic change. Paper presented at the Academy of Management Proceedings, Briarcliff Manor, NY.
Zajac, E., Kraatz, M., & Bresser, R. (2000). Modeling the dynamics of strategic fit: A normative approach to strategic change. _Strategic Management Journal_, _21_(4). https://doi.org/10.1002/(SICI)1097-0266(200004)21:4<429::AID-SMJ81>3.0.CO;2-#