Data-driven models are powerful tools for detecting patterns in how quickly technologies advance. They can allow us to make forecasts of future change and to estimate the uncertainty of those forecasts. That is great, but it does raise questions: Why does technological change so often follow the particular patterns of Moore's Law, Wright's Law, and Goddard's Law? What are the mechanisms driving the change? In particular, why do some technologies develop so quickly while others crawl along? Why do some technologies improve steadily, while others have sudden improvements? How can we design technologies for rapid improvement?
In this section, we'll offer answers to those questions. We'll do it by developing mechanistic models that directly represent the underlying processes behind innovation. That will give us illuminating, quantitative explanations for how quickly or slowly technologies develop. It will also allow us to understand the data-driven models more deeply. How, for example, do we choose between Moore's Law, Wright's Law, and Goddard's Law if they each fit the data about as well? This can be important because they make different predictions over time. When a data-driven model fits the data well, what factors about the technology determine the parameters in the model?
# Unit Scale
One of the challenges in large construction projects, like building an underground tunnel, a new roadway, or even a nuclear power plant, is that each project is a huge investment. While ideas abound about how to make these projects more efficient, implementing those ideas to test how well they work requires a massive investment of resources on multiple levels: testing to ensure safety, working within government regulations, and raising a great deal of money (that is, building interest among public or private investors who may perceive new approaches as risky). Each of those steps takes a lot of time and effort, slowing down experimentation and the implementation of new approaches — in other words, slowing down innovation.
By comparison, demonstrating a new cell phone design requires much less investment. That means you can experiment far more easily — if a new cell phone turns out to be lousy, the loss is far, far smaller than that from a failed large construction project. So that's one reason why it's not surprising that we've seen a lot more innovation in cell phones than in nuclear fission reactors.
The general principle we can draw from these examples is that if the unit scale is small, that is, the cost to develop one functional unit of the technology is low, innovation tends to be faster.
Approaches are being developed to sidestep these constraints, for example, through the use of "[[Abstraction Levels, Transaction-Level Modeling, and Digital Twins#"Digital Twins" and the Price of Agency|digital twins]]" to virtually test out new ideas for large projects at much reduced cost. A new design for a nuclear power plant, for example, could be modeled entirely digitally, allowing for virtual testing of the design itself as well as construction techniques. If the digital twins represent the real-world projects faithfully enough, this approach would be an example of a soft technology innovation effectively reducing the unit scale of a project, at least for the purposes of developing and testing new ideas.
# Modularity
Before 1964, when IBM introduced the System/360, each mainframe computer had a unique design, with its own operating system, processor, peripherals, and application software (Baldwin & Clark, 1997). So introducing a new computer system with improved technology required developing software and components specifically for that system, slowing innovation. Furthermore, buyers hesitated to buy new equipment because it would require rewriting all their programs.
IBM solved this by creating a computer that was highly modular, with components that could be used in multiple computer designs. Each component had to stick to strict design rules, so that other components could interact with it reliably. This also allowed other manufacturers to create those components and have them work on IBM computers: terminals, memory, software, printers, and central processing units. This meant that anyone could innovate on one piece of the computer, creating, say, a less expensive printer, without having to redesign the whole computer. And this set the stage for the very rapid development of more powerful, less expensive computers and accessories.
Technologies that are highly [[Modularity|modular]] are said to have low design complexity. The general lesson is that low design complexity leads to faster innovation.
# Distance from physical limits
Early in the development of photovoltaic cells, it was obvious that the technology was far from its physical limits. The conversion efficiencies were less than 10%, the materials in the cells were far thicker than necessary to maintain their mechanical stability, and so on. That made it clear there was lots of room for improvement, so it was reasonable to expect that they'd improve a lot. And even now, after costs have fallen and efficiencies have risen so dramatically, we haven't yet bumped up against physical limits that will halt improvement.
A contrasting example can arguably be drawn from sailing vessels. In 2016, the yacht Comanche set a speed record for monohull crossings of the Atlantic Ocean, essentially matching the speed of a single weather system (Yachting World, 2015). (How fast a weather system moves would be the physical limit in this example.) A monohull vessel could, of course, still break this record by sailing in (and maintaining the speed of) a faster weather system. But the time to cross the Atlantic in this class of vessel is encroaching on physical limits.
# Distance from a cost floor
U.S. coal power plant costs have not improved substantially in recent years. As we discussed earlier in the course, a major reason is that the cost of the coal itself is a large component of the cost of coal-fired electricity, and the efficiency of converting the energy stored in coal into electricity has leveled off in power plants constructed in the U.S. Technological innovation might still reduce coal prices (someone could find a more efficient way of extracting coal, for example), but because coal for power plants is so close to its unprocessed form in the ground, the potential to make coal cheaper is arguably limited. The cost of the coal thus sets a cost floor for coal power plants: even if the plants reached their maximum possible efficiency, the cost of the fuel would remain. The distance between the cost of the energy produced and this cost floor is the cost improvement potential. A good amount of cost improvement is still possible, but it would require the coal plants themselves to become more efficient, which hasn't happened in recent years.
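To put the idea in symbols (introduced here just for illustration, not taken from the course): if $C$ is the current unit cost of a technology and $C_{\mathrm{floor}}$ is the lowest unit cost its irreducible inputs allow, the cost improvement potential is $C - C_{\mathrm{floor}}$. For a hypothetical coal plant selling electricity at 10 ¢/kWh whose fuel alone would still contribute 3 ¢/kWh even at the best achievable efficiency, the improvement potential is about 7 ¢/kWh, no matter how much the plant itself improves.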
The raw material inputs to a technology, such as fossil fuels, mined metals, grains and other agricultural products, and wood from forests, set its cost floor.
This illustrates another general principle driving innovation: The further the cost of a technology is from its cost floor, the greater the innovation is likely to be.
# Reliance on hardware versus soft technology
Evidence so far suggests that soft technologies tend to improve more slowly than hardware technologies. The reasons for this aren't yet fully understood, but one may be that codifying knowledge about soft technologies typically isn't as effective. It can be done, with checklists or documented construction processes, for example, but knowledge about processes often doesn't transfer well to different times and places.
For example, a house constructed from a novel building material tends to be more expensive regardless of the cost of the material itself. The particular design of that house is likely to raise questions about how to use the material that aren't addressed in the manufacturer's guides, and figuring out the answers on the fly is expensive. And even when you've figured out the answers, odds are low that the next person facing the same question will be able to find that answer when they need it.
It's possible that more effective approaches to codifying knowledge will be developed that reduce this effect, allowing soft technologies to improve at rates closer to those of hardware. For now, though, it's reasonable to expect that soft technologies will improve more slowly than hard ones.
The exception may be when knowledge can be codified in software; in that form of soft technology, the codification tends to be very effective. However, not all soft technology can be expressed as software.
# Design Complexity
Technology is typically made up of many components—in a computer program, this might be the functions; in a factory, it might include the conveyor belts and manufacturing equipment and procedures; in a jet plane, it might include the engine, the wings, and the plane's fuselage.
Note that these components often include soft technologies, such as the assembly procedures in a factory, along with hardware, such as the parts of a jet plane. Also note that one can analyze technologies with different granularities: At a higher resolution, one could view the components of a jet plane as including the individual parts of the engine, such as the turbine, compressor, and fan. Or those components themselves could be decomposed even further, down to individual screws.
==The core idea underlying this model is that innovation can be viewed as a process of searching over a landscape of possibilities for each of these components.==
The model is implemented as a computer program. At a high level, here's how it works: The program receives a description of the components and how they depend upon one another, using a structure we already discussed, the "[[Product Architecture#Design Structure Matrix (DSM)|design structure matrix]]". The program then chooses an alternate possibility at random for one component of the overall system. Changing that component may require changing other components as well (a different wing design for a jet might require a smaller or larger engine, for example). So the program then assesses the overall cost of the altered technology, adding up the costs of all the components, some number of which will have changed. If the overall cost is lower, the innovation is kept; if it's higher or unchanged, it's abandoned. The program tracks the overall number of attempts, both successful and unsuccessful.
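To make the mechanism concrete, here is a minimal sketch of such a simulation in Python. It is an illustrative implementation under simplifying assumptions (uniform random costs and a randomly generated DSM), not the exact code used by McNerney et al. The DSM is a boolean matrix in which a marked entry in row _i_, column _j_ means that changing component _j_ requires changing component _i_, matching the matrix convention described below.

```python
import numpy as np

def improve(dsm, n_attempts=1_000_000, seed=None):
    """Simulate random attempts to improve a technology described by a DSM.

    dsm[i, j] == True means changing component j requires a change in
    component i, so column j lists the "outset" of component j.
    """
    rng = np.random.default_rng(seed)
    n = dsm.shape[0]
    costs = rng.random(n)                        # initial per-component costs
    totals = np.empty(n_attempts)
    for t in range(n_attempts):
        j = rng.integers(n)                      # pick one component to modify at random
        affected = np.flatnonzero(dsm[:, j])     # its outset: everything that must also change
        candidate = costs.copy()
        candidate[affected] = rng.random(affected.size)   # redraw costs of all changed parts
        if candidate.sum() < costs.sum():        # keep the change only if total cost falls
            costs = candidate
        totals[t] = costs.sum()
    return totals

# Illustrative run: changing any one of n components forces changes in itself
# plus d randomly chosen components.
rng = np.random.default_rng(0)
n, d = 10, 1
dsm = np.eye(n, dtype=bool)
for j in range(n):
    dsm[rng.choice(n, size=d, replace=False), j] = True
trajectory = improve(dsm, n_attempts=100_000, seed=1)
print(f"total cost: {trajectory[0]:.3f} -> {trajectory[-1]:.3f}")
```

Varying `n` and `d` in the last few lines gives the kinds of scenarios compared in the figure below.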
If a technology has a thousand components that depend on one another in complex ways, how do we keep track of all that in a way a computer can understand? We use the familiar DSM.
The number of dependencies particular components have will turn out to be very important in analyzing the model. They're straightforward to calculate, whether the DSM is represented in network form or matrix form.
In-degrees and out-degrees can be calculated easily from the matrix form of the DSM. To calculate the out-degree of a component, simply add up the marked boxes in its column. To calculate its in-degree, add up the marked boxes in its row.
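As a quick sketch of those two calculations on a small, hypothetical DSM (the matrix here is made up for illustration):

```python
import numpy as np

# Hypothetical 4-component DSM: dsm[i, j] = True means changing j requires changing i.
dsm = np.array([[1, 1, 0, 0],
                [0, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 1, 1]], dtype=bool)

out_degree = dsm.sum(axis=0)   # column sums: how many components each change propagates to
in_degree = dsm.sum(axis=1)    # row sums: how many components can force a change here
print(out_degree)              # [2 3 2 1]
print(in_degree)               # [2 1 3 2]
```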
## Results of the Model
We'll run the model under four different scenarios, in which the technology has either 10 or 30 components (n) and either one or four dependencies per component (d). The figure below shows the results, giving the change in costs over a million attempts at improvement.
![[Pasted image 20250329110028.png]]
> [!Figure]
> Adapted from McNerney, J., Doyne Farmer, J., Redner, S., & Trancik, J. E. (2011, May 31). Role of design complexity in technology improvement. _PNAS, 108_(22), 9008–9013. [URL](https://www.pnas.org/doi/pdf/10.1073/pnas.1017298108)
Note that within each row, the models vary in the number of components but not in the number of dependencies, and those graphs look very similar to one another. ==That means that changing the number of components doesn't affect the results much. Changing the number of dependencies, though, has a huge impact. The costs fall far more quickly with only one dependency compared with four.==
This makes a lot of sense if we look at this from the perspective of [[Modularity|modularity]]. The less the components depend on one another, the more modular the design of the technology. Intuitively, we should expect that if a technology is highly modular, innovations in one part of the technology will be easier to implement. So naturally, costs should come down with fewer attempts. This result gives a solid backing for that intuition.
## A more complex case of variable out-degrees
While we've already gotten a useful insight from this simple analysis, in the real world, it's very unlikely that every component will depend on the exact same number of other components. Handling this variability requires a more sophisticated analysis.
When the number of dependencies varies from one component to another, it turns out that the slowest improving components primarily determine how quickly the technology will improve. These slow-improving components are called "bottlenecks".
What makes a component a bottleneck? Let's take a battery as an example, and suppose you're focusing on the electrode. Changing it may require changes to all the components in its outset. So, for example, making the electrode thinner but wider might require modifying the separators, the housing, and the controller. Even if you save money on the electrode, the costs of the new separators, housing, and controller might offset that. So components with large out-degrees may be harder to improve.
Note, though, that there's another way to go about innovating the electrode: You might try modifying something in its inset. So, for example, suppose you change the electrolyte, and that requires different materials in the electrode but doesn't require changing its dimensions. Changing its material might not require changes to the separators, housing, or controller, eliminating those potential roadblocks. This would mean that the separators, housing, and controller aren't in the outset of the electrolyte, even though they are in the outset of the electrode. So changing the electrolyte might provide a shortcut to improving the electrode.
To identify shortcuts like these, then, we need to look at each component in the inset and find the one with the smallest out-degree. And if that number—the smallest out-degree in the inset—is still pretty big relative to other components, that component will be a bottleneck.
We can use the DSM in network form to analyze this. Consider the network below:
![[Pasted image 20250329110554.png]]
> [!Figure]
> Adapted from McNerney, J., Doyne Farmer, J., Redner, S., & Trancik, J. E. (2011, May 31). Role of design complexity in technology improvement. _PNAS, 108_(22), 9008–9013. [URL](https://www.pnas.org/doi/pdf/10.1073/pnas.1017298108)
- Step 1 (find the _inset_ of each component): Consider the component _i_. In the network model, the components that point to _i_ are the components whose modification forces a change in _i_: changing component _j_ (and hence its cost) will require a change in component _i_ (and hence its cost). The components pointing to _i_ are called _i_'s "inset". If we view the DSM in matrix form, the inset of _i_ consists of the columns that are marked in row _i_.
- Step 2 (calculate the out-degree of each element in the inset): Let's take the example of component _i_. The inset of _i_ contains _i_, _j_, _k_ and _l_. To find their out-degrees, we need to count the number of arrows going out from _i_ and also from _j_, _k_, and _l_. After doing that, we see that the out-degree of _i_ is 4, _j_ is 4, _k_ is 5, and the out-degree of _l_ is 6.
- Step 3 (calculate $d_x^{min}$ for each component): For a component $x$, $d_x^{min}$ is the smallest out-degree among the components in its inset. So for component _i_, $d_i^{min}$ is the minimum of 4, 4, 5, and 6, which is 4. To summarize, then, we calculated $d_i^{min}$ by finding the minimum out-degree of component _i_ and all of the components in its inset. Next, we would calculate $d_j^{min}$, $d_k^{min}$, and so on.
- Step 4 (calculate $d^*$ and identify bottlenecks): Take the maximum of the various $d_x^{min}$ values and call it $d^*$. The components for which $d_x^{min} = d^*$ are the bottlenecks. The higher the value of $d^*$, the tighter the bottleneck. (A code sketch of these four steps follows this list.)
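The four steps translate directly into a few lines of code. Here is a minimal sketch on a small, hypothetical DSM (not the network in the figure above), again using the boolean-matrix convention in which the marked columns of row _x_ form _x_'s inset:

```python
import numpy as np

def bottlenecks(dsm):
    """Return d* and the bottleneck components of a boolean DSM.

    dsm[x, j] == True means component j is in the inset of component x,
    i.e., changing j requires changing x. Assumes every component's inset
    is non-empty (the diagonal is marked).
    """
    out_degree = dsm.sum(axis=0)                    # step 2: out-degree of every component
    d_min = np.array([out_degree[dsm[x]].min()      # steps 1 & 3: smallest out-degree in x's inset
                      for x in range(dsm.shape[0])])
    d_star = d_min.max()                            # step 4: d* is the largest d_x^min
    return d_star, np.flatnonzero(d_min == d_star)  # the bottlenecks are the components achieving d*

# Hypothetical 5-component example: component 0 affects components 1, 2, and 3
# (and itself) but is affected by nothing else.
dsm = np.array([[1, 0, 0, 0, 0],
                [1, 1, 0, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 0, 0, 1, 0],
                [0, 0, 0, 1, 1]], dtype=bool)
d_star, which = bottlenecks(dsm)
print(d_star, which)   # 4 [0]
```

Here component 0 has an out-degree of 4 and nothing in its inset offers a shortcut around it, so it is the lone bottleneck and sets $d^* = 4$.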
Researchers found that $d^*$ is the variable that ultimately determines the rate of improvement of the technology. They ran models of this kind 50,000 times, with a million random improvement attempts on each run. Because of the randomness, different runs produced somewhat different results. The graphs below show the results across all of the runs for each DSM shown. The results are spread across the rainbow band, with the fastest-improving runs in blue and the slowest in red:
![[Pasted image 20250329112011.png]]
> [!Figure]
> Adapted from McNerney, J., Doyne Farmer, J., Redner, S., & Trancik, J. E. (2011, May 31). Role of design complexity in technology improvement. _PNAS, 108_(22), 9008–9013. [URL](https://www.pnas.org/doi/pdf/10.1073/pnas.1017298108)
From these results, we can conclude:
- The more modular the design of a technology is, the more quickly it's likely to improve.
- The components that form bottlenecks will determine how quickly the technology improves.
- If there's just one bottleneck, innovation is likely to occur in sudden leaps, when improvements to that component are found.
- The more bottlenecks there are, the smoother the improvement is likely to be.
Thus, focusing re-design efforts (to change the DSM) on removing the bottlenecks will likely yield the greatest gains.
## Conclusions
- Why does technological change tend to follow these laws? The model gives a mechanistic explanation for Wright's Law. It also explains why Moore's Law is often correct, since its predictions are identical to those of Wright's Law in the common situation that cumulative production grows exponentially with time.
- What are the mechanisms driving the change? The model suggests that as we try to improve technologies by improving their interconnected components, the most highly-connected components act as bottlenecks that slow the rate of innovation.
- Why do some technologies develop so quickly while others crawl along? A key factor is that those that are more modular, with fewer components depending on other components, will improve faster.
- Why do some technologies improve steadily, while others have sudden improvements? When a technology has a single component acting as a bottleneck, it's likely to improve suddenly when improvements are found for that component. On the other hand, when a technology has many components acting as bottlenecks that are dependent on the same number of components, improvement is more likely to be steady.
- Can we purposely design technologies for rapid improvement? We can increase their modularity, and where there are dependencies among the components, we can focus in particular on re-designing the technology to remove (or minimize) the bottlenecks.
- How do we choose between Moore's Law, Wright's Law, and Goddard's Law if they each fit the data about as well? This can be important because they make different predictions over time. This model suggests that Wright's Law is likely to be the better choice, since the model provides a mechanistic grounding for it.
- When a data-driven model fits the data well, what factors determine the parameters in the model? The model suggests that the exponent of the power law, $a$, is given by $a = 1/(\gamma d^*)$, where $d^*$ is the bottleneck measure defined above and $\gamma$ represents the difficulty of improving the components. (A short worked example follows this list.)
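Two of these points can be made concrete with a little algebra (the symbols and numbers below are illustrative, not taken from the readings). If cumulative production grows exponentially with time, $y(t) \propto e^{gt}$, then Wright's Law, $C \propto y^{-a}$, implies $C \propto e^{-agt}$, which is precisely the exponential decline over time that Moore's Law describes. And to see what the exponent formula implies, suppose $\gamma = 1$ and $d^* = 4$. Then $a = 1/(1 \times 4) = 0.25$, so each doubling of cumulative production cuts cost by a factor of $2^{-0.25} \approx 0.84$, a learning rate of roughly 16%. Tightening the bottleneck to $d^* = 8$ halves the exponent, and the learning rate falls to about 8% ($2^{-0.125} \approx 0.92$).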
# References
- Baldwin, C. Y., & Clark, K. B. (1997) Managing in an age of modularity. _Harvard Business Review._ [URL](https://hbr.org/1997/09/managing-in-an-age-of-modularity)
- Kavlak, G., McNerney, J., & Trancik, J. E. (2018). Evaluating the causes of cost reduction in photovoltaic modules. _Energy Policy, 123_, 700–710. [URL](https://hdl.handle.net/1721.1/123492)
- McNerney, J., Doyne Farmer, J., Redner, S., & Trancik, J. E. (2011, May 31). Role of design complexity in technology improvement. _PNAS, 108_(22), 9008–9013. [PDF](https://www.pnas.org/doi/pdf/10.1073/pnas.1017298108)
- Trancik, J. E., & Ziegler, M. S. (2023). _Accelerating Climate Innovation: A Mechanistic Approach and Lessons for Policymakers_. Massachusetts Institute of Technology. [URL](https://dspace.mit.edu/handle/1721.1/147765)
- Yachting World. (2015, December 26). Take a tour of supermaxi Comanche, a yacht so beamy she's called 'the aircraft carrier'. [URL](https://www.yachtingworld.com/yachts-and-gear/comanche-yacht-63102)