# System Level Simulation
As designers of software-intensive systems, we finally have something playing in our favor: software does not care what platform it runs on top of, as long as all its dependencies are met. In other words, as long as all the ingredients are present, the software will run.
The software that controls the systems we design tends to run on specialized computers. In complex systems, it is generally not a single, monolithic piece of software but a collection of binaries spread across multiple targets connected in a network through data interfaces.
There are two levels of dependencies to meet to make any software run: compile-time dependencies and runtime dependencies. It is one thing to fool software into building; it is quite another to fool it into executing consistently, that is, with good numerical results. If only the former is accomplished, the software will most likely crash as soon as it is put to run. At compile time, the software basically expects source code, compiled libraries, headers, and so on. At runtime, it expects values in the right places at the right times. If any of this is missing, the software will simply not run well or will require modifications to cope with the differences. What is more, for software that performs numerical computations, such as digital control systems, the underlying architecture can make a big difference: data types and their widths, and floating-point unit precision, all need to be consistent, otherwise some algorithms may not converge.
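As a toy illustration of this numerical sensitivity (the example below is not from the original text and assumes NumPy is available), the snippet accumulates the same increment at two floating-point widths; in an iterative control or estimation algorithm, this kind of drift is what can make results diverge between architectures:

```python
import numpy as np

def accumulate(step, iterations, dtype):
    """Repeatedly add `step` to an accumulator using the given float width."""
    acc = dtype(0.0)
    step = dtype(step)
    for _ in range(iterations):
        acc = acc + step
    return acc

# The exact result would be 100000.0 in both cases.
print(accumulate(0.1, 1_000_000, np.float32))  # drifts by hundreds of units
print(accumulate(0.1, 1_000_000, np.float64))  # stays extremely close to 100000.0
```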
In general, systems are fielded in a physical environment. Think of an aircraft, a spacecraft, or an autonomous vehicle. Once out in the field, functional consistency is provided by the physical environment itself. During development, engineers might need to replicate this environment to be able to design and verify the system before it goes out the door. However, if we are supposed to run the software against a [[Modeling The Environment|synthetic environment]], it means we will have to substitute the physical environment with some sort of artificial stand-in if we expect to obtain the same results.
To start off, the figure below depicts the real system deployed in the real environment.

> [!Figure]
> _A system composed of two separate physical subsystems, deployed in the real environment_
It is important to emphasize again that the system depicted in the figure above is the **real thing**. No simulation has taken place yet.
## Hardware-in-the-loop
Now we will take the first important step in our simulation journey. We want to run the system in an artificial environment, but still execute the real code on the real target CPU, unmodified. What would be the necessary steps? Well, let's keep the real target for now and start replacing some of the other parts with simulated counterparts. For this, we would need to connect the sensors and actuators to a synthetic environment that numerically models the physical laws of the real one. But because real sensors and actuators are made for interacting with real physical environments, we will have to create a piece of software that replicates the behavior of each sensor and actuator as seen from the control CPU on one side, and interacts with the synthetic environment on the other. Note that from now on we will only show one subsystem in the diagrams for clarity, but the same applies to any number of subsystems hooked together.

> [!Figure]
> _First iteration towards a simulated system: we still use the real target but sensors and actuators are now simulated and connected to a synthetic "universe" that is running in a simulation station_
With the setup depicted in the figure above, we can run the system as if it were fielded in the real environment. The fidelity of the runs will directly depend on the fidelity of the synthetic environment. Note that the simulation station is a standalone computer running the simulation environment. Also note that the simulation environment and the sensor/actuator simulators will most likely run in user space on that computer, so the sim station needs to have an operating system.
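To make the role of a sensor simulator more concrete, here is a minimal, hypothetical sketch (the class names, the placeholder "physics", and the wire format are invented for illustration): the simulator samples the synthetic environment on one side and produces the same kind of raw frame the real device would present to the target CPU on the other:

```python
import struct

class SyntheticEnvironment:
    """Tiny stand-in for the environment model running on the sim station."""
    def __init__(self):
        self.time_s = 0.0

    def step(self, dt):
        self.time_s += dt

    def true_angular_rate(self):
        # Placeholder physics; the real model would compute this numerically.
        return 0.2 * self.time_s

class SimulatedRateSensor:
    """Replicates the sensor as seen from the control CPU: it reads the
    synthetic environment and emits the raw frame the real device would."""
    def __init__(self, env, counts_per_rad_s=1000.0):
        self.env = env
        self.counts_per_rad_s = counts_per_rad_s  # hypothetical scale factor

    def raw_frame(self):
        counts = int(self.env.true_angular_rate() * self.counts_per_rad_s)
        payload = struct.pack(">Bh", 0xA5, counts)  # hypothetical: sync byte + 16-bit counts
        checksum = sum(payload) & 0xFF
        return payload + bytes([checksum])

env = SyntheticEnvironment()
sensor = SimulatedRateSensor(env)
for _ in range(3):
    env.step(0.1)                     # advance the synthetic universe
    print(sensor.raw_frame().hex())   # bytes handed to the interface toward the target CPU
```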
There is also the possibility of a hybrid scenario where only the actuator(s) are simulated and connected to the simulation environment, while the sensor stays real. In this case, the simulation environment will need to feed information to dedicated stimulation equipment to keep the test case consistent. For instance, if the sensor is an optical sensor, the stimulator will need to generate optical patterns that remain consistent with the progression of the simulation.
Thus far, the interfaces between the target CPU and the sensors and actuators have remained the real ones. That is, the application software in the target CPU keeps using the real drivers in the operating system, and data is sent and received over the real physical layers. This has advantages and disadvantages. The advantage is that the target CPU software remains untouched even though the hardware is connected to a simulated environment. The disadvantage is that the sim station's complexity grows, because it needs to host hardware for those same interfaces and include the drivers and everything else needed to make the communication happen.

> [!Figure]
> _Scenario with a simulated actuator and a real sensor being stimulated (note the last word: sTimulated, with a "t")_
An alternative to this is to replace the real interfaces (or complement them) with an ad-hoc "test interface" that feeds sensor and actuator data through a dedicated channel (see figure below). A popular choice for such an interface is [[High-Speed Standard Serial Interfaces#Ethernet|Ethernet]] due to its versatility.

> [!Figure]
> _Dedicated simulation interface between target HW and sim environment_
The clear disadvantage of this approach is that the drivers in the target CPU in charge of sending data to and receiving data from the sensors and actuators are left orphaned and no longer used. Additionally, the application software in the target CPU needs to be modified to accept data from the sim interface, and at the same time the drivers in the OS need to be removed from the loop.
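As a sketch of what the sim-station side of such a dedicated test interface could look like (the addresses, ports, and message layout below are assumptions, not an established protocol), sensor samples can be pushed and actuator commands polled over plain UDP:

```python
import socket
import struct

# Hypothetical endpoints of the dedicated "test interface".
TARGET_ADDR = ("192.168.1.10", 5005)   # target CPU listening for sensor data
SIM_STATION_PORT = 5006                # sim station listening for actuator commands

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", SIM_STATION_PORT))
sock.settimeout(0.1)

def send_sensor_sample(timestamp_s, value):
    """Push one sensor sample (timestamp + value, both doubles) to the target."""
    sock.sendto(struct.pack(">dd", timestamp_s, value), TARGET_ADDR)

def poll_actuator_command():
    """Return the latest actuator command from the target, or None if nothing arrived."""
    try:
        data, _ = sock.recvfrom(64)
        (command,) = struct.unpack(">d", data[:8])
        return command
    except socket.timeout:
        return None
```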
Now, in our system modeling quest, we face a challenge: how can we stop using the real target CPU and use a simulation of it instead? That would give us something quite close to a "digital twin". A few questions appear:
- Can we use an emulator? Is there an emulator available for the CPU architecture in the target?
- Do we use a dynamic translator?
- Do we use a virtual machine?
- Do we port the target application software into some other architecture?
One important factor: emulators tend to emulate only cores, not entire target systems (boards or units). If our target hardware is a board with various devices and peripherals such as PHYs, ADCs/DACs, or similar, the emulator will almost surely not include them. What happens, then, to the rest of the devices around the CPU? We must find a way of modeling them if we want the application software to keep working unmodified. We discuss this when we talk about [[Target Simulation (Virtual Platforms)|Virtual Platforms]].
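As a taste of what modeling those surrounding devices can involve, here is a toy, register-level model of a memory-mapped ADC, the kind of piece a virtual platform must supply so that an unmodified driver still finds "its" device; the register map and behavior are invented for this sketch:

```python
class AdcModel:
    """Toy register-level model of a memory-mapped ADC (invented register map)."""

    CTRL, STATUS, DATA = 0x00, 0x04, 0x08   # hypothetical register offsets

    def __init__(self, environment_voltage):
        self.environment_voltage = environment_voltage   # callable into the sim environment
        self._regs = {self.CTRL: 0, self.STATUS: 0, self.DATA: 0}

    def write(self, offset, value):
        self._regs[offset] = value
        if offset == self.CTRL and value & 0x1:           # "start conversion" bit
            volts = self.environment_voltage()
            self._regs[self.DATA] = int(volts / 3.3 * 4095) & 0xFFF
            self._regs[self.STATUS] |= 0x1                # "conversion done" flag

    def read(self, offset):
        value = self._regs[offset]
        if offset == self.DATA:
            self._regs[self.STATUS] &= ~0x1               # reading the data clears the flag
        return value

# The emulator would route the driver's load/store accesses to this model:
adc = AdcModel(environment_voltage=lambda: 1.65)
adc.write(AdcModel.CTRL, 0x1)
print(adc.read(AdcModel.DATA))   # raw counts derived from the synthetic environment
```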
## Software-in-the-loop
Most likely, the target hardware runs its own operating system, which needs to be hosted somewhere. If a true [[Semiconductors#Emulation and Virtualization|emulator]] were too slow for this, we might need to evaluate some other alternative. For now, let's carry on and assume there is such an ideal emulator modeling our target processor. The case is depicted below.

> [!Figure]
> _Target software now runs in an emulator (or a Virtual Platform), connected to the simulation station through a network socket_
This setup is now almost purely software. There is no real target hardware anymore, and an emulator of the target board has appeared. In the setup depicted right above, the application software from the target will need to be modified accordingly, incorporating the socket client necessary for communicating with the simulation station, plus any other modifications needed for the application or the operating system to run on the emulator. In extreme cases, for instance when the target uses a rare operating system, the target's OS may need to be swapped for a more suitable one. This will require a porting effort that may be non-negligible. Note that the simulation station and the target CPU emulator still run on two different machines.
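As a hedged sketch of the kind of modification this implies (the host name, port, and line-based JSON protocol here are assumptions of this example), the target application could link in a small socket client that stands in for the original driver calls:

```python
import json
import socket

class SimStationLink:
    """Socket client selected into the target build when running on the emulator;
    it replaces the driver calls that would otherwise reach real hardware."""

    def __init__(self, host="sim-station.local", port=6000):
        self.conn = socket.create_connection((host, port))
        self.io = self.conn.makefile("rw")

    def read_sensor(self, name):
        self.io.write(json.dumps({"op": "read", "sensor": name}) + "\n")
        self.io.flush()
        return json.loads(self.io.readline())["value"]

    def write_actuator(self, name, value):
        self.io.write(json.dumps({"op": "write", "actuator": name, "value": value}) + "\n")
        self.io.flush()

# In the application, the same call site can work against real drivers or the sim link:
#   rate = hal.read_sensor("gyro_x")   # `hal` is either a driver wrapper or a SimStationLink
```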
Could we, instead of using two separate computers, use some virtualization here? In that scenario, the target application and OS would be hosted in one VM, while the simulated universe and the simulated sensors and actuators would be hosted in another. If the operating systems are supported, this could work nicely on a type 2 hypervisor, for instance VMware Fusion or VirtualBox.

> [!Figure]
> _Target application software + OS and sim station running side by side on a type 2 hypervisor_
## Algorithm-in-the-loop
One last, very high-level approach is to take the application software from the target alongside the simulation environment and run both on the same operating system (either native or virtualized).
This is perhaps the most simplistic simulation architecture we can think of, and it requires very intensive pruning of the application software: all hardware-related routines such as drivers must be stripped out, along with anything OS-specific if the original OS differs from the one used here, for instance an [[Printed Circuit Boards#Embedded Software#RTOS|RTOS]]. This is usually called "algorithm in the loop", and it allows very fast execution because many software layers are gone. This scheme is useful when the focus is on testing high-level algorithms that are hardware-agnostic. For instance, networking algorithms in mobile networks or control algorithms for avionics are typical candidates for algorithm-in-the-loop schemes like the one depicted in the figure below. The obvious downside is that there is almost no correlation left between the environment the application software originally ran in and this one.
Note that, in the figure below, the application software and the simulation environment communicate through a socket on localhost, but this communication might also be done using IPC (inter-process communication) facilities from the shared OS or memory-mapped files. The application software and the simulation environment could also run in their own containers.

> [!Figure]
> _Simplistic algorithm-in-the-loop simulation environment_
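To make the idea concrete, here is a minimal, self-contained sketch of an algorithm-in-the-loop run (the PI controller, the first-order plant, and all gains are invented for illustration); a real setup would exchange these values over a localhost socket or IPC as described above, but the structure of the loop is the same:

```python
def controller(setpoint, measurement, integrator, kp=2.0, ki=2.0, dt=0.01):
    """The 'algorithm under test': a PI controller stripped of drivers and OS calls."""
    error = setpoint - measurement
    integrator += error * dt
    return kp * error + ki * integrator, integrator

def plant(x, u, dt=0.01):
    """Minimal stand-in for the simulation environment: a first-order plant."""
    return x + dt * (-x + u)

x, integ = 0.0, 0.0
for _ in range(500):                       # fixed-step co-simulation loop
    u, integ = controller(setpoint=1.0, measurement=x, integrator=integ)
    x = plant(x, u)
print(round(x, 3))                         # settles close to the setpoint of 1.0
```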
The downside of this simplistic scenario is also that, if we compare the setup in the figure above with the real target CPU connected to the simulation environment as depicted at the beginning of this section, the line between "reality" and numerical simulation blurs a bit too much: the two become so intertwined that it may be difficult to replicate real scenarios in such an idealized environment, which limits its usefulness.
---
Note that here we are talking about "system" simulation without really specifying what this system is. A more accurate title would have been _Cyber-physical_ System Simulation, since we are mostly focusing on systems that interact with (i.e., sense and act upon) a physical environment.