# 5 Modular Spacecraft Architectures
“As an architect, you design for the present, with an awareness of the past, for an unknown future”
― [Norman Foster](https://www.fosterandpartners.com/)
Abstract:
Designers will claim by default that their designs are modular. But experience shows they tend to create largely integral architectures, often under the pretense of vertical integration and in-house know-how protection, incurring troubling non-recurring costs, creating high switching barriers and, more importantly, delaying time to orbit (and to market). The reality is that both vertical integration and IP protection can be achieved using modular open architectures. The important bit is to identify which blocks of the architecture are worth reinventing from scratch. In NewSpace, core differentiating technologies tend to be in the payloads and sensors. Hence, the spacecraft bus architecture can be commoditized by choosing standard backplane-based form factors, high-speed serial communication and configurable interconnect fabrics.
Keywords: Modularity, Conceptual Design, Switched Fabrics, Interconnect Fabrics, VPX, OpenVPX, SpaceVPX, CompactPCI Serial Space, Integrated Modular Avionics, IMA, IMA-SP, Switch Barriers.
With the previous chapter about complexity creep vigilantly watching us, let’s start addressing actual spacecraft design matters.
We spoke about how life cycles and breakdown structures are arbitrary discretizations we perform to better organize and manage the task of synthesizing a system (or, more generally, solving a problem); discretization is an essential part of engineering design. As we break down the system into its constituent parts, we create a network of entities whose relationships define, among other things, ownership and composition. By creating such a network, we allow the global “main” function to emerge. Meaningful functions needed for the system to perform can be grouped into functional components. Functional components are, as the name indicates, function providers; they provide at least one function. Functional components are seldom atomic; they have internal structures of their own. For a functional component to be able to provide its function, an interface needs to exist. An interface is a function-coupling mechanism or artifact. An interface allows a component to convey, contribute and/or connect its function across boundaries to another component or components, depending on the topology, equally receiving the same from other components. Component boundaries define where a functional entity begins and where it ends; an interface is a contract between functional component boundaries. This contract specifies how the function will be correctly supplied and consumed. Often, the function-coupling between components requires transferring something (information, matter, energy), in which case the interface acts as a channel for that _something_ to flow.

Think about a car as an object: the door is a component with an interface which couples to the car body and contributes its main functionality of communicating the inside of the cockpit with the outside world and vice versa. The car door provides secondary functions such as contributing to the structural sturdiness of the body and to the general aesthetics; i.e., making the car look nice. Car doors have interfaces for them to couple to the higher-level system, but at the same time car doors are interfaces in their own right. Think about a bridge connecting two roads: the bridge itself is a component of reasonable complexity, but from the road system perspective, the bridge is just an interface. It is the point of view in the system hierarchy from where we observe the bridge that makes it look like a component or an interface.
When does a component become _modular_? It is modular if its relationship to the higher-level system is not “monogamous”, if such a term can be used as an analogy. This means that the component can provide its function to some other higher-level system (with either a similar or a totally dissimilar goal) without the need to introduce any change to it. Are car doors modular? Certainly not. We cannot take a door from a Hyundai, put it in a Mercedes and expect it to work, because the _functional coupling_ depends on the mechanical form factor: the door was designed to specifically fit the Hyundai. On the other hand, plenty of car parts are modular, like tires, windshield wipers, lightbulbs, and so on. We could take a tire from a Hyundai and put it in a Mercedes, provided the cars are of a roughly similar type; performance might be somewhat impacted, but function would be reasonably coupled, because the interface to the axle is standardized across vendors. If a software library can be used in different applications (provided the way to interface with it is understood and documented), it is modular. In fact, a software library can in turn contain other libraries, and so on. Software is a great example of how modularity can enable great ideas. The whole open-source community sits on modular thinking. Software gets along well with modular approaches because its interfaces are extremely malleable and development cycles are far shorter, enabling fast “trial and error” loops.
Functionality never exists in isolation: we have to connect functionality to contexts through different mechanisms. Take for example the Unix operating system. Unix comes with a rich set of shell commands such as ls, cat, nc, and sort. Let’s take the useful netcat (or nc) as an example. Netcat is a versatile tool used for reading from and writing to network sockets using TCP or UDP. Netcat can be piped (combined) with other commands, providing a rich set of tools to debug and probe networks. Netcat provides its functionality by means of the shell context. When we code, for example in C, we frequently find ourselves needing netcat-like functionality in our code. We very often end up creating handcrafted, downscaled clones of netcat. We could just use netcat, but we cannot (at least not easily or effectively). Since our context (a process of our own) is different from the one netcat was designed for (a shell), we need to grow a homegrown netcat() ourselves.
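To make the point concrete, here is a minimal sketch of such a homegrown clone, in Python rather than C for brevity. The function name `tcp_exchange` and its defaults are illustrative, not from any particular codebase; it reimplements only one narrow slice of what `nc` does from the shell:

```python
import socket

# A tiny, netcat-like helper: open a TCP connection, send a payload,
# and return the first chunk of the reply. This covers only one of the
# many use cases `nc` handles from the shell.
def tcp_exchange(host: str, port: int, payload: bytes, timeout: float = 5.0) -> bytes:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(payload)
        return sock.recv(4096)

# Roughly equivalent to: printf 'HEAD / HTTP/1.0\r\n\r\n' | nc example.com 80
reply = tcp_exchange("example.com", 80, b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(reply.decode(errors="replace"))
```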
Modularity is a strategic design decision, for it can impact factors such as cost, schedule and risk. It should be said that when system integrators own the whole architecture and also own the life cycle of some of the components in the architecture, modularity can be difficult to justify, unless there is some potential for a component to be reused in another system across the product line. One may think that there is no real reason **_not_** to go modular, but it might be a tricky analysis. The space industry is famously known for its non-modular choices. Most of the satellite platforms in the world are designed for specific payloads. This means the system and the component (payload) have a very _monogamous_ relationship; changing the payload for a totally different payload is nearly impossible in every single way (mechanically, electrically, etc.). Years ago, a project involving a spacecraft bus provider X and a NewSpace payload provider Y was kicked off as a joint endeavor. The bus provider was quite “classic space” minded, whereas the payload company had a concept of operations (CONOPS) which was very NewSpace. Neither the bus nor the payload had been designed in a modular way, meaning that to make their functionalities couple, a good deal of alteration at both ends was needed. Bespoke software was created for this troublesome coupling to happen. This was largely ad-hoc software that never got the chance to be extensively tested due to the tight schedule. The payload failed after a few weeks of operations due to a software bug which incorrectly handled persistence and kept locking a serial communication channel between bus and payload, making communication impossible. The failure is quite poetic in itself: hacking your way into making two components couple when they were not designed to couple tends to end badly. You can still put a puzzle together with your eyes closed; pieces will eventually fit if you push hard enough. But when you open your eyes it will not look nice.
Modularity can also be unattractive from a business perspective. When you go modular, you risk losing your identity and being treated as a commodity. If a customer can quickly replace your module with a competitor’s, they might. The automotive industry applies very selective criteria to modularity, and the reasons are cost but, above all, differentiation and brand image. Automakers go modular on things that do not impact their brand image or the aesthetic integrity of their products, but do not go modular on things that do impact such integrity. Organizations can choose to design large and complex things in a modular way if it is for their own benefit; for example, automotive companies make their engines and chassis modular so they can use them across their product line. In the aerospace industry, turbofan engines are highly complex devices but also modular; they are designed so they can work across different aircraft. Modularity is highly connected to the system hierarchical breakdown. It is the architects’ decision to define the scope of the modular approach in a system design. For example, in space applications, the whole payload can be taken as a functional component. This way, the coupling between the bus and the payload could be designed to be modular if needed: a multi-mission platform. But then, inside the bus, the avionics could be designed to be modular as well. So modularity is recursive, like everything related to the recursive breakdowns we have been working with for a few chapters now.
But let’s get a bit less abstract and run the exercise of trying to identify the main functional components of a typical spacecraft:
|**Functional Component**|**Function it provides**|**How function is coupled to the context**|
|---|---|---|
|Power Management|1. Convert energy from the Sun<br>2. Store energy<br>3. Distribute energy<br>4. Determine electrical state by measurement|Mechanical (fixations to structure), thermal (conduction), electrical (power and data connectors), software (command and data handling)|
|Structure|1. Provide physical housing for bus and payload<br>2. Protect from launch loads|Mechanical, thermal|
|Attitude Management|1. Determine orientation<br>2. Alter orientation|Mechanical (fixations to structure), thermal, electrical (power and data connectors), software (command and data handling)|
|Thermal Management|1. Determine thermal state<br>2. Alter thermal state|Mechanical (fixations of heaters and thermostats to structure), thermal, electrical (power and data connectors), software (command and data handling)|
|Comms Management|1. Convey data to/from spacecraft|Mechanical (fixations to structure), thermal, electrical (power and data connectors), software (command and data handling)|
|Resources Management|1. Store data<br>2. Collect data<br>3. Detect and isolate failures|Mechanical (fixations to structure), thermal, electrical (power and data connectors), software (command and data handling)|
Table 5.1 - A Spacecraft functional breakdown
Having these listed in a table does not help in visualizing the hierarchical nature of all this, so let’s draw accordingly:

Figure 5.1 - A more graphical functional breakdown of a spacecraft
The diagram depicts a functional breakdown which includes the minimum amount of functions for a spacecraft to perform its main (or master) function: “Operate a Payload in Orbit”. The main function flows down into sub-functions as depicted in the diagram. All the sub-functions contribute to the main one. These functional blocks can also be seen as handlers; i.e. they are supposed to _take care_ of something.
Now, this functional breakdown structure is totally immaterial. There is not a single mention of what the physical breakdown structure looks like. The blocks in Fig. 5.1 are not physical entities, just abstract blocks that each provide a function the system needs for its core function. Function, by itself, is weightless and intangible, which is great news for designers because it makes functional analysis very flexible. What makes designers’ lives harder is the way the elements are put together to realize the function. Abstract functional architectures will not get the job done, so the design team must distribute functions across the physical architecture (i.e., allocate functions to physical blocks). Bear with me here in the sense that software can also count as “physical”: it is an entity (binary/object code) which is supposed to perform the function. Physical here means “an entity which executes the function in the operative environment” rather than physical in the sense that you can hold it in your hand.
This is the point when it starts to be the right time for designers to evaluate whether to take the modular path (i.e., go modular) or not, hence the importance of Functional Analysis when it comes to modular thinking. It is from the functional breakdown structure that modularity can be evaluated. The game of connecting functions to physical/executable blocks is a many-to-many combinatorial challenge. Many functions can be allocated to one single physical/executable element, just as one single function can be spread across many physical/executable elements.
Let’s take as an example the function: “Handle Power”. This functional component is composed of the following sub-functions:
|**Sub-Function of “Handle Power”**|**Physical element to realize it**|
|---|---|
|“Convert Solar Energy to Electrical Energy”|Solar Panels|
|“Store Electrical Energy”|Battery|
|“Distribute Electrical Energy”|PCDU (Power Control and Distribution Unit)|
|“Determine Overall Electrical State”|PCDU (Power Control and Distribution Unit)|
Table 5.2 - Sub-functions of “Handle Power” and their physical allocation
Each one of the physical elements listed above must couple to the system in a way that lets it deliver its function accordingly. Coupling will need to take place in several ways: mechanical, electrical, thermal, and software (in case the element contains software, commands, telemetry, etc.). From the example above, a first challenge comes up: “Distribute Electrical Energy” and “Determine Overall Electrical State” share the same physical entity, the PCDU, which is, in this case, a metal box with a PCB inside and connectors. We could have chosen differently, by creating a new board called “PMU” (Power Measurement Unit) which would map 1:1 with the “Determine Overall Electrical State” function. But is it worth it? Probably not; more mass, more software, more complexity; i.e., little gain. Hence, we decide to map two functions to one physical component. The system design quickly imposes a challenge: the functional architecture does not map 1:1 to the physical architecture (a strict 1:1 mapping is what is usually called a federated architecture), which brings in the decision about centralization vs. decentralization, among other similar dilemmas. The next section will present a graphical way of depicting how the functional breakdown structure maps to the physical one, by using concept maps as function-structure diagrams.
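As a side note, even a throwaway script can keep this function-to-structure allocation honest while it is still fluid. Below is a minimal sketch in Python (the dictionary form is illustrative; the names are taken from Table 5.2) that inverts the allocation to show how many functions each physical element absorbs, which is exactly the kind of view that flags integration decisions like the PCDU one:

```python
# Function -> physical element(s) allocation, as in Table 5.2 (illustrative form).
allocation = {
    "Convert Solar Energy to Electrical Energy": ["Solar Panels"],
    "Store Electrical Energy": ["Battery"],
    "Distribute Electrical Energy": ["PCDU"],
    "Determine Overall Electrical State": ["PCDU"],
}

# Invert the map: which physical element realizes which functions?
by_element: dict[str, list[str]] = {}
for function, elements in allocation.items():
    for element in elements:
        by_element.setdefault(element, []).append(function)

for element, functions in sorted(by_element.items()):
    print(f"{element}: {functions}")
# The PCDU shows up with two functions -> not a federated (1:1) mapping.
```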
Architectural decisions are shaped by context and constraints. For example, cost is a common driver in NewSpace architectures, and it sneaks into the decision-making about function allocation. Mass is the other one; although launch prices have decreased in recent years, less mass still means a cheaper launch, easier transportation, etc. This is a strong incentive to concentrate functions in as few physical elements of the architecture as possible (an integrated architecture). NewSpace projects understandably choose integrated approaches, typically allocating as many functions to as few physical blocks as possible. There are dangers associated with this “all eggs in one basket” approach, related to reliability and single points of failure. A reasonable failure analysis should be performed, and the risks taken should be understood and communicated. New technologies such as hypervisors and virtualization are also enabling integrated architectures; these technologies are quickly maturing and being standardized for mission-critical applications.
## Function-Structure Diagrams
Crossing the boundary between functional analysis and the physical/executable architecture is never easy. As said before, when the functional and physical/executable architectures depart from each other, two sorts of “universes” are spawned. It is important to keep visual cues about how those two domains (functional and physical) remain mapped to each other. By doing this, the design team can track design decisions which might be reviewed or changed later in the process. Concept maps can be used for this. In the diagram below (Fig. 5.2), the boxes in blue are abstract functions and subfunctions, whereas the yellow blocks are actual physical/executable elements of the architecture. Note that this type of diagram can accompany the early-stage analysis of requirements (if there are any…). Typically, requirements at early stages point to functions and not to specific syntheses of functions (how functions are realized). This means a concept map with the initial understanding of the required functions can be transitioned into a function-structure diagram as the architectural decisions on how to map functions to blocks are made.

Figure 5.2 -Function Structure Diagrams

Figure 5.3 - Adding more details to Function-Structure diagram
## Conceptual Design
There is a great bibliography on spacecraft design, such as (Brown, 2002). Entire books are written about this topic, so this section only aims to summarize and give a quick glimpse of how the design process kicks off, and of what defines how a spacecraft ends up looking. This section is mostly aimed at a non-technical audience willing to understand more about the type of tasks performed during the early stages of the design process of a spacecraft. This section is for newcomers to space. If you’re a space nerd or you have put a man on the Moon, you can safely skip it before you get offended.
In this section, it will be assumed the project has just been kicked off, and everything starts from scratch. That’s what conceptual design is about: starting off from a few scribbles on a whiteboard, but not much else. This is a stage of very high uncertainty, and as such it requires lots of analysis and firm decision-making. Typically, during conceptual design there might be (as in, there will be) conflicting opinions about some topic or idea, and it is the Systems Engineer’s task to resolve those by analyzing the alternatives carefully, decidedly choosing one, and allowing the team to move ahead. It is often better to move ahead with an idea which might still need refining than to stop everything until the last detail is sorted out. Frequently, during conceptual design stages, teams get themselves into _analysis-paralysis_ swamps where discussions can take days and nothing is decided. The design activity freezes in favor of pointless deliberations which someone needs to break. At this stage the role of an experienced Systems Engineer is essential, otherwise a great deal of time, energy and momentum is lost. Eternal deliberations keep the whole organization away from the market. While teams stop to argue about how to split the atom, a competitor is getting closer to a win. Conceptual design must move ahead.
Conceptual design relies greatly on sketching; this should be no surprise at this point, as we emphasized a few chapters ago how diagram-driven engineering is. Sketches give form to abstract ideas and concepts, and they become the first two-dimensional shapes a project can offer to the people involved. Sketching is a form of communication, and it should be not only encouraged but actively trained across the organization. Unfortunately, it seems sketching does not get as much attention in engineering schools as it should. There should be more courses offered in sketching and drawing in support of design projects. Teaching basic techniques in freehand sketching would help students generate quicker and more effective visualizations of ideas during their professional careers.
Note: It is very useful to take photos of and archive sketches drawn on whiteboards and/or in notebooks, for they become the first visual records of the project and an important archive of design decisions as the organization matures.
Now, what should we expect from conceptual design as an activity? The main outcome of this stage should be (as a minimum):
● A rough physical configuration (mainly the bus form factor, the solar panels’ tentative location and area, and the payload location).
● An understanding of the power margins (i.e., the power generation vs. power consumption ratio), along with power storage (i.e., battery sizing).
● An understanding of what sensors and actuators the spacecraft will use to manage its orientation in space and its position in orbit.
● A preliminary understanding of how to keep things within safe thermal margins.
● A rough understanding of the mass.
● A block diagram.
None of the bullets listed above is totally isolated from the rest. They are all interrelated, and at this stage they are all very preliminary and rough. Many things are too fluid to be taken very seriously, and the idea is to refine them as the process goes. As we said, we don’t go through the conceptual stage only once. We run through it several times: we propose something, things get analyzed and reviewed; if changes are needed, changes are made, and the process repeats. The design evolves and often ends up looking quite different from how it was conceptualized. It is always an interesting and somewhat amusing exercise, once a project has reached production, to go back to the conceptual stage notes and diagrams and see how different things turned out to be in the end. Conceptual design does not rigidly specify anything but suggests paths or directions which will or will not be taken, depending on many factors.
At the beginning, spacecraft engineering does not look very fancy; it is more or less a bunch of spreadsheets, some loose code, and analysis logs. Mostly numbers. The good thing is, to get started with conceptual design, not many things are needed: a rough idea of the payload concept of operations (CONOPS) and some ideas about the target orbit.
The payload’s concept of operations will provide a good indication of what type of attitude profile the spacecraft will have to maintain throughout its life. For example, a Synthetic-Aperture-Radar (SAR) payload will need to operate by side-looking, meaning that frequent movements (i.e. slewing) to achieve different look angles will be needed. Or, if the radar will always look at one particular angle, we could design it in a way that the side-looking attitude would be the “default” attitude[[1]](#_ftn1). Other types of payloads, for example for comms missions, typically stay with one axis pointing to Earth, so their antennas can continuously cover the desired areas. Some other, more advanced payloads could be gimballed and do the tracking on their own, leaving the satellite with the freedom to point wherever needed. Payloads greatly define how accurate and precise the orientation determination and control needs to be, from coarse pointing to very precise. For example, a laser comm terminal for very high-speed data transfer, or a telescope, will impose stricter requirements than, say, a comms relay spacecraft with omnidirectional antennas on board.
The other important input for the conceptual design process is the orbit chosen for the mission, so let’s quickly discuss how orbits are defined. Orbits are geometrically described by orbital (Keplerian) elements, graphically depicted in Fig 5.5. Orbital elements are a set of parameters that analytically define the shape and size of the ellipse, along with its orientation in space. There are different ways of mathematically describing an orbit, with orbital elements being the most common. First, let’s quickly recap the typical parameters of a generic ellipse (Fig. 5.4):

Figure 5.4 - Elements of the ellipse (By Ag2gaeh - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=57497218)
● Eccentricity (_e_)—shape of the ellipse, describing how much it is elongated compared to a circle. Graphically, it is the ratio of the distance from the center to the foci (F) and the distance from the center to the vertices. As the distance between the center and the foci approaches zero, the ratio approaches zero and the shape approaches a circle.
● Semimajor axis (_a_)—the sum of the periapsis and apoapsis distances divided by two. For classic two-body orbits, the semimajor axis is the distance between the centers of the bodies, not the distance of the bodies from the center of mass.
Two elements define the orientation of the orbital plane in which the ellipse is embedded:
● Inclination (_i_)—vertical tilt of the ellipse with respect to the reference plane, measured at the ascending node (where the orbit passes upward through the reference plane, the green angle _i_ in the diagram). The tilt angle is measured perpendicular to the line of intersection between the orbital plane and the reference plane. The reference plane for Earth-orbiting craft is the Equator. Any three points on an ellipse will define its orbital plane. The plane and the ellipse are both two-dimensional objects defined in three-dimensional space.
● Longitude of the ascending node (_Ω_)—horizontally orients the ascending node of the ellipse (where the orbit passes upward through the reference plane, symbolized by _☊_) with respect to the reference frame's vernal point (symbolized by ♈︎). This is measured in the reference plane and is shown as the green angle _Ω_ in the diagram.
The remaining two elements are as follows:
● Argument of periapsis (_ω_) defines the orientation of the ellipse in the orbital plane, as an angle measured from the ascending node to the periapsis (the closest point the satellite object comes to the primary object around which it orbits, the blue angle _ω_ in the diagram).
● True anomaly (_ν_, _θ_, or _f_) at epoch (_t_0) defines the position of the orbiting body along the ellipse at a specific time (the "epoch").

Figure 5.5 - Orbit elements (credit: Lasunncty at the English Wikipedia / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/))
The mean anomaly _M_ is a mathematically convenient fictitious "angle" which varies linearly with time, but which does not correspond to a real geometric angle. It can be converted into the true anomaly _ν_, which does represent the real geometric angle in the plane of the ellipse, between periapsis (closest approach to the central body) and the position of the orbiting object at any given time. Thus, the true anomaly is shown as the red angle _ν_ in the diagram, and the mean anomaly is not shown. The angles of inclination, longitude of the ascending node, and argument of periapsis can also be described as the Euler angles defining the orientation of the orbit relative to the reference coordinate system. Note that non-elliptic trajectories also exist, but are not closed, and are thus not orbits. If the eccentricity is greater than one, the trajectory is a hyperbola. If the eccentricity is equal to one and the angular momentum is zero, the trajectory is radial. If the eccentricity is one and there is angular momentum, the trajectory is a parabola. Note also that real orbits are perturbed, which means their orbital elements change over time. This is due to environmental factors such as aerodynamic drag, Earth oblateness, gravitational contributions from neighboring bodies, etc.
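The mean-to-true anomaly conversion mentioned above is a small but classic computation: Kepler's equation M = E − e·sin E has no closed-form solution for the eccentric anomaly E, so it is solved iteratively. A minimal sketch in Python (the function name is ours, for illustration):

```python
import math

def true_anomaly_from_mean(M: float, e: float, tol: float = 1e-12) -> float:
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration,
    then convert the eccentric anomaly E to the true anomaly nu (all radians)."""
    E = M if e < 0.8 else math.pi          # common starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    # E -> nu via the half-angle identity
    return 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                            math.sqrt(1 - e) * math.cos(E / 2))

# Example: M = 30 deg, e = 0.1 -> nu runs slightly ahead of M
print(math.degrees(true_anomaly_from_mean(math.radians(30), 0.1)))
```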
If the orbit is known, at least roughly, it provides the design team a good deal of information to drive initial analyses on power generation. The orbit geometry, and more specifically its orientation in inertial space, defines how the Sun behaves with respect to the spacecraft’s orbital plane, in the same way the Sun behaves differently in the sky depending on where on Earth we stand. The way the Sun vector projects onto the orbit plane will influence the way the spacecraft scavenges energy from sunlight. Let’s quickly see how this Sun vector is defined. We shall start by describing the geometry of a celestial inertial coordinate system (with the X axis pointing to the vernal equinox[[2]](#_ftn2), the Z axis aligned with the Earth’s axis of rotation, and the Y axis completing the triad following the right-hand rule) and identifying some important planes: the ecliptic plane (the plane in which the Earth orbits the Sun) and the Equatorial plane, which is an imaginary extension into space of the Earth’s Equator (Fig. 5.6).

Figure 5.6 - Equatorial Plane, Ecliptic plane and Sun position
We define the solar vector, **s**, as a unit vector in this coordinate system which points towards the Sun (Fig. 5.7).

Figure 5.7 - Vector s points to the Sun
The apparent motion of the Sun is constrained to the Ecliptic Plane and is governed by two parameters: ε and Γ.

Figure 5.8 - Sun revolves around the Ecliptic Plane
Where ε is the obliquity of the Ecliptic and, for Earth, is 23.45°; Γ is the Ecliptic True Solar Longitude and changes with date. Γ is 0° when the Sun is at the Vernal Equinox. In order to find the position of the Sun starting from the inertial frame of reference, we must devise a frame of reference which is aligned with the ecliptic plane. Let’s create a basis of orthogonal unit vectors at the center of this inertial system and define a unit vector **s** along the X-axis of the test system (Fig. 5.9a). What we need to do in order to point **s** to the Sun is to tilt the test system an angle ε around its X-axis in order to align its xy plane with the ecliptic plane (Fig. 5.9b). Next, we just need a rotation of an angle Γ about the z-axis of the rotated _test_ system; **s** now points in the direction of the Sun (Fig. 5.9c).


Figure 5.9 (a,b,c) - Obtaining a vector s to point to the Sun
The vector **s** now revolves around the ecliptic plane. To better evaluate the behavior of the Sun with respect to a specific orbit, we need to geometrically define the orbit orientation. For this, we must find a vector normal to the plane described by our orbit, so let’s define the geometry accordingly:

Figure 5.10 - Orbit plane geometry with respect to inertial frame of reference and vector normal to the orbit plane
From the figure (Fig. 5.10), _i_ is the test orbit inclination, i.e. the orbit’s angular tilt from the equatorial plane; Ω is the Right Ascension of the Ascending Node, i.e. the angle, measured in the equatorial plane, between the inertial x-axis and the point where the orbit crosses the equatorial plane going from South to North.

Figure 5.11 - Orbit plane, orbit normal, and ecliptic plane.

Figure 5.12 - Phi (𝜙) is the angle between the Sun vector and the normal to the orbit plane
The beta (𝜷) angle is the angle between the projection of **s** in the orbital plane and **s** itself. The easiest way to calculate the angle between a vector and a plane is to determine the angle between the vector and a vector normal to the plane, denoted in Fig 5.12 by 𝜙 (denoting now the _o_ and _s_ vectors with hats since they are unit vectors):

$$\cos\phi = \hat{o}\cdot\hat{s}$$
But:

$$\beta = 90^{\circ} - \phi \quad\Rightarrow\quad \sin\beta = \hat{o}\cdot\hat{s}$$
We observe that 𝜷 is limited by:

$$-90^{\circ} \le \beta \le +90^{\circ}$$
The beta angle is not static; it varies constantly. The two factors that affect beta variation the most are:
1. The change of seasons (variation in Γ).
2. Perturbation of the orbit due to the oblateness of the planet (variation in Ω, and rotation of the apsides).
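Before looking at those variations in detail, the geometry above can be tied together numerically. Below is a minimal sketch, assuming the rotation sequence described for the Sun vector (tilt by ε, then rotate by Γ) and the standard construction of the orbit normal from i and Ω; the function names and example values are illustrative:

```python
import numpy as np

def sun_vector(eps_deg: float, gamma_deg: float) -> np.ndarray:
    """Unit Sun vector in the inertial frame: tilt x_hat by eps about X,
    then rotate by Gamma about the tilted z-axis (as in Fig. 5.9)."""
    eps, gam = np.radians(eps_deg), np.radians(gamma_deg)
    return np.array([np.cos(gam),
                     np.cos(eps) * np.sin(gam),
                     np.sin(eps) * np.sin(gam)])

def orbit_normal(i_deg: float, raan_deg: float) -> np.ndarray:
    """Unit normal to the orbit plane, built from inclination and RAAN."""
    i, raan = np.radians(i_deg), np.radians(raan_deg)
    return np.array([np.sin(raan) * np.sin(i),
                     -np.cos(raan) * np.sin(i),
                     np.cos(i)])

def beta_deg(eps_deg, gamma_deg, i_deg, raan_deg) -> float:
    """beta = 90 deg - phi, i.e. arcsin(o_hat . s_hat)."""
    dot = sun_vector(eps_deg, gamma_deg) @ orbit_normal(i_deg, raan_deg)
    return float(np.degrees(np.arcsin(dot)))

# Example: i = 97.8 deg, RAAN = 0 deg, at Gamma = 90 deg (northern solstice)
print(beta_deg(23.45, 90.0, 97.8, 0.0))
```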
The variation that concerns design engineers the most is the one due to precession of the orbit. Earth is not a perfect sphere; its equatorial bulge produces a torque on the orbit. The effect is a precession of the orbit’s ascending node (meaning a change of orientation in space of the orbital plane). Precession is a function of orbit altitude and inclination.

Figure 5.13 - Orbit precession
This variation is called the Ascending Node Angular Rate or precession rate, ωp, and is given by:

$$\omega_p = -\frac{3}{2}\, J_2 \left(\frac{R_E}{a\,(1-e^2)}\right)^{2} \omega \cos i$$
Where:
● ωp is the precession rate (in rad/s)
● RE is the body's equatorial radius (6378137 m for Earth)
● a is the semi-major axis of the satellite's orbit
● e is the eccentricity of the satellite's orbit
● ω is the angular velocity of the satellite's motion (2π radians divided by its period in seconds)
● i is its inclination (in degrees)
● J2 is the body's "second dynamic form factor" (1.08262668×10⁻³ for Earth).
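A quick numerical check of the formula above, sketched in Python (the helper name and the 550 km example are ours; the constants are the ones listed). For a near-polar retrograde orbit, the node should drift eastward by roughly the ~0.9856°/day needed for a sun-synchronous orbit:

```python
import math

def nodal_precession_rate(a_m: float, e: float, i_deg: float,
                          mu: float = 3.986004418e14,
                          R_E: float = 6378137.0,
                          J2: float = 1.08262668e-3) -> float:
    """Secular drift of the ascending node (rad/s) due to J2."""
    omega = math.sqrt(mu / a_m**3)   # satellite angular velocity, 2*pi / period
    p = a_m * (1.0 - e**2)           # a(1 - e^2)
    return -1.5 * J2 * (R_E / p)**2 * omega * math.cos(math.radians(i_deg))

# Example: ~550 km circular orbit at i = 97.6 deg
rate = nodal_precession_rate(6378137.0 + 550e3, 0.0, 97.6)
print(math.degrees(rate) * 86400.0, "deg/day")   # ~ +0.99, close to sun-synchronous
```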
Rotation of the apsides is the other perturbation to consider. It is also due to the Earth's bulge and is similar to the regression of nodes. It is caused by a greater-than-normal acceleration near the equator and subsequent overshoot at periapsis, resulting in a rotation of the periapsis. This motion occurs only in elliptical orbits (Brown, 2002).
As 𝛽 changes, there are two consequences of interest to thermal engineers:
1) The time spent in eclipse (i.e., planet shadow) varies.
2) The intensity and direction of heating incident on spacecraft surfaces changes.
Orbit orientation will influence the duration of daytime and nighttime on board (i.e. sunlight and eclipse times). This factor will impact not only power generation and storage but also thermal behavior, since things can get quite cold during eclipses. Physical configuration and subsystem design are greatly impacted by the beta angle behavior with respect to the body frame of reference. It not only affects the dimensioning of the power-generating surfaces but also drives the location of sensors which can be perturbed by the Sun. This can directly affect orbit capacity and payload availability: a suite of sensors frequently affected by sunlight can reduce the readiness of the spacecraft to perform its mission.
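For circular orbits there is a handy closed-form estimate of how β drives eclipse duration, assuming a cylindrical Earth shadow (a standard approximation, not derived in this chapter): the orbit fraction spent in shadow shrinks as |β| grows and vanishes beyond a critical angle β*. A sketch:

```python
import math

def eclipse_fraction(h_m: float, beta_deg: float, R: float = 6378137.0) -> float:
    """Fraction of a circular orbit spent in a cylindrical Earth shadow."""
    beta_star = math.degrees(math.asin(R / (R + h_m)))   # no eclipse beyond this
    if abs(beta_deg) >= beta_star:
        return 0.0
    x = math.sqrt(h_m**2 + 2.0 * R * h_m) / ((R + h_m) * math.cos(math.radians(beta_deg)))
    return math.degrees(math.acos(x)) / 180.0

# Example: 550 km orbit -> ~37% of the orbit in eclipse at beta = 0,
# dropping to zero as |beta| approaches beta* (~67 deg at this altitude)
for b in (0.0, 30.0, 60.0, 70.0):
    print(b, round(eclipse_fraction(550e3, b), 3))
```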
### Ground Track
Our planet remains a very popular science subject. Despite being our home for quite a while already, we keep on observing it, and we continue learning from our observations. It is a big and complex planet, for sure, and the reasons behind our need to observe it are many and disparate. We observe it to understand its physical processes (atmosphere, weather), to monitor how things on its surface change over time (for example, for surveillance or defense), or to understand anomalous or catastrophic events (flooding, fires, deforestation). In any case, we place satellites in orbit with a clear idea of what we want to observe beforehand; that is, what areas we want the spacecraft to sweep while it revolves around the Earth in order to fulfill its mission. The flight path of the spacecraft defines a ground track, and the ground track is a result of a few things:
● The motion of the spacecraft
● The rotation of the central body
● Orbit perturbations
The geographic latitudes covered by the ground track will range from _–i_ to _i_, where _i_ is the orbital inclination. In other words, the greater the inclination of a satellite's orbit, the further north and south its ground track will pass. A satellite with an inclination of exactly 90° is said to be in a polar orbit, meaning it passes over the Earth's north and south poles. Launch sites at lower latitudes are often preferred partly for the flexibility they allow in orbital inclination; the initial inclination of an orbit is constrained to be greater than or equal to the launch latitude. At the extremes, a launch site located on the equator can launch directly into any desired inclination, while a hypothetical launch site at the north or south pole would only be able to launch into polar orbits. (While it is possible to perform an orbital inclination change maneuver once on orbit, such maneuvers are typically among the costliest, in terms of fuel, of all orbital maneuvers, and are typically avoided or minimized to the extent possible.)
In addition to providing for a wider range of initial orbit inclinations, low-latitude launch sites offer the benefit of requiring less energy to make orbit (at least for prograde orbits, which comprise the vast majority of launches), due to the initial velocity provided by the Earth's rotation.
Another factor defining the ground track is the argument of perigee, 𝜔. If the argument of perigee is zero, perigee and apogee lie in the equatorial plane, and the ground track of the satellite will appear the same above and below the equator (i.e., it will exhibit 180° rotational symmetry about the orbital nodes). If the argument of perigee is non-zero, however, the satellite will behave differently in the northern and southern hemispheres.
As orbital operations are often required to monitor a specific location on Earth, orbits that periodically cover the same ground track are commonly used. On Earth, these are referred to as Earth-repeat orbits. Such orbits use the nodal precession effect to shift the orbit so the ground track coincides with that of a previous revolution, essentially balancing out the offset caused by the rotation of the orbited body. Ground track considerations are important for Earth observation mission design, since they define the geometry of how the spacecraft will revisit a specific area; but also for connectivity missions, since antenna footprints will still depend on the way the craft flies over specific regions.
### Physical configuration and Power Subsystem
A quick googling for spacecraft images gives a clear idea that basically they all look different. If you do the same with cars or aircraft, you will realize they look pretty much the same, regardless of the manufacturer and slight aesthetic differences. An A320 and a Boeing 737 might have different “faces” (nose, windshields) and some differentiating details here and there, but mostly everything else is pretty much the same: windows, turbines, wings, ailerons, empennage, etc. The farther you observe them from, the more similar they look, at least to the untrained eye. Think about bulldozers or digging machines; the same applies: you cannot really tell from a distance (provided colors and legends are not perceived as clues) which brand or model they are. Space is somewhat different. Two spacecraft from different vendors and for a similar mission might look totally different. They will have different body shapes, different antennae, different solar panels. They will most likely be visually very different. A probable exception is GEO spacecraft, which sort of look similar to each other (basically a box with two long and distinctive solar panel wings, some dish antennas and feeders). For LEO, the shape variety is far more noticeable. The GEO space industry is a very market-oriented sector, as commercial aerospace and digging machines are. A wild theory could say that companies tend to imitate physical configurations that seem to do the job effectively, when physical configuration is not a branding factor (as it is in cars). If this holds true, spacecraft in LEO (for the same type of application) should start to look more alike at some point in the future.
During conceptual design, the rough physical configuration (i.e. shape, size, volume) usually comes as the first thing to tackle. The process of defining the spacecraft’s physical configuration is multidisciplinary, highly inter-coupled and of course dependent on mission needs; it is a highly iterative process, as is the whole conceptual design stage. Initially, an IPT (Integrated Product Team) can be made responsible for carrying out this task (in startups, typically the stem/core IPT). The early factors that shape the configuration are:
● Type of orbit:
○ Altitude (LEO, MEO, GEO): defines the reliability approach (the radiation environment in LEO is less harsh than in MEO), but also defines the architecture. For example, MEO and GEO missions will not use torquers but thrusters to unload the wheels. This impacts Propulsion and AOCS directly, and indirectly the mass budget and thermal design.
○ Inclination: as we will see later, it defines the Sun geometry from the spacecraft perspective, which impacts the sizing of power subsystem elements.
○ Orbital maintenance: Is the orbit the final orbit +/- station keeping? Or will the spacecraft need to move to a different orbit? This impacts propulsion, but also thermal and AOCS.
● Payload:
○ Pointing: The payload pointing requirements have great impact on the attitude control subsystem.
○ Power: The payload’s power requirements obviously impact the power subsystem sizing.
● Launch vehicle: fairing and interfaces (these constrain the S/C volume envelope and mass). Vibration profiles of launchers drive decisions about primary and secondary structures.
#### Structure & Mechanisms
The structure is a key subsystem since it is the “house” where all the other subsystems are accommodated. One of the principal functions of the structure is to keep the spacecraft together during the launch environment and protect the equipment from vibrations and loads. The structure defines the locations of the spacecraft’s key components, as well as fields of view for optical sensors, orientations for antennae, etc. Space mechanisms are in charge of deploying surfaces such as antennas and sensors with enough stiffness to keep them stable. The overall structural vibration of the spacecraft must not interfere with the launch vehicle’s control system. Similarly, the structural vibration, once “alone in space”, must not interfere with the spacecraft’s own control system.
Structures are usually categorized as primary, secondary and tertiary. The primary structure is the backbone or skeleton of the spacecraft, and the major load path between the spacecraft’s components and the launch vehicle; it withstands the main loads. The primary structure also provides the mechanical interface with Mechanical Ground Support Equipment (MGSE). The design of the primary structure is mandated by the external physical configuration and the launch loads.
Secondary Structures include support beams, booms, trusses and solar panels. Smaller structures, such as avionics boxes and brackets which support harnesses are called tertiary structures.
The primary structure is designed for stiffness (or natural frequency) and to survive steady-state accelerations and transient loading during launch. Secondary structures are designed for stiffness as well, but other factors as orbital thermoelastic stresses, acoustic pressure and high frequency vibrations are considered.

Figure 5.14 - Primary, secondary and tertiary structures
For the primary structure, typically the options are:
Trussed Structure:
● A truss is a structure that can withstand loads applied to its joints (nodes) with its members loaded only axially. The arrangement forms triangles.
● The shear and torsion acting on a structure can transfer through axial loads in diagonal members, shear in individual members (in frames without skin) and shear in attached skin.
● Frames without skin are efficient only when shear transfers over a short distance.
● Buckling is typically the critical failure mode for trusses and structures with skin.
● It is a good solution since trusses are weight-efficient and spacecraft components can be mounted internally with good access.
Skin-Frame Structures:
● A skin-frame structure has a metal skin (sheets or panels) surrounding a skeletal framework made of stringers (members oriented in the vehicle’s axial direction) and lateral frames which introduce shear into the skin.
● Intermediate frames are used to increase the buckling strength of the skin and are used to mount equipment.
● Skins in spacecraft structures are usually made of sandwich panels, since they help carry compression loads (which reduces stringer size and increases buckling load), are stiff enough for unit mounting, and are weight-efficient.
● It is one of the most common spacecraft structure types found.
● Sheet skin is also used when no units are mounted on it: airplane structures use this solution for fuselage.
Monocoque Structures:
● A monocoque structure is a shell without stiffeners or ring frames.
● It is based on sandwich construction which results in a light structure; _isogrid_ shells can also be made at relatively low weight.
● This solution provides a stiff structure and panels can carry all kinds of loads.
● Mounting equipment is easy, and when loads are too high, local reinforcement can be added without a mass impact on the complete structure.
● A monocoque structure is a good solution for cylinder construction. Cylinders provide the most efficient way to carry the launch loads and transfer to the launch vehicle interface.
Typical required characteristics of mechanical structures are:
● Strength: The amount of load (static) a structure can carry without rupturing, collapsing or deforming enough to jeopardize the mission
● Structural Response: Magnitude and duration of vibration in response to external loads
● Natural Frequency: The frequency the structure will vibrate at when excited by a transient load. It depends on mass properties and stiffness; each structure has an infinite number of natural frequencies corresponding to different mode shapes of vibration (a minimal numerical sketch follows this list).
● Stiffness: A measure of the load required to cause a unit deflection.
● Damping: The dissipation of energy during vibration. It is a structural characteristic that limits the magnitude and duration of vibrations.
● Mass Properties: Density, Center of gravity and moments of inertia.
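As a toy illustration of the natural frequency and stiffness items above, consider the simplest possible idealization, a single mass on a spring, for which f = (1/2π)√(k/m). The numbers below are assumed, purely for illustration:

```python
import math

def natural_frequency_hz(k_n_per_m: float, m_kg: float) -> float:
    """First natural frequency of a mass-spring idealization: f = (1/2*pi)*sqrt(k/m)."""
    return math.sqrt(k_n_per_m / m_kg) / (2.0 * math.pi)

# Example (assumed numbers): a 10 kg avionics box on a bracket of stiffness 4e6 N/m
print(round(natural_frequency_hz(4.0e6, 10.0), 1), "Hz")   # ~100.7 Hz
```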
Factors impacting structural design:
● Launch Vehicle Selection: Structural impacts of launch vehicle specs, fairing sizes and launch environments.
● Spacecraft Configuration: Locate and arrange the payload and subsystems into stowed and deployed configuration.
○ Establish operating concepts for deployable payloads, solar arrays and antennas.
○ Establish general load paths for the primary and secondary structures.
○ Estimate mass properties
○ Confirm that the spacecraft stowed envelope, mass and center of gravity satisfy launch vehicle requirements.
■ Primary Structure:
● Idealize the spacecraft as an “equivalent beam” with concentrated masses to represent key components and derive design loads and required stiffness. Compare different materials, type of structure and methods of attachment.
● Arrange and size the structural members for strength and stiffness.
■ Subsystems:
● Demonstrate that payloads, solar arrays and antennas have their required field-of-view.
● Derive stiffness requirement for secondary structures to ensure adequate dynamic decoupling.
● Derive design loads.
● Derive requirements for mechanisms and develop conceptual designs.
In terms of structural loading, mechanical loads can be static or dynamic. Static loads are constant and dynamic loads vary with time. Examples:
● Static: the weight of units (mass loading when applied steady acceleration)
● Dynamic: launch vehicle engine thrust, sound pressure and gusts of wind during launch
Launch generates the highest loads for most spacecraft structures. Launch starts when the booster engines ignite (lift-off) and ends with spacecraft separation. During flight, the spacecraft is subjected to both static and dynamic loads. Such excitations may be of aerodynamic origin (e.g. wind, gusts or buffeting at transonic velocity) or due to the propulsion systems (e.g. longitudinal acceleration, thrust build-up or tail-off transients, structure-propulsion coupling, etc.). The highest longitudinal acceleration occurs at the end of the solid rocket boost phase and, for example for the Ariane 5 rocket, does not exceed 4.55 g (Arianespace, 2016). The highest lateral static acceleration may be up to 0.25 g. A load factor is a dimensionless multiple of g that represents the inertia force acting on the spacecraft or unit. Accelerations are directly related to forces/stresses and are “easy” to estimate and measure. There are many loads during launch in the lateral and axial directions. Some loads are predicted as a function of time while others can only be estimated statistically (random loads).
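The load factor arithmetic is straightforward; here is a two-line sketch using the Ariane 5 figures quoted above applied to an assumed 100 kg unit:

```python
# Quasi-static inertia force F = m * n * g, with n the dimensionless load factor.
m_kg, g = 100.0, 9.80665          # assumed unit mass; standard gravity
axial_n, lateral_n = 4.55, 0.25   # Ariane 5 figures quoted above
print(f"axial: {m_kg * axial_n * g:.0f} N, lateral: {m_kg * lateral_n * g:.0f} N")
# -> axial: 4462 N, lateral: 245 N
```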
Thermal effects on structures:
Materials expand when heated and contract when cooled. In space, solar energy causes spacecraft temperatures to be neither uniform nor constant. As a result, structures distort. The different materials that make up a spacecraft expand or contract by different amounts as temperatures change. Thus, they push and pull on each other, resulting in stresses that can cause them to yield or rupture. Space structure design requires precise predictions of thermal deformations to verify pointing and alignment requirements for sensors and communication antennas. Thermo-elastic forces are usually the ones driving the design of joints in structures with dissimilar materials regarding CTE (coefficient of thermal expansion), since they cause high shear loads in the fasteners which join those materials. In order to minimize the effects of thermal deformation and stresses, it is important to enclose critical equipment and assemblies in MLI (multi-layer insulation) to keep temperature changes and gradients as low as possible. When possible, it is important to design structural parts from materials with low CTE to minimize thermal deformations. For structures with large temperature excursions, it is recommended to use materials with similar coefficients of expansion. And when attaching structural components of different materials, it is important to design the joints to withstand the expected differences in thermal expansion.
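For a feel of the magnitudes involved, the fully constrained case gives the classic upper bound σ = E·α·ΔT. A sketch with assumed, illustrative aluminum-like properties:

```python
# Thermal stress in a fully constrained member: sigma = E * alpha * delta_T
E = 70e9        # Young's modulus, Pa (assumed, aluminum-like)
alpha = 23e-6   # coefficient of thermal expansion, 1/K (assumed)
delta_T = 80.0  # temperature swing, K (assumed orbital excursion)
print(f"{E * alpha * delta_T / 1e6:.0f} MPa")   # ~129 MPa if fully constrained
```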
To assess structural requirements, typically a set of tests are performed (ECSS, 2002):
● Design & Development Test:
○ Purpose: to demonstrate design concepts and acquire necessary information for further design.
○ Requires producing low-cost dev elements/hardware.
○ Test is performed on non-flight hardware.
● Qualification Test:
○ Purpose: Qualification tests are conducted on flight-quality components, subsystems, and systems to demonstrate that structural design requirements have been achieved. In these tests, critical portions of the design loadings are simulated, and the performance of the hardware is then compared with previously established accept-reject criteria based on mission requirements. The test loadings and durations are designed to give a high level of confidence that, if test specimens perform acceptably, similar items manufactured under similar conditions can survive the expected service environments. These loads and durations usually exceed expected flight loads and duration by a factor of safety which assures that, even with the worst combination of test tolerances, the flight levels shall not exceed the qualification test levels.
● Acceptance test:
○ Purpose: The purpose of acceptance testing is to demonstrate conformance to specification and to act as quality control screens to detect manufacturing defects, workmanship errors, the start of failures and other performance anomalies, which are not readily detectable by normal inspection techniques.
○ Usually not exceeding flight-limit loads.
○ The acceptance tests shall be conducted on all the flight products (including spares)
● Protoflight-testing:
○ In classic space, structural elements that were subjected to qualification tests are not eligible for flight, since there is no demonstration of remaining life of the product.
○ In NewSpace, elements which have been subjected to qualification tests can be eligible for flight, provided a strategy to minimize the risk can be applied, for example by enhancing development testing, by increasing the design factors of safety and by implementing an adequate spare policy.
Type of Environmental Mechanical Tests used to Verify Mechanical Requirements:
● Random Vibration:
○ Purpose: Verify strength and structural life by introducing random vibration through the mechanical interface.
○ Usually applied to electrical equipment and small spacecraft.
● Acoustic:
○ Purpose: Verify structural strength by introducing random vibration through acoustic pressure.
○ Usually done to lightweight structures with large surface areas, for example solar arrays with low mass/area ratio, reflectors, etc.
● Sine Test:
○ Purpose: used to verify natural frequencies and to achieve quasi-static loads at spacecraft level.
○ Usually applied to medium-size spacecraft to identify frequencies and mode shapes and to achieve quasi-static loads.
● Shock Test:
○ Purpose: to verify resistance to high frequency shock waves caused by separation devices.
○ Performed on electrical equipment and fully integrated spacecraft.
○ Typically up to 10 kHz.
● Static Test:
○ Purpose: to verify primary structure overall strength and displacements
○ Applied to primary structure and large spacecraft structures
The Physical Config IPT also needs to kick off the different budgets, which will be picked up by the subsystems. Budgets track the evolution of particular factors or design metrics of a subsystem which impact the overall system design. Subsystem IPTs are responsible for keeping budgets up to date as the architecture of the subsystems evolves.
Once there is some preliminary idea of the functional architecture and the different subsystems which will be part of the spacecraft, these subsystem IPTs can be kicked off. They will:
● Elaborate the Resources and Equipment List (REL) which will feed the general Bill of Materials (BOM) of the spacecraft.
● Start keeping track of their mass + contingency margins.
● Kick off and keep track of their power profiles, plus contingency. Subsystem Power directly contributes to solar arrays size, Power Management Unit specs and Battery dimensioning.
● The AOCS IPT works closely with the Physical Config IPT to iterate on the propellant budget and thruster definition (orientation, technology and number of thrusters). AOCS iterates on star-tracker location (including blinding analysis for all payload use cases), sizing of the wheels, micro-vibration requirements, etc.
Frequent and fluid communication between the Physical Config IPT lead and the subsystem IPT leads is needed to closely follow the progress and manage the evolution. Every team wants the best from their own perspective, so the pragmatic (as in, neutral) perspective of the Physical Config IPT leader is key in order to converge on a good solution. The Physical Config IPT can be “promoted” to the System IPT once the preliminary design milestones are met and the project moves forward. It is important to note that many projects never make the transition from a concept into anything else. This can happen due to funding, cancellations for political reasons (in governmental projects), or other factors. Once a System IPT is formed, the concept stage gives way to a more organized and thorough phase; the project has been approved for the next stage, which gets closer to flight, so all the fluidity of the conceptual stage needs to start to solidify.
#### Power
Spacecraft must generate more power than they consume for sustained operations. It sounds obvious, but it is not a trivial task to analyze all the potential scenarios during the mission to guarantee a consistently positive margin. Estimating margins at very early stages can be challenging, since many things, such as the on-board equipment, have not been defined yet. For generating power, spacecraft are typically equipped with solar panels (some deep space probes[[3]](#_ftn3) use Radioisotope Thermal Generators, or RTGs, but this technology is outside the scope of this book). It is advisable to choose very high margins during the early stages of design (Brown, 2002, 325). Margins must be defined as precisely as possible at the early stages because power margins impact solar panel size, physical configuration, mass, and reliability. If later in the process the team discovers the power margins were insufficient, the overall impact on the project is major, since it will require a general physical redesign. Better to overestimate in the beginning and be less sorry later.
For maximum energy generation, solar panels need to present their surface as perpendicular to the Sun as possible, and this is a factor that can impact the overall physical configuration. Solar panels are rather inefficient devices, converting to electrical energy approximately 15%-20% of the incident energy. The remainder of the solar energy that is not converted to electrical power is either reflected or converted into heat. The heat in turn is reradiated into space (if properly thermally isolated from the rest of the bus); the net effect is to raise the solar-panel temperature. Considering this, making the solar panels point as straight at the Sun as possible is of paramount importance. Different approaches can be used, as we shall see.
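Those efficiency and pointing losses translate directly into panel area. Here is a back-of-the-envelope sizing sketch; all knockdown factors below are assumed, illustrative values, and the solar constant at 1 AU is ~1361 W/m²:

```python
SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU

def array_area_m2(p_required_w: float, efficiency: float = 0.18,
                  cos_incidence: float = 0.9, packing: float = 0.85,
                  degradation: float = 0.9) -> float:
    """Minimum panel area for a required electrical power, with simple knockdowns
    for cell efficiency, Sun incidence angle, packing and end-of-life degradation."""
    return p_required_w / (SOLAR_CONSTANT * efficiency * cos_incidence
                           * packing * degradation)

print(round(array_area_m2(300.0), 2), "m^2")   # ~1.78 m^2 for 300 W, per these assumptions
```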
And here is where the true multidisciplinary nature of spacecraft design kicks in. The first of surely many conflicting choices designers will have to make will require a trade-off analysis (designing spacecraft is plagued by trade-offs). If the payload requires constant pointing to Earth (for example for comms applications, or surveillance), then mechanically moving (slewing) the whole platform to track the Sun (assuming panels are fixed to the bus) is a no-go; otherwise you will stop pointing to where the payload needs to point. This requires some decision-making: either you acquiesce to the fact that you cannot make your panels look straight at the Sun and therefore dimension the power system to deal with that power generation downside (by choosing low-power subsystems, decreasing redundancy, etc.), or you try to find some other way of looking at the Sun with the best angle possible while keeping your payload happily looking at the ground. It can happen that your on-board power consumption will not allow you to accept panels that are not perpendicular to the Sun, and you will be invited (i.e. forced) to choose an option which guarantees enough power generation, no matter what. There are some alternatives. One variable to consider when it comes to power generation is the total area your solar panels will expose to the Sun. The number of solar cells and how you arrange and connect them together impact their power output. Individual solar cells generate small power, voltage, and current levels on their own. A solar array connects cells in series, in groups called strings, to increase voltage to required levels. Strings are then connected in parallel to produce the required current levels. The series-parallel arrangement of cells and strings is also designed to provide redundancy, or string-failure protection.
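To make the series-parallel arithmetic concrete, here is a minimal sizing sketch (all cell parameters and requirements are illustrative assumptions, not values from any real datasheet):

```python
import math

# Assumed cell parameters (illustrative only, not from a real datasheet)
V_CELL = 2.4       # cell voltage at maximum power point [V]
I_CELL = 0.5       # cell current at maximum power point [A]

V_BUS_REQ = 28.0   # required array voltage [V] (assumption)
P_REQ = 450.0      # required array power at end of life [W] (assumption)

cells_per_string = math.ceil(V_BUS_REQ / V_CELL)   # series cells set the voltage
string_voltage = cells_per_string * V_CELL
string_power = string_voltage * I_CELL             # power one string contributes
n_strings = math.ceil(P_REQ / string_power) + 1    # parallel strings set current;
                                                   # +1 string for failure tolerance

print(f"{cells_per_string} cells/string, {n_strings} strings in parallel, "
      f"{cells_per_string * n_strings} cells total")
```

The same handful of lines, fed with real cell data and end-of-life degradation factors, is the seed of the array sizing spreadsheets every power team ends up maintaining.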
But the challenge is how to distribute the strings of solar cells on the spacecraft body. Assuming the bus is a square box of, say, 1 meter per face, we cannot count on being able to completely cover it with solar cells all around. Usually only some of the theoretical six faces a box has can be counted on. The other faces will be taken by different objects, such as sensors, actuators, data and telecommand antennae, payload, separation mechanism/ring, other deployables, and propulsion (Fig. 5.15). Also, since the faces of the box will present different angles to the Sun depending on the beta angle, this can be suboptimal for power generation. Often, it can be more convenient to have all the panels in a coplanar arrangement.

Figure 5.15 - Solar cells available area for a box-shaped spacecraft. Angles to the Sun can be suboptimal for power generation in this geometry
To present all the panel surfaces in a coplanar way, the option is to mechanically fold out the panels, as shown in Fig. 5.16. The layout shown still depends on the panels being able to fold back into their “home” positions. If this is not a possibility, another layout approach is to stack the panels on one face and fold them out accordingly, as shown in Fig. 5.17.

Figure 5.16 - Solar panels folding out in a coplanar manner for better Sun angle geometry

Figure 5.17 - Stacked configuration for coplanar folded solar panels
Solar panels that fold out impact system complexity and reliability. The key factor in these mechanisms is to keep the panels secured during launch, and this is usually done by means of Hold-Down and Release Mechanisms (HDRMs), which may include release nuts (also known as separation nuts), explosive bolts, pyrotechnic cutters, or similar. In any case, the mission profile greatly shapes which path to take. For Earth-pointing missions, the options shown in the previous figures are great for polar missions and Sun-synchronous orbits, where the beta angle stays consistently on one side of the orbit plane. For other orbits, such as low-inclination missions, the Sun traverses high above and at times along the orbital plane, hence the optimal solar panel layout is one where the panels point upwards (Fig. 5.18).

Figure 5.18 - Solar panels for Sun angles along the orbital plane
In some particular orbits, for example equatorial (0 deg inclination) orbits, the Sun will lie almost in the orbital plane. This means the Sun will describe a trajectory from very low angles to very high angles, which no “fixed” configuration can serve well. An alternative is to use a mechanism which tilts the solar arrays as the Sun traverses the sky during the orbit. These devices are typically called Solar Array Drive Motors (SADMs, even though naming varies depending on the manufacturer), and they make the panels spin on one axis as the Sun moves (Fig. 5.19). This adds mass, complexity, and a source of mechanical vibration on the platform, which can be problematic for applications requiring very low mechanical jitter. Again, the payload will define the best course of action. Always seeking simplicity, it is a good call to choose the option which minimizes complexity. Regardless of the configuration chosen to create enough solar panel area for power generation, every decision must be communicated to the rest of the design team. Attitude control, for example, must be aware of how the mass distribution of the different options (fixed vs deployable) will look, for control reasons. The mechanical teams need to understand the physical envelope of the spacecraft for launcher interfacing reasons. This is multidisciplinary design in a nutshell: move a screw and we are surely screwing something (and someone) up.

Figure 5.19 - Spacecraft sketch with rotating solar panels option
The process of finding the right physical configuration which fulfills power generation and other subsystems’ requirements is highly iterative and analysis-based. But these iterations can eventually be automated if there is a solid understanding of the most relevant input parameters and the way those parameters impact the architecture. For example, it would not be impossible to come up with a programmatic or computational way of defining the optimal solar-panel locations versus beta angle, or star tracker orientation considering beta angle and payload attitude profiles. For automated solar panel and battery sizing, factors such as avionics and payload power consumption would be must-haves.
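As a taste of what such automation could look like, here is a toy model, under strongly simplifying assumptions (circular orbit, no eclipse, a single zenith-facing panel), that estimates the orbit-averaged illuminated area as a function of beta angle:

```python
import math

def orbit_avg_projected_area(area_m2, beta_deg, n_steps=360):
    """Toy model: orbit-averaged illuminated area of a zenith-facing panel.

    Assumes a circular orbit, ignores eclipse and Earth albedo; the Sun sits
    at angle beta above the orbital plane, so cos(incidence) at orbit angle
    nu is approximately cos(beta) * cos(nu).
    """
    beta = math.radians(beta_deg)
    total = 0.0
    for k in range(n_steps):
        nu = 2 * math.pi * k / n_steps
        cos_inc = math.cos(beta) * math.cos(nu)
        total += max(0.0, cos_inc) * area_m2   # a back-lit panel generates nothing
    return total / n_steps

for beta in (0, 30, 60, 90):
    print(f"beta = {beta:2d} deg -> avg area {orbit_avg_projected_area(1.0, beta):.3f} m^2")
```

Sweeping such a model over candidate panel orientations is exactly the kind of trade a script can grind through faster than any manual iteration.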
Power subsystem design requires understanding the energy balance of the spacecraft. Both the solar-panel specifications and the battery capacity requirement can be determined from such analysis. Diving into a bit more detail, the energy to be supplied by the solar panels consists of four parts (a numeric sketch follows the list below):
1. Energy required to supply the daytime loads.
2. Energy required to charge the battery for the nighttime loads.
3. All the energy losses involved in the system, including:
a. power losses of solar panels to the daytime loads.
b. power losses of solar panels to the battery.
c. battery charging losses.
d. power losses of battery to the nighttime loads.
4. All energy reserves, day and night.
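Pulling these four parts together, a minimal energy-balance sketch (all loads, durations, efficiencies, and the reserve factor are assumed, illustrative values):

```python
# Minimal energy-balance sketch (illustrative numbers, not from a real mission)
P_day, P_night = 400.0, 300.0    # average daytime / eclipse loads [W]
T_day, T_night = 5400.0, 2100.0  # sunlit and eclipse durations [s], LEO-like
eta_day = 0.85   # efficiency, panels -> daytime loads (assumed)
eta_chg = 0.90   # battery charging efficiency (assumed)
eta_dis = 0.85   # efficiency, battery -> nighttime loads (assumed)
reserve = 1.2    # 20% energy reserve, day and night (assumed margin)

# Energy the array must collect during the sunlit arc: daytime loads plus
# recharging the battery for the eclipse, both inflated by their losses.
E_required = reserve * (P_day * T_day / eta_day +
                        P_night * T_night / (eta_chg * eta_dis))
P_array = E_required / T_day   # array output needed while in sunlight [W]
print(f"required array output in sunlight: {P_array:.0f} W")
```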
Ageing and degradation are factors that must be taken into account in spacecraft design. Solar panel performance degrades with time. Batteries degrade with time. The source and rate of degradation depend on the orbit chosen and the mission type. Missions at very high altitudes such as Medium Earth Orbit (MEO) will present accelerated degradation compared to LEO (Low Earth Orbit). The power system must be designed in such a way that its end-of-life (EOL) performance is adequate for the mission. Degradation in solar cells comes from radiation, caused by high-energy protons from solar flares and by trapped electrons in the Van Allen belts (Brown, 2002). Power loss due to degradation is counted as approximately 25% for a spacecraft in geosynchronous orbit for seven years and 30% for 10 years (Agrawal, 1986).
The battery must be dimensioned accordingly. Batteries are required during eclipse periods, peak load periods, and during launch operations. There are different battery technologies available, the most popular being Li-Ion. The battery market for space is a very lucrative one, facing positive prospects considering the multiple constellations that are planning to get to orbit in the next two to five years. Some studies expected the space battery market to grow to US$51.3 billion in 2020 from US$44.8 billion in 2019. The global lithium-ion power battery market is expected to continue growing steadily during the 2018-2025 period; it is estimated that by 2025 it will exceed US$100 billion. Ease of availability, high energy density, low discharge rate, and long life cycle are some of the key features that make lithium-ion power batteries superior to similar products, and are expected to boost global market revenue (QY Research, 2020).
The traditional definition of battery capacity is the current that can be supplied by the battery multiplied by the time from fully charged to depletion, in ampere-hours (Ah). Battery ampere-hour capacity and energy capacity are proportional. It is convenient to define battery energy capacity from the energy balance and then convert energy capacity to ampere-hours. The battery energy capacity is the average nighttime power multiplied by the maximum eclipse time, divided by the transmission efficiency from battery to loads. Batteries are thermally sensitive equipment, imposing strict requirements on the thermal management subsystem.
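In code, that dimensioning rule looks as follows (values are assumptions; the depth-of-discharge limit is an extra design factor introduced here for illustration, since cycling a Li-Ion battery to full depletion would wear it out quickly):

```python
P_night = 300.0                 # average eclipse load [W] (assumed)
T_eclipse_h = 2100.0 / 3600.0   # maximum eclipse duration [h]
eta_dis = 0.85                  # transmission efficiency, battery -> loads (assumed)
V_bus = 28.0                    # bus voltage [V] (assumed)
dod = 0.25                      # allowed depth of discharge per cycle (assumed, LEO Li-Ion)

E_wh = P_night * T_eclipse_h / eta_dis   # energy drawn per eclipse [Wh]
C_wh = E_wh / dod                        # installed energy capacity [Wh]
C_ah = C_wh / V_bus                      # converted to ampere-hours [Ah]
print(f"{E_wh:.0f} Wh per eclipse -> install ~{C_ah:.0f} Ah at {V_bus:.0f} V")
```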
As with any other subsystem on board, the power subsystem design cannot be done in an isolated manner. Many decisions taken in this subsystem impact other areas, such as mechanics, attitude control, software, payload, thermal, and the like. The Power Control IPT (perhaps in the beginning just one engineer) needs to closely collaborate with all other subsystems to make sure the power dimensioning does not overlook any relevant design factor.
#### Outcome
The physical configuration conceptual design provides important documents and analyses. Notably, a preliminary physical depiction of the spacecraft, i.e. a CAD design, with a good description of what type of structure will be used and the preliminary location of equipment. This design should generally depict the body configuration, general dimensions, and a good grasp of deployables and mechanisms. Theoretical measures of moments of inertia will be useful for AOCS analyses. At this stage, a document showing the power dimensioning rationale along with a power budget shall be generated. This includes dimensioning the solar array and battery and having a grasp of how the power will be distributed across the bus. The power budget must show how power generation will be guaranteed for worst-case scenarios. Due to the early stage of the project, it may very well be the case that power consumptions are not entirely known; in that case, using information from quotations and adding enough margins should suffice at this stage. A block diagram of the power subsystem shall be created.
### Thermal Management
We are all familiar with using electronics in our everyday lives. Laptops, smartphones, TVs; they are all around us. Little attention do we pay to the fact that electronics are designed to work within limited thermal ranges. For example, Helsinki winters remind me of this every year when my phone (a phone that is not from the highest end but also not the lowest) starts rebooting or turning off when I am outside walking the dog. Electronics need a specific thermal environment to work as intended. Spacecraft electronics are no exception. It is the role of the Thermal Control Subsystem (TCS from now on) to make sure all subsystems operate within their allowable flight temperatures.
The thermal design engineer needs to consider all the heat inputs the spacecraft will experience: typically the Sun, the Earth, and the on-board electronics and subsystems. None of these inputs is steady; they vary with the position in the orbit and with the seasons. Thermal control oversees thermally isolating the spacecraft from the environment, except in specific parts where radiators are placed.
Thermal control engineering relies heavily on software tools: these tools provide simulations and numerical computations to build a good understanding of the thermal scenarios the satellite will experience during its mission, way before the spacecraft is subjected to the environmental testing that verifies the thermal design. Thermal control engineering runs for quite a long time in projects solely on numerical models and theoretical calculations, until the chance to verify them in some controlled environment appears. As expected, the thermal subsystem interacts with many other subsystems, particularly with the power subsystem. This results from the need to account for all dissipated electrical energy and to transfer this energy to a radiator for rejection to space. Also, batteries generally have a narrow operating temperature range and often require special attention from the thermal control engineers. The thermal subsystem also interacts with on-board software, since the software to automatically measure and control zones and rooms often runs in the main computer; and with mechanics and structure, since fixations of heaters, thermostats, and insulation blankets must be agreed with the mechanical team, and this also impacts the mass budget. Multidisciplinary engineering at its best.
Heat in space does not transfer by convection but only by conduction and radiation. This means heat produced by on-board electronics needs to be guided internally (by means of conduction) towards radiators so it can be radiated away to space. In the same way, the spacecraft can (and will) absorb radiation from the Sun and from the Earth. This radiation can be either absorbed for practical purposes (to heat things up) or reflected, to avoid overheating some critical components. Thermal control design internally divides the spacecraft into zones or rooms and makes sure all the equipment inside those rooms stays within AFT (allowable flight temperature) margins; a first-cut radiator sizing sketch follows the list below. Heaters are located in places where the thermal balance changes over the mission lifetime. The causes of such changes are:
● Unit dissipation changes: for example, an important heat load is turned on or off, or varies greatly over time.
● External heat flux changes: spacecraft attitude is changed, eclipses.
● Radiator efficiency changes: changes in the optical properties of radiators.
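To make the radiation side concrete, here is the first-cut radiator sizing sketch announced above, based on the Stefan-Boltzmann law (all values are assumptions; a real analysis adds view factors, conduction gradients, and transient cases):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiator_area(q_dissipated_w, t_rad_k=290.0, emissivity=0.85,
                  q_absorbed_w_per_m2=80.0):
    """First-cut radiator area: emitted flux must reject internal dissipation
    on top of the absorbed environment flux (Sun/Earth inputs, lumped and assumed)."""
    emitted_per_m2 = emissivity * SIGMA * t_rad_k**4
    return q_dissipated_w / (emitted_per_m2 - q_absorbed_w_per_m2)

print(f"{radiator_area(200.0):.2f} m^2")   # ~200 W of electronics heat (assumed)
```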
Thermal control techniques can be:
● Passive: fixed area radiators, thermal blankets, etc.
● Active: heaters (controlled by thermostats and/or software).
Thermal control uses the following devices to accomplish its task:
● Electrical heaters.
● Thermistors.
● Bimetallic thermostats.
● Radiator surfaces.
● Thermal blankets (MLI).
● Insulation materials.
● Thermal fillers.
● Paints.
● Heat pipes.
Again, orbits and payloads dictate which approach (passive, active) and what type of hardware and devices to use to thermally control the spacecraft. For active control, the routines that keep the zones within safe margins are executed in software, running either in a dedicated CPU for thermal control or in the onboard computer. A heavyweight, active, and redundant control system will greatly impact the power consumption of the bus.
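A toy sketch of such an active control routine, per zone, could be a bang-bang loop with hysteresis (setpoints and the telemetry/command functions are hypothetical stand-ins; real flight software is considerably more involved):

```python
# Assumed setpoints with hysteresis [K]
HEATER_ON_K, HEATER_OFF_K = 268.0, 278.0

def control_zone(read_temp_k, set_heater):
    """One control cycle for a single thermal zone (bang-bang with hysteresis)."""
    t = read_temp_k()
    if t < HEATER_ON_K:
        set_heater(True)      # zone too cold: enable heater
    elif t > HEATER_OFF_K:
        set_heater(False)     # zone warm enough: disable heater
    # between the two thresholds: keep the heater's last state (hysteresis band)

# Toy usage with stand-in telemetry/command functions
control_zone(lambda: 265.0, lambda on: print("heater", "ON" if on else "OFF"))
```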
#### Outcome
A general notion of the thermal control strategy shall be produced at the end of the conceptual design. A preliminary location of radiators, heaters, and measurement points shall be defined, following the physical configuration. A preliminary idea of the thermal control logic, redundancies, and software considerations (for example, a preliminary list of software modes or software requirements) shall be specified at this stage. A block diagram of the subsystem shall be generated.
### Attitude and Orbit Control Subsystem
A spacecraft randomly tumbling in space is of very little use. A great deal of why satellites are so useful and ubiquitous is not only that they can sweep and revisit spots on the Earth very quickly, but also that, when they do, they can point their onboard resources very precisely to perform different types of tasks: a camera, a radar, a directional antenna, etcetera. Precise orientation is not only related to Earth-pointing needs. A space telescope or a data relay system are cases where very precise pointing is needed but the target might not be the Earth. The Attitude and Orbit Control Subsystem (AOCS, even though it can also be found as ADCS when orbit control is not part of it or not present) is probably the most complex subsystem in a spacecraft, and perhaps the most exotic; it easily captures a lot of attention from all directions. It is because of that complexity and its highly specialized purpose and functionality that it is the hardest subsystem to grasp for NewSpace companies. There are not many people with experience in attitude control, and its extremely multidisciplinary nature (electronics, control theory, physics, algebra, math and, of course, software) does not make it any simpler. But, more importantly, the AOCS subsystem interfaces with literally every single other subsystem on board.
A very quick introduction to the general principles of attitude control will be provided next. There are great references on this topic, for example, (Wertz, 1990) and (Montenbruck & Gill, 2001).
The motion of a rigid spacecraft is specified by its position, velocity, attitude, and the way the attitude changes over time. The first two quantities (position and velocity) describe the way the center of mass of the spacecraft translates in three-dimensional space and are the subject of celestial mechanics, orbit determination, or space navigation, depending on the aspect of the problem that is emphasized. The latter two quantities (attitude and its time evolution) describe the rotational motion of the body of the spacecraft about its center of mass. In general, orbit and attitude are coupled with each other. For example, in a low-altitude Earth orbit (LEO), the attitude will affect the atmospheric drag, which will in turn affect the orbit. On the other hand, the orbit determines both the atmospheric density and the magnetic field strength at that location, which will, in turn, affect the attitude. However, the dynamical coupling between orbit and attitude will normally be ignored here, and it will be assumed that the time history of the spacecraft position is known and has been supplied by some process external to the attitude determination and control system. Attitude management (or analysis) may be divided into determination, prediction, and control functionalities.
● **Attitude determination** is the process of computing the orientation in three axes of the spacecraft with respect to either an inertial reference or some object of interest, such as the Earth. Attitude determination typically involves several types of sensors on each spacecraft and sophisticated data processing procedures. The accuracy limit is usually determined by a combination of processing activities and on-board spacecraft hardware. Typical sensors used for attitude determination are star trackers, magnetometers, Earth sensors, inertial measurement units, gyros, and Sun sensors. Many of these sensors are complex and contain computing resources of their own. This means they are subject to any of the issues found in computer-based systems operating in space, such as bit-flips, resets and, of course, bugs. The on-board fault-handling capabilities must deal with this accordingly and prevent faulty sensors from affecting the overall mission reliability, by isolating and correcting (if possible) the fault.
● **Attitude prediction** (often also called attitude estimation) is the process of forecasting the future orientation of the spacecraft by using dynamical models to extrapolate the attitude history. Here the limiting factors are the knowledge of the applied and environmental torques and the accuracy of the mathematical model of spacecraft dynamics and hardware.
● **Attitude control** is the process of orienting the spacecraft in a specified, predetermined direction. It consists of two areas: attitude stabilization, which is the process of maintaining an existing orientation, and attitude maneuver control, which is the process of controlling the reorientation of the spacecraft from one attitude to another. The two areas are not totally distinct, however. For example, we speak of stabilizing a spacecraft with one axis toward the Earth, which implies a continuous change in its inertial orientation. The limiting factor for attitude control is typically the performance of the maneuver hardware and the control electronics, although with autonomous control systems it may be the accuracy of orbit or attitude information. Some form of attitude determination and control is required for nearly all spacecraft. For engineering or flight-related functions, attitude determination is required only to provide a reference for control. Attitude control is required to avoid solar or atmospheric damage to sensitive components, to control heat dissipation, to point directional antennas and solar panels (for power generation), and to orient rockets used for orbit maneuvers. Typically, the attitude control accuracy necessary for engineering functions is on the order of fractions of a degree. Attitude requirements for the spacecraft payload are more varied and often more stringent than the engineering requirements. Payload requirements, such as telescope or antenna orientations, may involve attitude determination, attitude control, or both. Attitude constraints are most severe when they are the limiting factor in experimental accuracy or when it is desired to reduce the attitude uncertainty to a level such that it is not a factor in payload operation (Wertz, 1990). Typical actuators used for attitude control are reaction wheels, magnetorquers, thrusters, and control-moment gyroscopes, among others.
The attitude control functionalities described above are realized by a combination of hardware and software. The hardware comprises the sensors and actuators described in the previous paragraphs. The software is in charge of reading the data from the sensor suite, running the determination, estimation and control routines which compute the torque needed to orient the spacecraft according to a desired set point, and applying the computed torque by means of the actuator suite, all in a stepwise manner. This is a digital control system, or a Cyber-Physical System, which we will cover in more detail further ahead.
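A minimal single-axis sketch of such a stepwise loop follows (gains, inertia, and the trivially propagated “plant” are assumptions for illustration; a real AOCS involves three-axis dynamics, estimation filters, and actuator models):

```python
# Minimal single-axis PD attitude-control loop (toy model, assumed values)
KP, KD = 0.02, 0.4   # proportional [N*m/rad] and derivative [N*m*s/rad] gains
I_AXIS = 1.5         # moment of inertia about the controlled axis [kg*m^2]
DT = 0.1             # control period [s] (10 Hz loop)

def control_step(theta, omega, theta_ref):
    """One cycle: read attitude error, compute torque, apply it to the plant."""
    torque = KP * (theta_ref - theta) - KD * omega   # PD law; saturation omitted
    omega += (torque / I_AXIS) * DT                  # toy rigid-body dynamics,
    theta += omega * DT                              # a stand-in for the real plant
    return theta, omega

theta, omega = 0.2, 0.0   # initial pointing error [rad] and body rate [rad/s]
for _ in range(600):      # one minute of 10 Hz control
    theta, omega = control_step(theta, omega, 0.0)
print(f"residual pointing error: {theta:.4f} rad")
```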
#### Outcome
An AOCS conceptual design would in general produce a document, a few spreadsheets and some high-level models in Octave, MATLAB, or some C code, or a mixture of all. Provided the physical configuration converges to some mass, deployables and inertia values, the AOCS conceptual design will include a preliminary analysis of perturbation torques for the chosen orbit, which will give a first iteration on actuator dimensioning. A first understanding of AOCS modes (typically Sun pointing, fine reference pointing, safe hold, etc.) and of tip-off rates after separation will also be reached. AOCS modes will specify a preliminary combination of sensors and actuators used per mode, which will be of good value for the power design. A preliminary selection of sensors, actuators and computers will give a good mass indication for the AOCS subsystem. A block diagram of the subsystem shall be expected at this stage.
### Propulsion
Even though propulsion can be considered a subsystem on its own, it is always tightly coupled to the AOCS subsystem; AOCS is the main _user_ of the Propulsion subsystem. Propulsion provides torques and forces at the service of the orientation and orbital needs of the mission. Propulsion technologies have been diversifying throughout the years.
#### Electric Propulsion
Electric propulsion is a technology aimed at achieving thrust with high exhaust velocities, which results in a reduction in the amount of propellant required for a given space mission or application compared to other, conventional propulsion methods. Reduced propellant mass can significantly decrease the launch mass of a spacecraft or satellite, leading to lower costs from the use of smaller launch vehicles to deliver a desired mass into a given orbit or to a deep-space target. In general, electric propulsion (EP) encompasses any propulsion technology in which electricity is used to increase the propellant exhaust velocity. There are many figures of merit for electric thrusters, but mission and application planners are primarily interested in thrust, specific impulse, and total efficiency in relating the performance of the thruster to the delivered mass and the change in spacecraft velocity during thrust periods.

Ion and Hall thrusters have emerged as the leading electric propulsion technologies in terms of performance (thrust, Isp, and efficiency) and use in space applications. These thrusters operate in the power range of hundreds of watts up to tens of kilowatts, with an Isp of thousands of seconds to tens of thousands of seconds, and they produce thrust levels typically of some fraction of a newton. Ion and Hall thrusters generally use heavy inert gases such as xenon as the propellant. Other propellant materials, such as cesium and mercury, have been investigated in the past, but xenon is generally preferable because it is not hazardous to handle and process, it does not condense on spacecraft components that are above cryogenic temperatures, its large mass compared to other inert gases generates higher thrust for a given input power, and it is easily stored at high densities and low tank mass fractions.

In the past 20 years, electric propulsion use in spacecraft has grown steadily worldwide, and advanced electric thrusters have emerged over that time in several scientific missions and as an attractive alternative to chemical thrusters for station-keeping applications in geosynchronous communication satellites. Rapid growth has occurred in the last 10 years in the use of ion and Hall thrusters in communications satellites to reduce the propellant mass for station keeping and orbit insertion. The use of these technologies for primary propulsion in deep-space scientific applications has also been increasing over the past 10 years. There are many planned launches of new communications satellites and scientific missions that use ion and Hall thrusters in the coming years, as acceptance of the reliability and cost benefits of these systems grows.

On the disadvantages side, with electric propulsion spacecraft charging can be dangerous for on-board electronics if proper care is not taken to avoid high electrostatic potentials building up across the structure. Performance fatigue of the neutralizer and/or electron leakage to the high-voltage solar array can cause charging (Kuninaka & Molina-Morales, 2004). The spacecraft grounding strategy and a very careful operation of neutralizers are of great importance.
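The propellant savings that motivate EP follow directly from the Tsiolkovsky rocket equation; a quick numeric sketch (spacecraft mass, delta-v, and Isp values below are assumptions for illustration):

```python
import math

G0 = 9.80665   # standard gravity [m/s^2]

def propellant_mass(m0_kg, dv_ms, isp_s):
    """Tsiolkovsky rocket equation, solved for the propellant mass consumed."""
    return m0_kg * (1.0 - math.exp(-dv_ms / (isp_s * G0)))

m0, dv = 500.0, 200.0   # 500 kg spacecraft, 200 m/s of delta-v (assumed)
for name, isp in (("monopropellant, Isp ~220 s", 220.0),
                  ("Hall thruster, Isp ~1600 s", 1600.0)):
    print(f"{name}: {propellant_mass(m0, dv, isp):.1f} kg of propellant")
```

With these assumed numbers, the high-Isp option needs roughly one seventh of the propellant, which is exactly the kind of trade that pushes mission planners towards EP despite its modest thrust levels.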
Electric thrusters are generally described in terms of the acceleration method used to produce the thrust. These methods can be easily separated into three categories: electrothermal, electrostatic and electromagnetic. Common EP thruster types are described next (from Goebel & Katz, 2008, reproduced with permission):
● Resistojet
○ Resistojets are electrothermal devices in which the propellant is heated by passing through a resistively heated chamber or over a resistively heated element before entering a downstream nozzle. The increase in exhaust velocity is due to the thermal heating of the propellant, which limits the Isp to low levels.
● Arcjet
○ An arcjet is also an electrothermal thruster that heats the propellant by passing it through a high current arc in line with the nozzle feed system. While there is an electric discharge involved in the propellant path, plasma effects are insignificant in the exhaust velocity because the propellant is weakly ionized. The specific impulse is limited by the thermal heating.
● Ion Thruster
○ Ion thrusters employ a variety of plasma generation techniques to ionize a large fraction of the propellant. These thrusters then utilize biased grids to electrostatically extract ions from the plasma and accelerate them to high velocity at voltages up to and exceeding 10 kV. Ion thrusters feature the highest efficiency (from 60% to >80%) and very high specific impulse (from 2000 to over 10,000 s) compared to other thruster types.
● Hall Thruster
○ This type of electrostatic thruster utilizes a cross-field discharge described by the Hall effect to generate the plasma. An electric field established perpendicular to an applied magnetic field electrostatically accelerates ions to high exhaust velocities, while the transverse magnetic field inhibits electron motion that would tend to short out the electric field. Hall thruster efficiency and specific impulse is somewhat less than that achievable in ion thrusters, but the thrust at a given power is higher and the device is much simpler and requires fewer power supplies to operate.
● Electrospray/Field Emission Electric Propulsion Thruster
○ These are two types of electrostatic electric propulsion devices that generate very low thrust (<1 mN). Electrospray thrusters extract ions or charged droplets from conductive liquids fed through small needles and accelerate them electrostatically with biased, aligned apertures to high energy. Field emission electric propulsion (FEEP) thrusters wick or transport liquid metals (typically indium or cesium) along needles, extracting ions from the sharp tip by field emission processes. Due to their very low thrust, these devices will be used for precision control of spacecraft position or attitude in space.
● Pulsed Plasma Thruster
○ A pulsed plasma thruster (PPT) is an electromagnetic thruster that utilizes a pulsed discharge to ionize a fraction of a solid propellant ablated into a plasma arc, and electromagnetic effects in the pulse to accelerate the ions to high exit velocity. The pulse repetition rate is used to determine the thrust level.
● Magnetoplasmadynamic Thruster
○ Magnetoplasmadynamic (MPD) thrusters are electromagnetic devices that use a very high current arc to ionize a significant fraction of the propellant, and then electromagnetic forces (Lorentz forces) in the plasma discharge to accelerate the charged propellant. Since both the current and the magnetic field are usually generated by the plasma discharge, MPD thrusters tend to operate at very high powers in order to generate sufficient force for high specific impulse operation, and thereby also generate high thrust compared to the other technologies described above.
#### Chemical Propulsion
Chemical propulsion subsystems are typically: cold-gas systems, monopropellant systems and bipropellant systems (Brown, 1996).
● Cold gas systems:
○ Almost all spacecraft of the 1960s used a cold-gas system. It is the simplest choice and the least expensive. Cold-gas systems can provide multiple restarts and pulsing. The major disadvantage of the system is low specific impulse and low thrust levels, with resultant high weight for all but the low total impulse missions.
● Monopropellant systems:
○ A monopropellant system generates hot, high-velocity gas by decomposing a single chemical, a monopropellant. The monopropellant is injected into a catalyst bed, where it decomposes; the resulting hot gases are expelled through a converging-diverging nozzle generating thrust. A monopropellant must be a slightly unstable chemical that decomposes exothermically to produce a hot gas. Typical chemicals are Hydrazine and Hydrogen Peroxide.
● Bipropellant systems:
○ In bipropellant systems, an oxidizer and fuel are fed as liquids through the injector at the head end of the chamber. Rapid combustion takes place as the liquid streams mix; the resultant gas flows through a converging-diverging nozzle. Bipropellant systems offer the most performance and the most versatility (pulsing, restart, variable thrust). They also offer the most failure modes and the highest price tags.
#### Outcome
The propulsion conceptual design outcome will consist of an educated choice of the propulsion technology to be used (electric, chemical), which will shape estimated power and mass needs. A preliminary location of thrusters in the physical configuration should be expected. A propulsion budget shall be produced; such a budget will differ depending on whether propulsion will be used only for orbital trim or station keeping, or also for attitude control. In any case, a good understanding of the amount of propellant and a block diagram of the subsystem shall be produced. A shortlist of potential subsystem providers will be gathered at this stage.
### Data Links
Space missions require sending and receiving data from the ground segment. These data links functionally couple the spacecraft to the ground in order to transfer commands (what the spacecraft ought to do) and receive telemetry (health status of the satellite). At the same time, data links are used to transfer (i.e. downlink) the data the payload generates. Payload data is of course very mission dependent. For an optical Earth observation spacecraft, payload data is in general raw pixels as captured by the camera sensor. For a radar mission, the raw data will be composed of digital samples of the echoes received while sweeping an area of the ground; these echoes will then be processed in order to obtain a visual representation of the illuminated area. Whether for telemetry, tracking and command (TTC), or for payload data transmission, radio links must be analyzed and designed according to the on-board and ground capabilities.
An important characteristic value to compute at the conceptual stage is an estimate of the signal-to-noise ratio (SNR) at both uplink and downlink, in order to understand how reliable the transmission and reception of data will be amid noise. SNR is computed considering output power, antenna gain and losses (jointly known as Equivalent Isotropic Radiated Power, or EIRP), atmospheric losses, free-space loss (FSL) and receiver sensitivity. After computing SNR and considering channel coding and symbol and bit rates, other important derived parameters are obtained, such as energy per symbol, energy per coded bit, and the energy-per-bit to noise density ratio (Eb/No). In digital communications, the quality of signals is evaluated by the BER (Bit Error Rate). The required Eb/No in decibels for a given BER range is usually specified by the manufacturer based on the modulation type used (Ghasemi et al., 2013).
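A back-of-the-envelope link budget in code (every value here is an assumption for illustration; a real budget tracks many more loss terms and worst-case geometries):

```python
import math

# Back-of-the-envelope downlink budget (all values are assumptions)
freq_hz = 8.2e9        # X-band downlink carrier frequency [Hz]
dist_m = 2000e3        # slant range at low elevation [m]
eirp_dbw = 10.0        # on-board EIRP [dBW]
g_rx_dbi = 43.0        # ground station antenna gain [dBi]
t_sys_k = 200.0        # receive system noise temperature [K]
rate_bps = 10e6        # data rate [bit/s]
misc_losses_db = 3.0   # atmosphere, pointing, polarization (lumped)

K_BOLTZ_DB = -228.6    # Boltzmann constant [dBW/(K*Hz)]
wavelength_m = 3e8 / freq_hz
fsl_db = 20 * math.log10(4 * math.pi * dist_m / wavelength_m)  # free-space loss

# Eb/No = EIRP + G_rx - FSL - losses - 10log10(T_sys) - k[dB] - 10log10(R)
ebno_db = (eirp_dbw + g_rx_dbi - fsl_db - misc_losses_db
           - 10 * math.log10(t_sys_k) - K_BOLTZ_DB - 10 * math.log10(rate_bps))
print(f"FSL = {fsl_db:.1f} dB, Eb/No = {ebno_db:.1f} dB")
```

Comparing the resulting Eb/No (about 9 dB with these assumed numbers) against the manufacturer's required Eb/No for the target BER tells you whether the link closes, and by what margin.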
Historically, space link bandwidths and data rates have been limited, hence information has been carefully packed between space segment and ground segment, minimizing overhead and ensuring that literally every bit transmitted and received had an explicit meaning, usually by means of defining lightweight, ad-hoc bit stream protocols. With higher data rates, wider bandwidths, and richer on-board computing resources, with embedded operating systems capable of running complex networking tasks, the use of standard protocol stacks such as IP became common practice in NewSpace. With this approach, the meaningful data is carried as payload on top of several protocol layers, meaning that not every bit modulated on the channel is meaningful (from an application perspective) anymore; in other words, some overhead in the space link is accepted as a reasonable “price” to pay for the rich ecosystem of well-proven services the IP stack provides. For example, in the past, in order to transfer a file from ground to spacecraft, an ad-hoc file-transfer protocol had to be devised by the engineering team. In an IP-based link, the ground and space segment can send files using SFTP (SSH File Transfer Protocol). Other services such as SSH (secure shell), rsync (file system sync), and netcat facilitate the administration tasks of the remote spacecraft. With an IP-based link, a great deal of the concept of operations boils down to something very similar to operating a server across a network; i.e. operating a spacecraft becomes a sysadmin task. This also enables the use of sysadmin automation tools and methods, which eases the operation of multi-satellite constellations in an automated manner.
But IP datagrams cannot be directly modulated on a space radio link, since IP lacks the underlying layers that deal, for example, with the physical layer (layer 1) and the data link (layer 2). Typically, IP datagrams are encapsulated in CCSDS SDLPs (Space Data Link Protocols): Telecommand (TC), Telemetry (TM), Advanced Orbiting Systems (AOS), and Proximity-1 (Prox-1). IP datagrams are transferred by encapsulating them, one-for-one, within CCSDS Encapsulation Packets. The Encapsulation Packets are transferred directly within one or more CCSDS SDLP Transfer Frames (CCSDS, 2012). CCSDS SDLPs are supported by most ground station providers, which makes them a “safe bet” for designers, but they are not the only option. Companies which own end-to-end TTC capabilities (meaning they also own the software-defined radios at the ground segment) could define their own layer 1 and 2 protocols. This is highly discouraged for NewSpace orgs since it can be very time consuming and error prone.
#### Outcome
Typically, a conceptual design of data links should include a link budget. This is typically a spreadsheet where all the characteristic values of a digital radio link must be assessed (SNR basically, with BER assessment and Eb/No), to the best knowledge available at that stage. Candidate equipment (radios), antennae, and ground stations are evaluated, and a preliminary selection is presented. For a CCSDS-based link, an assessment of the encoding/decoding strategy (either hardware, software, or both) needs to be made, since it impacts the avionics and the software architecture.
### Fault-Detection, Isolation and Recovery (FDIR)
As we discussed before (see section 3.10), failure tends to happen. Since failure is rarely an atomic event (unless catastrophic), but a combination of constitutive faults which combine towards a damaging event, a functionality must be included on board to monitor occurrences of these faults and prevent them from finding a path through the “Swiss cheese” towards disaster. For any fault-handling functionality to be effective, it is first required to have a thorough understanding of how the system can fail; i.e. a failure analysis must be performed. To understand how a system can fail, the failure analysis must first understand how the system operates. There are several ways to do this, but first one needs to recognize that knowing how the system is supposed to operate does not mean one will know how it can fail. In fact, system designers and development engineers (while helpful in defining how the system is supposed to operate) are sometimes not very helpful in defining how it can fail. Designers and development engineers are trained to think in terms of how the system is supposed to work, not how it is supposed _not_ to work (Berk, 2009). Failure analysis requires multidisciplinary brainstorming, and a well-documented output to ease the implementation: this is usually in the form of fault trees, Ishikawa diagrams, or concept maps. Faults can happen at every layer of the system hierarchy, but frequently it is the software that is informed about them (detects them), and that applies the logic once the fault has been detected.
In short, on-board fault handling is usually a software routine that runs in the onboard computer. This software routine continuously observes a set of variables and performs some action if those variables meet some criteria. Defining what variables to observe and what actions to take is a designer’s decision, fed by the failure analysis. FDIR configuration can be very complex for big spacecraft, with thousands of variables to monitor, depending on particular operation modes. NewSpace must adopt a very lightweight approach to it; only the most critical variables must be monitored for faults. Moreover, since NewSpace typically cannot afford to spend months running failure analyses, knowledge about how the system can fail is partial. Therefore, FDIR capabilities must be progressively “grown” on orbit as more understanding about how the system performs is gained. When the failure modes are not well understood, it is recommended not to add many automatic actions, in order to prevent unintended consequences.
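The skeleton of such a routine can be sketched in a few lines (telemetry names, thresholds, and recovery actions below are hypothetical placeholders; they represent the kind of table the failure analysis would feed):

```python
# Lightweight FDIR sketch: monitor a handful of critical variables and map
# detected faults to recovery actions (names and thresholds are hypothetical).
RULES = [
    # (telemetry key, predicate flagging a fault, recovery action)
    ("battery_voltage_v", lambda v: v < 24.0, "enter_safe_mode"),
    ("obc_temp_c",        lambda v: v > 70.0, "shed_payload_load"),
    ("wheel_speed_rpm",   lambda v: abs(v) > 6000.0, "switch_to_magnetorquers"),
]

def fdir_step(telemetry, execute):
    """One monitoring cycle: check each rule, trigger its action on violation."""
    for key, is_faulty, action in RULES:
        value = telemetry.get(key)
        if value is not None and is_faulty(value):
            execute(action)   # isolation/recovery handled downstream

fdir_step({"battery_voltage_v": 23.1, "obc_temp_c": 41.0}, print)
```

Growing FDIR on orbit then amounts to extending and tuning that rule table as flight experience accumulates, rather than rewriting the monitoring machinery.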
#### Outcome
At the conceptual stage, the FDIR strategy will be highly preliminary since the architecture remains fluid, and failure modes will not be entirely understood. But as preliminary as this analysis can be, it can provide good information on the complexity of the FDIR strategy and its different configurations and modes, and help the software team kickstart the development of the FDIR module.
### Further Design
The boundaries between conceptual, preliminary, and critical design are blurry and arbitrary. The previous section on conceptual design stayed at a very high level of abstraction: spreadsheets, some documents, and perhaps some code. At the conceptual stage, things are fluid and there are still big unknowns to be sorted out, such as a general avionics decomposition, suppliers, subsystem decompositions, make versus buy, etc. Many learning curves are just at their beginning. The conceptual stage is very necessary to set directions for all these unknowns. Then, as the design matures, prototypes and early iterations of the work start to emerge, and this is what is usually called preliminary design.
Preliminary design evolves until a point when a “snapshot” of such design needs to be put under scrutiny before moving further ahead. The purpose of reviewing this snapshot is to make sure no further steps will be taken if the design does not seem (at least at this stage) to fulfill the requirements.
At the Preliminary Design Review, a set of information “pieces” is expected to be defended before a board of people who are supposed to know the subject matter:
● Master WBS and a schedule
● Current physical configuration:
○ The size and shape of the spacecraft body (bus)
○ Solar panel locations
○ Payload location
○ AOCS Sensors and actuators locations and orientations
● A system block diagram
● Detailed Subsystems block diagrams
● Thermal design
● Power Subsystem dimensioning: Battery, Solar Panels.
● Electromagnetic Compatibility considerations
● Fabrication and manufacturing considerations
● Preliminary Supply Chain Breakdown Structure (Bill of Materials)
● All technical budgets (Mass, Power, Pointing, Propulsion, Link, Thermal)
● Software Development Plan
● Risk Management Strategy
● Reliability Studies / Risk analyses: what are the biggest concerns in terms of reliability, what are the single points of failure.
● Safety and Environmental Strategy
● Cost Breakdown
For the design to move ahead, approval of the preliminary proposal needs to be granted. For this, the main stakeholders of the mission (including the payload) are brought around a table where all the IPTs present the rationales behind the design decisions taken. If the preliminary design is accepted, the design moves forward to synthesize the different elements of the architecture. By the time the design reaches the critical stage, no major unknowns or learning curves should remain unsorted.
## Modular Avionics Design
Engineers are extremely good at reinventing the wheel. They (alas, we) tend to believe that everything is better if designed and developed from scratch. We typically run very biased and partial _make versus buy_ analyses and we consistently rig them to make going in-house look like the best option. For space startups this is particularly damaging, since developing things from the ground up is (of course) very time consuming, and at the same time it generates non-recurring costs that can be directly suicidal. A NewSpace project should only insist on developing from scratch the technology that represents the strategic, differentiating “core” of whatever the company is trying to do; all the rest should come off a shelf. NewSpace orgs should (must) put themselves in the role of system integrators and minimize non-recurring costs; i.e. avoid burning money. Of course, off-the-shelf is not such an easy option for every discipline or domain. Mechanics, for example, is often designed in a very ad-hoc way, for specific mission requirements. This is understandable since the payload defines many factors (as we saw in the Conceptual Design section) such as sensor and actuator placement, solar panels, battery size, etc. But avionics is an area where options in the market are many and increasing. Computing is probably the one area that benefits the most from a rich ecosystem of vendors and options. Today, there are fewer and fewer on-board computing requirements that cannot be met with very capable off-the-shelf commodity embedded computers.
A very generic avionics architecture for a spacecraft looks as depicted in Fig. 5.20.

Figure 5.20 - A generic avionics block diagram
The green boxes are different functional _chains_ or blocks that provide some essential capability for the spacecraft. Regardless of what type of application or mission the spacecraft is supposed to perform, those functional blocks are almost always present; in other words, you cannot do space without them. The “payload” yellow box encloses the functionality of the actual application which gives a purpose to the mission, which can be:
● Connectivity:
○ IoT
○ Satcom
○ LaserComm
● Earth Observation:
○ Optical
○ Radar
○ Infrared
● Other:
○ In-orbit robotics
○ Debris Removal, in-orbit assembly, inspection, etc.
○ Pure science: Atmosphere, Astronomy, etc.
Some of these functional chains or blocks do not need to have a computer inside every time; they can be passive as well. For example, Thermal Control can be passive (using insulation, paints, and radiators) hence computing will not be present there.
What stands out from the figure is that spacecraft avionics needs a lot of interconnection. This means the functional chains must exchange data with each other to couple/combine their functionalities for the global function of the spacecraft to emerge. That data is in the form of commands, telemetry, or generic data streams such as files, firmware binaries, payload data, etc. The architecture is recursive, which means the functional chains have internal composition as well, which will (in most cases) also require interconnection. For example, for the attitude control subsystem, the internal composition is depicted in Fig. 5.21.

Figure 5.21 - AOCS Functional Chain as a member of the avionics architecture
Spacecraft function coupling highly depends on an aggregation of communication buses. It is clear that interconnection is probably the most important functional coupling requirement of any spacecraft avionics architecture. A spacecraft with poor interconnection between functional chains will see its performance and concept of operations greatly affected. This is a factor that is usually overlooked by space startups: low-speed, low-bandwidth buses are chosen at the early stages of the project, only to find out later that the throughputs are insufficient for the overall performance. Changing interconnection buses at late stages can be costly, both in money and time. With high-speed serial buses and high-performance processors dropping in price and becoming more and more accessible, there is no reason not to design the avionics to be highly interconnected using high-speed connections.
Historically, spacecraft avionics has used hybrid interconnect approaches. Most typically, ad-hoc, daisy-chain based topologies, where cables come out from a box and go inside the next one. Legacy spacecraft avionics feature a fair deal of “private” buses; i.e. buses that are only accessible by some subsystems and not from the rest of the architecture. When discussing interconnection, there are two different levels to consider:
● **Subsystem level**: how a subsystem chooses to connect and functionally couple with its internal components.
● **System level**: how different subsystems connect to each other to provide the spacecraft “global” function. For example, how the command and data handling subsystem connects to the power subsystem, and vice versa.
At the subsystem level, the approach has been hybrid as well. Typically:
● Box-centric “star” approach: the subsystem main unit (which usually hosts its CPU) resides in a box of customized form factor and this box is the central “concentrator” of the subsystem. Everything flows towards it. The box exposes a set of connectors. Then, different harnesses and cables come in and out from those connectors, towards external peripherals. These peripherals can be either point-to-point or connected through a bus, or both (Fig. 5.22).

Figure 5.22 - Subsystem federated architecture
In this type of design, the mechanical functional coupling between different parts of the subsystem is likely different for the different peripherals; i.e. different types of connectors, pinouts, harness, etc.
● Backplane: In this approach, the computing unit and the peripherals share a mechanical interface which allows them to connect to a board (called backplane) acting as the mechanical foundation. The peripherals connect by sliding in through slot connectors and mating orthogonally to the backplane. The backplane not only provides the mechanical coupling but also routes signal and power lines between all the different modules connected to it (Fig. 5.23).

Figure 5.23 - A backplane connecting 1 CPU Unit and 2 Peripheral Boards
How to route the signals in the backplane is a design decision, since the backplane is basically a printed circuit board like any other. The backplane approach quickly gained popularity among designers: the benefit of routing signals in a standardized way soon became apparent. This made it possible for multiple vendors to interconnect their products in backplanes and achieve interoperability. Several backplane standards proliferated, but one stood out as probably the most popular of those years: VME (Versa Module Europe), which is still in use today in some legacy applications. VME is one of the early open-standard backplane architectures. It was created to enable different companies to create interoperable computing systems, following standard form factors and signal routing. Among the typical components in the VME ecosystem you can find processors, analog/digital boards, etc., as well as chassis (housings), backplanes, power supplies, and other subcomponents. System integrators benefited from VME in the following ways:
● It provided multiple vendors to choose from (supply chain de-risking)
● A standard architecture versus costly proprietary solutions
● A tech platform with a known evolution plan
● Shorter development times (not having to start from scratch)
● Lower non-recurring costs (by not starting from scratch)
● An open specification to be able to choose to do subsystems in-house if needed
The VME specification was designed with upgrade paths so that the technology would be usable for a long time. VME was based on the Eurocard form factor, where boards are typically 3U or 6U high. The design was quite rugged; with shrouded pins and robust connectors, the form factor became a favorite for many military, aerospace and industrial applications. VME was upgraded to VME64x (VITA 1.1) while retaining backwards compatibility. Over the years, though, even these upgrades could not provide enough bandwidth for many applications. Then, switched fabrics entered the game.
## Integrated Modular Avionics (IMA)
A modular subsystem interconnect would be only partially exploited without a software architecture on top that matches and exploits this modular approach. Such a software architecture is discussed in this section.
So far we have assessed interconnection approaches, which generally relate to the way signals are routed between modules, slots, and backplanes, but not much has been said about the software running on the CPUs connected to those backplanes.
Space is a conservative industry, and for valid reasons. Spacecraft designers don’t get the chance to push the reset button in space if something hangs, so things must be done in special ways to minimize the chance of a failure which could render the mission useless, with all the losses and damages associated with that. Surprisingly, the aeronautical industry is less conservative when it comes to exploring new architectural approaches, despite its extremely strict safety requirements. The main reason for this is a never-ending search for fuel efficiency, which means weight reduction. Due to this, in the last fifteen or twenty years the architecture behind aerospace avionics development has shifted its paradigm considerably. The federated architecture (as in, one computer assigned to one functionality) that was popular up to the end of the century is being replaced by a different approach, called Integrated Modular Avionics (IMA). Some sources place the origins of the IMA concept in the United States, with the new F-22 and F-35 fighters, from where it migrated to the commercial airliner sector. Others say the modular avionics concept has been used in business jets and regional airliners since the late 1980s or early 90s. The modular approach is also seen on the military side in tankers and transport aircraft such as the KC-135 and C-130, as well as in the Airbus A400M (Ramsey, 2007).

In a federated architecture, a system’s main function is decomposed into smaller blocks that provide certain specific functions. Each black box, often called a Line Replaceable Unit (LRU), contains the hardware and software required for it to provide its function. In the federated concept, each new function added to an avionics system requires the addition of new LRUs. This means that there is a linear correlation between functionality and mass, volume and power; i.e. every new functionality proportionally increases all these factors. What is more, for every new function added to the system there is a consequent increase in multidisciplinary configuration control efforts, updates, iterations, etc. This approach quickly met a limit. The aerospace industry understood that the classical concept of “one function maps to one computer” could no longer be maintained. To tackle this issue, the IMA (Integrated Modular Avionics) concept emerged. Exploiting the fact that software does not weigh anything in and of itself, IMA allowed retaining some advantages of the federated architecture, like fault containment, while decreasing the overhead of separating each function physically from others. The main architectural principle behind IMA is the introduction of a shared computing resource which hosts functions from several LRUs. This means function no longer maps 1:1 to the physical architecture; one physical computing unit (CPU) can share its computing resources to execute more than one function.
> [!important]
> A contradiction surrounds the IMA concept. It could be argued that IMA proposes an architecture that technology has already rendered obsolete: centralized architectures. With embedded processors, memories and other devices becoming more reliable and less expensive, surely this trend should favor _less_ rather than more centralization. Thus, following this argument, a modern avionics architecture should be more, not less, federated, with existing functions “deconstructed” into smaller components, and each having its own processor (Rushby, 1999). There is some plausibility to this argument, but the distinction between the “more federated” architecture and centralized IMA proves to be debatable on closer inspection. A federated architecture is one whose components are very loosely coupled—meaning that they can operate largely independently. But the different elements of a function—for example, orbit and attitude control - usually are rather tightly coupled so that the deconstructed function would not be a federated system so much as a _distributed_ one—meaning a system whose components may be physically separated, but which must closely coordinate to achieve some collective purpose. Consequently, a conceptually centralized architecture will be, internally, a distributed system, and the basic services that it provides will not differ in a significant way from those required for the more federated architecture (Rushby, 1999).
IMA quickly gained traction in aeronautics, and its success caught the attention of the space industry, mainly the European Space Agency (ESA). IMA became of interest for space applications because it allows for mass, volume and power savings by pushing more functionality to the software.

Figure 5.51 - Federated (left) vs Integrated (right) architectures
Combining multiple functions on one processing/computing module introduces specific considerations and requirements which are not relevant for regular federated LRUs. The physical separation that existed between LRUs must now be provided virtually for applications running on the same core processing module (CPM). Furthermore, sharing the same computing resource influences the development process, because new dependencies appear among different elements. Common input/output resources are provided by I/O modules (IOMs), which interface with sensors and actuators, acting as a bridge between them and the CPMs. A core processing module that also contains I/O capabilities is called a Core Processing Input/Output Module (CPIOM). If the architecture does not make use of CPIOMs, the I/O layer remains as _thin_ as possible: removing all software from the IOMs considerably reduces complexity and, therefore, verification and configuration control costs. The physical separation which was inherent in the LRUs of the federated architecture must, from a software point of view, be virtually enforced in an IMA platform: the performance of each application shall be unaffected by the presence of others. This separation is provided by partitioning the common resources and assigning the partitioned resources to applications. The partitioning of processing power is enforced by strictly limiting the time each application can use the processor and by restraining its memory access. Memory access is controlled by hardware, preventing partitions from interfering with each other. Each software application is therefore partitioned in space and time.
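To make the mechanism concrete, here is a minimal sketch, in C, of how a kernel could enforce both forms of partitioning at every partition switch. All names (`partition_t`, `mpu_program_region()`, the memory addresses) are hypothetical; certified IMA platforms implement this logic inside a qualified kernel or hypervisor with far more machinery (context saving, health monitoring, and so on).

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical descriptor for one partition: its time budget and the
 * single memory region it is allowed to touch. */
typedef struct {
    uint32_t  window_us;   /* guaranteed execution window (temporal) */
    uintptr_t mem_base;    /* base of the partition's private memory */
    size_t    mem_size;    /* size of that region (spatial)          */
    void    (*entry)(void);
} partition_t;

/* Assumed platform hooks: restrict the MPU so that only the given
 * region is accessible, arm a one-shot timer, and switch context. */
extern void mpu_program_region(uintptr_t base, size_t size);
extern void timer_arm_us(uint32_t us);
extern void context_switch_to(void (*entry)(void));
extern void app_a_main(void);
extern void app_b_main(void);

static partition_t table[] = {
    { 5000, 0x20000000, 0x4000, app_a_main },  /* partition A: 5 ms */
    { 3000, 0x20004000, 0x4000, app_b_main },  /* partition B: 3 ms */
};
static size_t current = 0;

/* Timer interrupt handler: the current partition's window has ended.
 * The MPU is reprogrammed *before* the CPU is handed over, so a faulty
 * partition can neither overrun its time slot nor touch its neighbor's
 * memory. */
void on_partition_timer(void)
{
    current = (current + 1) % (sizeof table / sizeof table[0]);
    mpu_program_region(table[current].mem_base, table[current].mem_size);
    timer_arm_us(table[current].window_us);
    context_switch_to(table[current].entry);
}
```

Even in this toy version the essential property is visible: the timer, not the application, decides when a window ends, and the MPU, not programmer discipline, decides what memory is reachable.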
Avionics software applications have different levels of criticality, based on the effects a failure in a given application would have on the system. These criticality levels are specified in standards such as RTCA/DO-178C[[5]](#_ftn5), which defines five levels (from A: catastrophic, to E: no safety effect). Software development efforts and costs grow exponentially with the criticality level required for certification, since the process of testing and validation becomes more complex. In an integral, non-partitioned architecture, all the software in a functional block has to be validated at the same criticality level. The IMA approach enables software partitions with different criticality levels to be integrated on the same platform and certified separately, which eases the certification process. Since each application is isolated from the others, faults are guaranteed not to propagate, provided the separation “agent” is able to enforce that isolation. As a result, it is possible to create an integrated system that has the same inherent fault containment as a federated one. To achieve such containment, Rushby specifies a set of guidelines (Rushby, 1999):
● Gold Standard for Partitioning: A partitioned system should provide fault containment equivalent to an idealized system in which each partition is allocated an independent processor and associated peripherals and all inter-partition communications are carried on dedicated lines.
● Alternative Gold Standard for Partitioning: The behavior and performance of software in one partition must be unaffected by the software in other partitions.
● Spatial Partitioning: Spatial partitioning must ensure that software in one partition cannot change the software or private data of another partition (either in memory or in transit) nor command the private devices or actuators of other partitions.
● Temporal Partitioning: Temporal partitioning must ensure that the service received from shared resources by the software in one partition cannot be affected by the software in another partition. This includes the performance of the resource concerned, as well as the rate, latency, jitter, and duration of scheduled access to it.
The mechanisms of partitioning must block the spatial and temporal pathways for fault propagation by interposing themselves between avionics software functions and the shared resources that they use.
### Application Interfaces and ARINC-653
The previous section did not discuss the entity in charge of guaranteeing that different applications run in different partitions on top of a shared processing platform, with sufficient isolation. ARINC 653 defines a standard interface between software applications and the underlying operating system. This middle layer is known as the application executive (APEX) interface. The philosophy of ARINC 653 is centered on a robust time- and space-partitioned operating system, which allows independent execution of different partitions. In ARINC 653, a partition is a portion of avionics application software that is subject to robust space and time partitioning. Partitions occupy a role similar to processes in regular operating systems, having their own data, context, attributes, etc. The underlying architecture of a partition is similar to that of a multitasking application within a general-purpose computer: each partition consists of one or more concurrently executing processes (threads), sharing access to processor resources based upon the requirements of the application. An application partition is limited to using the services provided by the APEX defined in ARINC 653, while system partitions can use interfaces that are specific to the underlying hardware or platform.
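As an illustration, the fragment below shows the general shape of partition start-up code against the APEX services defined in ARINC 653 Part 1 (`CREATE_PROCESS`, `START`, `SET_PARTITION_MODE`, `PERIODIC_WAIT`). It is a sketch only: the header name is a placeholder, and exact types and constants vary between implementations.

```c
#include <string.h>
#include "apex.h"   /* placeholder; vendor-specific APEX headers vary */

/* One periodic process inside this partition. */
static void sensor_loop(void)
{
    RETURN_CODE_TYPE rc;
    for (;;) {
        /* ... read inputs, run the control law, write outputs ... */
        PERIODIC_WAIT(&rc);        /* sleep until the next period */
    }
}

void partition_main(void)          /* invoked once, at partition start */
{
    PROCESS_ATTRIBUTE_TYPE attr = {0};
    PROCESS_ID_TYPE pid;
    RETURN_CODE_TYPE rc;

    strncpy(attr.NAME, "SENSOR_LOOP", sizeof attr.NAME);
    attr.ENTRY_POINT   = (SYSTEM_ADDRESS_TYPE)sensor_loop;
    attr.STACK_SIZE    = 4096;
    attr.BASE_PRIORITY = 10;
    attr.PERIOD        = 100000000;  /* 100 ms, in nanoseconds */
    attr.TIME_CAPACITY = 100000000;  /* budget equals the period */
    attr.DEADLINE      = HARD;

    CREATE_PROCESS(&attr, &pid, &rc);
    START(pid, &rc);

    /* Hand the partition over to the APEX scheduler. */
    SET_PARTITION_MODE(NORMAL, &rc);
}
```

Note that the application never touches the processor or the clock directly; everything it may do is mediated by the APEX services, which is what makes the isolation enforceable.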

Figure 5.52 - Building blocks of a partitioned system
Each partition is scheduled onto the processor on a fixed, predetermined, cyclic basis, guaranteeing temporal segregation. The static schedule is defined by specifying the period and the duration of each partition's execution. The period of a partition is the interval at which computing resources are assigned to it, while the duration is the amount of execution time the partition requires within one period. The periods and durations of all partitions compose a major time frame, the basic scheduling unit that is cyclically repeated.
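Assuming, for illustration, two partitions with 25 ms and 50 ms periods, the major time frame is their least common multiple, 50 ms, and the static schedule can be captured in a table like the hypothetical one below, together with an offline check that the windows actually fit. All names here are invented; real platforms hold this in qualified configuration data.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical static schedule entry: where inside the major frame a
 * partition window starts and how long it lasts (microseconds). */
typedef struct {
    uint32_t offset_us;
    uint32_t duration_us;
} window_t;

/* Major time frame = LCM of all partition periods (25 ms, 50 ms). */
#define MAJOR_FRAME_US 50000u

static const window_t schedule[] = {
    {     0, 10000 },   /* P1 (25 ms period): runs twice per frame */
    { 10000,  5000 },   /* P2 (50 ms period): runs once            */
    { 25000, 10000 },   /* P1 again, one period later              */
    { 35000,  5000 },   /* spare / idle window                     */
};

/* Offline check: windows must not overlap and must fit in the frame. */
static bool schedule_is_valid(void)
{
    for (size_t i = 0; i < sizeof schedule / sizeof schedule[0]; i++) {
        uint32_t end = schedule[i].offset_us + schedule[i].duration_us;
        if (end > MAJOR_FRAME_US)
            return false;
        if (i + 1 < sizeof schedule / sizeof schedule[0] &&
            end > schedule[i + 1].offset_us)
            return false;
    }
    return true;
}
```

Because the table is fixed at design time, this kind of check can be run (and certified) long before the software ever flies.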
Each partition has predetermined areas of memory allocated to it. These unique memory areas are identified based upon the requirements of the individual partitions and vary in size and access rights.
### Benefits of using Integrated Modular Avionics (IMA)
The expected benefits from implementing IMA solutions are:
● Optimization and savings in mass, volume and power consumption;
● Simpler Assembly, Integration and Verification (AIV) activities, due to a smaller number of physical units and simpler harness;
● Focused development efforts: developers can focus wholly on their software, instead of on the complete development of an LRU;
● Retention of federated-system properties such as fault containment;
● Incremental validation and certification of applications.
IMA has already been applied in the development of several aircraft, most notably the A380 and the Boeing 787. The Airbus and Boeing programs reported savings in mass, power and volume of 25%, 50% and 60% respectively (Itier, 2007), and IMA eliminated 100 LRUs from the Boeing 787 (Ramsey, 2007).
### IMA for Space (IMA-SP)
There are some major differences between the space and aeronautical domains which constrain the development of space avionics systems. While most aeronautical systems operate with human intervention, most space systems are unmanned. The IMA approach used in the aeronautical domain therefore cannot be transplanted directly to space. The reasons are: (1) the processing platforms currently used in space are less powerful than the platforms used in aeronautics, i.e. there is a technology gap; and (2) there are strong requirements that hardware and software modules already developed for current platforms remain compatible with any new architecture, in order to keep the cost of the architecture transition low (Herpel et al., 2016).
Radiation is a major threat to onboard electronics and software, since it can cause electrical fluctuations and software errors. Additionally, space systems are very constrained in terms of available power, mass and volume. It is expensive and impractical to develop and deploy complex platforms, so most systems have very limited hardware. Satellites are usually limited to one or two main computers, connected to the required transducers and payload equipment through robust data buses. Compared with commercial aviation, the space market largely lacks standardization: each major player in the industry designs and operates its systems using its own internal principles. The European Space Agency, together with the European Cooperation for Space Standardization[[6]](#_ftn6) (ECSS, of which ESA is a member), has invested a great amount of effort over the last decades in standardizing space engineering across Europe. ESA has defined[[7]](#_ftn7) several ground rules which guide the IMA-SP system platform specification. They intend to adapt IMA to space without totally breaking with the current space avionics approach.
To enable the use of an operating system while meeting the requirement to implement time and space partitioning, ESA defined a two-level software executive. The system executive level is composed of a software hypervisor that segregates computing resources between partitions. This hypervisor is responsible for the robust isolation of the applications and for implementing the static CPU allocation schedule. The second level, the application level, is composed of the user's applications running in isolated environments (partitions). Each application can implement a system function by running several tasks/processes. The multi-tasking environment is provided by a paravirtualized operating system which runs in each partition. These partition operating systems (POS) are modified to operate on top of the underlying hypervisor, which supplies a software interface layer to which the operating systems attach. In the context of IMA for space, ESA selected RTEMS as the main partition operating system.
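The sketch below illustrates the paravirtualization idea: where a native board support package would touch hardware registers, the modified POS issues hypercalls instead. The `hypercall` interface and `HC_*` numbers are invented for illustration; real hypervisors such as XtratuM define their own hypercall tables and calling conventions.

```c
#include <stdint.h>

/* Hypothetical trap into the hypervisor; on a real platform this is a
 * software-trap instruction with a defined register convention. */
extern long hypercall(int number, ...);

#define HC_GET_TIME  1
#define HC_SET_TIMER 2

/* Where a native RTEMS BSP would read a hardware clock register, the
 * paravirtualized BSP asks the hypervisor for the time instead: */
uint64_t bsp_clock_get_ns(void)
{
    uint64_t now = 0;
    hypercall(HC_GET_TIME, &now);
    return now;
}

/* Likewise, arming the tick timer becomes a hypercall, so the
 * hypervisor alone stays in control of physical interrupts and can
 * keep enforcing the partition schedule. */
void bsp_timer_arm_ns(uint64_t deadline_ns)
{
    hypercall(HC_SET_TIMER, deadline_ns);
}
```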
Three hypervisors are currently being evaluated by ESA as part of the IMA-SP platform: XtratuM[[8]](#_ftn8), AIR[[9]](#_ftn9) and PikeOS[[10]](#_ftn10). XtratuM is an open-source hypervisor available for the x86 and LEON architectures, developed by the Universidad Politécnica de Valencia. Despite providing services similar to the ARINC 653 standard, XtratuM does not aim to be ARINC 653 compatible. PikeOS is a commercial microkernel which supports many APIs and virtualized operating systems. Finally, AIR is an open-source hypervisor developed by GMV and based on RTEMS. The IMA-SP specification also leaves the option of having partitions without an RTOS; these “bare metal” partitions can be used for very critical single-threaded code which does not require a full-featured real-time operating system.
### Networking and Input/Output Considerations for IMA-SP
In a partitioned system, the quantification of time spent in I/O tasks is even more critical, since it must be known on whose behalf I/O tasks are performed: the cost of these tasks should be booked to the applications that actually benefit from them. Robust partitioning demands that applications use only those time resources that were reserved for them during the system design phase. I/O activities shall, hence, be scheduled for the periods in which the applications that use those specific capabilities are being executed. Furthermore, safety requirements may forbid some partitions from being interrupted by hardware during their guaranteed execution time slices. In consequence, it must be ensured that I/O devices have enough buffering capability at their disposal to store data while non-interruptible applications are running.
Segregating a data bus is harder than segregating memory and processor resources. I/O handling software must be able to route data from an incoming bus to the application to which that data belongs. General-purpose operating systems use network stacks to route data to different applications. In a virtualized and partitioned architecture, incoming data must be shared not only with applications in the same partition but across partitions. Each partition operating system could manage its own devices, but this is only feasible if devices are not shared among partitions: if a device is used by more than one partition, there is the latent risk of one partition leaving the shared device in an unknown state, thereby influencing the other. In aeronautical IMA this problem is approached by using partitioning-aware data buses like AFDX. AFDX devices are smart in the sense that they can determine to which partition of an end system the data belongs. This point is critical, since I/O in the platform must be managed in such a way that the behavior of one partition cannot affect the I/O services received by another. From the rationale exposed in the last paragraphs we can sum up a set of characteristics the I/O system must have to respect a partitioned design (a sketch of such an I/O partition follows the list):
● The I/O module shall be generic and therefore decoupled from the application.
● The I/O module shall be robust, in the sense that it can be used by more than one partition without interference.
● The I/O module shall be able to route data to its rightful owner (a given application in a given partition).
● The I/O module shall be quantifiable (i.e. its execution time must be bounded and measurable).
● The I/O module shall not interrupt, disrupt or have any kind of impact on the time and space partitioning of the applications.
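Here is a minimal sketch of such an I/O partition, assuming ARINC 653 queuing ports for inter-partition communication; `bus_read()`, `port_for()` and the one-byte routing rule are hypothetical stand-ins for a real driver and configuration table.

```c
#include <stdint.h>
#include <stddef.h>
#include "apex.h"   /* placeholder; vendor-specific APEX headers vary */

#define MAX_MSG 256

/* Assumed device driver and static routing table. The I/O partition is
 * the *only* owner of the physical device, so no application partition
 * can ever leave it in an unknown state. */
extern size_t bus_read(uint8_t *buf, size_t max);
extern QUEUING_PORT_ID_TYPE port_for(uint8_t dest_partition);

void io_partition_loop(void)
{
    uint8_t buf[MAX_MSG];
    RETURN_CODE_TYPE rc;

    for (;;) {
        size_t n = bus_read(buf, sizeof buf);
        if (n < 2)
            continue;
        /* First byte identifies the destination partition; forward the
         * payload through that partition's inter-partition port. */
        SEND_QUEUING_MESSAGE(port_for(buf[0]),
                             (MESSAGE_ADDR_TYPE)&buf[1],
                             (MESSAGE_SIZE_TYPE)(n - 1),
                             0,   /* zero timeout: keep I/O time bounded */
                             &rc);
    }
}
```

Because the forwarding call never blocks, the time the I/O partition spends per message stays bounded and measurable, which is exactly the quantifiability requirement listed above.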
Another option for networking is TTEthernet[[11]](#_ftn11). A TTEthernet network includes features such as global time through clock synchronization, and offers fault isolation mechanisms to manage channel and node failures. TTEthernet defines three types of data flow: Time-Triggered (TT) traffic, which has the highest priority; Rate-Constrained (RC) traffic, which is equivalent to AFDX traffic; and Best-Effort (BE) traffic. This makes TTEthernet suitable for mixed-criticality applications, such as avionics and automotive systems where highly critical control functions (say, a flight management system) cohabit with less critical functions (an entertainment system). By adding TTEthernet switches, guaranteed hard real-time communication pathways can be created in an Ethernet network without impacting any of the existing applications. It can be used for the design of deterministic control systems, fault-tolerant systems and infotainment/media applications which require multiple large congestion-free data streams (Robati et al., 2014). The TTEthernet product family supports bandwidths of 10 and 100 Mbit/s, 1 Gbit/s and higher, over both copper and fiber-optic physical networks, and enables purely synchronous and asynchronous operation over the same network (Herpel et al., 2016). The combination of IMA and TTEthernet provides error isolation not only at the level of the modules, through partitioning, but also at the level of the network, through the different traffic classes and the concept of virtual links. In addition, TTEthernet enables the safe integration of data traffic with different performance and reliability requirements (Robati et al., 2014).
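As a purely illustrative sketch (the types and dispatch policy below are invented, not the AS6802 wire format), the three traffic classes boil down to three different transmission rules at a switch port:

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { TRAFFIC_TT, TRAFFIC_RC, TRAFFIC_BE } traffic_class_t;

typedef struct {
    traffic_class_t cls;
    uint16_t virtual_link;  /* VL identifier, as in AFDX/RC traffic */
    uint64_t tx_time_ns;    /* only meaningful for TT frames        */
} frame_meta_t;

/* TT frames leave exactly at their scheduled instant; RC frames leave
 * once their bandwidth-allocation gap (BAG) has elapsed and the link
 * is free; BE frames get whatever capacity is left over. */
bool may_transmit(const frame_meta_t *f, uint64_t now_ns,
                  bool bag_elapsed, bool link_idle)
{
    switch (f->cls) {
    case TRAFFIC_TT: return now_ns >= f->tx_time_ns;  /* schedule wins */
    case TRAFFIC_RC: return bag_elapsed && link_idle; /* rate policed  */
    case TRAFFIC_BE: return link_idle;                /* best effort   */
    }
    return false;
}
```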
## Conclusion: Don’t Make A Sandwich from Scratch
Time to market and wheel reinvention do not combine very well. A while ago, some guy on YouTube decided to make himself a sandwich totally from scratch; it took him six months and 1,500 dollars. NewSpace organizations cannot make their sandwiches from scratch, yet they often do. The only way to get to space quickly and reliably is to pick things from the space shelf, use standardized form factors and high-speed interconnects, and keep the engineering mindset at the system level. Avionics and software modularity not only streamlines interoperability between NewSpace suppliers and actors in general; it also enables scalability, so a single baseline architecture can support small, medium and large satellites. Modular avionics, combined with high-speed wireless inter-satellite links, can also enable advanced concepts such as in-orbit assembly, in-orbit servicing and segmented architectures.

Figure 5.53 – Modularity, scalability and interconnection concepts and how they relate
**References**
Agrawal, B. N. (1986). Design of Geosynchronous Spacecraft. Prentice-Hall, Upper Saddle River, NJ.
Arianespace. (2016). Ariane 5 User's Manual (Issue 5, Rev 2 ed.).
Berk, J. (2009). Systems Failure Analysis. ASM International.
Brown, C. D. (1996). Spacecraft Propulsion (J. S. Przemieniecki, Ed.). AIAA Education Series.
Brown, C. D. (2002). Elements of Spacecraft Design. AIAA Education Series. 10.2514/4.861796
CCSDS. (2012). IP OVER CCSDS SPACE LINKS - RECOMMENDED STANDARD CCSDS 702.1-B-1. CCSDS.
ECSS. (2002). ECSS-E-10-03A - Space engineering - Testing. European Cooperation for Space Standardization (ECSS).
Ghasemi, A., Abedi, A., & Ghasemi, F. (2013). Propagation Engineering in Radio Links Design. Springer. 10.1007/978-1-4614-5314-7
Goebel, D. M., & Katz, I. (2008). Fundamentals of Electric Propulsion: Ion and Hall Thrusters. JPL Space Science and Technology Series.
Herpel, H., Schuettauf, A., Willich, G., Tverdyshev, S., Pletner, S., Schoen, F., Kiewe, B., Fidi, C., Maeke-Kail, M., & Eckstein, K. (2016). Open modular computing platforms in space — Learning from other industrial domains. IEEE Aerospace Conference, 1-11.
Itier, J.-B. (2007). A380 Integrated Modular Avionics The history, objectives and challenges of the deployment of IMA on A380. Artist - European Network of Excellence in Embedded Systems Design. http://www.artist-embedded.org/docs/Events/2007/IMA/Slides/ARTIST2_IMA_Itier.pdf
Kuninaka, H., & Molina-Morales, P. (2004). Spacecraft charging due to lack of neutralization on Ion thrusters. Acta Astronautica, 55(1), 27-38.
Montenbruck, O., & Gill, E. (2001). Satellite Orbits Models, Methods, and Applications. Springer.
QY Research. (2020, Apr 17). Global Lithium-ion Power Battery Market Insights, Forecast to 2026 (preview). https://www.marketstudyreport.com/reports/global-lithium-ion-power-battery-market-insights-forecast-to-2026
Ramsey, J. (2007, February 1). Integrated Modular Avionics: Less is More. Aviation Today. https://www.aviationtoday.com/2007/02/01/integrated-modular-avionics-less-is-more/
Robati, T., El Kouhen, A., Gherbi, A., Hamadou, S., & Mullins, J. (2014). An Extension for AADL to Model Mixed-criticality Avionic Systems Deployed on IMA architectures with TTEthernet. Conference: 1st Architecture Centric Virtual Integration Workshop (ACVI), ceur-ws 1233.
Rushby, J. (1999). Partitioning in Avionics Architectures: Requirements, Mechanisms, and Assurance. FAA Technical Report DOT/FAA/AR-99/58.
VITA. (2015). ANSI/VITA 78.00-2015 SpaceVPX System. ANSI.
VITA. (2019). ANSI/VITA 65.0-2019 - OpenVPX System Standard. ANSI.
Wertz, J. (Ed.). (1990). Spacecraft Attitude Determination and Control. Kluwer Academic Publishers.
---
[[1]](#_ftnref1) See for example SAOCOM Mission
[[2]](#_ftnref2) The Vernal Equinox occurs about when the Sun appears to cross the celestial equator northward. In the Northern Hemisphere, the term _vernal point_ is used for the time of this occurrence and for the precise direction in space where the Sun exists at that time.
[[3]](#_ftnref3) Voyager I and II are powered by RTGs. They have been working since 1977, even though power output has decreased considerably.
[[4]](#_ftnref4) https://www.picmg.org/openstandards/compactpci-serial/
[[5]](#_ftnref5) DO-178C, Software Considerations in Airborne Systems and Equipment Certification, is the primary document by which certification authorities such as the FAA, EASA and Transport Canada approve all commercial software-based aerospace systems. The document is published by RTCA, Incorporated.
[[6]](#_ftnref6) https://ecss.nl/
[[7]](#_ftnref7) https://www.esa.int/Enabling_Support/Space_Engineering_Technology/Shaping_the_Future/IMA_Separation_Kernel_Qualification_-_preparation
[[8]](#_ftnref8) https://fentiss.com/products/hypervisor/
[[9]](#_ftnref9) https://indico.esa.int/event/225/contributions/4307/attachments/3343/4403/OBDP2019-S02-05-GMV_Gomes_AIR_Hypervisor_using_RTEMS_SMP.pdf
[[10]](#_ftnref10) https://www.sysgo.com/products/pikeos-hypervisor/
[[11]](#_ftnref11) https://www.sae.org/standards/content/as6802/