# The Quality of Quality
The result of our activities in designing and building complex systems can be either good, bad, or somewhere in between. How do we gauge "good", "bad", or mediocre in this context? "Bad" could mean many different things:
- The artifact has defects
- The artifact is difficult to use
- The artifact is unsafe
- The artifact is inefficient in the way it works
- The artifact is unintuitive
One thing is for sure: assessing whether a system "works" or not is not a Boolean yes/no question. Things might "work" only marginally. But also, there are safety measures, performance measures, compliance measures, and many other nuances when it comes to assessing how close (or how far) a system's performance is from what was intended.
The gap between the ideal image of what was intended for a technical artifact and what was delivered tends to be captured by the somewhat ambiguous term "quality".
The ISO/IEC Systems and Software Engineering Vocabulary (ISO/IEC 2009) has the following set of definitions for quality:
1. The degree to which a system, component, or process meets specified requirements
2. Ability of a product, service, system, component, or process to meet customer or user needs, expectations, or requirements
3. The totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs.
4. Conformity to user expectations, conformity to user requirements, customer satisfaction, reliability, and level of defects present.
5. The degree to which a set of inherent characteristics fulfills requirements.
Quality is, in general, a slippery topic with rather lax terminology, riddled with overlapping concepts. Any research on the topic will throw up slightly similar yet substantially different terms, like a cloud of keywords:
- Quality Management (QM), usually institutionalized as a Quality Management System (QMS)
- Quality Assurance (QA)
- Quality Control (QC)
- Product Assurance (PA)
- Verification & Validation, sometimes also called Integration, Verification and Validation (IV&V).
- Testing: Unit tests, Functional tests, Qualification tests, Acceptance tests
- Inspection
Some industries are fond of certain of these terms, whereas other industries feel more comfortable using different ones to refer to the same thing. One thing is certain: there are standards galore, each one redefining the same thing in a slightly different way.
Quality is a multifaceted concept that can be defined in various ways depending on the perspective. Quality is not perceived the same by everyone.
Generally, it refers to the degree to which a set of characteristics of a product or service fulfills expectations. These expectations can be explicit, such as technical requirements, or implicit, like customer satisfaction. Here are some of the perspectives from which quality can be defined:
- Product-Based Perspective: Quality is seen as a measurable attribute or a set of attributes. In this view, products with more of these desired attributes or fewer defects are considered to have higher quality. This perspective is often used in manufacturing and production.
- User-Based Perspective: Here, quality is defined based on how well the product or service meets customers' needs and expectations. This view is more subjective, as it varies based on individual preferences and requirements.
- Manufacturing-Based Perspective: In this perspective, quality is about the absence of defects in the produced items. The focus is on the processes used to create the product or service, ensuring they are consistent and reliable.
- Value-Based Perspective: Quality is seen in terms of cost and price. A quality product or service provides performance or conformance at an acceptable price or conforms to specific requirements at a cost that represents value.
With all this in mind, we need to unpack quality into several different subtypes, or "dimensions":
- Quality of design
- Quality of production (which includes assembly, integration, and verification)
- Quality of procurement
- Quality of operations (which includes usability, user experience, and human factors)
- Bonus (last but not least): Quality of Management
These factors form a rectangle of sorts (or a pentagon, counting the bonus), and they must coexist in balance and harmony to produce a product of reasonable quality at the end of the road. Any partial approach will result in something of low quality.
A product with a high-quality design but poor quality of production will still be of subpar quality overall as it will most likely fail early in its lifetime due to workmanship issues.
A product with a high-quality design and high-quality production but poor quality of procurement will still be of low quality: the components will be perfectly assembled and integrated but will perform badly and most likely fail.
Any overarching quality strategy must, of course, consider the quality of procurement, because critical elements in the architecture will originate at third parties whose quality management policies may differ from the ones employed at the integrator. Note how quality is also "fractal" here: a supplier must also internally consider the "pentagon" of quality we referred to above: design, production, procurement, operations, and management.
A system designer and manufacturer must select its suppliers on the basis that the supplier has demonstrated consistent capability to furnish items or services of the type and quality level being procured, all supported by objective documentation. This may require a certain level of auditing and due diligence performed by the system manufacturer toward the supplier.
Needless to say, a product with perfect quality of design, quality of production, and quality of procurement but low quality of operations will still yield a low-quality product, because it will be unusable, inoperable, and therefore unreliable given the intricacies involved in using the device in the field.
Last but not least, with low-quality management, everything falls like a house of cards. For an organization pursuing quality and customer satisfaction, success greatly depends on how the workforce contributes to this effort. If quality does not occur at the workforce level, it will not occur at all.
We will see in the next section what can happen if those in power do not join forces to sponsor and nurture quality. There is no quality possible with low-quality management, short of a miracle. And engineering systems of good quality should not, if possible, rely on miracles.
Eons ago, in some past life, I was working for a company that had recently acquired an automated [[Printed Circuit Boards#PCB Assembly|PCB Assembly]] line. After reading lots of documentation, and not without a dose of trial and error, we managed to program the line: the solder paste printer, the pick & place machine, the reflow oven, etc. Once the line was put to work and started providing assembled boards, the testing activities showed that most of the boards did not work. There were multiple issues: misplaced components, crooked components, tombstoning, solder shorts, and overheated components. The yield of the line was very low, and overall quality was very poor, to say the least; we had almost a 100% defect rate. Upon careful investigation, the team realized there were multiple issues with the programming of the different stations of the line (quality of production). But this was not all. The boards [[Printed Circuit Boards#DFx Design for Manufacturability, Fabrication and Assembly|were clearly not designed for automated assembly]], as they had been originally designed to be assembled manually (low quality of design). Last but not least, there was no strong support from the organization for training (quality of management). There was no possibility whatsoever for these boards to be of high quality, considering that no factor in the "pentagon" was addressed properly.
## Quality Assurance vs Quality Control vs Testing
Perhaps the epitome of slightly similar yet substantially different terms, among all the terms frequently used interchangeably in the quality domain, are Quality Assurance, Quality Control, and Testing. They are very distinct activities, despite belonging to a common domain that undeniably connects them. These three activities belong to any QMS (Quality Management System), but they tend to lose identity when crammed together under one single umbrella term. So, I'll treat them separately.
The disambiguation is rather simple: ==Assuring quality is about defining the processes to achieve quality and strictly works in foresight; Quality Control is about verifying that actual instances of systems and products comply with the processes and meet quality criteria. QC works in hindsight (as in, the product must already exist). Testing supports QC by gauging the product to find the spots where quality criteria are not met.==
As an analogy, consider how a representative democracy works (or should work), with legislative, judicial, and executive branches checking and balancing each other. Quality works like the three powers in a democracy.
QA creates the *laws* (legislative branch) aligned with what the organization believes quality is about, whereas QC is the "judicial" branch, which interprets the laws for everyday life and takes action against those who do not stick to them. Testing is like the police, helping the judicial system bring under the law the reckless who do not stick to the rules. The distinction between QC and testing is rather relevant: quality control does not necessarily need to be the one executing the tests, but it needs to be the one that knows **what to test**.
The executive branch is rather self-explanatory: it's management. Management sponsors and empowers the lawmakers and governs under their laws. Perhaps the most important role of the executive branch is to "lead by example": if you see someone from management breaking the laws of quality, then it all falls like a house of cards since the whole process instantly loses legitimacy.
For a quality management system like this to work, there has to be a healthy balance between all three powers. Any partial approach renders the effort ineffective. Laws without real power to deter those wanting to break them will not work. That is what happens with a world-class QA and a weak QC. Moreover, chasing everyone to stick to half-baked laws sounds terrible as well. That is what a strong QC with a weak QA brings. Last but not least, giving too much power to the police without clear laws and a sober interpretation of them is mayhem. That is what testing without QC and QA looks like.
In reality, we see that true separation of powers is somewhat utopian. Organizations mix them up or make them overlap, by accident or by choice. This can be understandable in one way, but dangerous in another: you give people the power to create the laws, interpret them, and enforce and police them, and with lots of power comes lots of responsibility, giving way to "authoritarian" quality regimes where tools, methods, and metrics are all dictated by a group of enlightened individuals. In these situations, quality becomes manipulated to meet desired outcomes, and the probability of going "Volkswagen" increases considerably[^3].
Often, testing is given too high a status, when at the end of the day it is just another verification method, and it should always exist under the warm roof of QC and QA, not in isolation. Depending on the industry, testing could even be optional. In other industries, for instance in some manufacturing areas, you could do quality control by just eyeballing the product or measuring it with a ruler.
Why is blind testing pointless? Picture a hypothetical software organization that decides it has had enough of being "without quality", so it brings the best testers it can find on board. Picture these testers as the reincarnation of the mythical "Black Team" from IBM, which many books have romanticized[^4]. So, as the team settles in, they are given the task of "finding bugs". And they do; they are so good they find myriads of bugs. They manage to brick the systems consistently, to the rejoicing of the managers, who pat themselves on the back: we have higher quality, they think. But do they? As heroic (and perhaps apocryphal) as the Black Team story is, it is a story of a shortsighted approach to quality. It is plagued by hindsight: when you have found a bug in a system that is integrated and assembled and about to be shipped, you are coming too late to the party. The real reason to pat yourself on the back is catching defects earlier, or, even better, improving design, integration, and manufacturing methods so that defects do not happen, happen less frequently, or happen in less critical areas. Mind you, I am assuming here that quality means "fewer defects", which is not always the bar to use when defining quality, as we discussed.
The Black Team was working at the coarsest level of granularity possible: when the product was fully realized and about to go out the door. But given that the system has an internal composition, we must also test the underlying components.
Yet, as we split the system apart, the components lose critical stimuli from neighboring components, and hence we cannot test them properly anymore.
In Italo Calvino's book _Invisible Cities_—an ode to complexity—Kublai Khan (a Mongol emperor) asks Marco Polo if there is a single stone that supports a bridge. Marco Polo responds that a bridge stands not because of one solitary rock, but because of an arch drawn through many stones. It is sort of a paradox: ==we can fully test a bridge only when it becomes a bridge. We wouldn't gain much by testing individual stones.==
==Here's a philosophical flaw of quality management: to test the components of a whole in isolation as they are developed and to claim that the sum of individually tested components equals a high-quality integrated system.==
Others acknowledge this is purely wishful thinking and emphasize testing the integrated artifact extensively, putting extra effort into the places where the failure analyses indicate things can go the ugliest. This does not necessarily mean doing no testing at the component level, but rather balancing the best of both worlds.
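To make the arch-versus-stones point concrete in software terms, here is a minimal sketch (the functions, values, and the unit-mismatch scenario are all invented for illustration): two components each pass their isolated unit tests, while the integrated chain fails because the interface contract between them was never exercised.

```python
import unittest

# Component A: produces a distance, implicitly documented in kilometers.
def measure_distance_km(raw_sensor_counts: int) -> float:
    return raw_sensor_counts * 0.001  # invented scale factor

# Component B: plans a maneuver, but silently expects the distance in meters.
def plan_burn_duration_s(distance_m: float) -> float:
    return distance_m / 50.0  # invented 50 m/s approach speed

class UnitTests(unittest.TestCase):
    def test_component_a_alone(self):
        self.assertAlmostEqual(measure_distance_km(10_000), 10.0)

    def test_component_b_alone(self):
        self.assertAlmostEqual(plan_burn_duration_s(10_000.0), 200.0)

class IntegrationTest(unittest.TestCase):
    def test_integrated_chain(self):
        # Only the integrated test exercises the interface between A and B.
        distance = measure_distance_km(10_000)     # 10.0, meaning kilometers
        duration = plan_burn_duration_s(distance)  # interpreted as meters!
        self.assertGreater(duration, 100.0)        # fails: duration == 0.2

if __name__ == "__main__":
    unittest.main()
```

Both unit tests pass; only the integration test exposes the kilometers-versus-meters mismatch. The sum of individually tested components did not add up to a working whole.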
And here lies the true value of Quality Assurance. ==QA has to be the voice stating what makes sense versus what does not in the quest to achieve quality. QA has to formulate the "laws" that clearly state which granularity levels make sense to observe and unambiguously declare when "testing in isolation" is pointless and risky. But then, QC has to ensure (mind the difference, ***ensure***) that the different instances of the system and its components, as they come alive, comply with the established laws. To assure is to remove doubt, and QA is in charge of that. To ensure is to make sure, and QC is on that, relying on an army of testing police officers patrolling the streets to capture the outlaws.==
The quality strategy must ensure that the product is designed such that it can be produced with the specified level of quality in a repeatable manner. This means that the performance and characteristics of the product can be reproduced consistently across different models and serial production runs. It is also important to ensure that the product is designed such that it can be inspected and tested under representative conditions.
## Traceability and Identifiability
There is no quality possible without a paper trail. All the activities to convert an idea into a realized technical artifact leave footprints, and a substantial part of quality management is managing those footprints. These footprints can take the form of documents, design files, logs, etc.
The design of complex systems must ensure that a bidirectional and unequivocal relationship between parts, materials, or products and their associated documentation or records is established and maintained throughout the lifecycle of the system. This means establishing the methods and approaches for being capable of tracing data, personnel, and equipment related to procurement, manufacturing, inspection, test, assembly, integration, and operations activities.
A manufacturer must be capable of tracing backward from a finished product to the materials, parts, and sub-assemblies it contains, and capable of tracing forward from raw stock to the locations where materials, parts, and sub-assemblies ended up.
This also points to an important enabler of traceability: identifiability. System manufacturers must establish controls to ensure that identification numbers are assigned systematically to all parts involved in the design, manufacturing, and integration process. This includes keeping the identification numbers of scrapped or destroyed items so they are not used again, while ensuring that identification numbers, once allocated, are not changed unless the change is reviewed and authorized. What is more, a system manufacturer must put in place enterprise systems to handle the information volume that a traceability process will generate. We will discuss this a bit further when it [[Printed Circuit Boards#Reference Designators (RefDes)|comes to naming things]], and when it comes to [[Product Management]].
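As a toy illustration of what bidirectional traceability implies in practice, here is a minimal sketch (the data model, field names, and identifiers are assumptions made for illustration, not any standard's schema): each serialized item keeps links to what it was built into and to what went into it, so one can trace forward from a raw-material lot to the units that consumed it, and backward from a unit to everything it contains.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A serialized part, lot, or assembly together with its paper trail."""
    item_id: str                                        # never reused, even if scrapped
    parents: list[str] = field(default_factory=list)    # what it was built into
    children: list[str] = field(default_factory=list)   # what went into it
    records: list[str] = field(default_factory=list)    # inspection/test reports

registry: dict[str, Item] = {}

def register(item_id, children=(), records=()):
    item = Item(item_id, children=list(children), records=list(records))
    for child_id in children:                 # children must be registered first
        registry[child_id].parents.append(item_id)
    registry[item_id] = item
    return item

def trace_backward(item_id):
    """Everything (recursively) that went into an item."""
    found = list(registry[item_id].children)
    for child in registry[item_id].children:
        found += trace_backward(child)
    return found

def trace_forward(item_id):
    """Every higher-level assembly an item ended up in."""
    found = list(registry[item_id].parents)
    for parent in registry[item_id].parents:
        found += trace_forward(parent)
    return found

# Usage: a solder paste lot goes into a board, the board into a delivered unit.
register("LOT-PASTE-001", records=["incoming-inspection-017"])
register("PCB-SN-042", children=["LOT-PASTE-001"], records=["AOI-log-042"])
register("UNIT-SN-007", children=["PCB-SN-042"], records=["acceptance-report-007"])

print(trace_backward("UNIT-SN-007"))   # ['PCB-SN-042', 'LOT-PASTE-001']
print(trace_forward("LOT-PASTE-001"))  # ['PCB-SN-042', 'UNIT-SN-007']
```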
## Verification
> [!warning]
> This section is under #development
### Inspection
> [!warning]
> This section is under #development
### Testing
#### Testability and the Testing Environment
Testability is the capability of revealing how well something works. In artificial systems, testing determines whether design objectives are being met. A testable system must be _observable_, where observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs.
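For readers who like the control-theoretic flavor of the term, here is a minimal sketch of the classical observability check for a linear system (the toy double-integrator model is chosen purely for illustration): the internal states can be inferred from the outputs only if the observability matrix has full rank.

```python
import numpy as np

def observability_matrix(A: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Stack C, CA, CA^2, ..., CA^(n-1) for an n-state linear system x' = Ax, y = Cx."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Toy double integrator: state = [position, velocity].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C_position = np.array([[1.0, 0.0]])   # sensor measures position
C_rate_only = np.array([[0.0, 1.0]])  # sensor measures only the rate

for name, C in [("position sensor", C_position), ("rate-only sensor", C_rate_only)]:
    rank = np.linalg.matrix_rank(observability_matrix(A, C))
    verdict = "observable" if rank == A.shape[0] else "NOT observable"
    print(f"{name}: rank {rank}/{A.shape[0]} -> {verdict}")
```

With the rate-only sensor, the position can never be inferred from the outputs: the system exists and runs, but part of its internal state is invisible from the outside, which is exactly the untestability-by-non-observability case discussed next.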
But what happens when the system of interest proves to be, for a variety of reasons, untestable?
Untestability can happen because the system is, as said above, non-observable: the system is there, but there is no way of extracting any information about how well or how badly it is performing. The system might be untestable just because the sole act of testing would have implications that invalidate the pay-off of the testing process itself (think of nuclear warheads and the implications of testing them). Or the system might be untestable because the environment where it must perform is not replicable before full deployment, as is the case for spacecraft.
This introduces us to the concept of “post-deployment testing”: assuming the system is observable, it is only by fielding it in its real operational environment that we are capable of assessing its true performance. No staging, scaled-down, or simulation environment will stimulate the system in the same way nor provide the same amount of information compared to taking the system for a real spin.
Certain types of systems, for instance, complex adaptive systems such as organizations or even a country’s economy, follow a similar pattern. No scaled-down environment will provide the same information as the full population—what is more, mockups may provide misleading information.

> [!Figure]
> SpaceX's approach to testing (source: SpaceX's Twitter account)
A staging environment is a high-fidelity replica of the environment where the application software will run, put together for testing purposes. Staging environments are made to test code, builds, patches, and updates to ensure a good level of quality under production-like conditions before the application is rolled out to the field. Software failing in a staging environment doesn't affect customers or put any operator at risk, and it helps the development engineers understand what went wrong in a controlled manner. Truth is, a staging environment can mean different things to different people; differences between staging and production environments are common and problematic.
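Part of the problem is plain configuration drift between the two environments. Here is a minimal, hypothetical sketch (all keys and values are invented): diffing the two environment definitions at least makes the differences explicit instead of leaving them to be discovered in production.

```python
# Hypothetical environment definitions; in practice these would live in config files.
production = {
    "database": "postgres-14",
    "replicas": 12,
    "tls": True,
    "feature_flags": {"new_telemetry": False},
}
staging = {
    "database": "postgres-15",  # newer than production: tests may pass here, fail there
    "replicas": 1,              # load-balancing behavior never exercised
    "tls": False,
    "feature_flags": {"new_telemetry": True},
}

def config_drift(prod: dict, stage: dict, path: str = "") -> list[str]:
    """List the keys whose values differ between two (possibly nested) configs."""
    drift = []
    for key in sorted(set(prod) | set(stage)):
        full, vp, vs = f"{path}{key}", prod.get(key), stage.get(key)
        if isinstance(vp, dict) and isinstance(vs, dict):
            drift += config_drift(vp, vs, full + ".")
        elif vp != vs:
            drift.append(f"{full}: staging={vs!r} production={vp!r}")
    return drift

for line in config_drift(production, staging):
    print(line)
```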
#### Test Automation
A good deal of the steps to test software are easily automatable. I have had the _pleasure_ of running excruciatingly long, manual test procedures, and the only thought that was always in my mind was how insulted I felt as a human having to do things that could be automated with a bunch of scripts.
The known "irony of automation" must be considered, though. It refers to a situation where the introduction of automation, initially intended to improve efficiency and save time, can end up requiring more time and effort than the manual processes they were meant to replace, at least in the short term. This paradox appears from several factors:
1. **Setup and Development Time**: Automating a process often requires significant upfront time and resources to design, develop, and implement the automation needed. This initial investment may not be recouped until the automation has been in place for a certain period. It may take time to see the automation's payoff.
2. **Complexity**: Sometimes, the task at hand may be relatively simple to perform manually but requires complex automation to replicate efficiently. In such cases, the complexity of designing and maintaining the automation can outweigh the time saved.
3. **Maintenance Overhead**: Automation is also made of software; it has an architecture and a code base, and it requires maintenance, updates, and troubleshooting. If these requirements are not adequately planned for, they can consume more time and resources than the manual process they were meant to replace.
4. **Adaptation Period**: Introducing automation often necessitates changes in workflows, employee training, and organizational structures. During this transition period, productivity may temporarily decrease as people adjust to the new system.
5. **Frequency of the Task**: The irony of automation is less pronounced for tasks that need to be performed frequently. Even if automating a task takes longer initially, the time saved per execution accumulates over many repetitions, eventually justifying the initial investment in automation (a rough break-even sketch follows this list).
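To put point 5 into numbers, here is a crude back-of-the-envelope break-even sketch (all figures are invented placeholders, not a costing method):

```python
def automation_break_even(manual_minutes: float,
                          automated_minutes: float,
                          setup_hours: float,
                          maintenance_hours_per_year: float,
                          runs_per_year: int) -> float:
    """Approximate number of years until the automation pays for itself."""
    saved_hours_per_year = runs_per_year * (manual_minutes - automated_minutes) / 60.0
    net_saved_per_year = saved_hours_per_year - maintenance_hours_per_year
    if net_saved_per_year <= 0:
        return float("inf")  # the irony of automation, in numerical form
    return setup_hours / net_saved_per_year

# A regression run that takes 45 minutes by hand vs 5 minutes automated,
# 80 hours to set up, 20 hours/year to maintain, executed 250 times a year:
print(automation_break_even(45, 5, 80, 20, runs_per_year=250))  # ~0.55 years

# The same automation for a test executed only four times a year never pays off:
print(automation_break_even(45, 5, 80, 20, runs_per_year=4))    # inf
```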
#### Qualification
A qualification test can also be thought of as a stress test or an accelerated life test. A qualification test replicates defined operational environmental conditions with a predetermined safety margin, with the results indicating whether a given design can perform its function within the simulated operational environment of a system. Note that qualification targets the design, not individual instances of a technical object.
Qualification activities are performed to ensure that the design will meet functional and performance requirements in the anticipated environmental conditions. A subset of the verification activities is performed at the extremes of the environmental envelope to ensure the design will operate properly with the expected margins. Qualification is performed once, regardless of how many units are produced (as long as the design doesn't change).
If changes significantly affecting form, fit, or function are made to the product or to the manufacturing process, the quality assurance policy shall call for requalification testing.
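As a toy numerical illustration of "operational conditions with a predetermined margin" (the temperature range and the margin are invented placeholders, not levels mandated by any standard):

```python
def qualification_levels(predicted_min: float, predicted_max: float,
                         margin: float) -> tuple[float, float]:
    """Extend the predicted operational envelope by a fixed qualification margin."""
    return predicted_min - margin, predicted_max + margin

# Hypothetical thermal case: predicted operating temperatures of -20..+50 degC,
# qualified with an assumed 10 degC margin on each side of the envelope.
qual_min, qual_max = qualification_levels(-20.0, 50.0, margin=10.0)
print(f"Qualification test range: {qual_min:+.0f} degC to {qual_max:+.0f} degC")
# -> Qualification test range: -30 degC to +60 degC
```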
> [!warning]
> This section is under #development
#### Acceptance
In some industries, like aerospace (and perhaps more so in space), quality control is frequently called acceptance testing. Acceptance activities are performed on each produced unit as it is manufactured and readied for use. An acceptance report must be prepared for each unit and shipped with it. The acceptance test/analysis criteria are selected to show that the manufacturing/workmanship of the unit conforms to the design that was previously verified/qualified. Acceptance testing is performed on each unit produced, as it is a quality control activity.
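A minimal sketch of the "one acceptance record per produced unit" idea (the measured parameters, limits, and serial numbers are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class AcceptanceReport:
    serial_number: str
    measurements: dict
    passed: bool

# Acceptance limits screen the workmanship of each unit against the already
# qualified design; the parameters and limits below are invented.
ACCEPTANCE_LIMITS = {"power_w": (9.5, 10.5), "mass_kg": (1.0, 1.2)}

def accept_unit(serial_number: str, measurements: dict) -> AcceptanceReport:
    passed = all(lo <= measurements[name] <= hi
                 for name, (lo, hi) in ACCEPTANCE_LIMITS.items())
    return AcceptanceReport(serial_number, measurements, passed)

# Every produced unit gets its own report, shipped along with the unit.
for sn, meas in [("SN-001", {"power_w": 10.1, "mass_kg": 1.1}),
                 ("SN-002", {"power_w": 11.0, "mass_kg": 1.1})]:
    report = accept_unit(sn, meas)
    print(report.serial_number, "PASS" if report.passed else "FAIL")
```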
> [!warning]
> This section is under #development
## The Economics of Quality
Of course, ==both implementing and not implementing an organization-wide quality policy will have a dollar figure attached to it==. On one hand, the cost of producing defective objects will create a negative economic impact in the form of fewer sales, more recalls, and an overall damaged reputation. On the other hand, the cost of implementing an overarching quality management system will be non-negligible. This includes the cost of developing or acquiring the tools, training employees, and modifying existing processes and infrastructure to accommodate the new standards. There is also a recurrent cost, which includes regular audits, continuous training, renewal of certifications, and periodic updates to the system to ensure compliance with evolving industry standards and regulations.
There is an obvious link between the costs associated with having no quality strategy (failures, recalls) and the costs of implementing a quality strategy (tools, training, etc.). In theory, the deeper we go in establishing the quality management approach, the lower our costs of having "no quality" would be. This would give us the possibility of reaching a point of "minimal cost of quality" (see figure below). However, this point may be hard to determine because many of the costs associated with a no-quality policy, such as the cost of customer dissatisfaction or lost reputation, are hard to measure or predict. Also, as appraisal costs continue increasing, the "sweet spot" is abandoned, and the cost of quality starts to increase again.
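The classical "sweet spot" argument can be reduced to a toy model: prevention and appraisal costs grow with the effort invested in quality, failure costs shrink with it, and their sum has a minimum somewhere in between. The cost functions below are invented purely to show the shape of the argument, not to model any real organization.

```python
# x is the "quality effort" level, from 0 (no quality program) to 1 (obsessive).
def appraisal_and_prevention_cost(x: float) -> float:
    return 100.0 * x ** 2           # invented: grows the more we invest

def failure_cost(x: float) -> float:
    return 80.0 * (1.0 - x) ** 2    # invented: recalls and rework shrink with effort

def total_cost_of_quality(x: float) -> float:
    return appraisal_and_prevention_cost(x) + failure_cost(x)

# Brute-force search for the minimum total cost, i.e. the "sweet spot":
levels = [i / 100.0 for i in range(101)]
sweet_spot = min(levels, key=total_cost_of_quality)
print(f"Sweet spot at quality effort ~{sweet_spot:.2f}, "
      f"total cost ~{total_cost_of_quality(sweet_spot):.1f}")
```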

A more modern model of the optimal cost of quality stands on the increasing empirical evidence that process improvement activities and loss prevention techniques are subject to increasing cost-effectiveness. This means that, as the new policies are established, there is an ever-decreasing cost of quality (see figure below).

## Criticality Levels
Not all systems are engineered the same. It does not require the same level of quality management to engineer a smartwatch, a pacemaker, an airliner, or a nuclear reactor. Not only may their complexity levels differ by orders of magnitude, but the implications of a failure in these systems may also be abysmally different.
The ARP4754, also known as "Guidelines for Development of Civil Aircraft and Systems," is a standard used in the aerospace industry for the development of aircraft systems. It outlines various criticality levels for system functions, which are essential for ensuring safety and reliability. These levels are determined based on the severity of the effect that a failure would have on the aircraft, its occupants, or its ability to complete its mission. The criticality levels are as follows:
- Catastrophic (Level A): This is the highest level of criticality. A failure at this level would typically result in multiple fatalities, usually involving the loss of the system. These systems require the highest level of rigor in development and testing to ensure that the likelihood of failure is as low as reasonably practicable.
- Hazardous/Severe-Major (Level B): Failures at this level would not lead to the loss of the system but could cause a large negative impact on safety or performance, potentially leading to serious injury or major damage. The development of these systems involves extensive testing and validation to minimize the risk of failure.
- Major (Level C): A failure in this category might not be immediately dangerous but could lead to discomfort or increased workload for the operators, possibly causing a significant reduction in safety margins or operational capabilities. These systems still require rigorous development processes, but the level of rigor is less than that for Levels A and B.
- Minor (Level D): Failures at this level are more of an inconvenience than a safety risk. They may result in some minor disruption of normal operation. While still important, the development and testing requirements for these systems are less stringent compared to higher criticality levels.
- No Effect (Level E): This level indicates that a system failure would have no impact on safety, performance, or crew workload. Systems in this category require the least rigorous development process, as their failure does not affect the operational capability or safety.
Each criticality level dictates the rigor of the development processes, including design, verification, and validation activities, to ensure that the systems meet the necessary safety and performance standards. The classification of a system into one of these levels is a relevant step in the development process and has a significant impact on the entire lifecycle of the aircraft system.
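For project tooling, the mapping above can be captured in a small lookup table. The sketch below simply restates the ARP4754 levels listed in this section; the rigor notes are informal summaries, not wording from the standard.

```python
from enum import Enum

class CriticalityLevel(Enum):
    A = "Catastrophic"
    B = "Hazardous/Severe-Major"
    C = "Major"
    D = "Minor"
    E = "No Effect"

# Informal summaries of the development rigor implied by each level.
RIGOR = {
    CriticalityLevel.A: "highest rigor: failure may result in multiple fatalities",
    CriticalityLevel.B: "extensive testing and validation: serious injury or major damage",
    CriticalityLevel.C: "rigorous, but less than A/B: reduced margins or higher workload",
    CriticalityLevel.D: "less stringent: minor disruption, more inconvenience than risk",
    CriticalityLevel.E: "least rigorous: no impact on safety, performance, or workload",
}

def required_rigor(level: CriticalityLevel) -> str:
    return f"Level {level.name} ({level.value}): {RIGOR[level]}"

print(required_rigor(CriticalityLevel.B))
```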
## Software Quality
Software exists in many types and variants and there is no widely adopted software taxonomy.
Although it is hard to grasp the characteristics of software in general, there is a long-lasting and still-growing understanding of how software quality is constituted. ISO 25010 provides an updated set of software quality characteristics compared to ISO 9126 and other previously established software quality models. Still, since software technologies evolve, the understanding of software quality needs to evolve as well. For example, software usability addresses the ease of use of software: for specified users, it seeks to improve their effectiveness, efficiency, and satisfaction in achieving their objectives within a given context of use and the usage scenarios of the software. Another example is the growing importance of data (as software in itself and as a software artifact in use) in Internet of Things or artificial intelligence applications; data quality assessment aims at deriving objective data quality metrics that also reflect subjective perceptions of data. The same goes for the growing use of software to emulate reality in virtual and augmented reality applications for gaming, education, or training. The more software permeates every field of our society, the more transparency, traceability, and explainability become central quality criteria that also demand attention from software developers. The quality of software, as a component of complex systems, cannot grow detached from the overall quality observations we made earlier on this page, when we said that quality is a multi-layered thing that must include design, usability, procurement, and management considerations.
To the extent that society is increasingly dependent on autonomous, intelligent, and critical software-intensive systems in domains such as energy supply, mobility services, and production, new strategies must be found to ensure not only their well-understood quality characteristics such as safety, efficiency, reliability, and security but also their associated socio-technical and socio-political implications and all additional requirements in the context of human-machine interaction and collaboration.
> [!warning]
> This section is under #development
# ECSS Standards
The European Cooperation for Space Standardization (ECSS) is a collaboration among ESA, the European space industry, and several space agencies to develop and maintain a coherent, single set of user-friendly standards for use in all European space activities. It was established in 1993 to unify space Product Assurance standardization at the European level and was officially adopted by ESA on 23 June 1994 through resolution ESA/C/CXIII/Res, replacing ESA's own Procedures, Specifications, and Standards (PSS) system. **The ECSS system currently has 139 active standards** and almost 60 active handbooks. The ECSS is managed by the ESA Requirement and Standard Division, based at ESTEC in Noordwijk, the Netherlands. The ECSS maintains connections with multiple European and international standardization organizations, to contribute to standardization and to adopt relevant standards as part of the ECSS system.
>[!Attention]
>ECSS is **NOT** an engineering process. ECSS is also **NOT** a set of guidelines. ECSS is a collection of requirements captured in documents categorized by domain, with each domain subdivided into disciplines. All with the hope that space systems in Europe will be developed in a more or less standardized manner. And yes, most ECSS standards are riddled with ugly diagrams, like the one below.
> ![[Pasted image 20240515125936.png]]
>[!Info]
> ECSS in numbers:
>
> At the time of writing, there are **more than 29000 ECSS requirements**:
> - 18000+ engineering requirements
> - 500+ project management requirements
> - 9000+ product assurance requirements
> - 83 sustainability requirements
==ECSS neither provides nor recognizes any certification process of suppliers or products according to ECSS requirements==, by any party. Nothing prevents individual ECSS members from certifying against ECSS on their own behalf. In fact, ECSS standards are not mandatory unless they are made mandatory by another binding legal document, for example, a contract.
In ECSS, the standards are normative documents for direct use in invitations to tender (ITT) and business agreements. The content of ECSS standards is limited to verifiable requirements. ==ECSS standards state **what** to do, not **how** to do it.==
The ECSS documents themselves ==do not have legal standing== and do not constitute business agreements: they are made applicable by invoking them in business agreements, most commonly in contracts. The applicability of standards and requirements is specified in the project requirements documents (PRDs), which are included in business agreements, agreed upon by the parties, and binding on them. The project's requirements within the PRD are composed of two sets:
- Requirements covered by ECSS disciplines, subject to tailoring,
- Other requirements specific to the project (for instance, mission-specific requirements or system-specific requirements)
## The Tree of Requirements
The ECSS System is a tree-like structure topped by a handful of level-1 documents (called "the S branch"):
- ECSS-S-ST-00C: The mother of all ECSS documents. Describes the ECSS system in general
- ECSS-S-ST-00-01C: This defines the terminology and the jargon
- ECSS-S-ST-00-02C: Tackles tailoring, although this standard is in DRAFT, waiting for a "pilot case" to be completed.
Under those parent documents lie all the rest (levels 2, 3, and beyond), divided into big "branches": MEQU. You can also rearrange the letters into QEMU if that makes it easier to remember. I'll do that.
- Q: Product Assurance branch
- E: Engineering branch
- M: Management branch
- U: Sustainability branch
Each of these branches is a little world of its own; we will dive into them in time.
![[Pasted image 20240512222735.png]]
## ECSS Standards and The ODSI Paradigm
ECSS standards, and especially Level 2 standards, ==are intentionally generic==. Many requirements follow the ODSI Paradigm:
- **O**rganize the activity in your own way
- **D**ocument how you have organized the activity in your own way. The relevant aspects to be documented are normally covered by a DRD.
- **S**ubmit to the customer the document describing the approach, for approval.
- **I**mplement the documented approach.
## Customer-Supplier Model
ECSS standards are defined around a customer-supplier relationship. Customer and Supplier are roles played by the actors that cooperate to produce, operate, and dispose of a Space System. One actor can be a Customer, a Supplier, or both. The ECSS system assigns specific responsibilities to customers and suppliers for each aspect of the ECSS standards.
## DRDs
When a requirement asks for the delivery of a document, its scope and content are specified in a dedicated piece of documentation called a DRD (Document Requirements Definition), which forms an integral part of any standard. Note that, for any organization claiming to be "ECSS-compliant", all documents generated under that claimed compliance must follow the DRDs from the standards. Mind you, ECSS requirements spit out DRDs left and right.
According to the ESA way of doing projects, all the requirements (generic or project-specific) applicable to a project are captured in a document that in ECSS jargon is called the "Project Requirement Document" (PRD). Traditionally, this document has also been called the "System Requirement Document" (SRD) in ESA. Normally, the term SRD/PRD is used. The SRD/PRD could be a physical document listing all the requirements, but more frequently it is a document pointing to other documents containing the requirements. All explicit and official tailorings of the ECSS system must be captured in the PRD/SRD at the contract level. If a contract does not include a PRD, it is tough for a customer to ask for compliance later on.
## Handbooks
Handbooks are non-normative documents providing background information, orientation, advice, or recommendations related to one specific discipline or a specific technique, technology, process, or activity. ECSS handbooks provide guidelines, good practices, and collections of data.
Handbooks do not contain requirements but provide additional information on selected topics addressed by the ECSS standards. ==Handbooks can be used as reference documents or transformed into normative documents by the customer.==
## The Tailoring Process
Depending on the mission type and complexity, only a subset of the ECSS body of standards might apply to a project. This is called "tailoring", and the process requires creating a matrix called the ECSS applicability table (EAT), which must state unambiguously which standards are fully applicable, applicable with modifications, or non-applicable. An official tailoring process is a time-consuming, non-trivial endeavor. What happens most of the time are "implicit" tailorings, in which customers and suppliers agree that not all standards and requirements will be applied, although they do not go as far as stating which standards and requirements are used and which are not.
![[Pasted image 20240519192313.png]]
The EAT is basically a table (see below).
![[Pasted image 20240519192631.png]]
To state which requirements inside a standard apply to a project, a table called ECSS Applicability Requirement Matrix (EARM) must be provided by the customer.
![[Pasted image 20240519192747.png]]
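Stripped of formatting, an EAT (and its requirement-level sibling, the EARM) boils down to a table of applicability decisions. Here is a minimal sketch; the Q-branch standards named are the ones mentioned in the Product Assurance section below, while the applicability assignments themselves are invented for illustration.

```python
from enum import Enum

class Applicability(Enum):
    APPLICABLE = "A"        # fully applicable
    MODIFIED = "AM"         # applicable with modifications
    NOT_APPLICABLE = "NA"

# Invented tailoring decisions for a hypothetical project.
eat = {
    "ECSS-Q-ST-10C (Product assurance management)": Applicability.APPLICABLE,
    "ECSS-Q-ST-20C (Quality assurance)": Applicability.MODIFIED,
    "ECSS-Q-ST-40C (Safety)": Applicability.APPLICABLE,
    "ECSS-Q-ST-80C (Software product assurance)": Applicability.NOT_APPLICABLE,
}

for standard, status in eat.items():
    print(f"{status.value:>2}  {standard}")
```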
## Product Assurance
According to the ECSS system, ==the prime objective of Product Assurance is to ensure that space products accomplish their defined mission objectives in a safe, available, and reliable way==.
The early identification of aspects potentially detrimental to safety and mission success, and the cost-effective prevention of any adverse consequence of such elements are the basic principles for the ECSS Product Assurance requirements. Product Assurance Management ensures the integration of activities from the Product Assurance disciplines defined in the other ECSS standards of the Q branch, namely:
- Q-20 Quality assurance
- Q-30 Dependability
- Q-40 Safety
- Q-60 Electrical, electronic, electromechanical (EEE) components
- Q-70 Materials, mechanical parts, and processes
- Q-80 Software product assurance
The requirements for Product Assurance planning specified in clause 5.1 of Q-ST-10C address the following aspects:
- Definition of a Product Assurance organization with the allocation of adequate resources, personnel, and facilities
- Definition of Product Assurance requirements for lower tier suppliers
- Definition of a Product Assurance Plan describing the Product Assurance program and how it fulfills project objectives and requirements
### PA Plan
The objective of the Product Assurance Plan (PAP) is to describe the activities to be performed by the supplier to assure the quality of the space product with regard to the specified mission objectives and to demonstrate compliance with the applicable PA requirements.