# Backplanes and Standard Form Factors

For a very long time, digital systems were not conceived or designed for interoperability. When someone needed a system for data acquisition, processing, storage, and retrieval, system architects would typically create designs from scratch, choosing the components and designing the PCBs, the enclosures, and everything in between. There was no plan other than vertically integrating everything from the ground up. Needless to say, the development time of complex digital systems following this approach was prohibitively long and costly.

The computer industry realized the established approach was unsustainable, and standardization efforts such as the AT (Advanced Technology) form factor, developed by IBM, gained popularity in the 1980s. In parallel, the ISA (Industry Standard Architecture) bus provided a hardware interface for adding expansion cards to a computer. This approach fit nicely with IBM's strategy to set a standard for personal computing, which was a rapidly growing market at the time. Originally, the ISA bus was an 8-bit interface, matching the data width of the Intel 8088 processor used in the first IBM PC. As technology progressed, a 16-bit version was introduced with the IBM PC/AT in 1984, using the more powerful Intel 80286 processor. This upgrade was significant because it allowed for greater data transfer speeds and improved performance.

The ISA bus became widely adopted due to IBM's dominant position in the PC market. IBM's decision to allow other manufacturers to produce compatible hardware without significant licensing fees led to a vast ecosystem of ISA-based computers and peripherals, fueling the growth of the PC industry. However, as computing needs evolved, the limitations of the ISA bus became apparent. It was relatively slow compared to newer technologies, and the parallel bus architecture limited the number of devices that could be connected. In response, newer interfaces like the PCI (Peripheral Component Interconnect) bus were developed in the early 1990s. PCI offered faster data transfer rates and greater flexibility, eventually leading to the decline of the ISA standard.

In time, AT showed its own limitations, including a layout that led to poor system cooling and an inefficient use of space. As technology advanced, these limitations became more pronounced, particularly with the advent of more powerful processors and the need for better power distribution and cooling solutions.

# ATX

The evolution of AT is the ATX form factor, standing for Advanced Technology eXtended. ATX has been a standard for computer motherboards and their enclosures for several decades now. The shift from AT to ATX brought several significant improvements. First, it improved airflow and cooling. Second, it reorganized the placement of the motherboard components and power connectors to reduce cable clutter, which further improved airflow and made system assembly easier. The ATX specification also standardized motherboard sizes, allowing variants such as microATX and Mini-ATX to emerge and giving system builders more flexibility. Another important feature introduced with ATX was the integration of I/O ports directly onto the motherboard, which eliminated the need for separate I/O cards and made connecting peripherals much more straightforward.
![ATX and ITX form factors and evolutions (source: Flickr user "VIA Gallery" from Hsintien, Taiwan)](image334.jpeg)

> [!Figure]
> _ATX and ITX form factors and evolutions (source: Flickr user "VIA Gallery" from Hsintien, Taiwan)_

Since the initial release, the ATX form factor has undergone several updates to accommodate advances in technology. The ATX12V specification, for example, was an enhancement that provided additional power to the CPU, allowing for the development of more powerful processors. Over the years, ATX has been updated to support new features such as USB, new types of expansion slots, and the latest memory and data storage interfaces.

The ITX form factor also appeared (see the figure above). ITX was created by VIA Technologies back in 2001, and it includes versions like Mini-ITX and Nano-ITX, among others. Mini-ITX, for example, is a popular small form factor that typically measures 17 x 17 centimeters (6.7 x 6.7 inches). This compact size makes ITX motherboards well-suited for small form factor cases, which are used to build compact and portable systems. They are commonly found in applications where space is at a premium. Despite their small size, ITX motherboards often come with features found on larger boards, such as PCI Express slots, although these are typically more limited in number.

# VME

On the industrial computing side, several standard form factors also proliferated, but one stood out as the most popular: VME (Versa Module Europe), which is still in use today in some legacy applications. The VMEbus was introduced in 1981, three years before the AT form factor described in the previous paragraphs. VME is one of the early open-standard backplane architectures (we will see shortly what backplanes are). VME was created to enable different companies to create interoperable industrial computing systems, following standard mechanical guidelines and signal routing. The typical components in the VME ecosystem include processor boards, analog/digital boards, and the like, as well as chassis, backplanes, power supplies, and other subcomponents.

System integrators benefited from VME in the following ways:

- Multiple vendors to choose from (de-risking the supply chain)
- A standard architecture versus costly proprietary solutions
- A technology platform with a known evolution plan
- Shorter development times (not having to start from scratch)
- Lower non-recurring costs (by not starting from scratch)
- An open specification, leaving the option to develop subsystems in-house if needed

The VME specification was designed with upgrade paths so that the technology would remain usable for a long time. VME is based on the Eurocard form factor, where boards are typically 3U or 6U high. The design was quite rugged; with shrouded pins and rugged connectors, the form factor became a favorite for many military, aerospace, and industrial applications. VME was eventually upgraded to VME64x (VITA 1.1) while retaining backward compatibility. Over the years, though, even these upgrades could not provide enough bandwidth for many applications. Then, [[Physical Layer#Parallel versus Serial Interconnects|serial interfaces]] entered the game.

![VME card (Artisan Technology Group)](image335.jpg)

> [!Figure]
> _VME card (Artisan Technology Group)_

The VMEbus specification defines the signals used, the mechanics, and the hierarchy of a VME system. VMEbus cards exist in three standard heights: 3U, 6U, and 9U, where 1U is defined as 1.75 inches.
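As a quick illustration of these height designations, the minimal sketch below converts rack units into millimeters (the usable PCB height of a Eurocard is smaller than the nominal front-panel height because of rails and panel hardware, so these are nominal figures only):

```python
# Rack-unit arithmetic for VMEbus/Eurocard front panels.
# 1U is defined as 1.75 inches; VME cards come in 3U, 6U, and 9U heights.

INCH_TO_MM = 25.4
U_IN_INCHES = 1.75

def panel_height_mm(units: int) -> float:
    """Nominal front-panel height for a card of the given height in rack units."""
    return units * U_IN_INCHES * INCH_TO_MM

if __name__ == "__main__":
    for u in (3, 6, 9):
        print(f"{u}U -> {panel_height_mm(u):.2f} mm nominal front-panel height")
    # 3U -> 133.35 mm, 6U -> 266.70 mm, 9U -> 400.05 mm
```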
The hierarchy of a VME system is composed of basically three elements:

- Backplane
- Module
- Crate/Chassis

Backplanes provide not only mechanical anchorage for all the boards in the system but also the routing of the VMEbus signals and power supply lines. The figure below shows a VME backplane.

![VMEbus 6U backplane](image336.jpg)

> [!Figure]
> _VMEbus 6U backplane_

The backplane includes the VMEbus signals for data transfer, control, and monitoring between boards:

- Data Transfer Signals: These are used for sending and receiving data over the bus. The VMEbus can have a data path of 8, 16, 24, or 32 bits, with wider data paths allowing for faster data transfer rates.
- Address Signals: These signals carry the address information to select where data should be read from or written to. The VMEbus can support 16-, 24-, or 32-bit addresses, allowing for a range of memory addressing options.
- Control Signals: Control signals are used to manage the data transactions on the bus. They indicate when data is being read or written, start and stop transactions, and manage access between multiple devices on the bus.
- Interrupt Signals: The VMEbus provides a mechanism for devices to interrupt the CPU. These signals are used by devices to signal the processor for immediate attention.
- Utility Signals: These include power, ground, and various system-level functions such as system reset, system fail, and bus request/grant signals, which are used to control which device has access to the bus.
- Arbitration Signals: These signals are used for bus arbitration, where multiple devices may need to control the bus. They ensure that only one device has control of the bus at any given time to avoid conflicts.
- Function Signals: These include signals for specialized functions like bus error, write protection, etc.

Modules are the boards that insert perpendicularly into the backplane:

![VME modules form factors (credit: interfacebus.com)](image337.jpg)

> [!Figure]
> _VME modules form factors (credit: interfacebus.com)_

Classes or roles of modules include:

- Master: A module that can initiate data transfers
- Slave: A module that responds to a master
- Interrupter: A module that can send an interrupt (usually a slave)
- Interrupt handler: A module that can receive (and handle) interrupts (usually a Single Board Computer)
- Arbiter: Circuitry (usually included in the Single Board Computer) that arbitrates bus access and monitors the status of the bus. It should always be installed in slot 1 of the VMEbus crate if interrupts are used.

It is important to note that all lines in the VMEbus are single-ended and use TTL levels, with a low state from 0 to 0.6 V and a high state from 2.4 to 5 V.

![VMEbus signals (* denotes active low)](image338.png)

> [!Figure]
> _VMEbus signals (\* denotes active low)_

VME systems mechanically come together in a crate that houses the backplane and all the modules. Crates play a critical role as the housing volume that contains and connects multiple VMEbus modules. A VME crate is essentially a metal enclosure that includes slots for VME modules to be inserted. The roles of crates in a VME system are multifaceted:

- Physical Support: Crates provide a sturdy mechanical structure to hold the VME cards securely. The size and design of the crate ensure that the boards are properly aligned with the backplane connectors.
- Power Distribution: Crates often include a power supply unit that converts AC mains power into the DC voltages required by the VME modules. The backplane distributes this power to each slot.
- Cooling: The modules generate heat. Crates usually incorporate cooling mechanisms, such as fans or convection cooling channels, to dissipate this heat and maintain an operational temperature range for the modules.
- Mechanical Protection: The metal enclosure of the crate provides electromagnetic interference (EMI) shielding, which is important in environments with strict EMI requirements, like military or aerospace applications.
- System Expansion: Crates allow for the expansion of a digital system. Adding more functionality can be as simple as sliding new VME modules into available slots and possibly updating software drivers.
- Maintenance and Serviceability: Crates make it relatively easy to service and maintain the system. Faulty modules can be quickly identified and replaced without needing to dismantle the entire system.

VME crates come in various sizes, typically driven by the number of slots the backplane contains, and can be designed to accept different sizes of VME cards, such as 3U, 6U, or 9U. The choice of crate size and configuration depends on the specific requirements of the application, including the number of VME modules needed, the required data throughput, power requirements, and physical space constraints.

![VME 6U Crate (credit: wiener-d.com)](image339.jpg)

> [!Figure]
> _VME 6U Crate (credit: wiener-d.com)_

# VPX

As discussed in the previous section, VMEbus is based on a single-ended, parallel bus. [[Physical Layer#Parallel versus Serial Interconnects|Parallel buses]] convey many signals simultaneously and concurrently; the signals physically run close to each other, and each signal in a parallel bus has its own private physical interconnect all the way from transmitter to receiver. In crowded parallel buses, arbitration must take place to prevent multiple actors from claiming the bus simultaneously and disrupting the flow of other nodes, causing data corruption and delays. Also, if a node breaks down in a parallel bus, it can take down the whole bus, affecting the function of the entire system. Last but not least, VMEbus also eventually faced limitations in the performance of its connector.

Therefore, to meet the increasing demand for higher data rates, point-to-point, switched interconnects based on high-speed serial interfaces appeared as a sound alternative, making it possible to have multiple paths for a node to reach another node across the backplane. This architectural shift allowed data rates to go much higher while substantially increasing reliability. In this approach, switches are a central part of the network topology, and nodes can be connected to switches in different ways. For example, in a centralized topology there is one switch in a star configuration or two switches in a dual-star configuration. In a distributed topology, there are multiple connection options, with a mesh configuration (where each node is connected to every other node) being quite common. The sketch at the end of this introduction compares the link counts of these topologies.

VPX materialized as the evolution of VMEbus. First, VPX adopted a high-speed connector, the MultiGig RT2. By changing connectors, pinouts had to change as well, so the VITA 46 standard was created to specify how to route the pins across the new connector; still, the pinout choice remained on the designer's side. VPX quickly found adoption in computing-intensive applications such as radar and defense, but also aerospace, data acquisition, test & measurement, and more.
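Returning to the topology options mentioned above, here is a minimal, illustrative sketch (not tied to any particular VPX profile) that counts the backplane links required to interconnect N payload nodes under star, dual-star, and full-mesh configurations:

```python
# Illustrative link-count comparison for switched backplane topologies.
# N payload nodes; switches are counted separately from payload nodes.

def star_links(n: int) -> int:
    """Single central switch: one link per payload node."""
    return n

def dual_star_links(n: int) -> int:
    """Two central switches: each payload node links to both switches."""
    return 2 * n

def full_mesh_links(n: int) -> int:
    """Every payload node connected directly to every other node."""
    return n * (n - 1) // 2

if __name__ == "__main__":
    for n in (4, 8, 16):
        print(f"N={n:2d}  star={star_links(n):3d}  "
              f"dual-star={dual_star_links(n):3d}  mesh={full_mesh_links(n):3d}")
```

The quadratic growth of the full mesh explains why it is usually reserved for small slot counts, while star and dual-star configurations scale better at the cost of placing one switch (or two, for redundancy) at the center of the data plane.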
Mission-critical applications thus became VPX's main driver and sponsor. As VPX grew in complexity, with multiple options for signal routing, protocols, and data rates, it became extremely difficult to ensure interoperability, so the OpenVPX (VITA 65) specification was devised as the primary architecture toolbox to make components from different suppliers work with each other seamlessly. VPX supports a wide breadth of fast serial interconnects; however, these are described in extra specifications complementing the base specification, which only defines the mechanics and the electrics of the standard.

VPX/OpenVPX offers several improvements, including:

- A high-speed MultiGig RT2 connector rated to 10 Gbps (see the figures below). MultiGig RT3 is now also available[^81] (16-32+ Gbps).
- Support for standard SERDES data-rate options of 3.125, 5.000, and 6.250 Gbps (with 10 Gbps and beyond becoming more popular)
- Defined areas for the control plane, data plane, utility/management plane, etc.
- Options for system management as per VITA 46.11
- Fully tested and rugged differential connectors
- Guide pins on the backplane for easy blind-mating

![VPX system main building blocks: Plug-in Module, Slot, Backplane, Chassis](image340.png)

> [!Figure]
> _VPX system main building blocks: Plug-in Module, Slot, Backplane, Chassis_

![MultiGig RT2 connector, PCB side (VITA 46) (credit: TE Connectivity)](image341.png)

> [!Figure]
> _MultiGig RT2 connector, PCB side (VITA 46) (credit: TE Connectivity)_

![MultiGig RT2 connector, backplane side (VITA 46) (credit: TE Connectivity)](image342.png)

> [!Figure]
> _MultiGig RT2 connector, backplane side (VITA 46) (credit: TE Connectivity)_

![MultiGig RT3 connector (credit: TE Connectivity)](image343.png)

> [!Figure]
> _MultiGig RT3 connector (credit: TE Connectivity)_

## OpenVPX (VITA 65)

Based on the VPX baseline standard (VITA 46), the OpenVPX System Standard uses the module mechanical, connector, thermal, communications protocol, utility, and power definitions provided by the specific VPX standards and then describes a series of profiles that define slots, backplanes, modules, and chassis (we will define what each of these is soon). The figure below illustrates how the different VPX elements interface with each other. In a nutshell, OpenVPX defines the allowable combinations of interfaces between a module, a backplane, and a chassis. In VPX terminology, a chassis hosts a backplane. A backplane contains a series of slots, and plug-in modules connect to those slots through their on-board connectors. The main task (and challenge) for system integrators is to ensure the right signals are routed across the design, and for that, OpenVPX specifies a naming convention as an abstraction tool that can be used while selecting elements for the design.

![System Interoperability Diagram (Credit: VITA)](image344.png)

> [!Figure]
> _System Interoperability Diagram (Credit: VITA)_

It is important to note that the OpenVPX Standard acknowledges but does not define the interfaces between the application and the module or chassis (grayed-out text and lines in the figure above).

## OpenVPX Key Element Descriptions

It is important first to discuss some key elements and terminology that are central to the VPX/OpenVPX architecture: Port, Plane, Pipe, Module, and Profile.

- Port: A physical aggregation of pins for a common I/O function on either a Plug-In Module's backplane connectors or a backplane slot's connectors.
- Plane: A physical and logical interconnection path among the elements of a system, used for the transfer of information between elements. How the planes are used is a guideline. The following Planes are predefined by OpenVPX:
    - Control Plane: A [[High-Speed Standard Serial Interfaces#Network Planes Control Plane, Data Plane, Utility Plane, Timing Plane|Plane]] that is dedicated to application software control traffic.
    - Data Plane: A [[High-Speed Standard Serial Interfaces#Network Planes Control Plane, Data Plane, Utility Plane, Timing Plane|Plane]] that is used for application and external data traffic.
    - Expansion Plane: A [[High-Speed Standard Serial Interfaces#Network Planes Control Plane, Data Plane, Utility Plane, Timing Plane|Plane]] that is dedicated to communication between a logical controlling system element and a separate, but logically adjunct, system resource.
    - Management Plane: A [[High-Speed Standard Serial Interfaces#Network Planes Control Plane, Data Plane, Utility Plane, Timing Plane|Plane]] that is dedicated to the supervision and management of hardware resources.
    - Utility Plane: A [[High-Speed Standard Serial Interfaces#Network Planes Control Plane, Data Plane, Utility Plane, Timing Plane|Plane]] that is dedicated to common system services and/or utilities. The Utility Plane (UP) in an OpenVPX backplane includes common power distribution rails, common control/status signals, common reference clocks, and System Management signals.

Typically, plane speeds increase from the utility plane (low speed) through the control plane (medium) to the data plane (high). Utility planes are used for low-speed communication between modules (control signals, alarms, configuration of low-level devices like serial EEPROMs, etc.). Control planes are typically used for commands and telemetry for the application software. The data plane is used for high-bandwidth data transfers between application software, typically payload data.

- Pipe: A physical aggregation of differential pairs or optical fibers used for a common function, characterized in terms of the total number of differential pairs or optical fibers. A Pipe is not characterized by the protocol used on it. The following Pipes are predefined by OpenVPX:
    - Single Pipe (SP): A Pipe consisting of a single differential pair or optical fiber.
    - Ultra-Thin Pipe (UTP): A Pipe composed of two differential pairs or two optical fibers. Examples: 1000BASE-KX Ethernet, 1x Serial RapidIO, x1 PCIe, and 10GBASE-SR interfaces.
    - Thin Pipe (TP): A Pipe composed of four differential pairs or four optical fibers. Example: 1000BASE-T interfaces.
    - Triple Ultra-Thin Pipe (TUTP): A Pipe composed of six differential pairs or six optical fibers.
    - Fat Pipe (FP): A Pipe composed of eight differential pairs or eight optical fibers. Examples: 4x Serial RapidIO, x4 PCIe, [[High-Speed Standard Serial Interfaces#Ethernet#Backplane-Based, Multi-Gigabit Ethernet|10GBASE-KX4]], and 40GBASE-SR4 (40 Gbit Ethernet over [[Physical Layer#Fiber Optic|fiber]]) interfaces.
    - MP: A Pipe composed of ten differential pairs or ten optical fibers.
    - WP: A Pipe composed of twelve differential pairs or twelve optical fibers.
    - Double Fat Pipe (DFP): A Pipe composed of sixteen differential pairs or sixteen optical fibers. Example: x8 PCIe interface.
    - MMP: A Pipe composed of twenty differential pairs or twenty optical fibers.
    - Triple Fat Pipe (TFP): A Pipe composed of twenty-four differential pairs or twenty-four optical fibers. Example: 12x [[High-Speed Standard Serial Interfaces#InfiniBand|InfiniBand]] interface.
    - Quad Fat Pipe (QFP): A Pipe composed of thirty-two differential pairs or thirty-two optical fibers. Example: x16 PCIe interface.
    - Octal Fat Pipe (OFP): A Pipe composed of sixty-four differential pairs or sixty-four optical fibers. Example: x32 PCIe interface.
- Module: A printed circuit board assembly (PCBA) that conforms to defined mechanical and electrical specifications. Pre-existing examples of Modules that apply to OpenVPX include 3U Plug-In Modules; 6U Plug-In Modules; backplanes; mezzanine Modules such as [[Backplanes and Standard Form Factors#XMC Module Standards|XMC]], [[Backplanes and Standard Form Factors#PMC Module Standard|PMC]], or [[Backplanes and Standard Form Factors#FMC Module Standards|FMC]] (specified in VITA 57) Modules; and Rear Transition Modules. Additionally, the following Module types are defined by OpenVPX:
    - Bridge Module: A Plug-In Module in an OpenVPX system that might be required to provide communication paths between multiple Plug-In Modules that support different Plane protocols and/or implementations. When the transfer of information is necessary between Plug-In Modules utilizing dissimilar interfaces for communication, the Bridge Module terminates the channel and/or bus from the Plug-In Module(s) communicating via the initial protocol and transmits the information along to the Plug-In Module(s) communicating via the second protocol on a separate channel or bus.
    - Payload Module: A Plug-In Module that provides the hardware processing and/or I/O resources required to satisfy the needs of the top-level application. Example: A Payload Module might be an embedded processor or an I/O controller Module.
    - Peripheral Module: A Plug-In Module, such as an I/O device interface, that is usually subservient to a Payload Module.
    - Plug-In Module: A circuit card or module assembly that is capable of being plugged into the front side of a backplane.
    - SpaceUM: The Space Utility Management module contains the Utility Management selection circuitry for a SpaceVPX module. The SpaceUM module receives redundant Utility Plane signals through the backplane and selects one set to be forwarded to the standard slot Utility Plane signals for each slot it controls. Note that SpaceVPX is specified in the VITA 78 standard.
    - Storage Module: A Module providing the functionality of a disk drive. An example is a SATA HDD/SSD (Hard Disk Drive / Solid-State Drive) carrier.
    - Switch Module: A Plug-In Module in an OpenVPX system that minimally serves the function of aggregating channels from other Plug-In Modules. These channels might be physical partitions of logical Planes as defined by a Backplane Profile. This Module terminates the aggregated channels and provides the necessary switch fabric(s) to transfer data frames from a source Plug-In Module to a terminating Plug-In Module as defined by the assigned channel protocol. This Module is typically used in systems that implement centralized switch architectures to achieve interconnection of their logical Planes. Distributed switch architectures typically do not include a Switch Module.

![VPX module (credit: XES)](image345.png)

> [!Figure]
> _VPX module (credit: XES)_

- Profile: A profile is a specific variant of a possible set of many combinations. In the VPX context, profiles apply to backplanes, chassis, modules, and slots:
    - Backplane Profile: A physical definition of a backplane implementation that includes details such as the number and type of slots that are implemented and the topologies used to interconnect them. Ultimately, a Backplane Profile is a description of the channels and buses that interconnect slots and other physical entities in a backplane.
    - Chassis Profile: A physical definition of a chassis implementation that includes details such as the chassis type, slot count, primary power input, module cooling type, Backplane Profile, and supplied backplane power, which are implemented in the Standard Development Chassis Profile.
    - Module Profile: A physical mapping of Ports onto a given Module's backplane connectors and protocol mapping(s), as appropriate, to the assigned Port(s). This definition provides a first-order check of operating compatibility between Modules and slots as well as between multiple Modules in a Chassis. Module Profiles achieve the physical mapping of ports to backplane connectors by specifying a Slot Profile. Multiple Module Profiles can specify the same Slot Profile.
    - Slot Profile: A physical mapping of Ports onto a given slot's backplane connectors. These definitions are often made in terms of Pipes. Slot Profiles also give the mapping of Ports onto the Plug-In Module's backplane connectors. Unlike Module Profiles, a Slot Profile never specifies protocols for any of the defined Ports.
- Slot: A physical space on an OpenVPX backplane with a defined mechanical and electrical specification intended to accept a Plug-In Module. Pre-existing examples of slots that are applicable to OpenVPX include 6U and 3U slots. The following slot types are defined:
    - Controller Slot: A slot in a VPX system that will accept a Controller Plug-In Module. A Controller Slot always hosts the control plane switch and the System Controller function. It can be combined with the switch function for the data plane.
    - Payload Slot: A slot in a VPX system that will accept a Payload Plug-In Module such as, but not limited to, a hardware processing and/or I/O Plug-In Module.
    - Peripheral Slot: A slot in a VPX system that will accept a Peripheral Plug-In Module that is usually subservient to a Payload or Controller Module. It can also serve to bridge an interface such as PCI from the Payload or Controller slot.
    - Switch Slot: A slot in a VPX system that will accept a Switch Plug-In Module.

## Profiles

Backplane Profiles: At the center of each OpenVPX architectural definition is the Backplane Profile. This profile contains two important elements: a backplane topology for each communication plane and a slot interconnection definition for each slot type used. Each Backplane Profile references a Slot Profile for each slot position on the backplane and then defines how each pipe in each slot is interconnected, along with each pipe's electrical performance. The Backplane Profile defines which pins, or sets of pins, are routed in the backplane. The Backplane Profile also defines the allowed slot-to-slot pitch.

Slot Profiles: Slot Profiles define the connector type and how each pin, or pair of pins, is allocated. Single pins are generally allocated to the Utility Plane for power, grounds, system discrete signals, and system management. Differential pins/pairs are generally allocated to the three communication Planes (Control, Data, and Expansion). Differential pin pairs are grouped together to form "pipes" (Planes, Pipes, and Profiles were defined earlier). Slot Profiles also specify which pins are User Defined.
Slot Profiles are divided into categories including, but not limited to, Payload, Switch, Peripheral, Storage, Bridge, and Timing.

Chassis Profiles: Within the context of OpenVPX, a chassis is targeted for Plug-In Module system integration and test. OpenVPX defines three variants of the Standard Development Chassis: small, medium, and large.

Module Profiles: The Module Profile defines which communication protocol can be used on each pipe defined in a corresponding Slot Profile. Each Module Profile specifies a particular Slot Profile, which in turn specifies the connector types. The Module Profile also specifies the module height (6U/3U); see the figure below.

![OpenVPX Profile Relationships (credit: VITA)](image346.png)

> [!Figure]
> _OpenVPX Profile Relationships (credit: VITA)_

Module Profiles and Backplane Profiles guide the system integrator in selecting Plug-In Modules and backplanes that can work together. However, when User-Defined pins are used, the system integrator needs to ensure that the User-Defined pins on the backplane are routed accordingly.

### Slot vs Module Profiles

Slot Profiles are used to specify how ports are mapped to the pins of a backplane slot. A guiding principle in the way the OpenVPX standards are organized is that things that affect the implementation of the backplane are in Slot and Backplane Profiles. Module Profiles specify the protocols running over the physical connections specified by Slot and Backplane Profiles.

### Profile Names - Use and Construction

OpenVPX implements a detailed, intricate naming convention for all its profiles. This provides an abstraction for the system architect when it comes to selecting VPX components off the shelf. Check the standard for further details on the OpenVPX naming convention.

## Signal Integrity Considerations

When using a standard form factor like VPX, signal integrity needs to be addressed to avoid interoperability issues, which will become increasingly severe as systems continue pushing the limits toward higher serial baud rates. To reduce interoperability risks for system integrators, a standard [[Physical Layer#Scattering Parameters|S-parameter]] set needs to be defined in a way that allows models from multiple vendors to be evaluated within the same channel. The concatenated S-parameter data can then be checked against the standard eye-mask requirements for a given [[Semiconductors#Field Programmable Gate Arrays (FPGA)#SerDes and High-Speed Transceivers|SERDES]] protocol to be used in the design. VITA 68.2 defines the following for S-parameters:

- Topology definition
- S-parameter port locations
- S-parameter port naming convention
- Connector S-parameter standardization
- How many ports are needed in the model

With the definitions above, any VPX topology can be created and used to evaluate any VPX channel, whether simple or complex. Below are example topologies that are seen in a typical VPX system. The figure below shows a very typical VPX Plug-In Module to VPX Plug-In Module topology through a VPX backplane.

![Example VPX Plug-In Module to VPX Plug-In Module Via Backplane](image347.png)

> [!Figure]
> _Example VPX Plug-In Module to VPX Plug-In Module Via Backplane_

The figure below shows a more complex VPX topology in which an XMC Plug-In Module connects to another XMC Plug-In Module via a VPX Backplane.
![Example XMC to XMC Via Backplane Topology](image348.png)

> [!Figure]
> _Example XMC to XMC Via Backplane Topology_

> [!info]
> Bauds are the number of symbols (data signaling events) per second sent across a single transmission path. Depending on the encoding method used, a data signaling event (symbol) may carry less than 1 bit, exactly 1 bit, or more than 1 bit.
> For example, PCI Express revision 2.0 over a single path (x1) has a bit rate of 4.0 Gbps. The data is transmitted using 8b10b encoding, which encodes 8 bits of data into a 10-bit stream. One way to look at the encoding is that each transmitted symbol represents 0.8 of a bit. Thus, the baud rate needed to achieve a 4 Gbps data rate is the bit rate divided by the bits per symbol: 4 Gbps / 0.8 = 5 Gbaud.
> Another example: 1000BASE-T over 4 paths has a bit rate of 1 Gbps in one direction. The data is transmitted using PAM5 encoding (pulse-amplitude modulation with 5 levels). PAM5 has five levels (+2, +1, 0, -1, -2); four levels are used for data, encoding two bits per symbol (00, 01, 10, 11), and the fifth level is used to support forward error correction. Baud rate = bit rate / number of paths / bits per symbol = (1 Gbps) / 4 / 2 = 125 Mbaud.

![S-Parameter Model Port Locations for Backplanes and Modules with On-Board](image349.png)

> [!Figure]
> _S-Parameter Model Port Locations for Backplanes and Modules with On-Board_

### Channel Model

The channel model used in IEEE 802.3 (more specifically in 1000BASE-KX, 10GBASE-KX4, 10GBASE-KR, and 40GBASE-KR4) is rather robust and employs more parameters to constrain the performance of the channel. Consequently, it was concluded that the \[IEEE 802.3\] model is a good model to follow for the VPX channel parameters. This is particularly true since many OpenVPX profiles already include 10GBASE-KX4, so they must be able to support the 3.125 Gbaud 10GBASE-KX4 portion of this model. Also, many profiles in VITA 65.1 use 40GBASE-KR4, which must be able to support 10.3125 Gbaud. Note that different fabric standards vary in their Tx and Rx characteristics, and particularly in their Rx equalization capabilities; these variations are dealt with in the "dot specs" for those specific fabrics. Note that the VITA 68.1 Compliance Channel Model is intended for signal fabrics that utilize encoding schemes such as 8B/10B or 64B/66B, where a symbol carries less than one bit of data.

![](image350.png)

# CompactPCI Serial

CompactPCI Serial is a standard managed by PICMG[^82]. PICMG is a consortium of roughly 50 companies, notably Intel, Airbus, BAE, Advantech, National Instruments, and others. PICMG works on defining and releasing a set of open standards for backplane-based architectures. What are open standards? An open standard is a definition of everything a vendor needs to know to build equipment (and write software) that will work with compatible products offered by other vendors.

CompactPCI Serial combines the concept of backplanes inherited from VME with the benefits of serial interconnects. Technically speaking, CompactPCI Serial cannot be strictly considered a "switched fabric," since basically no switching infrastructure is required, provided the slot count remains within limits. CompactPCI Serial is a design evolution of the proven parallel PCI and the somewhat rare [[Backplanes and Standard Form Factors#Compact PCI and CompactPCI Express|CompactPCI Express]].
The new base standard (CPCI-S.0) replaces the parallel signals with fast serial point-to-point data links and introduces a new connector type (the AirMax connector). CompactPCI Serial supports all modern high-speed serial data connections while keeping mechanical compatibility with the IEEE 1101 and Eurocard formats. CompactPCI Serial defines a star topology for PCI Express (and also for Serial RapidIO), SATA, and USB. In CPCI-S, one slot in the backplane is reserved as the system slot. The system slot supports up to eight peripheral slots.

![Compact PCI Serial plug-in card and backplane (Credit: EKF)](image351.png)

> [!Figure]
> _Compact PCI Serial plug-in card and backplane (Credit: EKF)_

The System Slot board (CPU) is the [[High-Speed Standard Serial Interfaces#PCI Express#Root Complex|root complex]] (source) for up to 8x PCIe links, 8x SATA, 8x USB, and 8x GbE distributed across the backplane to the Peripheral Slots (see figure below). In the CPCI-S architecture, all peripheral slots are the same: the pin assignment of every peripheral slot is identical, so there are no slot or backplane profiles as in VPX, and the backplane itself is uniform across its slots. Two slots are additionally connected to the system slot using an extra-wide PCI Express link called a Fat Pipe. These slots can be used for high-end, processing-intensive applications that need high throughput to and from the computing unit. CompactPCI Serial does not require any switches or bridges in systems with up to nine slots.

![Compact PCI Serial Architecture](image352.png)

> [!Figure]
> _Compact PCI Serial Architecture_

![CompactPCI Serial Backplanes (Credit: EKF)](image353.png)

> [!Figure]
> _CompactPCI Serial Backplanes (Credit: EKF)_

In CPCI-S, Ethernet is wired as a full mesh network. In full mesh architectures, each of the nine slots is connected to each of the other eight slots via a dedicated point-to-point connection. The figure below details the different connectors on the backplane for the different slots and board roles. System slots use all six connectors (J1-J6), whereas fat-pipe slots use J1-J2, with J6 being optional for GbE. Regular peripheral slots use J1 and optionally J6 (for the GbE connection). Note that unused connectors can be used for Serial RapidIO.

![Slots in a CompactPCI Serial system (credit: EKF)](image354.png)

> [!Figure]
> _Slots in a CompactPCI Serial system (credit: EKF)_

One pin signals to the plugged-in module/board whether it is located in a system slot or a peripheral slot. This allows a system slot board (normally a CPU card) to also be plugged into any peripheral slot. In addition, there are several "utility" signals to support the slots and for general system management, such as reset, hot plug, geographical addressing, etc. A 12 V rail is available for power supply, allowing a maximum power consumption of 60 W for one 3U slot, including the peripheral slots. In case the system architecture requires more than the eight peripheral slots CompactPCI Serial allows, it is possible to use backplane couplers.

![Backplane coupler (Credit: EKF)](image355.png)

> [!Figure]
> _Backplane coupler (Credit: EKF)_

As shown before, CompactPCI Serial supports a fully meshed Ethernet network. The backplane wiring creates a dedicated connection from every slot to every other slot: each of the nine slots in a CompactPCI Serial system is connected to each of the other eight slots via the backplane, through the J6 connector.
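As a quick illustration of what this full mesh implies for the backplane, the minimal sketch below enumerates the dedicated Ethernet links for a nine-slot system (the slot numbering is purely illustrative):

```python
# Illustrative enumeration of the full-mesh Ethernet links in a
# nine-slot CompactPCI Serial backplane (one dedicated link per slot pair,
# routed through the J6 connector of each slot).
from itertools import combinations

SLOTS = list(range(1, 10))  # slots 1..9; numbering is illustrative only

links = list(combinations(SLOTS, 2))

print(f"slots: {len(SLOTS)}, dedicated slot-to-slot Ethernet links: {len(links)}")
# -> 9 slots, 36 links; each slot terminates 8 of them on its J6 connector

for a, b in links[:5]:
    print(f"slot {a} <-> slot {b}")
```

The link count grows as n(n-1)/2, which is why the full mesh stays manageable at nine slots but would not scale to much larger backplanes.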
CompactPCI Serial allows a system-slot CPU to be used as a peripheral card as well, which makes it very straightforward to create modular multi-CPU architectures. As these are point-to-point connections, no switchboard is needed, and no special infrastructure or configuration is required.

CPCI-S allows for mezzanines as well. The connector P6 is then placed on the mezzanine board, and a cut-out is provided on the mezzanine host. Only the interconnection between the mezzanine host and the CompactPCI Serial backplane is specified within the specification. The mezzanine concept can be implemented within both 3U and 6U systems.

![Mezzanine concept for a 3U board](image356.png)

> [!Figure]
> _Mezzanine concept for a 3U board_

![Mezzanine concept for a 6U board](image357.png)

> [!Figure]
> _Mezzanine concept for a 6U board_

CompactPCI Serial relies on a single-rail +12 V main power supply. A +5 V standby rail is optionally available, and an optional 48 V rail is available within a 6U system only. The backplane distributes the supply voltages to the front boards. Rear boards are supplied indirectly by the corresponding front board.

![CompactPCI Serial Power Distribution](image358.png)

> [!Figure]
> _CompactPCI Serial Power Distribution_

In CPCI-S, [[Physical Layer#I2C|I2C]] is used on the utility bus, implementing a System Management Bus that is electrically compliant with the SMBus specification. The System Management Bus I2C_SCL and I2C_SDA signals are bussed to every slot and to the utility connector.

![System Management Bus Signal Routing](image359.png)

> [!Figure]
> _System Management Bus Signal Routing_

## Connector Types

Connectors used for CompactPCI Serial are optimized for high-speed differential signal transmission, and shielding and impedance control are maintained through the connectors. The connector is arranged in rows of 12 pins each: 12 pins are sufficient for 4 high-speed signal pairs, and 4 pins in a row are required for ground. The pins within the connector are not specialized, however, so a pin can be used for signals (differential or single-ended), for ground, or for the supply voltage. For front boards, receptacle connectors are used on the backplane and right-angle headers on the plug-in boards; on rear boards, it is the opposite. All connector types are designed for press-fit mounting. The press-in pin length in the PCB is just 1.6 mm to minimize stub lengths, thereby enhancing signal integrity. To realize rear I/O, a plug connector is pressed onto the back of the backplane, mirroring the pin assignment from the front to the back side. To connect the front boards to the backplane, four different plug types of the same connector family are used.

![3U Connector Plug Types A, B, C and D on Front Boards](image360.png)

> [!Figure]
> _3U Connector Plug Types A, B, C and D on Front Boards_

Front Board Connector Types are listed in the table below.
| **Designator** | **Type** | **Number of rows** | **Number of walls** | **Usage** |
| -------------- | -------- | ------------------ | ------------------- | --------- |
| **P0**         | A        | 6                  | 4                   | Optional  |
| **P1**         | A        | 6                  | 4                   | Mandatory |
| **P2**         | B        | 8                  | 2                   | Optional  |
| **P3**         | B        | 8                  | 2                   | Optional  |
| **P4**         | B        | 8                  | 2                   | Optional  |
| **P5**         | C        | 6                  | 2                   | Optional  |
| **P6**         | D        | 8                  | 4                   | Optional  |

## CompactPCI Serial Space

Standing on the shoulders of CompactPCI Serial, the CompactPCI Serial Space (CPCI-SS) initiative started in 2015, triggered by the DLR project OBC-SA and formed by companies such as Thales Alenia Space (Germany), SpaceTech, Airbus Defence and Space, and MEN, with support from EBV, EKF, FCI, Fraunhofer, Heitec, and TTTech.

CPCI Serial Space is, of course, based on CompactPCI Serial, but some unnecessary features were removed. For example, CPCI Serial Space does not need to route USB or SATA signals over the backplane, so these signal lines can now be used for additional rear I/O connections. On the other hand, due to the requirements of some space applications, CPCI Serial Space adds new signal lines compared with CompactPCI Serial.

CompactPCI Serial defines dedicated [[High-Speed Standard Serial Interfaces#PCI Express|PCI Express]] and [[High-Speed Standard Serial Interfaces#Ethernet|Ethernet]] links on the backplane. CPCI Serial Space maintains the routing of these signal lines but defines these backplane links as physical connections not dedicated exclusively to Ethernet or PCI Express. This has the advantage that they can be used for Ethernet and PCI Express links but also for other protocols, like SpaceWire, for inter-board communication. As in CompactPCI Serial, a full mesh network on the backplane is supported: all slots can have a connection to the full mesh network, with a point-to-point connection from each slot to all other slots.

A CompactPCI Serial backplane routes PCI Express as a single star (the system slot is the center of the star). CPCI Serial Space backplane routing is compatible with that and additionally supports a second system slot. The dual-star architecture improves the reliability and flexibility of the system. All peripheral slots can have a serial connection to system slot A, which complies with the CompactPCI Serial base specification. Peripheral slot 8 is extended to be system slot B, the second system slot. System slot B is identical to system slot A: it can be used as an additional system slot, but it can also be used as a peripheral slot (peripheral slot 8). Each peripheral slot therefore has a connection to system slot A and, additionally, a second serial connection to system slot B.

A shelf controller can control the power supply of all boards separately. The shelf controller can also check the status of the boards and reset them individually. Two redundant CAN buses are additionally available as board management buses. Neither the shelf controller connector nor the shelf controller itself is specified in the specification.

The mechanical design of CPCI Serial Space is fully compatible with CompactPCI Serial. However, the board-to-board pitch is fixed at 5 HP (= 25.4 mm), for air-cooled as well as conduction-cooled systems.

![Compact PCI Serial Space Architecture](image361.png)

> [!Figure]
> _Compact PCI Serial Space Architecture_

In [[Units, Chassis and Racks|unit]]-redundant CompactPCI Serial Space configurations, the second unit's system slots can host a network module.
This way, all boards in peripheral slots are connected to both the primary and the redundant command and data handling link. In CompactPCI Serial Space, the maximum number of slots is limited as well. In case an architecture requires more boards than the maximum allowed, additional chassis need to be allocated and interconnected.

## Power Distribution

Neither the power supply itself nor the connection of the power supply to the backplane is part of the specification. Redundancy concepts for power supplies are also not specified but are, of course, common practice. The power distribution over the backplane can be implemented in different ways depending on the application.

![CompactPCI Serial Space Power Architecture](image362.png)

> [!Figure]
> _CompactPCI Serial Space Power Architecture_

## Compact PCI and CompactPCI Express

There is a high chance of confusion in this section. In the previous section, we discussed CompactPCI Serial, whereas in this one we will discuss CompactPCI. Note the difference? CompactPCI is the predecessor of CompactPCI Serial.

CompactPCI is a computer bus interconnect standard for industrial computers, combining the electrical characteristics of the Peripheral Component Interconnect (PCI) specification with the physical form factor and high-reliability connectors of the Eurocard type. The main aim of CompactPCI is to provide a high-performance and highly reliable computing platform suitable for mission-critical applications where standard desktop PC components might not be suitable due to environmental, reliability, or scalability concerns. The architecture leverages the robust 3U or 6U Eurocard form factor, providing a modular approach that supports hot swapping, which is essential for applications requiring minimal downtime and for ease of maintenance. This feature is particularly important in telecommunications, military, and industrial applications where systems need to operate continuously and reliably over long periods.

Unlike CompactPCI Serial, CompactPCI uses a parallel bus structure, which, at the time of its inception, offered a significant improvement over other bus technologies in terms of data transfer speeds and bandwidth. This made it highly suitable for complex computations and data-intensive applications. Additionally, the standard's support for rear I/O and the use of ruggedized connectors enhance its reliability and make it well-suited for harsh environments. CompactPCI systems are built around a chassis that houses a backplane with slots for inserting processor boards, peripheral cards, and other modules. This modularity allows for flexible system configuration and easy upgrades, as components can be added or replaced without disrupting the entire system.

![Dual backplane (CompactPCI Serial and CompactPCI Express) (credit: EKF)](image363.jpeg)

> [!Figure]
> _Dual backplane (CompactPCI Serial and CompactPCI Express) (credit: EKF)_

But wait, there's more confusion available. There's also CompactPCI Express, which came as an attempt to overcome the limitations of the parallel bus of CompactPCI. Isn't that just CompactPCI Serial, then? CompactPCI Express is an evolution of the original CompactPCI specification that integrates the high-speed serial connectivity of PCI Express (PCIe) into the established CompactPCI form factor. CompactPCI Express introduces a hybrid backplane design, which supports both the new serial PCI Express lanes and the traditional CompactPCI parallel bus.
This design allows for a gradual transition from CompactPCI to CompactPCI Express, enabling system designers and end-users to upgrade to newer, faster technology at their own pace without necessitating a complete overhaul of existing systems. It provides a flexible upgrade path, ensuring that investments in current technology remain protected while offering a clear route to adopting advanced capabilities.

In reality, CompactPCI Serial and CompactPCI Express are both evolutions of the original CompactPCI standard, designed to address the need for higher data transfer rates in industrial and embedded computing systems. While they share a common heritage, there are fundamental differences in their approach to enhancing performance and connectivity, reflecting the diverse requirements of modern computing applications.

CompactPCI Express, as an intermediate step between CompactPCI and CompactPCI Serial, integrates PCI Express (PCIe) technology into the CompactPCI framework. It achieves this by adding high-speed serial PCI Express lanes to the existing parallel bus system of CompactPCI. This dual-bus approach allows for a significant boost in data transfer rates while maintaining backward compatibility with existing CompactPCI modules. CompactPCI Express is particularly suited for applications that require a balance between leveraging the high-speed capabilities of PCIe and maintaining investments in CompactPCI infrastructure.

On the other hand, CompactPCI Serial represents a more radical departure from the original standard. It eliminates the parallel bus altogether and is built entirely around serial communication technologies, including PCI Express, Ethernet, SATA, and USB. This shift to a fully serial architecture allows CompactPCI Serial to achieve even higher data transfer rates and greater bandwidth than CompactPCI Express. Additionally, CompactPCI Serial simplifies the backplane by relying solely on point-to-point connections, which enhances reliability and scalability. The standard is designed to support a wide range of applications, from simple to highly complex systems, by providing a versatile and high-performance platform that can accommodate the latest communication technologies.

> [!hint]
> The primary difference between CompactPCI Serial and CompactPCI Express lies in their architectural approach and the extent of their departure from the original CompactPCI standard.
> CompactPCI Express serves as a hybrid that offers improved performance through the addition of PCIe lanes while retaining a level of backward compatibility with CompactPCI. CompactPCI Serial, however, fully embraces serial communication technologies, offering a more future-proof solution but requiring a greater commitment to transitioning away from legacy CompactPCI components.

We wouldn't need to dive too much into CompactPCI Express if it were not for the fact that it is the baseline architecture of PXI Express. PXI Express is an extension of the PXI standard, integrating PCI Express technology to enhance data transfer rates and bandwidth. This integration significantly improves the performance of PXI systems, which is particularly beneficial for applications requiring high-speed data acquisition, signal processing, and real-time control. In PXI Express systems, each slot has dedicated bandwidth, reducing the data bottlenecks common in shared-bus architectures.
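To give a feel for why dedicated links matter, here is a minimal, illustrative sketch comparing effective per-slot throughput on a shared parallel bus versus dedicated per-slot serial links. The specific numbers (a 132 MB/s shared PCI bus and a 2 GB/s PCIe x4 Gen 2 link per slot) are assumptions chosen only for illustration, not figures from the PXI Express specification:

```python
# Illustrative comparison: shared parallel bus vs. dedicated per-slot serial links.
# Numbers are assumptions for illustration only:
#   - shared bus peak: 132 MB/s (32-bit PCI at 33 MHz)
#   - dedicated link per slot: 2000 MB/s (PCIe x4 Gen 2, per direction)

SHARED_BUS_PEAK_MBPS = 132.0
DEDICATED_LINK_MBPS = 2000.0

def per_slot_bandwidth_shared(active_slots: int) -> float:
    """Rough best case: the shared bus bandwidth is split among active slots."""
    return SHARED_BUS_PEAK_MBPS / active_slots

def per_slot_bandwidth_dedicated(active_slots: int) -> float:
    """Each slot keeps its own link regardless of how many slots are active."""
    return DEDICATED_LINK_MBPS

if __name__ == "__main__":
    for n in (1, 4, 8):
        print(f"{n} active slots: shared ~{per_slot_bandwidth_shared(n):7.1f} MB/s/slot, "
              f"dedicated {per_slot_bandwidth_dedicated(n):7.1f} MB/s/slot")
```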
Key features of PXI Express include increased data throughput, with potential speeds far exceeding the original PXI standard, and backward compatibility with existing PXI modules. This compatibility ensures that users can integrate new PXI Express modules into their current systems without completely overhauling their setups. PXI Express also maintains the timing and synchronization features of PXI, making it suitable for applications that require precise coordination between different instruments. The adoption of PCI Express technology in PXI Express allows for a wide range of applications, from basic instrumentation to complex, high-bandwidth test systems, serving industries like telecommunications, automotive, aerospace, and more.

PXI Express supports 3U and 6U Module form factors just like PXI. Several new connectors have been added to support PCI Express and are defined by the CompactPCI Express specification. This specification uses different names for the Module and slot types as compared to CompactPCI Serial and introduces some new types as well. The table below shows the PXI Express component names and the equivalent CompactPCI Express component names:

- Chassis
- Controller
- Modules
- Software Services

> [!warning]
> This section is under #development

![Exploded view of a PXI system (credit: NI)](image364.png)

> [!Figure]
> _Exploded view of a PXI system (credit: NI)_

# Mezzanines

A mezzanine is a type of circuit board that augments a carrier PCB. The name reflects the fact that it is much like a mezzanine floor in architecture, which sits between the main floors of a building. Mezzanine PCBs allow additional functionality to be added to an existing electronic system without the need to redesign or significantly alter the main PCB. This approach is particularly beneficial in complex digital systems, where space is at a premium or when future upgrades and scalability are anticipated. By using a mezzanine, designers can add new features or change interfaces with minimal impact on the base design. The connection between the main board and the mezzanine is typically made through high-density connectors. These connectors carry the data interfaces and power between the boards while maintaining a compact and efficient design.

Today, three popular mezzanine standards dominate the market: [[Backplanes and Standard Form Factors#PMC Module Standard|PMC]] (PCI Mezzanine Card), [[Backplanes and Standard Form Factors#XMC Module Standards|XMC]] (Switched Mezzanine Card), and [[Backplanes and Standard Form Factors#FMC Module Standards|FMC]] (FPGA Mezzanine Card). These mezzanines support all popular industry architectures including [[Backplanes and Standard Form Factors#VME|VME]], [[Backplanes and Standard Form Factors#OpenVPX (VITA 65)|OpenVPX]], [[Backplanes and Standard Form Factors#Compact PCI and CompactPCI Express|CompactPCI]], and [[Backplanes and Standard Form Factors#CompactPCI Serial|CompactPCI Serial]], for both 3U and 6U form factors and across a range of cooling techniques and ruggedization levels. Each of these three mezzanine standards presents a unique set of advantages and shortcomings that we will discuss in this section.

## PMC Module Standard

Defined under the IEEE 1386.1 standard over 15 years ago, PMC uses the mechanical dimensions of the CMC (Common Mezzanine Card) from IEEE 1386 with the addition of up to four 64-pin connectors to implement a 32- or 64-bit PCI bus as well as user I/O.
As shown in the figure below, two connectors, P11 and P12, handle a 32-bit PCI bus, which is expandable to 64 bits with the addition of the P13 connector. Operating at PCI bus clock speeds of 33 or 66 MHz, the 32-bit interface delivers a peak transfer rate of 132 or 264 MB/s respectively, and a 64-bit interface delivers twice those rates. PCI-X boosts the clock rate to 100 or 133 MHz for a peak transfer rate of 800 or 1000 MB/s in 64-bit implementations. The optional P14 connector supports 64 bits of user-defined I/O. As PCI buses met their limits and industrial computing systems migrated towards serial-based interconnects like PCI Express, the need for a similar migration for mezzanine modules became evident.

![PMC Module outline dimensions and connectors (credit: Pentek)](image365.png)

> [!Figure]
> _PMC Module outline dimensions and connectors (credit: Pentek)_

## XMC Module Standards

XMC modules are defined under VITA 42 as the serial, point-to-point version of the PMC module. An XMC module requires either one or two multipin connectors, called the primary (P15) and secondary (P16) XMC connectors, shown in the figure below. Each connector can handle eight bidirectional serial lanes, using a differential pair in each direction. The VITA 42.3 sub-specification defines pin assignments for PCIe, while VITA 42.2 covers Serial RapidIO. Typically, each XMC connector is used as a single x8 logical link or as two x4 links, although other configurations are also defined. Data transfer rates for XMC modules depend on the gigabit serial protocol and the number of lanes per logical link.

![XMC Module outline dimensions and connectors (credit: Pentek)](image366.png)

> [!Figure]
> _XMC Module outline dimensions and connectors (credit: Pentek)_

## FMC Module Standards

Defined in the VITA 57 specification, FMC modules are intended as I/O modules for FPGAs. They depart from the CMC form factor, with less than half the real estate, as shown in the figure below. Two different connectors are supported: a low-density connector with 160 contacts and a high-density [[Physical Layer#Connectors|connector]] with 400 contacts. Connector pins are generically defined for power, data, control, and status, with the specific implementation depending on the design. FMC modules rely upon the FPGA sitting on the carrier board to provide the necessary interfaces to the FMC components. These can be single-ended or differential parallel data buses, gigabit serial links, clocks, and control signals for initialization, timing, triggering, gating, and synchronization. For data, the high-density FMC connector provides 80 differential pairs or 160 single-ended lines. It also features ten high-speed gigabit serial lanes, with a differential pair for each direction.
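The Comparison subsection below contrasts these three interfaces in more detail; as a preview, here is a minimal sketch of the peak-rate arithmetic using the figures quoted in this section. The serial line rates and encoding efficiencies are illustrative assumptions, not specification values for any particular product:

```python
# Back-of-the-envelope peak-rate arithmetic for the three mezzanine interfaces.
# Parallel buses: width (bits) x clock (MHz) / 8 -> MB/s.
# Serial links: lanes x line rate (Gbps) x encoding efficiency / 8 -> GB/s.
# Line rates and encoding efficiencies below are illustrative assumptions.

def parallel_peak_mbps(width_bits: int, clock_mhz: float) -> float:
    return width_bits * clock_mhz / 8.0

def serial_peak_gbps(lanes: int, line_rate_gbps: float, efficiency: float) -> float:
    return lanes * line_rate_gbps * efficiency / 8.0  # GB/s

if __name__ == "__main__":
    # PMC: 32-bit PCI at 33/66 MHz; PCI-X 64-bit at 133 MHz (theoretical peak)
    print(f"PCI 32-bit @ 33 MHz : {parallel_peak_mbps(32, 33):7.0f} MB/s")
    print(f"PCI 32-bit @ 66 MHz : {parallel_peak_mbps(32, 66):7.0f} MB/s")
    print(f"PCI-X 64-bit @133MHz: {parallel_peak_mbps(64, 133):7.0f} MB/s")
    # XMC: assumed x8 PCIe Gen 1 link (2.5 Gbps/lane, 8b/10b -> 0.8 efficiency)
    print(f"XMC x8 PCIe Gen 1   : {serial_peak_gbps(8, 2.5, 0.8):7.2f} GB/s")
    # FMC: 80 differential pairs at an assumed 1 Gbps each, no encoding overhead
    print(f"FMC 80 pairs @1 Gbps: {serial_peak_gbps(80, 1.0, 1.0):7.2f} GB/s")
```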
AMC modules are designed to be hot-pluggable, either on a carrier card (baseboards and system carrier boards in AdvancedTCA systems) or plugged directly into a backplane, as defined by the MicroTCA specifications.

![Examples of AMC cards showing processor, IO, and storage functions all in the same form factor.](ATCA.jpg)
> [!Figure]
> _Examples of AMC cards showing processor, IO, and storage functions all in the same form factor._

## Comparison
XMCs have an inherent data rate advantage over PMCs because they use gigabit serial links. Even the slowest x4 PCIe 1.0 interface still matches the fastest PCI-X 64-bit bus at 133 MHz. Moreover, a major system-level implication of the gigabit serial interfaces is that they are dedicated point-to-point links and are not subject to the sharing penalty (or advantage, depending on the application) of parallel buses. Unlike PMCs and XMCs, FMCs do not use industry-standard interfaces like PCI or PCIe. Instead, each FMC has a unique set of control lines and data paths, each one differing in signal levels, quantity, bit widths, and speed. At a 1 GHz data clock rate, the 80 differential data lines can deliver 10 GB/s. At a 5 GHz serial clock rate, the ten gigabit serial lanes can deliver 5 GB/s. In fact, specification design goals for FMCs are twice these rates. In terms of board area, FMC modules are less than half the size of PMCs and XMCs, and less [[Printed Circuit Boards#Board Layout|real estate]] means less freedom to place components for shielding, isolation, and heat dissipation. For example, A/D converters are extremely sensitive to spurious signal pickup from power supplies, voltage planes, and adjacent copper traces. Often, the required power supply lines must be re-regulated and filtered locally on the same board as the A/D converters for best results. Arranging this circuitry on a small FMC module can be challenging. Even though XMC modules have more components, they can often be rearranged more easily because of the larger board size. FMCs require the FPGA to reside on the carrier board, while FPGA-based XMC modules include the FPGA on the mezzanine board. Schematically, the circuitry between the front end and the system bus may be nearly identical, but the physical partitioning of the hierarchy occurs at two different points. To illustrate this, the figure below shows two different implementations of a four-channel A/D converter module for 3U OpenVPX. Notice that both block diagrams feature the same A/D converters and FPGAs and provide the same x8 PCIe interface to the OpenVPX backplane. The XMC implementation on top uses the XMC connector between the FPGA and the backplane, while the FMC implementation below uses the FMC connector between the A/Ds and the FPGA.

![Comparison of XMC and FMC for a typical application (source: https://www.pentek.com/tutorials/21_2/mezz.cfm)](image369.png)
> [!Figure]
> _Comparison of XMC and FMC for a typical application (source: https://www.pentek.com/tutorials/21_2/mezz.cfm)_

Because most of the power is consumed by the FPGA, comparing power dissipation between FMC and XMC modules will strongly favor the FMC. However, since the same resources are used in both block diagrams, the 3U module power dissipation is nearly identical. In a comparison between PMC and XMC or FMC modules, there is one additional factor to consider: gigabit serial interfaces implemented in FPGAs typically consume more power than parallel bus interfaces.
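The peak-rate figures quoted in this comparison can be checked with the same kind of arithmetic, this time for serial lanes. The sketch below assumes 8b/10b line coding for the gigabit serial links (the coding used by PCIe Gen1/Gen2-class SerDes; other protocols and generations differ) and ignores packetization overhead:

```python
# Rough peak-rate arithmetic behind the XMC/FMC figures quoted above.

def serial_lanes_gb_s(lanes: int, line_rate_gbps: float, coding_efficiency: float = 0.8) -> float:
    """Aggregate payload bandwidth of point-to-point serial lanes, in GB/s (8b/10b assumed)."""
    return lanes * line_rate_gbps * coding_efficiency / 8  # bits -> bytes

def parallel_pairs_gb_s(pairs: int, bit_rate_gbps: float) -> float:
    """Aggregate bandwidth of source-synchronous parallel data pairs, in GB/s."""
    return pairs * bit_rate_gbps / 8

# XMC: the slowest common configuration, x4 PCIe 1.0 (2.5 Gbps per lane)
print(f"x4 PCIe 1.0                  : {serial_lanes_gb_s(4, 2.5):.2f} GB/s")     # ~1 GB/s, on par with PCI-X 64-bit/133 MHz

# FMC high-density connector, using the figures quoted in the text
print(f"FMC, 80 pairs @ 1 Gbps       : {parallel_pairs_gb_s(80, 1.0):.0f} GB/s")  # 10 GB/s
print(f"FMC, 10 serial lanes @ 5 Gbps: {serial_lanes_gb_s(10, 5.0):.0f} GB/s")    # 5 GB/s
```

Note that the 5 GB/s figure matches the quoted rate only under the 8b/10b assumption; a different line coding would shift it somewhat.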
Returning to power consumption: when considering PMC products versus XMC/FMC products in applications like the one described above, the PCI bus of the PMC module will draw less power than a PCIe link. Of course, the extra power required for PCIe delivers additional benefits in both speed and connectivity. In general, FMC modules can be more effective if the same vendor supplies both the mezzanine module and the carrier with tested and installed FPGA bitstreams. Otherwise, XMC modules appear to be a better option for modular digital systems due to the proliferation of links, carriers, backplanes, and adaptors all based on PCIe. This eliminates the need for a custom FPGA development effort, minimizes product support issues, and speeds up development cycles. The AMC standard differs from PMC, XMC, and FMC in the orientation of its connector (0 degrees instead of 90 degrees), which enables hot-plugging of AMC modules.

# COM Express (Computer-on-Modules) and Carrier Boards
A Computer-On-Module, or CoM, is a pluggable module with all components necessary for a bootable host computer, packaged as a board that sits on another board as a "topping" or a "cape". A CoM requires a Carrier Board to bring out I/O and to power up. CoMs are used to build computer solutions and offer OEMs fast time-to-market with reduced development costs. COM Express is a specification managed by the PICMG that defines several aspects of Computer-on-Modules and is released as an open standard. Since its initial ratification in 2005, COM Express has become one of the most popular embedded hardware standards in the world, spawning eight different Types, four different sizes, and three major revisions while retaining a modular architecture that promotes vendor interoperability and technology reuse. The primary feature that sets COM Express apart from traditional single-board computers (SBCs) is the ability to plug off-the-shelf modules into custom carrier boards designed to application-specific requirements. In other words, a custom COM Express carrier board can be designed into a system to transport all necessary signals to and from subsystems and peripherals, and off-the-shelf COM Express processor modules can plug directly into the carrier and serve as the main controller. This dual-board architecture removes the need for the high-speed interfacing and signal-integrity expertise users would otherwise require to design their own processor modules, reduces time to market, and provides a scalable upgrade path to newer modules should the application require more performance in the future. COM Express modules connect to carriers and other hardware via connectors with different standardized pinouts, the most common of which today are Types 6, 7, and 10. The latest revision to the COM Express specification, COM.0 R3.1, helps modernize these interfaces by adding USB4 to Type 6 designs, CEI sideband 10 GbE signaling for Type 7 modules, and PCI Express Gen 4 support across all module types.

![COM Express form factors (credit: PICMG)](image370.png)
> [!Figure]
> _COM Express form factors (credit: PICMG)_

Four module sizes are defined: the Mini Module, Compact Module, Basic Module, and Extended Module (see figure above). The primary difference between the different size modules is the overall physical size and the performance envelope supported by each. The Extended Module is the largest and can support larger processor and memory solutions.
The Compact Module, Basic Module, and Extended Module use the same connectors and pinouts, whereas the 84 x 55 mm Mini Module targets, but is not limited to, the COM Express A-B connector with the Type 10 pin-out. In addition, the Mini Module allows for a wide range of power supply operation. The different size Modules share several common mounting hole positions. This level of compatibility allows a Carrier Board to be designed to accommodate multiple Module sizes. Up to 440 pins of connectivity are available between COM Express Modules and the Carrier Board. The interfaces include high-speed serial interconnects such as PCI Express, Serial ATA, USB 2.0/3.0, and Gigabit and 10 Gigabit Ethernet. To enhance interoperability between COM Express Modules and Carrier Boards, several common signaling configurations (Pinout Types) have been defined to ease system integration. The pin-out Type 10 definition requires only a single 220-pin connector, while pin-out Types 6 and 7 require both 220-pin connectors to supply all the defined signaling. The Carrier Board connector shall be a 440-pin plug composed of two 220-pin, 0.5 mm pitch plugs. The pair of connectors may be held together by a plastic carrier during assembly to allow handling by automated assembly equipment. The Carrier Board connector is a plug by the vendor's technical definition, although to some users it looks like a receptacle.

![Carrier Board Plug (credit: PICMG)](image371.png)
> [!Figure]
> _Carrier Board Plug (credit: PICMG)_

COM Express modules should be equipped with a heat spreader. The heat spreader by itself does not constitute the complete thermal solution for a Module but provides a common interface between Modules and implementation-specific thermal solutions. If implemented, the heat spreader for the Compact, Basic, and Extended form factors shall be used as defined in the specification, whereas the Mini form factor may use an implementation-specific set of holes and spacers to attach the heat spreader to the Module. The intent is to be able to provide a Module and heat spreader as an assembly that can then be mounted to a Carrier without having to break the thermal interface between the Module components and the heat spreader.
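As a compact way to capture the connector arrangement just described, the sketch below encodes the pin-out Types and module sizes as a small lookup table. The connector counts follow the text; the Compact, Basic, and Extended dimensions are the commonly published COM.0 values and should be verified against the specification revision actually in use:

```python
# Illustrative lookup of COM Express pin-out Types and module sizes.
# Connector counts follow the text (Type 10: single 220-pin A-B connector;
# Types 6 and 7: both 220-pin connectors, 440 pins total). Dimensions other
# than the Mini are the commonly published COM.0 values and should be
# double-checked against the specification revision in use.

PINOUT_TYPES = {
    6:  {"connectors": 2, "pins": 440},
    7:  {"connectors": 2, "pins": 440},
    10: {"connectors": 1, "pins": 220},
}

MODULE_SIZES_MM = {
    "Mini":     (84, 55),
    "Compact":  (95, 95),
    "Basic":    (125, 95),
    "Extended": (155, 110),
}

def describe(pinout_type: int, size: str) -> str:
    p = PINOUT_TYPES[pinout_type]
    w, h = MODULE_SIZES_MM[size]
    return (f"Type {pinout_type} {size} module: {w} x {h} mm, "
            f"{p['connectors']} x 220-pin connector(s), {p['pins']} pins to the carrier")

# For example, a Type 10 Mini module such as the ADLINK nanoX-EL shown below:
print(describe(10, "Mini"))
```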
![Cross-section of a Module and heat-spreader assembled to a Carrier](image372.png)
> [!Figure]
> _Cross-section of a Module and heat-spreader assembled to a Carrier_

![COM Express mounting positions (all dimensions in mm) (credit: PICMG)](image373.png)
> [!Figure]
> _COM Express mounting positions (all dimensions in mm) (credit: PICMG)_

Below is an example of a Mini size COM Express Computer-On-Module:

![ADLINK nanoX-EL Mini size COM Express, Type 10 module (top view) (credit: ADLink)](image374.png)
![ADLINK nanoX-EL Mini size COM Express, Type 10 module (bottom view) (credit: ADLink)](image375.png)
> [!Figure]
> _ADLINK nanoX-EL Mini size COM Express, Type 10 module (top and bottom views) (credit: ADLink)_

Specifications of the nanoX:

- Intel Atom® x6000E Series, Intel® Pentium® and Intel® Celeron® N and J Series processors
- LPDDR4, up to 16GB, IBECC capable
- 4K display via DDI 0, HD Audio
- LVDS or eDP, 4 lanes
- 4x PCIe Gen3
- 2.5GbE Ethernet, TSN capable
- 2x USB 3.x/2.0 and 6x USB 2.0
- 2x SATA
- GPIO/SD and eMMC
- 4.75V to 20V wide voltage input

Carrier (miniBase-10R):

![miniBASE-10R COM Express Type 10 Reference Carrier Board (credit: ADLink)](image376.png)
> [!Figure]
> _miniBASE-10R COM Express Type 10 Reference Carrier Board (credit: ADLink)_

# PC/104
PC/104 is living proof that legacy standard interconnects are still around: the ISA bus is still standing strong. PC/104 is the standard that provides the mechanical and electrical specifications for a compact version of the ISA bus, optimized for the requirements of embedded systems applications. PC/104 modules can be of two bus types, 8-bit and 16-bit. As shown in the figures below, each of the two bus types offers two bus options, according to whether or not the P1 and P2 bus connectors extend through the module as "stack-through" connectors. These options are provided to help meet the tight space requirements of embedded applications. The figures below illustrate a typical module stack including both 8- and 16-bit modules and show the use of both the "stack-through" and "non-stack-through" bus options. When 8- and 16-bit modules are combined in a stack, the 16-bit modules must be stacked below (i.e., on the "secondary side" of) the 8-bit modules. A "passive" P2 bus connector may optionally be included in the design of 8-bit modules, to allow the use of these modules anywhere in a stack. More information can be found in the PC/104 Specification Version 2.5.

![PC/104 8-bit Module Dimensions](image377.png)
> [!Figure]
> _PC/104 8-bit Module Dimensions_

![PC/104 16-bit Module Dimensions (note the extra connector J2)](image378.png)
> [!Figure]
> _PC/104 16-bit Module Dimensions (note the extra connector J2)_

![PC/104 stack with different bus widths (8-bit and 16-bit)](image379.png)
> [!Figure]
> _PC/104 stack with different bus widths (8-bit and 16-bit)_

![PC/104 stack showing board clearances](image380.png)
> [!Figure]
> _PC/104 stack showing board clearances_

![Maximum component height in a PC/104 board](image381.png)
> [!Figure]
> _Maximum component height in a PC/104 board_

## PC/104 (ISA) signal definition
The signals in PC/104 are single-ended and parallel, and they include:

- Address and Data
- Cycle Control
- Bus Control
- Interrupt & DMA

Note that all signals marked with "#" are active low.
### Address and Data

| **Signal** | **Description** |
| --- | --- |
| **BALE** | **Bus Address Latch Enable** line is driven by the platform CPU to indicate when SA<19:0>, LA<23:17>, AENx, and SBHE# are valid. It is also driven to a logical HIGH when an ISA add-on card or DMA controller owns the bus. |
| **SA<19:0>** | Address lines are driven by the ISA bus master to define the lower 20 address signal lines needed for the lower 1 MB of the memory address space. |
| **LA<23:17>** | Latched Address lines are driven by the ISA bus master or DMA controller to provide the additional address lines required for the 16 MB memory address space. |
| **SBHE#** | System Byte High Enable line is driven by the ISA bus master to indicate that valid data resides on the SD<15:8> lines. |
| **AENx** | Address Enable line is driven by the platform circuitry as an indication to ISA resources not to respond to the ADDRESS and I/O COMMAND lines. This line is the method by which I/O resources are informed that a DMA transfer cycle is occurring and that only the I/O resource with an active DACKx# signal line can respond to the I/O signal lines. |
| **SD<15:0>** | Data lines 0 – 7 or 8 – 15 are driven for an 8 data bit cycle, and 0 – 15 are driven for a 16 data bit cycle. |

### Cycle Control

| **Signal** | **Description** |
| --- | --- |
| **MEMR#** | Memory Read line is driven by the ISA bus master or DMA controller to request a memory resource to drive data onto the bus during the cycle. |
| **SMEMR#** | System Memory Read line requests a memory resource to drive data onto the bus during the cycle. This line is active when MEMR# is active and the LA signal lines indicate the first 1 MB of address space. |
| **MEMW#** | Memory Write line requests a memory resource to accept data from the data lines. |
| **SMEMW#** | System Memory Write line requests a memory resource to accept data from the data lines. This line is active when MEMW# is active and the LA signal lines indicate the first 1 MB of address space. |
| **IOR#** | I/O Read line is driven by the ISA bus master or DMA controller to request an I/O resource to drive data onto the data bus during the cycle. |
| **MEMCS16#** | Memory Chip Select 16 line is driven by the memory resource to indicate that it is an ISA resource that supports a 16 data-bit access cycle. It also allows the ISA bus master to execute shorter cycles. |
| **IOCS16#** | I/O Chip Select 16 line is driven by an I/O resource to indicate that it is an ISA resource that supports a 16 data-bit access cycle. It also allows the ISA bus master to execute shorter default cycles. |
| **IOCHRDY** | I/O Channel Ready line allows resources to indicate to the ISA bus master that additional cycle time is required. |
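To make the cycle-control handshaking above a bit more concrete, here is a heavily simplified behavioural sketch (Python) of an 8-bit I/O read as seen by an add-on card. It only captures the decode logic implied by IOR#, AEN, and the address lines; real ISA timing (BCLK phases, command delays, IOCHRDY wait-state windows) is not modeled, and the base address 0x300 is just a classic prototyping-card example:

```python
# Heavily simplified model of an ISA add-on card responding to an 8-bit I/O read.
# Signal names follow the tables above (SA, AEN, IOR#, IOCHRDY, SD); bus timing
# is not modeled.

class IsaIoCard:
    def __init__(self, base_address, registers):
        self.base = base_address
        self.regs = registers      # register offset -> 8-bit value
        self.iochrdy = True        # a slow card would drive this low to add wait states

    def io_read(self, sa, aen, ior_n):
        """Return the byte driven on SD<7:0>, or None if the card does not respond."""
        if aen:                    # AEN asserted: a DMA cycle owns the bus, ignore the address
            return None
        if ior_n:                  # IOR# is active low; high means this is not an I/O read
            return None
        offset = sa - self.base
        if offset not in self.regs:  # address not decoded by this card
            return None
        return self.regs[offset] & 0xFF

# Hypothetical card decoding two registers at base address 0x300
card = IsaIoCard(0x300, {0: 0x5A, 1: 0xA5})
print(hex(card.io_read(sa=0x301, aen=False, ior_n=False)))  # -> 0xa5
```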
### Bus Control

| **Signal** | **Description** |
| --- | --- |
| **REFRESH#** | Memory Refresh line is driven by the refresh controller to indicate a refresh cycle. |
| **MASTER16#** | MASTER16# line is only driven active by an ISA add-on bus owner card that has been granted bus ownership by the DMA controller. |
| **IOCHK#** | I/O Channel Check line is driven by any resource. It is active for a general error condition that has no specific interpretation. |
| **RESET** | Reset line is driven active by the platform circuitry. Any bus resource that senses an active RESET signal line must immediately tri-state all output drivers and enter the appropriate reset condition. |
| **BCLK** | System Bus Clock line is a clock driven by the platform circuitry. It has a 50% ± approximately 5% duty cycle (57 to 69 nanoseconds for 8 MHz), at a frequency of 6 to 8 MHz (± 500 ppm). |
| **OSC** | Oscillator line is a clock driven by the platform circuitry. It has a 45 – 55% duty cycle, at a frequency of 14.31818 MHz (± 500 ppm). It is not synchronized to any other bus signal line. |

### Interrupt & DMA

| **Signal** | **Description** |
| --- | --- |
| **IRQx** | Interrupt Request lines allow add-on cards to request interrupt service by the platform CPU. |
| **DRQx** | DMA Request lines are driven active by I/O resources to request service by the platform DMA controller. |
| **DACKx#** | DMA Acknowledge lines are driven active by the platform DMA controller to select the I/O resource that requested a DMA transfer cycle. |
| **TC** | Terminal Count line is driven by the platform DMA controller to indicate that all the data has been transferred. |

### Signal Pinout

![PC/104 8-bit and 16-bit ISA Bus Signal Assignments](image382.png)
> [!Figure]
> _PC/104 8-bit and 16-bit ISA Bus Signal Assignments_

## PC/104 & Cubesats
The PC/104's small, stackable, compact form factor made it ideal for applications where volume is limited, like in CubeSats. CubeSats are a class of nanosatellites that are built to standard dimensions (Units or "U" of 10 cm x 10 cm x 10 cm). They were initially developed for academic and hobbyist purposes but have since found broader applications in space research, Earth observation, and even commercial deployments. The compact and standardized nature of CubeSats makes them cost-effective and quicker to develop compared to traditional satellites. There is a breadth of off-the-shelf components for CubeSats based on the PC/104 form factor, from radios to power subsystems and on-board computers (see figure below).

> [!note]
> Although components for CubeSats tend to comply with the mechanical requirements of the standard, they do not necessarily comply with or use the ISA bus signaling scheme. In general, providers route signals of different protocols and standards using proprietary pinouts, which is an ironic twist for a standard.
![On-board computer for a CubeSat (credit: Nanoavionics)](image383.jpg)
> [!Figure]
> _On-board computer for a CubeSat (credit: Nanoavionics)_

Radio equipment for CubeSats is also available in the PC/104 form factor. See the figure below (note the stack of boards using the PC/104 connector).

![An S-Band transceiver for a CubeSat (Credit: Syrlinks)](ewc31.png)
> [!Figure]
> _An S-Band transceiver for a CubeSat (credit: Syrlinks)_

# AdvancedTCA and MicroTCA (Blade Servers)
The Advanced Telecom Computing Architecture (AdvancedTCA) specifications are a series of PICMG[^83] specifications, designed to provide an open, multi-vendor architecture targeted at the requirements of the next generation of carrier-grade telecommunications equipment. The PICMG specifications incorporate the latest trends in high-speed interconnect technologies, next-generation processors, and improved reliability, manageability, and serviceability. AdvancedTCA, also known as ATCA, is the first open architecture that provides a robust system management architecture enabling high-availability systems that keep running in the event of an individual component or sub-system failure. This also enables "on-the-fly" software upgrades while the system is operating. The original AdvancedTCA specification was released in January 2003 and has been adopted by many telecommunication equipment providers. It has expanded its reach into non-carrier-grade environments where high processor and I/O density coupled with high system bandwidth are required. AdvancedTCA is the most widely used open standard for global telecommunications infrastructure and is becoming so for a variety of critical military applications. AdvancedTCA is also employed in large-scale physics experiments and ruggedized applications in the military market. Companies participating in the AdvancedTCA effort have brought a wide range of knowledge of the industry. They include telecommunications equipment manufacturers, board and system-level vendors, computer OEMs, software companies, and chassis, connector, and power supply vendors. The specifications provide enough information to allow board, backplane, and chassis vendors to independently develop products that will be interoperable when integrated together. Details include board dimensions, equipment practice, connectors, power distribution, and a robust system management architecture that can be used independently of the switch fabric link technology. Interoperability of system components from different manufacturers is tested regularly through an ongoing series of PICMG-sponsored Interoperability Workshops. The AdvancedTCA community has recently completed and released a fairly major enhancement to the core ATCA standard. This new specification is known as "PICMG 3.7" or "ATCA Extensions". It expands the packaging definitions to include dual-sided shelves, where ATCA boards can plug into either the front or the back of a double-deep rack and interconnect through the backplane. In addition, the Extensions specification also allows for an Extended Transition Module (ETM), which is essentially a front-board-sized circuit board that connects to a front board via Zone 3, much like a standard Rear Transition Module. Many variations of interconnects are allowed.
Importantly, PICMG 3.7 provides a much more detailed definition of, and support for, double-wide modules than the original specification. These can support multiple processors, bigger heatsinks, cheaper full-height memory modules, and multiple disk drives on a single assembly if desired. PICMG 3.7 also defines requirements for typical data center environments in addition to the telco central office. Double-wide modules can support up to 800 W of power dissipation if the shelf is built for that. AC as well as traditional -48 VDC power environments are also supported. The PICMG 3.1 R3.0 100GbE ATCA specification has also been ratified. Driven by the need for higher bandwidth in mobility, video, and security, this effort provided a capacity improvement to the ATCA platform by incorporating 100 Gb backplane Ethernet. Backward compatibility has been maintained. PICMG 3.1 R3.0 updated the PICMG 3.1 specification to incorporate 100GBASE-KR4 (NRZ) Ethernet signaling, with full simulation/characterization studies to ensure compliance.

> [!warning]
> To be #expanded

# Optical Backplanes
The maximum data rate achievable by a single differential pair using copper traces and reasonable-quality dielectrics depends on various factors, including the quality of the materials involved (dielectrics and copper foils), the quality of the manufacturing and assembly process, the design of the interconnects, the signal integrity techniques employed, and the overall system design. The use of copper for the conductive paths and good-quality dielectrics greatly influences the maximum achievable data rates. Equalization techniques like pre-emphasis and de-emphasis (discussed earlier), together with clock and data recovery, can significantly increase the maximum rates. However, the limiting factors for data rates on copper differential pairs remain, mainly frequency-dependent losses and crosstalk. These factors become more pronounced as the frequency of operation increases. As of 2023, copper-based high-speed interfaces like USB 3.2, Thunderbolt, and high-speed Ethernet achieve data rates in the range of tens of gigabits per second per differential pair. For example, USB 3.2 Gen 2x2 can achieve 20 Gbps over two 10 Gbps lanes, and Thunderbolt 3 up to 40 Gbps aggregated over two lanes. Although the industry is continuously advancing with new materials, better signal processing techniques, and improved design methodologies, which might push the practical limits even further, it is also worth checking what the alternatives would be. One alternative is to use optical backplanes.

> [!warning]
> To be #expanded

[^81]: https://www.te.com/commerce/DocumentDelivery/DDEController?Action=srchrtrv&DocNm=2352031-1\_multigig-rt3&DocType=DS&DocLang=EN

[^82]: https://www.picmg.org/openstandards/compactpci-serial/

[^83]: PICMG, or PCI Industrial Computer Manufacturers Group, is a consortium of over 140 companies founded in 1994. The group was originally formed to adapt PCI technology for use in high-performance telecommunications, military, and industrial computing applications, but its work has grown to include newer technologies. PICMG is distinct from the similarly named and adjacently focused PCI Special Interest Group (PCI-SIG). PICMG currently focuses on developing and implementing specifications and guidelines for open standards-based computer architectures from a wide variety of interconnects.

[^84]: SOSA PICs are based on OpenVPX. ANSI/VITA standards use the term "plug-in module" instead of PIC.