# Units, Chassis and Racks

After having discussed [[Printed Circuit Boards|boards]] extensively, along with the routing and propagation of signals in interconnects, protocols, and board-to-board communication and interconnection, we will now spend some time discussing how all this hardware is housed in mechanical enclosures.

## Units

One thing is clear: bare boards cannot survive the elements in the field. Although PCBs can be protected with coatings and epoxy resins, almost no PCB is meant to perform in its final application unprotected. The reasonable approach is to host the boards in mechanical enclosures that shelter the hardware, protect it from the harsh environment, and expose only the interfaces that must be exposed to the outside.

With this in mind, let's try to define what a *unit* is. When we discussed [[Printed Circuit Boards#Reference Designators (RefDes)|reference designators]], we observed that the taxonomy of components and parts is defined in the ASME Y14.44-2008 standard. Using the same standard, we find the definition of a unit:

> *Unit: a major building block for a set or system, consisting of a combination of basic parts, subassemblies, and assemblies packaged together as a physically independent entity.*

For instance, a unit can be the mechanical housing that hosts a VPX backplane and a set of plug-in modules as per VITA 46, along with FMC mezzanines as per VITA 57. However, units need to be fixed and anchored at the location where they will operate, and the way units are attached to the target system determines whether their enclosures are custom-made or follow standard form factors. The figures below show custom-made enclosures.

![Desert Gecko 6-slot VPX unit (credit: PCI Systems)](image384.jpg)

> [!Figure]
> _Desert Gecko 6-slot VPX unit (credit: PCI Systems)_

![Avionics Processing Unit (APU) (credit: Recab)](image385.jpg)

> [!Figure]
> _Avionics Processing Unit (APU) (credit: Recab)_

![Herschel-Planck Mission On-Board Computer (credit: ESA)](image386.jpeg)

> [!Figure]
> _Herschel-Planck Mission On-Board Computer (credit: ESA)_

## Rack Units

An alternative to custom-made unit enclosures is rack mounting. In this approach, a mechanical frame hosts a set of standardized units that are anchored to the rack structure. These racks provide a modular way to stack equipment or trays vertically, optimizing space. Vertical space in a rack is discretized in rack units (U), ensuring compatibility and easy installation of units from different vendors and with different functionalities. This approach not only saves space but also simplifies cabling, maintenance, and airflow management, improving scalability.

### EIA-310

The standard that specifies the dimensions of rack units in telecommunications and industrial computing is known as the Electronic Industries Alliance (EIA) standard, particularly EIA-310. This standard defines the mounting dimensions of rack systems, including the height, width, and spacing of rack units (RU) or rack spaces. In EIA-310, 1U equals 1.75 inches (44.45 mm) in height. The standard also covers other aspects such as the width of the equipment to be mounted, which is usually 19 inches or 23 inches (48 to 58 cm), and the hole spacing in the mounting flanges. Rack units can host ATX motherboards, solid-state storage, or basically anything that fits in the volume.
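As a quick illustration of the arithmetic above, the sketch below converts a rack-unit count into physical height and checks whether a hypothetical equipment list fits a cabinet. The 1U = 1.75 in (44.45 mm) figure comes from EIA-310; the cabinet size and equipment list are made up for the example.

```python
# Minimal sketch of EIA-310 rack-unit arithmetic.
# Only 1U = 1.75 in (44.45 mm) is taken from the standard;
# the cabinet and inventory below are hypothetical.

RACK_UNIT_IN = 1.75   # height of 1U in inches (EIA-310)
RACK_UNIT_MM = 44.45  # height of 1U in millimetres

def rack_height_mm(units: int) -> float:
    """Total mounting height consumed by `units` rack units."""
    return units * RACK_UNIT_MM

def fits(cabinet_u: int, equipment_u: list[int]) -> bool:
    """True if the listed equipment heights (in U) fit within the cabinet."""
    return sum(equipment_u) <= cabinet_u

if __name__ == "__main__":
    # A hypothetical 42U cabinet holding thirty 1U servers, a 2U switch and a 4U UPS.
    inventory = [1] * 30 + [2, 4]
    print(f"42U mounting height: {rack_height_mm(42):.1f} mm")
    print(f"Inventory uses {sum(inventory)}U -> fits: {fits(42, inventory)}")
```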
![A 1U Rack Unit](image387.png)

> [!Figure]
> _A 1U Rack Unit_

![1U Rack with Micro-ATX Motherboard (credit: Advantech)](image388.png)

> [!Figure]
> _1U Rack with Micro-ATX Motherboard (credit: Advantech)_

Rack units are then mounted in cabinets, which may offer different numbers of Us. The figure below illustrates a 27U cabinet.

![A 27U Transportable Rack Cabinet (credit: GAW Technology)](image389.jpeg)

> [!Figure]
> _A 27U Transportable Rack Cabinet (credit: GAW Technology)_

The figure below depicts a 48U rack cabinet.

![A 48U rack cabinet (credit: APC)](image390.jpeg)

> [!Figure]
> _A 48U rack cabinet (credit: APC)_

As cabinets start to pile up, it becomes rather inefficient to leave them scattered around the facility, so more modular solutions are used. For instance, containment technologies allow multiple cabinets to be grouped together, making energy and thermal management more efficient. In contained data centers, rows of racks are mounted together, and aisles are created between rows (see figure below). Within a contained data center environment, end-of-aisle doors are normally used to prevent air mixing at the end of rows of cabinets. Roof systems can also be deployed to enclose the cold aisle completely. Several different roof systems are available, including fixed, passive drop-away, or active roofs that are connected directly to a data center's fire suppression system and are triggered to pivot in an alarm state. Aisle containment systems can be retrofitted or used in new-build applications irrespective of the brand of racks being used. Such systems are flexible and can be designed to fit any rack configuration, including aisles with varying rack heights, widths, depths, and alignment. Even obstacles such as building column supports can be incorporated into the solution, as well as overhead power and data services, fire suppression, and security systems.

![A 48U rack containment system (credit: Minerva Star Technology)](image391.jpg)

> [!Figure]
> _A 48U rack containment system (credit: Minerva Star Technology)_

![48U rack containment (credit: Upsite Technologies)](image392.jpg)

> [!Figure]
> _48U rack containment (credit: Upsite Technologies)_

#### ARINC-600

The EIA-310 specification is intended for racks that operate in lab environments and are therefore not ruggedized by design. Other rack-mount architectures must include considerations for harsh environments, for instance the ones used in avionics bays in civil, commercial, and military aircraft. Avionics racks typically adhere to different standards that are specific to the aerospace industry. These standards consider the unique requirements of aircraft, such as weight constraints, resistance to vibration and G-forces, and specific environmental conditions.

![A380 avionics bay (credit: u/drone_driver24 in Reddit)](image393.jpeg)

> [!Figure]
> _A380 avionics bay (credit: u/drone_driver24 in Reddit)_

![A view of the Boeing 767 avionics bay underneath the main cabin entrance galley (credit: u/L1011TriStar in Reddit)](image394.jpeg)

> [!Figure]
> _A view of the Boeing 767 avionics bay underneath the main cabin entrance galley (credit: u/L1011TriStar in Reddit)_

ARINC 600 is a specification that outlines a standardized rack and panel system used in avionics, the aerospace industry, and military applications.
It represents an evolution over the earlier ARINC 404 (also known as Air Transport Rack, or ATR) standard, offering enhanced performance, higher-density connectors, and a modular design that improves versatility and ease of use. The ARINC 600 standard specifies the physical dimensions, electrical connections, and environmental requirements for the rack and panel system.

The modular design of the ARINC 600 rack allows the system to accommodate a diverse range of electronic equipment, including computers, network devices, and other avionics components. The rack's design supports both line-replaceable units (LRUs) and sub-rack assemblies, making it highly adaptable for different aircraft types and systems.

One of the notable features of the ARINC 600 rack is its high-density connectors. These connectors offer a greater number of contact positions within a compact space, facilitating the transmission of more data and power through a single connector. This aspect is particularly beneficial in modern avionics, where the demand for higher data throughput and power distribution is continually increasing.

Environmental robustness is another critical aspect of the ARINC 600 specification. The racks and panels are designed to withstand the harsh conditions typical of aerospace environments, including extreme temperatures, vibrations, and electromagnetic interference.

Key components in the ARINC 600 specification are the tray and the enclosure (called MCU, or Modular Concept Unit), which serves as the basic mechanical housing for modules or units. The tray's primary function is to provide a secure and stable environment for the electronic equipment it holds, protecting it from vibrations, temperature fluctuations, and other environmental challenges commonly encountered in aviation.

![An ARINC tray with connector and harness (credit: Collins Aerospace)](image395.jpg)

> [!Figure]
> _An ARINC tray with connector and harness (credit: Collins Aerospace)_

Additionally, ARINC 600 trays are equipped with connectors that facilitate electrical and data connections between the mounted equipment and the aircraft's systems (figure above). The design often includes features for cooling and shielding the electronic components from electromagnetic interference, ensuring reliable operation under various flight conditions. The ARINC 600 tray system's modular nature allows for easy installation, maintenance, and replacement of avionic components, contributing to the efficiency and flexibility of aircraft system design and upgrades.

### HGX

The HGX form factor, which stands for "Hyperscale Graphics eXtension," is a standard developed by NVIDIA to enable easier integration of GPUs into hyperscale data centers and high-performance computing environments. This form factor is designed to streamline the deployment of GPU-accelerated applications, particularly in large-scale computing clusters. HGX establishes a set of specifications for GPU server boards, including the physical dimensions, power requirements, and connectivity options. This standardization enables compatibility among different hardware vendors and facilitates the interchangeability of components, making it easier for data center operators to adopt and scale GPU-accelerated solutions. HGX pursues [[Modularity|modularity]], allowing for flexible configurations based on the specific requirements of different applications and workloads.
It enables the integration of multiple GPUs, high-speed interconnects, and other peripherals onto a single server board, providing scalability and performance optimization options. HGX promotes interoperability among various components within the server ecosystem, including CPUs, memory modules, storage devices, and networking infrastructure. This interoperability is essential for building heterogeneous computing systems that leverage both CPU and GPU resources effectively. With its modular design and standardized form factor, HGX enables scalability, allowing data center operators to easily add or upgrade GPU resources to meet evolving computational demands. This scalability is useful for accommodating diverse workloads, ranging from deep learning and scientific simulations to data analytics and virtualization. The HGX form factor has gained significant traction within the industry, with major server manufacturers and cloud service providers embracing it to deliver GPU-accelerated computing solutions at scale. NVIDIA collaborates with partners to develop HGX-compatible hardware platforms, ensuring widespread availability and support for the standard.

> [!warning]
> To be #expanded

### Open Rack

The original [[Data Centers and "The Cloud"#The Open Compute Project|OCP]] rack design was designated Open Rack Version 1 (ORV1). This system was a triplet rack that challenged the traditional [[Units, Chassis and Racks#EIA-310|EIA 19-inch rack mount standard]], increasing the width to 21 inches and creating a new 1.89-inch height standard called “OU” instead of the traditional EIA 1.75-inch “RU” height standard. The new approach also introduced a new power topology. In a traditional data center environment, the servers and networking equipment have individual AC power supplies. The Open Rack standard introduced the power shelf concept. The power shelf is basically a large power supply fitted with modular rectifiers. In the ORV1 standard, the power shelf accepts AC power and distributes 12V DC power to OCP servers using busbars in the back of the rack. This offers several benefits:

- The power supplies are centrally located and accessible from the front of the rack to simplify servicing.
- The busbars are fixed in the rear of the rack. Servers slide in and dock with the busbars, reducing installation time and removing server power cables from the design.
- The power shelf design is universal, allowing multiple manufacturers to make the power shelf and rectifier modules.

As Facebook looked to include more users and manufacturers in the Open Compute Project, they quickly learned that other approaches had value and developed working groups devoted to advancing new standards. The Open Rack Version 2 (ORV2) standard introduced 48V DC power as an alternative to 12V DC. There were also changes to the rack, introducing a single rack bay instead of the triplet rack.

In 2016, Microsoft's approach to hyperscale data centers gave way to the Project Olympus subgroup of OCP. Project Olympus uses a more traditional 19-inch rack and employs a rack PDU to distribute AC power to the servers. One unique aspect of the PDU design is the universal input on the PDU, which uses a proprietary connector from Harting. The detachable AC input cord can be configured for different types of power input, allowing one rack PDU to be used at 208V three-phase Delta 60A and 415/240V three-phase Y 32A. This simplifies deployment by reducing the SKU count on rack PDUs and is especially helpful for consistency in global deployments.
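As a rough illustration of the numbers above, the sketch below estimates the usable power of the two Project Olympus PDU input configurations and compares the busbar current a rack draws at 12V versus 48V, which is the main motivation behind the ORV2 change. The three-phase relation P = √3 × V_LL × I is standard electrical arithmetic; the unity power factor and the 0.8 continuous-load derating are assumptions made for illustration, not values taken from the OCP documents, and the 15 kW rack power is just an example figure.

```python
import math

# Sketch of the rack power arithmetic discussed above.
# Assumptions (not from the OCP specs): unity power factor and a 0.8
# continuous-load derating factor, as is typical practice in North America.

def three_phase_kw(line_voltage: float, current: float, derate: float = 0.8) -> float:
    """Approximate usable power (kW) of a three-phase feed at unity power factor."""
    return math.sqrt(3) * line_voltage * current * derate / 1000.0

def busbar_current(rack_power_w: float, bus_voltage: float) -> float:
    """DC current the rack busbar must carry for a given rack power."""
    return rack_power_w / bus_voltage

if __name__ == "__main__":
    # The two input configurations mentioned for the Project Olympus PDU.
    print(f"208 V delta, 60 A : {three_phase_kw(208, 60):.1f} kW")
    print(f"415 V wye,   32 A : {three_phase_kw(415, 32):.1f} kW")

    # Why ORV2 moved from 12 V to 48 V: same rack power, far less busbar current.
    rack_power_w = 15_000  # example figure only
    for v in (12, 48):
        print(f"{rack_power_w/1000:.0f} kW rack on a {v:2d} V busbar -> {busbar_current(rack_power_w, v):.0f} A")
```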
Project Olympus was a departure from the power shelf topology but illustrated that OCP could incorporate a variety of different ideas under its open-source umbrella.

As OCP has grown, changed, and included new members and working groups, the focus on improving ORV2 has led to the Open Rack Version 3 (ORV3) standard. ORV3 seeks to address several needs expressed by the OCP community:

- Higher power densities, up to 30 kW per rack enclosure
- New busbar design to handle more power, with split zones
- Toolless blind-mate connection of the power shelf to the busbar
- Standards for cold plate liquid cooling and mounting cooling manifolds in the rack
- Flexibility to mount 21-inch OU or 19-inch RU equipment in the rack enclosure
- Options for side panels and doors
- General effort to make the rack and power more flexible for wider-scale adoption

Open Rack V3 builds upon the principles of the previous versions while introducing several key improvements aimed at enhancing flexibility, scalability, and efficiency in data center infrastructure. The design philosophy of Open Rack V3 aims to provide a standardized, open-source platform for data center hardware, allowing for easy customization and optimization based on specific requirements. A typical Open Rack data center deployment flow is shown in the figure below. The process starts with capturing user requirements, such as processing performance, storage needs, space constraints, power feed, and so on. Typically, a configurator tool is used to build the needed configuration, which can be a compute-only, storage-intensive, or hybrid rack-level configuration.

![](OpenRack2.png)

> [!Figure]
> Open Rack typical flow

Open Rack configurations use the following hardware building block types:

- Rack: provides mounting positions for Open Rack HW products
- Power shelf: feeds power from the site power feed to Open Rack HW building blocks
- Server node: 2 OU, 1/3-shelf, dual-socket Purley-based server including interconnection adapters, security modules, storage devices, cooling, power, and HW management
- Ethernet switches: provide interconnection between server nodes and aggregation switches and routers

![](OpenRack.png)

> [!Figure]
> Open Rack building blocks

![](OpenRack3.png)

> [!Figure]
> Open Rack V3 reference design (source: #ref/Skorjanec)

From the figure above:

- A: OCP “OU” mounting rails for 21-inch rack equipment (convertible to EIA rails)
- B: EIA “RU” mounting rails for 19-inch rack equipment (convertible to OCP rails)
- C: ORV3 busbar (blind-mate power connection)
- D: ORV3 power shelf (15 kW N+1; includes six 3 kW modular power supplies; see the sizing sketch below)
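Finally, a minimal sketch of the N+1 sizing behind the ORV3 power shelf in item D above: six 3 kW rectifier modules give 18 kW installed, so the shelf can still deliver its 15 kW rating with one module failed. The helper function is a generic redundancy check, not an OCP-defined calculation.

```python
# N+1 redundancy sizing sketch for a modular power shelf.
# The figures (six 3 kW modules, 15 kW shelf rating) come from the ORV3
# reference design listed above; the check itself is generic.

def n_plus_1_capacity(module_kw: float, module_count: int) -> float:
    """Usable shelf power with one module failed (N+1 redundancy)."""
    return module_kw * (module_count - 1)

if __name__ == "__main__":
    installed = 3.0 * 6
    usable = n_plus_1_capacity(module_kw=3.0, module_count=6)
    print(f"Installed capacity      : {installed:.0f} kW")
    print(f"Usable with one failure : {usable:.0f} kW")  # matches the 15 kW shelf rating
```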