General Micro Systems brings AI capabilities onto the battlefield with deep learning system
General Micro Systems has introduced a rugged, conduction-cooled, commercial off-the-shelf (COTS) deep learning/artificial intelligence (AI) mobile system that offers real-time data analysis and decision-making directly on the battlefield.
The X422 “Lightning” integrates two of Nvidia’s V100 Tesla data center accelerators into a fully sealed, conduction-cooled chassis, enabling it to withstand the rigors of the battlefield.
Designed as a dual co-processor companion to GMS Intel Xeon air-cooled or conduction-cooled servers, the X422 brings the computational power of Nvidia’s Tensor Core general-purpose graphics processing unit (GPGPU) to the battlefield’s tactical edge, or right into a ground or air vehicle, speeding data analysis and interpretation from hours to minutes.
Sensors on military vehicles collect massive amounts of battlefield data and store it locally before transporting it for analysis and interpretation by remote deep learning systems. Transporting this raw data saturates networks and satellite communication uplinks, slowing them to a crawl and denying access to other users. To work around the network, removable hard drives are often transported from the front lines, leaving the data subject to theft or loss during transport and introducing additional lag time.
“The X422 GPGPU system allows extraordinary quantities of data to be collected and processed right on the battlefield in real time, significantly shortening the decision loop for providing solutions and recommendations to warfighters,” said Ben Sharfi, chief architect and CEO, General Micro Systems. “Ultimately, the X422 enables live targets to be identified, flagged and even fired upon in record time. It represents a complete paradigm shift in electronic warfare, SIGINT and C4ISR.”
Rugged leadership with X422 industry firsts:
- Offers 225 TFLOPS and 10,240 GPGPU cores in a “shoebox”-sized fanless system
- Brings Nvidia deep learning data center-class V100 Tesla GPGPU to battlefield
- Ruggedizes Nvidia V100 Tesla for wide temperature, conduction-cooled applications
- Offers external PCIe Gen 3 data pipe at 8 GT/s between rugged systems using FlexVPX fabric bus extension
The X422, which is approximately 12×12 inches and under 3 inches high, includes dual x16 PCIe Gen 3 slots for GMS-ruggedized PCIe deep learning cards, including Nvidia’s V100 Tesla (computation only) or Nvidia’s Titan V (computation with graphics outputs).
Each card boasts 5120 CUDA processing cores, giving the X422 a total of 10,240 GPGPU cores and in excess of 225 TFLOPS for deep learning.
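The quoted totals follow directly from the per-card figures above. A minimal sanity check (assuming only the numbers stated in the article: 5120 CUDA cores per card, two cards per chassis):

```python
# Sanity check on the aggregate core count quoted for the X422.
# Per-card figure (5120 CUDA cores) and card count (2) are from the article.
CUDA_CORES_PER_CARD = 5120
CARDS_PER_CHASSIS = 2

total_cores = CUDA_CORES_PER_CARD * CARDS_PER_CHASSIS
print(total_cores)  # 10240
```

The per-card deep learning TFLOPS figure is not stated in the article, so the 225 TFLOPS aggregate implies roughly 112 TFLOPS per card, consistent with two cards sharing the workload.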
In addition to using Nvidia GPGPU co-processors, the X422 can accommodate other co-processors, different deep learning cards, and high-performance computers (HPC) based upon FPGAs from Xilinx or Altera, or ASICs up to a total of 250 W per slot (500 W total).
The X422 has no fans or moving parts, operates over a wide temperature range, and moves massive amounts of data via an external PCI Express fabric, making it suitable for ground vehicles, tactical command posts, UAV/UAS, or other remote locations right at the warfighter’s “tip of the spear.”
GMS relies on the company’s RuggedCool thermal technology to adapt the GPGPUs for harsh battlefield conditions, extending their temperature operation while increasing environmental MTBF.
“No one besides GMS has done this before because we own the technology that makes it possible. The X422 not only keeps the V100s or other 250 W GPGPU cards cool on the battlefield, but our unique x16 PCIe Gen 3 FlexVPX™ fabric streams real-time data between the X422 and host processor/server at an astounding 32 GB/s all day long,” Sharfi said.
“From sensor to deep learning co-processor to host: X422 accelerates the fastest and most complete data analysis and decision making possible.”
X422 technical specifications
General Micro Systems brings I/O to X422 via GMS’s FlexVPX bus extension fabric. X422 interfaces with servers and modules from GMS and from One Stop Systems, using industry-standard iPass+ HD connectors offering x16 lanes in and x16 lanes out of PCI Express Gen 3 (8 GT/s) fabric for a total of 256 GT/s (about 32 GB/s) system throughput.
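The quoted aggregate throughput can be checked against standard PCIe Gen 3 arithmetic. The sketch below assumes the PCIe Gen 3 specification figures of 8 GT/s per lane with 128b/130b line encoding; these encoding details are not stated in the article:

```python
# Rough PCIe Gen 3 throughput check for an x16-in plus x16-out fabric.
# Assumed spec values: 8 GT/s per lane, 128b/130b encoding (PCIe Gen 3).
GT_PER_LANE = 8.0           # giga-transfers per second per lane
ENCODING = 128.0 / 130.0    # 128b/130b encoding efficiency
LANES = 16                  # x16 link in each direction

# Payload bandwidth per direction, in GB/s (8 bits per byte)
gb_per_s_per_direction = GT_PER_LANE * ENCODING * LANES / 8  # ~15.75 GB/s

# x16 in plus x16 out yields the aggregate numbers quoted in the article
total_gb_per_s = 2 * gb_per_s_per_direction   # ~31.5 GB/s, i.e. "about 32 GB/s"
total_gt_per_s = 2 * LANES * GT_PER_LANE      # 256 GT/s aggregate
print(round(total_gb_per_s, 1), total_gt_per_s)
```

So the 256 GT/s figure is the raw transfer rate summed across both x16 directions, and “about 32 GB/s” is the corresponding payload bandwidth after encoding overhead.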
X422 deep learning co-processor systems can be daisy-chained up to a theoretical limit of 16 chassis working together.
The X422’s two PCIe deep learning cards can operate independently or work together as a high-performance computer (HPC) via the user-programmable, non-blocking, low-latency onboard PCIe switch fabric.
For PCIe cards with display outputs, such as the Titan V’s DisplayPorts, the signals are routed to separate A and B front-panel connectors.