RADX® Technologies, Inc., at the Association of Old Crows (AOC) 2022 Annual Convention, announced the Trifecta-GPU™ Family of COTS PXIe/CPCIe GPU Modules. Trifecta-GPUs are the first COTS products that bring the extreme compute acceleration and ease-of-programming of NVIDIA® RTX® A2000 Embedded GPUs to PXIe/CPCIe platforms for modular test & measurement and electronic warfare applications.
Designed to complement RADX Catalyst-GPU products announced earlier this year, Trifecta-GPUs deliver even greater compute performance by employing NVIDIA RTX Embedded GPUs. The Trifecta-GPU model introduced at AOC 2022 is based on the RTX A2000, which features 8 GB of GDDR6 DRAM, PCI Express 4.0 and up to 8.3 FP32 TFLOPS of peak compute performance. As with Catalyst-GPUs, Trifecta-GPUs feature comprehensive support for MATLAB™, Python, and C/C++ programming, as well as industry-best support for virtually all popular computing frameworks, making Trifecta-GPUs easy to program in both Windows and Linux operating environments. With their extreme levels of performance, Trifecta-GPUs are ideal for the most demanding signal processing, machine learning (ML) and deep learning (DL) inference applications, including AI-based signal classification and geolocation, semiconductor and PCB testing, failure prediction, failure analysis and other important missions.
Trifecta PXIe/CPCIe GPUs – Flexible and Scalable
Many PXIe and CPCIe chassis are limited to 38 W per slot of input power and thermal dissipation. To address this, Trifecta-GPUs are available in both single and dual-slot configurations: dual-slot modules for conventional and legacy 38 W/slot chassis, and single-slot modules for NI chassis that support 58 W/slot and 82 W/slot.
With peak performance of 8.3 FP32 TFLOPS, the new Trifecta A2000 GPU delivers compute acceleration that is almost 5x that of the Catalyst T600 GPU and over 20x that of a Xilinx® Kintex® UltraScale™ KU060 FPGA. Until now, this level of compute acceleration has not been available in PXIe/CPCIe systems. With Catalyst and Trifecta PXIe-GPUs, users can now perform fast, accurate signal analysis and run ML and DL workloads on acquired data, directly in the PXIe/CPCIe systems where the data is acquired.
“With over 8.3 FP32 TFLOPS, the Trifecta A2000 GPU brings remarkable compute acceleration and compelling price performance to PXIe systems,” said Ross Q. Smith, RADX cofounder and CEO. “Combined with the flexibility of single and dual-slot configs, long life cycle support and ease-of-programming, Catalyst and Trifecta-GPUs enable PXIe users and integrators to develop their GPU accelerated software once, and then select the Catalyst or Trifecta-GPU that’s appropriate for their application and budget, for both legacy and new systems, without changing their software.”
Easy-to-Program via MATLAB, Python, C/C++ and LabVIEW
One of the most important aspects of Catalyst and Trifecta-GPUs is their ease-of-programming, which stems from their underlying NVIDIA GPUs: they can be programmed via MATLAB™, Python and C/C++, with compute acceleration delivered through NVIDIA CUDA® and OpenCL®. This ease-of-programming has made NVIDIA GPUs the most popular compute accelerators in the world today, with millions of engineers, developers and computer scientists using them to accelerate their applications. Catalyst and Trifecta-GPUs support both Windows and Linux operating environments. In addition, Catalyst and Trifecta-GPUs support popular AI and other frameworks, including LabVIEW™, MATLAB™, TensorFlow, PyTorch, RAPIDS AI and RAPIDS cuSignal, to name a few.
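To illustrate the kind of NumPy-style workflow this programming model enables (this is an illustrative sketch, not RADX sample code; the sample rate, tone frequency and FFT length are hypothetical), the snippet below detects the dominant tone in a noisy acquisition on the CPU. Because CuPy mirrors the NumPy API and dispatches to CUDA kernels, the same code can be pushed onto a Catalyst or Trifecta-GPU by swapping the import.

```python
import numpy as np
# To accelerate on a Catalyst/Trifecta-GPU, replace "numpy" with "cupy":
# CuPy mirrors the NumPy API and executes these calls as CUDA kernels.

fs = 100_000            # sample rate in Hz (hypothetical digitizer setting)
n = 4096                # FFT length
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 12_000 * t)                       # 12 kHz test tone
signal += 0.1 * np.random.default_rng(0).standard_normal(n)   # additive noise

spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
freqs = np.fft.rfftfreq(n, d=1 / fs)            # bin center frequencies
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

print(f"detected tone near {peak_hz:.0f} Hz")
```

The same pattern extends naturally to the signal classification and geolocation workloads named above: acquire in the chassis, transform on the GPU, and feed the resulting features to an ML or DL model.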
“Calling Python, C/C++ or MATLAB libraries from LabVIEW is straightforward and efficient because of the facilities NI has integrated into LabVIEW, so adding Catalyst or Trifecta-GPU acceleration to LabVIEW-based PXIe applications is relatively quick and easy,” said Matt Dennie, director of engineering and certified LabVIEW architect at Acquired Data Solutions. “This ease-of-integration means we can incorporate scalable, portable, and affordable GPU acceleration into LabVIEW apps in significantly less time, and with far greater portability, than we could with FPGAs.”
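As a sketch of the integration path Dennie describes (an assumed example, not Acquired Data Solutions code): LabVIEW's Python Node calls a named function in a Python module, exchanging 1-D numeric arrays as lists. A module like the hypothetical one below could therefore sit between a LabVIEW acquisition VI and GPU-accelerated spectral processing.

```python
# spectrum_utils.py -- hypothetical module a LabVIEW Python Node could call.
# LabVIEW's Python Node maps 1-D numeric arrays to Python lists, so the
# function accepts and returns plain lists; internally, NumPy (or CuPy,
# for execution on a Catalyst/Trifecta-GPU) does the heavy lifting.
import numpy as np

def power_spectrum_db(samples, fs):
    """Return (freqs_hz, power_db) for a real-valued list of samples."""
    x = np.asarray(samples, dtype=np.float64)
    windowed = x * np.hanning(x.size)                 # reduce spectral leakage
    spec = np.fft.rfft(windowed)
    power_db = 20 * np.log10(np.abs(spec) + 1e-12)    # avoid log of zero
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs.tolist(), power_db.tolist()
```

In LabVIEW, a Python Node pointed at this module would wire the acquired waveform into `samples`, the digitizer rate into `fs`, and receive two arrays back for display or further analysis.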