Any export orders must be placed through a WPG Americas salesperson; please call 1.800.553.8406.

MX3-2280-M-4

  • Price: $149.00

MX3-2280-M-4 AI Accelerator Module

The MX3-2280-M-4 is a state-of-the-art M.2 AI Accelerator Module crafted by MemryX, designed to redefine edge computing with unparalleled artificial intelligence (AI) inference performance. Encased in the compact M.2 2280 M-Key form factor (22mm x 80mm), this module leverages a PCIe Gen 3 interface to deliver seamless integration and exceptional computational efficiency. Powered by four MemryX MX3 Edge AI Accelerator chips, it achieves up to 24 TFLOPS of AI compute power while maintaining an ultra-low power envelope of 6–8 watts.

Engineered for real-time, low-latency processing of deep neural network (DNN) models, the MX3-2280-M-4 excels in computer vision (CV) applications, making it ideal for robotics, smart surveillance, industrial automation, and IoT deployments. Its innovative dataflow architecture offloads complex AI workloads from host CPUs, ensuring high-throughput inferencing with minimal dependency on external memory or processing. Supporting 4/8/16-bit weights and BFloat16 formats, the module handles up to 80 million 4-bit parameters. Compatibility with industry-standard frameworks such as ONNX, TensorFlow, PyTorch, Keras, and TensorFlow Lite, combined with a robust cross-platform SDK, ensures effortless deployment across x86, ARM, and RISC-V platforms running Ubuntu, Windows, or Android.
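Since ONNX and PyTorch are both listed as supported entry points, the sketch below (model, tensor shape, and file name are illustrative assumptions, not taken from this page) shows a typical first deployment step: exporting a trained PyTorch network to an ONNX file that a compiler toolchain such as the module's SDK can then ingest. The MemryX-specific compile and run calls themselves are not shown, since their exact names are not documented here.

```python
# Illustrative only: export a PyTorch vision model to ONNX, one of the
# interchange formats listed above, as input for the module's compiler toolchain.
# The model, input shape, and file name are assumptions, not vendor requirements.
import torch
import torchvision.models as models

model = models.mobilenet_v2().eval()       # stand-in CV network (untrained here)
dummy_input = torch.randn(1, 3, 224, 224)  # NCHW image-sized example input

torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",      # portable graph an edge-AI compiler can ingest
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,
)
```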

The MX3-2280-M-4 features integrated on-module memory and a scalable architecture supporting up to 16 interconnected chips, enabling up to 96 TOPS of performance. Its automated model compilation tools eliminate the need for retraining or quantization, offering a plug-and-play solution that sets a new standard for efficiency, adaptability, and performance in edge AI systems.
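To make the plug-and-play claim concrete, the outline below sketches the intended workflow in Python: hand the compiler a trained model file, get back a compiled artifact, and stream frames through it. The names compile_model and run_inference are placeholders for illustration only; they are not the vendor's documented API.

```python
# Conceptual outline of the compile-then-run flow described above.
# compile_model() and run_inference() are hypothetical placeholders, not real
# SDK calls; they only mark where the vendor's tools would slot in.
import numpy as np


def compile_model(onnx_path: str) -> str:
    # Per the description above, the automated compiler maps the graph onto the
    # MX3 chips with no retraining or manual quantization step.
    compiled_path = onnx_path.replace(".onnx", ".bin")
    # ... vendor compiler invocation would go here ...
    return compiled_path


def run_inference(compiled_path: str, frame: np.ndarray) -> np.ndarray:
    # The runtime streams inputs through the module's dataflow pipeline.
    # ... vendor runtime call would go here ...
    return np.zeros((1, 1000), dtype=np.float32)  # dummy output for the sketch


artifact = compile_model("mobilenet_v2.onnx")
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
print(run_inference(artifact, frame).shape)
```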

Specification Table

Form Factor: M.2 2280 M-Key (22 mm x 80 mm)
Interface: PCIe Gen 3
AI Accelerator Chips: 4x MemryX MX3 Edge AI Accelerators
Compute Performance: Up to 24 TFLOPS (teraflops)
Power Consumption: 6–8 W
Memory: Integrated on-module memory
Supported Precision: 4-bit, 8-bit, 16-bit weights; BFloat16
Parameter Capacity: Up to 80 million 4-bit parameters
Supported Frameworks: ONNX, TensorFlow, PyTorch, Keras, TensorFlow Lite
Supported Platforms: x86, ARM, RISC-V
Operating Systems: Ubuntu, Windows, Android
Architecture: Dataflow architecture for DNN inferencing
Scalability: Supports up to 16 chips (up to 96 TOPS)
Applications: Computer vision, robotics, smart surveillance, industrial automation, IoT
SDK: Cross-platform SDK with automated model compilation (no retraining or quantization)
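For quick orientation, the headline figures above work out to roughly 3 to 4 TFLOPS per watt and about 6 TFLOPS attributable to each of the four MX3 chips; the short snippet below simply restates that arithmetic.

```python
# Derived figures from the spec table (rounded, for orientation only).
peak_tflops = 24.0             # module-level compute performance
power_envelope_w = (6.0, 8.0)  # stated power consumption range
per_chip_tflops = peak_tflops / 4

for watts in power_envelope_w:
    print(f"{peak_tflops / watts:.1f} TFLOPS/W at {watts:.0f} W")
print(f"{per_chip_tflops:.1f} TFLOPS per chip")
# Prints: 4.0 TFLOPS/W at 6 W, 3.0 TFLOPS/W at 8 W, 6.0 TFLOPS per chip
```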
  • Weight: 1 kg
  • SKU: MX3-2280-M-4
  • Manufacturer: MemryX
*Some items purchased may be subject to a 25% China Tariff. Purchaser will be contacted before shipment to coordinate payment of any Tariff on items ordered.