REFERENCE DESIGN

MAU Accelerator for AI Financial Trading Models


Ultra-low Latency, High Throughput Machine Learning Inference

The Myrtle.ai MAU Accelerator is provided as IP to run on the latest FPGAs. It is well suited to a range of applications in financial services, with deployment scenarios ranging from co-location to offline.

Market Data Prediction

Quantitative Trading

Algorithmic Trading

What is the MAU Accelerator IP?

Designed to be integrated into your existing software stack, the IP supports various bit depths for Floating Point, Block Floating Point, Brain Floating Point and Integer formats. Existing models developed in popular frameworks can be imported via the open ONNX (Open Neural Network Exchange) format.
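As an illustration, a PyTorch model can be exported to ONNX with the standard torch.onnx.export call before being handed to the accelerator toolchain. This is only a minimal sketch: the LSTM below, and all layer sizes, tensor shapes and file names, are illustrative stand-ins, not a Myrtle.ai reference model.

    import torch

    # Stand-in for a trading model: a small LSTM with a linear prediction head.
    # All sizes here are illustrative assumptions.
    class TinyLSTM(torch.nn.Module):
        def __init__(self, n_features=32, hidden=128, n_outputs=1):
            super().__init__()
            self.lstm = torch.nn.LSTM(n_features, hidden, batch_first=True)
            self.head = torch.nn.Linear(hidden, n_outputs)

        def forward(self, x):
            out, _ = self.lstm(x)            # out: (batch, time, hidden)
            return self.head(out[:, -1, :])  # predict from the final time step

    model = TinyLSTM().eval()
    example = torch.randn(1, 64, 32)  # (batch, time steps, features)

    # Export to the open ONNX format for import by the inference toolchain.
    torch.onnx.export(model, example, "model.onnx",
                      input_names=["market_data"], output_names=["prediction"])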

Benefits

System Examples


System 1: Low-latency Line-rate Processing Direct from Ethernet
System 2: Data Pre-processing in Host
System 3: Point-to-Point DMA Hybrid Processing

Performance Examples

About Myrtle.ai

Myrtle.ai has considerable experience in the efficient hardware acceleration of ML models, such as RNN and LSTM networks, using FPGA accelerator cards. Its designs target the highest throughput and lowest cost for inference workloads with very tight latency constraints.

Deliverables

Open-source reference model and export scripts in PyTorch

Example application code for inference

C and Python bindings for the MAU Accelerator inference API (see the sketch after this list)

FPGA bitstream and source code (conditions apply)
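The Python binding itself is not reproduced here, so the sketch below only illustrates the general shape such an inference API takes. The module name mau and every call in it (load_model, infer, close) are hypothetical assumptions for illustration, not the actual MAU Accelerator interface.

    import numpy as np
    import mau  # hypothetical module name; the shipped binding may differ

    # Load a compiled model onto the FPGA, then run one low-latency inference.
    # load_model, infer and close are illustrative names, not the real API.
    accel = mau.load_model("model.onnx", device=0)

    window = np.random.randn(1, 64, 32).astype(np.float32)  # one market-data window
    prediction = accel.infer(window)
    print(prediction)

    accel.close()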

Designed for BittWare Hardware

The MAU Accelerator reference design can be run on a range of BittWare products featuring Intel and Xilinx FPGAs. For deployment, we recommend the ultra-high-density TeraBox 1401B server, with four FPGA cards and an AMD EPYC CPU.

Pictured: BittWare 520N-MX PCIe card, XUP-VV8 PCIe card, and TeraBox 1401B server.

Get more details on MAU performance!

Request a meeting for an in-depth look at how the MAU Accelerator IP can work for your organization!
