
Building NVMe Over Fabrics
White Paper: Building NVMe Over Fabrics with BittWare FPGA Solutions

Overview: Since the introduction of the Non-Volatile Memory Express (NVMe) protocol, data center customers
Well suited to a range of applications in financial services, with deployment scenarios ranging from co-location to offline, the Myrtle.ai MAU Accelerator is provided as IP to run on the latest FPGAs.
Myrtle.ai has considerable experience in efficient hardware acceleration of machine learning models, such as RNN and LSTM networks, using FPGA accelerator cards. These designs target high throughput and low cost for inference workloads with very tight latency constraints.
The MAU Accelerator reference design can be run on a range of BittWare products featuring Intel and Xilinx FPGAs. For deployment, we recommend the ultra-high density TeraBox 1401B, with four cards and an AMD EPYC CPU.
Request a meeting to discuss in depth how the MAU Accelerator IP can work for your organization.
"*" indicates required fields