
Comparing FPGA RTL to HLS C/C++ using a Networking Example
Overview: Most FPGA programmers believe that high-level tools always emit larger bitstreams as…
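For context on what an HLS C/C++ description looks like, here is a minimal sketch of a networking-style datapath kernel, the kind of function typically compared against hand-written RTL. This example is illustrative only and not taken from the white paper; in a real HLS flow (e.g. Vitis HLS) the loop would carry a pipeline directive and the ports would be streaming FIFOs, but the directive is shown as a comment so the sketch compiles with any C++ compiler.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical datapath kernel: process a burst of fixed-size 64-bit words,
// toggling the low bit of each word as a stand-in for a header-field rewrite.
// An HLS tool would synthesize this loop into a pipelined hardware datapath.
void process_words(const uint64_t *in, uint64_t *out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        // #pragma HLS PIPELINE II=1  (HLS tool directive, comment here)
        uint64_t w = in[i];
        out[i] = w ^ 0x1;  // stand-in for per-packet header processing
    }
}
```

In RTL, the same behavior would require an explicit state machine, handshaking signals, and register declarations; the HLS tool infers those from the loop structure, which is the trade-off the white paper's bitstream-size comparison examines.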
The Myrtle.ai MAU Accelerator is provided as IP to run on the latest FPGAs. It is well suited to a range of applications in financial services, with deployment scenarios ranging from co-location to offline.
Myrtle.ai has considerable experience in efficient hardware acceleration of ML models, such as RNN and LSTM networks, using FPGA accelerator cards. Its designs target the highest throughput and lowest cost for inference workloads with very tight latency constraints.
The MAU Accelerator reference design can be run on a range of BittWare products featuring Intel and Xilinx FPGAs. For deployment, we recommend the ultra-high density TeraBox 1401B, with four cards and an AMD EPYC CPU.
Request a meeting to discuss in depth how the MAU Accelerator IP can work for your organization.