What is Computational Storage?
An architecture in which data is processed in close physical proximity to the storage device. The primary benefit is reducing the amount of data that must move between the storage plane and the compute plane.
Traditional vs. Computational Storage
Traditional Architecture
In a traditional architecture, the CPU performs data-processing tasks such as compression, so data must move back and forth between the storage and compute planes.
Computational Storage Architecture
A computational storage architecture shifts these data-processing tasks to a hardware accelerator (an FPGA), offloading the CPU. Data stays close to the compute resource, avoiding movement across the slower CPU compute plane.
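To make the benefit concrete, here is a rough back-of-envelope sketch in Python of host-bus traffic for a compression job under each architecture. The 100 GiB data size and 3:1 compression ratio are illustrative assumptions, not measured figures, and the model is a simplification rather than a benchmark.

```python
# Back-of-envelope model (not a benchmark) of host-bus traffic when
# compressing data that already resides on storage. Sizes and the
# compression ratio are illustrative assumptions.

def traditional_bus_traffic(raw_bytes: int, ratio: float) -> int:
    """Raw data moves storage -> CPU, compressed data moves CPU -> storage."""
    return raw_bytes + int(raw_bytes / ratio)

def computational_bus_traffic(raw_bytes: int, ratio: float) -> int:
    """Compression runs next to the storage; here we conservatively assume
    only the compressed result crosses the host bus (with peer-to-peer
    transfers, described in the CSP section, even that can be avoided)."""
    return int(raw_bytes / ratio)

if __name__ == "__main__":
    raw = 100 * 2**30   # 100 GiB of data to compress (assumed)
    ratio = 3.0         # assumed 3:1 compression ratio
    print(f"traditional:   {traditional_bus_traffic(raw, ratio) / 2**30:.1f} GiB over the host bus")
    print(f"computational: {computational_bus_traffic(raw, ratio) / 2**30:.1f} GiB over the host bus")
```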
SNIA Terminology
CSS | CSP | CSD | CSA
Service, processor or drive? We’ve standardized on the SNIA definitions for these, which fall into the following categories:
CSS: Computational Storage Service
What is a CSS?
A data service or information service that performs computation on data where the service and the data are associated with a storage device.
It’s important to remember that a CSS is not a device or module itself; rather, it is the acceleration service performed by a CSP or CSD. For example, the compression service that our 250-U2 can provide (through Eideticom’s NoLoad IP) is a CSS.
CSP: Computational Storage Processor
What is a CSP?
As the diagram shows, a CSP is a separate device from the persistent FLASH data storage. Acceleration services (CSS), such as compression, are offloaded from the CPU to the CSP’s FPGA.
As a further advantage, the 250-U2 (a CSP module) transfers data to and from FLASH using peer-to-peer transfers, offloading the CPU from not only acceleration but also data movement. This is enabled by Eideticom’s NoLoad IP.
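The following is a hypothetical control-flow sketch of that peer-to-peer pattern. The class and method names are illustrative stand-ins, not Eideticom’s NoLoad API or any real NVMe driver interface; zlib stands in for the hardware compression service and in-memory buffers stand in for device memory.

```python
# Hypothetical sketch of a peer-to-peer compression offload. Names are
# illustrative only; this simulates the data path in host memory.
import zlib

class SimulatedSsd:
    """Stands in for an NVMe SSD. In hardware, read_to() would be an NVMe
    read whose DMA destination is a PCIe address on the accelerator
    (peer-to-peer), so the raw data never lands in host DRAM."""
    def __init__(self, data: bytes):
        self._data = data

    def read_to(self, offset: int, length: int, dest: bytearray) -> None:
        dest[:length] = self._data[offset:offset + length]

class SimulatedCsp:
    """Stands in for a computational storage processor (e.g., a 250-U2);
    the method names are not a real driver API."""
    def alloc_buffer(self, size: int) -> bytearray:
        # Stands in for a buffer in the accelerator's own memory.
        return bytearray(size)

    def compress(self, buf: bytearray) -> bytes:
        # zlib stands in for the hardware compression service (CSS).
        return zlib.compress(bytes(buf))

def offload_compress(ssd: SimulatedSsd, csp: SimulatedCsp,
                     offset: int, length: int) -> bytes:
    device_buf = csp.alloc_buffer(length)    # buffer lives on the CSP
    ssd.read_to(offset, length, device_buf)  # SSD -> CSP, bypassing host DRAM
    return csp.compress(device_buf)          # host receives only the result

if __name__ == "__main__":
    ssd = SimulatedSsd(b"example record " * 1024)  # ~15 KiB of sample data
    csp = SimulatedCsp()
    result = offload_compress(ssd, csp, offset=0, length=8192)
    print(f"compressed 8192 bytes down to {len(result)} bytes")
```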
CSD: Computational Storage Drive
What is a CSD?
CSDs place computation and storage as close together as possible. The tradeoff versus a CSP is that integrated FLASH storage replaces the use of traditional SSD drives. An example of a CSD is the 250-HMS that BittWare created for IBM.
CSA: Computational Storage Array
What is a CSA?
A CSA is a storage array that incorporates computational storage resources (CSPs and/or CSDs) alongside the array’s control software, so acceleration services are delivered at the array level rather than by an individual drive or processor.
Applications
We target applications where the demand to process stored data outpaces what traditional CPU-based architectures can deliver.
Controller
Examples: Compression, Erasure Coding and De-duplication (see the de-duplication sketch after this list)
Artificial Intelligence Inference
Big Data Analysis
Machine Learning
Content Delivery
Database Acceleration
Examples: RocksDB, Cassandra, Hadoop and MySQL
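To make these storage-centric services more concrete, here is a minimal, CPU-only sketch of fixed-block de-duplication, the kind of service a CSP or CSD could offload. The 4 KiB chunk size and SHA-256 hashing are illustrative assumptions and are not tied to any BittWare or Eideticom implementation.

```python
# Minimal fixed-block de-duplication sketch: store each unique chunk once
# and keep a "recipe" of hashes needed to rebuild the original stream.
import hashlib

CHUNK_SIZE = 4096  # assumed fixed block size

def dedupe(data: bytes):
    """Split data into fixed-size chunks and store only unique chunks."""
    chunk_store: dict[str, bytes] = {}
    recipe: list[str] = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # keep one copy per unique chunk
        recipe.append(digest)
    return chunk_store, recipe

def rehydrate(chunk_store: dict[str, bytes], recipe: list[str]) -> bytes:
    """Rebuild the original byte stream from the stored chunks."""
    return b"".join(chunk_store[d] for d in recipe)

if __name__ == "__main__":
    data = (b"A" * CHUNK_SIZE) * 8 + (b"B" * CHUNK_SIZE) * 2  # repetitive data
    store, recipe = dedupe(data)
    assert rehydrate(store, recipe) == data
    print(f"{len(data)} bytes stored as {len(store)} unique chunks "
          f"({len(store) * CHUNK_SIZE} bytes)")
```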
Flexible Form Factors
We have customization capabilities for building a range of current and emerging form factors such as EDSFF. Talk to us about how we can turn your application needs into an enterprise-class solution!

PCIe Add-in Card (AIC)
U.2
M.2 Accelerator Module
EDSFF
Eideticom Investment
BittWare (a Molex company) is a strategic investor in Eideticom, a recognized thought leader in NVMe-based Computational Storage solutions.
Eideticom’s mission is to develop world-class Computational Storage solutions for cloud and enterprise data centers. Its NoLoad® Computational Storage Processor (CSP) accelerates data center infrastructure, enabling greater scalability and dramatically lowering costs.
Los Alamos National Laboratory Collaboration
Eideticom and Los Alamos National Laboratory are collaborating on a storage acceleration solution using the BittWare 250-U2 computational storage processor. Key news from the press release:
- World’s first NVMe-based computational storage compressed parallel filesystem built using Eideticom’s NoLoad® CSP and deployed in LANL’s Lustre/ZFS-based HPC parallel filesystems.
- NoLoad CSP’s high-performance compression engines provide scalable offload of storage-centric services and enable capacity increases with no impact on performance.
- NoLoad’s NVMe-compliant interface simplifies deployment of computational offload by making it straightforward to consume in servers of all types and across all major operating systems.
Browse Our Storage Acceleration Products
Purchase Your Cards Pre-integrated with TeraBox™
Got a Question?
Ask our technical staff where Computational Storage fits in your business.
"*" indicates required fields