Zsolt István, PhD Student, ETH Zurich, Switzerland
In the era of Big Data, datacenter and cloud architectures decouple compute and storage resources from each other for better scalability. While this design choice enables elastic scale-out, it also causes unnecessary data movement. One solution is to push parts of the computation down to the storage layer, where data can be filtered more efficiently, close to where it resides. Systems that do this are already in use; they rely either on regular server machines as storage nodes or on network-attached storage devices. The former offer complex computation and rich functionality, since plenty of conventional cores are available to run the offloaded work, but they are inefficient because computing capacity is over-provisioned and bandwidth is mismatched between storage, CPU, and network. Network-attached storage devices, on the other hand, are better balanced in terms of bandwidth, but at the price of very limited options for offloading data processing.
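To make the cost of data movement concrete, the sketch below contrasts client-side filtering with a pushdown scan in a toy in-memory setting. The `StorageNode` class and its `scan`/`scan_where` calls are invented for illustration and do not correspond to any particular system's API; the point is only where the predicate runs and how many records cross the network.

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Illustrative types; not tied to any real system's API.
struct Record { uint64_t key; std::string payload; };
using Predicate = std::function<bool(const Record&)>;

// A toy storage node: scan() ships everything to the client,
// scan_where() evaluates the predicate next to the data first.
class StorageNode {
    std::vector<Record> data_;
public:
    explicit StorageNode(std::vector<Record> d) : data_(std::move(d)) {}

    // No pushdown: every record crosses the "network".
    std::vector<Record> scan() const { return data_; }

    // Pushdown: only matching records cross the "network".
    std::vector<Record> scan_where(const Predicate& pred) const {
        std::vector<Record> out;
        for (const auto& r : data_)
            if (pred(r)) out.push_back(r);
        return out;
    }
};

int main() {
    StorageNode node({{1, "a"}, {2, "bb"}, {3, "ccc"}});
    auto pred = [](const Record& r) { return r.payload.size() > 1; };

    // Client-side filtering: all 3 records are moved to find 2 matches.
    std::vector<Record> hits;
    for (const auto& r : node.scan())
        if (pred(r)) hits.push_back(r);

    // Storage-side filtering: only the 2 matches are moved.
    auto pushed = node.scan_where(pred);
    return (hits.size() == pushed.size()) ? 0 : 1;
}
```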
With Caribou, we explore an alternative design that offers rich offloading functionality in a much more efficient package (in size and energy consumption) than regular servers, without sacrificing features such as a general-purpose interface, reliable networking, or replication for fault tolerance. Our FPGA-based prototype is designed so that the internal data management logic can saturate the network and the processing logic can saturate the storage bandwidth, without either of the two being over-provisioned. Each Caribou node is a stand-alone FPGA that implements all functionality needed for a distributed data store, including replication, a feature typically not supported by FPGA-based solutions.
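The following toy model illustrates only the client-visible semantics of such a replicated store: writes are propagated to peers before being acknowledged, so any node can serve a read even after a failure. It is a software sketch with invented names, not a description of Caribou's actual hardware replication protocol.

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Toy in-memory model of a replicated key-value node. Only the
// observable behavior is modeled; the real system replicates in
// hardware with its own wire protocol.
class Node {
    std::map<uint64_t, std::string> store_;
    std::vector<Node*> replicas_;  // peers this node replicates to
public:
    void add_replica(Node* peer) { replicas_.push_back(peer); }

    // A write is applied locally and propagated to all replicas
    // before it is acknowledged to the client.
    void set(uint64_t key, const std::string& value) {
        apply(key, value);
        for (Node* r : replicas_) r->apply(key, value);
    }

    std::optional<std::string> get(uint64_t key) const {
        auto it = store_.find(key);
        if (it == store_.end()) return std::nullopt;
        return it->second;
    }

private:
    void apply(uint64_t key, const std::string& value) {
        store_[key] = value;
    }
};

int main() {
    Node leader, follower1, follower2;
    leader.add_replica(&follower1);
    leader.add_replica(&follower2);

    leader.set(42, "hello");
    // Even if the leader fails now, a replica still holds the data.
    return follower2.get(42) == std::optional<std::string>("hello") ? 0 : 1;
}
```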
Caribou has been released as open source. Its modular design and extensible processing pipeline make it a convenient platform for exploring domain-specific processing inside storage nodes.
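As a rough software analogy of such an extensible pipeline, the sketch below composes pluggable stages through which values stream on their way from storage toward the network. The `Stage` interface and the example stages are hypothetical, since Caribou's real pipeline consists of hardware modules; the sketch only conveys the modular-composition idea.

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stage interface: each stage may transform a value or
// drop it by returning false.
struct Stage {
    virtual ~Stage() = default;
    virtual bool process(std::string& value) = 0;
};

// Example domain-specific stage: keep only values containing a token.
class ContainsFilter : public Stage {
    std::string token_;
public:
    explicit ContainsFilter(std::string t) : token_(std::move(t)) {}
    bool process(std::string& value) override {
        return value.find(token_) != std::string::npos;
    }
};

// Example stage: truncate values so only the needed prefix is sent.
class Truncate : public Stage {
    std::size_t limit_;
public:
    explicit Truncate(std::size_t limit) : limit_(limit) {}
    bool process(std::string& value) override {
        if (value.size() > limit_) value.resize(limit_);
        return true;
    }
};

// The pipeline streams each value through every stage in order,
// mirroring how data flows from storage to the network interface.
class Pipeline {
    std::vector<std::unique_ptr<Stage>> stages_;
public:
    void add(std::unique_ptr<Stage> s) { stages_.push_back(std::move(s)); }
    bool run(std::string& value) {
        for (auto& s : stages_)
            if (!s->process(value)) return false;
        return true;
    }
};

int main() {
    Pipeline p;
    p.add(std::make_unique<ContainsFilter>("caribou"));
    p.add(std::make_unique<Truncate>(16));
    std::string v = "caribou is an open source smart storage node";
    return (p.run(v) && v.size() <= 16) ? 0 : 1;
}
```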