The data center is coming under immense pressure. Connected cars, smart factories and smart cities are among the technologies driving a boom in connected devices: Gartner predicts there will be 20.4 billion connected things worldwide by 2020. The volumes of data flowing from these devices are growing accordingly.
At the same time, cloud service providers (CSPs) must work within tight constraints in the physical environment of their data centers, with limits on the space, power and cooling available. CSPs need to process the growing data volumes in a way that is power efficient and offers sufficiently low latency to support real-time applications, such as artificial intelligence.
Some Tier 1 CSPs are using Field Programmable Gate Arrays (FPGAs) to accelerate customer workloads at a lower level of power consumption. FPGAs are semiconductor devices that can be reconfigured after shipping to provide hardware circuitry tailored to a particular process, such as encryption or data analytics. Because the processing is carried out in hardware, it's significantly faster than it would be if carried out in software on a general-purpose processor. FPGAs can be used for a wider range of workloads than GPUs, including networking, storage and encryption, and are more power efficient than CPUs. By increasing the throughput of servers, FPGAs help CSPs to meet demanding data processing requirements within the constraints of their data centers.
In the past, using FPGAs required specialist programming skills, including a good understanding of how the hardware works, to configure each accelerator. Thanks to the latest developments in software and the FPGA ecosystem, it's much easier to use FPGAs today. In this guide, we'll introduce some of the technologies and strategies that can help next-wave CSPs to deliver FPGAs-as-a-Service, providing acceleration to their customers on demand.