Richard Chamberlain is System Architect for BittWare. Richard leads OpenCL development projects and has extensive experience with high-level programming of FPGAs. He regularly advises customers on the best hardware and development approach for new designs, or for adapting designs built around other system types such as CPUs or GPUs.
Until only a decade ago, Artificial Intelligence resided almost exclusively within the realm of academia, research institutes, and science fiction. The relatively recent realization that Machine Learning (ML) techniques could be applied practically and economically, at scale, to solve real-world application problems has resulted in a vibrant ecosystem of market players.
However, any news of breakthroughs in machine learning must still be weighed against the reality that this is a very computationally heavy approach to solving problems, both in the training phase on a dataset and in what is called the inference phase: the "runtime," where unknown input is translated to inferred output. While the training phase for a machine learning application typically happens once, in the datacenter, over an unconstrained time period often extending to hours or days, live inference must often happen in a fraction of a second on a constrained hardware platform at the edge of a system.
For machine learning to grow in adoption, inference solutions must be developed that can rapidly implement the latest machine learning libraries in hardware that can be tailored to the application's needs.