Breakthrough Efficiency in NLP Model Deployment

As AI moves from a ‘nice to have’ to a ‘must have,’ Life Sciences companies should build a strategy to leverage AI, then put building blocks in place to scale its use, incl. the right IT infrastructure, talent and skill sets, ecosystems and alliances to access or build AI capabilities. – Deloitte, “Scaling up AI across the life sciences value chain: Enhancing R&D, creating efficiencies, and increasing impact,” November 2020

As Natural Language Processing (NLP) models grow ever larger, GPU-based systems struggle to keep pace: the cost and time of training and serving these models climb steeply with model size. This leaves organizations across a range of industries in need of higher-quality language processing, but increasingly constrained by today’s solutions.

Throughout their lifecycles, modern industrial NLP models follow a cadence. They begin with one-time, task-agnostic pre-training, then undergo task-specific fine-tuning on rapidly changing user data. These periodically updated models are then deployed to serve massive volumes of online inference requests from applications.
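The lifecycle above can be sketched in a few lines of Python. This is an illustrative skeleton only, not a real framework API: the class, its methods, and the data are all hypothetical, chosen to show the cadence of one-time pre-training, repeated fine-tuning, and online inference.

```python
# Illustrative sketch of the NLP model lifecycle described above.
# All names (NLPModel, pretrain, fine_tune, infer) are hypothetical.

class NLPModel:
    def __init__(self):
        self.pretrained = False
        self.task = None

    def pretrain(self, corpus):
        # One-time, task-agnostic pre-training on a large unlabeled corpus.
        self.pretrained = True
        return self

    def fine_tune(self, task, user_data):
        # Task-specific training, repeated as user data changes.
        assert self.pretrained, "fine-tuning assumes a pre-trained base model"
        self.task = task
        return self

    def infer(self, request):
        # Online inference serving a single application request.
        return f"{self.task}: prediction for {request!r}"


model = NLPModel().pretrain(corpus=["large unlabeled text ..."])
for refresh in range(3):  # periodic re-training cadence on fresh user data
    model.fine_tune("sentiment", user_data=[f"user batch {refresh}"])
print(model.infer("great product"))
```

The point of the sketch is the shape of the loop: pre-training happens once, fine-tuning recurs as data changes, and inference dominates the deployed model's lifetime.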

SambaNova Systems
