Laer AI How to Deploy LLMs Part 1: Where Do Your Models Live?


Extract from Laer AI’s article “Laer AI How to Deploy LLMs Part 1: Where Do Your Models Live?”

In the early days of first-generation AI models, such as logistic regression or support vector machines, the industry rarely encountered the question "Where do my models live?"

The assumption had always been that the models live where the data lives: if you had an on-prem Relativity Assisted Review license, for example, the models lived "locally" in your data center or in your own cloud instance.

Things are different now with Large Language Models: clients face a new kind of technology with tremendous capabilities but also a new set of challenges. This blog post covers the questions you should ask your provider and, more importantly, what their answers mean for you: the quality and performance you will get from their solution and, above all, the security and privacy implications.

Why ask now?

"Where does your model live?" has suddenly become a prevalent question because these models, specifically Large Language Models, have outgrown the computational footprint available to most Fortune 100 companies, let alone to solution providers themselves.
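To make that footprint concrete, here is a back-of-the-envelope sketch (not from the article) of the GPU memory needed just to hold a model's weights for inference. The parameter counts and precision are illustrative assumptions, not figures for any specific provider's model.

```python
# Illustrative estimate of the GPU memory needed to hold model weights
# for inference. Parameter counts below are assumptions for illustration.

GIB = 1024 ** 3  # bytes per gibibyte


def weight_memory_gib(num_params: float, bytes_per_param: int) -> float:
    """Memory (GiB) to store model weights at a given numeric precision."""
    return num_params * bytes_per_param / GIB


# Hypothetical model sizes spanning classic ML to modern LLMs.
models = {
    "logistic regression (1M params)": 1e6,
    "mid-size LLM (7B params)": 7e9,
    "large LLM (70B params)": 70e9,
}

for name, params in models.items():
    fp16 = weight_memory_gib(params, 2)  # 16-bit floats: 2 bytes per param
    print(f"{name}: ~{fp16:,.1f} GiB at fp16, weights alone")

# A 70B-parameter model needs ~130 GiB at fp16 before counting the KV cache
# and activations -- more than any single commodity GPU holds, which is why
# "where does the model live" is no longer a trivial question.
```

Under these assumptions, a first-generation model fits in kilobytes to megabytes and can live anywhere the data lives, while a large LLM demands multi-GPU serving infrastructure that few organizations host themselves.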
