Author(s): Jofia Jose Prakash
Originally published on Towards AI.
How I went from “response” to “action” with search, tools, and agent loops
Large language models (LLMs) are born as powerful but static brains: their results may be magical, yet they are limited to the knowledge sealed in during training. Any developer using these models quickly encounters this fact when asking about anything beyond that cutoff: current events, the latest internal policies, or proprietary information absent from the pre-training data. That recognition led me (and many engineers like me) down a rabbit hole of what I would describe as three great evolutionary steps for elevating LLMs from mere answering machines to active partners: retrieval-augmented generation (RAG), tool invocation, and autonomous agents. In this article, I share why these three developments matter and the paradigm shifts that result from them.
The article discusses the evolution of large language models (LLMs) from static tools to dynamic agents capable of performing a range of tasks. It examines three key developments: retrieval-augmented generation (RAG), which improves LLMs by supplying relevant data at query time; tool invocation, which lets models take external actions; and autonomous agents, which can observe, plan, and act with varying degrees of autonomy. Each level increases capability and user interaction while introducing new challenges, especially around security and the responsible development of artificial intelligence.
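To make the three layers concrete, here is a minimal, self-contained Python sketch of how they fit together. Everything in it (the `KNOWLEDGE_BASE`, `retrieve`, `TOOLS`, and `run_agent` names, and the keyword-matching "planner") is a hypothetical toy standing in for real retrieval, tool-calling, and agent frameworks, not code from the article itself.

```python
from datetime import date

# 1. Retrieval (RAG): fetch relevant context instead of relying
#    solely on knowledge frozen at training time.
KNOWLEDGE_BASE = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "support hours": "Support is available 9am-5pm on weekdays.",
}

def retrieve(query: str) -> str:
    """Naive keyword lookup standing in for a vector search."""
    for key, doc in KNOWLEDGE_BASE.items():
        if key in query.lower():
            return doc
    return "No relevant document found."

# 2. Tool invocation: named functions the model can ask to run
#    in order to act on the outside world.
TOOLS = {
    "get_date": lambda _task: date.today().isoformat(),
    "lookup": retrieve,
}

# 3. Agent loop: observe the task, plan a tool call, act, report.
#    A real agent would let an LLM choose the tool; here a keyword
#    check plays the planner's role.
def run_agent(task: str) -> str:
    plan = "lookup" if ("policy" in task or "hours" in task) else "get_date"
    observation = TOOLS[plan](task)
    return f"[{plan}] {observation}"

print(run_agent("What is the refund policy?"))
# [lookup] Refunds are issued within 14 days of purchase.
```

In a production system, each stage is replaced by a heavier component (a vector store for `retrieve`, an LLM tool-calling API for the planner), but the shape of the loop stays the same.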
Read the entire blog for free on Medium.
Published via Towards AI