Control and Conversational Applications

New LLM features like OpenAI’s Tools and Anthropic’s Model Context Protocol offer some quick wins for interfacing with external systems.

However, when used as the only means of interfacing, they can also introduce a major challenge: lack of ultimate control.

When an LLM is put in control of the conversational flow, the result is unpredictability: behaviour that risks drifting out of alignment with user needs and business objectives.

Just because LLMs produce natural language, that doesn’t mean they are qualified to dictate the user experience on their own. Instead, they should be used only where they are needed. Retrieval Augmented Generation (RAG) is a prime example of this approach: it leverages a model’s language capabilities while reducing the risk of irrelevant or inaccurate responses, because retrieval remains under the application’s control.
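The split of responsibilities in RAG can be sketched as follows. This is a minimal, illustrative example: the document store, the keyword-overlap scoring, and the `retrieve` and `answer` functions are all hypothetical stand-ins, and the LLM call itself is stubbed out. The point is only that the application, not the model, decides which context the model sees.

```python
from typing import List

# Hypothetical knowledge base; in practice this would be a vector store.
DOCUMENTS = [
    "Refunds are processed within 5 business days.",
    "Shipping is free on orders over $50.",
    "Support is available 24/7 via chat.",
]

def retrieve(query: str, docs: List[str], k: int = 1) -> List[str]:
    """Rank documents by naive keyword overlap with the query.
    The application, not the model, controls what gets retrieved."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    """Build the prompt from application-selected context.
    The model's only job is to phrase the final response."""
    context = retrieve(query, DOCUMENTS, k=1)[0]
    prompt = f"Context: {context}\nQuestion: {query}"
    # In a real system the prompt would be passed to the LLM here;
    # we return it directly to keep the sketch self-contained.
    return prompt

print(answer("How long do refunds take?"))
```

Because retrieval is ordinary application code, it can be tested, logged, and constrained like any other component; the model never chooses its own sources.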

LLMs work best when they are just one part of a broader system. They shouldn’t be the whole system.