Designing Custom and Dynamic Prompts for Large Language Models

Author: Shenggang Li

Originally published on Towards AI.

A practical comparison of context-building techniques, templates, and orchestration in modern LLM frameworks

Photo: Free Nomad on Unsplash

Imagine you are in a café ordering coffee. Simple, right? But if you don't specify details such as milk, sugar, or the type of roast, you may not get exactly what you wanted. Similarly, when interacting with large language models (LLMs), how you ask — your prompts — makes a big difference. That is why designing both custom (static) and dynamic prompts matters. Custom prompts are like fixed recipes: consistent, reliable, and simple. Dynamic prompts, on the other hand, adapt to context, like a skilled barista adjusting a coffee order based on your mood or the weather.
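To make the distinction concrete, here is a minimal sketch of a static (custom) prompt in Python. The template text and function names are illustrative assumptions, not from any particular framework: the key property is that the prompt is a fixed recipe, identical for every user apart from the slot that is filled in.

```python
from string import Template

# A static (custom) prompt: a fixed recipe reused verbatim for every request.
# Only the $question slot changes; the surrounding instructions never do.
STATIC_PROMPT = Template(
    "You are a helpful assistant. Answer the user's question concisely.\n"
    "Question: $question"
)

def build_static_prompt(question: str) -> str:
    """Fill the single slot in the fixed template."""
    return STATIC_PROMPT.substitute(question=question)

print(build_static_prompt("What roast do you recommend?"))
```

Because the template is constant, static prompts are easy to test and version, but they cannot react to anything the user did earlier in the session.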

Suppose you are building an AI-powered customer-service bot. If you use only static prompts, the bot can give generic answers that leave users frustrated. For example, the question "How can I help you today?" is static and may be too vague. A dynamic prompt, however, can incorporate the user's recent interactions and ask something like: "I see you checked your order status. Would you like help tracking it further?" This personalized approach can dramatically improve user satisfaction.
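The customer-service scenario above can be sketched as a prompt builder that inspects session state before composing the instruction. The `SessionContext` class and its field names are hypothetical, invented for illustration; a real system would pull this state from a session store or conversation memory.

```python
from dataclasses import dataclass, field

@dataclass
class SessionContext:
    # Hypothetical session state; the field names are illustrative only.
    user_name: str
    recent_actions: list = field(default_factory=list)

def build_dynamic_prompt(ctx: SessionContext) -> str:
    """Compose the prompt from live context instead of a fixed template."""
    base = "You are a customer-service assistant."
    if ctx.recent_actions:
        last = ctx.recent_actions[-1]
        # Personalize: reference the user's most recent action.
        return (f"{base} The user {ctx.user_name} just {last}. "
                f"Offer a follow-up related to that action.")
    # No history yet: fall back to a generic greeting.
    return f"{base} Greet {ctx.user_name} and ask how you can help."

ctx = SessionContext(user_name="Ana",
                     recent_actions=["checked their order status"])
print(build_dynamic_prompt(ctx))
```

The design choice here is that the branching lives in the prompt builder, not the model: the same function yields the order-tracking follow-up for returning users and a plain greeting for new ones.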

I will walk through practical comparisons of these prompting methods, examining context-building strategies, templates, and orchestration tools. I will analyze real-world … Read the full blog for free on Medium.

Published via Towards AI
