Original): Ahmed bolahia
Originally published in the direction of artificial intelligence.
A quick injection can manipulate AI agent for data leakage or unpredictable behavior. What is it exactly and how to overcome it?
In the current era of programming and creating websites dominated by AI, there is a constantly growing tendency to integrate LLM via chatbots and agents with internet products and software. However, like any other new technology at the beginning, it is susceptible to malicious attacks.
Chatbots and agents are no exception to this principle. There are several types of malicious attacks that can be directed by applications based on LLM in 2025, as reported by the openwide Application Security Project (OWSP), and “quick injection” is at the top of the list.
Recently, I came across a tweet @Jobergum about the GitHub repository, which contains all the hints of the system of famous agents at the production level, such as Cursor, Windsurf, Devin, etc.
This can be achieved by meticulously designed attacks, such as jailbreaking and quick injection, and clearly shows us that even LLM systems at the production level are susceptible to attacks and without solid means to counteract such attacks that companies risk not only compromising user trust and data security, but also lose valuable customers, and suffer significant financial losses.
In this post I will explain to you what is a quick injection, how it is used maliciously for aiming in applications based on LLM and various ways of defending against it.
Chatbots and agents … Read a full blog for free on the medium.
Published via AI

















