LLM Prompt Injection — Part 2: How to Break and Defend LLMs
In Part 1 we covered the business impact of prompt injection. Now let’s get more technical.
There are two flavors of attacks:
Direct injection — a user types “ignore all prior instructions” and the model spills its guts.
Indirect injection — poisoned docs or web pages sneak hostile instructions into your pipeline.
Real-world stories in Part 2 include:
A finance bot leaking keys from a “routine” PDF.
A support assistant calling internal APIs and relaying results outside.
A scraper hijacked by hidden HTML comments.
In Short: treat LLM calls like untrusted code. Defense in depth wins: sanitize I/O, verify sources, sandbox tools, log and monitor everything.
Attackers don’t need 0-days — they just need to sound convincing.
In the full post we show how defenses fail — and what pragmatic hardening actually works: link below.
