diff --git a/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md b/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md index d0ddc8250..8b61026f2 100644 --- a/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md +++ b/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md @@ -1,3 +1,7 @@ # Prompt Injection -Prompt injection is a security vulnerability where malicious users manipulate LLM inputs to override intended behavior, bypass safety measures, or extract sensitive information. Attackers embed instructions within data to make models ignore original prompts and follow malicious commands. Mitigation requires input sanitization, injection-resistant prompt design, and proper security boundaries. \ No newline at end of file +Prompt injection is a security vulnerability where malicious users manipulate LLM inputs to override intended behavior, bypass safety measures, or extract sensitive information. Attackers embed instructions within data to make models ignore original prompts and follow malicious commands. Mitigation requires input sanitization, injection-resistant prompt design, and proper security boundaries. + +Visit the following resources to learn more: + +- [@video@What Is a Prompt Injection Attack?](https://youtu.be/jrHRe9lSqqA?si=6ZN2qrorBDbynFWv)