From c280d4860838d5c0e985c3730167d4a7f4a34e45 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Agust=C3=ADn=20Fern=C3=A1ndez?= <45650558+Dasher83@users.noreply.github.com> Date: Tue, 25 Nov 2025 05:55:16 -0300 Subject: [PATCH] Enhance prompt injection section with additional resources (#9286) Added a resource link for further learning about prompt injection. --- .../content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md b/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md index d0ddc8250..8b61026f2 100644 --- a/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md +++ b/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md @@ -1,3 +1,7 @@ # Prompt Injection -Prompt injection is a security vulnerability where malicious users manipulate LLM inputs to override intended behavior, bypass safety measures, or extract sensitive information. Attackers embed instructions within data to make models ignore original prompts and follow malicious commands. Mitigation requires input sanitization, injection-resistant prompt design, and proper security boundaries. \ No newline at end of file +Prompt injection is a security vulnerability where malicious users manipulate LLM inputs to override intended behavior, bypass safety measures, or extract sensitive information. Attackers embed instructions within data to make models ignore original prompts and follow malicious commands. Mitigation requires input sanitization, injection-resistant prompt design, and proper security boundaries. + +Visit the following resources to learn more: + +- [@video@What Is a Prompt Injection Attack?](https://youtu.be/jrHRe9lSqqA?si=6ZN2qrorBDbynFWv)