Enhance prompt injection section with additional resources (#9286)

Added a resource link for further learning about prompt injection.
This commit is contained in:
Agustín Fernández
2025-11-25 05:55:16 -03:00
committed by GitHub
parent ffa064ecff
commit c280d48608

View File

@@ -1,3 +1,7 @@
# Prompt Injection
Prompt injection is a security vulnerability where malicious users manipulate LLM inputs to override intended behavior, bypass safety measures, or extract sensitive information. Attackers embed instructions within data to make models ignore original prompts and follow malicious commands. Mitigation requires input sanitization, injection-resistant prompt design, and proper security boundaries.
Prompt injection is a security vulnerability where malicious users manipulate LLM inputs to override intended behavior, bypass safety measures, or extract sensitive information. Attackers embed instructions within data to make models ignore original prompts and follow malicious commands. Mitigation requires input sanitization, injection-resistant prompt design, and proper security boundaries.
Visit the following resources to learn more:
- [@video@What Is a Prompt Injection Attack?](https://youtu.be/jrHRe9lSqqA?si=6ZN2qrorBDbynFWv)