mirror of
https://github.com/kamranahmedse/developer-roadmap.git
synced 2026-03-12 17:51:53 +08:00
Enhance prompt injection section with additional resources (#9286)
Added a resource link for further learning about prompt injection.
This commit is contained in:
committed by
GitHub
parent
ffa064ecff
commit
c280d48608
@@ -1,3 +1,7 @@
|
||||
# Prompt Injection
|
||||
|
||||
Prompt injection is a security vulnerability where malicious users manipulate LLM inputs to override intended behavior, bypass safety measures, or extract sensitive information. Attackers embed instructions within data to make models ignore original prompts and follow malicious commands. Mitigation requires input sanitization, injection-resistant prompt design, and proper security boundaries.
|
||||
Prompt injection is a security vulnerability where malicious users manipulate LLM inputs to override intended behavior, bypass safety measures, or extract sensitive information. Attackers embed instructions within data to make models ignore original prompts and follow malicious commands. Mitigation requires input sanitization, injection-resistant prompt design, and proper security boundaries.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@video@What Is a Prompt Injection Attack?](https://youtu.be/jrHRe9lSqqA?si=6ZN2qrorBDbynFWv)
|
||||
|
||||
Reference in New Issue
Block a user