Mirror of https://github.com/kamranahmedse/developer-roadmap.git, synced 2026-03-12 17:51:53 +08:00
chore: sync content to repository - prompt-engineering (#9592)
* chore: sync content to repo
* Update chain-of-thought-cot-prompting@weRaJxEplhKDyFWSMeoyI.md
* Enhance LLM self-evaluation section with details: added explanation of LLM self-evaluation and its benefits.
* Enhance LLMs overview with prediction engine details: added explanation of LLMs as prediction engines and their token generation process.
* Enhance one-shot and few-shot prompting section: added explanation of one-shot and few-shot prompting techniques, including their applications and benefits.
* Enhance prompt debiasing section with techniques: added explanation of prompt debiasing techniques and resources.
* Update react-prompting@8Ks6txRSUfMK7VotSQ4sC.md
* Update role-prompting@XHWKGaSRBYT4MsCHwV-iR.md

Co-authored-by: kamranahmedse <4921183+kamranahmedse@users.noreply.github.com>
Co-authored-by: Javier Canales <56018501+jcanalesluna@users.noreply.github.com>
committed by GitHub
parent e8017f3e85
commit 3b580515d5
@@ -2,4 +2,6 @@
Chain of Thought prompting improves LLM reasoning by generating intermediate reasoning steps before providing the final answer. Instead of jumping to conclusions, the model "thinks through" problems step by step. Simply adding "Let's think step by step" to prompts often dramatically improves accuracy on complex reasoning tasks and mathematical problems.
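A minimal sketch of the zero-shot trigger described above (the helper name is illustrative; sending the prompt to a model is left out):

```python
def make_cot_prompt(question: str) -> str:
    """Wrap a question with a zero-shot chain-of-thought trigger.

    The trailing instruction nudges the model to emit intermediate
    reasoning steps before committing to a final answer.
    """
    return f"{question}\n\nLet's think step by step."

prompt = make_cot_prompt(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)
```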
Visit the following resources to learn more:
- [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://youtu.be/vD0E3EUb8-8?si=Y6MCLPzjmhMB4jSu&t=203)
@@ -2,5 +2,6 @@
LLM self-evaluation involves prompting models to assess their own outputs for quality, accuracy, or adherence to criteria. This technique can identify errors, rate confidence levels, or check if responses meet specific requirements. Self-evaluation helps improve output quality through iterative refinement and provides valuable feedback for prompt optimization.
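One way to apply this is a second prompt that asks the model to grade its own draft against explicit criteria. A sketch, assuming a hypothetical "Score: <n>" reply convention:

```python
def make_self_eval_prompt(task: str, draft_answer: str) -> str:
    """Ask the model to grade its own draft against explicit criteria."""
    return (
        f"Task: {task}\n"
        f"Draft answer: {draft_answer}\n\n"
        "Rate the draft from 1-5 for accuracy and completeness, then "
        "list any errors. Reply with 'Score: <n>' on the first line."
    )

def parse_score(reply: str) -> int:
    """Extract the 1-5 score from a reply starting with 'Score: <n>'."""
    first_line = reply.splitlines()[0]
    return int(first_line.split("Score:")[1].strip())
```

A low parsed score can then trigger another refinement round, which is the iterative loop the text describes.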
Visit the following resources to learn more:
- [@article@LLM Self-Evaluation](https://learnprompting.org/docs/reliability/lm_self_eval)
@@ -2,4 +2,6 @@
LLMs function as sophisticated prediction engines that process text sequentially, predicting the next token based on relationships between previous tokens and patterns from training data. They don't predict single tokens directly but generate probability distributions over possible next tokens, which are then sampled using parameters like temperature and top-K. The model repeatedly adds predicted tokens to the sequence, building responses iteratively. This token-by-token prediction process, combined with massive training datasets, enables LLMs to generate coherent, contextually relevant text across diverse applications and domains.
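The sampling step can be illustrated on a toy vocabulary. This is a simplified sketch of temperature and top-K sampling, not any particular model's implementation:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=3, seed=None):
    """Pick the next token from raw scores ("logits").

    1. Keep only the top_k highest-scoring tokens.
    2. Divide scores by temperature (low -> sharper, high -> flatter).
    3. Softmax into a probability distribution and sample one token.
    """
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    scaled = [score / temperature for _, score in top]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    rng = random.Random(seed)
    return rng.choices([tok for tok, _ in top], weights=weights, k=1)[0]

toy_logits = {"cat": 2.0, "dog": 1.5, "car": 0.5, "sky": -1.0}
# A very low temperature makes sampling effectively greedy:
print(sample_next_token(toy_logits, temperature=0.01))
```

Generation then repeats this step, appending each sampled token to the sequence before predicting the next one.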
Visit the following resources to learn more:
- [@video@How Large Language Models Work](https://youtu.be/5sLYAQS9sWQ)
@@ -2,4 +2,6 @@
One-shot prompting provides a single example to guide model behavior, while few-shot prompting includes multiple examples (typically 3-5) to demonstrate the desired pattern. Examples show output structure, style, and tone, increasing accuracy and consistency. Use few-shot prompting for complex formatting, specialized tasks, and when zero-shot results are inconsistent.
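A common way to assemble such a prompt is to interleave example inputs and outputs before the new query. A minimal sketch (the Input/Output labels are one common convention, not a requirement):

```python
def make_few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt: instruction, worked examples, new input.

    `examples` is a list of (input, output) pairs; 3-5 pairs is typical.
    Passing a single pair yields a one-shot prompt.
    """
    lines = [instruction, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = make_few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("I loved it", "positive"),
     ("Terrible service", "negative"),
     ("Great value", "positive")],
    "The food was cold",
)
```

Ending the prompt with a bare `Output:` invites the model to complete the pattern the examples established.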
Visit the following resources to learn more:
- [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://youtu.be/vD0E3EUb8-8?si=Fi2igdPTBUocqnX7&t=177)
@@ -1,6 +1,7 @@
# Prompt Debiasing
Prompt debiasing involves techniques to reduce unwanted biases in LLM outputs by carefully crafting prompts. This includes using neutral language, diverse examples, and explicit instructions to avoid stereotypes or unfair representations. Effective debiasing helps ensure AI outputs are fairer, more inclusive, and more representative across different groups and perspectives.
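The explicit-instruction technique can be sketched as a prompt prefix; the wording below is illustrative, not a vetted debiasing recipe:

```python
DEBIAS_INSTRUCTIONS = (
    "Use neutral language, avoid stereotypes, and do not assume "
    "gender, ethnicity, age, or ability unless the text states them."
)

def debias_prompt(task, examples=()):
    """Prefix a task with explicit debiasing instructions and,
    optionally, a diverse set of examples."""
    parts = [DEBIAS_INSTRUCTIONS]
    for ex in examples:
        parts.append(f"Example: {ex}")
    parts.append(task)
    return "\n\n".join(parts)

prompt = debias_prompt("Write a short profile of a typical nurse.")
```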
Visit the following resources to learn more:
- [@article@Prompt Debiasing](https://learnprompting.org/docs/reliability/debiasing)
@@ -2,4 +2,6 @@
ReAct (Reason and Act) prompting enables LLMs to solve complex tasks by combining reasoning with external tool interactions. It follows a thought-action-observation loop: analyze the problem, perform actions using external APIs, review results, and iterate until solved. Useful for research, multi-step problems, and tasks requiring current data.
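The thought-action-observation loop can be sketched with stand-ins for the model and the tools; a real implementation would call an LLM and real APIs instead:

```python
def react_loop(question, llm, tools, max_steps=5):
    """Minimal ReAct loop.

    `llm` maps the transcript so far to either
    ("act", tool_name, tool_input) or ("finish", answer);
    `tools` maps tool names to callables.
    """
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        step = llm("\n".join(transcript))       # Thought: decide next step
        if step[0] == "finish":
            return step[1]
        _, tool_name, tool_input = step
        observation = tools[tool_name](tool_input)          # Action
        transcript.append(f"Action: {tool_name}({tool_input})")
        transcript.append(f"Observation: {observation}")    # fed back in
    return None

# Scripted stand-in model: look something up, then answer from the observation.
def fake_llm(transcript):
    if "Observation:" not in transcript:
        return ("act", "lookup", "capital of France")
    return ("finish", transcript.rsplit("Observation: ", 1)[1])

tools = {"lookup": lambda query: "Paris"}
print(react_loop("What is the capital of France?", fake_llm, tools))  # -> Paris
```

The loop terminates either when the model emits a final answer or after `max_steps` iterations, which guards against tools that never yield a usable observation.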
Visit the following resources to learn more:
- [@video@4 Methods of Prompt Engineering](https://youtu.be/vD0E3EUb8-8?si=Y6MCLPzjmhMB4jSu&t=203)
@@ -2,4 +2,6 @@
Role prompting assigns a specific character, identity, or professional role to the LLM to generate responses consistent with that role's expertise, personality, and communication style. By establishing roles like "teacher," "travel guide," or "software engineer," you provide the model with appropriate domain knowledge, perspective, and tone for more targeted, natural interactions.
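In chat-style APIs the role is usually set via the system message. A sketch using plain dicts in the message shape common to chat APIs (no specific provider assumed):

```python
def make_role_messages(role, user_request):
    """Chat-style message list that assigns a persona via the system message."""
    return [
        {"role": "system",
         "content": f"You are a {role}. Answer with the expertise, "
                    "perspective, and tone of that role."},
        {"role": "user", "content": user_request},
    ]

messages = make_role_messages("travel guide", "Plan a weekend in Lisbon.")
```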
Visit the following resources to learn more:
- [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://youtu.be/vD0E3EUb8-8?si=9orzEniOGmRD7g-o&t=136)
@@ -1,3 +1,7 @@
# Structured Outputs
Structured outputs involve prompting LLMs to return responses in specific formats like JSON, XML, or other organized structures rather than free-form text. This approach forces models to organize information systematically, reduces hallucinations by imposing format constraints, enables easy programmatic processing, and facilitates integration with applications. For example, requesting movie classification results as JSON with a specified schema ensures consistent, parseable responses. Structured outputs are particularly valuable for data extraction, API integration, and applications requiring reliable data formatting.
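The movie-classification example can be sketched end to end: request JSON with an explicit schema, then parse and validate the reply. The key names here are illustrative:

```python
import json

SCHEMA_KEYS = {"title", "genre", "rating"}

def make_structured_prompt(review):
    """Ask for JSON only, with an explicit schema, so replies are parseable."""
    return (
        "Classify the movie review below. Reply with JSON only, using "
        'exactly the keys "title", "genre", and "rating" (integer 1-10).\n\n'
        f"Review: {review}"
    )

def parse_reply(reply):
    """Parse the model's reply and check it matches the expected schema."""
    data = json.loads(reply)
    missing = SCHEMA_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

# A well-formed reply parses cleanly:
result = parse_reply('{"title": "Heat", "genre": "crime", "rating": 9}')
```

Validation like this is what makes the format constraint useful downstream: malformed or incomplete replies fail loudly instead of silently corrupting application data.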
Visit the following resources to learn more:
- [@article@Generating Structured Outputs from LLMs](https://towardsdatascience.com/generating-structured-outputs-from-llms/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)