diff --git a/src/data/roadmaps/prompt-engineering/content/llm-self-evaluation@CvV3GIvQhsTvE-TQjTpIQ.md b/src/data/roadmaps/prompt-engineering/content/llm-self-evaluation@CvV3GIvQhsTvE-TQjTpIQ.md
index 9e0ef6b4d..f6692a248 100644
--- a/src/data/roadmaps/prompt-engineering/content/llm-self-evaluation@CvV3GIvQhsTvE-TQjTpIQ.md
+++ b/src/data/roadmaps/prompt-engineering/content/llm-self-evaluation@CvV3GIvQhsTvE-TQjTpIQ.md
@@ -1,3 +1,6 @@
 # LLM Self Evaluation
 
-LLM self-evaluation involves prompting models to assess their own outputs for quality, accuracy, or adherence to criteria. This technique can identify errors, rate confidence levels, or check if responses meet specific requirements. Self-evaluation helps improve output quality through iterative refinement and provides valuable feedback for prompt optimization.
\ No newline at end of file
+LLM self-evaluation involves prompting models to assess their own outputs for quality, accuracy, or adherence to criteria. This technique can identify errors, rate confidence levels, or check if responses meet specific requirements. Self-evaluation helps improve output quality through iterative refinement and provides valuable feedback for prompt optimization.
+
+
+- [@article@LLM Self-Evaluation](https://learnprompting.org/docs/reliability/lm_self_eval)
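
The iterative refinement loop this content describes can be sketched in a few lines. The snippet below is a minimal illustration, not part of the patch: `ask_llm` is a hypothetical helper standing in for a real model API call (stubbed here so the flow is runnable), and the 1–10 scoring rubric and threshold are assumptions chosen for the example.

```python
# Minimal sketch of an LLM self-evaluation loop: generate an answer,
# ask the model to score it, and revise until the score clears a threshold.

def ask_llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g. an API client).
    # Stubbed with canned responses so the example runs as-is.
    if "Rate the answer" in prompt:
        return "8"
    return "Paris is the capital of France."

def generate_with_self_evaluation(question: str, threshold: int = 7,
                                  max_rounds: int = 3) -> str:
    answer = ask_llm(question)
    for _ in range(max_rounds):
        critique_prompt = (
            f"Question: {question}\nAnswer: {answer}\n"
            "Rate the answer from 1 to 10 for accuracy and completeness. "
            "Reply with a single integer."
        )
        score = int(ask_llm(critique_prompt).strip())
        if score >= threshold:
            break  # the model judges its own answer good enough
        # Feed the low score back and ask the model to revise its answer.
        answer = ask_llm(
            f"Question: {question}\n"
            f"Previous answer (scored {score}/10): {answer}\n"
            "Improve the answer."
        )
    return answer

print(generate_with_self_evaluation("What is the capital of France?"))
```

Capping the loop with `max_rounds` matters in practice: models can oscillate or keep rating their own output below the threshold, so an unbounded refinement loop risks never terminating.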