Enhance LLM self-evaluation documentation (#9301)

Added a reference link for LLM self-evaluation.
This commit is contained in:
Agustín Fernández
2025-11-25 06:29:46 -03:00
committed by GitHub
parent a2051c6af0
commit c732275ecb

@@ -1,3 +1,6 @@
# LLM Self Evaluation
LLM self-evaluation involves prompting models to assess their own outputs for quality, accuracy, or adherence to criteria. This technique can identify errors, rate confidence levels, or check if responses meet specific requirements. Self-evaluation helps improve output quality through iterative refinement and provides valuable feedback for prompt optimization.
- [@article@LLM Self-Evaluation](https://learnprompting.org/docs/reliability/lm_self_eval)
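The self-evaluation loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: `call_llm` is a stand-in stub for a real model API (it returns canned strings), and the prompt wording, 1-5 rating scale, and retry threshold are illustrative assumptions, not part of the linked article.

```python
# Hypothetical sketch of LLM self-evaluation: generate an answer, ask the
# model to rate its own answer, and retry if the self-assigned score is
# below a threshold. `call_llm` is a placeholder stub, NOT a real client.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model API call (assumption for illustration)."""
    if "Rate the following answer" in prompt:
        return "4"  # the model's self-assigned score out of 5
    return "Paris is the capital of France."

def self_evaluate(question: str, min_score: int = 3, max_tries: int = 2):
    """Generate an answer, have the model score it, retry if score is low."""
    answer, score = "", 0
    for _ in range(max_tries):
        answer = call_llm(f"Question: {question}\nAnswer:")
        eval_prompt = (
            f"Rate the following answer to '{question}' on a scale of 1-5.\n"
            f"Answer: {answer}\nReply with a single digit."
        )
        score = int(call_llm(eval_prompt).strip())
        if score >= min_score:  # accept the answer once it passes the bar
            break
    return answer, score

answer, score = self_evaluate("What is the capital of France?")
print(answer, score)
```

In practice the two prompts would go to a real model, and the evaluation prompt often asks for structured output (a JSON score plus a critique) so the refinement step can act on specific feedback rather than a bare number.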