chore: sync content to repo (#9676)

Co-authored-by: kamranahmedse <4921183+kamranahmedse@users.noreply.github.com>
This commit is contained in:
github-actions[bot]
2026-03-03 14:13:43 +01:00
committed by GitHub
parent ab9a60827e
commit a27d607e79
28 changed files with 33 additions and 47 deletions

View File

@@ -6,5 +6,4 @@ Visit the following resources to learn more:
- [@article@Top 15 Use Cases Of AI Agents In Business](https://www.ampcome.com/post/15-use-cases-of-ai-agents-in-business)
- [@article@A Brief Guide on AI Agents: Benefits and Use Cases](https://www.codica.com/blog/brief-guide-on-ai-agents/)
- [@video@The Complete Guide to Building AI Agents for Beginners](https://youtu.be/MOyl58VF2ak?si=-QjRD_5y3iViprJX)
- [@article@How to Build Effective AI Agents to Process Millions of Requests](https://towardsdatascience.com/how-to-build-effective-ai-agents-to-process-millions-of-requests/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)

View File

@@ -6,5 +6,4 @@ Visit the following resources to learn more:
- [@article@Building an AI Agent Tutorial - LangChain](https://python.langchain.com/docs/tutorials/agents/)
- [@article@AI Agents and Their Types](https://play.ht/blog/ai-agents-use-cases/)
- [@article@How to Design My First AI Agent](https://towardsdatascience.com/how-to-design-my-first-ai-agent/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
- [@video@The Complete Guide to Building AI Agents for Beginners](https://youtu.be/MOyl58VF2ak?si=-QjRD_5y3iViprJX)

View File

@@ -6,5 +6,4 @@ Visit the following resources to learn more:
- [@article@What does an AI Engineer do?](https://www.codecademy.com/resources/blog/what-does-an-ai-engineer-do/)
- [@article@What is an ML Engineer?](https://www.coursera.org/articles/what-is-machine-learning-engineer)
- [@article@Machine Learning vs AI Engineer: What Are the Differences?](https://towardsdatascience.com/machine-learning-vs-ai-engineer-no-confusing-jargon/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
- [@video@AI vs ML](https://www.youtube.com/watch?v=4RixMPF4xis)

View File

@@ -5,5 +5,4 @@ AI (Artificial Intelligence) refers to systems designed to perform specific task
Visit the following resources to learn more:
- [@article@What is AGI?](https://aws.amazon.com/what-is/artificial-general-intelligence/)
- [@article@The crucial difference between AI and AGI](https://www.forbes.com/sites/bernardmarr/2024/05/20/the-crucial-difference-between-ai-and-agi/)
- [@article@Stop Worrying about AGI: The Immediate Danger is Reduced General Intelligence (RGI)](https://towardsdatascience.com/stop-worrying-about-agi-the-immediate-danger-is-reduced-general-intelligence-rgi/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)

View File

@@ -4,5 +4,4 @@ Anomaly detection with embeddings works by transforming data, such as text, imag
Visit the following resources to learn more:
- [@article@Anomaly in Embeddings](https://ai.google.dev/gemini-api/tutorials/anomaly_detection)
- [@article@Boosting Your Anomaly Detection With LLMs](https://towardsdatascience.com/boosting-your-anomaly-detection-with-llms/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
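The approach described above can be sketched in a few lines: embed each item, then flag points whose embedding sits far from the rest. A minimal pure-Python illustration (the vectors and the 0.8 threshold are made up for the example; real systems use learned embeddings from a model):

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def flag_anomalies(embeddings, threshold=0.8):
    # The centroid of all embeddings stands in for "normal" data.
    dim = len(embeddings[0])
    centroid = [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]
    # Items whose similarity to the centroid falls below the threshold are flagged.
    return [i for i, e in enumerate(embeddings)
            if cosine_sim(e, centroid) < threshold]

vectors = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0], [-1.0, 0.05]]
anomalies = flag_anomalies(vectors)  # the last vector points the opposite way
```

Distance to a centroid is the simplest criterion; production systems often use k-nearest-neighbour distance or density estimates over the same embeddings.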

View File

@@ -7,5 +7,4 @@ Visit the following resources to learn more:
- [@official@Claude](https://claude.ai)
- [@course@Claude 101](https://anthropic.skilljar.com/claude-101)
- [@video@How To Use Claude Pro For Beginners](https://www.youtube.com/watch?v=J3X_JWQkvo8)
- [@video@Claude FULL COURSE 1 HOUR (Build & Automate Anything)](https://www.youtube.com/watch?v=KrKhfm2Xuho)

View File

@@ -4,6 +4,6 @@ Open-source models are freely available for customization and collaboration, pro
Visit the following resources to learn more:
- [@article@Open-Source LLMs vs Closed: Unbiased Guide for Innovative Companies [2026]](https://hatchworks.com/blog/gen-ai/open-source-vs-closed-llms-guide/)
- [@video@Open Source vs Closed AI: LLMs, Agents & the AI Stack Explained](https://www.youtube.com/watch?v=_QfxGZGITGw)
- [@video@Open-Source vs Closed-Source LLMs](https://www.youtube.com/watch?v=710PDpuLwOc)

View File

@@ -5,4 +5,4 @@ Context compaction is a technique used to reduce the length of the context provi
Visit the following resources to learn more:
- [@article@Context Engineering](https://blog.langchain.com/context-engineering-for-agents/)
- [@opensource@Context Compaction](https://gist.github.com/badlogic/cd2ef65b0697c4dbe2d13fbecb0a0a5f)
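A common compaction strategy is to keep the most recent turns verbatim and collapse everything older into a summary. A minimal sketch, with `summarize` as a hypothetical stand-in for an LLM summarization call:

```python
def summarize(messages):
    # Stand-in for an LLM summarization call (hypothetical helper).
    return f"Summary of {len(messages)} earlier messages."

def compact_context(history, keep_recent=2):
    # Keep the most recent turns verbatim; collapse the rest into one summary
    # message so the context stays under the model's token limit.
    if len(history) <= keep_recent:
        return history
    older, recent = history[:-keep_recent], history[-keep_recent:]
    return [{"role": "system", "content": summarize(older)}] + recent

history = [{"role": "user", "content": f"message {i}"} for i in range(5)]
compacted = compact_context(history)  # 1 summary + 2 recent messages
```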

View File

@@ -6,5 +6,4 @@ Visit the following resources to learn more:
- [@article@Context Engineering Guide](https://www.promptingguide.ai/guides/context-engineering-guide)
- [@article@Effective context engineering for AI agents](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents)
- [@article@How to Perform Effective Agentic Context Engineering](https://towardsdatascience.com/how-to-perform-effective-agentic-context-engineering/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
- [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://www.youtube.com/watch?v=vD0E3EUb8-8)

View File

@@ -1,7 +1,8 @@
# External Memory for LLMs
External memory refers to the techniques used to provide Large Language Models (LLMs) with access to information that is not stored directly within their parameters. This allows LLMs to access and utilize a much broader and more up-to-date knowledge base than what was available during their training. By using external memory, LLMs can overcome limitations related to knowledge cut-off, hallucination, and the inability to incorporate new information, leading to more accurate, reliable, and contextually relevant responses.
Visit the following resources to learn more:
- [@article@How to Maximize Agentic Memory for Continual Learning](https://towardsdatascience.com/how-to-maximize-agentic-memory-for-continual-learning/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
- [@article@Context Engineering - LLM Memory and Retrieval for AI Agents](https://weaviate.io/blog/context-engineering)
- [@article@4 context engineering strategies every AI engineer needs to know](https://newsletter.owainlewis.com/i/180013006/1-write-external-memory)
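The pattern boils down to: store facts outside the model, retrieve the relevant ones at prompt time, and inject them into the context. A toy sketch using word overlap for retrieval (real systems rank documents with embedding similarity; the store, documents, and query here are invented for the example):

```python
import re

class ExternalMemory:
    """A toy document store queried at prompt time (real systems use vector search)."""

    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append(text)

    def retrieve(self, query, k=1):
        # Rank documents by word overlap with the query.
        q = set(re.findall(r"\w+", query.lower()))
        overlap = lambda d: len(q & set(re.findall(r"\w+", d.lower())))
        return sorted(self.docs, key=overlap, reverse=True)[:k]

memory = ExternalMemory()
memory.add("The Eiffel Tower is 330 metres tall.")
memory.add("Python was created by Guido van Rossum.")

# Retrieved facts are injected into the prompt rather than relying on model weights.
context = memory.retrieve("How tall is the Eiffel Tower?")[0]
prompt = f"Context: {context}\nQuestion: How tall is the Eiffel Tower?"
```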

View File

@@ -1,10 +1,9 @@
# Fine-tuning
Fine-tuning involves taking a pre-trained large language model (LLM) and further training it on a smaller, task-specific dataset. This adapts the LLM to perform better on a particular task or domain. However, fine-tuning can be resource-intensive and may not always be the most efficient approach. Prompt engineering, retrieval-augmented generation (RAG), or using smaller, specialized models can sometimes achieve comparable or even better results with less computational overhead and data requirements.
Visit the following resources to learn more:
- [@article@What is fine-tuning?](https://www.ibm.com/think/topics/fine-tuning)
- [@article@What is fine-tuning? A guide to fine-tuning LLMs](https://cohere.com/blog/fine-tuning)
- [@article@How I Fine-Tuned Granite-Vision 2B to Beat a 90B Model — Insights and Lessons Learned](https://towardsdatascience.com/how-i-fine-tuned-granite-vision-2b-to-beat-a-90b-model-insights-and-lessons-learned/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
- [@video@RAG vs Fine-Tuning vs Prompt Engineering: Optimizing AI Models](https://www.youtube.com/watch?v=zYGDpG-pTho)

View File

@@ -4,6 +4,6 @@ The Google Agent Development Kit (ADK) is a framework designed to help developer
Visit the following resources to learn more:
- [@course@ADK Crash Course - From Beginner To Expert](https://codelabs.developers.google.com/onramp/instructions#0)
- [@official@Agent Development Kit](https://google.github.io/adk-docs/)
- [@official@Overview of Agent Development Kit](https://docs.cloud.google.com/agent-builder/agent-development-kit/overview)

View File

@@ -1,6 +1,6 @@
# Google Gemini
Google Gemini is a family of multimodal large language models (LLMs) developed by Google AI. It's designed to understand and generate content across various modalities, including text, images, audio, and video. Gemini comes in different sizes and capabilities, allowing developers to choose the best model for their specific needs and resource constraints.
Visit the following resources to learn more:

View File

@@ -1,10 +1,9 @@
# Haystack
Haystack is an open-source Python framework that helps you build search and question-answering agents fast. You connect your data sources, pick a language model, and set up pipelines that find the best answer to a user's query. Haystack handles tasks such as indexing documents, retrieving passages, running the model, and ranking results. It works with many back-ends like Elasticsearch, OpenSearch, FAISS, and Pinecone, so you can scale from a laptop to a cluster. You can add features like summarization, translation, and document chat by dropping extra nodes into the pipeline. The framework also offers REST APIs, a web UI, and clear tutorials, making it easy to test and deploy your agent in production.
Visit the following resources to learn more:
- [@official@Haystack](https://haystack.deepset.ai/)
- [@official@Haystack Overview](https://docs.haystack.deepset.ai/docs/intro)
- [@opensource@deepset-ai/haystack](https://github.com/deepset-ai/haystack)

View File

@@ -4,5 +4,5 @@ The Hugging Face Hub is a central platform where users can discover, share, and
Visit the following resources to learn more:
- [@course@The Hugging Face Hub (LLM Course)](https://huggingface.co/learn/nlp-course/en/chapter4/1)
- [@official@Hugging Face Documentation](https://huggingface.co/docs/hub/en/index)

View File

@@ -4,6 +4,6 @@ Hugging Face is a leading AI company and open-source platform that provides tool
Visit the following resources to learn more:
- [@course@Hugging Face Official Video Course](https://www.youtube.com/watch?v=00GKzGyWFEs&list=PLo2EIpI_JMQvWfQndUesu0nPBAtZ9gP1o)
- [@official@Hugging Face](https://huggingface.co)
- [@video@What is Hugging Face? - Machine Learning Hub Explained](https://www.youtube.com/watch?v=1AUjKfpRZVo)

View File

@@ -6,6 +6,5 @@ Visit the following resources to learn more:
- [@article@What is a large language model (LLM)?](https://www.cloudflare.com/en-gb/learning/ai/what-is-large-language-model/)
- [@article@Understanding AI: Everything you need to know about language models](https://leerob.com/ai)
- [@article@New to LLMs? Start Here](https://towardsdatascience.com/new-to-llms-start-here/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
- [@video@How Large Language Models Work](https://www.youtube.com/watch?v=5sLYAQS9sWQ)
- [@video@Large Language Models (LLMs) - Everything You NEED To Know](https://www.youtube.com/watch?v=osKyvYJ3PRM)

View File

@@ -4,5 +4,5 @@ Meta Llama is a family of large language models (LLMs) developed by Meta AI. The
Visit the following resources to learn more:
- [@course@Building with Llama 4](https://www.deeplearning.ai/short-courses/building-with-llama-4/)
- [@official@Llama](https://www.llama.com/)

View File

@@ -6,5 +6,4 @@ Visit the following resources to learn more:
- [@roadmap@Visit Dedicated Prompt Engineering Roadmap](https://roadmap.sh/prompt-engineering)
- [@article@What is Prompt Engineering? - AI Prompt Engineering Explained - AWS](https://aws.amazon.com/what-is/prompt-engineering/)
- [@article@Advanced Prompt Engineering for Data Science Projects](https://towardsdatascience.com/advanced-prompt-engineering-for-data-science-projects/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
- [@video@What is Prompt Engineering?](https://www.youtube.com/watch?v=nf1e-55KKbg)

View File

@@ -7,5 +7,4 @@ Visit the following resources to learn more:
- [@article@Context engineering vs. prompt engineering](https://www.elastic.co/search-labs/blog/context-engineering-vs-prompt-engineering)
- [@article@Effective context engineering for AI agents](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents)
- [@article@Context Engineering vs Prompt Engineering](https://medium.com/data-science-in-your-pocket/context-engineering-vs-prompt-engineering-379e9622e19d)
- [@article@Beyond Prompting: The Power of Context Engineering](https://towardsdatascience.com/beyond-prompting-the-power-of-context-engineering/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
- [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://www.youtube.com/watch?v=vD0E3EUb8-8)

View File

@@ -5,5 +5,4 @@ A vector database is designed to store, manage, and retrieve high-dimensional ve
Visit the following resources to learn more:
- [@article@What is a Vector Database? Top 12 Use Cases](https://lakefs.io/blog/what-is-vector-databases/)
- [@article@Vector Databases: Intro, Use Cases](https://www.v7labs.com/blog/vector-databases)
- [@article@When (Not) to Use Vector DB](https://towardsdatascience.com/when-not-to-use-vector-db/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
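The core operation a vector database provides is nearest-neighbour search over stored embeddings. A brute-force in-memory sketch (the item IDs and 2-D vectors are invented for the example; real vector DBs work with high-dimensional embeddings and approximate indexes):

```python
import math

class VectorStore:
    def __init__(self):
        self.items = []  # (id, vector) pairs

    def add(self, item_id, vector):
        self.items.append((item_id, vector))

    def query(self, vector, k=1):
        # Brute-force exact nearest neighbour; production vector DBs use
        # ANN indexes such as HNSW or IVF to stay fast at millions of vectors.
        ranked = sorted(self.items, key=lambda iv: math.dist(iv[1], vector))
        return [item_id for item_id, _ in ranked[:k]]

store = VectorStore()
store.add("dog", [0.9, 0.1])
store.add("car", [0.1, 0.9])
nearest = store.query([0.8, 0.2])
```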

View File

@@ -5,5 +5,4 @@ Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs) by pr
Visit the following resources to learn more:
- [@article@4 context engineering strategies every AI engineer needs to know](https://newsletter.owainlewis.com/p/4-context-engineering-strategies)
- [@article@Context Engineering](https://blog.langchain.com/context-engineering-for-agents/)
- [@article@Is RAG Dead? The Rise of Context Engineering and Semantic Layers for Agentic AI](https://towardsdatascience.com/beyond-rag/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)

View File

@@ -6,5 +6,4 @@ Visit the following resources to learn more:
- [@article@Retrieval augmented generation use cases: Transforming data into insights](https://www.glean.com/blog/retrieval-augmented-generation-use-cases)
- [@article@Retrieval Augmented Generation (RAG) 5 Use Cases](https://theblue.ai/blog/rag-news/)
- [@video@Introduction to RAG](https://www.youtube.com/watch?v=LmiFeXH-kq8&list=PL-pTHQz4RcBbz78Z5QXsZhe9rHuCs1Jw-)
- [@article@How to Train a Chatbot Using RAG and Custom Data](https://towardsdatascience.com/how-to-train-a-chatbot-using-rag-and-custom-data/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)

View File

@@ -5,5 +5,4 @@ Retrieval-Augmented Generation (RAG) is an AI approach that combines information
Visit the following resources to learn more:
- [@article@What is Retrieval-Augmented Generation? - Google](https://cloud.google.com/use-cases/retrieval-augmented-generation)
- [@article@RAG Explained: Understanding Embeddings, Similarity, and Retrieval](https://towardsdatascience.com/rag-explained-understanding-embeddings-similarity-and-retrieval/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)
- [@video@What is Retrieval-Augmented Generation? - IBM](https://www.youtube.com/watch?v=T-D1OfcDW1M)
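The retrieve-then-augment loop can be sketched end to end: embed the question, rank documents by similarity, and ground the prompt in the best match. This toy version uses a letter-frequency "embedding" as a stand-in for a real embedding model, and the documents are invented for the example:

```python
import math

def embed(text):
    # Toy letter-frequency "embedding"; a real pipeline calls an embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def rag_prompt(question, docs, k=1):
    # Retrieve: rank documents by embedding similarity to the question.
    qv = embed(question)
    context = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)[:k]
    # Augment: ground the generation step in the retrieved context.
    return "Answer using only this context:\n" + "\n".join(context) + f"\n\nQ: {question}"

docs = ["Mount Everest is 8,849 m high.", "The Pacific is the largest ocean."]
prompt = rag_prompt("How high is Mount Everest?", docs)
```

The assembled prompt would then be sent to the LLM, which answers from the retrieved passage instead of its parametric knowledge.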

View File

@@ -1,8 +1,9 @@
# Tokens
Tokens are the fundamental building blocks of large language models (LLMs). They are discrete units of text that the model processes and uses to understand and generate language. These units can be words, parts of words, or even individual characters, depending on the model's vocabulary. LLMs work by predicting the next token in a sequence, based on the preceding tokens and their learned patterns.
Visit the following resources to learn more:
- [@article@Explaining Tokens — the Language and Currency of AI](https://blogs.nvidia.com/blog/ai-tokens-explained/)
- [@article@Understanding Tokens and Parameters in Model Training: A Deep Dive](https://www.functionize.com/blog/understanding-tokens-and-parameters-in-model-training)
- [@video@Most devs don't understand how LLM tokens work](https://www.youtube.com/watch?v=nKSk_TiR8YA&t=33s)
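Because billing and context limits are both denominated in tokens, counting them is a routine task. A rough sketch (the splitter and the per-1k price are illustrative; real LLMs use subword schemes such as BPE, so this only approximates a model's actual count):

```python
import re

def tokenize(text):
    # Crude word/punctuation splitter; real tokenizers produce subword units,
    # so counts here only approximate a model's real token count.
    return re.findall(r"\w+|[^\w\s]", text)

def estimate_cost(text, price_per_1k=0.002):
    # API bills are usually quoted per 1,000 tokens (this price is made up).
    n = len(tokenize(text))
    return n, n / 1000 * price_per_1k

count, cost = estimate_cost("LLMs predict the next token.")
```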

View File

@@ -4,7 +4,7 @@ Tools and function calling equip AI agents with the ability to interact with the
Visit the following resources to learn more:
- [@article@A Comprehensive Guide to Function Calling in LLMs](https://thenewstack.io/a-comprehensive-guide-to-function-calling-in-llms/)
- [@article@What are Tools? - Hugging Face](https://huggingface.co/learn/agents-course/en/unit1/tools)
- [@article@Compare 50+ AI Agent Tools in 2026](https://aimultiple.com/ai-agent-tools)
- [@article@AI Agents Explained in Simple Terms for Beginners](https://www.geeky-gadgets.com/ai-agents-explained-for-beginners/)
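The runtime side of function calling is a dispatch step: the model emits a structured call, and the agent executes the matching function and returns the result. A minimal sketch (`get_weather` and the JSON shape are hypothetical; each provider defines its own tool-call format):

```python
import json

def get_weather(city):
    # Hypothetical tool; a real agent would call a weather API here.
    return {"city": city, "forecast": "sunny"}

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json):
    # The model emits a structured call; the runtime looks up and executes
    # the matching function, then feeds the result back into the conversation.
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
```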

View File

@@ -5,5 +5,4 @@ AI engineers are professionals who specialize in designing, developing, and impl
Visit the following resources to learn more:
- [@article@How to Become an AI Engineer: Duties, Skills, and Salary](https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/how-to-become-an-ai-engineer)
- [@article@AI Engineers: What they do and how to become one](https://www.techtarget.com/whatis/feature/How-to-become-an-artificial-intelligence-engineer)
- [@article@I Transitioned from Data Science to AI Engineering: Here's Everything You Need to Know](https://towardsdatascience.com/i-transitioned-from-data-science-to-ai-engineering-heres-everything-you-need-to-know/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration)

View File

@@ -1,9 +1,10 @@
# Zero-Shot Prompting
Zero-shot prompting is a prompt engineering method that relies on the pretraining of a large language model (LLM) to infer an appropriate response. In contrast to other prompt engineering methods, such as few-shot prompting, models aren't provided with examples of output when prompting with the zero-shot technique.
Visit the following resources to learn more:
- [@article@What is zero-shot prompting?](https://www.ibm.com/think/topics/zero-shot-prompting)
- [@article@Zero-Shot Prompting](https://www.promptingguide.ai/techniques/zeroshot)
- [@article@Technique #3: Examples in Prompts: From Zero-Shot to Few-Shot](https://learnprompting.org/docs/basics/few_shot)
- [@video@Zero-shot, One-shot and Few-shot Prompting Explained | Prompt Engineering 101](https://www.youtube.com/watch?v=sW5xoicq5TY)
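The contrast with few-shot prompting is easiest to see in the prompt strings themselves. A sketch that builds both (the task wording and examples are invented for illustration):

```python
def zero_shot(task, item):
    # Zero-shot: no worked examples; the model relies on pretraining alone.
    return f"{task}\n\nInput: {item}\nOutput:"

def few_shot(task, examples, item):
    # Few-shot: labelled examples are prepended for contrast.
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{task}\n\n{shots}\n\nInput: {item}\nOutput:"

task = "Classify the sentiment as positive or negative."
zs = zero_shot(task, "I loved it")
fs = few_shot(task,
              [("Great film!", "positive"), ("Terrible plot.", "negative")],
              "I loved it")
```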