Compare commits

..

1 Commits

Author SHA1 Message Date
Arik Chakma
2cec63e86b fix: roadmap overflow 2024-10-28 19:11:33 +06:00
40 changed files with 123 additions and 348 deletions

View File

@@ -176,7 +176,7 @@
"type": "article"
},
{
"title": "Cookies - Mozilla",
"title": "Cookes - Mozilla",
"url": "https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/cookies",
"type": "article"
}

View File

@@ -811,8 +811,13 @@
"type": "article"
},
{
"title": "Subnetting",
"url": "https://www.youtube.com/playlist?list=PLIhvC56v63IKrRHh3gvZZBAGvsvOhwrRF",
"title": "Lets subnet your home network!",
"url": "https://www.youtube.com/watch?v=mJ_5qeqGOaI&list=PLIhvC56v63IKrRHh3gvZZBAGvsvOhwrRF&index=6",
"type": "video"
},
{
"title": "Subnetting for hackers",
"url": "https://www.youtube.com/watch?v=o0dZFcIFIAw",
"type": "video"
}
]
@@ -1398,16 +1403,11 @@
},
"LKK1A5-xawA7yCIAWHS8P": {
"title": "SSL / TLS",
"description": "Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols used to provide security in internet communications. These protocols encrypt the data that is transmitted over the web, so anyone who tries to intercept packets will not be able to interpret the data. One difference that is important to know is that SSL is now deprecated due to security flaws, and most modern web browsers no longer support it. But TLS is still secure and widely supported, so preferably use TLS.\n\nLearn more from the following resources:",
"description": "Single Sign-On (SSO) is an authentication method that allows users to access multiple applications or systems with one set of login credentials. It enables users to log in once and gain access to various connected systems without re-entering credentials. SSO enhances user experience by reducing password fatigue, streamlines access management for IT departments, and can improve security by centralizing authentication controls. It typically uses protocols like SAML, OAuth, or OpenID Connect to securely share authentication information across different domains. While SSO offers convenience and can strengthen security when implemented correctly, it also presents a single point of failure if compromised, making robust security measures for the SSO system critical.\n\nLearn more from the following resources:",
"links": [
{
"title": "What is SSL? | SSL definition",
"url": "https://www.cloudflare.com/en-gb/learning/ssl/what-is-ssl/",
"type": "article"
},
{
"title": "TLS Basics",
"url": "https://www.internetsociety.org/deploy360/tls/basics/",
"title": "Whats the Difference Between SSL and TLS?",
"url": "https://aws.amazon.com/compare/the-difference-between-ssl-and-tls/",
"type": "article"
},
{
@@ -1805,7 +1805,7 @@
"type": "article"
},
{
"title": "What is LDAP",
"title": "",
"url": "https://www.youtube.com/watch?v=vy3e6ekuqqg",
"type": "video"
}

View File

@@ -1892,11 +1892,6 @@
"title": "winston Website",
"url": "https://github.com/winstonjs/winston",
"type": "opensource"
},
{
"title": "A Complete Guide to Winston Logging in Node.js",
"url": "https://betterstack.com/community/guides/logging/how-to-install-setup-and-use-winston-and-morgan-to-log-node-js-applications/",
"type": "article"
}
]
},

View File

@@ -28,7 +28,7 @@ roadmapIds:
In this project, you will style the HTML website structure you created previously [in a different project](/projects/basic-html-website). The focus will be on learning how to use CSS to create responsive layouts, apply color and typography, and enhance the overall design of your website.
Rough mockups of the website structure for mobile and desktop devices are given below. Do not worry about the design details i.e. colors backgrounds etc at this stage; we are not looking for a beautiful design, but a well-structured and responsive website. Just focus on making the layout same as the mockup and ensuring it looks good on different screen sizes.
Rough mockups of the website structure for mobile and desctop devices are given below. Do not worry about the design details i.e. colors backgrounds etc at this stage; we are not looking for a beautiful design, but a well-structured and responsive website. Just focus on making the layout same as the mockup and ensuring it looks good on different screen sizes.
[![Styled HTML Website](https://assets.roadmap.sh/guest/portfolio-template-xdhki.png)](https://assets.roadmap.sh/guest/portfolio-template-xdhki.png)
@@ -51,4 +51,4 @@ For bonus points, you can:
---
After completing this project, you will have a solid understanding of how to style a basic HTML website using CSS. You can move on to more advanced topics, such as CSS animations, transitions, and grid layouts, in future projects.
After completing this project, you will have a solid understanding of how to style a basic HTML website using CSS. You can move on to more advanced topics, such as CSS animations, transitions, and grid layouts, in future projects.

View File

@@ -5,4 +5,4 @@ Cookies play an instrumental role in the field of API (Application Programming I
Learn more from the following resources:
- [@article@What Are API Cookies? How to Send it?](https://apidog.com/articles/what-are-api-cookies/)
- [@article@Cookies - Mozilla](https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/cookies)
- [@article@Cookes - Mozilla](https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/cookies)

View File

@@ -5,4 +5,5 @@ Subnetting is a technique used in computer networking to divide a large network
Learn more from the following resources:
- [@article@Networking Basics: What is IPv4 Subnetting?](https://www.cbtnuggets.com/blog/technology/networking/networking-basics-what-is-ipv4-subnetting)
- [@video@Subnetting](https://www.youtube.com/playlist?list=PLIhvC56v63IKrRHh3gvZZBAGvsvOhwrRF)
- [@video@Lets subnet your home network!](https://www.youtube.com/watch?v=mJ_5qeqGOaI&list=PLIhvC56v63IKrRHh3gvZZBAGvsvOhwrRF&index=6)
- [@video@Subnetting for hackers](https://www.youtube.com/watch?v=o0dZFcIFIAw)

View File

@@ -5,4 +5,4 @@ LDAP (Lightweight Directory Access Protocol) is a standardized application proto
Learn more from the following resources:
- [@article@What Is LDAP & How Does It Work?](https://www.okta.com/uk/identity-101/what-is-ldap/)
- [@video@What is LDAP](https://www.youtube.com/watch?v=vy3e6ekuqqg)
- [@video@](https://www.youtube.com/watch?v=vy3e6ekuqqg)

View File

@@ -4754,7 +4754,7 @@
"oldId": "dOrnsftQKhJAmOmsSzDI0"
},
"zIndex": 999,
"width": 430,
"width": 402,
"height": 30,
"dragging": false,
"positionAbsolute": {

View File

@@ -1,12 +1,11 @@
# Basic Syntax
Understanding the basics is the key to a solid foundation. In this section, learn the basic terminologies, naming conventions, reserved keywords, expressions, statements, data structures, OOP, packages, etc.
Understanding the basics is the key to a solid foundation. In this section, learn the basic terminologies, naming conventions, reserved words, conditions, functions, data structures, OOP, packages, etc.
- To print output use --> System.out.println();
- To take input from user --> Scanner or BufferedReader class can be used
Visit the following resources to learn more:
- [@official@Java Language Basics](https://dev.java/learn/language-basics)
- [@video@Java - Basic Syntax](https://www.youtube.com/watch?v=81piDKqPxjQ)
- [@video@Java Tutorial for Beginners](https://www.youtube.com/watch?v=RRubcjpTkks)

View File

@@ -5,11 +5,11 @@ The pipe (`|`) is a powerful feature in Linux used to connect two or more comman
Here is a simple example of piping two commands, `ls` and `grep`, to list all the text files in the current directory:
```bash
ls | grep '\.txt$'
ls | grep .txt
```
In this example, `ls` lists the files in the current directory and `grep '\.txt$'` filters out any files that don't end with `.txt`. The pipe command, `|`, takes the output from `ls` and uses it as the input to `grep '\.txt$'`. The output of the entire command is the list of text files in the current directory.
In this example, `ls` lists the files in the current directory and `grep .txt` filters out any files that don't end with `.txt`. The pipe command, `|`, takes the output from `ls` and uses it as the input to `grep .txt`. The output of the entire command is the list of text files in the current directory.
Visit the following resources to learn more:
- [@article@Piping and Redirection](https://ryanstutorials.net/linuxtutorial/piping.php#piping)
- [@article@Piping and Redirection](https://ryanstutorials.net/linuxtutorial/piping.php#piping)

View File

@@ -5,4 +5,3 @@ winston is designed to be a simple and universal logging library with support fo
Visit the following resources to learn more:
- [@opensource@winston Website](https://github.com/winstonjs/winston)
- -[@article@A Complete Guide to Winston Logging in Node.js](https://betterstack.com/community/guides/logging/how-to-install-setup-and-use-winston-and-morgan-to-log-node-js-applications/)

View File

@@ -6,12 +6,6 @@ LLMs have the ability to achieve state-of-the-art performance in multiple Natura
As an example, OpenAI's GPT-3 is a prominent LLM that has gained significant attention due to its capability to generate high-quality text and perform a variety of language tasks with minimal fine-tuning.
Learn more from the following resources:
- [@roadmap.sh@Introduction to LLMs](https://roadmap.sh/guides/introduction-to-llms)
- [@article@Large language model](https://en.wikipedia.org/wiki/Large_language_model)
- [@video@Intro to Large Language Models](https://www.youtube.com/watch?v=zjkBMFhNj_g)
- [@video@Large Language Model Operations (LLMOps) Explained](https://www.youtube.com/watch?v=cvPEiPt7HXo)
- [@video@How Large Language Models Work](https://youtu.be/5sLYAQS9sWQ)
- [@feed@Explore top posts about LLM](https://app.daily.dev/tags/llm?ref=roadmapsh)

View File

@@ -17,9 +17,3 @@ Instruction Tuned LLMs = Base LLMs + Further Tuning + RLHF
```
To build an Instruction Tuned LLM, a Base LLM is taken and is further trained using a large dataset covering sample "Instructions" and how the model should perform as a result of those instructions. The model is then fine-tuned using a technique called "Reinforcement Learning with Human Feedback" (RLHF) which allows the model to learn from human feedback and improve its performance over time.
Learn more from the following resources:
- [@article@Understanding AI Models: Base Language Learning Models vs. Instruction Tuned Language Learning Models - Olivier Mills](https://oliviermills.com/articles/understanding-ai-models-base-language-learning-models-vs-instruction-tuned-language-learning-models)
- [@video@Why Are There So Many Foundation Models?](https://www.youtube.com/watch?v=QPQy7jUpmyA)
- [@video@How to Pick the Right AI Foundation Model](https://www.youtube.com/watch?v=pePAAGfh-IU)

View File

@@ -9,9 +9,3 @@ On a high level, training an LLM model involves three steps i.e. data collection
- **Evaluation**: The final step is to evaluate the performance of the model to see how well it performs on various tasks such as question answering, summarization, translation etc.
The output from the training Pipeline is an LLM model which is simply the parameters or weights which capture the knowledge learned during the training process. These parameters or weights are typically serialized and stored in a file, which can then be loaded into any application that requires language processing capabilities e.g. text generation, question answering, language processing etc.
Learn more from the following resources:
- [@article@What is LLM & How to Build Your Own Large Language Models?](https://www.signitysolutions.com/blog/how-to-build-large-language-models)
- [@guides@Large language model](https://en.wikipedia.org/wiki/Large_language_model)
- [@video@Five Steps to Create a New AI Model](https://youtu.be/jcgaNrC4ElU)

View File

@@ -26,9 +26,3 @@ Write me an introductory guide about Prompt Engineering.
```
However, using plain text as prompts i.e. without using any best practices you may not be able to fully utilise the power of LLMs. That's where "Prompt Engineering" or knowing the best practices for writing better prompts and getting the most out of LLMs comes in.
- [@guides@Basics of Prompting | Prompt Engineering Guide](https://www.promptingguide.ai/introduction/basics)
- [@article@Prompting Basics](https://learnprompting.org/docs/basics/prompting)
- [@offical@Prompt engineering - OpenAI API](https://platform.openai.com/docs/guides/prompt-engineering)
- [@offical@Prompt engineering overview - Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview)
- [@course@Introduction to Prompt Engineering (Playlist)](https://youtube.com/playlist?list=PLYio3GBcDKsPP2_zuxEp8eCulgFjI5a3g&si=n3Ot-tFECp4axL8L)

View File

@@ -24,7 +24,4 @@ Prompts can help reduce inaccuracies and ambiguities in the AI's responses. By p
In conclusion, the need for prompting stems from its role in guiding AI model behavior, improving text quality and relevance, eliciting a specific output, aligning AI and human intent, and reducing inaccuracies and ambiguity in generated content. By understanding and mastering the art of prompting, users can unlock the true potential of AI language models.
- [@article@Prompting Basics](https://learnprompting.org/docs/basics/prompting)
- [@video@AI prompt engineering: A deep dive](https://youtu.be/T9aRN5JkmL8?si=3uW2BQuNHLcHjqTv)
- [@video@What is Prompt Tuning?](https://www.youtube.com/watch?v=yu27PWzJI_Y)
- [@guides@What is Prompt Engineering? A Detailed Guide For 2024](https://www.datacamp.com/blog/what-is-prompt-engineering-the-future-of-ai-communication)
- [@article@Prompting Basics](https://learnprompting.org/docs/basics/prompting)

View File

@@ -24,5 +24,4 @@ Hello, how are you?
But it's one of the best practices to be clear and use delimiters to separate the content in prompt from the instructions. You will learn more about it in the "Best Practices" nodes of the roadmap.
- [@article@Basic Prompting - Learn Prompting](https://learnprompting.org/docs/basics/intro)
- [@guides@Basics of Prompting - Prompt Engineering Guide](https://www.promptingguide.ai/introduction/basics)
- [@article@Basic Prompting](https://learnprompting.org/docs/basics/intro)

View File

@@ -14,11 +14,4 @@ When constructing a prompt, it's essential to understand the different parts tha
*Example: To further clarify the email-writing task, you might provide a brief example of the tone or structure you want, such as "Dear [Recipient], Thank you for taking the time to meet with me yesterday. We discussed [topic 1], [topic 2], and [topic 3]. I look forward to our future collaboration."*
By considering these three parts of a prompt — context, instruction, and example — you can create effective and well-formed prompts that produce targeted and accurate responses from the AI.
Learn more from the following resources:
- [@guides@Understanding Prompt Structure: Key Elements of a Prompt](https://learnprompting.org/docs/basics/formalizing)
- [@guides@Elements of a Prompt | Prompt Engineering Guide](https://www.promptingguide.ai/introduction/elements)
- [@video@Building with Anthropic Claude: Prompt Workshop ](https://youtu.be/hkhDdcM5V94?si=mW-GLSvi7TMg33wx)
- [@guides@General Tips for Designing Prompts | Prompt Engineering Guide](https://www.promptingguide.ai/introduction/tips)
By considering these three parts of a prompt — context, instruction, and example — you can create effective and well-formed prompts that produce targeted and accurate responses from the AI.

View File

@@ -1,8 +1,3 @@
# Prompts
At this point, you probably already know what the Prompts are and the importance of writing good prompts. This section covers the best practices for writing good prompts as well as covering some of the commonly used prompting techniques.
Learn more from the following resources:
- [@guides@Prompt Engineering Guide: The Ultimate Guide to Generative AI](https://learnprompting.org/docs/introduction)
- [@guides@Optimizing Prompts | Prompt Engineering Guide](https://www.promptingguide.ai/guides/optimizing-prompts)
At this point, you probably already know what the Prompts are and the importance of writing good prompts. This section covers the best practices for writing good prompts as well as covering some of the commonly used prompting techniques.

View File

@@ -14,9 +14,4 @@ Role prompting is a technique used in prompt engineering to encourage the AI to
Imagine you want to explore solutions to an environmental problem. You can use role prompting to elicit diverse perspectives and insights. Here's an example prompt with role prompting:
_As a climate scientist, how would you approach the problem of deforestation to minimize its impact on the environment?_
Learn more from the following resources:
- [@article@Assigning Roles to Chatbots](https://learnprompting.org/docs/basics/roles)
- [@article@Revisiting Roles in Prompting](https://learnprompting.org/docs/intermediate/revisiting_roles)
_As a climate scientist, how would you approach the problem of deforestation to minimize its impact on the environment?_

View File

@@ -29,10 +29,4 @@ The book was hard to put down. - {sentiment_label}
This prompt structure assists the model in grasping the sentiment analysis task and increases the likelihood of getting the correct output (i.e., "Positive").
Remember to experiment with the number of examples and their content to find the optimal balance for your specific task. Additionally, you can use inline instructions to guide the model further, such as asking it to classify the sentiment of a given sentence.
Learn more from the following resources:
- [@article@Few-Shot Prompting | Prompt Engineering Guide](https://www.promptingguide.ai/techniques/fewshot)
- [@article@Shot-Based Prompting: Zero-Shot, One-Shot, and Few-Shot Prompting](https://learnprompting.org/docs/basics/few_shot)
- [@guides@Introduction to Advanced Few-Shot Prompting Techniques](https://learnprompting.org/docs/advanced/few_shot/introduction)
Remember to experiment with the number of examples and their content to find the optimal balance for your specific task. Additionally, you can use inline instructions to guide the model further, such as asking it to classify the sentiment of a given sentence.

View File

@@ -21,10 +21,4 @@ We can approach the topic through a series of connected prompts:
- "Now, consider the challenges of working from home. Identify three potential disadvantages or negative effects."
- "For each of these challenges, discuss how individuals and organizations can mitigate or address them."
By employing the Chain of Thought technique, we have directed the model to provide a thorough and systematic analysis of the subject in question, ultimately resulting in a more meaningful and accurate response.
Learn more from the following resources:
- [@article@Chain-of-Thought Prompting | Prompt Engineering Guide](https://www.promptingguide.ai/techniques/cot)
- [@article@Chain-of-Thought Prompting](https://learnprompting.org/docs/intermediate/chain_of_thought)
- [@guides@Introduction to Thought Generation Prompting Techniques for LLMs](https://learnprompting.org/docs/advanced/thought_generation/introduction)
By employing the Chain of Thought technique, we have directed the model to provide a thorough and systematic analysis of the subject in question, ultimately resulting in a more meaningful and accurate response.

View File

@@ -17,11 +17,4 @@ Suppose you want the model to explain how a solar panel works. A zeroshot chain
Please explain the process of how a solar panel works, starting with sunlight hitting the panel's surface and ending with electricity being produced. Structure your response as a step-by-step chain of thought, taking care to clarify how each step leads to the next.
```
By designing prompts that explicitly request step-by-step reasoning, the zeroshot chain of thought technique can lead to more comprehensive and insightful answers that go beyond simple factual statements.
Learn more from the following resources:
- [@article@Zero-Shot Prompting | Prompt Engineering Guide](https://www.promptingguide.ai/techniques/zeroshot)
- [@article@Zero-Shot CoT Prompting: Improving AI with Step-by-Step Reasoning](https://learnprompting.org/docs/intermediate/zero_shot_cot)
- [@article@Shot-Based Prompting: Zero-Shot, One-Shot, and Few-Shot Prompting](https://learnprompting.org/docs/basics/few_shot)
- [@guides@Introduction to Advanced Zero-Shot Prompting Techniques](https://learnprompting.org/docs/advanced/zero_shot/introduction)
By designing prompts that explicitly request step-by-step reasoning, the zeroshot chain of thought technique can lead to more comprehensive and insightful answers that go beyond simple factual statements.

View File

@@ -10,43 +10,30 @@ Let's ask a slightly complicated customer service question:
```
CUSTOMER INQUIRY:
I just bought a T-shirt from your Arnold collection on March 1st.
I saw that it was on discount, so bought a shirt that was originall $30, and got 40% off.
I saw that you have a new discount for shirts at 50%.
I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
I just bought a T-shirt from your Arnold collection on March 1st. I saw that it was on discount, so bought a shirt that was originall $30, and got 40% off. I saw that you have a new discount for shirts at 50%. I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
INSTRUCTIONS:
You are a customer service agent tasked with kindly responding to customer inquiries.
Returns are allowed within 30 days.
Today's date is March 29th.
There is currently a 50% discount on all shirts. Shirt prices range from $18-$100 at your store.
Do not make up any information about discount policies.
You are a customer service agent tasked with kindly responding to customer inquiries. Returns are allowed within 30 days. Today's date is March 29th. There is currently a 50% discount on all shirts. Shirt prices range from $18-$100 at your store. Do not make up any information about discount policies.CUSTOMER INQUIRY:
I just bought a T-shirt from your Arnold collection on March 1st. I saw that it was on discount, so bought a shirt that was originall $30, and got 40% off. I saw that you have a new discount for shirts at 50%. I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
INSTRUCTIONS:
You are a customer service agent tasked with kindly responding to customer inquiries. Returns are allowed within 30 days. Today's date is March 29th. There is currently a 50% discount on all shirts. Shirt prices range from $18-$100 at your store. Do not make up any information about discount policies.
```
Output from the model would be:
```
Thank you for your inquiry. Unfortunately, since you purchased the shirt on March 1st, it is outside of our 30-day return policy.
However, we do have a 50% discount on all shirts currently, so you could purchase two shirts with the same amount of store credit.
Prices for our shirts range from $18-$100. Please let us know if you have any other questions.
Thank you for your inquiry. Unfortunately, since you purchased the shirt on March 1st, it is outside of our 30-day return policy. However, we do have a 50% discount on all shirts currently, so you could purchase two shirts with the same amount of store credit. Prices for our shirts range from $18-$100. Please let us know if you have any other questions.
```
That failed (we are within the return time), so lets try to break it down into subproblems:
```
CUSTOMER INQUIRY:
I just bought a T-shirt from your Arnold collection on March 1st.
I saw that it was on discount, so bought a shirt that was originall $30, and got 40% off.
I saw that you have a new discount for shirts at 50%. I'm wondering if
I can return the shirt and have enough store credit to buy two of your shirts?
I just bought a T-shirt from your Arnold collection on March 1st. I saw that it was on discount, so bought a shirt that was originall $30, and got 40% off. I saw that you have a new discount for shirts at 50%. I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
INSTRUCTIONS:
You are a customer service agent tasked with kindly responding to customer inquiries.
Returns are allowed within 30 days.
Today's date is March 29th.
There is currently a 50% discount on all shirts.
Shirt prices range from $18-$100 at your store.
Do not make up any information about discount policies.
You are a customer service agent tasked with kindly responding to customer inquiries. Returns are allowed within 30 days. Today's date is March 29th. There is currently a 50% discount on all shirts. Shirt prices range from $18-$100 at your store. Do not make up any information about discount policies.
What subproblems must be solved before answering the inquiry?
```
@@ -63,34 +50,19 @@ Let's try to solve the first subproblem:
```
CUSTOMER INQUIRY:
I just bought a T-shirt from your Arnold collection on March 1st.
I saw that it was on discount, so bought a shirt that was originally $30, and got 40% off.
I saw that you have a new discount for shirts at 50%.
I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
I just bought a T-shirt from your Arnold collection on March 1st. I saw that it was on discount, so bought a shirt that was originally $30, and got 40% off. I saw that you have a new discount for shirts at 50%. I'm wondering if I can return the shirt and have enough store credit to buy two of your shirts?
INSTRUCTIONS:
You are a customer service agent tasked with kindly responding to customer inquiries.
Returns are allowed within 30 days.
Today's date is March 29th.
There is currently a 50% discount on all shirts.
Shirt prices range from $18-$100 at your store.
Do not make up any information about discount policies.
Determine if the customer is within the 30-day return window.
Let's go step by step.
You are a customer service agent tasked with kindly responding to customer inquiries. Returns are allowed within 30 days. Today's date is March 29th. There is currently a 50% discount on all shirts. Shirt prices range from $18-$100 at your store. Do not make up any information about discount policies.
Determine if the customer is within the 30-day return window. Let's go step by step.
```
Output from the model would be:
```
Yes, the customer is within the 30-day return window.
The customer purchased the shirt on March 1st, and today's date is March 29th, which is within the 30-day return window.
Yes, the customer is within the 30-day return window. The customer purchased the shirt on March 1st, and today's date is March 29th, which is within the 30-day return window.
Therefore, the customer is eligible to return the shirt and receive store credit for the purchase.
The customer can use the store credit to purchase two shirts at the current 50% discount.
Therefore, the customer is eligible to return the shirt and receive store credit for the purchase. The customer can use the store credit to purchase two shirts at the current 50% discount.
```
By just solving the first subproblem, we were able to solve the entire problem. If GPT-3 did not return an answer immediately, we could have solved the next subproblem and so on until it did return an answer. Note that we use Let's go step by step.. The addition of this phrase is not always necessary, but it helps for this example.
Learn more from the following resources:
- [@article@Least-to-Most Prompting](https://learnprompting.org/docs/intermediate/least_to_most)
By just solving the first subproblem, we were able to solve the entire problem. If GPT-3 did not return an answer immediately, we could have solved the next subproblem and so on until it did return an answer. Note that we use Let's go step by step.. The addition of this phrase is not always necessary, but it helps for this example.

View File

@@ -27,8 +27,3 @@ Or you could also combine these prompts into a single prompt as follows:
```
I am writing a guide about SEO. Take the 10 key topics about SEO and write a detailed introduction to each.
```
Learn more from the following resources:
- [@article@Generated Knowledge in Prompts: Boosting AI Accuracy and Reliability](https://learnprompting.org/docs/intermediate/generated_knowledge#dual-prompt-approach)
- [@article@Recitation-augmented and Dual Prompting](https://aimlapi.com/academy-articles/recitation-augmented-and-dual-prompting)

View File

@@ -1,11 +1,3 @@
# Combining Techniques
All the techniques we've covered so far are useful on their own, but they're even more powerful when combined. For example, you can combine "Role Prompting" and any other prompting technique e.g. Chain of Thought, Dual Prompt, etc. to get more specific responses.
Learn more from the following resources:
- [@article@Combining Prompting Techniques: Enhance AI Outputs](https://learnprompting.org/docs/basics/combining_techniques)
- [@course@Combining prompting techniques - Hyperskill](https://hyperskill.org/learn/step/47989)
- [@guides@The Ultimate Guide to LLM Prompting, Fine-tuning, and Data Management ](https://medium.com/@subhraj07/the-ultimate-guide-to-llm-prompting-fine-tuning-and-data-management-933bbd2d05f4)
- [@video@4 Methods of Prompt Engineering - IBM](https://www.youtube.com/watch?v=1c9iyoVIwDs)
All the techniques we've covered so far are useful on their own, but they're even more powerful when combined. For example, you can combine "Role Prompting" and any other prompting technique e.g. Chain of Thought, Dual Prompt, etc. to get more specific responses.

View File

@@ -1,26 +1,18 @@
# Structured Data
Structured data is a powerful tool for organizing information effectively, particularly in real-world applications like e-commerce, mobile, and search. It helps convert user inputs into structured formats such as JSON, Markdown tables, HTML, and YAML, enhancing data management and retrieval. Recent improvements in this field include Google's expanded product information types, the increasing use of AI to automate and enhance structured data processes, the continued preference for JSON-LD as a structured data format, and enhanced mobile experiences through rich media and detailed information in search results.
Structured data helps in organizing information. It is especially useful in applications like e-commerce where you need to convert user input into a structured format.
## Examples
### Example 1
### Example: JSON Object
Consider a user searching for a "Birthday gift for my 18 months old daughter." Using prompts, you can generate structured data in various formats.
JSON Example:
```json
Print a JSON object containing:
- `gender` ("male", "female"),
- `occasion` (one of "party", "birthday", "anniversary"),
- `age_years` (numeric value)
from the text delimited by triple quotes:
"""Birthday gift for my 18 months old daughter"""
Let's say a user searches for `Birthday gift for my 18 months old daughter`. You can use a prompt to generate a JSON object from this input:
```
Print a JSON object containing `gender` ("male", "female"), `occasion` (one of "party", "birthday", "anniversary"), `age_years` (numeric value) from the text delimited by triple quotes:
"""Birthday gift for my 18 months old daughter"""
```
**Output**
### Output
The model would generate the following JSON object:
```json
@@ -29,38 +21,25 @@ The model would generate the following JSON object:
"occasion": "birthday",
"age_years": 1.5
}
```
The following examples extend the original Print JSON prompt to convert one structured data into other formats, demonstrating another capability with structured data, data format interchange from a single source prompt.
### Example 2
### Example: Markdown Table
Consider a user input `Anniversary gift for my husband`. You can use a prompt to generate a JSON object from this input:
From JSON to Markdown table
```markdown
| Gender | Occasion | Age (years) |
|--------|----------|-------------|
| Female | Birthday | 1.5 |
```
Print a JSON object containing `gender` ("male", "female"), `occasion` (one of "party", "birthday", "anniversary"), `age_years` (numeric value) from the text delimited by triple quotes:
"""Anniversary gift for my husband"""
```
### Example: HTML Table
### Output
From JSON to HTML table
The model would generate the following JSON object:
<table>
<tr>
<th>Gender</th>
<th>Occasion</th>
<th>Age (years)</th>
</tr>
<tr>
<td>Female</td>
<td>Birthday</td>
<td>1.5</td>
</tr>
</table>
Learn more from the following resources:
- [@article@How to Use AI to Organize Data into Tables for Reports and Articles](https://learnprompting.org/docs/basic_applications/table_generation)
```json
{
"gender": "male",
"occasion": "anniversary",
"age_years": null
}
```

View File

@@ -1,10 +1,6 @@
# Inferring
Making use of LLMs to infer information from text, i.e. text analysis, is another common use case. A common use case is sentiment analysis, such as grouping e-commerce reviews by sentiment. Beyond this, LLMs have diverse real-world applications, including customer service automation, market research, fraud detection, compliance monitoring, speech-to-text transcription, and anomaly detection. These applications demonstrate the versatility of LLMs in extracting valuable insights from text data, automating processes, and enhancing decision-making across different sectors. By leveraging LLMs, businesses can improve operational efficiency, gain deeper customer insights, and tackle complex challenges in areas such as cybersecurity and regulatory compliance
### Example
Imagine you are running a famous e-commerce store with thousands of reviews. You want to group the reviews by the sentiment of the review. You can use LLMs to infer the sentiment of the review.
Making use of LLMs to infer information from text is another common use case. Imagine you are running a famous e-commerce store with thousands of reviews. You want to group the reviews by the sentiment of the review. You can use LLMs to infer the sentiment of the review.
Here are some prompts showing the use of inferrence:
@@ -25,9 +21,3 @@ Identify a list of emotions that the writer of the following review is expressin
```
Output: `happy, satisfied, pleased, content, enthusiastic`
Learn more from the following resources:
- [@article@Generating Data | Prompt Engineering Guide](https://www.promptingguide.ai/applications/generating)
- [@article@Every Day Prompt Engineering - Text Analysis](https://www.ikangai.com/every-day-prompt-engineering-part-5-text-analysis/)
- [@course@Prompt Engineering Course — Inferring, Transforming, and Expanding with ChatGPT ](https://medium.com/geekculture/prompt-engineering-course-openai-inferring-transforming-expanding-chatgpt-chatgpt4-e5f63132f422)

View File

@@ -26,9 +26,3 @@ We value your business and are committed to ensuring your complete satisfaction.
Best regards,
AI Customer Service
```
Learn more from the following resources:
- [@article@Write Emails Faster Using AI: Sick Days, Cold Outreach, and More](https://learnprompting.org/docs/basic_applications/writing_emails)
- [@guides@AI Tools for Business Writing](https://communicationprogram.wharton.upenn.edu/library/ai-tools/)
- [@article@Everyday Prompt Engineering -Emails](https://www.ikangai.com/everyday-prompt-engineering-part-one-emails/)

View File

@@ -1,6 +1,6 @@
# Coding Assistance
Large Language Models (LLMs) tailored for coding have become invaluable tools for developers, offering a wide range of functionalities to assist in coding tasks. These include debugging, code generation, reformatting and reviewing, translating between programming languages, refactoring code, program simulations, and generating unit tests. Recent advancements have expanded their capabilities to include code optimization, documentation generation, bug fixing, project management assistance, security analysis, and version control support. New features also allow users to interact with outputs directly, such as editing or refining code suggestions with or without additional prompts, like OpenAI's Canvas or Anthropic's Artefacts. These advancements make LLMs' capabilities available across the software development process, enhancing productivity and streamlining workflows across various stages of coding and project management.
You can use ChatGPT for debugging, code generation, reformatting, commenting, and more.
## Code Generation
@@ -26,7 +26,6 @@ noun = random.choice(nouns)
madlib = f"I {verb} {preposition} the {noun}."
print(madlib)
```
## Commenting and Reformatting Code
@@ -63,7 +62,6 @@ for i in range(num_points):
# Print a space for padding, followed by an asterisk at the appropriate position on the wave
print(' ' * (amplitude + y) + '*')
```
## Debugging
@@ -113,9 +111,7 @@ for i in numbers:
if j in numbers:
numbers.remove(j)
print(numbers)
```
Note that we asked the bot to "act like a senior developer" to optimize the script. You can also dictate that it have a certain area of expertise (e.g., sorting algorithms) or number of years of experience. Alternatively, if you have a script that seems overly complicated, you can ask ChatGPT to write that script "as a very junior developer."
## Translating Between Programming Languages
@@ -151,13 +147,8 @@ salary = float(input("Enter employee's salary: "))
ss_tax = salary * 0.062
print("Social Security tax is: $", round(ss_tax, 2))
```
Learn more from the following resources:
- [@article@LLM Coding Assistance](https://learnprompting.org/docs/basic_applications/coding_assistance)
- [@guides@Generating Code | Prompt Engineering Guide](https://www.promptingguide.ai/applications/coding)
- [@article@Prompt engineering 101 for developers](https://www.pluralsight.com/resources/blog/software-development/prompt-engineering-for-developers)
- [@guide@Prompt Engineering for Coding Tasks ](https://towardsdatascience.com/prompt-engineering-llms-coding-chatgpt-artificial-intelligence-c16620503e4e)
- [@article@How GenAI Transforms Software Development: 11 Use Cases](https://www.index.dev/blog/11-generative-ai-use-cases-software-development)

View File

@@ -1,6 +1,6 @@
# Study Buddy
LLMs have become powerful study & tutoring tools, offering various applications in education. They can explain complex terms, generate quizzes, and provide personalized learning experiences adapted to individual needs, including neurodivergent learning styles. Recent developments have expanded their capabilities to include multilingual support, automated grading and feedback, content creation for diverse educational materials, and improved accessibility for all students. These advancements enable LLMs to enhance comprehension, retention, and overall educational outcomes while allowing educators to focus on more complex tasks.
One of our favorite ways to use LLMs is as a study tool! In particular, it is useful for explaining confusing terms as well as quizzing you on test content.
## Explaining Terms
@@ -46,7 +46,4 @@ And the model will generate 5 questions for you to answer.
There are probably other ways to use LLMs to study. Many learning companies like Duolingo and Quizlet are implementing LLMs into their products.
Learn more from the following resources:
- [@article@LLMs as a Study Buddy](https://learnprompting.org/docs/basic_applications/study_tool)
-

View File

@@ -1,9 +1,4 @@
# Designing Chatbots
Designing chatbots and other interfaces for real-world usage cases remains a key focus in the tech industry. Large language models (LLMs) continue to enhance chatbot capabilities, allowing them to respond effectively to diverse user inputs with minimal effort and training. The use cases for new user interfaces is expanding beyond traditional chatbots like: voice interfaces, gesture-based interfaces, multimodal AI systems using various input types, AI agents performing complex tasks autonomously, extended reality (XR) combining AI with AR/VR for immersive experiences. These advancements highlight the evolving nature of user interfaces, driven by technological innovations that promise more engaging and efficient interactions.
Building chatbots to offer customer support, sales, or other services is a hot topic in the tech industry. LLMs make it possible to build chatbots that can respond to a wide variety of user inputs, and can be trained to respond to new inputs with minimal effort.
Learn more from the following resources:
- [@video@What is a Chatbot?](https://www.youtube.com/watch?v=o9-ObGgfpEk)
- [@video@Do Chatbots Need AI?](https://www.youtube.com/watch?v=93l-2Usc08U)
- [@video@Generative vs Rules-Based Chatbots](https://www.youtube.com/watch?v=lZjUS_8btEo)

View File

@@ -1,24 +1,11 @@
# Real World Usage Examples
LLMs are being used in an increasing number and variety of usages, applications and embedded approaches. These use cases are dynamic and ever evolving.
### Examples
Here are some examples of how LLMs are used in the real world, though not exhaustive:
LLMs are used in a variety of ways. Here are some examples of how LLMs are used in the real world.
- Natural Language Processing
- Information Retrieval
- Internet Search & Citations
- Question Answering
- Text Generation
- Text Summarization
- Learning Tools
- Chatbots and Assistants
- Agents & Automation
- Multimedia Content
Follow the resources and latest updates to learn more:
- [@article@Solving Everyday Tasks with GenAI: Emails, Contracts, and More](https://learnprompting.org/docs/basic_applications/introduction)
- [@article@LLM Applications & Guides | Prompt Engineering Guide](https://www.promptingguide.ai/applications)
- [@article@7 Large Language Model (LLM) Use Cases and Applications: How AI is Reshaping Industries ](https://collabnix.com/7-top-large-language-model-llm-use-cases-and-applications-how-ai-is-reshaping-industries/)
- Chatbots

View File

@@ -1,8 +1,5 @@
# Citing Sources
As advancements have been made in the ability of Large Language Models (LLMs) to cite sources — particularly through realtime API access, search-augmented generation and specialized training — significant limitations persist. LLMs continue to struggle with hallucinations, generating inaccurate or fictitious citation. Many LLM lack real-time API access, which hampers their ability to provide up-to-date information or are limited by their knowledge cut off dates. They sometimes cannot independently verify sources or fully grasp the contextual relevance of citations, raising concerns regarding plagiarism and intellectual property. To address these challenges, ongoing efforts focus on improving realtime retrieval (RAG) methods, enhancing training, and integrating human oversight to ensure accuracy in citations.
LLMs for the most part cannot accurately cite sources. This is because they do not have access to the Internet, and do not exactly remember where their information came from. They will frequently generate sources that look good, but are entirely inaccurate.
Learn more from the following resources:
- [@guides@Why Dont Large Language Models Share URL References in Their Responses](https://medium.com/@gcentulani/why-dont-large-language-models-share-url-references-in-their-responses-bf427e513861)
- [@article@Effective large language model adaptation for improved grounding](https://research.google/blog/effective-large-language-model-adaptation-for-improved-grounding/)
Strategies like search augmented LLMs (LLMs that can search the Internet and other sources) can often fix this problem though.

View File

@@ -1,11 +1,4 @@
# Bias
Bias in Large Language Models (LLMs) remains a significant challenge, with models often generating stereotypical or discriminatory responses despite advancements in mitigation techniques. These biases can manifest in various forms, including gender, racial, and cultural prejudices, potentially leading to underfitting or overfitting in model outputs. Recent studies have highlighted persistent biases in LLM-generated content, emphasizing the need for caution when deploying these models in consumer-facing applications or research settings. Efforts to address this issue include developing diverse training datasets, implementing regulatory frameworks, and creating new evaluation tools. However, the challenge remains substantial as LLMs continue to influence societal perceptions. Developers and users must be aware of these pitfalls to avoid reputational damage and unintended negative impacts on individuals or communities.
LLMs are often biased towards generating stereotypical responses. Even with safe guards in place, they will sometimes say sexist/racist/homophobic things. Be careful when using LLMs in consumer-facing applications, and also be careful when using them in research (they can generate biased results).
Learn more from the following resources:
- [@guides@Biases in Prompts: Learn how to tackle them](https://mindfulengineer.ai/understanding-biases-in-prompts/)
- [@guides@Bias in AI: tackling the issues through regulations and standards](https://publicpolicy.ie/papers/bias-in-ai-tackling-the-issues-through-regulations-and-standards/)
- [@article@What Is AI Bias?](https://www.ibm.com/topics/ai-bias)
- [@article@What Is Algorithmic Bias?](https://www.ibm.com/think/topics/algorithmic-bias)
- [@article@AI Bias Examples](https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples)

View File

@@ -1,14 +1,27 @@
# Hallucinations
Large Language Model (LLM) hallucinations in 2024 can be broadly categorized into faithfulness and factuality issues. **Faithfulness hallucinations** occur when the model's output deviates from provided sources or context, including problems with source-reference divergence, context retrieval, dialogue history misinterpretation, and erroneous summarization. **Factuality hallucinations**, on the other hand, involve the generation of incorrect or unsupported information, encompassing factual inaccuracies, entity errors, overclaims, unverifiable statements, nonsensical responses, contradictions, and fabricated data.
LLMs will frequently generate falsehoods when asked a question that they do not know the answer to. Sometimes they will state that they do not know the answer, but much of the time they will confidently give a wrong answer.
These hallucinations stem from various causes such as training data issues, model limitations, prompt-related problems, and overfitting. To mitigate these challenges, strategies like Retrieval-Augmented Generation (RAG), improved training data, rigorous evaluation, clear user communication, advanced prompt engineering, model fine-tuning, output filtering, and multi-model approaches are being employed. As the field progresses, understanding and addressing these hallucination types remains crucial for enhancing the reliability and trustworthiness of LLM-generated content.
### Causes of Hallucinations
Learn more from the following resources:
There are several factors contributing to hallucinations in LMs:
- [@article@What are AI hallucinations?](https://www.ibm.com/topics/ai-hallucinations)
- [@article@Hallucination (artificial intelligence) - Wikipedia](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence))
- [@video@Why Large Language Models Hallucinate - IBM](https://www.youtube.com/watch?v=cfqtFvWOfg0)
- [@video@Risks of Large Language Models - IBM](https://www.youtube.com/watch?v=r4kButlDLUc)
- [@guides@Key Strategies to Minimize LLM Hallucinations](https://www.turing.com/resources/minimize-llm-hallucinations-strategy)
1. **Inherent limitations**: The training data for the LMs are massive, yet they still cannot contain the entire knowledge about the world. As a result, LMs have inherent limitations in handling certain facts or details, which leads to hallucinations in the generated text.
2. **Training data biases**: If the training data contains biases or errors, it may lead to hallucinations in the output as LMs learn from the data they've been exposed to.
3. **Token-based scoring**: The default behavior of many LMs, like GPT models, is to generate text based on token probabilities. Sometimes this can lead to high-probability tokens being selected even if it doesn't make sense with the given prompt.
### Mitigating Hallucinations
To reduce the occurrence of hallucinations in the generated text, consider the following strategies:
1. **Specify instructions**: Make the prompt more explicit with clear details and constraints. This can help guide the model to generate more accurate and coherent responses.
2. **Step-by-step approach**: Instead of asking the model to generate a complete response in one go, break down the task into smaller steps and iteratively generate the output. This can help in maintaining better control over the generated content.
3. **Model adjustments**: Tweak various parameters, such as `temperature` or `top_p`, to adjust the randomness and control of the generated text. Lower values will make the output more conservative, which can help reduce hallucinations.
4. **Validating and filtering**: Develop post-processing steps to validate and filter the generated text based on specific criteria or rules to minimize the prevalence of hallucinations in the output.
Remember that even with these strategies, it's impossible to completely eliminate hallucinations. However, being aware of their existence and employing methods to mitigate them can significantly improve the quality and reliability of LM-generated content.

View File

@@ -1,7 +1,3 @@
# Math
LLMs struggle with math. While they may have improved in solving simple math problems; they, however, coming up short when solving more complex math problems when minor semantic variation happens. This is particularly relevant in terms of mathematical reasoning. Despite advancements, they often fail at solving simple math problems and are unable to handle more complex ones effectively. Studies show that LLMs rely heavily on pattern recognition rather than genuine logical reasoning, leading to significant performance drops when faced with minor changes in problem wording or irrelevant information. This highlights a critical limitation in their reasoning capabilities.
Learn more from the following resources:
- [@article@Apple Says AIs Math Skills Fall Short](https://www.pymnts.com/artificial-intelligence-2/2024/apple-says-ais-math-skills-fall-short/)
LLMs are often bad at math. They have difficulty solving simple math problems, and they are often unable to solve more complex math problems.

View File

@@ -1,11 +1,13 @@
# Prompt Hacking
Prompt hacking is a form of adversarial prompting where language models are manipulated to generate outputs that violate safety guidelines or are off-topic. Common techniques include manipulating keywords, exploiting grammar and negations, and using leading questions. To combat this, developers implement safety mechanisms such as content filters, continual analysis, and carefully designed prompt templates. As language models become more integrated into digital infrastructure, concerns about prompt injection, data leakage, and potential misuse have grown. In response, evolving defense strategies like prompt shields, enhanced input validation, and fine-tuning for adversarial detection are being developed. Continuous monitoring and improvement of these safety measures are crucial to ensure responsible model behaviour and output alignment with desired guidelines.
Prompt hacking is a term used to describe a situation where a model, specifically a language model, is tricked or manipulated into generating outputs that violate safety guidelines or are off-topic. This could include content that's harmful, offensive, or not relevant to the prompt.
Learn more from the following resources:
There are a few common techniques employed by users to attempt "prompt hacking," such as:
- [@article@Prompt Hacking](https://learnprompting.org/docs/category/-prompt-hacking)
- [@article@LLM Security Guide - Understanding the Risks of Prompt Injections and Other Attacks on Large Language Models ](https://www.mlopsaudits.com/blog/llm-security-guide-understanding-the-risks-of-prompt-injections-and-other-attacks-on-large-language-models)
- [@guides@OWASP Top 10 for LLM & Generative AI Security](https://genai.owasp.org/llm-top-10/)
- [@video@Explained: The OWASP Top 10 for Large Language Model Applications](https://www.youtube.com/watch?v=cYuesqIKf9A)
- [@video@Artificial Intelligence: The new attack surface](https://www.youtube.com/watch?v=_9x-mAHGgC4)
1. **Manipulating keywords**: Users may introduce specific keywords or phrases that are linked to controversial, inappropriate, or harmful content in order to trick the model into generating unsafe outputs.
2. **Playing with grammar**: Users could purposely use poor grammar, spelling, or punctuation to confuse the model and elicit responses that might not be detected by safety mitigations.
3. **Asking leading questions**: Users can try to manipulate the model by asking highly biased or loaded questions, hoping to get a similar response from the model.
To counteract prompt hacking, it's essential for developers and researchers to build in safety mechanisms such as content filters and carefully designed prompt templates to prevent the model from generating harmful or unwanted outputs. Constant monitoring, analysis, and improvement to the safety mitigations in place can help ensure the model's output aligns with the desired guidelines and behaves responsibly.
Read more about prompt hacking here [Prompt Hacking](https://learnprompting.org/docs/category/-prompt-hacking).

View File

@@ -1,76 +1,27 @@
# Pitfalls of LLMs
LLMs are extremely powerful. There are many pitfalls, safety challenges and risks that you should be aware of when using them.
LLMs are extremely powerful, but they are by no means perfect. There are many pitfalls that you should be aware of when using them.
### Language Translation
### Model Guessing Your Intentions
There are several risks associated with LLMs in language translation.
Sometimes, LLMs might not fully comprehend the intent of your prompt and may generate generic or safe responses. To mitigate this, make your prompts more explicit or ask the model to think step-by-step before providing a final answer.
- Inaccurate translations
- Contextual misinterpretation
- Biased translations
- Deepfakes
- Privacy and data security
- Legal and regulatory compliance
### Sensitivity to Prompt Phrasing
### Text Generation
LLMs can be sensitive to the phrasing of your prompts, which might result in completely different or inconsistent responses. Ensure that your prompts are well-phrased and clear to minimize confusion.
Text generation is a powerful capability of LLMs but also introduces certain risks and challenges.
### Model Generating Plausible but Incorrect Answers
- Misinformation and fake news
- Bias amplification
- Offensive or inappropriate content
- Plagiarism and copyright infringement
- Lack of transparency
- Privacy breaches
In some cases, LLMs might generate answers that sound plausible but are actually incorrect. One way to deal with this is by adding a step for the model to verify the accuracy of its response or by prompting the model to provide evidence or a source for the given information.
### Question Answering
### Verbose or Overly Technical Responses
LLMs present several risks in the domain of question answering.
LLMs, especially larger ones, may generate responses that are unnecessarily verbose or overly technical. To avoid this, explicitly guide the model by making your prompt more specific, asking for a simpler response, or requesting a particular format.
- Hallucination
- Outdated information
- Bias
- Harmful answers
- Lack of contextual understanding
- Privacy and security concerns
- Lack of transparency and xxplainability
### LLMs Not Asking for Clarification
### Text summarization
When faced with an ambiguous prompt, LLMs might try to answer it without asking for clarification. To encourage the model to seek clarification, you can prepend your prompt with "If the question is unclear, please ask for clarification."
Text summarization is a powerful application of LLMs but also introduces certain risks and challenge
### Model Failure to Perform Multi-part Tasks
- Information loss
- Bias amplification
- Contextual misinterpretation
### Sentiment analysis
Sentiment analysis, the process of determining a piece of texts sentiment or emotional tone, is an application where LLMs are frequently employed.
- Biased sentiment analysis
- Cultural and contextual nuances
- Limited domain understanding
- Misinterpretation of negation and ambiguity
- Overgeneralization and lack of individual variation
### Code Assistance
Code assistance and generation is an area where LLMs have shown promising capabilities.
- Security vulnerabilities
- Performance and efficiency challenges
- Quality and reliability concerns
- Insufficient understanding of business or domain context
- Intellectual property concerns
Read more from [Risks of Large Language Models: A comprehensive guide](https://www.deepchecks.com/risks-of-large-language-models/).
Learn more from the following resources:
- [@video@Risks of Large Language Models - IBM](https://www.youtube.com/watch?v=r4kButlDLUc)
- [@article@Risks of Large Language Models: A comprehensive guide](https://www.deepchecks.com/risks-of-large-language-models/)
- [@article@Limitations of LLMs: Bias, Hallucinations, and More](https://learnprompting.org/docs/basics/pitfalls)
- [@guides@Risks & Misuses | Prompt Engineering Guide](https://www.promptingguide.ai/risks)
- [@guides@OWASP Top 10 for LLM & Generative AI Security](https://genai.owasp.org/llm-top-10/)
- [@guides@LLM Security Guide - Understanding the Risks of Prompt Injections and Other Attacks on Large Language Models ](https://www.mlopsaudits.com/blog/llm-security-guide-understanding-the-risks-of-prompt-injections-and-other-attacks-on-large-language-models)
Sometimes, LLMs might not complete all parts of a multi-part task or might only focus on one aspect of it. To avoid this, consider breaking the task into smaller, more manageable sub-tasks or ensure that each part of the task is clearly identified in the prompt.

View File

@@ -16,3 +16,4 @@ There are other supporting components that are used to manage the lifecycle of t
For more information, visit the following links:
- [@article@Spring Boot vs Spring MVC: How do They Compare?](https://www.javatpoint.com/spring-vs-spring-boot-vs-spring-mvc)
- [@article@Spring MVC components](https://www.javatpoint.com/spring-vs-spring-boot-vs-spring-mvc)