Compare commits


29 Commits

Author | SHA1 | Message | Date
Arik Chakma | 52018e7030 | fix: add layout in explore page | 2024-04-03 22:17:12 +06:00
Arik Chakma | 95d70c521a | feat: add footer for ai generated roadmaps | 2024-04-03 22:12:27 +06:00
Kamran Ahmed | 852622f5ac | Add link to data analyst roadmap | 2024-04-03 00:18:47 +01:00
Kamran Ahmed | 03173f6017 | Update dates for data analyst roadmap | 2024-04-02 22:28:44 +01:00
Kamran Ahmed | 5c1e3cae3f | Update content for data analyst roadmap | 2024-04-02 22:28:44 +01:00
Kamran Ahmed | a3f9c8e5e2 | Add content for data analyst | 2024-04-02 22:28:44 +01:00
Kamran Ahmed | 43f9412dd4 | Add data analyst roadmap | 2024-04-02 22:28:44 +01:00
Kamran Ahmed | 58e11afc94 | Add roadmap dirs for data analyst | 2024-04-02 22:28:44 +01:00
Kamran Ahmed | 854c39ec39 | Add content directory | 2024-04-02 22:28:44 +01:00
Kamran Ahmed | 907391b34c | Add analyst roadmap | 2024-04-02 22:28:44 +01:00
Deepak Mardi | 5a56b0f753 | Fixed typo (#5443) | 2024-04-01 15:22:11 +01:00
    * Updating the link for DevOps Roadmap to correct URL (https://roadmap.sh/docker) previously set to (https://roadmap.sh/best-practices)
    * update
    * update
    * Fixed typo in Lazy Eager Explicit Loading of ASP.NET Core Roadmap
Ma'ruf | 5fb4d3e2dc | fix: focus disappears in navigation (#5355) | 2024-04-01 15:20:16 +01:00
Deepak Mardi | 0a89057823 | fix: invalid link in Java Roadmap (#5425) | 2024-03-31 03:14:26 +06:00
Kamran Ahmed | add0db5514 | Update footer text | 2024-03-30 15:51:57 +00:00
Kamran Ahmed | 2eec44303c | Fix broken link | 2024-03-30 05:07:48 +00:00
Kamran Ahmed | 84fc0e40b5 | Add get started page path | 2024-03-30 03:16:53 +00:00
Kamran Ahmed | d3ee18e2a1 | Reset scroll on active group change | 2024-03-29 21:18:31 +00:00
Kamran Ahmed | 4988ba0604 | Fix filter button autofocuses | 2024-03-29 21:09:42 +00:00
Kamran Ahmed | fd48e980cd | Activate buttons in hero section | 2024-03-29 21:00:37 +00:00
Kamran Ahmed | dd728b526e | Add roadmaps page | 2024-03-29 20:41:21 +00:00
Kamran Ahmed | f8f29d2a17 | Fix flickering issue | 2024-03-29 19:35:41 +00:00
Kamran Ahmed | 5bdfe48cad | Restructure roadmaps | 2024-03-29 18:02:56 +00:00
Kamran Ahmed | e4e0f9ac98 | Filtering of roadmaps | 2024-03-29 17:17:58 +00:00
Kamran Ahmed | 40c8bfc312 | Add roadmaps page | 2024-03-29 17:04:57 +00:00
Kamran Ahmed | 3479201e20 | Roadmaps | 2024-03-29 15:54:38 +00:00
Kamran Ahmed | 1092528de0 | Desktop screen UI | 2024-03-29 05:00:37 +00:00
VinayPrabhakar-gamer | 45086a6314 | fix: OSI typo (#5414) | 2024-03-28 03:18:08 +06:00
    OCI text corrected to OSI under Application Layer section
Damian Dorosz | b6798ea3a2 | fix: update broken link | 2024-03-28 03:17:00 +06:00
Arik Chakma | 1cb29d0fc7 | feat: implement checklist (#5418) | 2024-03-27 20:56:05 +00:00
148 changed files with 9461 additions and 101 deletions

(Binary files not shown in the diff. Two added image previews: 293 KiB and 1.5 MiB.)
View File

@@ -30,6 +30,8 @@ Roadmaps are now interactive, you can click the nodes to read more about the top
Here is the list of available roadmaps with more being actively worked upon.
> Have a look at the [get started](https://roadmap.sh/get-started) page that might help you pick up a path.
- [Frontend Roadmap](https://roadmap.sh/frontend) / [Frontend Beginner Roadmap](https://roadmap.sh/frontend?r=frontend-beginner)
- [Backend Roadmap](https://roadmap.sh/backend) / [Backend Beginner Roadmap](https://roadmap.sh/backend?r=backend-beginner)
- [DevOps Roadmap](https://roadmap.sh/devops) / [DevOps Beginner Roadmap](https://roadmap.sh/devops?r=devops-beginner)
@@ -37,6 +39,7 @@ Here is the list of available roadmaps with more being actively worked upon.
- [Computer Science Roadmap](https://roadmap.sh/computer-science)
- [Data Structures and Algorithms Roadmap](https://roadmap.sh/datastructures-and-algorithms)
- [AI and Data Scientist Roadmap](https://roadmap.sh/ai-data-scientist)
- [Data Analyst Roadmap](https://roadmap.sh/data-analyst)
- [MLOps Roadmap](https://roadmap.sh/mlops)
- [QA Roadmap](https://roadmap.sh/qa)
- [Python Roadmap](https://roadmap.sh/python)

View File

@@ -48,6 +48,11 @@ function getFilesInFolder(folderPath, fileList = {}) {
return fileList;
}
/**
* Write the topic content for the given topic
* @param currTopicUrl
* @returns {Promise<string>}
*/
function writeTopicContent(currTopicUrl) {
const [parentTopic, childTopic] = currTopicUrl
.replace(/^\d+-/g, '/')
@@ -59,9 +64,18 @@ function writeTopicContent(currTopicUrl) {
const roadmapTitle = roadmapId.replace(/-/g, ' ');
let prompt = `I am reading a guide about "${roadmapTitle}". I am on the topic "${parentTopic}". I want to know more about "${childTopic}". Write me a brief paragraph for that. Your output should be strictly markdown. Do not include anything other than the description in your output. I already know the benefits of each so do not add benefits in the output.`;
let prompt = `I will give you a topic and you need to write a brief introduction for that with regards to "${roadmapTitle}". Your format should be as follows and be in strictly markdown format:
# (Put a heading for the topic)
(Write me a brief introduction for the topic with regards to "${roadmapTitle}")
`;
if (!childTopic) {
prompt = `I am reading a guide about "${roadmapTitle}". I am on the topic "${parentTopic}". I want to know more about "${parentTopic}". Write me a brief paragraph for that. Your output should be strictly markdown. Do not include anything other than the description in your output. I already know the benefits of each so do not add benefits in the output.`;
prompt += `First topic is: ${parentTopic}`;
} else {
prompt += `First topic is: ${childTopic} under ${parentTopic}`;
}
console.log(`Generating '${childTopic || parentTopic}'...`);
@@ -123,10 +137,9 @@ async function writeFileForGroup(group, topicUrlToPathMapping) {
}
const topicContent = await writeTopicContent(currTopicUrl);
newFileContent += `\n\n${topicContent}`;
console.log(`Writing ${topicId}..`);
fs.writeFileSync(contentFilePath, newFileContent, 'utf8');
fs.writeFileSync(contentFilePath, topicContent, 'utf8');
// console.log(currentFileContent);
// console.log(currTopicUrl);
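For context, the refactored prompt assembly in this hunk boils down to roughly the following. This is a sketch, not the script itself: the function name `buildPrompt` is made up here, and the template text is copied from the diff above.

```typescript
// Sketch of the refactored prompt assembly (function name is an
// assumption; template text is taken from the diff above).
function buildPrompt(
  roadmapTitle: string,
  parentTopic: string,
  childTopic?: string,
): string {
  // One reusable template replaces the two per-topic prompts that
  // the old code selected between.
  let prompt = `I will give you a topic and you need to write a brief introduction for that with regards to "${roadmapTitle}". Your format should be as follows and be in strictly markdown format:

# (Put a heading for the topic)

(Write me a brief introduction for the topic with regards to "${roadmapTitle}")
`;

  // Only the trailing topic line differs between a standalone parent
  // node and a child node nested under a parent.
  if (!childTopic) {
    prompt += `First topic is: ${parentTopic}`;
  } else {
    prompt += `First topic is: ${childTopic} under ${parentTopic}`;
  }

  return prompt;
}
```

Note that the second change in the hunk follows from this: since each topic now gets a full document with its own heading, `writeFileForGroup` writes `topicContent` directly instead of appending it to the previous file content.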

View File

@@ -1,17 +1,11 @@
import { useEffect, useState } from 'react';
import { getUrlParams } from '../../lib/browser';
import {
type AppError,
type FetchError,
httpGet,
httpPost,
} from '../../lib/http';
import { type AppError, type FetchError, httpGet } from '../../lib/http';
import { RoadmapHeader } from './RoadmapHeader';
import { TopicDetail } from '../TopicDetail/TopicDetail';
import type { RoadmapDocument } from './CreateRoadmap/CreateRoadmapModal';
import { currentRoadmap } from '../../stores/roadmap';
import { RestrictedPage } from './RestrictedPage';
import { isLoggedIn } from '../../lib/jwt';
import { FlowRoadmapRenderer } from './FlowRoadmapRenderer';
export const allowedLinkTypes = [

View File

@@ -125,6 +125,32 @@ export function FlowRoadmapRenderer(props: FlowRoadmapRendererProps) {
}
}, []);
const handleChecklistCheckboxClick = useCallback(
(e: MouseEvent, checklistId: string) => {
const target = e?.currentTarget as HTMLDivElement;
if (!target) {
return;
}
const isCurrentStatusDone = target?.classList.contains('done');
updateTopicStatus(checklistId, isCurrentStatusDone ? 'pending' : 'done');
},
[],
);
const handleChecklistLabelClick = useCallback(
(e: MouseEvent, checklistId: string) => {
const target = e?.currentTarget as HTMLDivElement;
if (!target) {
return;
}
const isCurrentStatusDone = target?.classList.contains('done');
updateTopicStatus(checklistId, isCurrentStatusDone ? 'pending' : 'done');
},
[],
);
return (
<>
{hideRenderer && (
@@ -162,6 +188,8 @@ export function FlowRoadmapRenderer(props: FlowRoadmapRendererProps) {
onTopicAltClick={handleTopicAltClick}
onButtonNodeClick={handleLinkClick}
onLinkClick={handleLinkClick}
onChecklistCheckboxClick={handleChecklistCheckboxClick}
onChecklistLableClick={handleChecklistLabelClick}
fontFamily="Balsamiq Sans"
fontURL="/fonts/balsamiq.woff2"
/>
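The two checklist callbacks added above share an identical body; the shared piece is just the status toggle, which can be sketched as a standalone function (the `TopicStatus` type name here is an assumption, inferred from the `'pending'`/`'done'` strings in the diff):

```typescript
// Hedged sketch of the toggle logic shared by both checklist
// callbacks above. 'pending' | 'done' are the values passed to
// updateTopicStatus in the diff.
type TopicStatus = 'pending' | 'done';

function nextChecklistStatus(isCurrentStatusDone: boolean): TopicStatus {
  // Clicking a 'done' item reverts it to 'pending', and vice versa.
  return isCurrentStatusDone ? 'pending' : 'done';
}
```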

View File

@@ -51,7 +51,7 @@ import Icon from './AstroIcon.astro';
href='https://kamranahmed.info'
target='_blank'
>
<span class='hidden sm:inline'>@kamrify</span>
<span class='hidden sm:inline'>Kamran</span>
<span class='inline sm:hidden'>Kamran Ahmed</span>
</a>
</p>

View File

@@ -14,6 +14,7 @@ import type {
import { pageProgressMessage } from '../../stores/page';
import { showLoginPopup } from '../../lib/popup';
import { replaceChildren } from '../../lib/dom.ts';
import {setUrlParams} from "../../lib/browser.ts";
export class Renderer {
resourceId: string;
@@ -141,19 +142,8 @@ export class Renderer {
const newJsonFileSlug = newJsonUrl.split('/').pop()?.replace('.json', '');
// Update the URL and attach the new roadmap type
if (window?.history?.pushState) {
const url = new URL(window.location.href);
const type = this.resourceType[0]; // r for roadmap, b for best-practices
url.searchParams.delete(type);
if (newJsonFileSlug !== this.resourceId) {
url.searchParams.set(type, newJsonFileSlug!);
}
window.history.pushState(null, '', url.toString());
}
const type = this.resourceType[0]; // r for roadmap, b for best-practices
setUrlParams({ [type]: newJsonFileSlug! })
this.jsonToSvg(newJsonUrl)?.then(() => {});
}
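The hunk above replaces inline `URLSearchParams` plus `history.pushState` plumbing with a shared `setUrlParams` helper from `lib/browser`. As a rough sketch of the equivalent URL manipulation, written as a pure function so it is easy to reason about (the name `withUrlParams` is hypothetical; the real helper presumably also pushes the result onto the history stack):

```typescript
// Hypothetical stand-in for the extracted helper: apply query params
// to a URL string and return the new URL, without touching history.
function withUrlParams(
  href: string,
  params: Record<string, string>,
): string {
  const url = new URL(href);
  Object.keys(params).forEach((key) => {
    url.searchParams.set(key, params[key]);
  });
  return url.toString();
}
```

For example, `withUrlParams('https://roadmap.sh/frontend', { r: 'frontend-beginner' })` yields the `?r=frontend-beginner` variant URL that the renderer previously built by hand.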

View File

@@ -113,7 +113,9 @@ export function GenerateRoadmap() {
const [roadmapTopicLimitUsed, setRoadmapTopicLimitUsed] = useState(0);
const [isConfiguring, setIsConfiguring] = useState(false);
const [openAPIKey, setOpenAPIKey] = useState<string | undefined>(getOpenAIKey());
const [openAPIKey, setOpenAPIKey] = useState<string | undefined>(
getOpenAIKey(),
);
const isKeyOnly = IS_KEY_ONLY_ROADMAP_GENERATION;
const isAuthenticatedUser = isLoggedIn();
@@ -658,7 +660,7 @@ export function GenerateRoadmap() {
</div>
<div
className={cn({
'relative mb-20 max-h-[800px] min-h-[800px] sm:max-h-[1000px] md:min-h-[1000px] lg:max-h-[1200px] lg:min-h-[1200px] overflow-hidden':
'relative mb-20 max-h-[800px] min-h-[800px] overflow-hidden sm:max-h-[1000px] md:min-h-[1000px] lg:max-h-[1200px] lg:min-h-[1200px]':
!isAuthenticatedUser,
})}
>
@@ -666,18 +668,18 @@ export function GenerateRoadmap() {
ref={roadmapContainerRef}
id="roadmap-container"
onClick={handleNodeClick}
className="relative px-4 py-5 [&>svg]:mx-auto [&>svg]:max-w-[1300px]"
className="relative min-h-[400px] px-4 py-5 [&>svg]:mx-auto [&>svg]:max-w-[1300px]"
/>
{!isAuthenticatedUser && (
<div className="absolute bottom-0 left-0 right-0">
<div className="h-80 w-full bg-gradient-to-t from-gray-100 to-transparent" />
<div className="bg-gray-100">
<div className="mx-auto px-5 max-w-[600px] flex-col items-center justify-center bg-gray-100 pt-px">
<div className="mx-auto max-w-[600px] flex-col items-center justify-center bg-gray-100 px-5 pt-px">
<div className="mt-8 text-center">
<h2 className="mb-0.5 sm:mb-3 text-xl sm:text-2xl font-medium">
<h2 className="mb-0.5 text-xl font-medium sm:mb-3 sm:text-2xl">
Sign up to View the full roadmap
</h2>
<p className="mb-6 text-sm sm:text-base text-gray-600 text-balance">
<p className="mb-6 text-balance text-sm text-gray-600 sm:text-base">
You must be logged in to view the complete roadmap
</p>
</div>

View File

@@ -45,7 +45,7 @@ export function RoadmapSearch(props: RoadmapSearchProps) {
const randomTerms = ['OAuth', 'APIs', 'UX Design', 'gRPC'];
return (
<div className="flex flex-grow flex-col items-center px-4 py-6 sm:px-6">
<div className="flex flex-grow flex-col items-center px-4 py-6 sm:px-6 md:my-24 lg:my-32">
{isConfiguring && (
<IncreaseRoadmapLimit
onClose={() => {
@@ -55,7 +55,7 @@ export function RoadmapSearch(props: RoadmapSearchProps) {
}}
/>
)}
<div className="flex flex-col gap-0 text-center sm:gap-2 md:mt-24 lg:mt-32">
<div className="flex flex-col gap-0 text-center sm:gap-2">
<h1 className="relative text-2xl font-medium sm:text-3xl">
<span className="hidden sm:inline">Generate roadmaps with AI</span>
<span className="inline sm:hidden">AI Roadmap Generator</span>

View File

@@ -18,7 +18,7 @@ export function RoleRoadmaps(props: RoleRoadmapsProps) {
<SectionBadge title={badge} />
</div>
<div className="my-4 sm:my-7 text-left">
<h2 className="mb-1 text-xl sm:text-3xl font-semibold">{title}</h2>
<h2 className="mb-1 text-balance text-xl sm:text-3xl font-semibold">{title}</h2>
<p className="text-sm sm:text-base text-gray-500">{description}</p>
<div className="mt-4 sm:mt-7 grid sm:grid-cols-2 md:grid-cols-3 gap-3">{children}</div>

View File

@@ -73,9 +73,9 @@ export function NavigationDropdown() {
</button>
<div
className={cn(
'absolute pointer-events-none left-0 top-full z-[999] mt-2 w-48 min-w-[320px] -translate-y-1 rounded-lg bg-slate-800 py-2 opacity-0 shadow-xl transition-all duration-100',
'absolute pointer-events-none invisible left-0 top-full z-[999] mt-2 w-48 min-w-[320px] -translate-y-1 rounded-lg bg-slate-800 py-2 opacity-0 shadow-xl transition-all duration-100',
{
'pointer-events-auto translate-y-2.5 opacity-100': isOpen,
'pointer-events-auto visible translate-y-2.5 opacity-100': isOpen,
},
)}
>

View File

@@ -21,7 +21,7 @@ const discordInfo = await getDiscordInfo();
</p>
<div
class='mt-5 grid grid-cols-1 justify-between gap-2 divide-x-0 sm:my-11 sm:grid-cols-3 sm:gap-0 sm:divide-x'
class='mt-5 grid grid-cols-1 justify-between gap-2 divide-x-0 sm:my-11 sm:grid-cols-3 sm:gap-0 sm:divide-x mb-4 sm:mb-0'
>
<OpenSourceStat text='GitHub Stars' value={starCount} />
<OpenSourceStat text='Registered Users' value={'850k'} />

View File

@@ -14,7 +14,7 @@ const isDiscordMembers = text.toLowerCase() === 'discord members';
---
<div
class='flex items-start sm:items-center justify-start flex-col sm:justify-center sm:gap-0 gap-2 sm:bg-transparent bg-gray-200 sm:rounded-none rounded-xl p-4'
class='flex items-start sm:items-center justify-start flex-col sm:justify-center sm:gap-0 gap-2 sm:bg-transparent sm:rounded-none rounded-xl p-0 sm:p-4 mb-3 sm:mb-0'
>
{
isGitHubStars && (

View File

@@ -0,0 +1,30 @@
import { cn } from '../../lib/classname.ts';
type CategoryFilterButtonProps = {
category: string;
selected: boolean;
onClick: () => void;
};
export function CategoryFilterButton(props: CategoryFilterButtonProps) {
const { category, selected, onClick } = props;
return (
<button
className={cn(
'border-b bg-gradient-to-l py-1.5 pr-3 text-center text-sm text-gray-500 hover:text-gray-900 sm:text-right',
{
'from-white font-semibold text-gray-900':
selected && category !== 'All Roadmaps',
'font-semibold text-gray-900':
selected && category === 'All Roadmaps',
'hover:from-white': category !== 'All Roadmaps',
},
)}
type="button"
onClick={onClick}
>
{category}
</button>
);
}
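The `cn` helper imported from `lib/classname.ts` above is used throughout these components to merge static classes with conditional `{ class: boolean }` maps. A minimal clsx-style sketch of that behavior, under the assumption that the project's version adds more (e.g. Tailwind class deduplication), might look like:

```typescript
// Minimal clsx-like sketch of the cn helper used above. Assumption:
// the real lib/classname likely wraps clsx/tailwind-merge; this
// version only handles strings and { class: boolean } maps.
type ClassValue = string | Record<string, boolean> | undefined | false;

function cn(...values: ClassValue[]): string {
  const classes: string[] = [];
  values.forEach((value) => {
    if (!value) return; // skip undefined / false / empty string
    if (typeof value === 'string') {
      classes.push(value);
    } else {
      Object.keys(value).forEach((name) => {
        if (value[name]) classes.push(name);
      });
    }
  });
  return classes.join(' ');
}
```

So `cn('border-b', { 'font-semibold': selected })` emits `font-semibold` only when the button is selected, which is exactly how the highlight states above are driven.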

View File

@@ -0,0 +1,510 @@
import { useEffect, useRef, useState } from 'react';
import { cn } from '../../lib/classname.ts';
import { Filter, X } from 'lucide-react';
import { CategoryFilterButton } from './CategoryFilterButton.tsx';
import { useOutsideClick } from '../../hooks/use-outside-click.ts';
const groupNames = [
'Absolute Beginners',
'Web Development',
'Languages / Platforms',
'Frameworks',
'Mobile Development',
'Databases',
'Computer Science',
'Machine Learning',
'Game Development',
'Design',
'DevOps',
'Blockchain',
'Cyber Security',
];
type AllowGroupNames = (typeof groupNames)[number];
type GroupType = {
group: AllowGroupNames;
roadmaps: {
title: string;
link: string;
type: 'role' | 'skill';
otherGroups?: AllowGroupNames[];
}[];
};
const groups: GroupType[] = [
{
group: 'Absolute Beginners',
roadmaps: [
{
title: 'Frontend Beginner',
link: '/frontend?r=frontend-beginner',
type: 'role',
otherGroups: ['Web Development'],
},
{
title: 'Backend Beginner',
link: '/backend?r=backend-beginner',
type: 'role',
otherGroups: ['Web Development'],
},
{
title: 'DevOps Beginner',
link: '/devops?r=devops-beginner',
type: 'role',
otherGroups: ['Web Development'],
},
],
},
{
group: 'Web Development',
roadmaps: [
{
title: 'Frontend',
link: '/frontend',
type: 'role',
otherGroups: ['Web Development'],
},
{
title: 'Backend',
link: '/backend',
type: 'role',
otherGroups: ['Web Development'],
},
{
title: 'Full Stack',
link: '/full-stack',
type: 'role',
otherGroups: ['Web Development', 'Absolute Beginners'],
},
{
title: 'QA',
link: '/qa',
type: 'role',
},
{
title: 'GraphQL',
link: '/graphql',
type: 'skill',
otherGroups: ['Web Development'],
},
],
},
{
group: 'Frameworks',
roadmaps: [
{
title: 'React',
link: '/react',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'Vue',
link: '/vue',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'Angular',
link: '/angular',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'Spring Boot',
link: '/spring-boot',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'ASP.NET Core',
link: '/aspnet-core',
type: 'skill',
otherGroups: ['Web Development'],
},
],
},
{
group: 'Languages / Platforms',
roadmaps: [
{
title: 'JavaScript',
link: '/javascript',
type: 'skill',
otherGroups: ['Web Development', 'DevOps', 'Mobile Development'],
},
{
title: 'TypeScript',
link: '/typescript',
type: 'skill',
otherGroups: ['Web Development', 'Mobile Development'],
},
{
title: 'Node.js',
link: '/nodejs',
type: 'skill',
otherGroups: ['Web Development', 'DevOps'],
},
{
title: 'C++',
link: '/cpp',
type: 'skill',
},
{
title: 'Go',
link: '/golang',
type: 'skill',
otherGroups: ['Web Development', 'DevOps'],
},
{
title: 'Rust',
link: '/rust',
type: 'skill',
otherGroups: ['Web Development', 'DevOps'],
},
{
title: 'Python',
link: '/python',
type: 'skill',
otherGroups: ['Web Development', 'DevOps'],
},
{
title: 'Java',
link: '/java',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'SQL',
link: '/sql',
type: 'skill',
otherGroups: ['Web Development', 'Databases', 'DevOps'],
},
],
},
{
group: 'DevOps',
roadmaps: [
{
title: 'DevOps',
link: '/devops',
type: 'role',
otherGroups: ['Web Development'],
},
{
title: 'Docker',
link: '/docker',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'Kubernetes',
link: '/kubernetes',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'AWS',
link: '/aws',
type: 'skill',
otherGroups: ['Web Development'],
},
],
},
{
group: 'Mobile Development',
roadmaps: [
{
title: 'Android',
link: '/android',
type: 'role',
},
{
title: 'React Native',
link: '/react-native',
type: 'role',
},
{
title: 'Flutter',
link: '/flutter',
type: 'role',
},
],
},
{
group: 'Databases',
roadmaps: [
{
title: 'PostgreSQL',
link: '/postgresql-dba',
type: 'role',
otherGroups: ['Web Development'],
},
{
title: 'MongoDB',
link: '/mongodb',
type: 'skill',
otherGroups: ['Web Development'],
},
],
},
{
group: 'Computer Science',
roadmaps: [
{
title: 'Computer Science',
link: '/computer-science',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'Data Structures',
link: '/datastructures-and-algorithms',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'System Design',
link: '/system-design',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'Design and Architecture',
link: '/software-design-architecture',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'Software Architect',
link: '/software-architect',
type: 'role',
otherGroups: ['Web Development'],
},
{
title: 'Code Review',
link: '/code-review',
type: 'skill',
otherGroups: ['Web Development'],
},
{
title: 'Technical Writer',
link: '/technical-writer',
type: 'role',
},
],
},
{
group: 'Machine Learning',
roadmaps: [
{
title: 'AI and Data Scientist',
link: '/ai-data-scientist',
type: 'role',
},
{
title: 'Data Analyst',
link: '/data-analyst',
type: 'role',
},
{
title: 'MLOps',
link: '/mlops',
type: 'role',
},
{
title: 'Prompt Engineering',
link: '/prompt-engineering',
type: 'skill',
},
],
},
{
group: 'Game Development',
roadmaps: [
{
title: 'Client Side Game Dev.',
link: '/game-developer',
type: 'role',
},
{
title: 'Server Side Game Dev.',
link: '/server-side-game-developer',
type: 'role',
},
],
},
{
group: 'Design',
roadmaps: [
{
title: 'UX Design',
link: '/ux-design',
type: 'role',
},
{
title: 'Design System',
link: '/design-system',
type: 'skill',
otherGroups: ['Web Development'],
},
],
},
{
group: 'Blockchain',
roadmaps: [
{
title: 'Blockchain',
link: '/blockchain',
type: 'role',
},
],
},
{
group: 'Cyber Security',
roadmaps: [
{
title: 'Cyber Security',
link: '/cyber-security',
type: 'role',
},
],
},
];
const roleRoadmaps = groups.flatMap((group) =>
group.roadmaps.filter((roadmap) => roadmap.type === 'role'),
);
const skillRoadmaps = groups.flatMap((group) =>
group.roadmaps.filter((roadmap) => roadmap.type === 'skill'),
);
const allGroups = [
{
group: 'Role Based Roadmaps',
roadmaps: roleRoadmaps,
},
{
group: 'Skill Based Roadmaps',
roadmaps: skillRoadmaps,
},
];
export function RoadmapsPage() {
const [activeGroup, setActiveGroup] = useState<AllowGroupNames>('');
const [visibleGroups, setVisibleGroups] = useState<GroupType[]>(allGroups);
const [isFilterOpen, setIsFilterOpen] = useState(false);
useEffect(() => {
if (!activeGroup) {
setVisibleGroups(allGroups);
return;
}
const group = groups.find((group) => group.group === activeGroup);
if (!group) {
return;
}
// other groups that have a roadmap that is in the same group
const otherGroups = groups.filter((g) => {
return (
g.group !== group.group &&
g.roadmaps.some((roadmap) => {
return roadmap.otherGroups?.includes(group.group);
})
);
});
setVisibleGroups([
group,
...otherGroups.map((g) => ({
...g,
roadmaps: g.roadmaps.filter((roadmap) =>
roadmap.otherGroups?.includes(group.group),
),
})),
]);
}, [activeGroup]);
return (
<div className="border-t bg-gray-100">
<button
onClick={() => {
setIsFilterOpen(!isFilterOpen);
}}
id="filter-button"
className={cn(
'-mt-1 flex w-full items-center justify-center bg-gray-300 py-2 text-sm text-black focus:shadow-none focus:outline-0 sm:hidden',
{
'mb-3': !isFilterOpen,
},
)}
>
{!isFilterOpen && <Filter size={13} className="mr-1" />}
{isFilterOpen && <X size={13} className="mr-1" />}
Categories
</button>
<div className="container relative flex flex-col gap-4 sm:flex-row">
<div
className={cn(
'hidden w-full flex-col from-gray-100 sm:w-[180px] sm:border-r sm:bg-gradient-to-l sm:pt-6',
{
'hidden sm:flex': !isFilterOpen,
'z-50 flex': isFilterOpen,
},
)}
>
<div className="absolute top-0 -mx-4 w-full bg-white pb-0 shadow-xl sm:sticky sm:top-10 sm:mx-0 sm:bg-transparent sm:pb-20 sm:shadow-none">
<div className="grid grid-cols-1">
<CategoryFilterButton
onClick={() => {
setActiveGroup('');
setIsFilterOpen(false);
}}
category={'All Roadmaps'}
selected={activeGroup === ''}
/>
{groups.map((group) => (
<CategoryFilterButton
key={group.group}
onClick={() => {
setActiveGroup(group.group);
setIsFilterOpen(false);
document?.getElementById('filter-button')?.scrollIntoView();
}}
category={group.group}
selected={activeGroup === group.group}
/>
))}
</div>
</div>
</div>
<div className="flex flex-grow flex-col gap-6 pb-20 pt-2 sm:pt-8">
{visibleGroups.map((group) => (
<div key={`${group.group}-${group.roadmaps.length}`}>
<h2 className="mb-2 text-xs uppercase tracking-wide text-gray-400">
{group.group}
</h2>
<div className="grid grid-cols-1 gap-1.5 sm:grid-cols-2 md:grid-cols-3">
{group.roadmaps.map((roadmap) => (
<a
key={roadmap.link}
className="rounded-md border bg-white px-3 py-2 text-left text-sm shadow-sm transition-all hover:border-gray-300 hover:bg-gray-50"
href={roadmap.link}
>
{roadmap.title}
</a>
))}
</div>
</div>
))}
</div>
</div>
</div>
);
}
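The filtering `useEffect` in `RoadmapsPage` above can be read as a pure function: keep the active group first, then append every other group trimmed down to the roadmaps whose `otherGroups` lists the active group. A simplified, testable sketch (types and the empty-group fallback are condensed from the component; the real code falls back to the role/skill `allGroups` split instead of the raw list):

```typescript
// Pure-function sketch of the category filter in RoadmapsPage.
type Roadmap = { title: string; otherGroups?: string[] };
type Group = { group: string; roadmaps: Roadmap[] };

function filterByGroup(groups: Group[], active: string): Group[] {
  const current = groups.find((g) => g.group === active);
  // Simplification: the component shows allGroups (role/skill split)
  // when no category is active.
  if (!current) return groups;

  // Other groups contribute only the roadmaps cross-listed under the
  // active category, and are dropped entirely when none remain.
  const related = groups
    .filter((g) => g.group !== active)
    .map((g) => ({
      ...g,
      roadmaps: g.roadmaps.filter((r) => r.otherGroups?.includes(active)),
    }))
    .filter((g) => g.roadmaps.length > 0);

  return [current, ...related];
}
```

This is why selecting "Web Development" also surfaces, say, Docker from the DevOps group: Docker cross-lists Web Development in its `otherGroups`.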

View File

@@ -0,0 +1,44 @@
import { isLoggedIn } from '../../lib/jwt.ts';
import { showLoginPopup } from '../../lib/popup.ts';
export function RoadmapsPageHeader() {
return (
<div className="bg-white py-3 sm:py-12">
<div className="container">
<div className="flex flex-col items-start bg-white sm:items-center">
<h1 className="text-2xl font-bold sm:text-5xl">Developer Roadmaps</h1>
<p className="mb-3 mt-1 text-sm sm:my-3 sm:text-lg">
Browse the ever-growing list of up-to-date, community driven
roadmaps
</p>
<p className="mb-3 flex w-full flex-col gap-1.5 sm:mb-0 sm:w-auto sm:flex-row sm:gap-3">
<a
className="inline-block rounded-md bg-black px-3.5 py-2 text-sm text-white sm:py-1.5 sm:text-base"
href="https://draw.roadmap.sh"
onClick={(e) => {
if (!isLoggedIn()) {
e.preventDefault();
showLoginPopup();
}
}}
>
Draw your own roadmap
</a>
<a
className="inline-block rounded-md bg-gray-300 px-3.5 py-2 text-sm text-black sm:py-1.5 sm:text-base"
href="https://roadmap.sh/ai"
onClick={(e) => {
if (!isLoggedIn()) {
e.preventDefault();
showLoginPopup();
}
}}
>
Generate Roadmaps with AI
</a>
</p>
</div>
</div>
</div>
);
}

View File

@@ -1,13 +1,13 @@
---
jsonUrl: '/jsons/roadmaps/ai-data-scientist.json'
pdfUrl: '/pdfs/roadmaps/ai-data-scientist.pdf'
order: 6
order: 4
briefTitle: 'AI and Data Scientist'
briefDescription: 'Step by step guide to becoming an AI and Data Scientist in 2024'
title: 'AI and Data Scientist Roadmap'
description: 'Step by step guide to becoming an AI and Data Scientist in 2024'
hasTopics: true
isNew: true
isNew: false
dimensions:
width: 968
height: 2243.96

View File

@@ -1,6 +1,6 @@
---
pdfUrl: '/pdfs/roadmaps/android.pdf'
order: 4
order: 5
briefTitle: 'Android'
briefDescription: 'Step by step guide to becoming an Android Developer in 2024'
title: 'Android Developer'

View File

@@ -57,5 +57,5 @@ sitemap:
tags:
- 'roadmap'
- 'main-sitemap'
- 'role-roadmap'
- 'skill-roadmap'
---

View File

@@ -2,7 +2,7 @@
## Eager Loading
Eager Loading helps you to load all your needed entities at once; i.e., all your child entities will be loaded at single database call. This can be achieved, using the Include method, which returs the related entities as a part of the query and a large amount of data is loaded at once.
Eager Loading helps you to load all your needed entities at once; i.e., all your child entities will be loaded at single database call. This can be achieved, using the Include method, which returns the related entities as a part of the query and a large amount of data is loaded at once.
## Lazy Loading

View File

@@ -5,4 +5,4 @@ MediatR is an open-source library for .NET that is designed to simplify the proc
For more information, visit the following links:
- [Use MediatR in ASP.NET or ASP.NET Core](https://medium.com/dotnet-hub/use-mediatr-in-asp-net-or-asp-net-core-cqrs-and-mediator-in-dotnet-how-to-use-mediatr-cqrs-aspnetcore-5076e2f2880c)
- [How to implement CQRS using MediatR in an ASP.NET?](https://blog.christian-schou.dk/how-to-implement-cqrs-with-mediatr-in-asp-net/)
- [How to implement CQRS using MediatR in an ASP.NET?](https://christian-schou.dk/blog/how-to-implement-cqrs-with-mediatr-in-asp-net/)

View File

@@ -52,5 +52,5 @@ sitemap:
tags:
- 'roadmap'
- 'main-sitemap'
- 'role-roadmap'
- 'skill-roadmap'
---

View File

@@ -32,8 +32,8 @@ The **Presentation layer** is responsible for translating or converting the data
The **Application layer** is the interface between the user and the communication system. It is responsible for providing networking services for various applications, like email, web browsing, or file sharing.
Each of these layers interacts with the adjacent layers to pass data packets back and forth. Understanding the OCI model is crucial for addressing potential security threats and vulnerabilities that can occur at each layer. By implementing strong network security measures at each layer, you can minimize the risk of cyber attacks and keep your data safe.
Each of these layers interacts with the adjacent layers to pass data packets back and forth. Understanding the OSI model is crucial for addressing potential security threats and vulnerabilities that can occur at each layer. By implementing strong network security measures at each layer, you can minimize the risk of cyber attacks and keep your data safe.
In the next section, we will discuss network protocols and how they play an essential role in network communication and security.
- [What is OSI Model?](https://www.youtube.com/watch?v=Ilk7UXzV_Qc&ab_channel=RealPars)
- [What is OSI Model?](https://www.youtube.com/watch?v=Ilk7UXzV_Qc&ab_channel=RealPars)

View File

@@ -0,0 +1,3 @@
# Introduction to Data Analytics for Data Analysts
Data Analytics is a core component of a Data Analyst's role. The field involves extracting meaningful insights from raw data to drive decision-making processes. It includes a wide range of techniques and disciplines, ranging from simple data compilation to advanced algorithms and statistical analysis. As a data analyst, you are expected to understand and interpret complex digital data, such as the usage statistics of a website, the sales figures of a company, or client engagement over social media. This knowledge enables data analysts to support businesses in identifying trends, making informed decisions, and predicting potential outcomes, hence playing a crucial role in shaping business strategies.

View File

@@ -0,0 +1,3 @@
# Descriptive Analytics
Descriptive Analytics is one of the fundamental types of Data Analytics that provides insight into the past. As a Data Analyst, utilizing Descriptive Analytics involves the technique of using historical data to understand changes that have occurred in a business over time. Primarily concerned with the “what has happened” aspect, it analyzes raw data from the past to draw inferences and identify patterns and trends. This helps companies understand their strengths and weaknesses and pinpoint operational problems, setting the stage for accurate Business Intelligence and decision-making processes.

View File

@@ -0,0 +1,3 @@
# Diagnostic Analytics
Diagnostic analytics, as a crucial type of data analytics, is focused on studying past performance to understand why something happened. This is an integral part of the work done by data analysts. Through techniques such as drill-down, data discovery, correlations, and cause-effect analysis, data analysts utilizing diagnostic analytics can look beyond general trends and identify the root cause of changes observed in the data. Consequently, this enables businesses to address operational and strategic issues effectively, by allowing them to grasp the reasons behind such issues. For every data analyst, the skill of performing diagnostic data analytics is a must-have asset that enhances their analysis capability.

View File

@@ -0,0 +1,3 @@
# Predictive Analysis
Predictive analysis is a crucial type of data analytics that any competent data analyst should comprehend. It refers to the practice of extracting information from existing data sets in order to determine patterns and forecast future outcomes and trends. Data analysts apply statistical algorithms, machine learning techniques, and artificial intelligence to the data to anticipate future results. Predictive analysis enables organizations to be proactive, forward-thinking, and strategic by providing them valuable insights on future occurrences. It's a powerful tool that gives companies a significant competitive edge by enabling risk management, opportunity identification, and strategic decision-making.

View File

@@ -0,0 +1,3 @@
# Prescriptive Analytics
Prescriptive analytics, a crucial type of data analytics, is essential for making data-driven decisions in business and organizational contexts. As a data analyst, the goal of prescriptive analytics is to recommend various actions using predictions on the basis of known parameters to help decision makers understand likely outcomes. Prescriptive analytics employs a blend of techniques and tools such as algorithms, machine learning, computational modelling procedures, and decision-tree structures to enable automated decision making. Therefore, prescriptive analytics not only anticipates what will happen and when it will happen, but also explains why it will happen, contributing to the significance of a data analyst's role in an organization.

View File

@@ -0,0 +1,5 @@
# Introduction to Types of Data Analytics
Data Analytics has proven to be a critical part of decision-making in modern business ventures. It is responsible for discovering, interpreting, and transforming data into valuable information. Different types of data analytics look at past, present, or predictive views of business operations.
Data analysts employ four types of analytics, namely Descriptive, Diagnostic, Predictive, and Prescriptive Analytics, to answer a progression of questions: What happened? Why did it happen? What could happen? And what should we do next? Understanding these types gives data analysts the power to transform raw datasets into strategic insights.

View File

@@ -0,0 +1,3 @@
# Data Collection
In the realm of data analysis, the concept of collection holds immense importance. As the term suggests, collection refers to the process of gathering and measuring information on targeted variables in an established systematic fashion that enables a data analyst to answer relevant questions and evaluate outcomes. This step is foundational to any data analysis scheme, as it is the first line of interaction with the raw data that later transforms into viable insights. The effectiveness of data analysis is heavily reliant on the quality and quantity of data collected. Different methodologies and tools are employed for data collection depending on the nature of the data needed, such as surveys, observations, experiments, or scraping online data stores. This process should be carried out with clear objectives and careful consideration to ensure accuracy and relevance in the later stages of analysis and decision-making.

View File

@@ -0,0 +1,3 @@
# Cleanup
Data cleanup is a critical component of a Data Analyst's role. It involves inspecting, cleaning, and transforming data so that subsequent modeling can discover useful information, inform conclusions, and support decision making. This process is crucial for Data Analysts to generate accurate and significant insights from data, ultimately resulting in better and more informed business decisions. A solid understanding of data cleanup procedures and techniques is a fundamental skill for any Data Analyst. Hence, it is necessary to place a high emphasis on maintaining data quality by managing data integrity, accuracy, and consistency during the cleanup process.

View File

@@ -0,0 +1,3 @@
# Exploration
In the realm of data analytics, exploration of data is a key concept that data analysts leverage to understand and interpret data effectively. Typically, this exploration process involves discerning patterns, identifying anomalies, examining underlying structures, and testing hypotheses, which is often accomplished via descriptive statistics, visual methods, or sophisticated algorithms. It's a fundamental stepping-stone for any data analyst, ultimately guiding them in shaping the direction of further analysis or modeling. This concept serves as a foundation for dealing with complexities and uncertainties in data, hence improving decision-making in various fields ranging from business and finance to healthcare and social sciences.

View File

@@ -0,0 +1,3 @@
# Visualization - A Key Concept for Data Analysts
The visualization of data is an essential skill in the toolkit of every data analyst. This practice is about transforming complex raw data into a graphical format that allows for an easier understanding of large data sets, trends, outliers, and important patterns. Whether pie charts, line graphs, bar graphs, or heat maps, data visualization techniques not only streamline data analysis, but also facilitate a more effective communication of the findings to others. This key concept underscores the importance of presenting data in a digestible and visually appealing manner to drive data-informed decision making in an organization.

View File

@@ -0,0 +1,3 @@
# Statistical Analysis: A Key Concept for Data Analysts
Statistical analysis plays a critical role in the daily functions of a data analyst. It encompasses collecting, examining, interpreting, and presenting data, enabling data analysts to uncover patterns, trends, and relationships, deduce insights, and support decision-making in various fields. By applying statistical concepts, data analysts can transform complex data sets into understandable information that organizations can leverage for actionable insights. This cornerstone of data analysis enables analysts to deliver predictive models, trend analysis, and valuable business insights, making it indispensable in the world of data analytics. It is vital for data analysts to grasp such statistical methodologies to effectively decipher the large data volumes they handle.

View File

@@ -0,0 +1,3 @@
# Machine Learning - A Key Concept for Data Analysts
Machine learning, a subset of artificial intelligence, is an indispensable tool in the hands of a data analyst. It provides the ability to automatically learn, improve from experience and make decisions without being explicitly programmed. In the context of a data analyst, machine learning contributes significantly in uncovering hidden insights, recognising patterns or making predictions based on large amounts of data. Through the use of varying algorithms and models, data analysts are able to leverage machine learning to convert raw data into meaningful information, making it a critical concept in data analysis.

View File

@@ -0,0 +1,3 @@
# Introduction to Key Concepts for Data
In the realm of data analysis, understanding some key concepts is essential. Data analysis is the process of inspecting, cleansing, transforming, and modeling data to discover useful information and support decision-making. In the broadest sense, data can be classified into various types like nominal, ordinal, interval and ratio, each with a specific role and analysis technique. Higher-dimensional data types like time-series, panel data, and multi-dimensional arrays are also critical. On the other hand, data quality and data management are key concepts to ensure clean and reliable datasets. With an understanding of these fundamental concepts, a data analyst can transform raw data into meaningful insights.

View File

@@ -0,0 +1,3 @@
# Introduction to Data Analysis
Data Analysis plays a crucial role in today's data-centric world. It involves the practice of inspecting, cleansing, transforming, and modeling data to extract valuable insights for decision-making. A **Data Analyst** is a professional primarily tasked with collecting, processing, and performing statistical analysis on large datasets. They discover how data can be used to answer questions and solve problems. With the rapid expansion of data in modern firms, the role of a data analyst has been evolving greatly, making them a significant asset in business strategy and decision-making processes.

View File

@@ -0,0 +1,3 @@
# Sum
Sum is one of the most fundamental operations in data analysis. As a data analyst, the ability to quickly and accurately summarize numerical data is key to drawing meaningful insights from large data sets. The operation can be performed using various software and programming languages such as Excel, SQL, Python, and R, each providing distinct methods to compute sums. Understanding the 'sum' operation is critical for tasks such as trend analysis, forecasting, budgeting, and essentially any operation involving quantitative data.
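
As a quick illustration, here is the operation in plain Python; the sales figures are made up for the example:

```python
# Summing a column of quantitative data, e.g. daily sales amounts
sales = [120.5, 99.0, 250.25, 80.75]
total = sum(sales)
print(total)  # 550.5
```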

View File

@@ -0,0 +1,3 @@
# Min / Max Function
Understanding the minimum and maximum values in your dataset is critical in data analysis. These basic functions, often referred to as Min-Max functions, are statistical tools that data analysts use to inspect the distribution of a particular dataset. By identifying the lowest and highest values, data analysts can gain insight into the range of the dataset, identify possible outliers, and understand the data's variability. Beyond their use in descriptive statistics, Min-Max functions also play a vital role in data normalization, shaping the accuracy of predictive models in Machine Learning and AI fields.
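
A minimal sketch in plain Python, using invented values, showing both the basic lookup of extremes and the min-max normalization mentioned above:

```python
values = [4, 8, 15, 16, 23, 42]
lo, hi = min(values), max(values)
print(lo, hi)  # 4 42

# Min-max normalization rescales every value into the [0, 1] range
scaled = [(v - lo) / (hi - lo) for v in values]
print(scaled[0], scaled[-1])  # 0.0 1.0
```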

View File

@@ -0,0 +1,3 @@
# Average
The average, also often referred to as the mean, is one of the most commonly used mathematical calculations in data analysis. It provides a simple, useful measure of a set of data. For a data analyst, understanding how to calculate and interpret averages is fundamental. Basic functions, including the average, are integral components in data analysis that are used to summarize and understand complex data sets. Though conceptually simple, the power of average lies in its utility in a range of analyses - from forecasting models to understanding trends and patterns in the dataset.
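
In Python, the standard library's `statistics` module computes the mean directly; the scores below are hypothetical:

```python
import statistics

scores = [82, 91, 78, 95, 88]
print(statistics.mean(scores))  # 86.8 (sum 434 divided by 5 values)
```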

View File

@@ -0,0 +1,3 @@
# Count
The Count function is one of the most fundamental operations a Data Analyst performs. It is a simple yet powerful tool that aids in understanding the underlying data by providing the count, or frequency of occurrences, of unique elements in a data set. Count comes into play in various scenarios, from gauging the popularity of a certain category to analyzing customer activity, and much more. This basic function offers crucial insights into data, making it an essential skill in the toolkit of any data analyst.
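
A small sketch of counting category frequencies with Python's `collections.Counter`; the categories are invented:

```python
from collections import Counter

categories = ["electronics", "books", "books", "toys", "books", "electronics"]
freq = Counter(categories)
print(freq["books"])        # 3
print(freq.most_common(1))  # [('books', 3)] -- the most popular category
```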

View File

@@ -0,0 +1,3 @@
# Concatenation
The term 'Concat' or Concatenation refers to the operation of combining two or more data structures, be it strings, arrays, or datasets, end-to-end in a sequence. In the context of data analysis, a Data Analyst uses concatenation as a basic function to merge or bind data sets along an axis - either vertically or horizontally. This function is commonly used in data wrangling or preprocessing to combine data from multiple sources, handle missing values, and shape data into a form that fits better with analysis tools. An understanding of 'Concat' plays a crucial role in managing the complex, large data sets that data analysts often work with.
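
The idea applies to both strings and record sets; a minimal plain-Python sketch with made-up data:

```python
# String concatenation
first, last = "Ada", "Lovelace"
full = first + " " + last
print(full)  # Ada Lovelace

# "Vertical" concatenation: stacking two row sets end-to-end
q1 = [("Jan", 100), ("Feb", 120)]
q2 = [("Mar", 90)]
combined = q1 + q2
print(len(combined))  # 3
```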

View File

@@ -0,0 +1,3 @@
# Trim
Trim is a basic yet vital function within the scope of data analysis. It plays an integral role in preparing and cleansing a dataset, which is key to analytical accuracy. Trim allows data analysts to tidy text values by stripping leading, trailing, and other extraneous spaces or unwanted characters, thereby enhancing data quality. Clean, consistently formatted values reduce matching and aggregation errors, improve the efficiency of data modelling, and ensure reliable insight generation. Understanding the Trim function is thus an essential part of a data analyst's toolbox.
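
In Python the same trimming is done with `str.strip`; the messy names below are invented:

```python
raw = ["  Alice ", "Bob\t", " Charlie  "]
# strip() removes leading and trailing whitespace (spaces, tabs, newlines)
cleaned = [name.strip() for name in raw]
print(cleaned)  # ['Alice', 'Bob', 'Charlie']
```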

View File

@@ -0,0 +1,3 @@
# Upper, Lower, Proper Functions
In the field of data analysis, the Upper, Lower, and Proper functions serve as fundamental tools for manipulating and transforming text data. A data analyst often works with a vast array of datasets, where the text data may not always adhere to a consistent format. To tackle such issues, the Upper, Lower, and Proper functions are used. 'Upper' converts all the text to uppercase, while 'Lower' does the opposite, transforming all text to lowercase. The 'Proper' function is used to capitalize the first letter of each word, making it proper case. These functions are indispensable when it comes to cleaning and preparing data, a major part of a data analyst's role.
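
Python's string methods mirror these three spreadsheet functions directly (`title()` playing the role of Proper):

```python
city = "new YORK"
print(city.upper())  # NEW YORK
print(city.lower())  # new york
print(city.title())  # New York  (first letter of each word capitalized)
```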

View File

@@ -0,0 +1,5 @@
# Replace/Substitute
When working with datasets, there is often a need for a Data Analyst to alter or adjust certain values. This necessity might arise due to incorrect or inaccurate entries, outliers affecting the results, or simply the need to rewrite certain values for better interpretation and analysis of the data. One of the key basic functions that allow for such alterations in the data is the 'replace' or 'substitute' function.
The replace or substitute function provides an efficient way to replace certain values in a dataset with others. This fundamental function applies not only to numeric values but also to categorical data. In data analysis, the replace or substitute function is critical, contributing greatly to data cleaning and manipulation, and consequently to the accuracy and reliability of the analytical results obtained.
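
A minimal sketch of both cases in plain Python, with invented values; `-999` stands in for a sentinel code marking bad entries:

```python
# Text substitution, like SUBSTITUTE in a spreadsheet
note = "N/A units sold in region N/A"
print(note.replace("N/A", "0"))  # 0 units sold in region 0

# Replacing a sentinel value across a numeric column
column = [12, -999, 7, -999]
fixed = [0 if v == -999 else v for v in column]
print(fixed)  # [12, 0, 7, 0]
```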

View File

@@ -0,0 +1,5 @@
# vlookup and hlookup
Data Analysts often deal with large and complex datasets that require efficient tools for data manipulation and extraction. This is where basic functions like vlookup and hlookup in Excel become extremely useful. These functions are versatile lookup and reference functions that can find specified data in a vast array, providing ease and convenience in data retrieval tasks.
The Vertical Lookup (vlookup) searches for a value down the first column of a range, while the Horizontal Lookup (hlookup) searches across its first row; approximate-match lookups additionally require the lookup column or row to be sorted. Mastering these functions is crucial for any data analyst's toolbox, as they can dramatically speed up data access, reduce errors in data extraction, and simplify the overall process of analysis. In essence, these two functions are not just basic functions; they serve as essential tools for efficient data analysis.
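
Conceptually, an exact-match VLOOKUP is a keyed lookup. Here is a hedged plain-Python analogue (not Excel itself); the product codes and prices are hypothetical:

```python
# Lookup table: product code -> price (the "left column" of a VLOOKUP range)
price_table = {"A100": 9.99, "B200": 14.50, "C300": 4.25}

def vlookup(key, table, default=None):
    """Exact-match lookup, analogous to VLOOKUP(key, range, col, FALSE)."""
    return table.get(key, default)

print(vlookup("B200", price_table))               # 14.5
print(vlookup("Z999", price_table, "not found"))  # not found
```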

View File

@@ -0,0 +1,3 @@
# Basic Functions of a Data Analyst
A Data Analyst serves a pivotal role in the decision-making processes within an organization. The basic function of a data analyst involves collecting, processing, and performing statistical analyses of data. Their work encompasses understanding the nature of data, finding out the patterns and insights hidden within them, and communicating these findings in a manner that can facilitate the decision-making of the company. They are often tasked to transform complex data into a format that is easily understandable, which enables the company to make informed decisions. This may involve designing and maintaining databases and data systems, conducting analysis to identify trends, and creating visualizations of their findings. These basic functions are the cornerstones upon which a data analyst builds more complex and organization-specific responsibilities from.

View File

@@ -0,0 +1,3 @@
# DATEDIF
The `DATEDIF` function is an incredibly valuable tool for a Data Analyst in Excel or Google Sheets, providing the ability to calculate the difference between two dates. It takes three parameters: a start date, an end date, and the unit of difference required (years, months, days, etc.). In data analysis, particularly when dealing with time-series data or uncovering trends over specific periods, `DATEDIF` is a necessary asset. Mastering it enables a data analyst to shape time-based data quickly and accurately.
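
Outside the spreadsheet, the same date arithmetic can be sketched with Python's `datetime`; the dates are hypothetical and the month calculation is an analogue of `DATEDIF`'s "m" unit, not Excel itself:

```python
from datetime import date

start, end = date(2023, 1, 15), date(2024, 4, 3)

# Day difference, like DATEDIF(start, end, "d")
print((end - start).days)  # 444

# Whole completed months, roughly DATEDIF's "m" unit
months = (end.year - start.year) * 12 + (end.month - start.month) - (end.day < start.day)
print(months)  # 14
```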

View File

@@ -0,0 +1,3 @@
# Understanding Basic Functions
Excel is an extremely powerful tool that you, as a Data Analyst, will interact with on a daily basis. From organizing data into spreadsheets and performing calculations with complex formulas to creating graphs and visual aids for presenting data, the basic functions of Excel are crucial in your role. Excel's plethora of simple and complex functions makes it a unique, versatile, and accessible tool for data analysis. Understanding these basic functions not only elevates your expertise in handling and interpreting data but also increases efficiency and productivity in your line of work. Whether you're calculating, extracting, or merging data, Excel's basic functions make these tasks more straightforward, ensuring the accuracy of the data insights you provide.

View File

@@ -0,0 +1,3 @@
# Pivot Tables
Data Analysts recurrently find the need to summarize, investigate, and analyze their data to make meaningful and insightful decisions. One of the most powerful tools to accomplish this in Microsoft Excel is the Pivot Table. Pivot Tables allow analysts to organize and summarize large quantities of data in a concise, tabular format. The strength of pivot tables comes from their ability to manipulate data dynamically, leading to quicker analysis and richer insights. Understanding and employing Pivot Tables efficiently is a fundamental skill for any data analyst, as it directly impacts their ability to derive significant information from raw datasets.
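
Under the hood, a pivot table is a group-and-aggregate operation. A hedged plain-Python sketch of that computation (rows by region, columns by product, summed amounts), using invented sales records:

```python
from collections import defaultdict

# Long-format sales records: (region, product, amount)
sales = [("East", "A", 100), ("East", "B", 80), ("West", "A", 120), ("East", "A", 40)]

# pivot[row][column] accumulates the summed value, like a pivot table cell
pivot = defaultdict(lambda: defaultdict(int))
for region, product, amount in sales:
    pivot[region][product] += amount

print(pivot["East"]["A"])  # 140
print(pivot["West"]["A"])  # 120
```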

View File

@@ -0,0 +1,3 @@
# Charting
Excel serves as a powerful tool for data analysts when it comes to data organization, manipulation, recovery, and visualization. One of the incredible features it offers is 'Charting'. Charting essentially means creating visual representations of data, which aids data analysts to easily understand complex data and showcase compelling stories of data trends, correlations, and statistical analysis. These charts vary from simple bar graphs to more complex 3D surface and stock charts. As a data analyst, mastering charting under Excel substantially enhances data interpretation, making it easier to extract meaningful insights from substantial data sets.

View File

@@ -0,0 +1,3 @@
# Excel
Excel is a powerful tool utilized by data analysts worldwide to store, manipulate, and analyze data. It offers a vast array of features such as pivot tables, graphs and a powerful suite of formulas and functions to help sift through large sets of data. A data analyst uses Excel to perform a wide range of tasks, from simple data entry and cleaning, to more complex statistical analysis and predictive modeling. Proficiency in Excel is often a key requirement for a data analyst, as its versatility and ubiquity make it an indispensable tool in the field of data analysis.

View File

@@ -0,0 +1,3 @@
# SQL for Data Analysts
Structured Query Language, or SQL, is an essential tool for every data analyst. As a domain-specific language used in programming and designed for managing data held in relational database management systems, SQL allows analysts to manipulate and analyse large volumes of data efficiently. Understanding SQL allows a data analyst to extract insights from data stored in databases, conduct complex queries, and create elaborate data reports. SQL is recognized for its effectiveness in data manipulation and its compatibility with other coding languages, making it a fundamental competency in the data analytics field.
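
A minimal, self-contained sketch of a typical analyst query (aggregate revenue per region) using Python's built-in `sqlite3`; the table and figures are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("East", 100.0), ("West", 250.0), ("East", 50.0)])

# GROUP BY aggregates rows per region; SUM totals each group's amounts
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('East', 150.0), ('West', 250.0)]
conn.close()
```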

View File

@@ -0,0 +1,3 @@
# Python as a Programming Language
Python is a powerful, flexible, open-source programming language that is incredibly impactful in the realm of data analysis. As a data analyst, you are typically required to clean, interpret, visualize and present data, and Python, being versatile and well-supported, has libraries and frameworks like Pandas, Numpy, Matplotlib, and Seaborn which make these tasks easier and efficient. It is a favorite language among data analysts and data scientists due to its simplicity to learn and readability. Understanding Python can greatly enhance the capabilities and effectiveness of a data analyst.

View File

@@ -0,0 +1,3 @@
# R
R is a powerful language widely used by data analysts and statisticians across the globe. Offering a wide array of statistical and graphical techniques, R proves to be an excellent tool for data manipulation, statistical modeling, and visualization. With its comprehensive collection of packages and built-in functions for data analysis, R allows data analysts to perform complex exploratory data analysis, build sophisticated models, and create stunning visualizations. Moreover, given its open-source nature, R consistently advances with contributions from the worldwide statistical community.

View File

@@ -0,0 +1,3 @@
# Programming Language for Data Analysts
As a data analyst, you will find programming languages to be crucial tools in your line of work. They not only help in the collection and cleanup of data, but also assist in analyzing it to generate insightful reports and predictions. These languages can be employed to create algorithms for complex computations, model data, and build visualizations, among other tasks. Familiarity and proficiency in several programming languages can give data analysts a significant competitive edge, enhancing their ability to draw useful business insights from raw data. Examples of commonly used programming languages in data analysis include SQL, Python, R, Java, and SAS.

View File

@@ -0,0 +1,3 @@
# Pandas
Pandas is a widely acknowledged and highly useful data manipulation library in the world of data analysis. Known for its robust features like data cleaning, wrangling and analysis, pandas has become one of the go-to tools for data analysts. Built on NumPy, it provides high-performance, easy-to-use data structures and data analysis tools. In essence, its flexibility and versatility make it a critical part of the data analyst's toolkit, as it holds the capability to cater to virtually every data manipulation task.

View File

@@ -0,0 +1,3 @@
# Dplyr
Dplyr is a powerful and popular package for data manipulation in R, providing data analysts with integral functions to manipulate, clean, and process data efficiently. It has been designed to be easy and intuitive, with a robust and consistent syntax. Dplyr ensures reliable, fast processing, essential for analysts dealing with large datasets. With a strong focus on efficiency, dplyr verbs like select, filter, arrange, mutate, summarise, and group_by optimise data analysis operations, making data manipulation a smoother, hassle-free procedure for data analysts.

View File

@@ -0,0 +1,3 @@
# Data Manipulation Libraries
Data manipulation is a key aspect of the role of a data analyst. There are numerous data manipulation libraries available that enable data analysts to handle, process and analyze massive datasets effectively and efficiently. These libraries, particularly in programming languages like Python, R, and more, come with a wide range of functionalities that include sorting, filtering, aggregating, merging and reshaping data. Using data manipulation libraries, data analysts can transform raw data into a more understandable or usable format to derive meaningful insights or conclusions. A few examples of these libraries are Pandas in Python, dplyr in R, and DataTable in Julia. These libraries not only make data manipulation tasks easier but also contribute to improving the overall data analysis process.

View File

@@ -0,0 +1,3 @@
# Matplotlib
Matplotlib is a prominent data visualization library used extensively by data analysts for generating a wide array of plots and graphs. Through Matplotlib, data analysts can convey results clearly and effectively, drawing insights from complex data sets. It is built around a hierarchical figure-and-axes structure and provides an object-oriented API, allowing for extensive customization and integration into larger applications. From histograms, bar charts, and scatter plots to 3D graphs, the versatility of Matplotlib assists data analysts in the comprehension and compelling representation of data.

View File

@@ -0,0 +1,3 @@
# ggplot2
When it comes to data visualization in R programming, ggplot2 stands tall as one of the primary tools for data analysts. This data visualization library, which forms part of the tidyverse suite of packages, facilitates the creation of complex and sophisticated visual narratives. With its grammar of graphics philosophy, ggplot2 enables analysts to build graphs and charts layer by layer, thereby offering detailed control over graphical features and design. Its versatility in creating tailored and aesthetically pleasing graphics is a vital asset for any data analyst tackling exploratory data analysis, reporting, or dashboard building.

View File

@@ -0,0 +1,3 @@
# Data Visualization Libraries
Data visualization is a critical part of any data analysis process. It allows data analysts to understand complex data sets by converting a myriad of numbers into engaging, meaningful visuals. Data visualization libraries are toolkits enabling this transformation. They consist of pre-built functions and methods to create visuals such as graphs, charts, maps, and many more from raw data. This gives data analysts the capacity to present their findings in an insightful, easy-to-understand manner for stakeholders. Popular libraries include `Matplotlib`, `Seaborn`, `Plotly`, and `Bokeh` in Python, and `ggplot2` in R, each varying in their features, complexity, and flexibility.

View File

@@ -0,0 +1,3 @@
# Databases
Behind every strong data analyst, there's not just a rich assortment of data, but a set of robust databases that enable effective data collection. Databases are a fundamental aspect of data collection in a world where the capability to manage, organize, and evaluate large volumes of data is critical. As a data analyst, the understanding and use of databases is instrumental in capturing the necessary data for conducting qualitative and quantitative analysis, forecasting trends, and making data-driven decisions. Thorough knowledge of databases can therefore be considered a key component of a data analyst's arsenal. These databases can vary from relational databases such as PostgreSQL or MySQL, queried with SQL, to NoSQL databases like MongoDB, each serving a unique role in the data collection process.

View File

@@ -0,0 +1,3 @@
# CSV Files in Data Collection for Data Analysts
CSV or Comma Separated Values files play an integral role in data collection for data analysts. These file types allow the efficient storage of data and are commonly generated by spreadsheet software like Microsoft Excel or Google Sheets, but their simplicity makes them compatible with a variety of applications that deal with data. In the context of data analysis, CSV files are extensively used to import and export large datasets, making them essential for any data analyst's toolkit. They allow analysts to organize vast amounts of information into a structured format, which is fundamental in extracting useful insights from raw data.
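
A minimal sketch of reading CSV data with Python's built-in `csv` module; the payload is inlined here for self-containment, but in practice it would come from a file:

```python
import csv
import io

raw = "name,score\nAlice,90\nBob,85\n"
# DictReader maps each row to the header names; note every value arrives as a string
rows = list(csv.DictReader(io.StringIO(raw)))
print(rows[0]["name"], rows[0]["score"])  # Alice 90
print(len(rows))  # 2
```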

View File

@@ -0,0 +1,3 @@
# APIs and Data Collection
Application Programming Interfaces, better known as APIs, play a fundamental role in the work of data analysts, particularly in the process of data collection. APIs are sets of protocols, routines, and tools that enable different software applications to communicate with each other. In data analysis, APIs are used extensively to collect, exchange, and manipulate data from different sources in a secure and efficient manner. This data collection process is paramount in shaping the insights derived by the analysts.

View File

@@ -0,0 +1,3 @@
# Web Scraping
Web scraping plays a significant role in collecting unique datasets for data analysis. In the realm of a data analyst's tasks, web scraping refers to the method of extracting information from websites and converting it into a structured usable format like a CSV, Excel spreadsheet, or even into databases. This technique allows data analysts to gather large sets of data from the internet, which otherwise could be time-consuming if done manually. The capability of web scraping and parsing data effectively can give data analysts a competitive edge in their data analysis process, from unlocking in-depth, insightful information to making data-driven decisions.

View File

@@ -0,0 +1,3 @@
# Data Collection
In the context of the Data Analyst role, data collection is a foundational process that entails gathering relevant data from various sources. This data can be quantitative or qualitative and may be sourced from databases, online platforms, customer feedback, among others. The gathered information is then cleaned, processed, and interpreted to extract meaningful insights. A data analyst performs this whole process carefully, as the quality of data is paramount to ensuring accurate analysis, which in turn informs business decisions and strategies. This highlights the importance of an excellent understanding, proper tools, and precise techniques when it comes to data collection in data analysis.

View File

@@ -0,0 +1,5 @@
# Handling Missing Data in Data Cleaning
When working with real-world data as a Data Analyst, encountering missing or null values is quite common. This phenomenon is referred to as "missing data" in the field of data analysis. Missing data can severely impact the results of a data analysis process, since it reduces statistical power and can distort the reliability and robustness of outcomes.
Missing data is a part of the 'Data Cleaning' step which is a crucial part of the Preprocessing in Data Analytics. It involves identifying incomplete, incorrect or irrelevant data and then replacing, modifying or deleting this dirty data. Successful data cleaning of missing values can significantly augment the overall quality of the data, therefore offering valuable and reliable insights. It is essential for a Data Analyst to understand the different techniques for dealing with missing data, such as different types of imputations based on the nature of the data and research question.
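
Two common techniques from the paragraph above, deletion and mean imputation, sketched in plain Python with an invented sensor-reading column where `None` marks a missing value:

```python
import statistics

readings = [21.5, None, 23.0, None, 22.0]

# Option 1: listwise deletion -- simply drop the missing entries
observed = [r for r in readings if r is not None]
print(observed)  # [21.5, 23.0, 22.0]

# Option 2: mean imputation -- fill gaps with the mean of the observed values
fill = round(statistics.mean(observed), 2)
imputed = [fill if r is None else r for r in readings]
print(imputed)  # [21.5, 22.17, 23.0, 22.17, 22.0]
```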

View File

@@ -0,0 +1,3 @@
# Removing Duplicates
In the world of data analysis, a critical step is data cleaning, which includes an important sub-task: removing duplicate entries. Duplicate data can distort the results of data analysis by giving extra weight to duplicated instances, leading to biased or incorrect conclusions. However careful the data collection, datasets may still contain duplicate records due to factors like human error or the merging of datasets. Therefore, data analysts must master the skill of identifying and removing duplicates to ensure that their analysis is based on a unique and accurate set of data. This process contributes to more accurate predictions and inferences, thus maximizing the insights gained from the data.
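
A minimal plain-Python sketch of exact-duplicate removal on invented customer records; `dict.fromkeys` keeps the first occurrence and preserves insertion order:

```python
records = [("a1", "Alice"), ("b2", "Bob"), ("a1", "Alice"), ("c3", "Cara")]

# Deduplicate while preserving order (first occurrence wins)
deduped = list(dict.fromkeys(records))
print(deduped)  # [('a1', 'Alice'), ('b2', 'Bob'), ('c3', 'Cara')]
```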

View File

@@ -0,0 +1,3 @@
# Finding Outliers
In the field of data analysis, data cleaning is an essential preliminary step. This process involves correcting or removing errors, inaccuracies, or irrelevant entries in the raw data, making it more suitable for analysis. One crucial aspect of this process is finding outliers: unusual data points that deviate significantly from the rest of the data. While they may be the result of mere variability or error, they often pull aggregate statistics towards them, skewing results and impeding the accuracy of data analysis. Therefore, identifying and appropriately handling outliers is crucial to ensure the reliability of subsequent data analysis tasks.
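
One common flagging rule (not the only one) is the 1.5 × IQR fence. A plain-Python sketch on invented data, using the standard library's `statistics.quantiles` (Python 3.8+):

```python
import statistics

data = [10, 12, 11, 13, 12, 11, 95]  # 95 looks suspicious
q1, _, q3 = statistics.quantiles(data, n=4)  # quartiles
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Flag anything outside the [Q1 - 1.5*IQR, Q3 + 1.5*IQR] fence
outliers = [x for x in data if x < low or x > high]
print(outliers)  # [95]
```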

View File

@@ -0,0 +1,3 @@
# Data Transformation
Data Transformation, also known as Data Wrangling, is an essential part of a Data Analyst's role. This process involves the conversion of data from a raw format into another format to make it more appropriate and valuable for a variety of downstream purposes such as analytics. Data Analysts transform data to make it more suitable for analysis, ensure accuracy, and improve data quality. The right transformation techniques give the data structure, multiply its value, and enhance the accuracy of the analytics performed by producing meaningful, well-structured results.

View File

@@ -0,0 +1,3 @@
# Pandas for Data Cleaning
In the realm of data analysis, data cleaning is a crucial preliminary process, and this is where `pandas`, a popular Python library, shines. Primarily used for data manipulation and analysis, pandas adopts flexible and powerful data structures (DataFrames and Series) that greatly simplify the process of cleaning raw, messy datasets. Data analysts often work with large volumes of data, some of which may contain missing or inconsistent values that can negatively impact the results of their analysis. By utilizing pandas, data analysts can quickly identify, manage, and fill missing values, drop unnecessary columns, rename column headings, filter specific data, and apply functions for more complex data transformations. This makes pandas an invaluable tool for effective data cleaning in data analysis.

View File

@@ -0,0 +1,3 @@
# Data Cleaning with dplyr
Data cleaning plays a crucial role in the data analysis pipeline, where it rectifies and enhances the quality of data to increase the efficiency and authenticity of the analytical process. The `dplyr` package, an integral part of the `tidyverse` suite in R, has become a staple in the toolkit of data analysts dealing with data cleaning. `dplyr` offers a coherent set of verbs that significantly simplifies the process of manipulating data structures, such as dataframes and databases. This involves selecting, sorting, filtering, creating or modifying variables, and aggregating records, among other operations. Incorporating `dplyr` into the data cleaning phase enables data analysts to perform operations more effectively, improve code readability, and handle large and complex data with ease.


@@ -0,0 +1,3 @@
# Data Cleaning
Data cleaning, often referred to as data cleansing or data scrubbing, is one of the most important initial steps in the data analysis process. As a data analyst, the bulk of your work often revolves around understanding, cleaning, and standardizing raw data before analysis. Data cleaning involves identifying, correcting, or removing any errors or inconsistencies in datasets in order to improve their quality. The process is crucial because it directly determines the accuracy of the insights you generate - garbage in, garbage out. Even the most sophisticated models and visualizations would not be of much use if they're based on dirty data. Therefore, mastering data cleaning techniques is essential for any data analyst.


@@ -0,0 +1,3 @@
# Mean
Central tendency refers to a statistical measure that identifies a single value as representative of an entire distribution. The mean, or average, is one of the most popular and widely used measures of central tendency. For a data analyst, calculating the mean is a routine task. This single value provides an analyst with a quick snapshot of the data and can be useful for further data manipulation or statistical analysis. The mean is particularly helpful in spotting trends and patterns within voluminous data sets, or in adjusting for influencing factors that may distort the 'true' representation of the data. It is the arithmetic average of a range of values or quantities, computed as the total sum of all the values divided by the total number of values.
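That definition translates directly into code. A minimal sketch, using invented sales figures and Python's standard library:

```python
import statistics

# Hypothetical daily sales figures.
values = [4, 8, 15, 16, 23, 42]

# Mean: the total sum of all values divided by the number of values.
mean = sum(values) / len(values)

# The standard library computes the same thing.
assert mean == statistics.mean(values)
```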


@@ -0,0 +1,3 @@
# Median
The median signifies the middle value in a data set when the values are arranged in ascending or descending order. As a data analyst, understanding, calculating, and interpreting the median is crucial. It is especially helpful when dealing with outliers, as the median is less sensitive to extreme values and therefore provides a more realistic 'central' value for skewed distributions. This measure is a reliable reflection of the dataset and is widely used in fields like real estate, economics, and finance for data interpretation and decision-making.
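The median's robustness to outliers is easy to demonstrate. A small sketch with hypothetical house prices, where one extreme value drags the mean far above the typical price but barely affects the median:

```python
import statistics

# Hypothetical house prices with one extreme outlier.
prices = [180_000, 210_000, 220_000, 240_000, 5_000_000]

mean_price = statistics.mean(prices)      # pulled up by the outlier
median_price = statistics.median(prices)  # the middle value: 220,000

# The median remains a realistic 'central' value.
assert median_price < mean_price
```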


@@ -0,0 +1,7 @@
# Mode
The concept of central tendency is fundamental in statistics and has numerous applications in data analysis. From a data analyst's perspective, the central tendencies like mean, median, and mode can be highly informative about the nature of data. Among these, the "Mode" is often underappreciated, yet it plays an essential role in interpreting datasets.
The mode, in essence, represents the most frequently occurring value in a dataset. While it may appear simplistic, the mode's ability to identify the most common value can be instrumental in a wide range of scenarios, like market research, customer behavior analysis, or trend identification. For instance, a data analyst can use the mode to determine the most popular product in a sales dataset or identify the most commonly reported bug in a software bug log.
Beyond these, utilizing the mode along with the other measures of central tendency (mean and median) can provide a more rounded view of your data. This approach reflects the diversity that's often required in data analytic strategies to account for different data distributions and outliers. The mode, therefore, forms an integral part of the data analyst's toolkit for statistical data interpretation.
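The sales-dataset scenario mentioned above can be sketched in a couple of lines (the product IDs are invented for illustration):

```python
import statistics

# Hypothetical product IDs from a sales log.
sales = ["A12", "B07", "A12", "C33", "A12", "B07"]

# The mode is the most frequently occurring value -
# here, the most popular product.
most_popular = statistics.mode(sales)
```

Note that unlike the mean, the mode also works on non-numeric data such as product IDs or bug categories.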


@@ -0,0 +1,3 @@
# Average
When focusing on data analysis, understanding key statistical concepts is crucial. Amongst these, central tendency is a foundational element. Central Tendency refers to the measure that determines the center of a distribution. The average is a commonly used statistical tool by which data analysts discern trends and patterns. As one of the most recognized forms of central tendency, figuring out the "average" involves summing all values in a data set and dividing by the number of values. This provides analysts with a 'typical' value, around which the remaining data tends to cluster, facilitating better decision-making based on existing data.


@@ -0,0 +1,3 @@
# Central Tendency
Descriptive analysis is a significant branch in the field of data analytics, and within it the concept of central tendency plays a vital role. For data analysts, understanding central tendency is of paramount importance as it offers a quick summary of the data. It provides information about the center point around which the numerical data is distributed. The three major measures of central tendency are the Mean, Median, and Mode. These measures are used by data analysts to identify trends, make comparisons, or draw conclusions. Therefore, an understanding of central tendency equips data analysts with essential tools for interpreting and making sense of statistical data.


@@ -0,0 +1,3 @@
# Range
The concept of range refers to the spread of a dataset, primarily in the realm of statistics and data analysis. This measure is crucial for a data analyst as it provides an understanding of the variability among the numbers within a dataset. Understanding range and dispersion aids in making more precise analyses and predictions. It can highlight anomalies, identify standard norms, and form the foundation for statistical measures like the standard deviation, variance, and interquartile range. It also allows for assessing the reliability and stability of particular datasets, which can help guide strategic decisions in many industries. Therefore, range is a key concept that every data analyst must master.
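The range is the simplest dispersion measure to compute: the difference between the largest and smallest values. A minimal sketch with invented temperature readings:

```python
# Hypothetical monthly temperature readings.
temps = [12, 15, 9, 21, 17]

# Range: the spread between the largest and smallest value.
data_range = max(temps) - min(temps)
```

A single extreme value can inflate the range dramatically, which is one reason analysts complement it with variance and the interquartile range.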


@@ -0,0 +1,3 @@
# Variance as a Measure of Dispersion
Data analysts heavily rely on statistical concepts to analyze and interpret data, and one such fundamental concept is variance. Variance, an essential measure of dispersion, quantifies the spread of data, providing insight into the level of variability within the dataset. Understanding variance is crucial for data analysts as the reliability of many statistical models depends on the assumption of constant variance across observations. In other words, it helps analysts determine how much data points diverge from the expected value or mean, which can be pivotal in identifying outliers, understanding data distribution, and driving decision-making processes. However, variance can't be interpreted in the original units of measurement due to its squared nature, which is why it is often used in conjunction with its square root, the standard deviation.
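The relationship between variance and standard deviation described above can be shown directly. A small sketch using an invented dataset and Python's standard library:

```python
import math
import statistics

# Hypothetical measurement data.
data = [2, 4, 4, 4, 5, 5, 7, 9]

# Population variance: average squared deviation from the mean
# (expressed in squared units of the original data).
variance = statistics.pvariance(data)

# Taking the square root returns the value to the original units -
# this is the standard deviation.
std_dev = math.sqrt(variance)
assert std_dev == statistics.pstdev(data)
```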


@@ -0,0 +1,3 @@
# Standard Deviation
In the realm of data analysis, the concept of dispersion plays a critical role in understanding and interpreting data. One of the key measures of dispersion is the Standard Deviation. As a data analyst, understanding the standard deviation is crucial as it gives insight into how much variation or dispersion exists from the average (mean), or expected value. A low standard deviation indicates that the data points are generally close to the mean, while a high standard deviation implies that the data points are spread out over a wider range. By mastering the concept of standard deviation and other statistical tools related to dispersion, data analysts are better equipped to provide meaningful analyses and insights from the available data.
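The low-versus-high spread contrast described above is easy to see in code. A sketch with two invented datasets that share the same mean but differ in dispersion:

```python
import statistics

# Two hypothetical datasets with the same mean but different spreads.
tight = [49, 50, 51, 50, 50]   # values cluster near the mean
wide = [10, 90, 30, 70, 50]    # values spread over a wider range

assert statistics.mean(tight) == statistics.mean(wide) == 50

# The wider dataset has the larger (sample) standard deviation.
assert statistics.stdev(wide) > statistics.stdev(tight)
```

The mean alone cannot distinguish these two datasets; the standard deviation can.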


@@ -0,0 +1,3 @@
# Dispersion
Dispersion in descriptive analysis offers a data analyst a crucial way to understand the variability or spread in a set of data. Descriptive analysis focuses on describing and summarizing data to find patterns, relationships, or trends. Distinct measures of dispersion such as range, variance, standard deviation, and interquartile range give data analysts insight into how spread out data points are, and how reliable any patterns detected may be. This understanding of dispersion helps data analysts in identifying outliers, drawing meaningful conclusions, and making informed predictions.


@@ -0,0 +1,3 @@
# Skewness
Skewness is a crucial statistical concept in data analysis and a significant parameter in understanding the distribution shape of a dataset. In essence, skewness provides a measure of the extent and direction of asymmetry in data. Positive skewness indicates a distribution with an asymmetric tail extending towards more positive values, while negative skewness indicates a tail extending towards more negative values. For a data analyst, recognizing and analyzing skewness is essential as it can greatly influence model selection, prediction accuracy, and interpretation of results.
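One common way to quantify this is the Fisher-Pearson moment coefficient of skewness (the third standardized moment). A minimal sketch, implemented from the definition in its population form using only the standard library:

```python
def skewness(data):
    """Fisher-Pearson moment coefficient of skewness (population form)."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in data) / n  # third central moment
    return m3 / m2 ** 1.5

# A right-skewed dataset: a long tail towards larger values.
right_skewed = [1, 2, 2, 3, 3, 3, 10]
assert skewness(right_skewed) > 0

# A symmetric dataset has zero skewness.
assert skewness([1, 2, 3, 4, 5]) == 0.0
```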


@@ -0,0 +1,3 @@
# Kurtosis
Understanding distribution shapes is an integral part of a Data Analyst's daily responsibilities. When inspecting statistical data, one key feature to consider is the kurtosis of the distribution. In statistics, kurtosis identifies the heaviness of the distribution tails and the sharpness of the peak. A proper understanding of kurtosis can assist analysts in risk management and outlier detection, and provides deeper insight into variations. Therefore, being proficient in interpreting kurtosis measurements of a distribution shape is a significant skill that every data analyst should master.
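Kurtosis is commonly reported as excess kurtosis (the fourth standardized moment minus 3, so a normal distribution scores 0). A minimal sketch from the definition, using invented data, with the heavy-tailed dataset scoring higher than the light-tailed one:

```python
def excess_kurtosis(data):
    """Population excess kurtosis: m4 / m2**2 - 3 (0 for a normal distribution)."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n  # second central moment
    m4 = sum((x - mean) ** 4 for x in data) / n  # fourth central moment
    return m4 / m2 ** 2 - 3

# Extreme values far from the mean (heavy tails) push kurtosis up.
heavy_tailed = [-10, -1, 0, 0, 0, 0, 1, 10]
light_tailed = [-2, -1, -1, 0, 0, 1, 1, 2]

assert excess_kurtosis(heavy_tailed) > excess_kurtosis(light_tailed)
```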


@@ -0,0 +1,3 @@
# Distribution Shape
In the realm of Data Analysis, the distribution shape is considered an essential component of descriptive analysis. A data analyst uses the shape of the distribution to understand the spread and trend of the data set. It aids in identifying the skewness (asymmetry) and kurtosis (the 'tailedness') of the data and helps to reveal meaningful patterns that standard statistical measures like mean or median might not capture. The distribution shape can provide insights into the data's normality and variability, informing decisions about which statistical methods are appropriate for further analysis.


@@ -0,0 +1,3 @@
# Visualising Distributions
Visualising distributions, from a data analyst's perspective, plays a key role in understanding the overall distribution and identifying patterns within data. It aids in summarising and plotting structured data graphically to provide essential insights. This includes using different chart types like bar graphs, histograms, and scatter plots for interval data, and pie or bar graphs for categorical data. Ultimately, the aim is to provide a straightforward and effective way to comprehend the data's characteristics and underlying structure. A data analyst uses these visualisation techniques to make initial conclusions, detect anomalies, and decide on further analysis paths.


@@ -0,0 +1,3 @@
# Descriptive Analysis
In the realm of data analytics, descriptive analysis plays an imperative role as a fundamental step in data interpretation. Essentially, descriptive analysis encompasses the process of summarizing, organizing, and simplifying complex data into understandable and interpretable forms. This method entails the use of various statistical tools to depict patterns, correlations, and trends in a data set. For data analysts, it serves as the cornerstone for in-depth data exploration, providing the groundwork upon which further analysis techniques such as predictive and prescriptive analysis are built.


@@ -0,0 +1,3 @@
# Tableau in Data Visualization
Tableau is a powerful data visualization tool utilized extensively by data analysts worldwide. Its primary role is to transform raw, unprocessed data into an understandable format without requiring any coding. Data analysts use Tableau to create data visualizations, reports, and dashboards that help businesses make more informed, data-driven decisions. They also use it to perform tasks like trend analysis, pattern identification, and forecasting, all within a user-friendly interface. Moreover, Tableau's data visualization capabilities make it easier for stakeholders to understand complex data and act on insights quickly.


@@ -0,0 +1,3 @@
# PowerBI
PowerBI, an interactive data visualization and business analytics tool developed by Microsoft, plays a crucial role in a data analyst's work. It helps data analysts convert raw data into meaningful insights through its easy-to-use dashboards and reporting functions. This tool provides a unified view of business data, allowing analysts to track and visualize key performance metrics and make better-informed business decisions. With PowerBI, data analysts also have the ability to manipulate and produce visualizations of large data sets that can be shared across an organization, making complex statistical information more digestible.


@@ -0,0 +1,3 @@
# Matplotlib
For a Data Analyst, understanding data and being able to represent it in a visually insightful form is a crucial part of effective decision-making in any organization. Matplotlib, a plotting library for the Python programming language, is an extremely useful tool for this purpose. It presents a versatile framework for generating line plots, scatter plots, histograms, bar charts, and much more in a very straightforward manner. This library also allows for comprehensive customizations, offering a high level of control over the look and feel of the graphics it produces, which ultimately enhances the quality of data interpretation and communication.
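A minimal sketch of that workflow, using invented monthly revenue figures (the `Agg` backend renders off-screen so no display is needed):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

# Hypothetical monthly revenue figures.
months = ["Jan", "Feb", "Mar", "Apr"]
revenue = [120, 135, 128, 150]

fig, ax = plt.subplots()
ax.plot(months, revenue, marker="o")  # a simple line plot
ax.set_title("Monthly revenue")       # customize titles and labels
ax.set_xlabel("Month")
ax.set_ylabel("Revenue (k$)")
fig.savefig("revenue.png")            # export the figure to a file
```

The same `ax` object exposes `bar`, `scatter`, and `hist` for the other chart types mentioned above.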


@@ -0,0 +1,3 @@
# Seaborn
Seaborn is a robust, comprehensive Python library focused on the creation of informative and attractive statistical graphics. For a data analyst, Seaborn plays an essential role in telling complex visual stories with the data. It aids in understanding the data by providing an interface for drawing attractive and informative statistical graphics. Seaborn is built on top of Python's core visualization library, Matplotlib, and integrates with data structures from Pandas. This makes Seaborn an integral tool for data visualization in the data analyst's toolkit, making the exploration and understanding of data easier and more intuitive.


@@ -0,0 +1,3 @@
# Data Visualization with ggplot2
ggplot2 is an important and powerful tool in the data analyst's toolkit, especially for visualizing and understanding complex datasets. Built within the R programming language, it provides a flexible, cohesive environment for creating graphs. The main strength of ggplot2 lies in its ability to produce sophisticated and tailored visualizations. This allows data analysts to communicate data-driven findings in an efficient and effective manner, enabling clear communication to stakeholders about relevant insights and patterns identified within the data.


@@ -0,0 +1,3 @@
# Bar Charts in Data Visualization
As a vital tool in the data analyst's arsenal, bar charts are essential for analyzing and interpreting complex data. Bar charts, otherwise known as bar graphs, are frequently used graphical displays for dealing with categorical data groups or discrete variables. With their stark visual contrast and definitive measurements, they provide a simple yet effective means of identifying trends, understanding data distribution, and making data-driven decisions. By analyzing the lengths or heights of different bars, data analysts can effectively compare categories or variables against each other and derive meaningful insights. Simplicity, readability, and easy interpretation are key features that make bar charts a favorite in the world of data analytics.


@@ -0,0 +1,3 @@
# Histograms
As a Data Analyst, understanding and representing complex data in a simplified and comprehensible form is of paramount importance. This is where the concept of data visualization comes into play, specifically the use of histograms. A histogram is a graphical representation that organizes a group of data points into a specified range. It provides a visual interpretation of numerical data by indicating the number of data points that fall within a specified range of values, known as bins. This highly effective tool allows data analysts to view data distribution over a continuous interval or a certain time period, which can further aid in identifying trends, outliers, patterns, or anomalies present in the data. Consequently, histograms are instrumental in making informed business decisions based on these data interpretations.
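The binning step at the heart of a histogram can be sketched without any plotting library at all. A minimal example, using invented data and fixed-width bins of 10:

```python
# A histogram groups values into fixed-width bins and counts
# how many data points fall in each bin.
data = [3, 7, 8, 12, 13, 14, 18, 21, 22, 29]
bin_width = 10

counts = {}
for value in data:
    bin_start = (value // bin_width) * bin_width  # lower edge of the bin
    counts[bin_start] = counts.get(bin_start, 0) + 1

# counts maps each bin's lower edge to its frequency:
# {0: 3, 10: 4, 20: 3}
```

Plotting libraries such as Matplotlib perform this same binning internally before drawing the bars.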


@@ -0,0 +1,3 @@
# Line Chart
Data visualization is a crucial skill for every Data Analyst and the Line Chart is one of the most commonly used chart types in this field. Line charts act as powerful tools for summarizing and interpreting complex datasets. Through attractive and interactive design, these charts allow for clear and efficient communication of patterns, trends, and outliers in the data. This makes them valuable for data analysts when presenting data spanning over a period of time, forecasting trends or demonstrating relationships between different data sets.


@@ -0,0 +1,3 @@
# Stacked Chart
A stacked chart is an essential tool for a data analyst in the field of data visualization. This type of chart presents quantitative data in a visually appealing manner and allows users to easily compare different categories while still being able to compare the total sizes. These charts are highly effective when measuring part-to-whole relationships, displaying accumulated totals over time, or presenting data with multiple variables. Data analysts often use stacked charts to detect patterns, trends, and anomalies, which can aid in strategic decision-making.


@@ -0,0 +1,3 @@
# Scatter Plot
A scatter plot, a crucial aspect of data visualization, is a mathematical diagram using Cartesian coordinates to represent values from two different variables. As a data analyst, understanding and interpreting scatter plots can be instrumental in identifying correlations and trends within a dataset, drawing meaningful insights, and showcasing these findings in a clear, visual manner. In addition, scatter plots are paramount in predictive analytics as they reveal patterns which can be used to predict future occurrences.
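The correlation a scatter plot reveals visually can also be quantified with the Pearson correlation coefficient. A minimal sketch implementing it from the definition, with invented paired data (hours studied vs. exam score):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired data: hours studied vs. exam score.
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 70, 78]

r = pearson_r(hours, scores)
assert r > 0.9  # a strong positive linear relationship
```

Values of `r` near +1 or -1 correspond to points lying close to a straight line on the scatter plot; values near 0 indicate no linear relationship.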

Some files were not shown because too many files have changed in this diff.