mirror of
https://github.com/kamranahmedse/developer-roadmap.git
synced 2026-03-15 11:21:46 +08:00
Compare commits: feat/proje... → feat/langu... (106 commits)
| SHA1 |
|---|
| 19dae66a2b |
| 4adcf16f2d |
| 76970ab2ac |
| bfb3a3eb30 |
| 447bf4eb0f |
| e7c9135e99 |
| e006871ce6 |
| 164baba193 |
| 7f776808df |
| 82e4e18b4d |
| 285dd28ae7 |
| 0af30bc421 |
| 2890c722fd |
| 4bbab1fbee |
| b2081fd427 |
| 85135c5da9 |
| ccc50b9c36 |
| ba2ff16092 |
| 109d9c578a |
| 77b9912ada |
| d662292906 |
| 45a9459f21 |
| f06ccc5c37 |
| 37f2b75e07 |
| ec94ff055f |
| c1ae24fa20 |
| 2007167fa9 |
| c67a7d195d |
| a6cf26b1b0 |
| 57af6e493a |
| e10caeca44 |
| c9ed9271fe |
| 0dddd941d6 |
| 83c95fbc18 |
| c2a36e6c38 |
| 124ce3eee7 |
| ab69587aa5 |
| ad3a5da811 |
| 1173e7c932 |
| cab7373201 |
| 0de4345cb7 |
| 54c3f36e64 |
| 402ba0e9f3 |
| 93edff078e |
| 015e54d158 |
| 7ad2732786 |
| 158f41f220 |
| 7cb4287925 |
| a9b6e01043 |
| 3cba71b3ac |
| 6b9754e885 |
| 7e884c6593 |
| 6c61244a14 |
| 80c564340a |
| 8a52d58341 |
| 447fce674a |
| 74051ee843 |
| 4a758b1b55 |
| 2c68cb83c2 |
| 2e2d11328d |
| eefe365068 |
| dde429caa0 |
| 111dc0a6d0 |
| c6a4bff63e |
| bcc456d3d0 |
| 4d1b9ab093 |
| edfcc84ece |
| 3c3a92835d |
| 7269227dc2 |
| bfd615f755 |
| 65a4f903f6 |
| ecda4b6eb3 |
| 2ff54205ef |
| a8801820cf |
| e89b00f4f0 |
| 82da716657 |
| 7f7851e8e9 |
| 7b9b783472 |
| dfc38db855 |
| 0b2119be50 |
| 3d71390126 |
| 8d801652b9 |
| 4c2109d470 |
| 6a8019f890 |
| e2172abc72 |
| 032361936b |
| c21dc44975 |
| 0c78ab8369 |
| 7bf3672ef6 |
| 58e7697451 |
| 56e58d431a |
| f8e9642e6e |
| 7a6933699a |
| 9289099980 |
| d2dad38963 |
| 29a4dc25b0 |
| 34cdd8c79a |
| 658ed6738b |
| dd3f89b58a |
| 7e57bfc854 |
| 25c1228bf2 |
| f6c758c3ef |
| 82fbb1235e |
| fbd24ea5e2 |
| 1981568501 |
| 8a5c0eeb5f |
.github/ISSUE_TEMPLATE/config.yml (vendored): 6 changes
```diff
@@ -1,14 +1,14 @@
 blank_issues_enabled: false
 contact_links:
   - name: ✋ Roadmap Request
-    url: https://discord.gg/ZrSpJ8zH
+    url: https://roadmap.sh/discord
     about: Please do not open issues with roadmap requests, hop onto the discord server for that.
   - name: 📝 Typo or Grammatical Mistake
     url: https://github.com/kamranahmedse/developer-roadmap/tree/master/src/data
     about: Please submit a pull request instead of reporting it as an issue.
   - name: 💬 Chat on Discord
-    url: https://discord.gg/ZrSpJ8zH
+    url: https://roadmap.sh/discord
     about: Join the community on our Discord server.
   - name: 🤝 Guidance
-    url: https://discord.gg/ZrSpJ8zH
+    url: https://roadmap.sh/discord
     about: Join the community in our Discord server.
```
````diff
@@ -27,6 +27,24 @@ For the existing roadmaps, please follow the details listed for the nature of co
 
+If you have a project idea that you think we should add to the roadmap, feel free to open an issue with as much details about the project as possible and the roadmap you think it should be added to.
+
+The detailed format for issue should be as follows:
+
+```
+## What is this project about?
+
+(Add introduction to the project)
+
+## Skills this Project Covers
+
+(Comma separated list of skills e.g. Programming Knowledge, Database,)
+
+## Requirements
+
+( Detailed list of requirements, i.e. input, output, an hints to help build this etc)
+```
+
 Have a look at this project to get an idea of [what we are looking for](https://roadmap.sh/projects/github-user-activity).
 
 ## Adding Content
 
 Find [the content directory inside the relevant roadmap](https://github.com/kamranahmedse/developer-roadmap/tree/master/src/data/roadmaps). Please keep the following guidelines in mind when submitting content:
````
```diff
@@ -1846,23 +1846,57 @@
   },
   "mm6c7GLQEwoQdAHdAYzGh": {
     "title": "Security",
-    "description": "",
-    "links": []
+    "description": "This topic describes Angular's built-in protections against common web-application vulnerabilities and attacks such as cross-site scripting attacks. It doesn't cover application-level security, such as authentication and authorization.\n\nVisit the following resources to learn more:",
+    "links": [
+      {
+        "title": "Angular Official Docs - Security",
+        "url": "https://angular.dev/best-practices/security",
+        "type": "article"
+      },
+      {
+        "title": "Open Web Application Security Project (OWASP)",
+        "url": "https://owasp.org/",
+        "type": "article"
+      }
+    ]
   },
   "umUX4Hxk7srHlFR_Un-u7": {
     "title": "Cross-site Scripting",
-    "description": "",
-    "links": []
+    "description": "Cross-site scripting (XSS) enables attackers to inject malicious code into web pages. Such code can then, for example, steal user and login data, or perform actions that impersonate the user. This has been one of the biggest web security vulnerabilities for over a decade.\n\nTo systematically block XSS bugs, Angular treats all values as untrusted by default. When a value is inserted into the DOM from a template binding, or interpolation, Angular sanitizes and escapes untrusted values.\n\nVisit the following resources to learn more:",
+    "links": [
+      {
+        "title": "Angular Official Docs - Preventing cross-site scripting (XSS)",
+        "url": "https://angular.dev/best-practices/security#preventing-cross-site-scripting-xss",
+        "type": "article"
+      },
+      {
+        "title": "Mitigate cross-site scripting (XSS)",
+        "url": "https://web.dev/articles/strict-csp",
+        "type": "article"
+      }
+    ]
   },
   "cgI9oeUHufA-ky_W1zENe": {
     "title": "Sanitization",
-    "description": "",
-    "links": []
+    "description": "Sanitization is the inspection of an untrusted value, turning it into a value that's safe to insert into the DOM. In many cases, sanitization doesn't change a value at all. Sanitization depends on context: A value that's harmless in CSS is potentially dangerous in a URL.\n\nAngular sanitizes untrusted values for HTML and URLs. Sanitizing resource URLs isn't possible because they contain arbitrary code. In development mode, Angular prints a console warning when it has to change a value during sanitization.\n\nInterpolated content is always escaped — the HTML isn't interpreted and the browser displays angle brackets in the element's text content.\n\nFor the HTML to be interpreted, bind it to an HTML property such as `innerHTML`. Be aware that binding a value that an attacker might control into `innerHTML` normally causes an XSS vulnerability.\n\nVisit the following resources to learn more:",
+    "links": [
+      {
+        "title": "Angular Official Docs - Sanitization and security contexts",
+        "url": "https://angular.dev/best-practices/security#sanitization-and-security-contexts",
+        "type": "article"
+      }
+    ]
   },
   "XoYSuv1salCCHoI1cJkxv": {
     "title": "Trusting Safe Values",
-    "description": "",
-    "links": []
+    "description": "Sometimes applications genuinely need to include executable code, display an `<iframe>` from some URL, or construct potentially dangerous URLs. To prevent automatic sanitization in these situations, tell Angular that you inspected a value, checked how it was created, and made sure it is secure. Do be careful. If you trust a value that might be malicious, you are introducing a security vulnerability into your application. If in doubt, find a professional security reviewer.\n\nVisit the following resources to learn more:",
+    "links": [
+      {
+        "title": "Angular Official Docs - Trusting safe values",
+        "url": "https://angular.dev/best-practices/security#trusting-safe-values",
+        "type": "article"
+      }
+    ]
   },
   "5h7U0spwEUhB-hbjSlaeB": {
     "title": "Enforce Trusted Types",
```
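The escaping rule described above (interpolated content is rendered as text, never interpreted as HTML) can be sketched with a plain function. This is a conceptual illustration only, not Angular's actual internal sanitizer:

```typescript
// Conceptual sketch of HTML escaping as applied to interpolated values.
// NOT Angular's implementation - just the idea: replace the characters
// that carry meaning in HTML so the browser shows them as literal text.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;") // must come first, so entities aren't double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// An injected script tag becomes inert visible text:
const payload = "<script>steal(document.cookie)</script>";
const rendered = escapeHtml(payload);
```

Binding the same value to `innerHTML` without this step is exactly the XSS hazard the description warns about.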
```diff
@@ -339,11 +339,6 @@
         "title": "Explore top posts about Python",
         "url": "https://app.daily.dev/tags/python?ref=roadmapsh",
         "type": "article"
-      },
-      {
-        "title": "Python for Beginners - Learn Python in 1 Hour",
-        "url": "https://www.youtube.com/watch?v=kqtD5dpn9C8&ab_channel=ProgrammingwithMosh",
-        "type": "video"
       }
     ]
   },
```
```diff
@@ -1197,6 +1192,11 @@
         "title": "Explore top posts about GraphQL",
         "url": "https://app.daily.dev/tags/graphql?ref=roadmapsh",
         "type": "article"
+      },
+      {
+        "title": "Tutorial - GraphQL Explained in 100 Seconds",
+        "url": "https://www.youtube.com/watch?v=eIQh02xuVw4",
+        "type": "video"
       }
     ]
   },
```
```diff
@@ -1704,6 +1704,11 @@
         "title": "RabbitMQ Tutorial - Message Queues and Distributed Systems",
         "url": "https://www.youtube.com/watch?v=nFxjaVmFj5E",
         "type": "video"
+      },
+      {
+        "title": "RabbitMQ in 100 Seconds",
+        "url": "https://m.youtube.com/watch?v=NQ3fZtyXji0",
+        "type": "video"
       }
     ]
   },
```
```diff
@@ -2909,7 +2914,7 @@
   },
   "dwfEHInbX2eFiafM-nRMX": {
     "title": "DynamoDB",
-    "description": "",
+    "description": "DynamoDB is a fully managed NoSQL database service provided by AWS, designed for high-performance applications that require low-latency data access at any scale.\n\nIt supports key-value and document data models, allowing developers to store and retrieve any amount of data with predictable performance.\n\nDynamoDB is known for its seamless scalability, automatic data replication across multiple AWS regions, and built-in security features, making it ideal for use cases like real-time analytics, mobile apps, gaming, IoT, and more.\n\nKey features include flexible schema design, powerful query capabilities, and integration with other AWS services.",
     "links": []
   },
   "RyJFLLGieJ8Xjt-DlIayM": {
```
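The key-value model the description mentions (items addressed by a key, retrieved with predictable cost) can be sketched with a tiny in-memory table. This is an illustration of the data model only, not the AWS SDK, and the key-joining scheme is a simplification:

```typescript
// In-memory sketch of a key-value table: each item is addressed by a
// partition key plus a sort key, the access pattern DynamoDB optimizes for.
// Illustration only - not the AWS SDK, and the "#" join can collide for
// keys that themselves contain "#".
type Item = Record<string, unknown>;

class KeyValueTable {
  private items = new Map<string, Item>();

  private keyFor(pk: string, sk: string): string {
    return `${pk}#${sk}`;
  }

  put(pk: string, sk: string, item: Item): void {
    this.items.set(this.keyFor(pk, sk), item);
  }

  get(pk: string, sk: string): Item | undefined {
    return this.items.get(this.keyFor(pk, sk));
  }
}

const table = new KeyValueTable();
table.put("USER#42", "PROFILE", { name: "Ada" });
```

Lookups by full key are constant-time in this sketch, mirroring why key-value access scales predictably.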
```diff
@@ -2966,8 +2971,14 @@
   },
   "WiAK70I0z-_bzbWNwiHUd": {
     "title": "TimeScale",
-    "description": "",
-    "links": []
+    "description": "TimescaleDB is an open-source time-series database built on top of PostgreSQL, designed for efficiently storing and querying time-series data.\n\nIt introduces the concept of hypertables, which automatically partition data by time and space, making it ideal for high-volume data scenarios like monitoring, IoT, and financial analytics.\n\nTimescaleDB combines the power of relational databases with the performance of a specialized time-series solution, offering advanced features like continuous aggregates, real-time analytics, and seamless integration with PostgreSQL's ecosystem.\n\nIt's a robust choice for developers looking to manage time-series data in scalable and efficient ways.\n\nVisit the following resources to learn more:",
+    "links": [
+      {
+        "title": "Tutorial - TimeScaleDB Explained in 100 Seconds",
+        "url": "https://www.youtube.com/watch?v=69Tzh_0lHJ8",
+        "type": "video"
+      }
+    ]
   },
   "gT6-z2vhdIQDzmR2K1g1U": {
     "title": "Cassandra",
```
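The hypertable idea described above (rows automatically partitioned by time) can be sketched by bucketing timestamps into fixed-width chunks. This is conceptual only; the chunk width is an assumed example, not TimescaleDB's implementation:

```typescript
// Conceptual sketch of time partitioning: every row is assigned to a
// fixed-width chunk by its timestamp, the way a hypertable partitions
// data by time. One-day chunks are an assumed example width.
const CHUNK_WIDTH_MS = 24 * 60 * 60 * 1000;

function chunkFor(timestampMs: number): number {
  return Math.floor(timestampMs / CHUNK_WIDTH_MS);
}

// Rows from the same day land in the same chunk, so a time-range query
// only has to touch the chunks that overlap the requested range.
const morning = chunkFor(Date.UTC(2024, 0, 15, 8, 0, 0));
const evening = chunkFor(Date.UTC(2024, 0, 15, 20, 0, 0));
const nextDay = chunkFor(Date.UTC(2024, 0, 16, 8, 0, 0));
```

This pruning of non-overlapping chunks is what makes high-volume time-range queries cheap.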
```diff
@@ -2997,7 +3008,7 @@
   },
   "5xy66yQrz1P1w7n6PcAFq": {
     "title": "AWS Neptune",
-    "description": "",
+    "description": "AWS Neptune is a fully managed graph database service designed for applications that require highly connected data.\n\nIt supports two popular graph models: Property Graph and RDF (Resource Description Framework), allowing you to build applications that traverse billions of relationships with millisecond latency.\n\nNeptune is optimized for storing and querying graph data, making it ideal for use cases like social networks, recommendation engines, fraud detection, and knowledge graphs.\n\nIt offers high availability, automatic backups, and multi-AZ (Availability Zone) replication, ensuring data durability and fault tolerance.\n\nAdditionally, Neptune integrates seamlessly with other AWS services and supports open standards like Gremlin, SPARQL, and Apache TinkerPop, making it flexible and easy to integrate into existing applications.",
     "links": []
   },
   "Z01E67D6KjrShvQCHjGR7": {
```
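The relationship traversal a graph database optimizes for can be sketched with a plain adjacency list and a breadth-first walk. This is an illustration of the property-graph idea only; real Neptune queries would use Gremlin or SPARQL, and the vertex names here are made up:

```typescript
// Minimal adjacency-list sketch of traversing relationships in a graph,
// e.g. "everyone reachable through follow edges" in a social network.
// Hypothetical data - not a Neptune API.
const follows = new Map<string, string[]>([
  ["alice", ["bob"]],
  ["bob", ["carol"]],
  ["carol", []],
]);

function reachable(start: string): string[] {
  const seen = new Set<string>([start]);
  const queue = [start];
  while (queue.length > 0) {
    const v = queue.shift() as string;
    for (const next of follows.get(v) ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  seen.delete(start); // report only the vertices discovered via edges
  return [...seen];
}
```

Storing edges natively makes multi-hop walks like this cheap, which is the selling point of the graph model over joining relational tables per hop.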
```diff
@@ -499,7 +499,7 @@
     ]
   },
   "JLXIbP-y8C2YktIk3R12m": {
-    "title": "Ehereum",
+    "title": "Ethereum",
     "description": "Ethereum is a programmable blockchain platform with the capacity to support smart contracts, dapps (decentralized apps), and other DeFi projects. The Ethereum native token is the Ether (ETH), and it’s used to fuel operations on the blockchain.\n\nThe Ethereum platform launched in 2015, and it’s now the second largest form of crypto next to Bitcoin (BTC).\n\nVisit the following resources to learn more:",
     "links": [
       {
```
```diff
@@ -365,36 +365,10 @@
       }
     ]
   },
-  "BqvijNoRzSGYLCMP-6hhr": {
-    "title": "CISSP",
-    "description": "The Certified Information Systems Security Professional (CISSP) is a globally recognized certification offered by the International Information System Security Certification Consortium (ISC)². It is designed for experienced security professionals to validate their knowledge and expertise in the field of information security.\n\nWho Should Obtain the CISSP Certification?\n------------------------------------------\n\nThe CISSP certification is ideal for security consultants, managers, IT directors, security auditors, security analysts, and other professionals who are responsible for designing, implementing, and managing security for their organization. This certification is aimed at professionals with at least five years of full-time experience in two or more of the eight CISSP domains:\n\n* Security and Risk Management\n* Asset Security\n* Security Architecture and Engineering\n* Communication and Network Security\n* Identity and Access Management (IAM)\n* Security Assessment and Testing\n* Security Operations\n* Software Development Security\n\nCertification Process\n---------------------\n\nTo obtain the CISSP certification, candidates must meet the following requirements:\n\n* **Experience:** Possess a minimum of five years of cumulative, paid, full-time work experience in two or more of the eight domains of the CISSP Common Body of Knowledge (CBK).\n \n* **Exam:** Pass the CISSP examination with a minimum scaled score of 700 out of 1000 points. The exam consists of 100 to 150 multiple-choice and advanced innovative questions that must be completed within three hours.\n \n* **Endorsement:** After passing the exam, candidates must submit an endorsement application to be reviewed and endorsed by an (ISC)² CISSP holder within nine months of passing the exam.\n \n* **Continuing Professional Education (CPE):** To maintain the CISSP certification, professionals must earn 120 CPE credits every three years, with a minimum of 40 credits earned each year, and pay an annual maintenance fee.\n \n\nBenefits of CISSP Certification\n-------------------------------\n\nObtaining the CISSP certification comes with numerous benefits, such as:\n\n* Enhanced credibility, as the CISSP is often considered the gold standard in information security certifications.\n* Increased job opportunities, as many organizations and government agencies require or prefer CISSP-certified professionals.\n* Improved knowledge and skills, as the certification covers a broad range of security topics and best practices.\n* Higher salary potential, as CISSP-certified professionals often command higher salaries compared to their non-certified counterparts.\n* Access to a network of other CISSP-certified professionals and resources, enabling continuous learning and professional development.\n\nLearn more from the following resources",
-    "links": [
-      {
-        "title": "ISC2 CISSP",
-        "url": "https://www.isc2.org/certifications/cissp",
-        "type": "article"
-      },
-      {
-        "title": "ISC2 CISSP - Official Study Guide",
-        "url": "https://www.wiley.com/en-us/ISC2+CISSP+Certified+Information+Systems+Security+Professional+Official+Study+Guide%2C+10th+Edition-p-9781394254699",
-        "type": "article"
-      },
-      {
-        "title": "Destcert - CISSP Free Resources",
-        "url": "https://destcert.com/resources/",
-        "type": "article"
-      },
-      {
-        "title": "CISSP Exam Cram 2024",
-        "url": "https://youtube.com/playlist?list=PL7XJSuT7Dq_XPK_qmYMqfiBjbtHJRWigD&si=_wSeCkvj-1rzv0ZF",
-        "type": "video"
-      },
-      {
-        "title": "CISSP Prep (Coffee Shots)",
-        "url": "https://youtube.com/playlist?list=PL0hT6hgexlYxKzBmiCD6SXW0qO5ucFO-J&si=9ICs373Vl1ce3s0H",
-        "type": "video"
-      }
-    ]
-  },
   "AAo7DXB7hyBzO6p05gx1i": {
     "title": "CEH",
     "description": "**Certified Ethical Hacker (CEH)** is an advanced certification focused on equipping cybersecurity professionals with the knowledge and skills required to defend against the continuously evolving landscape of cyber threats. This certification is facilitated by the EC-Council, an internationally recognized organization for information security certifications.\n\nObjectives\n----------\n\nThe CEH certification aims to provide professionals with the following skills:\n\n* Understand the ethics and legal requirements of ethical hacking\n* Identify and analyze common cyber threats, including malware, social engineering, and various network attacks\n* Utilize the latest penetration testing tools and methodologies to uncover vulnerabilities in systems, networks, and applications\n* Implement defensive countermeasures to protect against cyber attacks\n\nTarget Audience\n---------------\n\nThe CEH certification is ideal for:\n\n* Cybersecurity professionals seeking to expand their skill set\n* IT administrators responsible for securing their organization's systems and network\n* Penetration testers looking to demonstrate their ethical hacking capabilities\n* Security consultants who want a recognized certification in the IT security field\n\nExam Details\n------------\n\nTo become a Certified Ethical Hacker, you must pass the CEH exam, which consists of the following:\n\n* Number of Questions: 125\n* Exam Type: Multiple choice questions\n* Duration: 4 hours\n* Passing Score: 70%\n\nPreparation\n-----------\n\nTo prepare for the CEH exam, candidates can follow the EC-Council's official training course or opt for self-study. The recommended resources include:\n\n* EC-Council's [_CEH v11: Certified Ethical Hacker_](https://www.eccouncil.org/programs/certified-ethical-hacker-ceh/) training course\n* Official CEH study guide and practice exams\n* CEH-related books, articles, and online resources\n\nRecertification\n---------------\n\nCEH holders need to earn 120 ECE (Education Credits) within three years of obtaining their certification to retain their credentials. These credits can be obtained through training, workshops, conferences, and other continuous learning opportunities in the field of information security.",
     "links": []
   },
   "lqFp4VLY_S-5tAbhNQTew": {
     "title": "CISA",
```
```diff
@@ -436,10 +410,36 @@
     "description": "CREST is a non-profit, accreditation and certification body that represents the technical information security industry. Established in 2008, its mission is to promote the development and professionalization of the cyber security sector. CREST provides certifications for individuals and accreditations for companies, helping customers find knowledgeable and experienced professionals in the field.\n\nCREST Examinations and Certifications\n-------------------------------------\n\nCREST offers various examinations and certifications, including:\n\n* **CREST Practitioner Security Analyst (CPSA)**: This is an entry-level certification for individuals looking to demonstrate their knowledge and competence in vulnerability assessment and penetration testing. Passing the CPSA exam is a prerequisite for taking other CREST technical examinations.\n \n* **CREST Registered Penetration Tester (CRT)**: This certification is aimed at professionals with a solid understanding of infrastructure and web application penetration testing. CRT holders have demonstrated practical skills in identifying and exploiting vulnerabilities in a controlled environment.\n \n* **CREST Certified Infrastructure Tester (CCIT)** and **CREST Certified Web Application Tester (CCWAT)**: These advanced certifications require candidates to have a deep technical understanding and practical skills in infrastructure or web application testing, respectively. These certifications are intended for experienced professionals who can perform in-depth technical assessments and identify advanced security vulnerabilities.\n \n* **CREST Certified Simulated Attack Manager (CCSAM)** and **CREST Certified Simulated Attack Specialist (CCSAS)**: These certifications focus on the planning, scoping, and management of simulated attack engagements, or red teaming. They require candidates to have experience in both the technical and managerial aspects of coordinated cyber attacks.\n \n\nBenefits of CREST Certifications\n--------------------------------\n\nObtaining CREST certifications provides several benefits, such as:\n\n* Increased credibility and recognition within the cyber security industry\n* Validation of your technical knowledge and expertise\n* Access to resources and support through the CREST community\n* Assurance for employers and clients that you're skilled and trustworthy\n\nIn the rapidly evolving field of cyber security, CREST certifications demonstrate a commitment to continuous learning, growth, and professionalism.",
     "links": []
   },
   "AAo7DXB7hyBzO6p05gx1i": {
     "title": "CEH",
     "description": "**Certified Ethical Hacker (CEH)** is an advanced certification focused on equipping cybersecurity professionals with the knowledge and skills required to defend against the continuously evolving landscape of cyber threats. This certification is facilitated by the EC-Council, an internationally recognized organization for information security certifications.\n\nObjectives\n----------\n\nThe CEH certification aims to provide professionals with the following skills:\n\n* Understand the ethics and legal requirements of ethical hacking\n* Identify and analyze common cyber threats, including malware, social engineering, and various network attacks\n* Utilize the latest penetration testing tools and methodologies to uncover vulnerabilities in systems, networks, and applications\n* Implement defensive countermeasures to protect against cyber attacks\n\nTarget Audience\n---------------\n\nThe CEH certification is ideal for:\n\n* Cybersecurity professionals seeking to expand their skill set\n* IT administrators responsible for securing their organization's systems and network\n* Penetration testers looking to demonstrate their ethical hacking capabilities\n* Security consultants who want a recognized certification in the IT security field\n\nExam Details\n------------\n\nTo become a Certified Ethical Hacker, you must pass the CEH exam, which consists of the following:\n\n* Number of Questions: 125\n* Exam Type: Multiple choice questions\n* Duration: 4 hours\n* Passing Score: 70%\n\nPreparation\n-----------\n\nTo prepare for the CEH exam, candidates can follow the EC-Council's official training course or opt for self-study. The recommended resources include:\n\n* EC-Council's [_CEH v11: Certified Ethical Hacker_](https://www.eccouncil.org/programs/certified-ethical-hacker-ceh/) training course\n* Official CEH study guide and practice exams\n* CEH-related books, articles, and online resources\n\nRecertification\n---------------\n\nCEH holders need to earn 120 ECE (Education Credits) within three years of obtaining their certification to retain their credentials. These credits can be obtained through training, workshops, conferences, and other continuous learning opportunities in the field of information security.",
     "links": []
   },
+  "BqvijNoRzSGYLCMP-6hhr": {
+    "title": "CISSP",
+    "description": "The Certified Information Systems Security Professional (CISSP) is a globally recognized certification offered by the International Information System Security Certification Consortium (ISC)². It is designed for experienced security professionals to validate their knowledge and expertise in the field of information security.\n\nWho Should Obtain the CISSP Certification?\n------------------------------------------\n\nThe CISSP certification is ideal for security consultants, managers, IT directors, security auditors, security analysts, and other professionals who are responsible for designing, implementing, and managing security for their organization. This certification is aimed at professionals with at least five years of full-time experience in two or more of the eight CISSP domains:\n\n* Security and Risk Management\n* Asset Security\n* Security Architecture and Engineering\n* Communication and Network Security\n* Identity and Access Management (IAM)\n* Security Assessment and Testing\n* Security Operations\n* Software Development Security\n\nCertification Process\n---------------------\n\nTo obtain the CISSP certification, candidates must meet the following requirements:\n\n* **Experience:** Possess a minimum of five years of cumulative, paid, full-time work experience in two or more of the eight domains of the CISSP Common Body of Knowledge (CBK).\n \n* **Exam:** Pass the CISSP examination with a minimum scaled score of 700 out of 1000 points. The exam consists of 100 to 150 multiple-choice and advanced innovative questions that must be completed within three hours.\n \n* **Endorsement:** After passing the exam, candidates must submit an endorsement application to be reviewed and endorsed by an (ISC)² CISSP holder within nine months of passing the exam.\n \n* **Continuing Professional Education (CPE):** To maintain the CISSP certification, professionals must earn 120 CPE credits every three years, with a minimum of 40 credits earned each year, and pay an annual maintenance fee.\n \n\nBenefits of CISSP Certification\n-------------------------------\n\nObtaining the CISSP certification comes with numerous benefits, such as:\n\n* Enhanced credibility, as the CISSP is often considered the gold standard in information security certifications.\n* Increased job opportunities, as many organizations and government agencies require or prefer CISSP-certified professionals.\n* Improved knowledge and skills, as the certification covers a broad range of security topics and best practices.\n* Higher salary potential, as CISSP-certified professionals often command higher salaries compared to their non-certified counterparts.\n* Access to a network of other CISSP-certified professionals and resources, enabling continuous learning and professional development.\n\nLearn more from the following resources",
+    "links": [
+      {
+        "title": "ISC2 CISSP",
+        "url": "https://www.isc2.org/certifications/cissp",
+        "type": "article"
+      },
+      {
+        "title": "ISC2 CISSP - Official Study Guide",
+        "url": "https://www.wiley.com/en-us/ISC2+CISSP+Certified+Information+Systems+Security+Professional+Official+Study+Guide%2C+10th+Edition-p-9781394254699",
+        "type": "article"
+      },
+      {
+        "title": "Destcert - CISSP Free Resources",
+        "url": "https://destcert.com/resources/",
+        "type": "article"
+      },
+      {
+        "title": "CISSP Exam Cram 2024",
+        "url": "https://youtube.com/playlist?list=PL7XJSuT7Dq_XPK_qmYMqfiBjbtHJRWigD&si=_wSeCkvj-1rzv0ZF",
+        "type": "video"
+      },
+      {
+        "title": "CISSP Prep (Coffee Shots)",
+        "url": "https://youtube.com/playlist?list=PL0hT6hgexlYxKzBmiCD6SXW0qO5ucFO-J&si=9ICs373Vl1ce3s0H",
+        "type": "video"
+      }
+    ]
+  },
   "UY6xdt_V3YMkZxZ1hZLvW": {
     "title": "Operating Systems",
```
```diff
@@ -1958,8 +1958,14 @@
   },
   "O1fY2n40yjZtJUEeoItKr": {
     "title": "Evil Twin",
-    "description": "",
-    "links": []
+    "description": "An Evil Twin is a type of wireless network attack where an attacker sets up a rogue Wi-Fi access point that mimics a legitimate Wi-Fi network. The rogue access point has the same SSID (network name) as the legitimate network, making it difficult for users to distinguish between the two. The attacker's goal is to trick users into connecting to the rogue access point, allowing them to intercept sensitive information, inject malware, or launch other types of attacks.\n\nTypes of Evil Twin Attacks\n--------------------------\n\n* **Captive Portal Attack:** The most common evil twin attack scenario is an attack using Captive Portals, this is a common scenario where an attacker creates a fake captive portal that mimics the legitimate network's login page. The goal is to trick users into entering their credentials, which the attacker can then use to gain access to the network.\n* **Man-in-the-Middle (MitM) Attack:** In this scenario, the attacker intercepts communication between the user's device and the legitimate network. The attacker can then inject malware, steal sensitive information, or modify data in real-time.\n* **SSL Stripping Attack:** The attacker downgrades the user's connection from HTTPS to HTTP, allowing them to intercept sensitive information, such as login credentials or credit card numbers.\n* **Malware Injection:** The attacker injects malware into the user's device, which can then spread to other devices on the network.\n\nHow Evil Twin Attacks are Carried Out\n-------------------------------------\n\n* **Rogue Access Point:** The attacker sets up a rogue access point with the same SSID as the legitimate network. This can be done using a laptop, a portable Wi-Fi router, or even a compromised device on the network.\n* **Wi-Fi Scanning:** The attacker uses specialized software to scan for nearby Wi-Fi networks and identify potential targets.\n* **Network Sniffing:** The attacker uses network sniffing tools to capture and analyze network traffic, allowing them to identify vulnerabilities and intercept sensitive information.\n\nVisit the following resources to learn more:",
+    "links": [
+      {
+        "title": "Common tool - airgeddon",
+        "url": "https://www.kali.org/tools/airgeddon/",
+        "type": "website"
+      }
+    ]
   },
   "urtsyYWViEzbqYLoNfQAh": {
     "title": "DNS Poisoning",
```
```diff
@@ -1998,8 +2004,14 @@
   },
   "P-Am25WJV8cFd_KsX7cdj": {
     "title": "SQL Injection",
-    "description": "",
-    "links": []
+    "description": "**SQL Injection** is a type of web application security vulnerability that allows an attacker to inject malicious SQL code into a web application's database, potentially leading to unauthorized data access, modification, or deletion.\n\nVisit the following resources to learn more:",
+    "links": [
+      {
+        "title": "PortSwigger - SQL Injection",
+        "url": "https://portswigger.net/web-security/sql-injection",
+        "type": "article"
+      }
+    ]
   },
   "pK2iRArULlK-B3iSVo4-n": {
     "title": "CSRF",
```
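The injection described above happens when untrusted input is concatenated into SQL text. A sketch (hypothetical function names, no real database driver) of the vulnerable shape and the parameterized alternative:

```typescript
// Vulnerable shape: untrusted input is spliced directly into the SQL text,
// so input containing quotes rewrites the query itself.
function unsafeQuery(username: string): string {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// The classic payload turns the WHERE clause into an always-true condition.
const injected = unsafeQuery("' OR '1'='1");

// Safer shape, as parameterized queries do: keep the SQL text and the
// values separate, and let the driver send them to the server separately.
// (Placeholder syntax varies by driver; "?" is used here illustratively.)
function parameterized(sql: string, params: string[]): { sql: string; params: string[] } {
  return { sql, params };
}
const safe = parameterized("SELECT * FROM users WHERE name = ?", ["' OR '1'='1"]);
```

Because the payload never reaches the SQL text in the parameterized version, it is treated as data rather than as part of the query.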
```diff
@@ -547,6 +547,11 @@
         "url": "https://www.learnshell.org/en/Welcome",
         "type": "article"
       },
+      {
+        "title": "Bash Scripting Tutorial",
+        "url": "https://www.javatpoint.com/bash",
+        "type": "article"
+      },
       {
         "title": "Explore top posts about Bash",
         "url": "https://app.daily.dev/tags/bash?ref=roadmapsh",
```
@@ -2736,11 +2736,6 @@
"title": "Explore top posts about React",
"url": "https://app.daily.dev/tags/react?ref=roadmapsh",
"type": "article"
},
{
"title": "Build a React Native App by Mosh",
"url": "https://www.youtube.com/watch?v=0-S5a0eXPoc",
"type": "video"
}
]
},
@@ -856,7 +856,7 @@
},
"YVMyHFSCVF-GgXydq-SFJ": {
"title": "Checkpoint — Infrastructure",
"description": "If you remember, earlier in the roadmap, you manually logged into the AWS console and had to setup the services. Now that you know terraform, go ahead and automate the process of creating the infrastructure for your application using terraform and with that your deployments will be fully automated i.e., you should have:\n\n* Infrastructure setup using terraform\n* Provisioning using Ansible\n* CI/CD using GitHub Actions\n* Monitoring using Monit\n\nAnd that is it! You have successfully completed the roadmap and are now a full-stack developer. Congratulations! 🎉\n\nWhat's next?\n------------\n\nGo ahead and build something cool! Share your learnings with the community and help others learn as well. If you have any questions, feel free to join our [discord server](https://discord.gg/ZrSpJ8zH) and ask away!",
"description": "If you remember, earlier in the roadmap, you manually logged into the AWS console and had to setup the services. Now that you know terraform, go ahead and automate the process of creating the infrastructure for your application using terraform and with that your deployments will be fully automated i.e., you should have:\n\n* Infrastructure setup using terraform\n* Provisioning using Ansible\n* CI/CD using GitHub Actions\n* Monitoring using Monit\n\nAnd that is it! You have successfully completed the roadmap and are now a full-stack developer. Congratulations! 🎉\n\nWhat's next?\n------------\n\nGo ahead and build something cool! Share your learnings with the community and help others learn as well. If you have any questions, feel free to join our [discord server](https://roadmap.sh/discord) and ask away!",
"links": []
}
}
@@ -312,12 +312,39 @@
"7OffO2mBmfBKqPBTZ9ngI": {
"title": "Godot",
"description": "Godot is an open-source, multi-platform game engine that is known for being feature-rich and user-friendly. It is developed by hundreds of contributors from around the world and supports the creation of both 2D and 3D games. Godot uses its own scripting language, GDScript, which is similar to Python, but it also supports C# and visual scripting. It is equipped with a unique scene system and comes with a multitude of tools that can expedite the development process. Godot's design philosophy centers around flexibility, extensibility, and ease of use, providing a handy tool for both beginners and pros in game development.",
"links": []
"links": [
{
"title": "Godot in 100 Seconds",
"url": "https://m.youtube.com/watch?v=QKgTZWbwD1U",
"type": "video"
}
]
},
"a6H-cZtp3A_fB8jnfMxBR": {
"title": "Unreal Engine",
"description": "The **Unreal Engine** is a powerful game development engine created by Epic Games. Used by game developers worldwide, it supports the creation of high-quality games across multiple platforms such as iOS, Android, Windows, Mac, Xbox, and PlayStation. Unreal Engine is renowned for its photo-realistic rendering, dynamic physics and effects, robust multiplayer framework, and its flexible scripting system called Blueprint. The engine is also fully equipped with dedicated tools and functionalities for animation, AI, lighting, cinematography, and post-processing effects. The most recent version, Unreal Engine 5, introduces real-time Global Illumination and makes film-quality real-time graphics achievable.",
"links": []
"links": [
{
"title": "Unreal Engine Documentation",
"url": "https://dev.epicgames.com/documentation/en-us/unreal-engine/unreal-engine-5-4-documentation",
"type": "article"
},
{
"title": "Unreal Engine YouTube Channel",
"url": "https://m.youtube.com/channel/UCBobmJyzsJ6Ll7UbfhI4iwQ",
"type": "article"
},
{
"title": "Unreal Source Discord",
"url": "https://discord.gg/unrealsource",
"type": "article"
},
{
"title": "Unreal in 100 Seconds",
"url": "https://www.youtube.com/watch?v=DXDe-2BC4cE",
"type": "video"
}
]
},
"CeAUEN233L4IoFSZtIvvl": {
"title": "Native",
@@ -327,7 +354,13 @@
"rNeOti8DDyWTMP9FB9kJ_": {
"title": "Unity 3D",
"description": "**Unity 3D** is a versatile, cross-platform game engine that supports the development of both 2D and 3D games. This game engine allows users to create a wide variety of games including AR, VR, Mobile, Consoles, and Computers in C#. It provides a host of powerful features and tools, such as scripting, asset bundling, scene building, and simulation, to assist developers in creating interactive content. Unity 3D also boasts a large, active community that regularly contributes tutorials, scripts, assets, and more, making it a robust platform for all levels of game developers.",
"links": []
"links": [
{
"title": "Unity in 100 Seconds",
"url": "https://www.youtube.com/watch?v=iqlH4okiQqg",
"type": "video"
}
]
},
"4YgbrXLXf5mfaL2tlYkzk": {
"title": "Programming Languages",
@@ -347,7 +380,13 @@
"AaRZiItRcn8fYb5R62vfT": {
"title": "Assembly",
"description": "**Assembly** is a low-level programming language, often used for direct hardware manipulation, real-time systems, and to write performance-critical code. It provides a strong correspondence between its instructions and the architecture's machine-code instructions, since it directly represents the specific commands of the computer's CPU structure. However, it's closer to machine language (binary code) than to human language, which makes it difficult to read and understand. The syntax varies greatly, which depends upon the CPU architecture for which it's designed, thus Assembly language written for one type of processor can't be used on another. Despite its complexity, time-intensive coding process and machine-specific nature, Assembly language is still utilized for speed optimization and hardware manipulation where high-level languages may not be sufficient.",
"links": []
"links": [
{
"title": "Code walkthrough of a game written in x64 assembly",
"url": "https://www.youtube.com/watch?v=WUoqlp30M78",
"type": "video"
}
]
},
"ts9pWxUimvFqfNJYCmNNw": {
"title": "Rust",
@@ -491,12 +530,12 @@
},
"aNhyXWW2b7yKTv8y14zk9": {
"title": "Point",
"description": "",
"description": "Point lights are one of the most common types of lights used in computer graphics and games. They resemble real-world light bulbs, emitting light uniformly in all directions.\n\nThese lights are available out of the box in most game engines and offer a range of customizable parameters, such as intensity, falloff, color, and more.\n\nPoint lights are the most straightforward type of light, making them ideal for quickly and intuitively lighting up your scenes.",
"links": []
},
"FetbhcK1RDt4izZ6NEUEP": {
"title": "Spot",
"description": "",
"description": "Spotlights are a common type of light in computer graphics and games that mimic the behavior of real-world spotlights. They offer a range of parameters to adjust their behavior, such as radius, cone angle, falloff, and intensity.\n\nSpotlights are readily available out of the box in both Unreal and Unity game engines, making them an accessible and powerful tool for adding realistic and dynamic lighting to your scenes.",
"links": []
},
"sC3omOmL2DOyTSvET5cDa": {
@@ -531,8 +570,14 @@
},
"UcLGWYu41Ok2NYdLNIY5C": {
"title": "Frustum",
"description": "",
"links": []
"description": "Frustum culling is a standard practice in computer graphics, used in virtually all games to optimize performance by not rendering objects outside of your field of view. Think of your field of view as a frustum, a truncated pyramid shape. The farthest side is called the far clip plane, and the closest side is the near clip plane. Any object in the game that doesn't fall within this frustum is culled, meaning it’s not rendered, to improve performance. This feature comes built-in with Unreal Engine.\n\nYou can also adjust the near and far clip planes to fine-tune culling. For example, if an object is too close to the camera, it may disappear because it crosses the near clip plane threshold. Similarly, objects that are too far away might be culled by the far clip plane. In some cases, distant objects are LOD-ed (Level of Detail), an optimization technique that reduces the detail of the mesh the farther you are from it, and increases detail as you get closer.\n\nFrustum culling is a fundamental technique that is implemented in virtually all modern games to ensure efficient rendering and smooth gameplay.",
"links": [
{
"title": "Frustum Culling - Game Optimization 101 - Unreal Engine",
"url": "https://www.youtube.com/watch?v=Ql56s1erTMI",
"type": "video"
}
]
},
"_1LkU258hzizSIgXipE0b": {
"title": "Light",
@@ -572,7 +617,13 @@
"ffa5-YxRhE3zhWg7KXQ4r": {
"title": "OpenGL",
"description": "OpenGL, also known as Open Graphics Library, is a cross-language, cross-platform API designed to render 2D and 3D vector graphics. As a software interface for graphics hardware, OpenGL gives programmers the ability to create complex graphics visuals in detail. It was first developed by Silicon Graphics Inc. in 1992 and quickly became a highly popular tool in the graphics rendering industry. OpenGL is widely used in CAD, virtual reality, scientific visualization, information visualization, and flight simulation. It is also used in video game production where real-time rendering is a requirement. The API is designed to work with a broad range of hardware from different manufacturers. As an open standard, OpenGL can be implemented and extended by vendors and the wider software community.",
"links": []
"links": [
{
"title": "OpenGL Tutorials",
"url": "https://youtube.com/playlist?list=PLPaoO-vpZnumdcb4tZc4x5Q-v7CkrQ6M-&si=Mr71bYJMgoDhN9h-",
"type": "video"
}
]
},
"CeydBMwckqKll-2AgOlyd": {
"title": "WebGL",
@@ -597,7 +648,13 @@
"oEznLciLxZJaulMlBGgg4": {
"title": "Metal",
"description": "Metal is a low-level, high-performance, application programming interface (API) developed by Apple. It debuted in iOS 8 and is dedicated to graphics and data-parallel computations. Essentially, it's designed to exploit modern GPU architecture on Apple devices, optimizing performance and power efficiency. This API applies to various platforms, including iOS, macOS, and tvOS. In contrast to high-level APIs like OpenGL, Metal offers a much lower overhead, allowing more direct control over the GPU. For developers, it means that they can squeeze better performance out of the hardware compared to higher-level APIs. With Metal, developers have a much more detailed view and control on the GPU which results in better graphical output and smoother performance.",
"links": []
"links": [
{
"title": "Metal Documentation",
"url": "https://developer.apple.com/metal/",
"type": "article"
}
]
},
"yPfhJSTFS7a72UcqF1ROK": {
"title": "Vulkan",
@@ -707,7 +764,13 @@
"rGEHTfdNeBAX3_XqC-vvI": {
"title": "Reinforcement Learning",
"description": "`Reinforcement Learning` is a type of Machine Learning geared towards making decisions. It involves an agent that learns to behave in an environment by performing actions and observing the rewards it receives. The main principle of reinforcement learning is to reward good behavior and penalize bad behavior. The agent learns from the consequences of its actions, rather than from being taught explicitly. In the context of game development, reinforcement learning can be used to develop an AI (Artificial Intelligence) which improves its performance in a game based on reward-driven behavior. The AI gradually learns the optimal strategy, known as a policy, to achieve the best result.",
"links": []
"links": [
{
"title": "AI Learns to Walk (deep reinforcement learning)",
"url": "https://m.youtube.com/watch?v=L_4BPjLBF4E",
"type": "video"
}
]
},
"9_OcZ9rzedDFfwEYxAghh": {
"title": "Learning",
@@ -181,8 +181,19 @@
},
"h71Tx3nkfUrnhaqcHlDkQ": {
"title": "Staging Area",
"description": "",
"links": []
"description": "In Git, a staging area serves as an intermediate step between your local repository changes and the actual commit.\n\n* Temporary storage: The staging area holds changes that are intended to be part of the next commit.\n* Previewing changes: It allows you to preview your changes before committing them.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Getting Started - What is Git? - Staging Area",
"url": "https://git-scm.com/book/en/v2/Getting-Started-What-is-Git%3F#:~:text=The%20staging%20area%20is%20a,area%E2%80%9D%20works%20just%20as%20well.",
"type": "article"
},
{
"title": "What are Staged Changes in Git?",
"url": "https://www.youtube.com/watch?v=HyeNfWZBut8",
"type": "video"
}
]
},
"2_z3R7seCvQVj-Na4H1SV": {
"title": "Committing Changes",
@@ -372,8 +383,19 @@
},
"GS3f1FKFVKT0-GJQrgCm8": {
"title": "Setting up Profile",
"description": "",
"links": []
"description": "On GitHub, creating a profile is an essential step in showcasing yourself as a developer or contributor.\n\n* Sharing information: Your profile page allows others to find out more about you, including your interests and skills.\n* Showcasing projects: You can display your notable projects and contributions, giving a glimpse into your work experience.\n* Expressing identity: The profile also serves as an opportunity for personal expression, allowing you to convey your unique personality and style within the GitHub community.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Setting up your profile",
"url": "https://docs.github.com/en/get-started/start-your-journey/setting-up-your-profile",
"type": "article"
},
{
"title": "GitHub Profile Readme",
"url": "https://www.youtube.com/watch?v=KhGWbt1dAKQ",
"type": "video"
}
]
},
"c_FO6xMixrrMo6iisfsvl": {
"title": "Creating Repositories",
@@ -573,8 +595,24 @@
},
"x6eILrLCQrVpz4j8uOuy6": {
"title": "Pull Requests",
"description": "",
"links": []
"description": "A pull request is a proposal to merge a set of changes from one branch into another. In a pull request, collaborators can review and discuss the proposed set of changes before they integrate the changes into the main codebase. Pull requests display the differences, or diffs, between the content in the source branch and the content in the target branch.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Creating a pull request",
"url": "https://docs.github.com/articles/creating-a-pull-request",
"type": "article"
},
{
"title": "Pull Requests",
"url": "https://www.atlassian.com/git/tutorials/making-a-pull-request#:~:text=In%20their%20simplest%20form%2C%20pull,request%20via%20their%20Bitbucket%20account.",
"type": "article"
},
{
"title": "GitHub Pull Request in 100 Seconds ",
"url": "https://youtu.be/8lGpZkjnkt4?si=qbCQ8Uvzn9GN2koL",
"type": "video"
}
]
},
"8lXXVFkgK6n5IHaYkYe3l": {
"title": "PR from a Fork",
@@ -636,8 +674,19 @@
},
"dQS1V0zZxeKhHhUo3STBK": {
"title": "Saved Replies",
"description": "",
"links": []
"description": "GitHub allows you to save frequently used comments and reuse them when discussing issues or pull requests.\n\n* Saved replies: You can create pre-written comments that can be easily added to conversations.\n* Customization: Saved replies can be edited to fit specific situations, making it easy to tailor your responses.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Using saved replies",
"url": "https://docs.github.com/en/get-started/writing-on-github/working-with-saved-replies/using-saved-replies",
"type": "article"
},
{
"title": "Walkthrough: Using Github’s “Saved Replies” to make life consistent and easy",
"url": "https://prowe214.medium.com/walkthrough-using-githubs-saved-replies-to-make-life-consistent-and-easy-80f23efe6a0",
"type": "article"
}
]
},
"oWMaOWU06juoIuzXNe-wt": {
"title": "Mentions",
@@ -973,8 +1022,19 @@
},
"wydgCxR5VnieBpRolXt8i": {
"title": "Teams within Organization",
"description": "",
"links": []
"description": "GitHub Organizations allow you to create teams within your organization, which helps in organizing members based on their roles and responsibilities.\n\n* Grouping: Team members can be grouped together according to the company or group's structure.\n* Access permissions: Repository access can be granted to an entire team at once, and nested child teams inherit the permissions of their parent team.\n* Mentions: Team mentions allow for easy referencing of specific teams in repository discussions.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Organizing members into teams",
"url": "https://docs.github.com/en/organizations/organizing-members-into-teams",
"type": "article"
},
{
"title": "Best practices for organizations and teams using GitHub Enterprise Cloud",
"url": "https://github.blog/enterprise-software/devops/best-practices-for-organizations-and-teams-using-github-enterprise-cloud/",
"type": "article"
}
]
},
"DzFJDdqnSy5GeGHWOpcVo": {
"title": "GitHub Projects",
@@ -1026,8 +1086,19 @@
},
"sxvT2hGko2PDRBoBrCGWD": {
"title": "Roadmaps",
"description": "",
"links": []
"description": "GitHub roadmaps are a feature that helps you visualize and organize plans for your projects, allowing you to create a high-level view of milestones and goals, and collaborate on planning and tracking progress with team members.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Customizing the roadmap layout",
"url": "https://docs.github.com/en/issues/planning-and-tracking-with-projects/customizing-views-in-your-project/customizing-the-roadmap-layout",
"type": "article"
},
{
"title": "Learn how to use Project Roadmaps - GitHub Checkout",
"url": "https://www.youtube.com/watch?v=D80u__nYYWw",
"type": "video"
}
]
},
"TNBz5755PhI6iKxTQTqcS": {
"title": "Automations",
@@ -1138,7 +1209,7 @@
},
"qFEonbCMLri8iA0yONwuf": {
"title": "git log options",
"description": "`git log` is a command in Git that shows the commit history of your repository. It provides a detailed view of all commits, including their hashes, authors, dates, and messages.\n\nHere are some common git log options:\n\n* \\-2: Only show the last two commits.\n* \\--all: Show all branches in the repository.\n* \\--graph: Display the commit history as a graph.\n* \\--no-color: Disable colorized output.\n* \\--stat: Show a statistical summary of changes.\n* \\*\\*-S\\`: Only show commits with modified files.\n\nYou can combine these options to tailor your log output to suit your needs.\n\nFor example, `git log -2 --graph` will display the last two commits in graph form.\n\nVisit the following resources to learn more:",
"description": "`git log` is a command in Git that shows the commit history of your repository. It provides a detailed view of all commits, including their hashes, authors, dates, and messages.\n\nHere are some common git log options:\n\n* `-2`: Only show the last two commits.\n* `--all`: Show all branches in the repository.\n* `--graph`: Display the commit history as a graph.\n* `--pretty`: Format the log output (e.g., `--pretty=oneline`).\n* `--no-color`: Disable colorized output.\n* `--stat`: Show a statistical summary of changes.\n* `-S<string>`: Only show commits that add or remove the given string.\n\nYou can combine these options to tailor your log output to suit your needs.\n\nFor example, `git log -2 --graph` will display the last two commits in graph form.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Git Log",
@@ -1154,7 +1225,7 @@
},
"0Yi4cryT2v2SGBjouOas3": {
"title": "Undoing Changes",
"description": "",
"description": "If mistakes or unwanted changes have been committed to your Git repository, there are ways to correct them. Two common methods for reverting changes include:\n\n* Git Reset: Resets the branch to a previous commit.\n* Git Revert: Creates a new commit that reverts specified changes.",
"links": []
},
"dLr55Om7IOvI53c1DgTKc": {
@@ -1261,8 +1332,19 @@
},
"mzjtCdpke1ayHcEuS-YUS": {
"title": "Staged Changes",
"description": "To view the changes you've staged with `git add`, but not yet committed, use `git diff --cached`. This command compares the staged files against their original versions in the repository. It's a quick way to review what you're about to commit before finalizing it.",
"links": []
"description": "To view the changes you've staged with `git add`, but not yet committed, use `git diff --cached`. This command compares the staged files against their original versions in the repository. It's a quick way to review what you're about to commit before finalizing it.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "What does Staged Changes mean in Git?",
"url": "https://dillionmegida.com/p/staged-changes-in-git/",
"type": "article"
},
{
"title": "What are Staged Changes in Git?",
"url": "https://www.youtube.com/watch?v=HyeNfWZBut8",
"type": "video"
}
]
},
"uxqJzQFRcALqatNRIWR0w": {
"title": "Unstaged Changes",
@@ -1282,7 +1364,7 @@
},
"sOoC-XxEoIvwKct00oKlX": {
"title": "Rewriting History",
"description": "",
"description": "In certain situations, you might need to modify or remove commits from your Git repository's history. This can be achieved using various methods:\n\n* `git commit --amend`: Allows you to edit the most recent commit.\n* `git rebase`: Reapplies commits on top of a new base, letting you reorder, squash, or edit them.\n* `git filter-branch`: Rewrites history by applying custom filters to each commit (largely superseded by `git filter-repo`).\n* `git push --force`: Overwrites the remote branch with your rewritten history; use with care on shared branches.\n\nRewriting history in Git is typically necessary when:\n\n* Fixing mistakes: Correcting errors or typos in commit messages.\n* Removing sensitive data: Deleting confidential information from commits, like API keys or database credentials.\n* Simplifying complex histories: Reorganizing branches to improve clarity and reduce complexity.",
"links": []
},
"NjPnEXLf1Lt9qzgxccogv": {
@@ -1334,7 +1416,7 @@
},
"BKVA6Q7DXemAYjyQOA0nh": {
"title": "git filter-branch",
"description": "",
"description": "You can use `git filter-branch` to rewrite Git revision history by applying custom filters on each revision.",
"links": []
},
"OQOmxg9mCfcjt80hpvXkA": {
@@ -1355,8 +1437,14 @@
},
"iFJBF-EEnLjQVsFSXjo_i": {
"title": "Tagging",
"description": "",
"links": []
"description": "In Git, tags are used to identify specific points in a repository's history as being important. This feature allows developers to mark release points or milestones.\n\n* Marking release points: Tags are typically used to mark release versions (e.g., v1.0, v2.0) of a project.\n* Types of tags: There are different types of tags, including lightweight and annotated tags.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Git Basics - Tagging",
"url": "https://git-scm.com/book/en/v2/Git-Basics-Tagging",
"type": "article"
}
]
},
"NeU38WPbEphJuJ_AMkH82": {
"title": "Managing Tags",
@@ -1562,7 +1650,7 @@
},
"fjAFNjNNbPOzme9Uk_fDV": {
"title": "Submodules",
"description": "",
"description": "In Git, submodules allow you to include another repository within a project. This feature enables the management of external dependencies as part of the main project.\n\n* Including external repositories: Submodules can be used to include other Git repositories within your project.\n* Managing dependencies: They provide a way to manage and track changes in external dependencies.",
"links": []
},
"x4bnsPVTiX2xOCSyrgWpF": {
@@ -1636,8 +1724,24 @@
},
"lw4zHuhtxIO4kDvbyiVfq": {
"title": "Repository management",
"description": "",
"links": []
"description": "Using GitHub CLI for repository management allows you to streamline tasks and work more efficiently. You can use GitHub CLI to manage repositories with the following commands:\n\n* `gh repo create`: Create a new repository.\n* `gh repo delete`: Delete an existing repository.\n* `gh repo edit --visibility`: Change the repository's visibility (public or private).\n* `gh repo edit --add-topic`: Add topic labels to a repository.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "gh repo",
"url": "https://cli.github.com/manual/gh_repo",
"type": "article"
},
{
"title": "Efficient GitHub Operations: Simplifying Repository Management using Github CLI",
"url": "https://dev.to/yutee_okon/efficient-github-operations-simplifying-repository-management-using-github-cli-190l",
"type": "article"
},
{
"title": "GitHub CLI (gh) - How to manage repositories more efficiently",
"url": "https://www.youtube.com/watch?v=BII6ZY2Rnlc",
"type": "video"
}
]
},
"kGnZifvXbHBf5zXIsfAQw": {
"title": "Issue Management",
@@ -1657,21 +1761,16 @@
},
"s3MzDYFPMASqiS8UnvWzW": {
"title": "Pull Requests",
"description": "A pull request is a proposal to merge a set of changes from one branch into another. In a pull request, collaborators can review and discuss the proposed set of changes before they integrate the changes into the main codebase. Pull requests display the differences, or diffs, between the content in the source branch and the content in the target branch.\n\nVisit the following resources to learn more:",
"description": "You can use GitHub CLI to manage pull requests with the following commands:\n\n* `gh pr create`: Create a new pull request.\n* `gh pr merge`: Merge a pull request into the target branch.\n* `gh pr list`: List all pull requests for a repository.\n* `gh pr view`: View details of a specific pull request.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Creating a pull request",
"url": "https://docs.github.com/articles/creating-a-pull-request",
"title": "gh pr",
"url": "https://cli.github.com/manual/gh_pr",
"type": "article"
},
{
"title": "Pull Requests",
"url": "https://www.atlassian.com/git/tutorials/making-a-pull-request#:~:text=In%20their%20simplest%20form%2C%20pull,request%20via%20their%20Bitbucket%20account.",
"type": "article"
},
{
"title": "GitHub Pull Request in 100 Seconds ",
"url": "https://youtu.be/8lGpZkjnkt4?si=qbCQ8Uvzn9GN2koL",
"title": "Use GitHub CLI For Command Line Pull Request Management",
"url": "https://www.youtube.com/watch?v=Ku9_0Mftiic",
"type": "video"
}
]
@@ -1726,8 +1825,19 @@
},
"uS1H9KoKGNONvETCuFBbz": {
"title": "Scheduled Workflows",
"description": "",
"links": []
"description": "GitHub Actions allows you to schedule workflows to run at specific times or intervals. You can set up workflows to automatically run at predetermined times, such as daily or weekly.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Events that trigger workflows - Schedule",
"url": "https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#schedule",
"type": "article"
},
{
"title": "GitHub Actions - How to Schedule workflows in GitHub",
"url": "https://www.youtube.com/watch?v=StipNrK__Gk",
"type": "video"
}
]
},
"6QwlY3dEvjfAOPALcWKXQ": {
"title": "Workflow Runners",
@@ -1763,8 +1873,24 @@
},
"aflP7oWsQzAr4YPo2LLiQ": {
"title": "Secrets and Env Vars",
"description": "",
"links": []
"description": "GitHub provides features to securely store and manage sensitive data, such as secrets and environment variables.\n\n* Secrets: These are sensitive values that should not be committed to a repository, like API keys or database credentials.\n* Environment Variables: They can be used to set values for your workflow or application, making it easier to manage dependencies.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Using secrets in GitHub Actions",
"url": "https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions",
"type": "article"
},
{
"title": "Store information in variables",
"url": "https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables",
"type": "article"
},
{
"title": "Secrets and Environment Variables in your GitHub Action",
"url": "https://www.youtube.com/watch?v=dPLPSaFqJmY",
"type": "video"
}
]
},
"HMNhzzV6ApTKj4I_FOmUB": {
"title": "Caching Dependencies",
@@ -1784,8 +1910,14 @@
},
"alysXC4b1hGi9ZdQ5-40y": {
"title": "Storing Artifacts",
"description": "",
"links": []
"description": "GitHub provides a feature for storing artifacts, which allows you to upload build outputs or other files as part of your workflow.\n\n* Artifacts: These are files generated by a job, such as compiled binaries, test reports, or logs. They can be used to validate the results of a build or deployment.\n* Referenceable storage: Artifacts are stored in a referenceable way, making it easy to access and use them in future builds.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Storing and sharing data from a workflow",
|
||||
"url": "https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/storing-and-sharing-data-from-a-workflow",
|
||||
"type": "article"
|
||||
}
|
||||
]
|
||||
},
|
||||
"jc4R1zhd1YeCEbVuxwJWy": {
|
||||
"title": "Workflow Status",
|
||||
|
||||
@@ -423,7 +423,7 @@
},
"1RcwBHU3jzx0YxxUGZic4": {
"title": "string",
"description": "String is a primitive type that holds a sequence of characters. String in Javascript is written within a pair of single quotation marks '' or double quotation marks \"\". Both quotes can be used to contain a string but only if the starting quote is the same as the end quote.\n\nVisit the following resources to learn more:",
"description": "String is a primitive type that holds a sequence of characters. String in Javascript is written within a pair of single quotation marks `''` or double quotation marks `\"\"`. Both quotes can be used to contain a string but only if the starting quote is the same as the end quote.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "String",
@@ -1459,11 +1459,6 @@
"title": "Explore top posts about JavaScript",
"url": "https://app.daily.dev/tags/javascript?ref=roadmapsh",
"type": "article"
},
{
"title": "JavaScript Functions - Programming with Mosh",
"url": "https://youtu.be/N8ap4k_1QEQ",
"type": "video"
}
]
},

@@ -22,7 +22,7 @@
},
"Mp056kNnwsRWeEXuhGPy-": {
"title": "What is Node.js?",
"description": "Node.js is an open-source and cross-platform JavaScript runtime environment. It is a popular tool for almost any kind of project! Node.js runs the V8 JavaScript engine, Google Chrome's core, outside the browser. This allows Node.js to be very performant. A Node.js app runs in a single process, without creating a new thread for every request. Node.js provides a set of asynchronous I/O primitives in its standard library that prevent JavaScript code from blocking and generally, libraries in Node.js are written using non-blocking paradigms, making blocking behavior the exception rather than the norm.\n\nVisit the following resources to learn more:",
"description": "Node.js is an open-source and cross-platform JavaScript runtime environment. It is a popular tool for almost any kind of project! Node.js runs the V8 JavaScript engine, Google Chrome's core, outside the browser. This allows Node.js to be very performant. A Node.js app runs in a single process, without creating a new thread for every request.\n\nNode.js provides a set of asynchronous I/O primitives in its standard library that prevent JavaScript code from blocking and generally, libraries in Node.js are written using non-blocking paradigms, making blocking behavior the exception rather than the norm.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Website",
@@ -30,13 +30,13 @@
"type": "article"
},
{
"title": "Node.JS Introduction",
"url": "https://www.w3schools.com/nodejs/nodejs_intro.asp",
"title": "Node.js - Getting Started",
"url": "https://nodejs.org/en/learn/getting-started/introduction-to-nodejs",
"type": "article"
},
{
"title": "Official Website",
"url": "https://nodejs.org/en/learn/getting-started/introduction-to-nodejs",
"title": "Node.js - Introduction",
"url": "https://www.w3schools.com/nodejs/nodejs_intro.asp",
"type": "article"
},
{
@@ -319,7 +319,7 @@
},
"oYeux7PbveYaVwXRzAg5x": {
"title": "Local Installation",
"description": "Locally installed packages are available only to the project where the packages are installed, while the globally installed packages can be used any where without installing them into a project. Another usecase of the global packages is when using CLI tools.\n\nVisit the following resources to learn more:",
"description": "Locally installed packages are available only to the project where the packages are installed, while the globally installed packages can be used any where without installing them into a project. Another use case of the global packages is when using CLI tools.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Downloading and installing packages locally",
@@ -399,8 +399,24 @@
},
"dOlzIXBfAPmbY542lNOe6": {
"title": "Semantic Versioning",
"description": "",
"links": []
"description": "Semantic Versioning is a standard for versioning software that's widely adopted in the npm ecosystem. It provides a clear and consistent way to communicate changes in a software package to users.\n\nVersion Format\n--------------\n\nA semantic version number consists of three parts separated by dots:\n\n* MAJOR: Incremented when there are incompatible API changes.\n* MINOR: Incremented when new functionality is added in a backwards-compatible manner.\n* PATCH: Incremented when bug fixes are made without affecting the API.\n\n### Example: 1.2.3\n\n* 1 is the major version.\n* 2 is the minor version.\n* 3 is the patch version.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Semver.org",
"url": "https://semver.org/",
"type": "article"
},
{
"title": "Medium - Understanding Semantic Versioning",
"url": "https://medium.com/codex/understanding-semantic-versioning-a-guide-for-developers-dad5f2b70583",
"type": "article"
},
{
"title": "Devopedia - Semver",
"url": "https://devopedia.org/semantic-versioning",
"type": "article"
}
]
},
"t_kfKdNSKVBPYQ9zF9VqQ": {
"title": "Error Handling",
@@ -525,9 +541,14 @@
"description": "Node.js includes a command-line debugging utility. The Node.js debugger client is not a full-featured debugger, but simple stepping and inspection are possible. To use it, start Node.js with the inspect argument followed by the path to the script to debug.\n\nExample - `$ node inspect myscript.js`\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Website",
"title": "Official Docs",
"url": "https://nodejs.org/api/debugger.html",
"type": "article"
},
{
"title": "Freecodecamp.org - Debugging",
"url": "https://www.freecodecamp.org/news/how-to-debug-node-js-applications/",
"type": "article"
}
]
},
@@ -812,7 +833,7 @@
},
"b1r1X3XCoPSayQjDBcy54": {
"title": "fs module",
"description": "File System or fs module is a built in module in Node that enables interacting with the file system using JavaScript. All file system operations have synchronous, callback, and promise-based forms, and are accessible using both CommonJS syntax and ES6 Modules.\n\nVisit the following resources to learn more:",
"description": "File System or `fs` module is a built in module in Node that enables interacting with the file system using JavaScript. All file system operations have synchronous, callback, and promise-based forms, and are accessible using both CommonJS syntax and ES6 Modules.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Documentation",
@@ -1153,7 +1174,7 @@
},
"1vq_KcYR_pkfp1MtXaL75": {
"title": "Express.js",
"description": "Express is a node js web application framework that provides broad features for building web and mobile applications. It is used to build a single page, multipage, and hybrid web application.\n\nVisit the following resources to learn more:",
"description": "Express is a node js web application framework that provides broad features for building web and mobile applications. It is used to build a single page, multi-page, and hybrid web application.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Express.js Official Website",
@@ -1255,12 +1276,12 @@
"description": "You can make API calls using the `http` module in Node.js as well. Here are the two methods that you can use:\n\n* `http.get()` - Make http GET requests.\n* `http.request()` - Similar to `http.get()` but enables sending other types of http requests (GET requests inclusive).\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Node.js http.get() documentaion",
"title": "Node.js http.get() documentation",
"url": "https://nodejs.org/docs/latest-v16.x/api/http.html#httpgeturl-options-callback",
"type": "article"
},
{
"title": "Node http.request() documentaion",
"title": "Node http.request() documentation",
"url": "https://nodejs.org/docs/latest-v16.x/api/http.html#httprequesturl-options-callback",
"type": "article"
},
@@ -1276,7 +1297,7 @@
"description": "Axios is a promise-based HTTP Client for node.js and the browser. Used for making requests to web servers. On the server-side it uses the native node.js http module, while on the client (browser) it uses XMLHttpRequests.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Axios Official Documentations",
"title": "Axios Official Documentation",
"url": "https://axios-http.com/docs/intro",
"type": "article"
},
@@ -1294,8 +1315,19 @@
},
"-_2letLUta5Ymc5eEOKhn": {
"title": "ky",
"description": "",
"links": []
"description": "Ky is a tiny and elegant HTTP client based on the browser Fetch API. Ky targets modern browsers and Deno. For older browsers, you will need to transpile and use a fetch polyfill. For Node.js, check out Got. 1 KB (minified & gzipped), one file, and no dependencies.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Ky Official Docs",
"url": "https://github.com/sindresorhus/ky",
"type": "opensource"
},
{
"title": "npmjs.org",
"url": "https://www.npmjs.com/package/ky/v/0.9.0",
"type": "article"
}
]
},
"B_3rTGQxJneMREXoi2gQn": {
"title": "fetch",
@@ -1393,8 +1425,24 @@
},
"812bVEzxwTsYzLG_PmLqN": {
"title": "--watch",
"description": "",
"links": []
"description": "The `--watch` flag in Node.js is a powerful feature introduced in Node.js version 19 that enables automatic reloading of your Node.js application whenever changes are detected in the specified files.\n\nHow it works\n------------\n\n* You run your Node.js script with the `--watch` flag: `$ node --watch your_script.js`\n* Node.js starts watching the specified file (or directory) for changes.\n* Whenever a change is detected, Node.js automatically restarts the script.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Docs",
"url": "https://nodejs.org/api/cli.html",
"type": "article"
},
{
"title": "Node.js API Docs",
"url": "https://nodejs.org/api/cli.html#--watch",
"type": "article"
},
{
"title": "Medium - Watch Mode",
"url": "https://medium.com/@khaled.smq/built-in-nodejs-watch-mode-52ffadaec8a8",
"type": "article"
}
]
},
"2Ym2jMvov0lZ79aJFaw29": {
"title": "nodemon",
@@ -1419,20 +1467,20 @@
},
"L-_N7OxxuHCXsdWYBgZGu": {
"title": "ejs",
"description": "EJS is a templating language or engine that allows you to generate HTML markup with pure JavaScript. And this is what makes it perfect for Nodejs applications. In simple words, the EJS template engine helps to easily embed JavaScript into your HTML template.\n\nVisit the following resources to learn more:",
"description": "EJS is a template language or engine that allows you to generate HTML markup with pure JavaScript. And this is what makes it perfect for Nodejs applications. In simple words, the EJS template engine helps to easily embed JavaScript into your HTML template.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Ejs website",
"title": "EJS Website",
"url": "https://ejs.co/",
"type": "article"
},
{
"title": "Ejs Official Documentations",
"title": "EJS Official Documentation",
"url": "https://ejs.co/#docs",
"type": "article"
},
{
"title": "Ejs Official Package",
"title": "EJS Official Package",
"url": "https://www.npmjs.com/package/ejs",
"type": "article"
},
@@ -1492,8 +1540,14 @@
},
"5l-lZ8gwVLqqAF_n99vIO": {
"title": "Working with Databases",
"description": "A database is an organized collection of structured information, or data, typically stored electronically in a computer system. A database is usually controlled by a database management system (DBMS).",
"links": []
"description": "A database is an organized collection of structured information, or data, typically stored electronically in a computer system. A database is usually controlled by a database management system (DBMS).\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Wikipedia - What is Database?",
"url": "https://en.wikipedia.org/wiki/Database",
"type": "article"
}
]
},
"NDf-o-WECK02mVnZ8IFxy": {
"title": "Mongoose",
@@ -1560,8 +1614,29 @@
},
"JXQF9H4_N0rM7ZDKcCZNn": {
"title": "Drizzle",
"description": "",
"links": []
"description": "Drizzle lets you build your project the way you want, without interfering with your project or structure. Using Drizzle you can define and manage database schemas in TypeScript, access your data in a SQL-like or relational way, and take advantage of opt-in tools to make your developer experience amazing.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Drizzle Website",
"url": "https://orm.drizzle.team/",
"type": "article"
},
{
"title": "Drizzle Documentation",
"url": "https://orm.drizzle.team/docs/overview",
"type": "article"
},
{
"title": "Drizzle Github",
"url": "https://github.com/drizzle-team/drizzle-orm",
"type": "article"
},
{
"title": "Getting Started with Drizzle",
"url": "https://dev.to/franciscomendes10866/getting-started-with-drizzle-orm-a-beginners-tutorial-4782",
"type": "article"
}
]
},
"rk5FtAPDi1TpvWd0yBbtl": {
"title": "TypeORM",
@@ -1645,17 +1720,55 @@
"90NIFfbWjTbyKZKwyJlfI": {
"title": "Testing",
"description": "Software testing is the process of verifying that what we create is doing exactly what we expect it to do. The tests are created to prevent bugs and improve code quality.\n\nThe two most common testing approaches are unit testing and end-to-end testing. In the first, we examine small snippets of code, in the second, we test an entire user flow.\n\nVisit the following resources to learn more:",
"links": []
"links": [
{
"title": "Wikipedia - Software Testing",
"url": "https://en.wikipedia.org/wiki/Software_testing",
"type": "article"
},
{
"title": "Vitest",
"url": "https://vitest.dev/",
"type": "article"
},
{
"title": "Jest",
"url": "https://jestjs.io",
"type": "article"
}
]
},
"qjToBaMenW3SDtEfoCbQ6": {
"title": "Vitest",
"description": "",
"links": []
"description": "Vitest is a Vite-native unit testing framework that's Jest-compatible. Vitest is a powerful testing library built on top of Vite that is growing in popularity. You can use Vitest for a range of testing needs, such as unit, integration, end-to-end (E2E), snapshot, and performance testing of functions and components. It offers out-of-the-box ESM, TypeScript and JSX support powered by esbuild. Vitest is free and open source.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Official Website",
"url": "https://vitest.dev/",
"type": "article"
},
{
"title": "Vitest Documentation",
"url": "https://vitest.dev/guide/",
"type": "article"
}
]
},
"oSLpy31XEcA2nRq9ks_LJ": {
"title": "node:test",
"description": "",
"links": []
"description": "`node:test` is a built-in module in Node.js that provides a simple, asynchronous test runner. It's designed to make writing tests as straightforward as writing any other code.\n\nKey Features\n------------\n\n* Simplicity: Easy to use and understand.\n* Asynchronous Support: Handles asynchronous code gracefully.\n* Subtests: Allows for organizing tests into hierarchical structures.\n* Hooks: Provides beforeEach and afterEach hooks for setup and teardown.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Test Runner API Docs",
"url": "https://nodejs.org/api/test.html",
"type": "article"
},
{
"title": "Node.js Test Runner",
"url": "https://nodejs.org/en/learn/test-runner/using-test-runner",
"type": "article"
}
]
},
"5xrbKv2stKPJRv7Vzf9nM": {
"title": "Jest",
@@ -1680,8 +1793,24 @@
},
"Ix-g9pgJjEI04bSfROvlq": {
"title": "Playwright",
"description": "",
"links": []
"description": "Playwright is an open-source automation library developed by Microsoft for testing and automating web applications. It offers a unified API to control Chromium, Firefox, and WebKit browsers, making it a versatile choice for cross-browser testing.\n\nPlaywright provides a high-level API to interact with web pages. You can write scripts to simulate user actions, such as clicking buttons, filling forms, and navigating through different pages. Playwright handles the underlying browser interactions, making it easy to write and maintain tests.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Playwright Website",
"url": "https://playwright.dev/",
"type": "article"
},
{
"title": "Playwright Docs",
"url": "https://playwright.dev/docs/getting-started-vscode",
"type": "article"
},
{
"title": "Getting Started with Playwright",
"url": "https://learn.microsoft.com/en-us/shows/getting-started-with-end-to-end-testing-with-playwright/",
"type": "article"
}
]
},
"3Fh3-V1kCZtlUTvEoloIO": {
"title": "Cypress",
@@ -1867,7 +1996,18 @@
"ZLNUuDKhJ03Kw7xMVc7IR": {
"title": "Debugging",
"description": "Debugging is a concept to identify and remove errors from software applications. Here, we will learn about the technique to debug a Node.js application.\n\nWhy not to use console.log() for debugging?\n-------------------------------------------\n\nUsing `console.log` to debug the code generally dives into an infinite loop of “stopping the app and adding a console.log, and start the app again” operations. Besides slowing down the development of the app, it also makes the writing dirty and creates unnecessary code. Finally, trying to log out variables alongside with the noise of other potential logging operations, may make the process of debugging difficult when attempting to find the values you are debugging.\n\nVisit the following resources to learn more:",
"links": []
"links": [
{
"title": "Wikipedia - What is Debugging?",
"url": "https://en.wikipedia.org/wiki/Debugging",
"type": "article"
},
{
"title": "Node.js - Getting Started",
"url": "https://nodejs.org/en/learn/getting-started/debugging",
"type": "article"
}
]
},
"oU9I7KBZoTSXXFmYscEIq": {
"title": "Memory Leaks",
@@ -1911,7 +2051,7 @@
"description": "As much fun as it is to intercept your container requests with inspect and step through your code, you won’t have this option in production. This is why it makes a lot of sense to try and debug your application locally in the same way as you would in production.\n\nIn production, one of your tools would be to login to your remote server to view the console logs, just as you would on local. But this can be a tedious approach. Luckily, there are tools out there that perform what is called log aggregation, such as Stackify.\n\nThese tools send your logs from your running application into a single location. They often come with high-powered search and query utilities so that you can easily parse your logs and visualize them.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Debugging using APM ",
"title": "Debugging using APM",
"url": "https://stackify.com/node-js-debugging-tips/",
"type": "article"
},

@@ -2297,6 +2297,11 @@
"title": "Awk command in Linux/Unix",
"url": "https://www.digitalocean.com/community/tutorials/awk-command-linux-unix",
"type": "article"
},
{
"title": "Tutorial - AWK in 300 Seconds",
"url": "https://www.youtube.com/watch?v=15DvGiWVNj0",
"type": "video"
}
]
},
@@ -2313,6 +2318,11 @@
"title": "Use the Grep Command",
"url": "https://docs.rackspace.com/docs/use-the-linux-grep-command",
"type": "article"
},
{
"title": "Tutorial - grep: A Practical Guide",
"url": "https://www.youtube.com/watch?v=crFZOrqlqao",
"type": "video"
}
]
},

@@ -58,11 +58,6 @@
"title": "Python for Beginners: Data Types",
"url": "https://thenewstack.io/python-for-beginners-data-types/",
"type": "article"
},
{
"title": "Python Variables - Python Tutorial for Beginners with Examples | Mosh",
"url": "https://www.youtube.com/watch?v=cQT33yu9pY8",
"type": "video"
}
]
},
@@ -808,8 +803,19 @@
},
"KAXF2kUAOvtBZhY8G9rkI": {
"title": "Context Manager",
"description": "",
"links": []
"description": "Context managers are a construct in Python that allows you to set up context for a block of code, and then automatically clean up or release resources when the block is exited. It is most commonly used with the `with` statement.\n\nVisit the following resources to learn more:",
"links": [
{
"title": "Context managers in Python",
"url": "https://www.freecodecamp.org/news/context-managers-in-python/",
"type": "article"
},
{
"title": "Context managers",
"url": "https://book.pythontips.com/en/latest/context_managers.html",
"type": "article"
}
]
},
"0-ShORjGnQlAdcwjtxdEB": {
"title": "Learn a Framework",

@@ -936,7 +936,7 @@
},
"thfnymb_UIiKxakKfiua5": {
"title": "Component / Libraries",
"description": "",
"description": "React component libraries are collections of pre-built, reusable components that can be used to speed up the development process. They can be styled using CSS in various ways, including traditional CSS files, CSS modules, and CSS-in-JS solutions like styled-components.",
"links": []
},
"akVNUPOqaTXaSHoQFlkP_": {

@@ -274,8 +274,14 @@
},
"0CtAZQcFJexMiJfZ-mofv": {
"title": "v-else",
"description": "",
"links": []
"description": "The `v-else` directive conditionally renders an element or a template fragment when the preceding `v-if` condition is not fulfilled.\n\nVisit the following resources for more information:",
"links": [
{
"title": "v-else documentation",
"url": "https://vuejs.org/api/built-in-directives.html#v-else",
"type": "article"
}
]
},
"a9caVhderJaVo0v14w8WB": {
"title": "v-else-if",

@@ -372,7 +372,7 @@ function getRoadmapDefaultTemplate({ title, description }) {
</svg>
</div>
<div tw="text-[30px] flex leading-[30px]">
6th most starred GitHub project
7th most starred GitHub project
</div>
</div>
<div tw="flex items-center mt-2.5">

@@ -1,20 +1,85 @@
type AIAnnouncementProps = {};
import { useState } from 'react';
import { Modal } from './Modal.tsx';
import {PartyPopper, Play, PlayCircle} from 'lucide-react';
import { isMobileScreen } from '../lib/is-mobile.ts';

export function FeatureAnnouncement(props: AIAnnouncementProps) {
return (
<a
className="rounded-md border border-dashed border-purple-600 px-3 py-1.5 text-purple-400 transition-colors hover:border-purple-400 hover:text-purple-200"
href="/community"
type FeatureAnnouncementProps = {};

export function FeatureAnnouncement(props: FeatureAnnouncementProps) {
const [isPlaying, setIsPlaying] = useState(false);

const videoModal = (
<Modal
onClose={() => setIsPlaying(false)}
bodyClassName={'h-auto overflow-hidden'}
wrapperClassName={'md:max-w-3xl lg:max-w-4xl xl:max-w-5xl'}
>
<span className="relative -top-[1px] mr-1 text-xs font-semibold uppercase text-white">
New
</span>{' '}
<span className={'hidden sm:inline'}>
Explore community made roadmaps
</span>
<span className={'inline text-sm sm:hidden'}>
Community roadmaps explorer!
</span>
</a>
<div className="text-balance bg-gradient-to-r from-gray-100 px-4 py-2 text-left text-sm md:py-3 lg:text-base">
<span
className="relative -top-px mr-1.5 rounded bg-blue-300 px-1.5 py-0.5 text-xs font-semibold uppercase text-gray-800"
style={{ lineHeight: '1.5' }}
>
New
</span>
Projects are live on the{' '}
<a
href={'/backend/projects'}
className="font-medium text-blue-500 underline underline-offset-2"
>
backend roadmap
</a>
<span className={'hidden md:inline'}>
{' '}
and are coming soon on the others{' '}
</span>
<PartyPopper className="relative -top-[3px] ml-2 inline-block h-5 w-5 text-blue-500 md:ml-1 md:h-6 md:w-6" />
</div>
<div
className="iframe-container"
style={{
position: 'relative',
paddingBottom: '56.25%',
height: 0,
overflow: 'hidden',
}}
>
{/*https://www.youtube.com/embed/?playsinline=1&disablekb=1&&iv_load_policy=3&cc_load_policy=0&controls=0&rel=0&autoplay=1&mute=1&origin=https%3A%2F%2Fytch.xyz&widgetid=1*/}
<iframe
src="https://www.youtube.com/embed/9lS3slfJ0x0?start=31&autoplay=1&disablekb=1&rel=0&cc_load_policy=0&rel=0&autoplay=1&origin=https%3A%2F%2Froadmap.sh&widgetid=1&showinfo=0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
style={{
position: 'absolute',
top: 0,
left: 0,
width: '100%',
height: '100%',
}}
/>
</div>
</Modal>
);

return (
<>
{isPlaying && videoModal}
<button
className="rounded-md border border-dashed border-purple-600 px-3 py-1.5 text-purple-400 transition-colors hover:border-purple-400 hover:text-purple-200"
onClick={() => {
setIsPlaying(true);
}}
>
<span className="relative sm:-top-[1px] mr-1 text-xs font-semibold uppercase text-white">
<PlayCircle className="inline-block h-4 w-4 relative -top-[2px] mr-1" />
Watch
</span>{' '}
<span className={'hidden sm:inline'}>
Practice your skills with projects
</span>
<span className={'inline text-sm sm:hidden'}>
Build projects to skill up
</span>
</button>
</>
);
}
@@ -11,7 +11,7 @@ import { FeatureAnnouncement } from "../FeatureAnnouncement";
id='hero-text'
>
<p class='-mt-4 mb-7 sm:-mt-10 sm:mb-4'>
<FeatureAnnouncement />
<FeatureAnnouncement client:load />
</p>

<h1
@@ -4,6 +4,7 @@ import Icon from '../AstroIcon.astro';
import { NavigationDropdown } from '../NavigationDropdown';
import { AccountDropdown } from './AccountDropdown';
import NewIndicator from './NewIndicator.astro';
import { RoadmapDropdownMenu } from '../RoadmapDropdownMenu/RoadmapDropdownMenu';
---

<div class='bg-slate-900 py-5 text-white sm:py-8'>
@@ -19,7 +20,7 @@ import NewIndicator from './NewIndicator.astro';

<a
href='/teams'
class='group relative !mr-2 inline text-blue-300 hover:text-white sm:hidden'
class='group relative inline text-blue-300 hover:text-white sm:hidden'
>
Teams

@@ -35,32 +36,18 @@
</a>

<!-- Desktop navigation items -->
<div class='hidden space-x-5 sm:flex sm:items-center'>
<div class='hidden gap-5 sm:flex sm:items-center'>
<NavigationDropdown client:load />
<a href='/get-started' class='text-gray-400 hover:text-white'>
Start Here
</a>
<RoadmapDropdownMenu client:load />
<a
href='/teams'
class='group relative text-gray-400 hover:text-white'
class='group relative !mr-5 text-gray-400 hover:text-white'
>
Teams
</a>
<a href='/ai' class='text-gray-400 hover:text-white'> AI</a>
<a
href='/community'
class='group relative !mr-2 text-blue-300 hover:text-white'
>
Community
<NewIndicator />
</a>
<!--<button-->
<!-- data-command-menu-->
<!-- class='hidden items-center rounded-md border border-gray-800 px-2.5 py-1.5 text-sm text-gray-400 hover:cursor-pointer hover:bg-gray-800 md:flex'-->
<!-->-->
<!-- <Icon icon='search' class='h-3 w-3' />-->
<!-- <span class='ml-2'>Search</span>-->
<!--</button>-->
</div>
</div>
@@ -2,22 +2,34 @@ import {
  BookOpenText,
  CheckSquare,
  FileQuestion,
  FolderKanban,
  Menu,
  Shirt,
  Video,
  Waypoints,
} from 'lucide-react';
import { useRef, useState } from 'react';
import { useEffect, useRef, useState } from 'react';
import { cn } from '../lib/classname.ts';
import { useOutsideClick } from '../hooks/use-outside-click.ts';
import {
  navigationDropdownOpen,
  roadmapsDropdownOpen,
} from '../stores/page.ts';
import { useStore } from '@nanostores/react';

const links = [
  {
    link: '/roadmaps',
    label: 'Roadmaps',
    description: 'Step by step learning paths',
    label: 'Official Roadmaps',
    description: 'Made by subject matter experts',
    Icon: Waypoints,
  },
  {
    link: '/backend/projects',
    label: 'Projects',
    description: 'Skill-up with real-world projects',
    Icon: FolderKanban,
  },
  {
    link: '/best-practices',
    label: 'Best Practices',
@@ -54,21 +66,30 @@ const links = [

export function NavigationDropdown() {
  const dropdownRef = useRef<HTMLDivElement>(null);
  const [isOpen, setIsOpen] = useState(false);

  const $navigationDropdownOpen = useStore(navigationDropdownOpen);
  const $roadmapsDropdownOpen = useStore(roadmapsDropdownOpen);

  useOutsideClick(dropdownRef, () => {
    setIsOpen(false);
    navigationDropdownOpen.set(false);
  });

  useEffect(() => {
    if ($roadmapsDropdownOpen) {
      navigationDropdownOpen.set(false);
    }
  }, [$roadmapsDropdownOpen]);

  return (
    <div className="relative flex items-center" ref={dropdownRef}>
      <button
        className={cn('text-gray-400 hover:text-white', {
          'text-white': isOpen,
          'text-white': $navigationDropdownOpen,
        })}
        onClick={() => setIsOpen(true)}
        onMouseOver={() => setIsOpen(true)}
        onClick={() => navigationDropdownOpen.set(true)}
        onMouseOver={() => navigationDropdownOpen.set(true)}
        aria-label="Open Navigation Dropdown"
        aria-expanded={$navigationDropdownOpen}
      >
        <Menu className="h-5 w-5" />
      </button>
@@ -76,9 +97,11 @@ export function NavigationDropdown() {
        className={cn(
          'pointer-events-none invisible absolute left-0 top-full z-[999] mt-2 w-48 min-w-[320px] -translate-y-1 rounded-lg bg-slate-800 py-2 opacity-0 shadow-xl transition-all duration-100',
          {
            'pointer-events-auto visible translate-y-2.5 opacity-100': isOpen,
            'pointer-events-auto visible translate-y-2.5 opacity-100':
              $navigationDropdownOpen,
          },
        )}
        role="menu"
      >
        {links.map((link) => (
          <a
@@ -87,6 +110,7 @@ export function NavigationDropdown() {
            rel={link.isExternal ? 'noopener noreferrer' : undefined}
            key={link.link}
            className="group flex items-center gap-3 px-4 py-2.5 text-gray-400 transition-colors hover:bg-slate-700"
            role="menuitem"
          >
            <span className="flex h-[40px] w-[40px] items-center justify-center rounded-full bg-slate-600 transition-colors group-hover:bg-slate-500 group-hover:text-slate-100">
              <link.Icon className="inline-block h-5 w-5" />

@@ -1,10 +1,13 @@
---
import { getFormattedStars } from '../lib/github';
import Icon from './AstroIcon.astro';
import { getFormattedStars, getRepositoryRank } from '../lib/github';
import { getDiscordInfo } from '../lib/discord';

import OpenSourceStat from './OpenSourceStat.astro';

const starCount = await getFormattedStars('kamranahmedse/developer-roadmap');
const repoName = 'kamranahmedse/developer-roadmap';

const starCount = await getFormattedStars(repoName);
const repoRank = await getRepositoryRank(repoName);
const discordInfo = await getDiscordInfo();
---

@@ -16,18 +19,19 @@ const discordInfo = await getDiscordInfo();
    href='https://github.com/search?o=desc&q=stars%3A%3E100000&s=stars&type=Repositories'
    target='_blank'
    class='font-medium text-gray-600 underline underline-offset-2 hover:text-black'
    >6th most starred project on GitHub</a
    >{repoRank} most starred project on GitHub</a
  > and is visited by hundreds of thousands of developers every month.
</p>

<div
  class='mt-5 grid grid-cols-1 justify-between gap-2 divide-x-0 sm:my-11 sm:grid-cols-3 sm:gap-0 sm:divide-x mb-4 sm:mb-0'
>
  <OpenSourceStat text='GitHub Stars' value={starCount} />
  <OpenSourceStat text='Registered Users' value={'+1M'} />
  <OpenSourceStat text='GitHub Stars' value={starCount} secondaryValue={repoRank} />
  <OpenSourceStat text='Registered Users' value={'+1M'} secondaryValue="+90k" />
  <OpenSourceStat
    text='Discord Members'
    value={discordInfo.totalFormatted}
    secondaryValue="+1.5k"
  />
</div>
</div>

@@ -1,12 +1,13 @@
---
import { ChevronUp } from 'lucide-react';
import Icon from './AstroIcon.astro';

export interface Props {
  secondaryValue?: string;
  value: string;
  text: string;
}

const { value, text } = Astro.props;
const { value, text, secondaryValue } = Astro.props;

const isGitHubStars = text.toLowerCase() === 'github stars';
const isRegistered = text.toLowerCase() === 'registered users';
@@ -19,7 +20,7 @@ const isDiscordMembers = text.toLowerCase() === 'discord members';
{
  isGitHubStars && (
    <p class='flex items-center text-sm text-blue-500 sm:flex'>
      <span class='rounded-md bg-blue-500 px-1 text-white'>Rank 6th</span>
      <span class='rounded-md bg-blue-500 px-1 text-white'>Rank {secondaryValue}</span>
      out of 28M!
    </p>
  )
@@ -28,7 +29,7 @@ const isDiscordMembers = text.toLowerCase() === 'discord members';
{
  isRegistered && (
    <p class='flex items-center text-sm text-blue-500 sm:flex'>
      <span class='mr-1.5 rounded-md bg-blue-500 px-1 text-white'>+75k</span>
      <span class='mr-1.5 rounded-md bg-blue-500 px-1 text-white'>{secondaryValue}</span>
      every month
    </p>
  )
@@ -37,7 +38,7 @@ const isDiscordMembers = text.toLowerCase() === 'discord members';
{
  isDiscordMembers && (
    <p class='flex items-center text-sm text-blue-500 sm:flex'>
      <span class='mr-1.5 rounded-md bg-blue-500 px-1 text-white'>+1.5k</span>
      <span class='mr-1.5 rounded-md bg-blue-500 px-1 text-white'>{secondaryValue}</span>
      every month
    </p>
  )
@@ -88,7 +89,7 @@ const isDiscordMembers = text.toLowerCase() === 'discord members';
{
  isDiscordMembers && (
    <a
      href='https://discord.gg/ZrSpJ8zH'
      href='https://roadmap.sh/discord'
      target='_blank'
      class='group mt-0 flex flex-col items-center rounded-lg border border-black bg-white px-3 py-2 text-sm hover:bg-black hover:text-white'
    >

@@ -8,7 +8,7 @@ export function EmptySolutions(props: EmptySolutionsProps) {
  const { projectId } = props;

  return (
    <div className="flex min-h-[250px] flex-col items-center justify-center rounded-xl px-5 py-3 sm:px-0 sm:py-20">
    <div className="flex min-h-[250px] flex-col items-center justify-center rounded-xl px-5 py-3 sm:px-0 sm:py-20 bg-white border mb-5">
      <Blocks className="mb-4 opacity-10 h-14 w-14" />
      <h2 className="mb-1 text-lg font-semibold sm:text-xl">
        No solutions submitted yet

@@ -4,13 +4,13 @@ import { SubmissionRequirement } from './SubmissionRequirement.tsx';

type LeavingRoadmapWarningModalProps = {
  onClose: () => void;
  onContinue: () => void;
  repositoryUrl: string;
};

export function LeavingRoadmapWarningModal(
  props: LeavingRoadmapWarningModalProps,
) {
  const { onClose, onContinue } = props;
  const { onClose, repositoryUrl } = props;

  return (
    <Modal onClose={onClose} bodyClassName="h-auto p-4">
@@ -41,17 +41,18 @@ export function LeavingRoadmapWarningModal(
          <span className="font-medium text-purple-600">
            incorrect or misleading
          </span>
          . It helps the community. It helps the community.
          . It helps the community.
        </p>
      </div>

      <button
      <a
        className="inline-flex w-full items-center gap-2 rounded-lg bg-black px-3 py-2.5 text-sm text-white"
        onClick={onContinue}
        href={repositoryUrl}
        target="_blank"
      >
        <ArrowUpRight className="h-5 w-5" />
        Continue to Solution
      </button>
      </a>

      <button
        className="absolute right-2.5 top-2.5 text-gray-600 hover:text-black"

@@ -13,7 +13,8 @@ import { isLoggedIn } from '../../lib/jwt';
import { showLoginPopup } from '../../lib/popup';
import { VoteButton } from './VoteButton.tsx';
import { GitHubIcon } from '../ReactIcons/GitHubIcon.tsx';
import { cn } from '../../lib/classname.ts';
import { SelectLanguages } from './SelectLanguages.tsx';
import type { ProjectFrontmatter } from '../../lib/project.ts';

export interface ProjectStatusDocument {
  _id?: string;
@@ -24,6 +25,7 @@ export interface ProjectStatusDocument {
  startedAt?: Date;
  submittedAt?: Date;
  repositoryUrl?: string;
  languages?: string[];

  upvotes: number;
  downvotes: number;
@@ -53,15 +55,16 @@ type ListProjectSolutionsResponse = {

type QueryParams = {
  p?: string;
  l?: string;
};

type PageState = {
  currentPage: number;
  language: string;
};

const VISITED_SOLUTIONS_KEY = 'visited-project-solutions';

type ListProjectSolutionsProps = {
  project: ProjectFrontmatter;
  projectId: string;
};

@@ -90,27 +93,26 @@ const submittedAlternatives = [
];

export function ListProjectSolutions(props: ListProjectSolutionsProps) {
  const { projectId } = props;
  const { projectId, project: projectData } = props;

  const toast = useToast();
  const [pageState, setPageState] = useState<PageState>({
    currentPage: 0,
    language: '',
  });

  const [isLoading, setIsLoading] = useState(true);
  const [solutions, setSolutions] = useState<ListProjectSolutionsResponse>();
  const [alreadyVisitedSolutions, setAlreadyVisitedSolutions] = useState<
    Record<string, boolean>
  >({});
  const [showLeavingRoadmapModal, setShowLeavingRoadmapModal] = useState<
    ListProjectSolutionsResponse['data'][number] | null
  >(null);

  const loadSolutions = async (page = 1) => {
  const loadSolutions = async (page = 1, language: string = '') => {
    const { response, error } = await httpGet<ListProjectSolutionsResponse>(
      `${import.meta.env.PUBLIC_API_URL}/v1-list-project-solutions/${projectId}`,
      {
        currPage: page,
        ...(language ? { languages: language } : {}),
      },
    );

@@ -132,7 +134,7 @@ export function ListProjectSolutions(props: ListProjectSolutionsProps) {
      return;
    }

    pageProgressMessage.set('Submitting vote...');
    pageProgressMessage.set('Submitting vote');
    const { response, error } = await httpPost(
      `${import.meta.env.PUBLIC_API_URL}/v1-vote-project/${solutionId}`,
      {
@@ -172,13 +174,9 @@ export function ListProjectSolutions(props: ListProjectSolutionsProps) {

  useEffect(() => {
    const queryParams = getUrlParams() as QueryParams;
    const alreadyVisitedSolutions = JSON.parse(
      localStorage.getItem(VISITED_SOLUTIONS_KEY) || '{}',
    );

    setAlreadyVisitedSolutions(alreadyVisitedSolutions);
    setPageState({
      currentPage: +(queryParams.p || '1'),
      language: queryParams.l || '',
    });
  }, []);

@@ -188,23 +186,21 @@ export function ListProjectSolutions(props: ListProjectSolutionsProps) {
      return;
    }

    if (pageState.currentPage !== 1) {
    if (pageState.currentPage !== 1 || pageState.language !== '') {
      setUrlParams({
        p: String(pageState.currentPage),
        l: pageState.language,
      });
    } else {
      deleteUrlParam('p');
      deleteUrlParam('l');
    }

    loadSolutions(pageState.currentPage).finally(() => {
    loadSolutions(pageState.currentPage, pageState.language).finally(() => {
      setIsLoading(false);
    });
  }, [pageState]);

  if (isLoading) {
    return <LoadingSolutions />;
  }

  const isEmpty = solutions?.data.length === 0;
  if (isEmpty) {
    return <EmptySolutions projectId={projectId} />;
@@ -213,115 +209,128 @@ export function ListProjectSolutions(props: ListProjectSolutionsProps) {

  const leavingRoadmapModal = showLeavingRoadmapModal ? (
    <LeavingRoadmapWarningModal
      onClose={() => setShowLeavingRoadmapModal(null)}
      onContinue={() => {
        const visitedSolutions = {
          ...alreadyVisitedSolutions,
          [showLeavingRoadmapModal._id!]: true,
        };
        localStorage.setItem(
          VISITED_SOLUTIONS_KEY,
          JSON.stringify(visitedSolutions),
        );

        window.open(showLeavingRoadmapModal.repositoryUrl, '_blank');
      }}
      repositoryUrl={showLeavingRoadmapModal?.repositoryUrl!}
    />
  ) : null;

  const selectedLanguage = pageState.language;

  return (
    <section>
      <div className="mb-4 overflow-hidden rounded-lg border bg-white p-3 sm:p-5">
      {leavingRoadmapModal}

      <div className="flex min-h-[500px] flex-col divide-y divide-gray-100">
        {solutions?.data.map((solution, counter) => {
          const isVisited = alreadyVisitedSolutions[solution._id!];
          const avatar = solution.user.avatar || '';

          return (
            <div
              key={solution._id}
              className={
                'flex flex-col justify-between gap-2 py-2 text-sm text-gray-500 sm:flex-row sm:items-center sm:gap-0'
              }
            >
              <div className="flex items-center gap-1.5">
                <img
                  src={
                    avatar
                      ? `${import.meta.env.PUBLIC_AVATAR_BASE_URL}/${avatar}`
                      : '/images/default-avatar.png'
                  }
                  alt={solution.user.name}
                  className="mr-0.5 h-7 w-7 rounded-full"
                />
                <span className="font-medium text-black">
                  {solution.user.name}
                </span>
                <span className="hidden sm:inline">
                  {submittedAlternatives[
                    counter % submittedAlternatives.length
                  ] || 'submitted their solution'}
                </span>{' '}
                <span className="flex-grow text-right text-gray-400 sm:flex-grow-0 sm:text-left sm:font-medium sm:text-black">
                  {getRelativeTimeString(solution?.submittedAt!)}
                </span>
              </div>

              <div className="flex items-center justify-end gap-1">
                <span className="flex items-center overflow-hidden rounded-full border">
                  <VoteButton
                    icon={ThumbsUp}
                    isActive={solution?.voteType === 'upvote'}
                    count={solution.upvotes || 0}
                    onClick={() => {
                      handleSubmitVote(solution._id!, 'upvote');
                    }}
                  />

                  <VoteButton
                    icon={ThumbsDown}
                    isActive={solution?.voteType === 'downvote'}
                    count={solution.downvotes || 0}
                    onClick={() => {
                      handleSubmitVote(solution._id!, 'downvote');
                    }}
                  />
                </span>

                <a
                  className="ml-1 flex items-center gap-1 rounded-full border px-2 py-1 text-xs text-black transition-colors hover:border-black hover:bg-black hover:text-white"
                  onClick={(e) => {
                    e.preventDefault();
                    setShowLeavingRoadmapModal(solution);
                  }}
                  target="_blank"
                  href={solution.repositoryUrl}
                >
                  <GitHubIcon className="h-4 w-4 text-current" />
                  Visit Solution
                </a>
              </div>
            </div>
          );
        })}
      </div>

      {(solutions?.totalPages || 0) > 1 && (
        <div className="mt-4">
          <Pagination
            totalPages={solutions?.totalPages || 1}
            currPage={solutions?.currPage || 1}
            perPage={solutions?.perPage || 21}
            totalCount={solutions?.totalCount || 0}
            onPageChange={(page) => {
              setPageState({
                ...pageState,
                currentPage: page,
              });
      <div className="relative mb-5 hidden items-center justify-between sm:flex">
        <div>
          <h1 className="mb-1 text-xl font-semibold">
            {projectData.title} Solutions
          </h1>
          <p className="text-sm text-gray-500">{projectData.description}</p>
        </div>
        {!isLoading && (
          <SelectLanguages
            projectId={projectId}
            selectedLanguage={selectedLanguage}
            onSelectLanguage={(language) => {
              setPageState((prev) => ({
                ...prev,
                language: prev.language === language ? '' : language,
              }));
            }}
          />
          </div>
        )}
      </div>

      {isLoading ? (
        <LoadingSolutions />
      ) : (
        <>
          <div className="flex min-h-[500px] flex-col divide-y divide-gray-100">
            {solutions?.data.map((solution, counter) => {
              const avatar = solution.user.avatar || '';
              return (
                <div
                  key={solution._id}
                  className="flex flex-col gap-2 py-2 text-sm text-gray-500"
                >
                  <div className="flex flex-col justify-between gap-2 text-sm text-gray-500 sm:flex-row sm:items-center sm:gap-0">
                    <div className="flex items-center gap-1.5">
                      <img
                        src={
                          avatar
                            ? `${import.meta.env.PUBLIC_AVATAR_BASE_URL}/${avatar}`
                            : '/images/default-avatar.png'
                        }
                        alt={solution.user.name}
                        className="mr-0.5 h-7 w-7 rounded-full"
                      />
                      <span className="font-medium text-black">
                        {solution.user.name}
                      </span>
                      <span className="hidden sm:inline">
                        {submittedAlternatives[
                          counter % submittedAlternatives.length
                        ] || 'submitted their solution'}
                      </span>{' '}
                      <span className="flex-grow text-right text-gray-400 sm:flex-grow-0 sm:text-left sm:font-medium sm:text-black">
                        {getRelativeTimeString(solution?.submittedAt!)}
                      </span>
                    </div>

                    <div className="flex items-center justify-end gap-1">
                      <span className="flex shrink-0 overflow-hidden rounded-full border">
                        <VoteButton
                          icon={ThumbsUp}
                          isActive={solution?.voteType === 'upvote'}
                          count={solution.upvotes || 0}
                          onClick={() => {
                            handleSubmitVote(solution._id!, 'upvote');
                          }}
                        />

                        <VoteButton
                          icon={ThumbsDown}
                          isActive={solution?.voteType === 'downvote'}
                          count={solution.downvotes || 0}
                          hideCount={true}
                          onClick={() => {
                            handleSubmitVote(solution._id!, 'downvote');
                          }}
                        />
                      </span>

                      <button
                        className="ml-1 flex items-center gap-1 rounded-full border px-2 py-1 text-xs text-black transition-colors hover:border-black hover:bg-black hover:text-white"
                        onClick={() => {
                          setShowLeavingRoadmapModal(solution);
                        }}
                      >
                        <GitHubIcon className="h-4 w-4 text-current" />
                        Visit Solution
                      </button>
                    </div>
                  </div>
                </div>
              );
            })}
          </div>

          {(solutions?.totalPages || 0) > 1 && (
            <div className="mt-4">
              <Pagination
                totalPages={solutions?.totalPages || 1}
                currPage={solutions?.currPage || 1}
                perPage={solutions?.perPage || 21}
                totalCount={solutions?.totalCount || 0}
                onPageChange={(page) => {
                  setPageState({
                    ...pageState,
                    currentPage: page,
                  });
                }}
              />
            </div>
          )}
        </>
      )}
    </section>
    </div>
  );
}

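The language filter above is threaded into the API request only when a language is actually selected (`...(language ? { languages: language } : {})`). A minimal sketch of that rule, using a hypothetical helper name for illustration:

```typescript
// Hypothetical helper mirroring how loadSolutions assembles its query
// parameters: the `languages` key is only present when a filter is active.
type SolutionsQuery = { currPage: number; languages?: string };

function buildSolutionsQuery(page: number, language: string): SolutionsQuery {
  return {
    currPage: page,
    // Conditional spread: an empty string contributes nothing to the object.
    ...(language ? { languages: language } : {}),
  };
}

console.log(buildSolutionsQuery(1, ''));
// { currPage: 1 }
console.log(buildSolutionsQuery(2, 'TypeScript'));
// { currPage: 2, languages: 'TypeScript' }
```

Keeping the key absent (rather than sending an empty value) means the backend can treat "no filter" and "filter" as distinct request shapes.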
src/components/Projects/SelectLanguages.tsx (Normal file, 88 lines)
@@ -0,0 +1,88 @@

import { useEffect, useRef, useState } from 'react';
import { useOutsideClick } from '../../hooks/use-outside-click';
import { httpGet } from '../../lib/http';
import { useToast } from '../../hooks/use-toast';
import { ChevronDown, X } from 'lucide-react';

type SelectLanguagesProps = {
  projectId: string;
  selectedLanguage: string;
  onSelectLanguage: (language: string) => void;
};

export function SelectLanguages(props: SelectLanguagesProps) {
  const { projectId, onSelectLanguage, selectedLanguage } = props;

  const dropdownRef = useRef<HTMLDivElement>(null);
  const toast = useToast();

  const [distinctLanguages, setDistinctLanguages] = useState<string[]>([]);
  const [isOpen, setIsOpen] = useState(false);

  const loadDistinctLanguages = async () => {
    const { response, error } = await httpGet<string[]>(
      `${import.meta.env.PUBLIC_API_URL}/v1-list-project-languages/${projectId}`,
    );

    if (error || !response) {
      toast.error(error?.message || 'Failed to load project languages');
      return;
    }

    setDistinctLanguages(response);
  };

  useOutsideClick(dropdownRef, () => {
    setIsOpen(false);
  });

  useEffect(() => {
    loadDistinctLanguages().finally(() => {});
  }, []);

  return (
    <div className="relative flex">
      <button
        className="flex items-center gap-1 rounded-md border border-gray-300 py-1.5 pl-3 pr-2 text-xs font-medium text-gray-900"
        onClick={() => setIsOpen(!isOpen)}
      >
        {selectedLanguage || 'Select Language'}

        <ChevronDown className="ml-1 h-4 w-4" />
      </button>
      {selectedLanguage && (
        <button
          className="ml-1 text-red-500 text-xs border border-red-500 rounded-md px-2 py-1"
          onClick={() => onSelectLanguage('')}
        >
          Clear
        </button>
      )}

      {isOpen && (
        <div
          className="absolute right-0 top-full z-10 w-full min-w-[200px] max-w-[200px] translate-y-1.5 overflow-hidden rounded-md border border-gray-300 bg-white p-1 shadow-lg"
          ref={dropdownRef}
        >
          {distinctLanguages.map((language) => {
            const isSelected = selectedLanguage === language;

            return (
              <button
                key={language}
                className="flex w-full items-center rounded-md px-4 py-1.5 text-left text-sm text-gray-700 hover:bg-gray-100 aria-selected:bg-gray-100"
                onClick={() => {
                  onSelectLanguage(language);
                  setIsOpen(false);
                }}
                aria-selected={isSelected}
              >
                {language}
              </button>
            );
          })}
        </div>
      )}
    </div>
  );
}

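`SelectLanguages` itself only reports which entry was clicked; the parent decides whether that click selects, switches, or clears the filter (`prev.language === language ? '' : language`). That toggle rule can be sketched as a pure function (the function name is hypothetical):

```typescript
// Hypothetical helper capturing the parent's onSelectLanguage logic:
// clicking the already-selected language clears the filter, anything
// else becomes the new selection.
function toggleLanguage(current: string, clicked: string): string {
  return current === clicked ? '' : clicked;
}

console.log(toggleLanguage('', 'Go'));     // 'Go'   (select)
console.log(toggleLanguage('Go', 'Go'));   // ''     (deselect)
console.log(toggleLanguage('Go', 'Rust')); // 'Rust' (switch)
```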
@@ -170,10 +170,19 @@ export function SubmitProjectModal(props: SubmitProjectModalProps) {
      projectUrlExists: 'success',
    });

    const languagesUrl = `${mainApiUrl}/languages`;
    const languagesResponse = await fetch(languagesUrl);
    let languages: string[] = [];
    if (languagesResponse.ok) {
      const languagesData = await languagesResponse.json();
      languages = Object.keys(languagesData || {})?.slice(0, 4);
    }

    const submitProjectUrl = `${import.meta.env.PUBLIC_API_URL}/v1-submit-project/${projectId}`;
    const { response: submitResponse, error } =
      await httpPost<SubmitProjectResponse>(submitProjectUrl, {
        repositoryUrl: repoUrl,
        languages,
      });

    if (error || !submitResponse) {
@@ -272,7 +281,7 @@ export function SubmitProjectModal(props: SubmitProjectModalProps) {

      <button
        type="submit"
        className="mt-2 w-full rounded-lg bg-black p-2 font-medium text-white disabled:opacity-50 text-sm"
        className="mt-2 w-full rounded-lg bg-black p-2 text-sm font-medium text-white disabled:opacity-50"
        disabled={isLoading}
      >
        {isLoading ? 'Verifying...' : 'Verify and Submit'}

@@ -5,14 +5,15 @@ type VoteButtonProps = {
  icon: LucideIcon;
  isActive: boolean;
  count: number;
  hideCount?: boolean;
  onClick: () => void;
};
export function VoteButton(props: VoteButtonProps) {
  const { icon: VoteIcon, isActive, count, onClick } = props;
  const { icon: VoteIcon, isActive, hideCount = false, count, onClick } = props;
  return (
    <button
      className={cn(
        'flex items-center gap-1 px-2 py-1 text-sm text-gray-500 hover:bg-gray-100 hover:text-black focus:outline-none',
        'flex gap-1 px-2 py-1 text-sm text-gray-500 hover:bg-gray-100 hover:text-black focus:outline-none',
        {
          'bg-gray-100 text-orange-600 hover:text-orange-700': isActive,
          'bg-transparent text-gray-500 hover:text-black': !isActive,
@@ -21,10 +22,14 @@ export function VoteButton(props: VoteButtonProps) {
      disabled={isActive}
      onClick={onClick}
    >
      <VoteIcon className={cn('size-3.5 stroke-[2.5px]')} />
      <span className="relative -top-[0.5px] text-xs font-medium tabular-nums">
        {count}
      </span>
      <VoteIcon className={cn('size-3.5 stroke-[2.5px]', {
        'top-[1.5px] relative mr-0.5': hideCount
      })} />
      {!hideCount && (
        <span className="relative -top-[0.5px] text-xs font-medium tabular-nums">
          {count}
        </span>
      )}
    </button>
  );
}

src/components/RoadmapDropdownMenu/RoadmapDropdownMenu.tsx (Normal file, 93 lines)
@@ -0,0 +1,93 @@

import { ChevronDown, Globe, Menu, Sparkles, Waypoints } from 'lucide-react';
import { useEffect, useRef, useState } from 'react';
import { useOutsideClick } from '../../hooks/use-outside-click';
import { cn } from '../../lib/classname';
import {
  navigationDropdownOpen,
  roadmapsDropdownOpen,
} from '../../stores/page.ts';
import { useStore } from '@nanostores/react';

const links = [
  {
    link: '/roadmaps',
    label: 'Official Roadmaps',
    description: 'Made by subject matter experts',
    Icon: Waypoints,
  },
  {
    link: '/ai/explore',
    label: 'AI Roadmaps',
    description: 'Generate roadmaps with AI',
    Icon: Sparkles,
  },
  {
    link: '/community',
    label: 'Community Roadmaps',
    description: 'Made by community members',
    Icon: Globe,
  },
];

export function RoadmapDropdownMenu() {
  const dropdownRef = useRef<HTMLDivElement>(null);

  const $roadmapsDropdownOpen = useStore(roadmapsDropdownOpen);
  const $navigationDropdownOpen = useStore(navigationDropdownOpen);

  useOutsideClick(dropdownRef, () => {
    roadmapsDropdownOpen.set(false);
  });

  useEffect(() => {
    if ($navigationDropdownOpen) {
      roadmapsDropdownOpen.set(false);
    }
  }, [$navigationDropdownOpen]);

  return (
    <div className="relative flex items-center" ref={dropdownRef}>
      <button
        className={cn('text-gray-400 hover:text-white', {
          'text-white': $roadmapsDropdownOpen,
        })}
        onClick={() => roadmapsDropdownOpen.set(true)}
        onMouseOver={() => roadmapsDropdownOpen.set(true)}
        aria-label="Open Navigation Dropdown"
        aria-expanded={$roadmapsDropdownOpen}
      >
        Roadmaps{' '}
        <ChevronDown className="inline-block h-3 w-3" strokeWidth={4} />
      </button>
      <div
        className={cn(
          'pointer-events-none invisible absolute left-0 top-full z-[999] mt-2 w-48 min-w-[320px] -translate-y-1 rounded-lg bg-slate-800 py-2 opacity-0 shadow-2xl transition-all duration-100',
          {
            'pointer-events-auto visible translate-y-2.5 opacity-100':
              $roadmapsDropdownOpen,
          },
        )}
        role="menu"
      >
        {links.map((link) => (
          <a
            href={link.link}
            key={link.link}
            className="group flex items-center gap-3 px-4 py-2.5 text-gray-400 transition-colors hover:bg-slate-700"
            role="menuitem"
          >
            <span className="flex h-[40px] w-[40px] items-center justify-center rounded-full bg-slate-600 transition-colors group-hover:bg-slate-500 group-hover:text-slate-100">
              <link.Icon className="inline-block h-5 w-5" />
            </span>
            <span className="flex flex-col">
              <span className="font-medium text-slate-300 transition-colors group-hover:text-slate-100">
                {link.label}
              </span>
              <span className="text-sm">{link.description}</span>
            </span>
          </a>
        ))}
      </div>
    </div>
  );
}

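Both dropdowns coordinate through two shared boolean stores from `../stores/page.ts`, which this diff does not show; each component's `useEffect` closes itself when the other opens. Assuming the stores are plain nanostores-style `atom(false)` values, the mutual-exclusion behavior can be sketched with a minimal hand-rolled atom (a stand-in, not the real library):

```typescript
// Minimal stand-in for a nanostores-style boolean atom. Assumption: the
// real stores/page.ts exports `atom(false)` values from 'nanostores'.
type Listener = (value: boolean) => void;

function atom(initial: boolean) {
  let value = initial;
  const listeners = new Set<Listener>();
  return {
    get: () => value,
    set(next: boolean) {
      value = next;
      listeners.forEach((fn) => fn(next));
    },
    subscribe(fn: Listener) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}

const navigationDropdownOpen = atom(false);
const roadmapsDropdownOpen = atom(false);

// Mirror of the components' useEffect hooks: opening one closes the other.
// The `if (open)` guard prevents the two subscriptions from ping-ponging.
navigationDropdownOpen.subscribe((open) => {
  if (open) roadmapsDropdownOpen.set(false);
});
roadmapsDropdownOpen.subscribe((open) => {
  if (open) navigationDropdownOpen.set(false);
});

roadmapsDropdownOpen.set(true);
navigationDropdownOpen.set(true); // closes the roadmaps dropdown
console.log(roadmapsDropdownOpen.get()); // false
```

Keeping the "only one open" rule in the store layer means any number of menus can join the scheme without the components knowing about each other.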
src/components/courses/CourseStep.astro (Normal file, 84 lines)
@@ -0,0 +1,84 @@

---
import { Swords } from 'lucide-react';
---

<div class='flex flex-col'>
  <div
    class='-ml-[27.6px] mb-3 flex items-center text-sm leading-none text-gray-400'
  >
    <span class='h-3 w-3 rounded-full bg-black'></span>
    <span class='h-[1px] w-[15px] bg-black'></span>
    <h2 class='rounded-md border border-black bg-black px-3 py-2 text-white'>
      Step 1 — Learn the absolute basics i.e. HTML and CSS
    </h2>
  </div>

  <p class='mb-2 text-sm text-gray-500'>
    Purchase and watch one of the following <span class='font-medium text-black'
      >premium courses</span
    >
  </p>

  <div class='rounded-lg border p-3'>
    <ul class='flex flex-col gap-1 text-sm'>
      <li>
        <a href='#' class='group font-medium text-gray-800 hover:text-black'>
          <span
            class='mr-1.5 inline-block rounded bg-green-300 px-1.5 py-0.5 text-xs uppercase text-black no-underline'
          >
            Course
          </span>

          <span class='underline underline-offset-1'
            >HTML and CSS with Mosh</span
          >
        </a>
      </li>
      <li>
        <a href='#' class='group font-medium text-gray-800 hover:text-black'>
          <span
            class='mr-1.5 inline-block rounded bg-green-300 px-1.5 py-0.5 text-xs uppercase text-black no-underline'
          >
            Course
          </span>

          <span class='underline underline-offset-1'
            >Learn HTML with 50 Projects</span
          >
        </a>
      </li>
    </ul>
  </div>

  <p class='mt-3 text-sm text-gray-500'>
    Once done, build the <span class='font-medium text-black'
      >following projects</span
    > to test and practice your skills
  </p>

  <div class='mt-3 flex flex-col gap-1'>
    <a
      href='/projects/task-tracker'
      class='flex items-center gap-2 rounded-md bg-zinc-100 px-2 py-1.5 text-sm text-black transition-colors hover:bg-zinc-300'
    >
      <Swords size='1.25em' className='text-gray-400' />
      Build a login page for a website.
    </a>

    <a
      href='/projects/task-tracker'
      class='flex items-center gap-2 rounded-md bg-zinc-100 px-2 py-1.5 text-sm text-black transition-colors hover:bg-zinc-300'
    >
      <Swords size='1.25em' className='text-gray-400' />
      Create a landing page for an e-commerce website.
    </a>

    <a
      href='/projects/task-tracker'
      class='flex items-center gap-2 rounded-md bg-zinc-100 px-2 py-1.5 text-sm text-black transition-colors hover:bg-zinc-300'
    >
      <Swords size='1.25em' className='text-gray-400' />
      Create a responsive website for a restaurant.
    </a>
  </div>
</div>

29
src/components/courses/Milestone.astro
Normal file
@@ -0,0 +1,29 @@
|
||||
---
|
||||
import { Flag } from 'lucide-react';
|
||||
---
|
||||
|
||||
<div class='flex flex-col'>
|
||||
<p
|
||||
class='-ml-[37px] mb-3 flex items-center text-sm leading-none text-gray-400'
|
||||
>
|
||||
<span
|
||||
class='relative flex h-8 w-8 items-center justify-center rounded-full bg-green-600 text-white'
|
||||
>
|
||||
<Flag size='1.2em' />
|
||||
</span>
|
||||
<span class='h-[2px] w-[4.5px] bg-green-600'></span>
|
||||
<span
|
||||
class='rounded-md border border-green-600 bg-green-600 px-3 py-2 text-white'
|
||||
>
|
||||
You are ready to apply for jobs
|
||||
</span>
|
||||
</p>
|
||||
|
||||
<p class='mb-2 text-sm text-gray-500'>
|
||||
At this point, you should have a solid understanding of basic front-end development concepts and be able to build simple websites. Start applying for jobs, while continuing to learn and improve your skills.
|
||||
</p>
|
||||
|
||||
<p class='mb-2 text-sm text-gray-500'>
|
||||
You might have a difficult time finding a job at this stage, but don't get discouraged. Keep applying and improving your skills. You can also consider contributing to open-source projects to gain experience and build your portfolio.
|
||||
</p>
|
||||
</div>
|
||||
@@ -1,3 +1,5 @@
|
||||
# Backend Monitoring with Prometheus, Grafana, ELK Stack
|
||||
|
||||
Efficiency and rate of performance are paramount for the backend processes in web applications. Utilizing performance monitoring tools such as Prometheus, Grafana, and the ELK Stack ensures that any issues impacting performance can be promptly identified and rectified. For example, Prometheus offers robust monitoring capabilities by collecting numeric time series data, presenting a detailed insight into the application's performance metrics. Grafana can visualize this data in an accessible, user-friendly way, helping developers to interpret complex statistics and notice trends or anomalies. Meanwhile, the ELK Stack (Elasticsearch, Logstash, Kibana) provides log management solutions, making it possible to search and analyze logs for indications of backend issues. By using these tools, developers can effectively keep backend performance at optimal levels, ensuring smoother user experiences.
|
||||
Efficiency and rate of performance are paramount for the backend processes in web applications. Utilizing performance monitoring tools such as Prometheus, Grafana, and the ELK Stack ensures that any issues impacting performance can be promptly identified and rectified. For example, Prometheus offers robust monitoring capabilities by collecting numeric time series data, presenting a detailed insight into the application's performance metrics. Grafana can visualize this data in an accessible, user-friendly way, helping developers to interpret complex statistics and notice trends or anomalies. Meanwhile, the ELK Stack (Elasticsearch, Logstash, Kibana) provides log management solutions, making it possible to search and analyze logs for indications of backend issues. By using these tools, developers can effectively keep backend performance at optimal levels, ensuring smoother user experiences.
|
||||
|
||||
- [@video@Tutorial - Grafana Explained in 3 minutes](https://www.youtube.com/watch?v=X-GLqyMZaJk)
|
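To give a concrete (hypothetical) flavor of the setup described above, here is a minimal Prometheus scrape configuration sketch; the job name and target endpoint are placeholders, not from the original text:

```yaml
# prometheus.yml — minimal sketch; job name and target are illustrative
global:
  scrape_interval: 15s           # how often Prometheus pulls metrics

scrape_configs:
  - job_name: 'backend-api'          # hypothetical service name
    static_configs:
      - targets: ['localhost:8080']  # endpoint exposing /metrics
```

Grafana would then be pointed at this Prometheus instance as a data source to visualize the collected time series.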
||||
@@ -1,3 +1,5 @@
|
||||
# Maintaining Updated Dependencies
|
||||
|
||||
Keeping your dependencies up to date is crucial for optimizing backend performance in web applications. Regular updates bring new features, improvements, and important patches for security vulnerabilities that could harm the performance and security of your application. An outdated package, for example, may run inefficiently or even prevent other components from functioning at peak performance. This creates a ripple effect that could slow down or disrupt entire processes. Therefore, staying current with all updates enhances the robustness and operational efficiency, contributing to faster load times, better stability, and ultimately, an improved user experience.
|
||||
Keeping your dependencies up to date is crucial for optimizing backend performance in web applications. Regular updates bring new features, improvements, and important patches for security vulnerabilities that could harm the performance and security of your application. An outdated package, for example, may run inefficiently or even prevent other components from functioning at peak performance. This creates a ripple effect that could slow down or disrupt entire processes. Therefore, staying current with all updates enhances the robustness and operational efficiency, contributing to faster load times, better stability, and ultimately, an improved user experience.
|
||||
|
||||
[@video@Tutorial - dependabot on GitHub](https://www.youtube.com/watch?v=TnBEVPUsuAw)
|
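For GitHub-hosted projects, the dependency updates described above can be automated with Dependabot via a small config file. A minimal sketch, assuming an npm project at the repository root:

```yaml
# .github/dependabot.yml — minimal sketch; ecosystem and directory are examples
version: 2
updates:
  - package-ecosystem: 'npm'   # which package manager to watch
    directory: '/'             # location of the package manifest
    schedule:
      interval: 'weekly'       # how often to check for new versions
```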
||||
@@ -1,6 +1,6 @@
|
||||
# Verify the Change in Production.
|
||||
|
||||
Veryfing the change is a crucial step in the code review process that ensures the recently merged changes work correctly and do not cause any unexpected disruptions when deployed to the live production environment. Rigorous testing before deployment helps minimize the risks, but having an additional layer of validation post-deployment provides you with the confidence that your code changes are working as intended while interacting with real users and production data. To make sure of this, consider the following tips:
|
||||
Verifying the change is a crucial step in the code review process that ensures the recently merged changes work correctly and do not cause any unexpected disruptions when deployed to the live production environment. Rigorous testing before deployment helps minimize the risks, but having an additional layer of validation post-deployment provides you with the confidence that your code changes are working as intended while interacting with real users and production data. To make sure of this, consider the following tips:
|
||||
|
||||
- Implement automated monitoring and alerting systems to keep track of your application's key performance indicators (KPIs) and notify you in case of a significant change in the metrics.
|
||||
|
||||
@@ -10,4 +10,4 @@ Veryfing the change is a crucial step in the code review process that ensures th
|
||||
|
||||
- Observe user interaction through user analytics, bug reports, or direct feedback to assess whether the code change has had the intended impact and is positively affecting the user experience.
|
||||
|
||||
- Establish strategies for gradual deployment, such as canary or blue-green deployments, to minimize the potential impact of a problematic change on your entire user base and ensure smoother rollback if needed.
|
||||
- Establish strategies for gradual deployment, such as canary or blue-green deployments, to minimize the potential impact of a problematic change on your entire user base and ensure smoother rollback if needed.
|
||||
|
||||
235
src/data/guides/devops-career-path.md
Normal file
@@ -0,0 +1,235 @@
|
||||
---
|
||||
title: 'Is DevOps engineering a good career path in @currentYear@?'
|
||||
description: 'Learn why a DevOps career path is a smart choice in 2024. Get insights into demand, growth, and earning potential in DevOps.'
|
||||
authorId: ekene
|
||||
excludedBySlug: '/devops/career-path'
|
||||
seo:
|
||||
title: 'Is DevOps engineering a good career path in @currentYear@?'
|
||||
description: 'Learn why a DevOps career path is a smart choice in 2024. Get insights into demand, growth, and earning potential in DevOps.'
|
||||
ogImageUrl: 'https://assets.roadmap.sh/guest/devops-engineer-career-path-2h4r7.jpg'
|
||||
isNew: true
|
||||
type: 'textual'
|
||||
date: 2024-08-20
|
||||
sitemap:
|
||||
priority: 0.7
|
||||
changefreq: 'weekly'
|
||||
tags:
|
||||
- 'guide'
|
||||
- 'textual-guide'
|
||||
- 'guide-sitemap'
|
||||
---
|
||||
|
||||

|
||||
|
||||
Making career choices can be overwhelming, both for beginners and for experienced software developers seeking to advance their skills. Several factors contribute to this, such as the abundance of options, the sheer volume of resources on the internet, steep learning curves, and so on.
|
||||
|
||||
However, before selecting a path, it is helpful to look at certain factors, such as your interests, strengths, and the future prospects of the career path, as these factors play a crucial role in determining your potential for success.
|
||||
|
||||
[DevOps engineering](https://roadmap.sh/devops) is one of the most [in-demand and highest-paying roles](https://uk.indeed.com/career-advice/career-development/software-engineering-jobs) in the tech industry and, in recent times, has become the go-to choice both for people getting into tech and for experienced tech professionals.
|
||||
As a DevOps professional, you can expect strong career growth and plenty of opportunities.
|
||||
|
||||
The DevOps philosophy involves bringing developers and operation teams together to improve the software delivery process.
|
||||
|
||||
This guide will detail DevOps and provide the information you need to decide whether to pursue the DevOps engineer career path, along with steps to ensure your DevOps career keeps growing.
|
||||
|
||||
## What is DevOps?
|
||||
|
||||
Derived from the combination of development (Dev) and operations (Ops), DevOps is a software development methodology that aims to improve collaboration between development and operations teams, increase the efficiency, security, and speed of software development and delivery.
|
||||
|
||||

|
||||
|
||||
Within DevOps, you'll play an important part in the entire software development lifecycle - from initial planning to implementation. This means you'll be a team player with excellent communication skills.
|
||||
|
||||
## Is the DevOps engineer career path right for you?
|
||||
|
||||
DevOps is a field that's here to stay. The DevOps market reached an impressive $10.3 billion at the end of 2023, and it is still growing. Securing a DevOps role is your first step toward a long-lasting career.
|
||||
|
||||
DevOps career paths are worth considering if you have experience in software development, networking, or operations. The role involves automation, testing, monitoring, configuration, networking, and Infrastructure as Code (IaC), and it requires a diverse skill set, as discussed below. It serves as a bridge between development and operations teams.
|
||||
|
||||
These are some factors to consider before choosing the DevOps engineer career path:
|
||||
|
||||
- Interest in automation
|
||||
- Enjoy collaborating
|
||||
- Interest in infrastructure management
|
||||
- Love for problem-solving
|
||||
- Willingness to continuously learn new skills and technology
|
||||
|
||||
### Interest in automation
|
||||
|
||||
Automation is an integral part of the DevOps career path. It involves writing scripts and code to automate repetitive tasks and enhance software delivery processes. By automating repetitive tasks and workflows, DevOps teams can increase efficiency, reduce errors, and accelerate time to market for software releases.
|
||||
|
||||
### Enjoy collaborating
|
||||
|
||||
Collaboration is crucial in the DevOps career, as you will work with different people across several teams. The goal is to break down the silos across teams and ensure they all work together to achieve the same goal. Having great collaboration skills is crucial to being a DevOps engineer.
|
||||
|
||||
### Interest in infrastructure management
|
||||
|
||||
Do you enjoy working on infrastructure code rather than domain code? The upside of infrastructure code is that, once set up, it can be replicated across several environments, and the knowledge transfers to other organizations. With domain code, by contrast, you always need to learn the domain of the business you are writing code for.
|
||||
|
||||
### Love for problem-solving
|
||||
|
||||
Choosing this field requires that you enjoy solving problems and can devise solutions to complex problems.
|
||||
|
||||
### Willingness to continuously learn new skills and technology
|
||||
|
||||
DevOps is an evolving field, and there is always something new. To be up to date, you have to be willing and open to continuous learning. This involves taking courses, reading articles, and getting updates on things happening in the DevOps field and tech.
|
||||
|
||||
It is worth noting that working in DevOps involves high-pressure environments. You are constantly relied on to manage an organization's IT infrastructure and its new and existing cloud systems, which can sometimes be overwhelming.
|
||||
|
||||
Also, there is a steep learning curve. As a tech beginner, it can be daunting to get into DevOps and adapt to the DevOps culture, but it gets easier as you go along.
|
||||
|
||||
## DevOps in 2024
|
||||
|
||||
According to [Statista](https://www.statista.com/statistics/1367003/in-demand-it-roles/), DevOps software engineering positions are among the top technical positions demanded by recruiters worldwide in 2023. Indeed reported that the average annual salary of a [DevOps engineer](https://www.indeed.com/career/development-operations-engineer/salaries?from=top_sb) in the USA is $124,392.
|
||||
|
||||
DevOps has evolved over the last decade. Today, it is more than automating tasks and having engineers write scripts. It is now about practices that automate software delivery and improve the business and the overall software development process.
|
||||
|
||||
Certain trends are impacting the DevOps market currently and will also play a role in the future of DevOps. Some of them include:
|
||||
|
||||
- Microservices
|
||||
- Cloud technology
|
||||
- Automation and CI/CD
|
||||
- Artificial Intelligence and Machine Learning
|
||||
|
||||
Let's look at these trends and how they indirectly influence your decision.
|
||||
|
||||
### Microservices
|
||||
|
||||
This architecture enables the agile development and continuous delivery of software solutions. In a microservice architecture, applications are split into smaller parts known as microservices that focus on a single responsibility. Each part (microservice) is developed and deployed independently, and microservices communicate via events or API interfaces.
|
||||
|
||||
It is a common trend, and many organizations are adopting this architecture because of its benefits, one of which is the ability to adapt to market changes and ship features faster, without the delays of modular monoliths. A DevOps engineer is critical to the adoption and success of microservices.
|
||||
|
||||
### Cloud technology
|
||||
|
||||
Cloud-native applications have become popular recently. They involve developing and deploying software applications and their dependencies in a cloud environment.
|
||||
|
||||
There are several cloud platforms, some of the most popular ones being [AWS](https://roadmap.sh/aws), Microsoft Azure, and Google Cloud Platform (GCP). One advantage of using these cloud providers is that you don't have to manage the cloud infrastructure but instead focus on developing your applications. You also pay for only the resources you use.
|
||||
|
||||
Containerization tools like [Docker](https://roadmap.sh/docker) and [Kubernetes](https://roadmap.sh/kubernetes) have been made popular by cloud services and microservices. These tools are part of the toolkit of DevOps engineers.
|
||||
|
||||
### Automation and CI/CD
|
||||
|
||||
Automation and continuous integration/continuous deployment are integral to DevOps. Organizations are widely adopting automation of their infrastructure and deployments because of its benefits, including faster and more reliable application deployments.
|
||||
|
||||
Also, with the adoption of GitOps, an operational framework that takes DevOps principles and best practices for application development and applies them to infrastructure automation, the deployment process is even more efficient. DevOps professionals are the major catalysts for this and will remain relevant.
|
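As a concrete (hypothetical) example of the CI/CD automation described above, a minimal GitHub Actions workflow that builds and tests on every push could look like the following sketch; the `make` commands are placeholders for a project's real build and test steps:

```yaml
# .github/workflows/ci.yml — minimal sketch; build/test commands are placeholders
name: CI
on: [push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build   # replace with your project's build command
      - name: Test
        run: make test    # replace with your project's test command
```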
||||
|
||||
### Artificial intelligence and machine learning
|
||||
|
||||
AI and ML have become integrated into our daily lives and automation tools are used to automate processes and routine tasks, monitor system health, and predict potential system issues. These AI tools need to be designed, maintained, and enhanced.
|
||||
|
||||
In the AI and ML field, this is the job of an MLOps engineer, but a DevOps engineer can upskill and transition into an MLOps role.
|
||||
|
||||
There is a concern that AI will replace DevOps professionals. However, I believe AI will complement the DevOps process, improve the software development lifecycle, and make better DevOps engineers.
|
||||
|
||||
## Specializations in DevOps
|
||||
|
||||
DevOps career paths are rewarding. A DevOps career offers plenty of growth opportunities and, as you have seen in the previous section, is in high demand.
|
||||
|
||||
There are several DevOps career paths and opportunities for both entry-level and experienced positions. Working in DevOps normally starts with an entry-level position such as release manager or junior DevOps engineer.
|
||||
|
||||
As a DevOps professional, you can decide to go for any of these following DevOps skills and specializations:
|
||||
|
||||
- Automation expert
|
||||
- General DevOps engineer
|
||||
- Systems engineer
|
||||
- DevOps architect
|
||||
- DevOps release manager
|
||||
- DevSecOps engineer
|
||||
- DevOps test engineer
|
||||
|
||||
### Automation expert
|
||||
|
||||
In the DevOps career path, you can work as an automation expert or engineer, depending on the organization. You can specialize in implementing automation solutions, continuous improvement, and software delivery. As automation plays a critical role, every DevOps engineer should be familiar with the automation process.
|
||||
|
||||
Automation experts specialize in implementing continuous integration (CI) and continuous delivery (CD) within the software lifecycle to boost the efficiency of development and operations teams. Additionally, they design and integrate monitoring, dashboard, and incident management tools like [Grafana](https://grafana.com/), [Loki](https://grafana.com/oss/loki/), and [Seq](https://datalust.co/seq).
|
||||
|
||||
### General DevOps engineer
|
||||
|
||||
This is one of the key DevOps career paths. As a DevOps engineer, you are involved in all aspects of the software development life cycle, working closely with developers and acting as a bridge between the development and operations teams. DevOps engineers need to be proficient with top DevOps automation tools and have knowledge of cloud platforms like AWS and Google Cloud. A newcomer to DevOps usually starts on this path as a junior DevOps engineer.
|
||||
|
||||
### Systems engineer
|
||||
|
||||
This is another DevOps career you can assume as you become a DevOps engineer. As a system engineer, you are responsible for designing, deploying, and maintaining an organization's IT infrastructure, including the hardware, software, networking, and operating systems.
|
||||
|
||||
### DevOps architect
|
||||
|
||||
In this DevOps career path, a DevOps architect is responsible for designing and implementing the overall DevOps architecture and processes in an organization.
|
||||
|
||||
A DevOps architect is responsible for building the foundation upon which the entire process rests. The DevOps architect role is a more senior role than a DevOps engineer.
|
||||
|
||||
A DevOps architect is like the contractor of the DevOps world: they ensure consistency of agile principles across the DevOps process and work closely with other senior DevOps engineers and professionals to ensure these principles are followed.
|
||||
|
||||
### DevOps release manager
|
||||
|
||||
This is a DevOps career path where you are responsible for managing and overseeing software releases throughout the DevOps process. A DevOps release manager ensures software products are released on time, with high quality and reliability.
|
||||
|
||||
### DevSecOps engineer
|
||||
|
||||
DevSecOps stands for Development, Security and Operations. Such engineers design and implement secure architectures for software and infrastructure, manage vulnerabilities, and protect against security threats.
|
||||
|
||||
DevSecOps engineers ensure that software applications and their supporting infrastructure are secure.
|
||||
|
||||
### DevOps test engineer
|
||||
|
||||
A DevOps test engineer is responsible for implementing tests to ensure software products are high-quality, reliable, and scalable. They oversee all stages of the testing process, such as designing automated testing frameworks, identifying and resolving issues, and certifying compliance with industry standards.
|
||||
|
||||
Other DevOps roles include:
|
||||
|
||||
- DevOps Cloud Engineer
|
||||
- Lead DevOps Engineer
|
||||
|
||||
## Skills required in DevOps
|
||||
|
||||
DevOps engineers require both technical and soft skills, which may vary depending on the organization, team structure, technologies, and tools. However, some common skills exist across the board.
|
||||
|
||||
- Knowledge of coding and scripting
|
||||
- In-depth knowledge of container and container orchestration
|
||||
- Knowledge of logging and configuration management
|
||||
- Understanding of system administration
|
||||
- In-depth knowledge of version control systems
|
||||
- Knowledge of continuous integration and continuous deployment (CI/CD)
|
||||
- Collaboration skills
|
||||
|
||||
### Knowledge of coding and scripting
|
||||
|
||||
To build a DevOps career, you should know at least one programming language and be proficient in scripting so you can automate tasks and processes that would otherwise be tedious and slow. You should also be familiar with software development principles.
|
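To make this concrete, here is a small, self-contained Python sketch of the kind of repetitive task a DevOps engineer might script: summarizing log levels from application log lines. The log format and function name are hypothetical, chosen just for illustration:

```python
from collections import Counter

def count_log_levels(lines):
    """Count log levels in lines shaped like '<timestamp> <LEVEL> <message>'."""
    levels = Counter()
    for line in lines:
        parts = line.split(maxsplit=2)
        if len(parts) >= 2:
            levels[parts[1]] += 1  # the second field is the log level
    return levels

if __name__ == "__main__":
    sample = [
        "2024-08-20T10:00:01 INFO deploy started",
        "2024-08-20T10:00:05 ERROR connection timed out",
        "2024-08-20T10:00:09 ERROR disk almost full",
    ]
    print(count_log_levels(sample))  # Counter({'ERROR': 2, 'INFO': 1})
```

In practice such a script would read from a file or stdin rather than a hard-coded list, and could feed a dashboard or alerting rule.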
||||
|
||||
### In-depth knowledge of container and container orchestration
|
||||
|
||||
With the popularity of microservices, applications can be shipped in containers and deployed to the cloud. This is made possible by tools like [Docker](https://roadmap.sh/docker) and container orchestration tools like [Kubernetes](https://roadmap.sh/kubernetes). A DevOps cloud engineer must have extensive knowledge of these tools and how to use them.
|
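For a sense of what orchestration looks like in practice, here is a minimal (hypothetical) Kubernetes Deployment manifest; the names and image are placeholders:

```yaml
# deployment.yaml — minimal sketch; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.27    # any container image works here
          ports:
            - containerPort: 80
```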
||||
|
||||

|
||||
|
||||
### Knowledge of logging and configuration management tools
|
||||
|
||||
Monitoring is one of the core DevOps processes. In the DevOps career path, you are expected to have knowledge of monitoring and logging tools; a popular one is [Grafana](https://grafana.com/). You should also be comfortable working with configuration management tools, automation frameworks, and Linux environments.
|
||||
|
||||
### Understanding of system administration
|
||||
|
||||
A basic understanding of provisioning and managing servers, security monitoring, and networks is required in the DevOps career path. You will monitor the servers for security vulnerabilities and apply patches when necessary.
|
||||
|
||||
### In-depth knowledge of version control systems and source code management
|
||||
|
||||
Version control is one of the required technical skills that a DevOps engineer should have. The most widely used version control system (VCS) is Git.
|
||||
|
||||
### Knowledge of continuous integration and continuous deployment (CI/CD)
|
||||
|
||||
A DevOps professional is required to have a deep understanding of CI/CD. CI/CD involves the design and implementation of software delivery pipelines. It enables faster software release cycles. Some key DevOps tools include [Jenkins](https://www.jenkins.io/), [Azure DevOps](https://azure.microsoft.com/de-de/products/devops), [CircleCI](https://circleci.com/), [BitBucket Pipelines](https://bitbucket.org/), [GitHub Actions](https://github.com/features/actions), etc.
|
||||
|
||||
### Communication and Collaboration skills
|
||||
|
||||
As a DevOps professional, be prepared to work closely with cross-functional development teams. You are expected to have good communication and collaboration skills to be an effective team member. You should be able to clearly communicate your ideas to other developers, end-users and stakeholders.
|
||||
|
||||
## How can I start my DevOps career?
|
||||
|
||||
The next question you might be asking is: how do I start my career in [DevOps](https://roadmap.sh/devops)?
|
||||
One way to begin your DevOps career is by obtaining a bachelor's degree in computer science.
|
||||
|
||||
You can also obtain DevOps certification from certified DevOps trainers. One of the popular DevOps certifications is the AWS Certified DevOps Engineer.
|
||||
|
||||
roadmap.sh offers step-by-step guidance on [how to become a DevOps engineer](https://roadmap.sh/devops/how-to-become-devops-engineer), and by signing up, you will be able to:
|
||||
|
||||
- Keep track of your progress and also share it on your roadmap.sh profile.
|
||||
- Collaborate on other official roadmaps.
|
||||
- Draw your roadmap, either as an individual learner or for [Dev teams](https://roadmap.sh/teams).
|
||||
- [Generate new roadmaps with AI](https://roadmap.sh/ai).
|
||||
@@ -46,7 +46,7 @@ After a few years of the release of ES5, things started to change, TC39 (the com
|
||||
- Default and rest parameters
|
||||
- Spread operator
|
||||
- `let` and `const`
|
||||
- Iterators `for..of`
|
||||
- Iterators `for..of`
|
||||
- Generators
|
||||
- `map` and `set`
|
||||
- Proxies and Symbols
|
||||
@@ -80,4 +80,4 @@ ESNext is a dynamic name that refers to whatever the current version of ECMAScri
|
||||
|
||||
Since the release of ES6, [TC39](https://github.com/tc39) has quite streamlined their process. TC39 operates through a Github organization now and there are [several proposals](https://github.com/tc39/proposals) for new features or syntax to be added to the next versions of ECMAScript. Any one can go ahead and [submit a proposal](https://github.com/tc39/proposals) thus resulting in increasing the participation from the community. Every proposal goes through [four stages of maturity](https://tc39.github.io/process-document/) before it makes it into the specification.
|
||||
|
||||
And that about wraps it up. Feel free to leave your feedback in the [discord](https://discord.gg/ZrSpJ8zH). Also here are the links to original language specifications [ES6](https://www.ecma-international.org/ecma-262/6.0/), [ES7](https://www.ecma-international.org/ecma-262/7.0/) and [ES8](https://www.ecma-international.org/ecma-262/8.0/).
|
||||
And that about wraps it up. Feel free to leave your feedback in the [discord](https://roadmap.sh/discord). Also here are the links to original language specifications [ES6](https://www.ecma-international.org/ecma-262/6.0/), [ES7](https://www.ecma-international.org/ecma-262/7.0/) and [ES8](https://www.ecma-international.org/ecma-262/8.0/).
|
||||
|
||||
@@ -25,7 +25,7 @@ Java has been a popular programming language for the past 28 years and remains i
|
||||
|
||||
If you are building web applications, the ability to work on both front-end and back-end development using Java is valuable. Fundamental and advanced Java skills such as multithreading, concurrency, JVM tuning, and object-oriented design are vital in enterprise environments.
|
||||
|
||||
To remain competitive as a Java developer, you must continuously improve your skill sets to meet evolving industry demands.
|
||||
To remain competitive as a Java developer, you must continuously improve your skill sets to meet evolving industry demands.
|
||||
|
||||
This guide will equip you with the skills required in 2024. You’ll understand the landscape of Java demand, adoption, diverse applications, and strategies for excelling as a Java developer. By the end of this guide, you will be confident about pursuing a Java development career.
|
||||
|
||||
@@ -78,7 +78,7 @@ JavaScript is a programming language used alongside HTML and CSS to enhance the
|
||||
|
||||
### TypeScript
|
||||
|
||||
Typescript is an extension of JavaScript with static typing and other advanced features. [TypeScript](https://roadmap.sh/typescript) code transpiles to JavaScript and can run seamlessly wherever JavaScript runs, making it a highly versatile programming language for front-end development. The knowledge enhances your productivity by allowing you to build robust applications, detect errors, and catch issues as they happen.
|
||||
Typescript is an extension of JavaScript with static typing and other advanced features. [TypeScript](https://roadmap.sh/typescript) code transpiles to JavaScript and can run seamlessly wherever JavaScript runs, making it a highly versatile programming language for front-end development. The knowledge enhances your productivity by allowing you to build robust applications, detect errors, and catch issues as they happen.
|
||||
|
||||

|
||||
|
||||
@@ -102,7 +102,7 @@ Here are the back-end skills you should learn in 2024:
|
||||
- Web Security
|
||||
- Caching
|
||||
|
||||
### Java programming language:
|
||||
### Java programming language:
|
||||
|
||||
A deep understanding of Java fundamentals is essential to becoming a full stack developer. A strong grasp of Java's core concepts, such as classes, inheritance, and abstraction, is crucial for developing full stack applications that run on web or mobile platforms. Java's versatility and robustness make it a popular choice for backend development, and proficiency in Java allows developers to build scalable and secure server-side components. Some popular database management systems are MySQL, SQL Server, PostgreSQL, MongoDB, and Oracle.

@@ -116,7 +116,7 @@ Frameworks are pre-written and thoroughly tested collections of code, classes, c

While Java has several frameworks for building full stack applications, it's crucial to consider each framework's pros and cons, adoption rates, and how effectively it addresses the intended business requirements. One particularly renowned framework is the Java [Spring](https://roadmap.sh/spring-boot) framework, celebrated for simplifying web development for both small-scale and enterprise-level Java applications. In addition to its user-friendliness, it boasts a vast ecosystem and a thriving community of developers.

### Version control

Version control systems facilitate teamwork by allowing you and your team members to collaborate on a project simultaneously. They enable the management of changes to code and files over time without disrupting the workflow.

@@ -182,7 +182,7 @@ Design patterns are proven approaches to solving specific design challenges and

As the popular saying goes, "a tree cannot make a forest." While it might be tempting to work in isolation and tackle all development tasks alone, it's essential to join communities that encourage collaboration, peer learning, and staying updated on the latest developments.

- A great community to join is the [roadmap.sh Discord community](https://discord.gg/ZrSpJ8zH), where you can connect with like-minded individuals who share your passion for development.
+ A great community to join is the [roadmap.sh Discord community](https://roadmap.sh/discord), where you can connect with like-minded individuals who share your passion for development.

### Soft skills

39
src/data/projects/basic-dockerfile.md
Normal file
@@ -0,0 +1,39 @@
---
title: 'Basic Dockerfile'
description: 'Build a basic Dockerfile to create a Docker image.'
isNew: false
sort: 1
difficulty: 'beginner'
nature: 'CLI'
skills:
  - 'docker'
  - 'dockerfile'
  - 'linux'
  - 'devops'
seo:
  title: 'Basic Dockerfile'
  description: 'Write a basic Dockerfile to create a Docker image.'
  keywords:
    - 'basic dockerfile'
    - 'dockerfile'
    - 'docker'
roadmapIds:
  - 'devops'
  - 'docker'
---

In this project, you will write a basic Dockerfile to create a Docker image. When this Docker image is run, it should print "Hello, Captain!" to the console before exiting.

## Requirements

- The Dockerfile should be named `Dockerfile`.
- The Dockerfile should be in the root directory of the project.
- The base image should be `alpine:latest`.
- The Dockerfile should contain a single instruction to print "Hello, Captain!" to the console before exiting.

You can learn more about writing a Dockerfile [here](https://docs.docker.com/engine/reference/builder/).
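As a sketch, a Dockerfile meeting the requirements above can be as small as two instructions (assuming the `echo` built into Alpine is an acceptable way to print the greeting):

```dockerfile
# Required base image
FROM alpine:latest

# Single instruction that prints the greeting when the container runs, then exits
CMD ["echo", "Hello, Captain!"]
```

You would build and run it with `docker build -t hello-captain .` followed by `docker run --rm hello-captain`.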
<hr />
If you are looking to build a more advanced version of this project, you can consider adding the ability to pass your name to the Docker image as an argument, and have the Docker image print "Hello, [your name]!" instead of "Hello, Captain!".
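For the advanced variant, one hedged sketch uses a build argument (the `NAME` argument here is an assumption for illustration, not part of the project spec):

```dockerfile
FROM alpine:latest

# Supplied at build time: docker build --build-arg NAME=Alice -t hello .
ARG NAME=Captain

# ARG values are not visible at runtime, so copy the value into an ENV first
ENV NAME=${NAME}

CMD ["sh", "-c", "echo \"Hello, $NAME!\""]
```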

68
src/data/projects/caching-server.md
Normal file
@@ -0,0 +1,68 @@
---
title: 'Caching Proxy'
description: 'Build a caching server that caches responses from other servers.'
isNew: false
sort: 10
difficulty: 'intermediate'
nature: 'CLI'
skills:
  - 'Programming Language'
  - 'Text Processing'
  - 'Markdown libraries'
  - 'File Uploads'
seo:
  title: 'Caching Proxy Project Idea'
  description: 'Build a caching proxy server that caches responses from the proxied server.'
  keywords:
    - 'backend project idea'
roadmapIds:
  - 'backend'
  - 'nodejs'
  - 'python'
  - 'java'
  - 'golang'
  - 'spring-boot'
---

You are required to build a CLI tool that starts a caching proxy server. It forwards requests to the actual server and caches the responses; if the same request is made again, it returns the cached response instead of forwarding the request to the server.

## Requirements

The user should be able to start the caching proxy server by running a command like the following:

```shell
caching-proxy --port <number> --origin <url>
```

- `--port` is the port on which the caching proxy server will run.
- `--origin` is the URL of the server to which the requests will be forwarded.

For example, if the user runs the following command:

```shell
caching-proxy --port 3000 --origin http://dummyjson.com
```

the caching proxy server should start on port 3000 and forward requests to `http://dummyjson.com`.

Taking the above example, if the user makes a request to `http://localhost:3000/products`, the caching proxy server should forward the request to `http://dummyjson.com/products`, return the response along with its headers, and cache the response. Also, add a header to the response indicating whether it came from the cache or the server:

```plaintext
# If the response is from the cache
X-Cache: HIT

# If the response is from the origin server
X-Cache: MISS
```

If the same request is made again, the caching proxy server should return the cached response instead of forwarding the request to the server.

You should also provide a way to clear the cache by running a command like the following:

```shell
caching-proxy --clear-cache
```
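The heart of the proxy is the cache lookup. A minimal Python sketch of that piece is shown below; the class and method names are illustrative, not prescribed by the project:

```python
class ResponseCache:
    """In-memory cache of origin responses, keyed by HTTP method and URL."""

    def __init__(self):
        self._store = {}

    def get(self, method, url):
        """Return (body, headers) with X-Cache: HIT, or None on a miss."""
        entry = self._store.get((method, url))
        if entry is None:
            return None
        body, headers = entry
        return body, {**headers, "X-Cache": "HIT"}

    def put(self, method, url, body, headers):
        """Store a fresh origin response and tag it as a miss."""
        self._store[(method, url)] = (body, dict(headers))
        return body, {**headers, "X-Cache": "MISS"}

    def clear(self):
        """Backs the --clear-cache flag."""
        self._store.clear()
```

A real implementation would wrap this in an HTTP server (e.g. Python's `http.server`) that forwards cache misses to the origin and stores the result via `put`.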
<hr />
After building the above project, you should have a good understanding of how caching works and how you can build a caching proxy server to cache responses from other servers.

@@ -7,6 +7,7 @@ difficulty: 'beginner'
nature: 'CLI'
skills:
  - 'Programming Language'
  - 'CLI'
  - 'API Consumption'
seo:
  title: 'GitHub User Activity CLI'
45
src/data/projects/log-archive-tool.md
Normal file
@@ -0,0 +1,45 @@
---
title: 'Log Archive Tool'
description: 'Build a tool to archive logs from the CLI.'
isNew: false
sort: 2
difficulty: 'beginner'
nature: 'CLI'
skills:
  - 'linux'
  - 'bash'
  - 'shell scripting'
seo:
  title: 'Log Archive Tool'
  description: 'Build a tool to archive logs from the CLI.'
  keywords:
    - 'log archive tool'
    - 'devops project idea'
roadmapIds:
  - 'devops'
  - 'linux'
---

In this project, you will build a tool that archives logs on a set schedule by compressing them and storing them in a new directory. This is especially useful for removing old logs and keeping the system clean, while still keeping the logs available in compressed form for future reference. The project will help you practice your programming skills, including working with files and directories, and building a simple CLI tool.

The most common location for logs on a Unix-based system is `/var/log`.

## Requirements

The tool should run from the command line, accept the log directory as an argument, compress the logs, and store them in a new directory. The user should be able to:

- Provide the log directory as an argument when running the tool.

  ```bash
  log-archive <log-directory>
  ```

- The tool should compress the logs into a `tar.gz` file and store it in a new directory.
- The tool should log the date and time of the archive to a file.

  ```bash
  logs_archive_20240816_100648.tar.gz
  ```

You can learn more about the `tar` command [here](https://www.gnu.org/software/tar/manual/tar.html).
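If you prefer not to shell out to `tar`, the same result can be sketched with Python's standard library; the function and directory names here are illustrative:

```python
import time
import tarfile
from pathlib import Path

def archive_logs(log_dir, out_dir="archives"):
    """Compress log_dir into logs_archive_YYYYMMDD_HHMMSS.tar.gz and record the run."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d_%H%M%S")
    archive_path = out / f"logs_archive_{stamp}.tar.gz"
    # gzip-compressed tarball of the whole log directory
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(log_dir, arcname=Path(log_dir).name)
    # keep a record of the date and time of each archive
    with open(out / "archive_history.log", "a") as history:
        history.write(f"{stamp} {archive_path.name}\n")
    return archive_path
```

Scheduling can then be handled externally, e.g. with a cron entry that invokes the script nightly.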
<hr />
If you are looking to build a more advanced version of this project, you can consider adding functionality to the tool like emailing the user updates on the archive, or sending the archive to a remote server or cloud storage.

@@ -12,7 +12,7 @@ skills:
  - 'File Uploads'
seo:
  title: 'Markdown Note-taking App Project Idea'
-  description: 'Build an API for an expense tracker application.'
+  description: 'Build a note-taking app that uses markdown for formatting.'
  keywords:
    - 'backend project idea'
roadmapIds:
76
src/data/projects/number-guessing-game.md
Normal file
@@ -0,0 +1,76 @@
---
title: 'Number Guessing Game'
description: 'Build a simple number guessing game to test your luck.'
isNew: false
sort: 4
difficulty: 'beginner'
nature: 'CLI'
skills:
  - 'Programming Language'
  - 'CLI'
  - 'Logic Building'
seo:
  title: 'Number Guessing Game Project Idea'
  description: 'Build a simple number guessing game to test your luck.'
  keywords:
    - 'number guessing game'
    - 'backend project idea'
roadmapIds:
  - 'backend'
  - 'nodejs'
  - 'python'
  - 'java'
  - 'golang'
  - 'spring-boot'
---

You are required to build a simple number guessing game in which the computer randomly selects a number and the user has to guess it. The user is given a limited number of chances. If the user guesses the number correctly, the game ends and the user wins; otherwise, the game continues until the user runs out of chances.

## Requirements

It is a CLI-based game, so you need to use the command line to interact with it. The game should work as follows:

- When the game starts, it should display a welcome message along with the rules of the game.
- The computer should randomly select a number between 1 and 100.
- The user should select the difficulty level (easy, medium, or hard), which determines the number of chances they get to guess the number.
- The user should be able to enter their guess.
- If the user's guess is correct, the game should display a congratulatory message along with the number of attempts it took to guess the number.
- If the user's guess is incorrect, the game should display a message indicating whether the number is greater or less than the user's guess.
- The game should end when the user guesses the correct number or runs out of chances.

Here is a sample output of the game:

```plaintext
Welcome to the Number Guessing Game!
I'm thinking of a number between 1 and 100.

Please select the difficulty level:
1. Easy (10 chances)
2. Medium (5 chances)
3. Hard (3 chances)

Enter your choice: 2

Great! You have selected the Medium difficulty level.
You have 5 chances to guess the correct number.
Let's start the game!

Enter your guess: 50
Incorrect! The number is less than 50.

Enter your guess: 25
Incorrect! The number is greater than 25.

Enter your guess: 35
Incorrect! The number is less than 35.

Enter your guess: 30
Congratulations! You guessed the correct number in 4 attempts.
```
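The guess-checking loop above can be sketched as a pure function, which keeps it easy to test; the real game would wrap it in an input loop and pick the secret with `random.randint(1, 100)` (names here are illustrative):

```python
CHANCES = {"easy": 10, "medium": 5, "hard": 3}

def play(secret, guesses, level="medium"):
    """Run one round against a known secret.

    Returns the number of attempts used, or None if the player ran out of chances.
    """
    for attempt, guess in enumerate(guesses[: CHANCES[level]], start=1):
        if guess == secret:
            print(f"Congratulations! You guessed the correct number in {attempt} attempts.")
            return attempt
        # binary-search style hint: tell the player which direction to go
        hint = "greater" if secret > guess else "less"
        print(f"Incorrect! The number is {hint} than {guess}.")
    return None  # out of chances
```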

To make the game more interesting, you can add the following features:

- Allow the user to play multiple rounds (i.e., keep playing until the user decides to quit). You can do this by asking the user if they want to play again after each round.
- Add a timer to see how long it takes the user to guess the number.
- Implement a hint system that provides clues if the user is stuck.
- Keep track of the user's high score (i.e., the fewest attempts it took to guess the number at a given difficulty level).
115
src/data/projects/scalable-ecommerce-platform.md
Normal file
@@ -0,0 +1,115 @@
---
title: 'Scalable E-Commerce Platform'
description: 'Build an e-commerce platform using microservices architecture.'
isNew: false
sort: 19
difficulty: 'advanced'
nature: 'API'
skills:
  - 'Microservices'
  - 'Database'
  - 'Docker'
  - 'Authentication'
seo:
  title: 'Scalable E-Commerce Platform Project Idea'
  description: 'Build a scalable e-commerce platform using microservices architecture and Docker.'
  keywords:
    - 'e-commerce platform'
    - 'backend project idea'
roadmapIds:
  - 'backend'
  - 'nodejs'
  - 'python'
  - 'java'
  - 'golang'
  - 'spring-boot'
---

Build a scalable e-commerce platform using microservices architecture and Docker. The platform will handle the various aspects of an online store, such as product catalog management, user authentication, shopping cart, payment processing, and order management. Each of these features will be implemented as a separate microservice, allowing for independent development, deployment, and scaling.

## Core Microservices:

Here are sample core microservices that you can implement for your e-commerce platform:

1. **User Service:**
   - **Functionality:** Handles user registration, authentication, and profile management.
   - **Tech Stack:** Any backend language, e.g. Node.js (Express), Go, Python (Flask/Django)
   - **Database:** Any database, e.g. PostgreSQL

2. **Product Catalog Service:**
   - **Functionality:** Manages product listings, categories, and inventory.
   - **Tech Stack:** Any backend language, e.g. Node.js (Express), Go, Python (Flask/Django)
   - **Database:** Any database, e.g. MongoDB or MySQL

3. **Shopping Cart Service:**
   - **Functionality:** Manages users' shopping carts, including adding/removing items and updating quantities.
   - **Tech Stack:** Any backend language, e.g. Node.js (Express), Go, Python (Flask/Django)
   - **Database:** Redis (for quick access)

4. **Order Service:**
   - **Functionality:** Processes orders, including placing orders, tracking order status, and managing order history.
   - **Tech Stack:** Any backend language, e.g. Node.js (Express), Go, Python (Flask/Django)
   - **Database:** MySQL

5. **Payment Service:**
   - **Functionality:** Handles payment processing, integrating with external payment gateways.
   - **Tech Stack:** Any backend language, e.g. Node.js (Express), Go, Python (Flask/Django)
   - **Third-Party Integration:** Stripe, PayPal, etc.

6. **Notification Service:**
   - **Functionality:** Sends email and SMS notifications for various events (e.g., order confirmation, shipping updates).
   - **Tech Stack:** Any backend language, e.g. Node.js (Express), Go, Python (Flask/Django)
   - **Third-Party Integration:** Twilio, SendGrid, etc.

## Additional Components:

- **API Gateway:**
  - **Functionality:** Serves as the entry point for all client requests, routing them to the appropriate microservice.
  - **Tech Stack:** NGINX, Kong, or Traefik

- **Service Discovery:**
  - **Functionality:** Automatically detects and manages service instances.
  - **Tech Stack:** Consul or Eureka

- **Centralized Logging:**
  - **Functionality:** Aggregates logs from all microservices for easy monitoring and debugging.
  - **Tech Stack:** ELK Stack (Elasticsearch, Logstash, Kibana)

- **Docker & Docker Compose:**
  - **Functionality:** Containerizes each microservice and manages their orchestration, networking, and scaling.
  - **Docker Compose:** Defines and runs multi-container Docker applications for development and testing.

- **CI/CD Pipeline:**
  - **Functionality:** Automates the build, test, and deployment process of each microservice.
  - **Tech Stack:** Jenkins, GitLab CI, or GitHub Actions

## Steps to Get Started:

1. **Set up Docker and Docker Compose:**
   - Create Dockerfiles for each microservice.
   - Use Docker Compose to define and manage multi-container applications.

2. **Develop Microservices:**
   - Start with a simple MVP (Minimum Viable Product) for each service, then iterate by adding more features.

3. **Integrate Services:**
   - Use REST APIs or gRPC for communication between microservices.
   - Implement an API Gateway to handle external requests and route them to the appropriate services.

4. **Implement Service Discovery:**
   - Use Consul or Eureka to enable dynamic service discovery.

5. **Set up Monitoring and Logging:**
   - Use tools like Prometheus and Grafana for monitoring.
   - Set up the ELK stack for centralized logging.

6. **Deploy the Platform:**
   - Use Docker Swarm or Kubernetes for production deployment.
   - Implement auto-scaling and load balancing.

7. **CI/CD Integration:**
- Automate testing and deployment using Jenkins or GitLab CI.
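To make step 1 concrete, a trimmed-down `docker-compose.yml` sketch is shown below; the service names, build paths, and images are placeholders, not part of the project spec:

```yaml
services:
  user-service:
    build: ./user-service        # each microservice has its own Dockerfile
    ports:
      - "8081:8080"
    depends_on:
      - users-db
  cart-service:
    build: ./cart-service
    depends_on:
      - cart-redis
  users-db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example  # use proper secrets management in production
  cart-redis:
    image: redis:7
```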
<hr />
This project offers a comprehensive approach to building a modern, scalable e-commerce platform and will give you hands-on experience with Docker, microservices, and related technologies. After completing this project, you'll have a solid understanding of how to design, develop, and deploy complex distributed systems.

@@ -0,0 +1,8 @@
As an open-source tool for configuration management, Ansible provides several benefits when added to your project:

- **Simplicity**: Easy to learn and use, with simple YAML syntax.
- **Agentless**: No need to install agents on managed nodes; it uses SSH to communicate with them.
- **Scalability**: Can manage a large number of servers simultaneously with minimal effort.
- **Integration**: Ansible integrates well with various cloud providers, CI/CD tools, and infrastructure.
- **Modularity**: [Extensive library](https://docs.ansible.com/ansible/2.9/modules/list_of_all_modules.html) of modules for different tasks.
- **Reusability**: Ansible playbooks and roles can be reused and shared across projects.
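To illustrate the YAML syntax, here is a minimal playbook sketch; the `web` host group and the choice of nginx are assumptions for the example:

```yaml
# site.yml -- install and start nginx on every host in the "web" group
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```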

6
src/data/question-groups/devops/content/auto-scaling.md
Normal file
@@ -0,0 +1,6 @@
While the specifics depend on the cloud provider you go with, the generic steps are the following:

1. **Set up an auto-scaling group**. Create what is usually known as an auto-scaling group, where you configure the minimum and maximum number of instances you can have and their types. Your scaling policies will interact with this group to automate the actions later on.
2. **Define the scaling policies**. What makes your platform want to scale? Is it traffic? Is it resource allocation? Find the right metric, and configure the policies that will trigger a scale-up or scale-down event on the auto-scaling group you already configured.
3. **Balance your load**. Now it's time to set up a load balancer to distribute the traffic among all your nodes.
4. **Monitor**. Keep constant watch over your cluster to understand whether your policies are correctly configured or need tweaking. Once you're done with the first three steps, this is where you'll spend most of your time, as the triggering conditions can change quite often.

@@ -0,0 +1,15 @@
![Blue-green deployment](https://assets.roadmap.sh/guest/blue-green-deployment-mw3u8.png)

Blue-green deployment is a release strategy that reduces downtime and the risk of production issues by running two identical production environments, referred to as "blue" and "green."

At a high level, the process works as follows:

- **Set Up Two Environments**: Prepare two identical environments: blue (the current live environment) and green (the new version's environment).
- **Deploy to Green**: Deploy the new version of the application to the green environment through your normal CI/CD pipelines.
- **Test Green**: Perform testing and validation in the green environment to ensure the new version works as expected.
- **Switch Traffic**: Once the green environment is verified, switch production traffic from blue to green. Optionally, the switch can be done gradually to keep potential problems from affecting all users at once.
- **Monitor**: Monitor the green environment to ensure it operates correctly with live traffic. Take your time, and make sure you've monitored every major event before issuing the "green light".
- **Fallback Plan**: Keep the blue environment intact as a fallback. If any issues arise in the green environment, you can quickly switch traffic back to blue. This is one of the fastest rollbacks you'll experience in deployment and release management.
- **Clean Up**: Once the green environment is stable and no issues are detected, you can update the blue environment to be the new staging area for the next deployment.
This way, you ensure minimal downtime (either for new deployments or for rollbacks) and allow for a quick rollback in case of issues with the new deployment.
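With a reverse proxy in front, the traffic switch can be a one-line configuration change. A hypothetical NGINX sketch (hostnames and ports are placeholders):

```nginx
# Point the "live" upstream at the blue or green pool; cutting over is
# editing this one line and reloading the config (nginx -s reload).
upstream live {
    server blue.internal:8080;   # swap for green.internal:8080 to cut over
}

server {
    listen 80;

    location / {
        proxy_pass http://live;
    }
}
```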

@@ -0,0 +1,3 @@
A build pipeline is an automated process that compiles, tests, and prepares code for deployment. It typically involves multiple stages, such as source code retrieval, code compilation, running unit tests, performing static code analysis, creating build artifacts, and deploying to one of the available environments.
The build pipeline effectively removes humans from the deployment process as much as possible, clearly reducing the chance of human error. This, in turn, ensures consistency and reliability in software builds and speeds up the development and deployment process.
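These stages map naturally onto a CI workflow definition. A hedged GitHub Actions sketch for a Node.js project (the npm scripts are assumptions about the repository):

```yaml
name: build
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # source code retrieval
      - run: npm ci                        # dependency installation
      - run: npm run build                 # code compilation
      - run: npm test                      # unit tests
      - uses: actions/upload-artifact@v4   # build artifact for later deployment
        with:
          name: dist
          path: dist/
```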

@@ -0,0 +1,5 @@
![Canary release](https://assets.roadmap.sh/guest/canary-release-9o6xz.png)

A canary release is a common, well-known deployment strategy. When a new version of an application is ready, instead of deploying it and making it available to everyone at once, you gradually roll it out to a small subset of users or servers before releasing it to the entire production environment.
This way, you can test the new version in a real-world environment with minimal risk. If the canary release performs well and no issues are detected, the deployment is gradually expanded to a larger audience until it eventually reaches 100% of the users. If, on the other hand, problems are found, the release can be quickly rolled back with minimal impact.
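The gradual rollout is often driven by deterministic bucketing, so a given user consistently sees the same version. A small Python sketch of that idea (function names are illustrative):

```python
import zlib

def in_canary(user_id: str, canary_percent: int) -> bool:
    """Deterministically bucket a user: the same id always lands in the same bucket."""
    bucket = zlib.crc32(user_id.encode()) % 100
    return bucket < canary_percent
```

Expanding the rollout is then just raising `canary_percent` from, say, 5 to 25 to 100; rolling back is setting it to 0.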

32
src/data/question-groups/devops/content/cicd-setup.md
Normal file
@@ -0,0 +1,32 @@
Setting up a CI/CD pipeline from scratch involves several steps. Assuming you've already set up your project on a version control system and everyone on your team has proper access to it, the following steps will help:

1. **Set up Continuous Integration (CI)**:
   - Select a continuous integration tool (there are many, like Jenkins, GitLab CI, or CircleCI; pick one).
   - Connect the CI tool to your version control system.
   - Write a build script that defines the build process, including steps like code checkout, dependency installation, compiling the code, and running tests.
   - Set up automated testing to run on every code commit or pull request.

2. **Artifact Storage**:
   - Decide where to store build artifacts (it could be Docker Hub, AWS S3, or anywhere you can then reference from the CD pipeline).
   - Configure the pipeline to package and upload artifacts to the storage after a successful build.

3. **Set up Continuous Deployment (CD)**:
   - Choose a CD tool or extend your CI tool (same deal as before: there are many options, pick one).
   - Define deployment scripts that specify how to deploy your application to different environments (e.g., development, staging, production).
   - Configure the CD tool to trigger deployments after successful builds and tests.
   - Set up environment-specific configurations and secrets management.
   - Remember that this system should be able to pull the artifacts from the continuous integration pipeline, so set up that access as well.

4. **Infrastructure Setup**:
   - Provision infrastructure using IaC tools (e.g., Terraform, CloudFormation).
   - Ensure environments are consistent and reproducible, so that creating new ones or destroying and recreating existing ones is as easy as executing a command, without any human intervention.

5. **Set up your monitoring and logging solutions**:
   - Implement monitoring and logging for your applications and infrastructure (e.g., Prometheus, Grafana, ELK stack).
   - Remember to configure alerts for critical issues; otherwise, you're missing a key aspect of monitoring (reacting to problems).

6. **Security and Compliance**:
   - By now, it's a good idea to think about integrating security scanning tools into your pipeline (e.g., Snyk, OWASP Dependency-Check).
   - Ensure compliance with relevant standards and practices depending on your specific project's needs.

Additionally, as a good practice, you might also want to document the CI/CD process, pipeline configuration, and deployment steps. This helps train new team members on using and maintaining the pipelines you just created.

@@ -0,0 +1,7 @@
As usual, there are many options when it comes to monitoring and logging solutions, even in the Kubernetes space. One useful option is a Prometheus and Grafana combo, where you gather the monitoring data with the first and plot the results however you want with the second.

You could also set up an EFK-based (Elasticsearch, Fluentd, and Kibana) or ELK-based (Elasticsearch, Logstash, and Kibana) logging solution to gather and analyze logs.

Finally, when it comes to alerting based on your monitoring data, you could use something like [Alertmanager](https://github.com/prometheus/alertmanager), which integrates directly with Prometheus, to get notified of any issues in your infrastructure.

There are other options out there as well, such as New Relic or Datadog. In the end, it's all about your specific needs and the context around them.

28
src/data/question-groups/devops/content/common-iac-tools.md
Normal file
@@ -0,0 +1,28 @@
![IaC tools](https://assets.roadmap.sh/guest/iac-tools-8kjuo.png)

As usual, there are several options out there, some specialized in different aspects of IaC.

**Configuration management tools**

If you're in search of effective configuration management tools to streamline and automate your IT infrastructure, you might consider exploring the following popular options:

- Ansible
- Chef
- Puppet

Configuration management tools are designed to help DevOps engineers manage and maintain consistent configurations across multiple servers and environments. These tools automate the process of configuring, deploying, and managing systems, ensuring that your infrastructure remains reliable, scalable, and compliant with your organization's standards.

**Provisioning and orchestration tools**

If, on the other hand, you're looking for tools to handle provisioning and orchestration of your infrastructure, you might want to explore the following popular options:

- Terraform
- CloudFormation (AWS)
- Pulumi

Provisioning and orchestration tools are essential for automating the process of setting up and managing your infrastructure resources. These tools allow you to define your infrastructure as code, making it easier to deploy, manage, and scale resources across cloud environments.

Finally, if you're looking for multi-purpose tools, you can try something like:

- Ansible (can also be used for provisioning)
- Pulumi (supports both IaC and configuration management)
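To give a feel for the declarative style these provisioning tools share, here is a tiny Terraform sketch (the AMI id and instance type are placeholder values):

```hcl
# main.tf -- declare the desired state; `terraform apply` converges to it
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"  # placeholder AMI id
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```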

@@ -0,0 +1,8 @@
Containers help to add consistency in several ways; here are some examples:

- **Isolation**: Containers encapsulate all the dependencies, libraries, and configurations needed to run an application, isolating it from the host system and other containers. This ensures that the application runs the same way regardless of where the container is deployed.
- **Portability**: Containers can run in any environment that supports the container runtime. This means that the same container image can be used on a developer's local machine, a testing environment, or a production server without any modification.
- **Consistency**: By using the same container image across different environments, you eliminate inconsistencies arising from differences in configuration, dependencies, and runtime environments. This ensures that if the application works in one environment, it will work in all others.
- **Version Control**: Container images can be versioned and stored in registries (e.g., Docker Hub, AWS ECR). This allows teams to track and roll back to specific versions of an application if there are problems.
- **Reproducibility**: Containers make it easier to reproduce the exact environment required for the application. This is especially useful for debugging issues that occur in production but not in development, as developers can recreate the production environment locally.
- **Automation**: Containers facilitate the use of automated build and deployment pipelines. Automated processes can consistently create, test, and deploy container images.

@@ -0,0 +1,6 @@
A container is a runtime instance of a container image (a lightweight, executable package that includes everything needed to run your code). It is the execution environment that runs the application or service defined by the container image.

When a container is started, it becomes an isolated process on the host machine with its own filesystem, network interfaces, and other resources. Containers share the host operating system's kernel, making them more efficient and faster to start than virtual machines.

A virtual machine (VM), on the other hand, is an emulation of a physical computer. Each VM runs a full operating system and has virtualized hardware, which makes VMs more resource-intensive and slower to start than containers.

@@ -0,0 +1,7 @@
As a DevOps engineer, the concept of continuous monitoring should be ingrained in your brain as a must-perform activity.

Continuous monitoring is the practice of constantly overseeing and analyzing an IT system's performance, security, and compliance in real time.

It involves collecting and assessing data from various parts of the infrastructure to detect issues, security threats, and performance bottlenecks as soon as they occur.

The goal is to ensure the system's health, security, and compliance, enabling quick responses to potential problems and maintaining the overall stability and reliability of the environment. Tools like Prometheus, Grafana, Nagios, and Splunk are commonly used for continuous monitoring.

11
src/data/question-groups/devops/content/data-migration.md
Normal file
@@ -0,0 +1,11 @@
Handling data migrations in a continuous deployment pipeline is not a trivial task. It requires careful planning to ensure that the application remains functional and data integrity is maintained throughout the process. Here's an approach:

1. **Backward Compatibility**: Ensure that any database schema changes are backward compatible. This means the old application version should still work with the new schema. For example, if you're adding a new column, ensure the application can handle cases where this column might initially be null.
2. **Migration Scripts**: Write database migration scripts that are idempotent (meaning they can be run multiple times without causing issues) and can be safely executed during the deployment process. Use a tool like Flyway or Liquibase to manage these migrations.
3. **Separate Deployment Phases**:
   - **Phase 1 - Schema Migration**: Deploy the database migration scripts first, adding new columns, tables, or indexes without removing or altering existing structures that the current application relies on.
   - **Phase 2 - Application Deployment**: Deploy the application code that utilizes the new schema. This ensures that the application is ready to work with the updated database structure.
   - **Phase 3 - Cleanup (Optional)**: After verifying that the new application version is stable, you can deploy a cleanup script to remove or alter deprecated columns, tables, or other schema elements. While optional, this step is advised, as it helps keep technical debt from building up for future developers to deal with.
4. **Feature Flags**: Use feature flags to roll out new features that depend on the data migration. This allows you to deploy the new application code without immediately activating the new features, providing an additional safety net.
That said, an important, non-technical step that should also be taken into consideration is the coordination with stakeholders, particularly if the migration is complex or requires downtime. Clear communication ensures that everyone is aware of the risks and the planned steps.
|
||||
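The phased, backward-compatible approach can be sketched as a pair of Flyway-style migration scripts (table and column names here are hypothetical):

```sql
-- V2__add_email_column.sql (Phase 1: additive, backward compatible)
-- The old application version keeps working: it simply ignores the new
-- nullable column, so both app versions can run against this schema.
ALTER TABLE users ADD COLUMN email VARCHAR(255) NULL;

-- V3__drop_legacy_column.sql (Phase 3: cleanup, deployed only after the
-- new application version has been verified as stable)
ALTER TABLE users DROP COLUMN legacy_contact;
```

Because each script is additive or deferred, either app version stays functional at every point in the rollout.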
11 src/data/question-groups/devops/content/devsecops.md (new file)
@@ -0,0 +1,11 @@

To implement security in a DevOps pipeline (DevSecOps), you should integrate security practices throughout the development and deployment process. This is not just about securing the app once it’s in production; it’s about securing the entire app-creation process.

That includes:

1. **Shift Left Security**: Incorporate security early in the development process by integrating security checks in the CI/CD pipeline. This means performing static code analysis, dependency scanning, and secret detection during the build phase.
2. **Automated Testing**: Implement automated security tests, such as vulnerability scans and dynamic application security testing (DAST), to identify potential security issues before they reach production.
3. **Continuous Monitoring**: Monitor the pipeline and the deployed applications for security incidents using tools like Prometheus, Grafana, and specialized security monitoring tools.
4. **Infrastructure as Code - Security**: Ensure that infrastructure configurations defined in code are secure by scanning IaC templates (like Terraform) for misconfigurations and vulnerabilities (like hardcoded passwords).
5. **Access Control**: Implement strict access controls, using something like role-based access control (RBAC) or attribute-based access control (ABAC), and enforce the principle of least privilege across the pipeline.
6. **Compliance Checks**: Identify the compliance and regulatory requirements of your industry and integrate checks to ensure the pipeline adheres to industry standards.
7. **Incident Response**: Define a clear incident response plan and integrate security alerts into the pipeline to quickly address potential security breaches.
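As an illustration of shifting security left, a CI configuration could add a dedicated security stage before anything gets deployed. This is a GitLab-CI-style sketch; the stage layout, job names, and scanner scripts are all hypothetical placeholders for whatever tools your team uses:

```yaml
# .gitlab-ci.yml sketch - stage names and scripts are hypothetical
stages: [build, security, deploy]

static-analysis:                   # shift-left: fails the pipeline early
  stage: security
  script:
    - ./scripts/run-sast.sh        # static code analysis
    - ./scripts/scan-deps.sh       # dependency / CVE scanning
    - ./scripts/detect-secrets.sh  # hardcoded-secret detection

dast:
  stage: security
  script:
    - ./scripts/run-dast.sh https://staging.example.com  # dynamic testing against staging
```

Because the `security` stage sits between `build` and `deploy`, a failed scan blocks the deployment automatically.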
10 src/data/question-groups/devops/content/docker-compose.md (new file)
@@ -0,0 +1,10 @@

Docker Compose is a tool designed to simplify the definition and management of multi-container Docker applications. It allows you to define, configure, and run multiple containers as a single service using a single YAML file.

In a multi-container application, Compose plays the following key roles:

1. **Service Definition**: With Compose, you can specify multiple services inside a single file, and define how each service should be built, the networks they should connect to, and the volumes they should use (if any).
2. **Orchestration**: It manages the startup, shutdown, and scaling of services, ensuring that containers are launched in the correct order based on the defined dependencies.
3. **Environment Management**: Docker Compose simplifies environment configuration because it lets you set environment variables, networking configurations, and volume mounts in the docker-compose.yml file.
4. **Simplified Commands**: All of the above can be done with a very simple set of commands you can run directly from the terminal (e.g., `docker-compose up` or `docker-compose down`).

In the end, Docker Compose simplifies the development, testing, and deployment of multi-container applications by giving you, as a user, a friendly and powerful interface.
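The roles above come together in a single file. This is a minimal sketch of a hypothetical two-service application (service names, image tags, and credentials are placeholders):

```yaml
# docker-compose.yml - hypothetical two-service application
services:
  web:
    build: ./web              # build the image from ./web/Dockerfile
    ports:
      - "8080:80"             # host:container port mapping
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db                    # start the database before the web service
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # persist data across restarts

volumes:
  db-data:
```

With this file in place, a single `docker-compose up` starts both containers on a shared network, and `docker-compose down` tears everything back down.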
@@ -0,0 +1,5 @@

Continuous Integration (CI) involves automatically building and testing code changes as they are committed to version control systems (usually Git). This helps catch issues early and improves code quality.

On the other hand, Continuous Deployment (CD) goes a step further by automatically deploying every change that passes the CI process, ensuring that software updates are delivered to users quickly and efficiently without manual intervention.

Combined, they add a great deal of stability and agility to the development lifecycle.
@@ -0,0 +1,9 @@

Each DevOps team should define this list within the context of their own project; however, a good rule of thumb is to consider the following metrics:

1. **Build Success Rate**: The percentage of successful builds versus failed builds. A low success rate indicates issues in code quality or pipeline configuration.
2. **Build Time**: The time it takes to complete a build. Monitoring build time helps identify bottlenecks and optimize the pipeline for faster feedback.
3. **Deployment Frequency**: How often deployments occur. Frequent deployments indicate a smooth pipeline, while long gaps may signal issues with your CI/CD or with the actual dev workflow.
4. **Lead Time for Changes**: The time from code commit to production deployment. Shorter lead times are preferable, indicating an efficient pipeline.
5. **Mean Time to Recovery (MTTR)**: The average time it takes to recover from a failure. A lower MTTR indicates a resilient pipeline that can quickly address and fix issues.
6. **Test Coverage and Success Rate**: The percentage of code covered by automated tests and the success rate of those tests. High coverage and success rates are good indicators of better quality and reliability.
7. **Change Failure Rate**: The percentage of deployments that result in failures. A lower change failure rate indicates a stable and reliable deployment process.
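A couple of these metrics are simple ratios you can compute from pipeline records. The sketch below assumes a hypothetical record format (dicts with `status` and `caused_incident` fields); real CI tools expose this data through their own APIs:

```python
def build_success_rate(builds):
    """Percentage of successful builds (hypothetical record format)."""
    if not builds:
        return 0.0
    passed = sum(1 for b in builds if b["status"] == "success")
    return 100.0 * passed / len(builds)

def change_failure_rate(deployments):
    """Percentage of deployments that caused an incident in production."""
    if not deployments:
        return 0.0
    failed = sum(1 for d in deployments if d["caused_incident"])
    return 100.0 * failed / len(deployments)

builds = [{"status": "success"}, {"status": "success"},
          {"status": "failed"}, {"status": "success"}]
deployments = [{"caused_incident": False}, {"caused_incident": True},
               {"caused_incident": False}, {"caused_incident": False}]

print(build_success_rate(builds))        # 75.0
print(change_failure_rate(deployments))  # 25.0
```

Tracking these numbers over time matters more than any single snapshot: a sudden drop in success rate or spike in failure rate is the signal to investigate.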
15 src/data/question-groups/devops/content/high-availability.md (new file)
@@ -0,0 +1,15 @@

Having high availability in your system means that the cluster will always be accessible, even if one or more servers are down.

Disaster recovery, on the other hand, means having the ability to continue providing service even in the face of a regional network outage (when entire regions are rendered unreachable).

To ensure high availability and disaster recovery in a cloud environment, you can follow these strategies if they apply to your particular context:

- **Multi-Region Deployment**: If available, deploy your application across multiple geographic regions to ensure that if one region fails, others can take over, minimizing downtime.
- **Redundancy**: Keep redundant resources, such as multiple instances, databases, and storage systems, across different availability zones within a region to avoid single points of failure.
- **Auto-Scaling**: Implement auto-scaling to automatically adjust resource capacity in response to demand, ensuring the application remains available even under high load.
- **Monitoring and Alerts**: Implement continuous monitoring and set up alerts to detect and respond to potential issues before they lead to downtime. Use tools like CloudWatch, Azure Monitor, or Google Cloud Monitoring.
- **Failover Mechanisms**: Make sure to set up automated failover mechanisms to switch to backup systems or regions seamlessly in case of a failure in the primary systems.

Whatever strategy (or combination of strategies) you decide to go with, always develop and regularly test a disaster recovery plan that outlines steps for restoring services and data in the event of a major failure.

This plan should include defined RTO (Recovery Time Objective) and RPO (Recovery Point Objective) targets. Being prepared for worst-case scenarios is the only way, as these types of problems tend to cause chaos in small and big companies alike.
9 src/data/question-groups/devops/content/iac-concept.md (new file)
@@ -0,0 +1,9 @@

![IaC with Terraform](https://assets.roadmap.sh/guest/infrastructure-as-code-with-terraform-l8zks.png)

IaC (Infrastructure as Code) is all about managing infrastructure through code, instead of using more conventional manual configuration methods. Specifically in the context of Terraform, here is how you’d want to approach IaC:

- **Configuration Files**: Define your infrastructure using HCL or JSON files.
- **Execution Plan**: Generate a plan showing the changes needed to reach the desired state.
- **Resource Provisioning**: Terraform then applies the plan to provision and configure the desired resources.
- **State Management**: Terraform tracks the current state of your infrastructure with a state file.
- **Version Control**: Finally, store the configuration files in a version control system to easily version them and share them with other team members.
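The workflow above starts from a configuration file like this minimal sketch (the AMI ID, instance size, and tags are hypothetical placeholders):

```hcl
# main.tf - minimal sketch; AMI ID and names are hypothetical
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"   # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

Running `terraform plan` against this file shows the execution plan, and `terraform apply` provisions the instance and records it in the state file.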
@@ -0,0 +1,6 @@

Logging for a distributed system is definitely not a trivial problem to solve. While the actual implementation might change based on your particular tech stack, the main aspects to consider are:

- Keep the structure of all logs consistent throughout your platform. This ensures that whenever you explore them in search of details, you’ll be able to move quickly from one log to the next without having to change anything.
- Centralize them somewhere. It can be an ELK stack, Splunk, or any of the many solutions available out there. Just make sure you centralize all your logs so that you can easily interact with all of them when required.
- Add unique IDs to each request that gets logged; that way, you can trace the flow of data from service to service. Otherwise, debugging problems becomes a real issue.
- Add a tool that helps you search, query, and visualize the logs. After all, that’s why you want to keep track of that information: to use it somehow. Find yourself a UI that works for you and use it to explore your logs.
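The first and third points can be sketched in a few lines: emit every log line as JSON with a fixed set of fields, and attach one correlation ID per request. This is a minimal Python illustration; the field names and service names are hypothetical:

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Emit every record as a single JSON line with a consistent structure."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": getattr(record, "service", "unknown"),
            "request_id": getattr(record, "request_id", None),
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# One correlation ID per incoming request, attached to every log line, so the
# request can be traced across services once the logs are centralized.
request_id = str(uuid.uuid4())
logger.info("order received", extra={"service": "orders", "request_id": request_id})
logger.info("payment charged", extra={"service": "billing", "request_id": request_id})
```

Because every service emits the same JSON shape, the centralized store (ELK, Splunk, etc.) can filter on `request_id` and reconstruct the full path of a single request.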
@@ -0,0 +1,19 @@

There are many components involved; some of them are part of the master node, and others belong to the worker nodes.

Here’s a quick summary:

1. **Master Node Components**:
   - **API Server**: The front-end for the Kubernetes control plane, handling all RESTful requests for the cluster.
   - **etcd**: A distributed key-value store that holds the cluster's configuration and state.
   - **Controller Manager**: Manages various controllers that regulate the state of the cluster.
   - **Scheduler**: Assigns workloads to different nodes based on resource availability and other constraints.
2. **Worker Node Components**:
   - **Kubelet**: An agent that runs on each node and ensures that each container is running in a Pod.
   - **Kube-proxy**: A network proxy that maintains network rules and handles routing for services.
   - **Container Runtime**: The software that runs containers, such as Docker, containerd, or CRI-O.
3. **Additional Components**:
   - **Pods**: The smallest deployable units in Kubernetes; they consist of one or more containers.
   - **Services**: Services define a logical set of Pods and a policy for accessing them; they’re often used for load balancing.
   - **ConfigMaps and Secrets**: They manage configuration data and sensitive information, respectively.
   - **Ingress**: Manages external access to services, typically through HTTP/HTTPS.
   - **Namespaces**: They provide a mechanism for isolating groups of resources within a single cluster.
@@ -0,0 +1,15 @@

As with any software solution, there are no absolutes. In the case of Kubernetes Operators, while they do offer significant benefits for automating and managing complex applications, they also introduce additional complexity and resource requirements.

**Advantages of Kubernetes Operators**:

1. **Automation of Complex Tasks**: Operators automate the management of complex stateful applications, such as databases, reducing the need for manual intervention.
2. **Consistency**: They help reduce human error and increase reliability by ensuring consistent deployments, scaling, and management of applications across environments.
3. **Custom Resource Management**: Operators allow you to manage custom resources in Kubernetes, extending its capabilities to support more complex applications and services.
4. **Simplified Day-2 Operations**: Operators streamline tasks like backups, upgrades, and failure recovery, making it easier to manage applications over time.

**Disadvantages of Kubernetes Operators**:

1. **Complexity**: Developing and maintaining Operators can be complex and requires in-depth knowledge of both Kubernetes and the specific application being managed.
2. **Overhead**: Running Operators adds additional components to your Kubernetes cluster, which can increase resource consumption and operational overhead.
3. **Limited Use Cases**: Not all applications benefit from the complexity of an Operator; for simple stateless applications, Operators might be overkill.
4. **Maintenance**: Operators need to be regularly maintained and updated, especially as Kubernetes itself keeps evolving, which can add to the maintenance burden.
7 src/data/question-groups/devops/content/load-balancer.md (new file)
@@ -0,0 +1,7 @@

![Load balancer](https://assets.roadmap.sh/guest/load-balancer-diagram-fffol.png)

A load balancer is a device or software that distributes incoming network traffic across multiple servers to ensure no single server becomes overwhelmed.

It is important because it improves the availability, reliability, and performance of applications by evenly distributing the load, preventing server overload, and providing failover capabilities in case of server failures.

Load balancers are usually used when scaling out RESTful microservices because, given their stateless nature, you can set up multiple copies of the same service behind a load balancer and let it distribute the load evenly among all copies.
@@ -0,0 +1,10 @@

While in theory microservices can solve all platform problems, in practice there are several challenges that you might encounter along the way.

Some examples are:

1. **Complexity**: Managing multiple services increases the overall system complexity, making development, deployment, and monitoring more challenging (as there are more “moving parts”).
2. **Service Communication**: Ensuring reliable communication between services, handling network latency, and dealing with issues like service discovery and API versioning can be difficult. There are, of course, ways to deal with all of these issues, but they’re not obvious right off the bat, nor the same for everyone.
3. **Data Management**: It’s all about trade-offs in the world of distributed computing. Managing data consistency and transactions across distributed services is complex, often requiring techniques like eventual consistency and distributed databases.
4. **Deployment Overhead**: Coordinating the deployment of multiple services, especially when they have interdependencies, can lead to more complex CI/CD pipelines.
5. **Monitoring and Debugging**: Troubleshooting issues is harder in a microservices architecture due to the distributed nature of the system. Trying to figure out where the information goes and which services are involved in a single request can be quite a challenge for large platforms. This makes debugging a microservices architecture a real headache.
6. **Security**: Securing microservices involves managing authentication, authorization, and data protection across multiple services, often with varying security requirements.
@@ -0,0 +1,11 @@

![Microservice vs monolith](https://assets.roadmap.sh/guest/microservices-vs-monolith-ameu8.png)

A microservice is an architectural style that structures an application as a collection of small, loosely coupled, and independently deployable services (hence the term “micro”).

Each service focuses on a specific business domain and can communicate with others through well-defined APIs.

In the end, your application is not (usually) composed of a single microservice (that would make it a monolith); instead, its architecture consists of multiple microservices working together to serve the incoming requests.

On the other hand, a monolithic application is a single (often massive) unit where all functions and services are interconnected and run as a single process.

The biggest difference between monoliths and microservices is that changes to a monolithic application require the entire system to be rebuilt and redeployed, while microservices can be developed, deployed, and scaled independently, allowing for greater flexibility and resilience.
@@ -0,0 +1,8 @@

To migrate an existing application into a containerized environment, you’ll need to adapt the following steps to your particular context:

1. Figure out which parts of the application need to be containerized together.
2. Create your Dockerfiles and define the entire architecture in that configuration, including any inter-service dependencies.
3. Figure out if you also need to containerize any external dependency, such as a database. If you do, define it as part of your container setup (for example, as a separate service alongside the app).
4. Build the actual Docker image.
5. Once you make sure it runs locally, configure the orchestration tool you use to manage the containers.
6. You’re now ready to deploy to production; however, make sure you keep monitoring and alerting on any problems shortly after the deployment in case you need to roll back.
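Step 2 usually starts with a Dockerfile per service. This is a minimal sketch for a hypothetical Node.js service; the base image, file names, and port are assumptions to adapt to your stack:

```dockerfile
# Dockerfile - hypothetical Node.js service; adapt base image and commands
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev          # install only production dependencies
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Building with `docker build -t my-app .` and running the image locally (step 5's "make sure it runs locally") is the checkpoint before handing it to the orchestrator.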
@@ -0,0 +1,5 @@

The process is pretty much the same as it was described above, with an added step to set up the actual Kubernetes cluster:

Use Terraform to define and provision Kubernetes clusters in each cloud. For instance, create an EKS cluster on AWS, an AKS cluster on Azure, and a GKE cluster on Google Cloud, specifying configurations such as node types, sizes, and networking.

Once you’re ready, make sure to set up the Kubernetes auto-scaler on each of the cloud providers to manage resources and scale based on the load they receive.
9 src/data/question-groups/devops/content/multi-cloud.md (new file)
@@ -0,0 +1,9 @@

Setting up a multi-cloud infrastructure using Terraform involves the following steps:

1. **Define Providers**: In your Terraform configuration files, define the providers for each cloud service you intend to use (e.g., AWS, Azure, Google Cloud). Each provider block will configure how Terraform interacts with that specific cloud.
2. **Create Resource Definitions**: In the same or separate Terraform files, define the resources you want to provision in each cloud. For example, you might define AWS EC2 instances, Azure Virtual Machines, and Google Cloud Storage buckets within the same project.
3. **Set Up State Management**: Use a remote backend to manage Terraform state files centrally and securely. This is crucial for multi-cloud setups to ensure consistency and to allow collaboration among team members.
4. **Configure Networking**: Design and configure networking across clouds, including VPCs, subnets, VPNs, or peering connections, to enable communication between resources in different clouds.
5. **Provision Resources**: Run `terraform init` to initialize the configuration, then `terraform plan` to preview the changes, and finally `terraform apply` to provision the infrastructure across the multiple cloud environments.
6. **Handle Authentication**: Ensure that each cloud provider's authentication (e.g., access keys, service principals) is securely handled, possibly using environment variables or a secret management tool. Do not hardcode sensitive information in your code, ever.
7. **Monitor and Manage**: As always, after deploying, use Terraform's state files and output to monitor the infrastructure.
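Steps 1 and 3 can be sketched in a single file. The bucket name and project ID below are hypothetical placeholders, and credentials are assumed to come from the environment rather than the code:

```hcl
# providers.tf - one provider block per cloud; credentials come from the
# environment, never from hardcoded values
provider "aws" {
  region = "us-east-1"
}

provider "azurerm" {
  features {}
}

provider "google" {
  project = "my-project-id"    # placeholder project ID
  region  = "us-central1"
}

terraform {
  backend "s3" {               # remote state, shared by the whole team
    bucket = "tf-state-bucket" # placeholder bucket name
    key    = "multi-cloud/terraform.tfstate"
    region = "us-east-1"
  }
}
```

With the providers declared, resources for each cloud can live side by side in the same configuration, all tracked in the one remote state file.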
@@ -0,0 +1,7 @@

Managing the network configuration is not a trivial task, especially when the architecture is big and complex. Specifically, in a cloud environment, it involves several steps:

- Create and isolate resources within Virtual Private Clouds (VPCs), organize them into subnets, and control traffic using security groups and network ACLs.
- Set up load balancers to distribute traffic for better performance, along with DNS services to manage domain routing.
- Use VPNs and VPC peering to connect cloud resources securely with other networks.
- Finally, use automation tools like Terraform to handle network setups consistently, and monitoring tools to ensure everything runs smoothly.
9 src/data/question-groups/devops/content/optimize-cicd.md (new file)
@@ -0,0 +1,9 @@

There are many ways in which you can optimize a CI/CD pipeline for performance and reliability; it all depends highly on the tech stack and your specific context (your app, your CI/CD setup, etc.). However, the following are some potential solutions to this problem:

1. **Parallelize Jobs**: Wherever you can, run independent jobs in parallel to reduce overall build and test times. This ensures faster feedback and speeds up the entire pipeline.
2. **Optimize Build Caching**: Use caching mechanisms to avoid redundant work, such as re-downloading dependencies or rebuilding unchanged components. This can significantly reduce build times.
3. **Incremental Builds**: Implement incremental builds that only rebuild parts of the codebase that have changed, rather than the entire project. This is especially useful for large projects with big codebases.
4. **Efficient Testing**: Prioritize and parallelize tests, running faster unit tests early and reserving more intensive integration or end-to-end tests for later stages. Be smart about it and use test impact analysis to only run tests affected by recent code changes.
5. **Monitor Pipeline Health**: Continuously monitor the pipeline for bottlenecks, failures, and performance issues. Use metrics and logs to identify and address inefficiencies.
6. **Environment Consistency**: Ensure that build, test, and production environments are consistent to avoid "it works on my machine" issues. Use containerization or Infrastructure as Code (IaC) to maintain environment parity. Your code should work in all environments, and if it doesn’t, it should not be the fault of the environment.
7. **Pipeline Stages**: Use pipeline stages wisely to catch issues early. For example, fail fast on linting or static code analysis before moving on to more resource-intensive stages.
7 src/data/question-groups/devops/content/orchestration.md (new file)
@@ -0,0 +1,7 @@

Orchestration in DevOps refers to the automated coordination and management of complex IT systems. It involves combining multiple automated tasks and processes into a single workflow to achieve a specific goal.

Nowadays, automation (or orchestration) is one of the key components of any software development process, and it should be preferred over manual configuration whenever possible.

As an automation practice, orchestration helps to remove the chance of human error from the different steps of the software development lifecycle. This is all to ensure efficient resource utilization and consistency.

Some examples of orchestration include orchestrating container deployments with Kubernetes and automating infrastructure provisioning with tools like Terraform.
@@ -0,0 +1,21 @@

There are too many out there to name them all, but we can group them into two main categories: on-prem and cloud-based.

**On-prem CI/CD tools**

These tools can be installed on your own infrastructure and don’t require any external internet access. Some examples are:

- Jenkins
- GitLab CI/CD (can be self-hosted)
- Bamboo
- TeamCity

**Cloud-based CI/CD tools**

On the other hand, these tools either require you to use them from the cloud or are only accessible in SaaS format, meaning they provide the infrastructure and you just use their services.

Some examples of these tools are:

- CircleCI
- Travis CI
- GitLab CI/CD (cloud version)
- Azure DevOps
- Bitbucket Pipelines
8 src/data/question-groups/devops/content/purpose-of-cm.md (new file)
@@ -0,0 +1,8 @@

When organizations and platforms grow large enough, keeping track of how different areas of the IT ecosystem (infrastructure, deployment pipelines, hardware, etc.) are meant to be configured becomes a problem, and finding a way to manage that chaos suddenly becomes a necessity.

That is where configuration management comes into play.

The purpose of a configuration management tool is to automate the process of managing and maintaining the consistency of software and hardware configurations across an organization's infrastructure.

It makes sure that systems are configured correctly, updates are applied uniformly, and configurations are maintained according to predefined standards.

This helps reduce configuration errors, increase efficiency, and ensure that environments are consistent and compliant.
5 src/data/question-groups/devops/content/reverse-proxy.md (new file)
@@ -0,0 +1,5 @@

![Reverse proxy](https://assets.roadmap.sh/guest/reverse-proxy-examples-aauo4.png)

A reverse proxy is a piece of software that sits between clients and backend servers, forwarding client requests to the appropriate server and returning the server's response to the client. It helps with load balancing, security, caching, and handling SSL termination.

A common example of a reverse proxy is **Nginx**. If you have a web application running on several backend servers, Nginx can distribute incoming HTTP requests evenly among them. This setup improves performance, enhances fault tolerance, and ensures that no single server is overwhelmed by traffic.
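The Nginx setup described above boils down to a short configuration fragment. The upstream name, server addresses, and certificate paths here are hypothetical:

```nginx
# nginx.conf fragment - upstream names and addresses are hypothetical
upstream app_servers {
    server 10.0.0.11:3000;
    server 10.0.0.12:3000;
    server 10.0.0.13:3000;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/certs/example.crt;   # SSL terminates here
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_pass http://app_servers;   # requests are spread across the upstreams
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Clients only ever see the proxy: TLS ends at Nginx, and the backends receive plain HTTP on the private network.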
15 src/data/question-groups/devops/content/role-of-devops.md (new file)
@@ -0,0 +1,15 @@

This is probably one of the most common DevOps interview questions out there because by answering it correctly, you show that you actually know what DevOps engineers (A.K.A. “you”) are supposed to work on.

That said, this is not a trivial question to answer because different companies will likely implement DevOps with their own “flavor” and in their own way.

At a high level, the role of a DevOps engineer is to bridge the gap between development and operations teams with the aim of improving the development lifecycle and reducing deployment errors.

With that said, other key responsibilities may include:

- Implementing and managing CI/CD pipelines.
- Automating infrastructure provisioning and configuration using IaC tools.
- Monitoring and maintaining system performance, security, and availability.
- Collaborating with developers to streamline code deployments and ensure smooth operations.
- Managing and optimizing cloud infrastructure.
- Ensuring system scalability and reliability.
- Troubleshooting and resolving issues across the development and production environments.
@@ -0,0 +1,17 @@

![Horizontal vs vertical scaling](https://assets.roadmap.sh/guest/horizontal-vs-vertical-scaling-2ou39.png)

Both are valid scaling techniques, but each has different limitations on the affected system.

**Horizontal Scaling**

- Involves adding more machines or instances to your infrastructure.
- Increases capacity by connecting multiple hardware or software entities so they work as a single logical unit.
- Often used in distributed systems and cloud environments.

**Vertical Scaling**

- Involves adding more resources (CPU, RAM, storage) to an existing machine.
- Increases capacity by enhancing the power of a single server or instance.
- Limited by the maximum capacity of the hardware.

In summary, horizontal scaling adds more machines to handle increased load, while vertical scaling enhances the power of existing machines.
@@ -0,0 +1,6 @@

There are many ways to handle secrets management in a DevOps pipeline; some of them involve:

- Storing secrets in environment variables managed by the CI/CD tool.
- Using secret management tools like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault to securely store and retrieve secrets.
- Using encrypted configuration files, with decryption keys stored securely somewhere else.

Whatever strategy you decide to go with, it's crucial to implement strict access controls and permissions, integrate secret management tools with CI/CD pipelines to fetch secrets securely at runtime, and above all, avoid hardcoding secrets in code repositories or configuration files.
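The first option is the simplest to sketch: the CI/CD tool injects the secret as an environment variable, and the application reads it at runtime, failing fast if it is missing. The variable name below is hypothetical:

```python
import os

def get_secret(name, default=None):
    """Read a secret injected by the CI/CD tool as an environment variable.

    Raises instead of silently using an empty value, so a misconfigured
    pipeline fails fast rather than deploying with a missing credential.
    """
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"required secret {name!r} is not set")
    return value

os.environ["DB_PASSWORD"] = "injected-by-ci"   # simulated CI injection
print(get_secret("DB_PASSWORD"))
```

The same function shape works when the value comes from a vault client instead of `os.environ`; the application code stays oblivious to where the secret is actually stored.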
@@ -0,0 +1,5 @@

Contrary to popular belief, serverless computing doesn’t mean there are no servers; there are, but you just don’t need to worry about them.

Serverless computing is a cloud computing model where the cloud provider automatically manages the infrastructure, allowing developers to focus solely on writing and deploying code. In this model, you don't have to manage servers or worry about scaling, as the cloud provider dynamically allocates resources as needed.

One of the great qualities of this model is that you pay only for the compute time your code actually uses, rather than for pre-allocated infrastructure (like you would for a normal server).
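In practice, "just writing code" usually means writing a handler function that the provider invokes per request. This is a minimal AWS-Lambda-style sketch; the event shape and response format are assumptions modeled on typical HTTP-triggered functions:

```python
import json

def handler(event, context):
    """Invoked by the provider per request; you are billed only for the
    execution time of this function, not for an always-on server."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

print(handler({"name": "devops"}, None))
```

Deployment, scaling from zero to thousands of concurrent invocations, and tearing capacity back down are all the provider's problem, not yours.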
@@ -0,0 +1,12 @@
Handling stateful applications in a Kubernetes environment requires careful management of persistent data; you need to ensure that data is retained even if Pods are rescheduled or moved.

Here's one way you can do it:

1. **Persistent Volumes (PVs) and Persistent Volume Claims (PVCs)**: Use Persistent Volumes to define storage resources in the cluster, and Persistent Volume Claims to request specific storage. This way you decouple storage from the lifecycle of Pods, ensuring that data persists independently of Pods.
2. **StatefulSets**: Deploy stateful applications using StatefulSets instead of Deployments. StatefulSets ensure that Pods have stable, unique network identities and persistent storage, which is crucial for stateful applications like databases.
3. **Storage Classes**: Use Storage Classes to define the type of storage (e.g., SSD, HDD) and the dynamic provisioning of Persistent Volumes. This allows Kubernetes to automatically provision the appropriate storage based on the application's needs.
4. **Headless Services**: Configure headless services to manage network identities for StatefulSets. This allows Pods to have consistent DNS names, which is important for maintaining stateful connections between Pods.
5. **Backup and Restore**: Implement backup and restore mechanisms to protect the persistent data. Tools like Velero can be used to back up Kubernetes resources and persistent volumes.
6. **Data Replication**: For critical applications, set up data replication across multiple zones or regions to ensure high availability and data durability.

As always, continuously monitor the performance and health of stateful applications using Kubernetes-native tools (e.g., Prometheus) and ensure that the storage solutions meet the performance requirements of the application.
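Points 1 through 4 can be sketched in a single pair of manifests (names, image, and sizes are illustrative assumptions):

```yaml
# Headless Service: gives each Pod a stable DNS name (db-0.db, db-1.db, ...).
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  selector:
    app: db
  ports:
    - port: 5432
---
# StatefulSet with a volumeClaimTemplate: one PVC per Pod, so data
# survives rescheduling; the StorageClass drives dynamic provisioning.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ['ReadWriteOnce']
        storageClassName: fast-ssd   # assumed StorageClass name
        resources:
          requests:
            storage: 10Gi
```

Each replica gets its own claim (`data-db-0`, `data-db-1`), which Kubernetes reattaches to the same Pod identity after a reschedule.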
@@ -0,0 +1,3 @@
DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). Its main goal is to shorten (and simplify) the software development lifecycle and provide continuous delivery with high software quality.

It is important because it improves collaboration between development and operations teams, which, in turn, translates into higher deployment frequency, lower failure rates for new releases, and faster recovery times.
@@ -0,0 +1,9 @@
|
||||
Docker is an open-source platform that enables developers to create, deploy, and run applications within lightweight, portable containers. These containers package an application along with all of its dependencies, libraries, and configuration files.
|
||||
|
||||
That, in turn, ensures that the application can run consistently across various computing environments.
|
||||
|
||||
Docker has become one of the most popular DevOps tools because it provides a consistent and isolated environment for development, continuous testing, and deployment. This consistency helps to eliminate the common "it works on my machine" problem by ensuring that the application behaves the same way, regardless of where it is run—whether on a developer's local machine, a testing server, or in production.
|
||||
|
||||
Additionally, Docker simplifies the management of complex applications by allowing developers to break them down into smaller, manageable microservices, each running in its own container.
|
||||
|
||||
This approach not only supports but also enhances scalability, and flexibility and it makes it easier to manage dependencies, version control, and updates.
|
||||
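The packaging idea can be sketched with a minimal Dockerfile (a Node.js service is assumed here purely for illustration):

```dockerfile
# Everything the app needs -- runtime, dependencies, code -- is declared
# here, so the resulting image behaves the same on any host with Docker.
FROM node:20-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code and declare how to run it.
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Building once (`docker build -t my-app .`) produces an image that runs identically on a laptop, a CI runner, or a production host.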
@@ -0,0 +1,9 @@
GitOps is a practice that uses Git as the single source of truth for infrastructure and application management. It takes advantage of Git repositories to store all configuration files and, through automated processes, it ensures that both infrastructure and application configuration match the described state in the repo.

The main differences between GitOps and traditional CI/CD are:

- **Source of Truth**: GitOps uses Git as the single source of truth for both infrastructure and application configurations. In traditional CI/CD, configurations may be scattered across various tools and scripts.
- **Deployment Automation**: In GitOps, changes are automatically applied by reconciling the desired state in Git with the actual state in the environment. Traditional CI/CD often involves manual steps for deployment.
- **Declarative Approach**: GitOps emphasizes a declarative approach where the desired state is defined in Git and the system automatically converges towards it. Traditional CI/CD often uses imperative scripts to define steps and procedures to get the system to the state it should be in.
- **Operational Model**: GitOps operates continuously, monitoring for changes in Git and applying them in near real-time. Traditional CI/CD typically follows a linear pipeline model with distinct build, test, and deploy stages.
- **Rollback and Recovery**: GitOps simplifies rollbacks and recovery: reverting a change in the Git repository is a native mechanism that automatically triggers the system to converge back to the previous state. Traditional CI/CD may require extra work and configuration to roll back changes.
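The source doesn't name a particular tool, but as one example, an Argo CD `Application` captures the reconciliation loop described above (repository URL and names are illustrative):

```yaml
# The desired state lives in a Git repo; the controller continuously
# compares the cluster against it and converges toward it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git   # assumed repo
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      selfHeal: true   # manual drift in the cluster is reverted to match Git
      prune: true      # resources removed from Git are removed from the cluster
```

With this in place, a rollback really is just `git revert` in the config repository.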
@@ -0,0 +1,8 @@
A Helm chart is a set of YAML templates used to configure Kubernetes resources. It simplifies the deployment and management of applications within a Kubernetes cluster by bundling all necessary components (such as deployments, services, and configurations) into a single, reusable package.

Helm charts are used in Kubernetes to:

- **Simplify Deployments**: By using Helm charts, you can deploy complex applications with a single command.
- **Version Control**: Because they are plain-text files, Helm charts support versioning, allowing you to track and roll back to previous versions of your applications easily.
- **Configuration Management**: They allow you to manage configuration values separately from the Kubernetes manifests, making it easier to update and maintain configurations.
- **Reuse and Share**: Helm charts can be reused and shared across different projects and teams, promoting best practices and consistency.
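A sketch of the moving parts of a minimal chart (all names and values are illustrative) shows how configuration is kept apart from the templated manifests:

```yaml
# Chart.yaml -- identity and version of the package
apiVersion: v2
name: my-app
version: 0.1.0

# values.yaml -- configuration, kept separate from the manifests
replicaCount: 3
image:
  repository: registry.example.com/my-app
  tag: "1.4.2"

# templates/deployment.yaml (excerpt) -- the template consumes the values:
#   spec:
#     replicas: {{ .Values.replicaCount }}
#     containers:
#       - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Deploying then collapses to a single command such as `helm install my-app ./my-app`, with overrides supplied via `--set` or an environment-specific values file.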
5
src/data/question-groups/devops/content/what-is-iac.md
Normal file
@@ -0,0 +1,5 @@
![what is IaC](image path)

IaC is the practice of managing and provisioning infrastructure through machine-readable configuration files (in other words, "code"), rather than through physical hardware configuration or interactive configuration tools.

By keeping this configuration in code form, we gain the ability to store it in version control and to automate its deployment consistently across environments, reducing the risk of human error and increasing efficiency in infrastructure management.
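A minimal Terraform sketch makes the idea concrete (the provider, region, and AMI id are illustrative placeholders): the server is described in a file that can be reviewed, versioned, and applied repeatably.

```hcl
# This file lives in Git like any other code; `terraform apply`
# converges real infrastructure toward what is written here.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI id
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

Running the same configuration in staging and production yields identical infrastructure, which is exactly the consistency manual provisioning struggles to provide.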
@@ -0,0 +1,7 @@
![what is Kubernetes](image path)

If we're talking about DevOps tools, then Kubernetes is a must-have. Specifically, Kubernetes is an open-source container orchestration platform. That means it can automate the deployment, scaling, and management of containerized applications.

It is widely used because it simplifies the complex tasks of managing containers for large-scale applications, such as ensuring high availability, load balancing, rolling updates, and self-healing.

Kubernetes helps organizations run and manage applications more efficiently and reliably in various environments, including on-premises, cloud, or hybrid setups.
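The smallest useful illustration of that automation is a Deployment (names and image are illustrative): you declare how many replicas you want, and Kubernetes keeps that many running, replacing any Pod that dies.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # Kubernetes maintains this count (self-healing)
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
```

Scaling becomes a one-line change to `replicas` (or a `kubectl scale` command), and updating the image triggers a rolling update automatically.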
@@ -0,0 +1,3 @@
As a DevOps engineer, knowing your tools is key: with so many out there, understanding which ones get the job done is important.

In this case, Prometheus is an open-source monitoring and alerting tool designed for reliability and scalability. It is widely used to monitor applications and infrastructure by collecting metrics, storing them in a time-series database, and providing powerful querying capabilities.
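A minimal `prometheus.yml` sketch shows the pull model (the job name and target address are illustrative): Prometheus scrapes each target's `/metrics` endpoint on an interval and stores the samples.

```yaml
global:
  scrape_interval: 15s      # how often targets are scraped

scrape_configs:
  - job_name: my-app
    static_configs:
      - targets: ['app.internal:8080']   # assumed host exposing /metrics
```

The collected series can then be queried with PromQL, e.g. `rate(http_requests_total[5m])` for request throughput.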
@@ -0,0 +1,5 @@
A rollback is the process of reverting a system to a previous stable state, typically after a failed or problematic deployment to production.

You would perform a rollback when a new deployment causes one or several of the following problems: application crashes, significant bugs, security vulnerabilities, or performance problems.

The goal is to restore the system to a known "good" state while minimizing downtime and the impact on users while investigating and resolving the issues with the new deployment.
@@ -0,0 +1,11 @@
![what is service mesh](image path)

A service mesh is a dedicated layer in a system's architecture for handling service-to-service communication.

This is a very common problem to solve once a microservice-based architecture grows: suddenly, understanding how to orchestrate all of those services in a way that is reliable and scalable becomes a chore.

While teams can definitely come up with their own solutions to this problem, using a ready-made solution is also a great alternative.

A service mesh manages tasks like load balancing, service discovery, encryption, authentication, authorization, and observability, without requiring changes to the application code (so it can easily be added once the problem presents itself, instead of being planned for from the start).

There are many products out there that provide this functionality, but some examples are Istio, Linkerd, and Consul.
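As one concrete example of "no application changes required", an Istio `VirtualService` can split traffic between two versions of a service purely through configuration (service and subset names are illustrative):

```yaml
# Shift 10% of traffic to v2; the application itself is unaware of this.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

The same mechanism covers retries, timeouts, and mutual TLS, all applied by the mesh's sidecar proxies rather than by the services themselves.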
@@ -0,0 +1,3 @@
The concept of 'shift left' in DevOps refers to the practice of performing tasks earlier in the software development lifecycle.

This includes integrating testing, security, and other quality checks early in the development process rather than at the end. The goal is to identify and fix issues sooner, thus reducing defects, improving quality, and speeding up software delivery times.
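In pipeline terms, shifting left means the checks run on every push rather than before release. A sketch using GitHub Actions (the `make` targets are assumed, not from the source):

```yaml
# Lint, test, and security audit run at the very start of the lifecycle,
# on every push and pull request.
name: ci
on: [push, pull_request]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint
        run: make lint
      - name: Unit tests
        run: make test
      - name: Dependency security audit
        run: make audit   # assumed target wrapping a scanner
```

Any issue these steps catch is fixed within minutes of being introduced, instead of surfacing during a release candidate.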
@@ -0,0 +1,7 @@
Version control is a system that records changes to files over time so that specific versions can be recalled later, or so that multiple developers can work on the same codebase and eventually merge their work streams together with minimum effort.

It is important in DevOps because it allows multiple team members to collaborate on code, tracks and manages changes efficiently, enables rollback to previous versions if issues arise, and supports automation in CI/CD pipelines, ensuring consistent and reliable software delivery (which is one of the key principles of DevOps).

In terms of tooling, one of the best and most popular version control systems is Git. It is a distributed version control system, giving every team member a full copy of the repository so they can branch it, work on it however they like, and push it back to the rest of the team once they're done.

That said, some legacy teams still use alternatives like CVS or SVN.
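The branch-and-merge flow described above can be sketched in a throwaway repository (all names are illustrative):

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"

# A fresh repository with one commit on main.
git init -q -b main
git config user.email you@example.com
git config user.name "You"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial version"

# Branch off, work independently, then merge the finished work back.
git switch -qc feature/greeting
echo "hello" > greeting.txt
git add greeting.txt
git commit -qm "add greeting"

git switch -q main
git merge -q --no-edit feature/greeting
```

After the merge, `main` contains both commits, and the feature branch can be deleted; this is the everyday collaboration loop CI/CD pipelines hook into.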
6
src/data/question-groups/devops/content/zero-downtime.md
Normal file
@@ -0,0 +1,6 @@
Zero-downtime deployments are crucial to maintaining service stability for high-traffic applications. There are many strategies to achieve this, some of which we've already covered in this article:

1. **Blue-Green Deployment**: Set up two identical environments: blue (current live) and green (new version). Deploy the new version to the green environment, test it, and then switch traffic from blue to green. This ensures that users experience no downtime.
2. **Canary Releases**: Gradually route a small percentage of traffic to the new version while the rest continues to use the current version. Monitor the new version's performance, and if successful, progressively increase the traffic to the new version.
3. **Rolling Deployments**: Update a subset of instances or Pods at a time, gradually rolling out the new version across all servers or containers. This method ensures that some instances remain available to serve traffic while others are being updated.
4. **Feature Flags**: Deploy the new version with features toggled off. Gradually enable features for users without redeploying the code. This allows you to test new features in production and quickly disable them if issues arise.
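Strategy 3 maps directly onto a Kubernetes Deployment's rolling-update settings (names, image, and probe path are illustrative): capacity stays up while Pods are replaced one at a time, and traffic only reaches Pods that pass their readiness check.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during the rollout
      maxSurge: 1         # at most one extra Pod above the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:2.0.0   # assumed image
          readinessProbe:        # gate traffic on this check
            httpGet:
              path: /healthz
              port: 8080
```

If the new version's readiness probe never passes, the rollout stalls instead of taking the service down, and `kubectl rollout undo` reverts it.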
246
src/data/question-groups/devops/devops.md
Normal file
@@ -0,0 +1,246 @@
---
order: 6
briefTitle: 'DevOps'
briefDescription: 'Get ready for your DevOps interview with 50 popular questions and answers that cover tools, pipelines, and key practices.'
title: 'Top 50 Popular DevOps Interview Questions (and Answers)'
description: 'Get ready for your DevOps interview with 50 popular questions and answers that cover tools, pipelines, and key practices.'
authorId: 'fernando'
isNew: true
date: 2024-08-20
seo:
  title: 'Top 50 Popular DevOps Interview Questions (and Answers)'
  description: 'Get ready for your DevOps interview with 50 popular questions and answers that cover tools, pipelines, and key practices.'
  keywords:
    - 'devops quiz'
    - 'devops questions'
    - 'devops interview questions'
    - 'devops interview'
    - 'devops test'
sitemap:
  priority: 1
  changefreq: 'monthly'
questions:
  - question: What is DevOps, and why is it important?
    answer: what-is-devops.md
    topics:
      - 'Beginner'
  - question: Explain the difference between continuous integration and continuous deployment.
    answer: explain-ci-vs-cd.md
    topics:
      - 'Beginner'
  - question: What is a container, and how is it different from a virtual machine?
    answer: container-vs-vm.md
    topics:
      - 'Beginner'
  - question: Name some popular CI/CD tools.
    answer: popular-cicd-tools.md
    topics:
      - 'Beginner'
  - question: What is Docker, and why is it used?
    answer: what-is-docker.md
    topics:
      - 'Beginner'
  - question: Can you explain what infrastructure as code (IaC) is?
    answer: what-is-iac.md
    topics:
      - 'Beginner'
  - question: What are some common IaC tools?
    answer: common-iac-tools.md
    topics:
      - 'Beginner'
  - question: What is version control, and why is it important in DevOps?
    answer: what-is-version-control.md
    topics:
      - 'Beginner'
  - question: Explain the concept of 'shift left' in DevOps.
    answer: what-is-shift-left.md
    topics:
      - 'Beginner'
  - question: What is a microservice, and how does it differ from a monolithic application?
    answer: microservice-vs-monolithic.md
    topics:
      - 'Beginner'
  - question: What is a build pipeline?
    answer: build-pipelines.md
    topics:
      - 'Beginner'
  - question: What is the role of a DevOps engineer?
    answer: role-of-devops.md
    topics:
      - 'Beginner'
  - question: What is Kubernetes, and why is it used?
    answer: what-is-kubernetes.md
    topics:
      - 'Beginner'
  - question: Explain the concept of orchestration in DevOps.
    answer: orchestration.md
    topics:
      - 'Beginner'
  - question: What is a load balancer, and why is it important?
    answer: load-balancer.md
    topics:
      - 'Beginner'
  - question: What is the purpose of a configuration management tool?
    answer: purpose-of-cm.md
    topics:
      - 'Beginner'
  - question: What is continuous monitoring?
    answer: continuous-monitoring.md
    topics:
      - 'Beginner'
  - question: What's the difference between horizontal and vertical scaling?
    answer: scaling-differences.md
    topics:
      - 'Beginner'
  - question: What is a rollback, and when would you perform one?
    answer: what-is-rollback.md
    topics:
      - 'Beginner'
  - question: Explain what a service mesh is
    answer: what-is-service-mesh.md
    topics:
      - 'Beginner'
  - question: Describe how you would set up a CI/CD pipeline from scratch
    answer: cicd-setup.md
    topics:
      - 'Intermediate'
  - question: How do containers help with consistency in development and production environments?
    answer: container-consistency.md
    topics:
      - 'Intermediate'
  - question: Explain the concept of 'infrastructure as code' using Terraform.
    answer: iac-concept.md
    topics:
      - 'Intermediate'
  - question: What are the benefits of using Ansible for configuration management?
    answer: ansible-benefits.md
    topics:
      - 'Intermediate'
  - question: How do you handle secrets management in a DevOps pipeline?
    answer: secret-management.md
    topics:
      - 'Intermediate'
  - question: What is GitOps, and how does it differ from traditional CI/CD?
    answer: what-is-gitops.md
    topics:
      - 'Intermediate'
  - question: Describe the process of blue-green deployment.
    answer: blue-green-deployment.md
    topics:
      - 'Intermediate'
  - question: What are the main components of Kubernetes?
    answer: kubernetes-components.md
    topics:
      - 'Intermediate'
  - question: How would you monitor the health of a Kubernetes cluster?
    answer: cluster-health.md
    topics:
      - 'Intermediate'
  - question: What is a Helm chart, and how is it used in Kubernetes?
    answer: what-is-helm-chart.md
    topics:
      - 'Intermediate'
  - question: Explain the concept of a canary release
    answer: canary-release.md
    topics:
      - 'Intermediate'
  - question: What is the role of Docker Compose in a multi-container application?
    answer: docker-compose.md
    topics:
      - 'Intermediate'
  - question: How would you implement auto-scaling in a cloud environment?
    answer: auto-scaling.md
    topics:
      - 'Intermediate'
  - question: What are some common challenges with microservices architecture?
    answer: microservice-challenges.md
    topics:
      - 'Intermediate'
  - question: How do you ensure high availability and disaster recovery in a cloud environment?
    answer: high-availability.md
    topics:
      - 'Intermediate'
  - question: What is Prometheus, and how is it used in monitoring?
    answer: what-is-prometheus.md
    topics:
      - 'Intermediate'
  - question: Describe how you would implement logging for a distributed system
    answer: implement-logging.md
    topics:
      - 'Intermediate'
  - question: How do you manage network configurations in a cloud environment?
    answer: network-configuration.md
    topics:
      - 'Intermediate'
  - question: What is the purpose of a reverse proxy, and give an example of one
    answer: reverse-proxy.md
    topics:
      - 'Intermediate'
  - question: Explain the concept of serverless computing
    answer: serverless-computing.md
    topics:
      - 'Intermediate'
  - question: How would you migrate an existing application to a containerized environment?
    answer: migrate-environment.md
    topics:
      - 'Advanced'
  - question: Describe your approach to implementing security in a DevOps pipeline (DevSecOps)
    answer: devsecops.md
    topics:
      - 'Advanced'
  - question: What are the advantages and disadvantages of using Kubernetes Operators?
    answer: kubernetes-operators.md
    topics:
      - 'Advanced'
  - question: How would you optimize a CI/CD pipeline for performance and reliability?
    answer: optimize-cicd.md
    topics:
      - 'Advanced'
  - question: Explain the process of setting up a multi-cloud infrastructure using Terraform.
    answer: multi-cloud.md
    topics:
      - 'Advanced'
  - question: How would you implement a multi-cloud setup in a Kubernetes cluster?
    answer: multi-cloud-kubernetes.md
    topics:
      - 'Advanced'
  - question: How do you handle stateful applications in a Kubernetes environment?
    answer: stateful-applications.md
    topics:
      - 'Advanced'
  - question: What are the key metrics you would monitor to ensure the health of a DevOps pipeline?
    answer: health-monitor.md
    topics:
      - 'Advanced'
  - question: How would you implement zero-downtime deployments in a high-traffic application?
    answer: zero-downtime.md
    topics:
      - 'Advanced'
  - question: Describe your approach to handling data migrations in a continuous deployment pipeline.
    answer: data-migration.md
    topics:
      - 'Advanced'
---

|
||||
|
||||
The evolution of technology and practices, coupled with the increase in complexity of the systems we develop, make the role of DevOps more relevant by the day.
|
||||
|
||||
But becoming a successful DevOps is not a trivial task, especially because this role is usually the evolution of a developer looking to get more involved in other related ops areas or someone from ops who’s starting to get more directly involved in the development space.
|
||||
|
||||
Either way, DevOps engineers live between the development and operations teams, understanding enough about each area to be able to work towards improving their interactions.
|
||||
|
||||
Because of this strange situation, while detailed roadmaps (be sure to check out our DevOps roadmap!) help a lot, getting ready for a DevOps interview requires a lot of work.
|
||||
|
||||
Here are the most relevant DevOps interview questions you’ll likely get asked during a DevOps interview, plus a few more that will push your skills to the next level.
|
||||
|
||||
## Preparing for your DevOps interview
|
||||
|
||||
Before diving into your DevOps technical interview, keep these key points in mind:
|
||||
|
||||
1. **Understand the core concepts**: Familiarize yourself with the essentials of DevOps practices, including continuous integration/continuous deployment (CI/CD), infrastructure as code (IaC), the software development lifecycle, and containerization. Understand how these concepts contribute to the overall development lifecycle.
|
||||
2. **Practice hands-on skills**: There is a lot of practical knowledge involved in the DevOps practice, so make sure you try what you read about. Set up some CI/CD pipelines for your pet projects, understand containerization, and pick a tool to get started. The more you practice, the more prepared you’ll be for real-world problems.
|
||||
3. **Study software architecture**: While you may not have the responsibilities of an architect, having a solid understanding of software architecture principles can be a huge help. Being able to discuss the different components of a system with architects would make you a huge asset to any team.
|
||||
4. **Research the Company**: In general, it’s always a great idea to research the company you’re interviewing for. In this case, investigate the company’s DevOps practices, the technologies they use, and their overall approach to software development. This will help you demonstrate a genuine interest in their operations and come prepared with thoughtful questions.
|
||||
|
||||
With that out of the way, let’s move on to the specific DevOps interview questions to prepare for.
|
||||
@@ -142,6 +142,8 @@ questions:
- 'Advanced'
---

![web dev](image path)

Preparing for your front end web development interview is key to achieving a successful outcome, but understanding what kind of questions or topics are going to be asked is not easy.

So to help you get ready for your upcoming front end developer interview, here are 30 technical interview questions about web development with a focus on the front end, in other words, about JavaScript, HTML, and CSS.
@@ -0,0 +1,10 @@
# BottomSheet

`Bottom sheets` are surfaces containing supplementary content that are anchored to the bottom of the screen.

There are several attributes that can be used to adjust the behavior of both standard and modal bottom sheets. Behavior attributes can be applied to standard bottom sheets in XML, by setting them on a child View that has `app:layout_behavior` set, or programmatically.

Visit the following resources to learn more:

- [@article@Android developers: Bottom sheets](https://developer.android.com/reference/com/google/android/material/bottomsheet/BottomSheetDialog)
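A minimal layout sketch (view ids and attribute values are illustrative) shows where the behavior attributes go: on the child of a `CoordinatorLayout` that carries `app:layout_behavior`.

```xml
<androidx.coordinatorlayout.widget.CoordinatorLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- The standard bottom sheet; behavior attributes are set here. -->
    <FrameLayout
        android:id="@+id/standard_bottom_sheet"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:layout_behavior="com.google.android.material.bottomsheet.BottomSheetBehavior"
        app:behavior_peekHeight="56dp"
        app:behavior_hideable="false" />

</androidx.coordinatorlayout.widget.CoordinatorLayout>
```

The same settings can be applied programmatically via `BottomSheetBehavior.from(view)`.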
@@ -0,0 +1,7 @@
# ImageView

Displays image resources, for example Bitmap or Drawable resources. ImageView is also commonly used to apply tints to an image and handle image scaling.

Visit the following resources to learn more:

- [@article@Android developers: ImageView](https://developer.android.com/reference/android/widget/ImageView)
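A small sketch (the drawable name is an illustrative assumption) showing both the scaling and the tinting mentioned above:

```xml
<ImageView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="200dp"
    android:src="@drawable/header_photo"
    android:scaleType="centerCrop"
    app:tint="@android:color/darker_gray"
    android:contentDescription="Header photo" />
```

`scaleType` controls how the bitmap fills the view's bounds, and `app:tint` applies a color filter without modifying the source drawable.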
Some files were not shown because too many files have changed in this diff Show More