mirror of
https://github.com/kamranahmedse/developer-roadmap.git
synced 2026-03-13 10:11:55 +08:00
Compare commits
39 Commits
feat/githu
...
feat/refer
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
8a0edd4608 | ||
|
|
12a15d25df | ||
|
|
e190f5b30c | ||
|
|
f759eda4d2 | ||
|
|
35f6097a1b | ||
|
|
06c242cf32 | ||
|
|
5b09e61b86 | ||
|
|
a3fedad816 | ||
|
|
338f6c5d4a | ||
|
|
9d6d77f93e | ||
|
|
f4c717b958 | ||
|
|
65fe7aeb71 | ||
|
|
1d0e65c2c8 | ||
|
|
421133ecc2 | ||
|
|
346c630019 | ||
|
|
3b929e45d2 | ||
|
|
2bef597ced | ||
|
|
1219b9e905 | ||
|
|
87ef708da3 | ||
|
|
0643e86514 | ||
|
|
814b819195 | ||
|
|
9f2efc5872 | ||
|
|
55f0eff569 | ||
|
|
47936801fd | ||
|
|
6b118d14d3 | ||
|
|
efbd1d7f04 | ||
|
|
f036a11784 | ||
|
|
3d7bdc55bd | ||
|
|
b658591c45 | ||
|
|
52c1b20f56 | ||
|
|
e3ca03e531 | ||
|
|
2378cd4bb9 | ||
|
|
d673a06472 | ||
|
|
122bbe6b27 | ||
|
|
d2a36a9d4c | ||
|
|
04151f9693 | ||
|
|
8d8bca0c14 | ||
|
|
ddf96ff6d6 | ||
|
|
9d9d70de76 |
BIN
public/pdfs/roadmaps/ai-engineer.pdf
Normal file
BIN
public/pdfs/roadmaps/ai-engineer.pdf
Normal file
Binary file not shown.
File diff suppressed because it is too large
Load Diff
@@ -1,13 +1,30 @@
|
||||
{
|
||||
"gKTSe9yQFVbPVlLzWB0hC": {
|
||||
"title": "Search Engines",
|
||||
"description": "Search engines like Elasticsearch are specialized tools designed for fast, scalable, and flexible searching and analyzing of large volumes of data. Elasticsearch is an open-source, distributed search and analytics engine built on Apache Lucene, offering full-text search capabilities, real-time indexing, and advanced querying features. Key characteristics of search engines like Elasticsearch include:\n\n1. **Full-Text Search**: Support for complex search queries, including relevance scoring and text analysis.\n2. **Distributed Architecture**: Scalability through horizontal distribution across multiple nodes or servers.\n3. **Real-Time Indexing**: Ability to index and search data almost instantaneously.\n4. **Powerful Query DSL**: A domain-specific language for constructing and executing sophisticated queries.\n5. **Analytics**: Capabilities for aggregating and analyzing data, often used for log and event data analysis.\n\nElasticsearch is commonly used in applications requiring advanced search functionality, such as search engines, data analytics platforms, and real-time monitoring systems.",
|
||||
"links": []
|
||||
"description": "Search engines like Elasticsearch are specialized tools designed for fast, scalable, and flexible searching and analyzing of large volumes of data. Elasticsearch is an open-source, distributed search and analytics engine built on Apache Lucene, offering full-text search capabilities, real-time indexing, and advanced querying features. Key characteristics of search engines like Elasticsearch include:\n\n1. **Full-Text Search**: Support for complex search queries, including relevance scoring and text analysis.\n2. **Distributed Architecture**: Scalability through horizontal distribution across multiple nodes or servers.\n3. **Real-Time Indexing**: Ability to index and search data almost instantaneously.\n4. **Powerful Query DSL**: A domain-specific language for constructing and executing sophisticated queries.\n5. **Analytics**: Capabilities for aggregating and analyzing data, often used for log and event data analysis.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Elasticsearch",
|
||||
"url": "https://www.elastic.co/elasticsearch/",
|
||||
"type": "article"
|
||||
}
|
||||
]
|
||||
},
|
||||
"9Fpoor-Os_9lvrwu5Zjh-": {
|
||||
"title": "Design and Development Principles",
|
||||
"description": "Design and Development Principles are fundamental guidelines that inform the creation of software systems. Key principles include:\n\n1. SOLID (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, Dependency Inversion)\n2. DRY (Don't Repeat Yourself)\n3. KISS (Keep It Simple, Stupid)\n4. YAGNI (You Aren't Gonna Need It)\n5. Separation of Concerns\n6. Modularity\n7. Encapsulation\n8. Composition over Inheritance\n9. Loose Coupling and High Cohesion\n10. Principle of Least Astonishment\n\nThese principles aim to create more maintainable, scalable, and robust software. They encourage clean code, promote reusability, reduce complexity, and enhance flexibility. While not rigid rules, these principles guide developers in making design decisions that lead to better software architecture and easier long-term maintenance. Applying these principles helps in creating systems that are easier to understand, modify, and extend over time.",
|
||||
"links": []
|
||||
"description": "Design and Development Principles are fundamental guidelines that inform the creation of software systems. Key principles include:\n\n* SOLID (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, Dependency Inversion)\n* DRY (Don't Repeat Yourself)\n* KISS (Keep It Simple, Stupid)\n* YAGNI (You Aren't Gonna Need It)\n* Separation of Concerns\n* Modularity\n* Encapsulation\n* Composition over Inheritance\n* Loose Coupling and High Cohesion\n* Principle of Least Astonishment\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Design Principles - Wikipedia",
|
||||
"url": "https://en.wikipedia.org/wiki/Design_principles",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Design Principles - Microsoft",
|
||||
"url": "https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/index",
|
||||
"type": "article"
|
||||
}
|
||||
]
|
||||
},
|
||||
"EwvLPSI6AlZ4TnNIJTZA4": {
|
||||
"title": "Learn about APIs",
|
||||
@@ -71,7 +88,7 @@
|
||||
"description": "Rust is a systems programming language known for its focus on safety, performance, and concurrency. It provides fine-grained control over system resources while ensuring memory safety without needing a garbage collector. Rust's ownership model enforces strict rules on how data is accessed and managed, preventing common issues like null pointer dereferences and data races. Its strong type system and modern features, such as pattern matching and concurrency support, make it suitable for a wide range of applications, from low-level systems programming to high-performance web servers and tools. Rust is gaining traction in both industry and open source for its reliability and efficiency.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "The Rust Programming Language - online book",
|
||||
"title": "The Rust Programming Language - Book",
|
||||
"url": "https://doc.rust-lang.org/book/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -334,8 +351,8 @@
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Learn Git with Tutorials, News and Tips - Atlassian",
|
||||
"url": "https://www.atlassian.com/git",
|
||||
"title": "Git Documentation",
|
||||
"url": "https://git-scm.com/doc",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
@@ -370,8 +387,8 @@
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Git",
|
||||
"url": "https://git-scm.com/",
|
||||
"title": "Git Documentation",
|
||||
"url": "https://git-scm.com/doc",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
@@ -396,7 +413,7 @@
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "GitHub Website",
|
||||
"title": "GitHub",
|
||||
"url": "https://github.com",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -424,7 +441,7 @@
|
||||
},
|
||||
"Ry_5Y-BK7HrkIc6X0JG1m": {
|
||||
"title": "Bitbucket",
|
||||
"description": "Bitbucket is a web-based version control repository hosting service owned by Atlassian. It primarily uses Git version control systems, offering both cloud-hosted and self-hosted options. Bitbucket provides features such as pull requests for code review, branch permissions, and inline commenting on code. It integrates seamlessly with other Atlassian products like Jira and Trello, making it popular among teams already using Atlassian tools. Bitbucket supports continuous integration and deployment through Bitbucket Pipelines. It offers unlimited private repositories for small teams, making it cost-effective for smaller organizations. While similar to GitHub in many aspects, Bitbucket's integration with Atlassian's ecosystem and its pricing model for private repositories are key differentiators. It's widely used for collaborative software development, particularly in enterprise environments already invested in Atlassian's suite of products.\n\nVisit the following resources to learn more:",
|
||||
"description": "Bitbucket is a web-based version control repository hosting service owned by Atlassian. It primarily uses Git version control systems, offering both cloud-hosted and self-hosted options. Bitbucket provides features such as pull requests for code review, branch permissions, and inline commenting on code. It integrates seamlessly with other Atlassian products like Jira and Trello, making it popular among teams already using Atlassian tools. Bitbucket supports continuous integration and deployment through Bitbucket Pipelines. It offers unlimited private repositories for small teams, making it cost-effective for smaller organizations.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Bitbucket Website",
|
||||
@@ -453,9 +470,9 @@
|
||||
"description": "GitLab is a web-based DevOps platform that provides a complete solution for the software development lifecycle. It offers source code management, continuous integration/continuous deployment (CI/CD), issue tracking, and more, all integrated into a single application. GitLab supports Git repositories and includes features like merge requests (similar to GitHub's pull requests), wiki pages, and issue boards. It emphasizes DevOps practices, providing built-in CI/CD pipelines, container registry, and Kubernetes integration. GitLab offers both cloud-hosted and self-hosted options, giving organizations flexibility in deployment. Its all-in-one approach differentiates it from competitors, as it includes features that might require multiple tools in other ecosystems. GitLab's focus on the entire DevOps lifecycle, from planning to monitoring, makes it popular among enterprises and teams seeking a unified platform for their development workflows.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "GitLab Website",
|
||||
"title": "GitLab",
|
||||
"url": "https://gitlab.com/",
|
||||
"type": "opensource"
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "GitLab Documentation",
|
||||
@@ -546,7 +563,7 @@
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "MS SQL website",
|
||||
"title": "MS SQL",
|
||||
"url": "https://www.microsoft.com/en-ca/sql-server/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -567,12 +584,12 @@
|
||||
"description": "MySQL is an open-source relational database management system (RDBMS) known for its speed, reliability, and ease of use. It uses SQL (Structured Query Language) for database interactions and supports a range of features for data management, including transactions, indexing, and stored procedures. MySQL is widely used for web applications, data warehousing, and various other applications due to its scalability and flexibility. It integrates well with many programming languages and platforms, and is often employed in conjunction with web servers and frameworks in popular software stacks like LAMP (Linux, Apache, MySQL, PHP/Python/Perl). MySQL is maintained by Oracle Corporation and has a large community and ecosystem supporting its development and use.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "MySQL website",
|
||||
"title": "MySQL",
|
||||
"url": "https://www.mysql.com/",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "W3Schools - MySQL tutorial ",
|
||||
"title": "W3Schools - MySQL Tutorial",
|
||||
"url": "https://www.w3schools.com/mySQl/default.asp",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -603,12 +620,12 @@
|
||||
"description": "Oracle Database is a highly robust, enterprise-grade relational database management system (RDBMS) developed by Oracle Corporation. Known for its scalability, reliability, and comprehensive features, Oracle Database supports complex data management tasks and mission-critical applications. It provides advanced functionalities like SQL querying, transaction management, high availability through clustering, and data warehousing. Oracle's database solutions include support for various data models, such as relational, spatial, and graph, and offer tools for security, performance optimization, and data integration. It is widely used in industries requiring large-scale, secure, and high-performance data processing.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Official Website",
|
||||
"title": "Oracle Website",
|
||||
"url": "https://www.oracle.com/database/",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Official Docs",
|
||||
"title": "Oracle Docs",
|
||||
"url": "https://docs.oracle.com/en/database/index.html",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -626,10 +643,10 @@
|
||||
},
|
||||
"tD3i-8gBpMKCHB-ITyDiU": {
|
||||
"title": "MariaDB",
|
||||
"description": "MariaDB server is a community developed fork of MySQL server. Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most featureful, stable, and sanely licensed open SQL server in the industry. MariaDB was created with the intention of being a more versatile, drop-in replacement version of MySQL\n\nVisit the following resources to learn more:",
|
||||
"description": "MariaDB server is a community developed fork of MySQL server. Started by core members of the original MySQL team, MariaDB actively works with outside developers to deliver the most feature rich, stable, and sanely licensed open SQL server in the industry. MariaDB was created with the intention of being a more versatile, drop-in replacement version of MySQL\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "MariaDB website",
|
||||
"title": "MariaDB",
|
||||
"url": "https://mariadb.org/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -782,8 +799,14 @@
|
||||
},
|
||||
"GwApfL4Yx-b5Y8dB9Vy__": {
|
||||
"title": "Failure Modes",
|
||||
"description": "Database failure modes refer to the various ways in which a database system can malfunction or cease to operate correctly. These include hardware failures (like disk crashes or network outages), software bugs, data corruption, performance degradation due to overload, and inconsistencies in distributed systems. Common failure modes involve data loss, system unavailability, replication lag in distributed databases, and deadlocks. To mitigate these, databases employ strategies such as redundancy, regular backups, transaction logging, and failover mechanisms. Understanding potential failure modes is crucial for designing robust database systems with high availability and data integrity. It informs the implementation of fault tolerance measures, recovery procedures, and monitoring systems to ensure database reliability and minimize downtime in critical applications.",
|
||||
"links": []
|
||||
"description": "Database failure modes refer to the various ways in which a database system can malfunction or cease to operate correctly. These include hardware failures (like disk crashes or network outages), software bugs, data corruption, performance degradation due to overload, and inconsistencies in distributed systems. Common failure modes involve data loss, system unavailability, replication lag in distributed databases, and deadlocks. To mitigate these, databases employ strategies such as redundancy, regular backups, transaction logging, and failover mechanisms. Understanding potential failure modes is crucial for designing robust database systems with high availability and data integrity. It informs the implementation of fault tolerance measures, recovery procedures, and monitoring systems to ensure database reliability and minimize downtime in critical applications.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Database Failure Modes",
|
||||
"url": "https://ieeexplore.ieee.org/document/7107294/",
|
||||
"type": "article"
|
||||
}
|
||||
]
|
||||
},
|
||||
"rq_y_OBMD9AH_4aoecvAi": {
|
||||
"title": "Transactions",
|
||||
@@ -921,7 +944,7 @@
|
||||
"description": "Data replication is the process of creating and maintaining multiple copies of the same data across different locations or nodes in a distributed system. It enhances data availability, reliability, and performance by ensuring that data remains accessible even if one or more nodes fail. Replication can be synchronous (changes are applied to all copies simultaneously) or asynchronous (changes are propagated after being applied to the primary copy). It's widely used in database systems, content delivery networks, and distributed file systems. Replication strategies include master-slave, multi-master, and peer-to-peer models. While improving fault tolerance and read performance, replication introduces challenges in maintaining data consistency across copies and managing potential conflicts. Effective replication strategies must balance consistency, availability, and partition tolerance, often in line with the principles of the CAP theorem.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "What is data replication?",
|
||||
"title": "Data Replication? - IBM",
|
||||
"url": "https://www.ibm.com/topics/data-replication",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -984,7 +1007,7 @@
|
||||
"description": "JSON or JavaScript Object Notation is an encoding scheme that is designed to eliminate the need for an ad-hoc code for each application to communicate with servers that communicate in a defined way. JSON API module exposes an implementation for data stores and data structures, such as entity types, bundles, and fields.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Official Website",
|
||||
"title": "JSON API",
|
||||
"url": "https://jsonapi.org/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -1014,15 +1037,15 @@
|
||||
"url": "https://swagger.io/tools/swagger-editor/",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": " REST API and OpenAPI: It’s Not an Either/Or Question ",
|
||||
"url": "https://www.youtube.com/watch?v=pRS9LRBgjYg",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "OpenAPI 3.0: How to Design and Document APIs with the Latest OpenAPI Specification 3.0",
|
||||
"url": "https://www.youtube.com/watch?v=6kwmW_p_Tig",
|
||||
"type": "video"
|
||||
},
|
||||
{
|
||||
"title": " REST API and OpenAPI: It’s Not an Either/Or Question",
|
||||
"url": "https://www.youtube.com/watch?v=pRS9LRBgjYg",
|
||||
"type": "video"
|
||||
}
|
||||
]
|
||||
},
|
||||
@@ -1109,7 +1132,7 @@
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "GraphQL Official Website",
|
||||
"title": "GraphQL",
|
||||
"url": "https://graphql.org/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -1130,7 +1153,7 @@
|
||||
"description": "Client-side caching is a technique where web browsers or applications store data locally on the user's device to improve performance and reduce server load. It involves saving copies of web pages, images, scripts, and other resources on the client's system for faster access on subsequent visits. Modern browsers implement various caching mechanisms, including HTTP caching (using headers like Cache-Control and ETag), service workers for offline functionality, and local storage APIs. Client-side caching significantly reduces network traffic and load times, enhancing user experience, especially on slower connections. However, it requires careful management to balance improved performance with the need for up-to-date content. Developers must implement appropriate cache invalidation strategies and consider cache-busting techniques for critical updates. Effective client-side caching is crucial for creating responsive, efficient web applications while minimizing server resource usage.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Client-side Caching",
|
||||
"title": "Client Side Caching",
|
||||
"url": "https://redis.io/docs/latest/develop/use/client-side-caching/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -1143,13 +1166,18 @@
|
||||
},
|
||||
"Nq2BO53bHJdFT1rGZPjYx": {
|
||||
"title": "CDN",
|
||||
"description": "A Content Delivery Network (CDN) service aims to provide high availability and performance improvements of websites. This is achieved with fast delivery of website assets and content typically via geographically closer endpoints to the client requests. Traditional commercial CDNs (Amazon CloudFront, Akamai, CloudFlare and Fastly) provide servers across the globe which can be used for this purpose. Serving assets and contents via a CDN reduces bandwidth on website hosting, provides an extra layer of caching to reduce potential outages and can improve website security as well\n\nVisit the following resources to learn more:",
|
||||
"description": "A Content Delivery Network (CDN) service aims to provide high availability and performance improvements of websites. This is achieved with fast delivery of website assets and content typically via geographically closer endpoints to the client requests.\n\nTraditional commercial CDNs (Amazon CloudFront, Akamai, CloudFlare and Fastly) provide servers across the globe which can be used for this purpose. Serving assets and contents via a CDN reduces bandwidth on website hosting, provides an extra layer of caching to reduce potential outages and can improve website security as well\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "CloudFlare - What is a CDN? | How do CDNs work?",
|
||||
"url": "https://www.cloudflare.com/en-ca/learning/cdn/what-is-a-cdn/",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "AWS - CDN",
|
||||
"url": "https://aws.amazon.com/what-is/cdn/",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "What is Cloud CDN?",
|
||||
"url": "https://www.youtube.com/watch?v=841kyd_mfH0",
|
||||
@@ -1190,8 +1218,19 @@
|
||||
},
|
||||
"ELj8af7Mi38kUbaPJfCUR": {
|
||||
"title": "Caching",
|
||||
"description": "Caching is a technique used in computing to store and retrieve frequently accessed data quickly, reducing the need to fetch it from the original, slower source repeatedly. It involves keeping a copy of data in a location that's faster to access than its primary storage. Caching can occur at various levels, including browser caching, application-level caching, and database caching. It significantly improves performance by reducing latency, decreasing network traffic, and lowering the load on servers or databases. Common caching strategies include time-based expiration, least recently used (LRU) algorithms, and write-through or write-back policies. While caching enhances speed and efficiency, it also introduces challenges in maintaining data consistency and freshness. Effective cache management is crucial in balancing performance gains with the need for up-to-date information in dynamic systems.",
|
||||
"links": []
|
||||
"description": "Caching is a technique used in computing to store and retrieve frequently accessed data quickly, reducing the need to fetch it from the original, slower source repeatedly. It involves keeping a copy of data in a location that's faster to access than its primary storage. Caching can occur at various levels, including browser caching, application-level caching, and database caching. It significantly improves performance by reducing latency, decreasing network traffic, and lowering the load on servers or databases. Common caching strategies include time-based expiration, least recently used (LRU) algorithms, and write-through or write-back policies. While caching enhances speed and efficiency, it also introduces challenges in maintaining data consistency and freshness. Effective cache management is crucial in balancing performance gains with the need for up-to-date information in dynamic systems.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "What is Caching - AWS",
|
||||
"url": "https://aws.amazon.com/caching/",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Caching - Cloudflare",
|
||||
"url": "https://www.cloudflare.com/learning/cdn/what-is-caching/",
|
||||
"type": "article"
|
||||
}
|
||||
]
|
||||
},
|
||||
"RBrIP5KbVQ2F0ly7kMfTo": {
|
||||
"title": "Web Security",
|
||||
@@ -1333,7 +1372,7 @@
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "DevOps CI/CD Explained in 100 Seconds by Fireship",
|
||||
"title": "DevOps CI/CD Explained in 100 Seconds",
|
||||
"url": "https://www.youtube.com/watch?v=scEDHsr3APg",
|
||||
"type": "video"
|
||||
},
|
||||
@@ -1581,7 +1620,7 @@
|
||||
},
|
||||
"8DmabQJXlrT__COZrDVTV": {
|
||||
"title": "Twelve Factor Apps",
|
||||
"description": "The Twelve-Factor App methodology is a set of principles for building modern, scalable, and maintainable web applications, particularly suited for cloud environments. It emphasizes best practices for developing applications in a way that facilitates portability, scalability, and ease of deployment. Key principles include:\n\n1. **Codebase**: One codebase tracked in version control, with many deploys.\n2. **Dependencies**: Explicitly declare and isolate dependencies.\n3. **Config**: Store configuration in the environment.\n4. **Backing Services**: Treat backing services as attached resources.\n5. **Build, Release, Run**: Separate build and run stages.\n6. **Processes**: Execute the app as one or more stateless processes.\n7. **Port Binding**: Export services via port binding.\n8. **Concurrency**: Scale out via the process model.\n9. **Disposability**: Maximize robustness with fast startup and graceful shutdown.\n10. **Dev/Prod Parity**: Keep development, staging, and production environments as similar as possible.\n11. **Logs**: Treat logs as streams of events.\n12. **Admin Processes**: Run administrative or management tasks as one-off processes.\n\nThese principles help create applications that are easy to deploy, manage, and scale in cloud environments, promoting operational simplicity and consistency.\n\nVisit the following resources to learn more:",
|
||||
"description": "The Twelve-Factor App methodology is a set of principles for building modern, scalable, and maintainable web applications, particularly suited for cloud environments. It emphasizes best practices for developing applications in a way that facilitates portability, scalability, and ease of deployment. Key principles include:\n\n1. **Codebase**: One codebase tracked in version control, with many deploys.\n2. **Dependencies**: Explicitly declare and isolate dependencies.\n3. **Config**: Store configuration in the environment.\n4. **Backing Services**: Treat backing services as attached resources.\n5. **Build, Release, Run**: Separate build and run stages.\n6. **Processes**: Execute the app as one or more stateless processes.\n7. **Port Binding**: Export services via port binding.\n8. **Concurrency**: Scale out via the process model.\n9. **Disposability**: Maximize robustness with fast startup and graceful shutdown.\n10. **Dev/Prod Parity**: Keep development, staging, and production environments as similar as possible.\n11. **Logs**: Treat logs as streams of events.\n12. **Admin Processes**: Run administrative or management tasks as one-off processes.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "The Twelve-Factor App",
|
||||
@@ -1647,7 +1686,7 @@
|
||||
"description": "Apache Kafka is a distributed event streaming platform designed for high-throughput, fault-tolerant data processing. It acts as a message broker, allowing systems to publish and subscribe to streams of records, similar to a distributed commit log. Kafka is highly scalable and can handle large volumes of data with low latency, making it ideal for real-time analytics, log aggregation, and data integration. It features topics for organizing data streams, partitions for parallel processing, and replication for fault tolerance, enabling reliable and efficient handling of large-scale data flows across distributed systems.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Apache Kafka quickstart",
|
||||
"title": "Apache Kafka",
|
||||
"url": "https://kafka.apache.org/quickstart",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -1704,12 +1743,12 @@
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Getting started with LXD Containerization",
|
||||
"title": "Getting Started with LXD Containerization",
|
||||
"url": "https://www.youtube.com/watch?v=aIwgPKkVj8s",
|
||||
"type": "video"
|
||||
},
|
||||
{
|
||||
"title": "Getting started with LXC containers",
|
||||
"title": "Getting Started with LXC containers",
|
||||
"url": "https://youtu.be/CWmkSj_B-wo",
|
||||
"type": "video"
|
||||
}
|
||||
@@ -1767,7 +1806,7 @@
|
||||
"description": "Server-Sent Events (SSE) is a technology for sending real-time updates from a server to a web client over a single, persistent HTTP connection. It enables servers to push updates to clients efficiently and automatically reconnects if the connection is lost. SSE is ideal for applications needing one-way communication, such as live notifications or real-time data feeds, and uses a simple text-based format for transmitting event data, which can be easily handled by clients using the `EventSource` API in JavaScript.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Server-Sent Events - MDN",
|
||||
"title": "Server Sent Events - MDN",
|
||||
"url": "https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -1783,7 +1822,7 @@
|
||||
"description": "Nginx is a high-performance, open-source web server and reverse proxy server known for its efficiency, scalability, and low resource consumption. Originally developed as a web server, Nginx is also commonly used as a load balancer, HTTP cache, and mail proxy. It excels at handling a large number of concurrent connections due to its asynchronous, event-driven architecture. Nginx's features include support for serving static content, handling dynamic content through proxying to application servers, and providing SSL/TLS termination. Its modular design allows for extensive customization and integration with various applications and services, making it a popular choice for modern web infrastructures.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Official Website",
|
||||
"title": "Nginx Website",
|
||||
"url": "https://nginx.org/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -1809,7 +1848,7 @@
|
||||
"description": "Caddy is a modern, open-source web server written in Go. It's known for its simplicity, automatic HTTPS encryption, and HTTP/2 support out of the box. Caddy stands out for its ease of use, with a simple configuration syntax and the ability to serve static files with zero configuration. It automatically obtains and renews SSL/TLS certificates from Let's Encrypt, making secure deployments straightforward. Caddy supports various plugins and modules for extended functionality, including reverse proxying, load balancing, and dynamic virtual hosting. It's designed with security in mind, implementing modern web standards by default. While it may not match the raw performance of servers like Nginx in extremely high-load scenarios, Caddy's simplicity, built-in security features, and low resource usage make it an attractive choice for many web hosting needs, particularly for smaller to medium-sized projects or developers seeking a hassle-free server setup.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "caddyserver/caddy",
|
||||
"title": "caddyserver/caddy - Caddy on GitHub",
|
||||
"url": "https://github.com/caddyserver/caddy",
|
||||
"type": "opensource"
|
||||
},
|
||||
@@ -1856,7 +1895,7 @@
|
||||
"description": "Microsoft Internet Information Services (IIS) is a flexible, secure, and high-performance web server developed by Microsoft for hosting and managing web applications and services on Windows Server. IIS supports a variety of web technologies, including [ASP.NET](http://ASP.NET), PHP, and static content. It provides features such as request handling, authentication, SSL/TLS encryption, and URL rewriting. IIS also offers robust management tools, including a graphical user interface and command-line options, for configuring and monitoring web sites and applications. It is commonly used for deploying enterprise web applications and services in a Windows-based environment, offering integration with other Microsoft technologies and services.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Official Website",
|
||||
"title": "Microsoft -IIS",
|
||||
"url": "https://www.iis.net/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -1942,7 +1981,7 @@
|
||||
},
|
||||
"xPvVwGQw28uMeLYIWn8yn": {
|
||||
"title": "Memcached",
|
||||
"description": "Memcached (pronounced variously mem-cash-dee or mem-cashed) is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems (Linux and macOS) and on Microsoft Windows. It depends on the `libevent` library. Memcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in the least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.\n\nMemcached has no internal mechanism to track misses which may happen. However, some third-party utilities provide this functionality.\n\nVisit the following resources to learn more:",
|
||||
"description": "Memcached (pronounced variously mem-cash-dee or mem-cashed) is a general-purpose distributed memory-caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems (Linux and macOS) and on Microsoft Windows. It depends on the `libevent` library. Memcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in the least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "memcached/memcached",
|
||||
@@ -2091,7 +2130,7 @@
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Backpressure explained — the resisted flow of data through software",
|
||||
"title": "Backpressure explained — The Resisted Flow of Data through Software",
|
||||
"url": "https://medium.com/@jayphelps/backpressure-explained-the-flow-of-data-through-software-2350b3e77ce7",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -2136,7 +2175,7 @@
|
||||
},
|
||||
"f7iWBkC0X7yyCoP_YubVd": {
|
||||
"title": "Migration Strategies",
|
||||
"description": "Migration strategies involve planning and executing the transition of applications, data, or infrastructure from one environment to another, such as from on-premises systems to the cloud or between different cloud providers. Key strategies include:\n\n1. **Rehost (Lift and Shift)**: Moving applications as-is to the new environment with minimal changes, which is often the quickest but may not fully leverage new platform benefits.\n2. **Replatform**: Making some optimizations or changes to adapt applications for the new environment, enhancing performance or scalability while retaining most of the existing architecture.\n3. **Refactor**: Redesigning and modifying applications to optimize for the new environment, often taking advantage of new features and improving functionality or performance.\n4. **Repurchase**: Replacing existing applications with new, often cloud-based, solutions that better meet current needs.\n5. **Retain**: Keeping certain applications or systems in their current environment due to specific constraints or requirements.\n6. **Retire**: Decommissioning applications that are no longer needed or are redundant.\n\nEach strategy has its own trade-offs in terms of cost, complexity, and benefits, and the choice depends on factors like the application’s architecture, business needs, and resource availability.\n\nVisit the following resources to learn more:",
|
||||
"description": "Migration strategies involve planning and executing the transition of applications, data, or infrastructure from one environment to another, such as from on-premises systems to the cloud or between different cloud providers. Key strategies include:\n\n1. **Rehost (Lift and Shift)**: Moving applications as-is to the new environment with minimal changes, which is often the quickest but may not fully leverage new platform benefits.\n2. **Replatform**: Making some optimizations or changes to adapt applications for the new environment, enhancing performance or scalability while retaining most of the existing architecture.\n3. **Refactor**: Redesigning and modifying applications to optimize for the new environment, often taking advantage of new features and improving functionality or performance.\n4. **Repurchase**: Replacing existing applications with new, often cloud-based, solutions that better meet current needs.\n5. **Retain**: Keeping certain applications or systems in their current environment due to specific constraints or requirements.\n6. **Retire**: Decommissioning applications that are no longer needed or are redundant.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Databases as a Challenge for Continuous Delivery",
|
||||
@@ -2152,7 +2191,7 @@
|
||||
},
|
||||
"osQlGGy38xMcKLtgZtWaZ": {
|
||||
"title": "Types of Scaling",
|
||||
"description": "Horizontal scaling (scaling out/in) involves adding or removing instances of resources, such as servers or containers, to handle increased or decreased loads. It distributes the workload across multiple instances to improve performance and redundancy. This method enhances the system's capacity by expanding the number of nodes in a distributed system.\n\nVertical scaling (scaling up/down) involves increasing or decreasing the resources (CPU, memory, storage) of a single instance or server to handle more load or reduce capacity. This method improves performance by upgrading the existing hardware or virtual machine but has limits based on the maximum capacity of the individual resource.\n\nBoth approaches have their advantages: horizontal scaling offers better fault tolerance and flexibility, while vertical scaling is often simpler to implement but can be limited by the hardware constraints of a single machine.\n\nVisit the following resources to learn more:",
|
||||
"description": "Horizontal scaling (scaling out/in) involves adding or removing instances of resources, such as servers or containers, to handle increased or decreased loads. It distributes the workload across multiple instances to improve performance and redundancy. This method enhances the system's capacity by expanding the number of nodes in a distributed system.\n\nVertical scaling (scaling up/down) involves increasing or decreasing the resources (CPU, memory, storage) of a single instance or server to handle more load or reduce capacity. This method improves performance by upgrading the existing hardware or virtual machine but has limits based on the maximum capacity of the individual resource.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Horizontal vs Vertical Scaling",
|
||||
@@ -2207,7 +2246,7 @@
|
||||
"description": "Monitoring involves continuously observing and tracking the performance, availability, and health of systems, applications, and infrastructure. It typically includes collecting and analyzing metrics, logs, and events to ensure systems are operating within desired parameters. Monitoring helps detect anomalies, identify potential issues before they escalate, and provides insights into system behavior. It often involves tools and platforms that offer dashboards, alerts, and reporting features to facilitate real-time visibility and proactive management. Effective monitoring is crucial for maintaining system reliability, performance, and for supporting incident response and troubleshooting.\n\nA few popular tools are Grafana, Sentry, Mixpanel, NewRelic.",
|
||||
"links": [
|
||||
{
|
||||
"title": "Top monitoring tools 2024",
|
||||
"title": "Top Monitoring Tools",
|
||||
"url": "https://thectoclub.com/tools/best-application-monitoring-software/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -2307,9 +2346,9 @@
|
||||
"description": "Bcrypt is a password-hashing function designed to securely hash passwords for storage in databases. Created by Niels Provos and David Mazières, it's based on the Blowfish cipher and incorporates a salt to protect against rainbow table attacks. Bcrypt's key feature is its adaptive nature, allowing for the adjustment of its cost factor to make it slower as computational power increases, thus maintaining resistance against brute-force attacks over time. It produces a fixed-size hash output, typically 60 characters long, which includes the salt and cost factor. Bcrypt is widely used in many programming languages and frameworks due to its security strength and relative ease of implementation. Its deliberate slowness in processing makes it particularly effective for password storage, where speed is not a priority but security is paramount.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "bcrypts npm package",
|
||||
"title": "bcrypt",
|
||||
"url": "https://www.npmjs.com/package/bcrypt",
|
||||
"type": "article"
|
||||
"type": "opensource"
|
||||
},
|
||||
{
|
||||
"title": "Understanding bcrypt",
|
||||
@@ -2429,7 +2468,7 @@
|
||||
},
|
||||
"TZ0BWOENPv6pQm8qYB8Ow": {
|
||||
"title": "Server Security",
|
||||
"description": "Server security involves protecting servers from threats and vulnerabilities to ensure the confidentiality, integrity, and availability of the data and services they manage. Key practices include:\n\n1. **Patch Management**: Regularly updating software and operating systems to fix vulnerabilities.\n2. **Access Control**: Implementing strong authentication mechanisms and restricting access to authorized users only.\n3. **Firewalls and Intrusion Detection**: Using firewalls to block unauthorized access and intrusion detection systems to monitor and respond to suspicious activities.\n4. **Encryption**: Encrypting data both in transit and at rest to protect sensitive information from unauthorized access.\n5. **Security Hardening**: Configuring servers with minimal services and features, applying security best practices to reduce the attack surface.\n6. **Regular Backups**: Performing regular backups to ensure data can be restored in case of loss or corruption.\n7. **Monitoring and Logging**: Continuously monitoring server activity and maintaining logs for auditing and detecting potential security incidents.\n\nEffective server security is crucial for safeguarding against attacks, maintaining system stability, and protecting sensitive data.\n\nLearn more from the following resources:",
|
||||
"description": "Server security involves protecting servers from threats and vulnerabilities to ensure the confidentiality, integrity, and availability of the data and services they manage. Key practices include:\n\n1. **Patch Management**: Regularly updating software and operating systems to fix vulnerabilities.\n2. **Access Control**: Implementing strong authentication mechanisms and restricting access to authorized users only.\n3. **Firewalls and Intrusion Detection**: Using firewalls to block unauthorized access and intrusion detection systems to monitor and respond to suspicious activities.\n4. **Encryption**: Encrypting data both in transit and at rest to protect sensitive information from unauthorized access.\n5. **Security Hardening**: Configuring servers with minimal services and features, applying security best practices to reduce the attack surface.\n6. **Regular Backups**: Performing regular backups to ensure data can be restored in case of loss or corruption.\n7. **Monitoring and Logging**: Continuously monitoring server activity and maintaining logs for auditing and detecting potential security incidents.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "What is a hardened server?",
|
||||
@@ -2600,7 +2639,7 @@
|
||||
},
|
||||
"hkxw9jPGYphmjhTjw8766": {
|
||||
"title": "DNS and how it works?",
|
||||
"description": "DNS (Domain Name System) is a hierarchical, decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It translates human-readable domain names (like [www.example.com](http://www.example.com)) into IP addresses (like 192.0.2.1) that computers use to identify each other. DNS servers distributed worldwide work together to resolve these queries, forming a global directory service. The system uses a tree-like structure with root servers at the top, followed by top-level domain servers (.com, .org, etc.), authoritative name servers for specific domains, and local DNS servers. DNS is crucial for the functioning of the Internet, enabling users to access websites and services using memorable names instead of numerical IP addresses. It also supports email routing, service discovery, and other network protocols.\n\nVisit the following resources to learn more:",
|
||||
"description": "DNS (Domain Name System) is a hierarchical, decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It translates human-readable domain names (like `www.example.com`) into IP addresses (like 192.0.2.1) that computers use to identify each other. DNS servers distributed worldwide work together to resolve these queries, forming a global directory service. The system uses a tree-like structure with root servers at the top, followed by top-level domain servers (.com, .org, etc.), authoritative name servers for specific domains, and local DNS servers. DNS is crucial for the functioning of the Internet, enabling users to access websites and services using memorable names instead of numerical IP addresses. It also supports email routing, service discovery, and other network protocols.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "What is DNS?",
|
||||
@@ -2811,7 +2850,7 @@
|
||||
"description": "OpenID is an open standard for decentralized authentication that allows users to log in to multiple websites and applications using a single set of credentials, managed by an identity provider (IdP). It enables users to authenticate their identity through an external service, simplifying the login process and reducing the need for multiple usernames and passwords. OpenID typically works in conjunction with OAuth 2.0 for authorization, allowing users to grant access to their data while maintaining security. This approach enhances user convenience and streamlines identity management across various platforms.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Official Website",
|
||||
"title": "OpenID Website",
|
||||
"url": "https://openid.net/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -2839,7 +2878,7 @@
|
||||
},
|
||||
"UCHtaePVxS-0kpqlYxbfC": {
|
||||
"title": "SAML",
|
||||
"description": "Security Assertion Markup Language (SAML)\n-----------------------------------------\n\nSecurity Assertion Markup Language (SAML) is an XML-based framework used for single sign-on (SSO) and identity federation, enabling users to authenticate once and gain access to multiple applications or services. It allows for the exchange of authentication and authorization data between an identity provider (IdP) and a service provider (SP). SAML assertions are XML documents that contain user identity information and attributes, and are used to convey authentication credentials and permissions. By implementing SAML, organizations can streamline user management, enhance security through centralized authentication, and simplify the user experience by reducing the need for multiple logins across different systems.\n\nLearn more from the following resources:",
|
||||
"description": "Security Assertion Markup Language (SAML) is an XML-based framework used for single sign-on (SSO) and identity federation, enabling users to authenticate once and gain access to multiple applications or services. It allows for the exchange of authentication and authorization data between an identity provider (IdP) and a service provider (SP). SAML assertions are XML documents that contain user identity information and attributes, and are used to convey authentication credentials and permissions. By implementing SAML, organizations can streamline user management, enhance security through centralized authentication, and simplify the user experience by reducing the need for multiple logins across different systems.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "SAML Explained in Plain English",
|
||||
@@ -2884,17 +2923,17 @@
|
||||
"description": "Solr is an open-source, highly scalable search platform built on Apache Lucene, designed for full-text search, faceted search, and real-time indexing. It provides powerful features for indexing and querying large volumes of data with high performance and relevance. Solr supports complex queries, distributed searching, and advanced text analysis, including tokenization and stemming. It offers features such as faceted search, highlighting, and geographic search, and is commonly used for building search engines and data retrieval systems in various applications, from e-commerce to content management.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "apache/solr",
|
||||
"title": "Solr on Github",
|
||||
"url": "https://github.com/apache/solr",
|
||||
"type": "opensource"
|
||||
},
|
||||
{
|
||||
"title": "Official Website",
|
||||
"title": "Solr Website",
|
||||
"url": "https://solr.apache.org/",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Official Documentation",
|
||||
"title": "Solr Documentation",
|
||||
"url": "https://solr.apache.org/resources.html#documentation",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -2910,7 +2949,7 @@
|
||||
"description": "Real-time data refers to information that is processed and made available immediately or with minimal delay, allowing users or systems to react promptly to current conditions. This type of data is essential in applications requiring immediate updates and responses, such as financial trading platforms, online gaming, real-time analytics, and monitoring systems. Real-time data processing involves capturing, analyzing, and delivering information as it is generated, often using technologies like stream processing frameworks (e.g., Apache Kafka, Apache Flink) and low-latency databases. Effective real-time data systems can handle high-speed data flows, ensuring timely and accurate decision-making.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Real-time data - Wiki",
|
||||
"title": "Real-time Data - Wiki",
|
||||
"url": "https://en.wikipedia.org/wiki/Real-time_data",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -2942,7 +2981,7 @@
|
||||
"description": "Short polling is a technique where a client periodically sends requests to a server at regular intervals to check for updates or new data. The server responds with the current state or any changes since the last request. While simple to implement and compatible with most HTTP infrastructures, short polling can be inefficient due to the frequent network requests and potential for increased latency in delivering updates. It contrasts with long polling and WebSockets, which offer more efficient mechanisms for real-time communication. Short polling is often used when real-time requirements are less stringent and ease of implementation is a priority.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Amazon SQS short and long polling",
|
||||
"title": "Amazon SQS Short and Long Polling",
|
||||
"url": "https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -2984,7 +3023,7 @@
|
||||
"description": "Amazon DynamoDB is a fully managed, serverless NoSQL database service provided by Amazon Web Services (AWS). It offers high-performance, scalable, and flexible data storage for applications of any scale. DynamoDB supports both key-value and document data models, providing fast and predictable performance with seamless scalability. It features automatic scaling, built-in security, backup and restore options, and global tables for multi-region deployment. DynamoDB excels in handling high-traffic web applications, gaming backends, mobile apps, and IoT solutions. It offers consistent single-digit millisecond latency at any scale and supports both strongly consistent and eventually consistent read models. With its integration into the AWS ecosystem, on-demand capacity mode, and support for transactions, DynamoDB is widely used for building highly responsive and scalable applications, particularly those with unpredictable workloads or requiring low-latency data access.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "AWS DynamoDB Website",
|
||||
"title": "AWS DynamoDB",
|
||||
"url": "https://aws.amazon.com/dynamodb/",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -3002,10 +3041,10 @@
|
||||
},
|
||||
"RyJFLLGieJ8Xjt-DlIayM": {
|
||||
"title": "Firebase",
|
||||
"description": "Firebase is a comprehensive mobile and web application development platform owned by Google. It provides a suite of cloud-based services that simplify app development, hosting, and scaling. Key features include real-time database, cloud storage, authentication, hosting, cloud functions, and analytics. Firebase offers real-time synchronization, allowing data to be updated across clients instantly. Its authentication service supports multiple providers, including email/password, social media logins, and phone authentication. The platform's serverless architecture enables developers to focus on front-end development without managing backend infrastructure. Firebase also provides tools for app testing, crash reporting, and performance monitoring. While it excels in rapid prototyping and building real-time applications, its proprietary nature and potential for vendor lock-in are considerations for large-scale or complex applications. Firebase's ease of use and integration with Google Cloud Platform make it popular for startups and projects requiring quick deployment.\n\nLearn more from the following resources:",
|
||||
"description": "Firebase is a comprehensive mobile and web application development platform owned by Google. It provides a suite of cloud-based services that simplify app development, hosting, and scaling. Key features include real-time database, cloud storage, authentication, hosting, cloud functions, and analytics. Firebase offers real-time synchronization, allowing data to be updated across clients instantly. Its authentication service supports multiple providers, including email/password, social media logins, and phone authentication. The platform's serverless architecture enables developers to focus on front-end development without managing backend infrastructure. Firebase also provides tools for app testing, crash reporting, and performance monitoring.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "The ultimate guide to Firebase",
|
||||
"title": "The Ultimate Guide to Firebase",
|
||||
"url": "https://fireship.io/lessons/the-ultimate-beginners-guide-to-firebase/",
|
||||
"type": "course"
|
||||
},
|
||||
@@ -3042,7 +3081,7 @@
|
||||
"description": "SQLite is a lightweight, serverless, self-contained SQL database engine that is designed for simplicity and efficiency. It is widely used in embedded systems and applications where a full-featured database server is not required, such as mobile apps, desktop applications, and small to medium-sized websites. SQLite stores data in a single file, which makes it easy to deploy and manage. It supports standard SQL queries and provides ACID (Atomicity, Consistency, Isolation, Durability) compliance to ensure data integrity. SQLite’s small footprint, minimal configuration, and ease of use make it a popular choice for applications needing a compact, high-performance database solution.\n\nVisit the following resources to learn more:",
|
||||
"links": [
|
||||
{
|
||||
"title": "SQLite website",
|
||||
"title": "SQLite",
|
||||
"url": "https://www.sqlite.org/index.html",
|
||||
"type": "article"
|
||||
},
|
||||
@@ -3104,7 +3143,7 @@
|
||||
"type": "video"
|
||||
},
|
||||
{
|
||||
"title": "What is time series data?",
|
||||
"title": "What is Time Series Data?",
|
||||
"url": "https://www.youtube.com/watch?v=Se5ipte9DMY",
|
||||
"type": "video"
|
||||
}
|
||||
@@ -3209,7 +3248,7 @@
|
||||
"description": "Database migrations are a version-controlled way to manage and apply incremental changes to a database schema over time, allowing developers to modify the database structure (e.g., adding tables, altering columns) without affecting existing data. They ensure that the database evolves alongside application code in a consistent, repeatable manner across environments (e.g., development, testing, production), while maintaining compatibility with older versions of the schema. Migrations are typically written in SQL or a database-agnostic language, and are executed using migration tools like Liquibase, Flyway, or built-in ORM features such as Django or Rails migrations.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "What are database migrations?",
|
||||
"title": "What are Database Migrations?",
|
||||
"url": "https://www.prisma.io/dataguide/types/relational/what-are-database-migrations",
|
||||
"type": "article"
|
||||
},
|
||||
|
||||
@@ -372,16 +372,6 @@
|
||||
"url": "https://www.coursera.org/lecture/data-structures/doubly-linked-lists-jpGKD",
|
||||
"type": "course"
|
||||
},
|
||||
{
|
||||
"title": "CS 61B Lecture 7: Linked Lists I",
|
||||
"url": "https://archive.org/details/ucberkeley_webcast_htzJdKoEmO0",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "CS 61B Lecture 7: Linked Lists II",
|
||||
"url": "https://archive.org/details/ucberkeley_webcast_-c4I3gFYe3w",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Linked List Data Structure | Illustrated Data Structures",
|
||||
"url": "https://www.youtube.com/watch?v=odW9FU8jPRQ",
|
||||
@@ -392,6 +382,16 @@
|
||||
"url": "https://www.youtube.com/watch?v=F8AbOfQwl1c",
|
||||
"type": "video"
|
||||
},
|
||||
{
|
||||
"title": "CS 61B Lecture 7: Linked Lists I",
|
||||
"url": "https://archive.org/details/ucberkeley_webcast_htzJdKoEmO0",
|
||||
"type": "video"
|
||||
},
|
||||
{
|
||||
"title": "CS 61B Lecture 7: Linked Lists II",
|
||||
"url": "https://archive.org/details/ucberkeley_webcast_-c4I3gFYe3w",
|
||||
"type": "video"
|
||||
},
|
||||
{
|
||||
"title": "Why you should avoid Linked Lists?",
|
||||
"url": "https://www.youtube.com/watch?v=YQs6IC-vgmo",
|
||||
@@ -511,16 +511,16 @@
|
||||
"url": "https://www.coursera.org/lecture/data-structures/dynamic-arrays-EwbnV",
|
||||
"type": "course"
|
||||
},
|
||||
{
|
||||
"title": "UC Berkeley CS61B - Linear and Multi-Dim Arrays (Start watching from 15m 32s)",
|
||||
"url": "https://archive.org/details/ucberkeley_webcast_Wp8oiO_CZZE",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Array Data Structure | Illustrated Data Structures",
|
||||
"url": "https://www.youtube.com/watch?v=QJNwK2uJyGs",
|
||||
"type": "video"
|
||||
},
|
||||
{
|
||||
"title": "UC Berkeley CS61B - Linear and Multi-Dim Arrays (Start watching from 15m 32s)",
|
||||
"url": "https://archive.org/details/ucberkeley_webcast_Wp8oiO_CZZE",
|
||||
"type": "video"
|
||||
},
|
||||
{
|
||||
"title": "Dynamic and Static Arrays",
|
||||
"url": "https://www.youtube.com/watch?v=PEnFFiQe1pM&list=PLDV1Zeh2NRsB6SWUrDFW2RmDotAfPbeHu&index=6",
|
||||
|
||||
@@ -766,7 +766,7 @@
|
||||
},
|
||||
"dJ0NUsODFhk52W2zZxoPh": {
|
||||
"title": "SSL and TLS Basics",
|
||||
"description": "Single Sign-On (SSO) is an authentication method that allows users to access multiple applications or systems with one set of login credentials. It enables users to log in once and gain access to various connected systems without re-entering credentials. SSO enhances user experience by reducing password fatigue, streamlines access management for IT departments, and can improve security by centralizing authentication controls. It typically uses protocols like SAML, OAuth, or OpenID Connect to securely share authentication information across different domains. While SSO offers convenience and can strengthen security when implemented correctly, it also presents a single point of failure if compromised, making robust security measures for the SSO system critical.\n\nLearn more from the following resources:",
|
||||
"description": "Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols used to provide security in internet communications. These protocols encrypt the data that is transmitted over the web, so anyone who tries to intercept packets will not be able to interpret the data. One difference that is important to know is that SSL is now deprecated due to security flaws, and most modern web browsers no longer support it. But TLS is still secure and widely supported, so preferably use TLS.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "What’s the Difference Between SSL and TLS?",
|
||||
@@ -3223,7 +3223,7 @@
|
||||
},
|
||||
"6ILPXeUDDmmYRiA_gNTSr": {
|
||||
"title": "SSL vs TLS",
|
||||
"description": "Single Sign-On (SSO) is an authentication method that allows users to access multiple applications or systems with one set of login credentials. It enables users to log in once and gain access to various connected systems without re-entering credentials. SSO enhances user experience by reducing password fatigue, streamlines access management for IT departments, and can improve security by centralizing authentication controls. It typically uses protocols like SAML, OAuth, or OpenID Connect to securely share authentication information across different domains. While SSO offers convenience and can strengthen security when implemented correctly, it also presents a single point of failure if compromised, making robust security measures for the SSO system critical.\n\nLearn more from the following resources:",
|
||||
"description": "**SSL (Secure Sockets Layer)** is a cryptographic protocol used to secure communications by encrypting data transmitted between clients and servers. SSL establishes a secure connection through a process known as the handshake, during which the client and server agree on cryptographic algorithms, exchange keys, and authenticate the server with a digital certificate. SSL’s security is considered weaker compared to its successor, TLS, due to vulnerabilities in its older encryption methods and lack of modern cryptographic techniques.\n\n**TLS (Transport Layer Security)** improves upon SSL by using stronger encryption algorithms, more secure key exchange mechanisms, and enhanced certificate validation. Like SSL, TLS begins with a handshake where the client and server agree on a protocol version and cipher suite, exchange keys, and verify certificates. However, TLS incorporates additional features like Perfect Forward Secrecy (PFS) and more secure hashing algorithms, making it significantly more secure than SSL for modern communications.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "What’s the Difference Between SSL and TLS?",
|
||||
|
||||
@@ -267,6 +267,11 @@
|
||||
"url": "https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Learn Git Branching",
|
||||
"url": "https://learngitbranching.js.org/",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "Git Branches Tutorial",
|
||||
"url": "https://www.youtube.com/watch?v=e2IbNHi4uCI",
|
||||
|
||||
@@ -112,8 +112,29 @@
|
||||
},
|
||||
"IduGSdUa2Fi7VFMLKgmsS": {
|
||||
"title": "iOS Architecture",
|
||||
"description": "",
|
||||
"links": []
|
||||
"description": "iOS architecture refers to the design principles and patterns used to build iOS applications. It focuses on how to structure code, manage data, and ensure a smooth user experience. These architectural patterns help developers create maintainable, scalable, and testable applications while following best practices specific to iOS development. Use cases of these architectures may vary according to the requirements of the application. For example, MVC is used for simple apps, while MVVM is considered when the app is large and complex.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Model-View-Controller Pattern in swift (MVC) for Beginners",
|
||||
"url": "https://ahmedaminhassanismail.medium.com/model-view-controller-pattern-in-swift-mvc-for-beginners-35db8d479832",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "MVVM in iOS Swift",
|
||||
"url": "https://medium.com/@zebayasmeen76/mvvm-in-ios-swift-6afb150458fd",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
"title": "MVC Design Pattern Explained with Example",
|
||||
"url": "https://youtu.be/sbYaWJEAYIY?t=2",
|
||||
"type": "video"
|
||||
},
|
||||
{
|
||||
"title": "MVVM Design Pattern Explained with Example",
|
||||
"url": "https://www.youtube.com/watch?v=sLHVxnRS75w",
|
||||
"type": "video"
|
||||
}
|
||||
]
|
||||
},
|
||||
"IdGdLNgJI3WmONEFsMq-d": {
|
||||
"title": "Core OS",
|
||||
|
||||
@@ -12,8 +12,14 @@
|
||||
},
|
||||
"M-EXrTDeAEMz_IkEi-ab4": {
|
||||
"title": "In-memory Data Structure Store",
|
||||
"description": "",
|
||||
"links": []
|
||||
"description": "An in-memory database is a purpose-built database that relies primarily on internal memory for data storage. It enables minimal response times by eliminating the need to access standard disk drives (SSDs). In-memory databases are ideal for applications that require microsecond response times or have large spikes in traffic, such as gaming leaderboards, session stores, and real-time data analytics. The terms main memory database (MMDB), in-memory database system (IMDS), and real-time database system (RTDB) also refer to in-memory databases.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "Amazon MemoryDB",
|
||||
"url": "https://aws.amazon.com/memorydb/",
|
||||
"type": "article"
|
||||
}
|
||||
]
|
||||
},
|
||||
"l2aXyO3STnhbFjvUXPpm2": {
|
||||
"title": "Key-value Database",
|
||||
|
||||
@@ -10,7 +10,7 @@
|
||||
"links": [
|
||||
{
|
||||
"title": "What is Software Architecture in Software Engineering?",
|
||||
"url": "https://webcache.googleusercontent.com/search?q=cache:ya4xvYaEckQJ:https://www.future-processing.com/blog/what-is-software-architecture-in-software-engineering/&cd=1&hl=es-419&ct=clnk&gl=ar",
|
||||
"url": "https://www.future-processing.com/blog/what-is-software-architecture-in-software-engineering/",
|
||||
"type": "article"
|
||||
},
|
||||
{
|
||||
|
||||
@@ -401,7 +401,7 @@
|
||||
},
|
||||
"HD1UGOidp7JGKdW6CEdQ_": {
|
||||
"title": "satisfies keyword",
|
||||
"description": "TypeScript developers are often faced with a dilemma: we want to ensure that some expression matches some type, but also want to keep the most specific type of that expression for inference purposes.\n\nFor example:\n\n // Each property can be a string or an RGB tuple.\n const palette = {\n red: [255, 0, 0],\n green: '#00ff00',\n bleu: [0, 0, 255],\n // ^^^^ sacrebleu - we've made a typo!\n };\n \n // We want to be able to use array methods on 'red'...\n const redComponent = palette.red.at(0);\n \n // or string methods on 'green'...\n const greenNormalized = palette.green.toUpperCase();\n \n\nNotice that we’ve written `bleu`, whereas we probably should have written `blue`. We could try to catch that `bleu` typo by using a type annotation on palette, but we’d lose the information about each property.\n\n type Colors = 'red' | 'green' | 'blue';\n type RGB = [red: number, green: number, blue: number];\n \n const palette: Record<Colors, string | RGB> = {\n red: [255, 0, 0],\n green: '#00ff00',\n bleu: [0, 0, 255],\n // ~~~~ The typo is now correctly detected\n };\n // But we now have an undesirable error here - 'palette.red' \"could\" be a string.\n const redComponent = palette.red.at(0);\n \n\nThe `satisfies` operator lets us validate that the type of an expression matches some type, without changing the resulting type of that expression. As an example, we could use `satisfies` to validate that all the properties of palette are compatible with `string | number[]`:\n\n type Colors = 'red' | 'green' | 'blue';\n type RGB = [red: number, green: number, blue: number];\n \n const palette = {\n red: [255, 0, 0],\n green: '#00ff00',\n bleu: [0, 0, 255],\n // ~~~~ The typo is now caught!\n } satisfies Record<Colors, string | RGB>;\n \n // Both of these methods are still accessible!\n const redComponent = palette.red.at(0);\n const greenNormalized = palette.green.toUpperCase();\n \n\nLearn more from the following resources:",
|
||||
"description": "The `satisfies` operator lets us validate that the type of an expression matches some type, without changing the resulting type of that expression.\n\nLearn more from the following resources:",
|
||||
"links": [
|
||||
{
|
||||
"title": "satisfies Keyword",
|
||||
|
||||
BIN
public/roadmaps/ai-engineer.png
Normal file
BIN
public/roadmaps/ai-engineer.png
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 478 KiB |
@@ -1,7 +1,7 @@
|
||||
import { type APIContext } from 'astro';
|
||||
import { api } from './api.ts';
|
||||
|
||||
export type LeadeboardUserDetails = {
|
||||
export type LeaderboardUserDetails = {
|
||||
id: string;
|
||||
name: string;
|
||||
avatar?: string;
|
||||
@@ -10,15 +10,19 @@ export type LeadeboardUserDetails = {
|
||||
|
||||
export type ListLeaderboardStatsResponse = {
|
||||
streaks: {
|
||||
active: LeadeboardUserDetails[];
|
||||
lifetime: LeadeboardUserDetails[];
|
||||
active: LeaderboardUserDetails[];
|
||||
lifetime: LeaderboardUserDetails[];
|
||||
};
|
||||
projectSubmissions: {
|
||||
currentMonth: LeadeboardUserDetails[];
|
||||
lifetime: LeadeboardUserDetails[];
|
||||
currentMonth: LeaderboardUserDetails[];
|
||||
lifetime: LeaderboardUserDetails[];
|
||||
};
|
||||
githubContributors: {
|
||||
currentMonth: LeadeboardUserDetails[];
|
||||
currentMonth: LeaderboardUserDetails[];
|
||||
};
|
||||
referrals: {
|
||||
currentMonth: LeaderboardUserDetails[];
|
||||
lifetime: LeaderboardUserDetails[];
|
||||
};
|
||||
};
|
||||
|
||||
|
||||
@@ -2,7 +2,7 @@ import { useEffect, useRef, useState } from 'react';
|
||||
import { isLoggedIn } from '../../lib/jwt';
|
||||
import { httpGet } from '../../lib/http';
|
||||
import { useToast } from '../../hooks/use-toast';
|
||||
import { Flame, X, Zap, ZapOff } from 'lucide-react';
|
||||
import { Zap, ZapOff } from 'lucide-react';
|
||||
import { useOutsideClick } from '../../hooks/use-outside-click';
|
||||
import { StreakDay } from './StreakDay';
|
||||
import {
|
||||
@@ -11,15 +11,8 @@ import {
|
||||
} from '../../stores/page.ts';
|
||||
import { useStore } from '@nanostores/react';
|
||||
import { cn } from '../../lib/classname.ts';
|
||||
import { $accountStreak } from '../../stores/streak.ts';
|
||||
|
||||
type StreakResponse = {
|
||||
count: number;
|
||||
longestCount: number;
|
||||
previousCount?: number | null;
|
||||
firstVisitAt: Date;
|
||||
lastVisitAt: Date;
|
||||
};
|
||||
import { $accountStreak, type StreakResponse } from '../../stores/streak.ts';
|
||||
import { InviteFriends } from './InviteFriends.tsx';
|
||||
|
||||
type AccountStreakProps = {};
|
||||
|
||||
@@ -184,11 +177,10 @@ export function AccountStreak(props: AccountStreakProps) {
|
||||
<p className="-mt-[0px] mb-[1.5px] text-center text-xs tracking-wide text-slate-500">
|
||||
Visit every day to keep your streak going!
|
||||
</p>
|
||||
<p className='text-xs mt-1.5 text-center'>
|
||||
<a href="/leaderboard" className="text-purple-400 hover:underline underline-offset-2">
|
||||
See how you compare to others
|
||||
</a>
|
||||
</p>
|
||||
|
||||
<InviteFriends
|
||||
refByUserCount={accountStreak?.refByUserCount || 0}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
92
src/components/AccountStreak/InviteFriends.tsx
Normal file
92
src/components/AccountStreak/InviteFriends.tsx
Normal file
@@ -0,0 +1,92 @@
|
||||
import { Copy, Heart } from 'lucide-react';
|
||||
import { useAuth } from '../../hooks/use-auth';
|
||||
import { useCopyText } from '../../hooks/use-copy-text';
|
||||
import { cn } from '../../lib/classname';
|
||||
import { CheckIcon } from '../ReactIcons/CheckIcon';
|
||||
import {TrophyEmoji} from "../ReactIcons/TrophyEmoji.tsx";
|
||||
|
||||
type InviteFriendsProps = {
|
||||
refByUserCount: number;
|
||||
};
|
||||
|
||||
export function InviteFriends(props: InviteFriendsProps) {
|
||||
const { refByUserCount } = props;
|
||||
|
||||
const user = useAuth();
|
||||
const { copyText, isCopied } = useCopyText();
|
||||
|
||||
const referralLink = new URL(
|
||||
`/signup?rc=${user?.id}`,
|
||||
import.meta.env.DEV ? 'http://localhost:3000' : 'https://roadmap.sh',
|
||||
).toString();
|
||||
|
||||
return (
|
||||
<div className="-mx-4 mt-6 flex flex-col border-t border-dashed border-t-slate-700 px-4 pt-5 text-center text-sm">
|
||||
<p className="font-medium text-slate-500">
|
||||
Invite people to join roadmap.sh
|
||||
</p>
|
||||
<div className="my-4 flex flex-col items-center gap-3.5 rounded-lg bg-slate-900/40 pb-4 pt-5">
|
||||
<div className="flex flex-row items-center justify-center gap-1.5">
|
||||
{Array.from({ length: 10 }).map((_, index) => (
|
||||
<Heart
|
||||
key={index}
|
||||
className={cn(
|
||||
'size-[20px] fill-current',
|
||||
index < refByUserCount ? 'text-yellow-300' : 'text-slate-700',
|
||||
refByUserCount === 0 && index === 0 ? 'text-slate-500' : '',
|
||||
)}
|
||||
/>
|
||||
))}
|
||||
</div>
|
||||
{refByUserCount === 0 && (
|
||||
<p className="text-slate-500">You haven't invited anyone yet.</p>
|
||||
)}
|
||||
|
||||
{refByUserCount > 0 && refByUserCount < 10 && (
|
||||
<p className="text-slate-500">{refByUserCount} of 10 users joined</p>
|
||||
)}
|
||||
|
||||
{refByUserCount >= 10 && (
|
||||
<p className="text-slate-500">
|
||||
🎉 You've invited {refByUserCount} users
|
||||
</p>
|
||||
)}
|
||||
</div>
|
||||
<p className="leading-normal text-slate-500">
|
||||
Share{' '}
|
||||
<button
|
||||
onClick={() => {
|
||||
copyText(referralLink);
|
||||
}}
|
||||
className={cn(
|
||||
'rounded-md bg-slate-700 px-1.5 py-[0.5px] text-slate-300 hover:bg-slate-600',
|
||||
{
|
||||
'bg-green-500 text-black hover:bg-green-500': isCopied,
|
||||
},
|
||||
)}
|
||||
>
|
||||
{!isCopied ? 'this link' : 'the copied link'}{' '}
|
||||
{!isCopied && (
|
||||
<Copy
|
||||
className="relative -top-[1.25px] inline-block size-3"
|
||||
strokeWidth={3}
|
||||
/>
|
||||
)}
|
||||
{isCopied && (
|
||||
<CheckIcon additionalClasses="relative -top-[1.25px] inline-block size-3" />
|
||||
)}
|
||||
</button>{' '}
|
||||
with anyone you think would benefit from roadmap.sh
|
||||
</p>
|
||||
|
||||
<p className="mt-6 text-center text-xs">
|
||||
<a
|
||||
href="/leaderboard"
|
||||
className="text-purple-400 underline-offset-2 hover:underline"
|
||||
>
|
||||
See how you rank on the leaderboard
|
||||
</a>
|
||||
</p>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
@@ -1,5 +1,7 @@
|
||||
import { type FormEvent, useState } from 'react';
|
||||
import { type FormEvent, useEffect, useState } from 'react';
|
||||
import { httpPost } from '../../lib/http';
|
||||
import { deleteUrlParam, getUrlParams } from '../../lib/browser';
|
||||
import { isLoggedIn, setAIReferralCode } from '../../lib/jwt';
|
||||
|
||||
type EmailSignupFormProps = {
|
||||
isDisabled?: boolean;
|
||||
@@ -9,6 +11,9 @@ type EmailSignupFormProps = {
|
||||
export function EmailSignupForm(props: EmailSignupFormProps) {
|
||||
const { isDisabled, setIsDisabled } = props;
|
||||
|
||||
const { rc: referralCode } = getUrlParams() as {
|
||||
rc?: string;
|
||||
};
|
||||
const [email, setEmail] = useState('');
|
||||
const [password, setPassword] = useState('');
|
||||
const [name, setName] = useState('');
|
||||
@@ -47,6 +52,16 @@ export function EmailSignupForm(props: EmailSignupFormProps) {
|
||||
)}`;
|
||||
};
|
||||
|
||||
useEffect(() => {
|
||||
if (!referralCode || isLoggedIn()) {
|
||||
deleteUrlParam('rc');
|
||||
return;
|
||||
}
|
||||
|
||||
setAIReferralCode(referralCode);
|
||||
deleteUrlParam('rc');
|
||||
}, []);
|
||||
|
||||
return (
|
||||
<form className="flex w-full flex-col gap-2" onSubmit={onSubmit}>
|
||||
<label htmlFor="name" className="sr-only">
|
||||
@@ -72,7 +87,7 @@ export function EmailSignupForm(props: EmailSignupFormProps) {
|
||||
type="email"
|
||||
autoComplete="email"
|
||||
required
|
||||
className="block w-full rounded-lg border border-gray-300 px-3 py-2 outline-none placeholder:text-gray-400 focus:ring-2 focus:ring-black focus:ring-offset-1"
|
||||
className="block w-full rounded-lg border border-gray-300 px-3 py-2 outline-none placeholder:text-gray-400 focus:ring-2 focus:ring-black focus:ring-offset-1"
|
||||
placeholder="Email Address"
|
||||
value={email}
|
||||
onInput={(e) => setEmail(String((e.target as any).value))}
|
||||
|
||||
@@ -1,10 +1,10 @@
|
||||
import { type ReactNode, useState } from 'react';
|
||||
import type {
|
||||
LeadeboardUserDetails,
|
||||
LeaderboardUserDetails,
|
||||
ListLeaderboardStatsResponse,
|
||||
} from '../../api/leaderboard';
|
||||
import { cn } from '../../lib/classname';
|
||||
import { FolderKanban, GitPullRequest, Trophy, Zap } from 'lucide-react';
|
||||
import { FolderKanban, GitPullRequest, Users, Users2, Zap } from 'lucide-react';
|
||||
import { TrophyEmoji } from '../ReactIcons/TrophyEmoji';
|
||||
import { SecondPlaceMedalEmoji } from '../ReactIcons/SecondPlaceMedalEmoji';
|
||||
import { ThirdPlaceMedalEmoji } from '../ReactIcons/ThirdPlaceMedalEmoji';
|
||||
@@ -17,74 +17,77 @@ export function LeaderboardPage(props: LeaderboardPageProps) {
|
||||
const { stats } = props;
|
||||
|
||||
return (
|
||||
<div className="min-h-screen bg-gray-50">
|
||||
<div className="container py-5 sm:py-10">
|
||||
<div className="mb-8 text-center">
|
||||
<div className="flex flex-col items-start sm:items-center justify-center">
|
||||
<img
|
||||
src={'/images/gifs/star.gif'}
|
||||
alt="party-popper"
|
||||
className="mb-4 mt-0 sm:mt-3 h-14 w-14 hidden sm:block"
|
||||
/>
|
||||
<div className="mb-0 sm:mb-4 flex flex-col items-start sm:items-center justify-start sm:justify-center">
|
||||
<h2 className="mb-1.5 sm:mb-2 text-2xl font-semibold sm:text-4xl">
|
||||
Leaderboard
|
||||
</h2>
|
||||
<p className="max-w-2xl text-left sm:text-center text-balance text-sm text-gray-500 sm:text-base">
|
||||
Top users based on their activity on roadmap.sh
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
<div className="min-h-screen bg-gray-100">
|
||||
<div className="container pb-5 sm:pb-8">
|
||||
<h1 className="my-5 flex items-center text-lg font-medium text-black sm:mb-4 sm:mt-8">
|
||||
<Users2 className="mr-2 size-5 text-black" />
|
||||
Leaderboard
|
||||
</h1>
|
||||
|
||||
<div className="mt-5 sm:mt-8 grid gap-2 md:grid-cols-2">
|
||||
<LeaderboardLane
|
||||
title="Longest Visit Streak"
|
||||
tabs={[
|
||||
{
|
||||
title: 'Active',
|
||||
users: stats.streaks?.active || [],
|
||||
emptyIcon: <Zap className="size-16 text-gray-300" />,
|
||||
emptyText: 'No users with streaks yet',
|
||||
},
|
||||
{
|
||||
title: 'Lifetime',
|
||||
users: stats.streaks?.lifetime || [],
|
||||
emptyIcon: <Zap className="size-16 text-gray-300" />,
|
||||
emptyText: 'No users with streaks yet',
|
||||
},
|
||||
]}
|
||||
/>
|
||||
<LeaderboardLane
|
||||
title="Projects Completed"
|
||||
tabs={[
|
||||
{
|
||||
title: 'This Month',
|
||||
users: stats.projectSubmissions.currentMonth,
|
||||
emptyIcon: <FolderKanban className="size-16 text-gray-300" />,
|
||||
emptyText: 'No projects submitted this month',
|
||||
},
|
||||
{
|
||||
title: 'Lifetime',
|
||||
users: stats.projectSubmissions.lifetime,
|
||||
emptyIcon: <FolderKanban className="size-16 text-gray-300" />,
|
||||
emptyText: 'No projects submitted yet',
|
||||
},
|
||||
]}
|
||||
/>
|
||||
<LeaderboardLane
|
||||
title="Top Contributors"
|
||||
tabs={[
|
||||
{
|
||||
title: 'This Month',
|
||||
users: stats.githubContributors.currentMonth,
|
||||
emptyIcon: (
|
||||
<GitPullRequest className="size-16 text-gray-300" />
|
||||
),
|
||||
emptyText: 'No contributors this month',
|
||||
},
|
||||
]}
|
||||
/>
|
||||
</div>
|
||||
<div className="grid gap-2 sm:gap-3 md:grid-cols-2">
|
||||
<LeaderboardLane
|
||||
title="Longest Visit Streak"
|
||||
tabs={[
|
||||
{
|
||||
title: 'Active',
|
||||
users: stats.streaks?.active || [],
|
||||
emptyIcon: <Zap className="size-16 text-gray-300" />,
|
||||
emptyText: 'No users with streaks yet',
|
||||
},
|
||||
{
|
||||
title: 'Lifetime',
|
||||
users: stats.streaks?.lifetime || [],
|
||||
emptyIcon: <Zap className="size-16 text-gray-300" />,
|
||||
emptyText: 'No users with streaks yet',
|
||||
},
|
||||
]}
|
||||
/>
|
||||
<LeaderboardLane
|
||||
title="Projects Completed"
|
||||
tabs={[
|
||||
{
|
||||
title: 'This Month',
|
||||
users: stats.projectSubmissions.currentMonth,
|
||||
emptyIcon: <FolderKanban className="size-16 text-gray-300" />,
|
||||
emptyText: 'No projects submitted this month',
|
||||
},
|
||||
{
|
||||
title: 'Lifetime',
|
||||
users: stats.projectSubmissions.lifetime,
|
||||
emptyIcon: <FolderKanban className="size-16 text-gray-300" />,
|
||||
emptyText: 'No projects submitted yet',
|
||||
},
|
||||
]}
|
||||
/>
|
||||
<LeaderboardLane
|
||||
title="Most Referrals"
|
||||
tabs={[
|
||||
{
|
||||
title: 'This Month',
|
||||
users: stats.referrals.currentMonth,
|
||||
emptyIcon: <Users className="size-16 text-gray-300" />,
|
||||
emptyText: 'No referrals this month',
|
||||
},
|
||||
{
|
||||
title: 'Lifetime',
|
||||
users: stats.referrals.lifetime,
|
||||
emptyIcon: <Users className="size-16 text-gray-300" />,
|
||||
emptyText: 'No referrals yet',
|
||||
},
|
||||
]}
|
||||
/>
|
||||
<LeaderboardLane
|
||||
title="Top Contributors"
|
||||
subtitle="Past 2 weeks"
|
||||
tabs={[
|
||||
{
|
||||
title: 'This Month',
|
||||
users: stats.githubContributors.currentMonth,
|
||||
emptyIcon: <GitPullRequest className="size-16 text-gray-300" />,
|
||||
emptyText: 'No contributors this month',
|
||||
},
|
||||
]}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
@@ -93,27 +96,35 @@ export function LeaderboardPage(props: LeaderboardPageProps) {
|
||||
|
||||
type LeaderboardLaneProps = {
|
||||
title: string;
|
||||
subtitle?: string;
|
||||
tabs: {
|
||||
title: string;
|
||||
users: LeadeboardUserDetails[];
|
||||
users: LeaderboardUserDetails[];
|
||||
emptyIcon?: ReactNode;
|
||||
emptyText?: string;
|
||||
}[];
|
||||
};
|
||||
|
||||
function LeaderboardLane(props: LeaderboardLaneProps) {
|
||||
const { title, tabs } = props;
|
||||
const { title, subtitle, tabs } = props;
|
||||
|
||||
const [activeTab, setActiveTab] = useState(tabs[0]);
|
||||
const { users: usersToShow, emptyIcon, emptyText } = activeTab;
|
||||
|
||||
return (
|
||||
<div className="flex flex-col overflow-hidden rounded-xl border bg-white min-h-[450px] ">
|
||||
<div className="flex min-h-[450px] flex-col overflow-hidden rounded-xl border bg-white shadow-sm">
|
||||
<div className="mb-3 flex items-center justify-between gap-2 px-3 py-3">
|
||||
<h3 className="text-base font-medium">{title}</h3>
|
||||
<h3 className="text-sm font-medium">
|
||||
{title}{' '}
|
||||
{subtitle && (
|
||||
<span className="ml-1 text-sm font-normal text-gray-400">
|
||||
{subtitle}
|
||||
</span>
|
||||
)}
|
||||
</h3>
|
||||
|
||||
{tabs.length > 1 && (
|
||||
<div className="flex items-center gap-2">
|
||||
<div className="flex items-center gap-1">
|
||||
{tabs.map((tab) => {
|
||||
const isActive = tab === activeTab;
|
||||
|
||||
@@ -122,10 +133,10 @@ function LeaderboardLane(props: LeaderboardLaneProps) {
|
||||
key={tab.title}
|
||||
onClick={() => setActiveTab(tab)}
|
||||
className={cn(
|
||||
'text-sm font-medium underline-offset-2 transition-colors',
|
||||
'text-xs transition-colors py-0.5 px-2 rounded-full',
|
||||
{
|
||||
'text-black underline': isActive,
|
||||
'text-gray-400 hover:text-gray-600': !isActive,
|
||||
'text-white bg-black': isActive,
|
||||
'hover:bg-gray-200': !isActive,
|
||||
},
|
||||
)}
|
||||
>
|
||||
@@ -181,7 +192,7 @@ function LeaderboardLane(props: LeaderboardLaneProps) {
|
||||
/>
|
||||
{isGitHubUser ? (
|
||||
<a
|
||||
href={`https://github.com/${user.name}`}
|
||||
href={`https://github.com/kamranahmedse/developer-roadmap/pulls?q=is%3Apr+is%3Aclosed+author%3A${user.name}`}
|
||||
target="_blank"
|
||||
className="truncate font-medium underline underline-offset-2"
|
||||
>
|
||||
@@ -201,17 +212,7 @@ function LeaderboardLane(props: LeaderboardLaneProps) {
|
||||
)}
|
||||
</div>
|
||||
|
||||
{isGitHubUser ? (
|
||||
<a
|
||||
target={'_blank'}
|
||||
href={`https://github.com/kamranahmedse/developer-roadmap/pulls/${user.name}`}
|
||||
className="text-sm text-gray-500"
|
||||
>
|
||||
{user.count}
|
||||
</a>
|
||||
) : (
|
||||
<span className="text-sm text-gray-500">{user.count}</span>
|
||||
)}
|
||||
<span className="text-sm text-gray-500">{user.count}</span>
|
||||
</li>
|
||||
);
|
||||
})}
|
||||
|
||||
@@ -5,10 +5,13 @@ import type {
|
||||
} from '../../lib/project.ts';
|
||||
import { Users } from 'lucide-react';
|
||||
import { formatCommaNumber } from '../../lib/number.ts';
|
||||
import { cn } from '../../lib/classname.ts';
|
||||
import { isLoggedIn } from '../../lib/jwt.ts';
|
||||
|
||||
type ProjectCardProps = {
|
||||
project: ProjectFileType;
|
||||
userCount?: number;
|
||||
status?: 'completed' | 'started' | 'none';
|
||||
};
|
||||
|
||||
const badgeVariants: Record<ProjectDifficultyType, string> = {
|
||||
@@ -18,10 +21,12 @@ const badgeVariants: Record<ProjectDifficultyType, string> = {
|
||||
};
|
||||
|
||||
export function ProjectCard(props: ProjectCardProps) {
|
||||
const { project, userCount = 0 } = props;
|
||||
|
||||
const { project, userCount = 0, status } = props;
|
||||
const { frontmatter, id } = project;
|
||||
|
||||
const isLoadingStatus = status === undefined;
|
||||
const userStartedCount = status !== 'none' && userCount === 0 ? userCount + 1 : userCount;
|
||||
|
||||
return (
|
||||
<a
|
||||
href={`/projects/${id}`}
|
||||
@@ -34,16 +39,45 @@ export function ProjectCard(props: ProjectCardProps) {
|
||||
/>
|
||||
<Badge variant={'grey'} text={frontmatter.nature} />
|
||||
</span>
|
||||
<span className="my-3 flex flex-col">
|
||||
<span className="my-3 flex min-h-[100px] flex-col">
|
||||
<span className="mb-1 font-medium">{frontmatter.title}</span>
|
||||
<span className="text-sm text-gray-500">{frontmatter.description}</span>
|
||||
</span>
|
||||
<span className="flex items-center gap-2 text-xs text-gray-400">
|
||||
<Users className="inline-block size-3.5" />
|
||||
{userCount > 0 ? (
|
||||
<>{formatCommaNumber(userCount)} Started</>
|
||||
<span className="flex min-h-[22px] items-center justify-between gap-2 text-xs text-gray-400">
|
||||
{isLoadingStatus ? (
|
||||
<>
|
||||
<span className="h-5 w-24 animate-pulse rounded bg-gray-200" />{' '}
|
||||
<span className="h-5 w-20 animate-pulse rounded bg-gray-200" />{' '}
|
||||
</>
|
||||
) : (
|
||||
<>Be the first to solve!</>
|
||||
<>
|
||||
<span className="flex items-center gap-1.5">
|
||||
<Users className="size-3.5" />
|
||||
{userStartedCount > 0 ? (
|
||||
<>{formatCommaNumber(userStartedCount)} Started</>
|
||||
) : (
|
||||
<>Be the first to solve!</>
|
||||
)}
|
||||
</span>
|
||||
|
||||
{status !== 'none' && (
|
||||
<span
|
||||
className={cn(
|
||||
'flex items-center gap-1.5 rounded-full border border-current px-2 py-0.5 capitalize',
|
||||
status === 'completed' && 'text-green-500',
|
||||
status === 'started' && 'text-yellow-500',
|
||||
)}
|
||||
>
|
||||
<span
|
||||
className={cn('inline-block h-2 w-2 rounded-full', {
|
||||
'bg-green-500': status === 'completed',
|
||||
'bg-yellow-500': status === 'started',
|
||||
})}
|
||||
/>
|
||||
{status}
|
||||
</span>
|
||||
)}
|
||||
</>
|
||||
)}
|
||||
</span>
|
||||
</a>
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
import { ProjectCard } from './ProjectCard.tsx';
|
||||
import { HeartHandshake, Trash2 } from 'lucide-react';
|
||||
import { cn } from '../../lib/classname.ts';
|
||||
import { useMemo, useState } from 'react';
|
||||
import { useEffect, useMemo, useState } from 'react';
|
||||
import {
|
||||
projectDifficulties,
|
||||
type ProjectDifficultyType,
|
||||
@@ -12,6 +12,8 @@ import {
|
||||
getUrlParams,
|
||||
setUrlParams,
|
||||
} from '../../lib/browser.ts';
|
||||
import { httpPost } from '../../lib/http.ts';
|
||||
import { isLoggedIn } from '../../lib/jwt.ts';
|
||||
|
||||
type DifficultyButtonProps = {
|
||||
difficulty: ProjectDifficultyType;
|
||||
@@ -38,6 +40,11 @@ function DifficultyButton(props: DifficultyButtonProps) {
|
||||
);
|
||||
}
|
||||
|
||||
export type ListProjectStatusesResponse = Record<
|
||||
string,
|
||||
'completed' | 'started'
|
||||
>;
|
||||
|
||||
type ProjectsListProps = {
|
||||
projects: ProjectFileType[];
|
||||
userCounts: Record<string, number>;
|
||||
@@ -50,6 +57,30 @@ export function ProjectsList(props: ProjectsListProps) {
|
||||
const [difficulty, setDifficulty] = useState<
|
||||
ProjectDifficultyType | undefined
|
||||
>(urlDifficulty);
|
||||
const [projectStatuses, setProjectStatuses] =
|
||||
useState<ListProjectStatusesResponse>();
|
||||
|
||||
const loadProjectStatuses = async () => {
|
||||
if (!isLoggedIn()) {
|
||||
setProjectStatuses({});
|
||||
return;
|
||||
}
|
||||
|
||||
const projectIds = projects.map((project) => project.id);
|
||||
const { response, error } = await httpPost(
|
||||
`${import.meta.env.PUBLIC_API_URL}/v1-list-project-statuses`,
|
||||
{
|
||||
projectIds,
|
||||
},
|
||||
);
|
||||
|
||||
if (error || !response) {
|
||||
console.error(error);
|
||||
return;
|
||||
}
|
||||
|
||||
setProjectStatuses(response);
|
||||
};
|
||||
|
||||
const projectsByDifficulty: Map<ProjectDifficultyType, ProjectFileType[]> =
|
||||
useMemo(() => {
|
||||
@@ -72,12 +103,17 @@ export function ProjectsList(props: ProjectsListProps) {
|
||||
? projectsByDifficulty.get(difficulty) || []
|
||||
: projects;
|
||||
|
||||
useEffect(() => {
|
||||
loadProjectStatuses().finally();
|
||||
}, []);
|
||||
|
||||
return (
|
||||
<div className="flex flex-col">
|
||||
<div className="my-2.5 flex items-center justify-between">
|
||||
<div className="flex flex-wrap gap-1">
|
||||
{projectDifficulties.map((projectDifficulty) => (
|
||||
<DifficultyButton
|
||||
key={projectDifficulty}
|
||||
onClick={() => {
|
||||
setDifficulty(projectDifficulty);
|
||||
setUrlParams({ difficulty: projectDifficulty });
|
||||
@@ -130,7 +166,18 @@ export function ProjectsList(props: ProjectsListProps) {
|
||||
})
|
||||
.map((matchingProject) => {
|
||||
const count = userCounts[matchingProject?.id] || 0;
|
||||
return <ProjectCard project={matchingProject} userCount={count} />;
|
||||
return (
|
||||
<ProjectCard
|
||||
key={matchingProject.id}
|
||||
project={matchingProject}
|
||||
userCount={count}
|
||||
status={
|
||||
projectStatuses
|
||||
? (projectStatuses?.[matchingProject.id] || 'none')
|
||||
: undefined
|
||||
}
|
||||
/>
|
||||
);
|
||||
})}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
@@ -190,6 +190,7 @@ export function ProjectsPage(props: ProjectsPageProps) {
|
||||
key={project.id}
|
||||
project={project}
|
||||
userCount={userCounts[project.id] || 0}
|
||||
status={'none'}
|
||||
/>
|
||||
))}
|
||||
</div>
|
||||
|
||||
@@ -357,6 +357,11 @@ const groups: GroupType[] = [
|
||||
link: '/ai-data-scientist',
|
||||
type: 'role',
|
||||
},
|
||||
{
|
||||
title: 'AI Engineer',
|
||||
link: '/ai-engineer',
|
||||
type: 'role',
|
||||
},
|
||||
{
|
||||
title: 'Data Analyst',
|
||||
link: '/data-analyst',
|
||||
|
||||
@@ -175,7 +175,6 @@ export function TopicDetail(props: TopicDetailProps) {
|
||||
setError('');
|
||||
setIsLoading(true);
|
||||
setIsActive(true);
|
||||
sponsorHidden.set(true);
|
||||
|
||||
setTopicId(topicId);
|
||||
setResourceType(resourceType);
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
---
|
||||
jsonUrl: '/jsons/roadmaps/ai-data-scientist.json'
|
||||
pdfUrl: '/pdfs/roadmaps/ai-data-scientist.pdf'
|
||||
order: 4
|
||||
order: 5
|
||||
renderer: 'editor'
|
||||
briefTitle: 'AI and Data Scientist'
|
||||
briefDescription: 'Step by step guide to becoming an AI and Data Scientist in 2024'
|
||||
|
||||
1
src/data/roadmaps/ai-engineer/ai-engineer.json
Normal file
1
src/data/roadmaps/ai-engineer/ai-engineer.json
Normal file
File diff suppressed because one or more lines are too long
50
src/data/roadmaps/ai-engineer/ai-engineer.md
Normal file
50
src/data/roadmaps/ai-engineer/ai-engineer.md
Normal file
@@ -0,0 +1,50 @@
|
||||
---
|
||||
jsonUrl: '/jsons/roadmaps/ai-engineer.json'
|
||||
pdfUrl: '/pdfs/roadmaps/ai-engineer.pdf'
|
||||
order: 4
|
||||
renderer: 'editor'
|
||||
briefTitle: 'AI Engineer'
|
||||
briefDescription: 'Step by step guide to becoming an AI Engineer in 2024'
|
||||
title: 'AI Engineer Roadmap'
|
||||
description: 'Step by step guide to becoming an AI Engineer in 2024'
|
||||
hasTopics: true
|
||||
isNew: true
|
||||
dimensions:
|
||||
width: 968
|
||||
height: 3200
|
||||
question:
|
||||
title: 'What is an AI Engineer?'
|
||||
description: |
|
||||
An AI Engineer uses pre-trained models and existing AI tools to improve user experiences. They focus on applying AI in practical ways, without building models from scratch. This is different from AI Researchers and ML Engineers, who focus more on creating new models or developing AI theory.
|
||||
schema:
|
||||
headline: 'AI Engineer Roadmap'
|
||||
description: 'Learn how to become an AI Engineer with this interactive step by step guide in 2023. We also have resources and short descriptions attached to the roadmap items so you can get everything you want to learn in one place.'
|
||||
imageUrl: 'https://roadmap.sh/roadmaps/ai-engineer.png'
|
||||
datePublished: '2024-10-03'
|
||||
dateModified: '2024-10-03'
|
||||
seo:
|
||||
title: 'AI Engineer Roadmap'
|
||||
description: 'Learn to become an AI Engineer using this roadmap. Community driven, articles, resources, guides, interview questions, quizzes for modern backend development.'
|
||||
keywords:
|
||||
- 'ai engineer roadmap 2024'
|
||||
- 'guide to becoming an ai engineer'
|
||||
- 'ai engineer roadmap'
|
||||
- 'ai engineer skills'
|
||||
- 'become an ai engineer'
|
||||
- 'ai engineer career path'
|
||||
- 'skills for ai engineer'
|
||||
- 'ai engineer quiz'
|
||||
- 'ai engineer interview questions'
|
||||
relatedRoadmaps:
|
||||
- 'ai-data-scientist'
|
||||
- 'prompt-engineering'
|
||||
- 'data-analyst'
|
||||
- 'python'
|
||||
sitemap:
|
||||
priority: 1
|
||||
changefreq: 'monthly'
|
||||
tags:
|
||||
- 'roadmap'
|
||||
- 'main-sitemap'
|
||||
- 'role-roadmap'
|
||||
---
|
||||
@@ -0,0 +1,7 @@
|
||||
# Adding end-user IDs in prompts
|
||||
|
||||
Sending end-user IDs in your requests can be a useful tool to help OpenAI monitor and detect abuse. This allows OpenAI to provide you with more actionable feedback in the event that they may detect any policy violations in applications.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@OpenAI Documentation](https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids)
|
||||
@@ -0,0 +1,8 @@
|
||||
# Agents Usecases
|
||||
|
||||
AI Agents allow you to automate complex workflows that involve multiple steps and decisions.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@article@What are AI Agents?](https://aws.amazon.com/what-is/ai-agents/)
|
||||
- [@video@What are AI Agents?](https://www.youtube.com/watch?v=F8NKVhkZZWI)
|
||||
@@ -0,0 +1,8 @@
|
||||
# AI Agents
|
||||
|
||||
AI Agents are a type of LLM that can be used to automate complex workflows that involve multiple steps and decisions.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@article@What are AI Agents?](https://aws.amazon.com/what-is/ai-agents/)
|
||||
- [@video@What are AI Agents?](https://www.youtube.com/watch?v=F8NKVhkZZWI)
|
||||
@@ -0,0 +1,8 @@
|
||||
# AI Agents
|
||||
|
||||
AI Agents are a type of LLM that can be used to automate complex workflows that involve multiple steps and decisions.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@article@What are AI Agents?](https://aws.amazon.com/what-is/ai-agents/)
|
||||
- [@video@What are AI Agents?](https://www.youtube.com/watch?v=F8NKVhkZZWI)
|
||||
@@ -0,0 +1,8 @@
|
||||
# AI Code Editors
|
||||
|
||||
AI code editors have the first-class support for AI in the editor. You can use AI to generate code, fix bugs, chat with your code, and more.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@website@Cursor](https://cursor.com/)
|
||||
- [@website@Zed AI](https://zed.dev/ai)
|
||||
@@ -0,0 +1,3 @@
|
||||
# AI Engineer vs ML Engineer
|
||||
|
||||
AI Engineer differs from an AI Researcher or ML Engineer. AI Engineers focus on leveraging pre-trained models and existing AI technologies to enhance user experiences without the need to train models from scratch.
|
||||
@@ -0,0 +1,3 @@
|
||||
# AI Safety and Ethics
|
||||
|
||||
Learn about the principles and guidelines for building safe and ethical AI systems.
|
||||
@@ -0,0 +1,3 @@
|
||||
# AI vs AGI
|
||||
|
||||
AI (Artificial Intelligence) refers to systems designed to perform specific tasks, like image recognition or language translation, often excelling in those narrow areas. In contrast, AGI (Artificial General Intelligence) would be a system capable of understanding, learning, and applying intelligence across a wide range of tasks, much like a human, and could adapt to new situations without specific programming.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Anomaly Detection
|
||||
|
||||
Embeddings transform complex data (like text or behavior) into numerical vectors, capturing relationships between data points. These vectors are stored in a vector database, which allows for efficient similarity searches. Anomalies can be detected by measuring the distance between a data point's vector and its nearest neighbors—if a point is significantly distant, it's likely anomalous. This approach is scalable, adaptable to various data types, and effective for tasks like fraud detection, predictive maintenance, and cybersecurity.
|
||||
@@ -0,0 +1,7 @@
|
||||
# Anthropic's Claude
|
||||
|
||||
Claude is a family of large language models developed by Anthropic. Claude 3.5 Sonnet is the latest model (at the time of this writing) in the series, known for its advanced reasoning and multi-modality capabilities.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Clause Website](https://claude.ai/)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Audio Processing
|
||||
|
||||
Using Multimodal AI, audio data can be processed with other types of data, such as text, images, or video, to enhance understanding and analysis. For example, it can synchronize audio with corresponding visual inputs, like lip movements in video, to improve speech recognition or emotion detection. This fusion of modalities enables more accurate transcription, better sentiment analysis, and enriched context understanding in applications such as virtual assistants, multimedia content analysis, and real-time communication systems.
|
||||
@@ -0,0 +1,7 @@
|
||||
# AWS Sagemaker
|
||||
|
||||
AWS Sagemaker is a fully managed platform that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. Sagemaker takes care of the underlying infrastructure, allowing developers to focus on building and improving their models.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@AWS Website](https://aws.amazon.com/sagemaker/)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Azure AI
|
||||
|
||||
Azure AI is a comprehensive set of AI services and tools provided by Microsoft. It includes a range of capabilities such as natural language processing, computer vision, speech recognition, and more. Azure AI is designed to help developers and organizations build, deploy, and scale AI solutions quickly and easily.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Azure Website](https://azure.microsoft.com/en-us/products/ai-services/)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Benefits of Pre-trained Models
|
||||
|
||||
LLM models are not only difficult to train, but they are also expensive. Pre-trained models are a cost-effective solution for developers and organizations looking to leverage the power of AI without the need to train models from scratch.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@article@Why you should use Pre-trained Models](https://cohere.com/blog/pre-trained-vs-in-house-nlp-models)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Bias and Fareness
|
||||
|
||||
Bias and fairness in AI arise when models produce skewed or unequal outcomes for different groups, often reflecting imbalances in the training data. This can lead to discriminatory effects in critical areas like hiring, lending, and law enforcement. Addressing these concerns involves ensuring diverse and representative data, implementing fairness metrics, and ongoing monitoring to prevent biased outcomes. Techniques like debiasing algorithms and transparency in model development help mitigate bias and promote fairness in AI systems.
|
||||
@@ -0,0 +1,7 @@
|
||||
# Capabilities / Context Length
|
||||
|
||||
OpenAI's capabilities include processing complex tasks like language understanding, code generation, and problem-solving. However, context length limits how much information the model can retain and reference during a session, affecting long conversations or documents. Advances aim to increase this context window for more coherent and detailed outputs over extended interactions.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@OpenAI Website](https://platform.openai.com/docs/guides/fine-tuning/token-limits)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Chat Completions API
|
||||
|
||||
The Chat Completions API allows developers to create conversational agents by sending user inputs (prompts) and receiving model-generated responses. It supports multiple-turn dialogues, maintaining context across exchanges to deliver relevant responses. This API is often used for chatbots, customer support, and interactive applications where maintaining conversation flow is essential.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@OpenAI Website](https://platform.openai.com/docs/api-reference/chat/completions)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Chroma
|
||||
|
||||
Chroma is a vector database designed to efficiently store, index, and query high-dimensional embeddings. It’s optimized for AI applications like semantic search, recommendation systems, and anomaly detection by allowing fast retrieval of similar vectors based on distance metrics (e.g., cosine similarity). Chroma enables scalable and real-time processing, making it a popular choice for projects involving embeddings from text, images, or other data types.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Chroma Website](https://docs.trychroma.com/)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Chunking
|
||||
|
||||
In Retrieval-Augmented Generation (RAG), **chunking** refers to breaking large documents or data into smaller, manageable pieces (chunks) to improve retrieval and generation efficiency. This process helps the system retrieve relevant information more accurately by indexing these chunks in a vector database. During a query, the model retrieves relevant chunks instead of entire documents, which enhances the precision of the generated responses and allows better handling of long-form content within the context length limits.
|
||||
@@ -0,0 +1,10 @@
|
||||
# Code Completion Tools
|
||||
|
||||
AI Code Completion Tools are software tools that use AI models to assist with code generation and editing. These tools help developers write code more quickly and efficiently by providing suggestions, completing code snippets, and suggesting improvements. AI Code Completion Tools can also be used to generate documentation, comments, and other code-related content.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@website@GitHub Copilot](https://copilot.github.com/)
|
||||
- [@website@Codeium](https://codeium.com/)
|
||||
- [@website@Supermaven](https://supermaven.com/)
|
||||
- [@website@TabNine](https://www.tabnine.com/)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Cohere
|
||||
|
||||
Cohere is an AI platform that provides natural language processing (NLP) models and tools, enabling developers to integrate powerful language understanding capabilities into their applications. It offers features like text generation, semantic search, classification, and embeddings. Cohere focuses on scalability and ease of use, making it popular for tasks such as content creation, customer support automation, and building search engines with advanced semantic understanding. It also provides a user-friendly API for custom NLP applications.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@website@Cohere](https://cohere.com/)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Conducting Adversarial Testing
|
||||
|
||||
Adversarial testing involves creating malicious inputs to test the robustness of AI models. This includes testing for prompt injection, evasion, and other adversarial attacks.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Constraining Outputs and Inputs
|
||||
|
||||
Constraining outputs and inputs is important for controlling the behavior of AI models. This includes techniques like output filtering, input validation, and rate limiting.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Cut-off Dates / Knowledge
|
||||
|
||||
OpenAI models have a knowledge cutoff date, meaning they only have access to information available up until a specific time. For example, my knowledge is up to date until September 2023. As a result, I may not be aware of recent developments, events, or newly released technology. Additionally, these models don’t have real-time internet access, so they can't retrieve or update information beyond their training data. This can limit the ability to provide the latest details or react to rapidly changing topics.
|
||||
@@ -0,0 +1,3 @@
|
||||
# DALL-E API
|
||||
|
||||
The DALL-E API allows developers to integrate OpenAI's image generation model into their applications. Using text-based prompts, the API generates unique images that match the descriptions provided by users. This makes it useful for tasks like creative design, marketing, product prototyping, and content creation. The API is highly customizable, enabling developers to adjust parameters such as image size and style. DALL-E excels at creating visually rich content from textual descriptions, expanding the possibilities for AI-driven creative workflows.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Data Classification
|
||||
|
||||
Embeddings are used in data classification by converting data (like text or images) into numerical vectors that capture underlying patterns and relationships. These vector representations make it easier for machine learning models to distinguish between different classes based on the similarity or distance between vectors in high-dimensional space. By training a classifier on these embeddings, tasks like sentiment analysis, document categorization, and image classification can be performed more accurately and efficiently. Embeddings simplify complex data and enhance classification by highlighting key features relevant to each class.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Development Tools
|
||||
|
||||
A lot of developer related tools have popped up since the AI revolution. It's being used in the coding editors, in the terminal, in the CI/CD pipelines, and more.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Embedding
|
||||
|
||||
Embedding refers to the conversion or mapping of discrete objects such as words, phrases, or even entire sentences into vectors of real numbers. It's an essential part of data preprocessing where high-dimensional data is transformed into a lower-dimensional equivalent. This dimensional reduction helps to preserve the semantic relationships between objects. In AI engineering, embedding techniques are often used in language-orientated tasks like sentiment analysis, text classification, and Natural Language Processing (NLP) to provide an understanding of the vast linguistic inputs AI models receive.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Embedding
|
||||
|
||||
Embedding refers to the conversion or mapping of discrete objects such as words, phrases, or even entire sentences into vectors of real numbers. It's an essential part of data preprocessing where high-dimensional data is transformed into a lower-dimensional equivalent. This dimensional reduction helps to preserve the semantic relationships between objects. In AI engineering, embedding techniques are often used in language-orientated tasks like sentiment analysis, text classification, and Natural Language Processing (NLP) to provide an understanding of the vast linguistic inputs AI models receive.
|
||||
@@ -0,0 +1,3 @@
|
||||
# FAISS
|
||||
|
||||
FAISS stands for Facebook AI Similarity Search, it is a database management library developed by Facebook's AI team. Primarily used for efficient similarity search and clustering of dense vectors, it allows users to search through billions of feature vectors swiftly and efficiently. As an AI engineer, learning FAISS is beneficial because these vectors represent objects that are typically used in machine learning or AI applications. For instance, in an image recognition task, a dense vector might be a list of pixels from an image, and FAISS allows a quick search of similar images in a large database.
|
||||
@@ -0,0 +1,7 @@
|
||||
# Fine-tuning
|
||||
|
||||
OpenAI API allows you to fine-tune and adapt pre-trained models to specific tasks or datasets, improving performance on domain-specific problems. By providing custom training data, the model learns from examples relevant to the intended application, such as specialized customer support, unique content generation, or industry-specific tasks.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@OpenAI Docs](https://platform.openai.com/docs/guides/fine-tuning)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Generation
|
||||
|
||||
In this step of implementing RAG, we use the found chunks to generate a response to the user's query using an LLM.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Google's Gemini
|
||||
|
||||
Gemini, formerly known as Bard, is a generative artificial intelligence chatbot developed by Google. Based on the large language model of the same name, it was launched in 2023 after being developed as a direct response to the rise of OpenAI's ChatGPT
|
||||
@@ -0,0 +1,7 @@
|
||||
# Hugging Face Hub
|
||||
|
||||
Hugging Face Hub is a platform where you can share, access and collaborate upon a wide array of machine learning models, primarily focused on Natural Language Processing (NLP) tasks. It is a central repository that facilitates storage and sharing of models, reducing the time and overhead usually associated with these tasks. For an AI Engineer, leveraging Hugging Face Hub can accelerate model development and deployment, effectively allowing them to work on structuring efficient AI solutions instead of worrying about model storage and accessibility issues.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Hugging Face](https://huggingface.co/)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Hugging Face Models
|
||||
|
||||
Hugging Face has a wide range of pre-trained models that can be used for a variety of tasks, including language understanding and generation, translation, chatbots, and more. Anyone can create an account and use their models, and the models are organized by task, provider, and other criteria.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Hugging Face](https://huggingface.co/models)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Hugging Face Models
|
||||
|
||||
Hugging Face has a wide range of pre-trained models that can be used for a variety of tasks, including language understanding and generation, translation, chatbots, and more. Anyone can create an account and use their models, and the models are organized by task, provider, and other criteria.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Hugging Face](https://huggingface.co/models)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Hugging Face Tasks
|
||||
|
||||
Hugging face has a section where they have a list of tasks with the popular models for that task.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Hugging Face](https://huggingface.co/tasks)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Hugging Face
|
||||
|
||||
Hugging Face is the platform where the machine learning community collaborates on models, datasets, and applications.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Hugging Face](https://huggingface.co/)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Image Generation
|
||||
|
||||
Image Generation often refers to the process of creating new images from an existing dataset or completely from scratch. For an AI Engineer, understanding image generation is crucial as it is one of the key aspects of machine learning and deep learning related to computer vision. It often involves techniques like convolutional neural networks (CNN), generative adversarial networks (GANs), and autoencoders. These technologies are used to generate artificial images that closely resemble original input, and can be applied in various fields such as healthcare, entertainment, security and more.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Image Understanding
|
||||
|
||||
Image Understanding involves extracting meaningful information from images, such as photos or videos. This process includes tasks like image recognition, where an AI system is trained to recognize certain objects within an image, and image segmentation, where an image is divided into multiple regions according to some criteria. For an AI engineer, mastering techniques in Image Understanding is crucial because it forms the basis for more complex tasks such as object detection, facial recognition, or even whole scene understanding, all of which play significant roles in various AI applications. As AI technologies continue evolving, the ability to analyze and interpret visual data becomes increasingly important in fields ranging from healthcare to autonomous vehicles.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Impact on Product Development
|
||||
|
||||
Incorporating Artificial Intelligence (AI) can transform the process of creating, testing, and delivering products. This could range from utilizing AI for enhanced data analysis to inform product design, use of AI-powered automation in production processes, or even AI as a core feature of the product itself.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Indexing Embeddings
|
||||
|
||||
This step involves converting data (such as text, images, or other content) into numerical vectors (embeddings) using a pre-trained model. These embeddings capture the semantic relationships between data points. Once generated, the embeddings are stored in a vector database, which organizes them in a way that enables efficient retrieval based on similarity. This indexed structure allows fast querying and comparison of vectors, facilitating tasks like semantic search, recommendation systems, and anomaly detection.
|
||||
@@ -0,0 +1,8 @@
|
||||
# Inference SDK
|
||||
|
||||
Inference is the process of using a trained model to make predictions on new data. As this process can be compute-intensive, running on a dedicated server can be an interesting option. The huggingface_hub library provides an easy way to call a service that runs inference for hosted models. There are several services you can connect to:
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Hugging Face Inference Client](https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_client)
|
||||
- [@official@Hugging Face Inference API](https://huggingface.co/docs/api-inference/en/index)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Inference
|
||||
|
||||
Inference involves using models developed through machine learning to make predictions or decisions. As part of the AI Engineer Roadmap, an AI engineer might create an inference engine, which uses rules and logic to infer new information based on existing data. Often used in natural language processing, image recognition, and similar tasks, inference can help AI systems provide useful outputs based on their training. Working with inference involves understanding different models, how they work, and how to apply them to new data to achieve reliable results.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Introduction
|
||||
|
||||
An AI Engineer uses pre-trained models and existing AI tools to improve user experiences. They focus on applying AI in practical ways, without building models from scratch. This is different from AI Researchers and ML Engineers, who focus more on creating new models or developing AI theory.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Know your Customers / Usecases
|
||||
|
||||
Understanding your target customers and use-cases helps making informed decisions during the development to ensure that the final AI solution appropriately meets the relevant needs of the users. You can use this knowledge to choose the right tools, frameworks, technologies, design the right architecture, and even prevent abuse.
|
||||
@@ -0,0 +1,3 @@
|
||||
# LanceDB
|
||||
|
||||
LanceDB is a relatively new, multithreaded, high-speed data warehouse optimized for AI and machine learning data processing. It's designed to handle massive amounts of data, enables quick storage and retrieval, and supports lossless data compression. For an AI engineer, learning LanceDB could be beneficial as it can be integrated with machine learning frameworks for collecting, processing and analyzing large datasets. These functionalities can help to streamline the process for AI model training, which requires extensive data testing and validation.
|
||||
@@ -0,0 +1,3 @@
|
||||
# LangChain for Multimodal Apps
|
||||
|
||||
LangChain is a software framework that helps facilitate the integration of large language models into applications. As a language model integration framework, LangChain's use-cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Langchain
|
||||
|
||||
LangChain is a software framework that helps facilitate the integration of large language models into applications. As a language model integration framework, LangChain's use-cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Limitations and Considerations under Pre-trained Models
|
||||
|
||||
Pre-trained Models are AI models that are previously trained on a large benchmark dataset and provide a starting point for AI developers. They help in saving training time and computational resources. However, they also come with certain limitations and considerations. These models can sometimes fail to generalize well to tasks outside of their original context due to issues like dataset bias or overfitting. Furthermore, using them without understanding their internal working can lead to problematic consequences. Finally, transfer learning, which is the mechanism to deploy these pre-trained models, might not always be the optimum solution for every AI project. Thus, an AI Engineer must be aware of these factors while working with pre-trained models.
|
||||
@@ -0,0 +1,7 @@
|
||||
# Llama Index
|
||||
|
||||
LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@LlamaIndex Official Website](https://llamaindex.ai/)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Llama Index
|
||||
|
||||
LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@LlamaIndex Official Website](https://llamaindex.ai/)
|
||||
@@ -0,0 +1,3 @@
|
||||
# LLMs
|
||||
|
||||
LLM or Large Language Models are AI models that are trained on a large amount of text data to understand and generate human language. They are the core of applications like ChatGPT, and are used for a variety of tasks, including language translation, question answering, and more.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Manual Implementation
|
||||
|
||||
You can build the AI agents manually by coding the logic from scratch without using any frameworks or libraries. For example, you can use the OpenAI API and write the looping logic yourself to keep the agent running until it has the answer.
|
||||
@@ -0,0 +1,9 @@
|
||||
# Maximum Tokens
|
||||
|
||||
Number of Maximum tokens in OpenAI API depends on the model you are using.
|
||||
|
||||
For example, the `gpt-4o` model has a maximum of 128,000 tokens.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@OpenAI API Documentation](https://platform.openai.com/docs/api-reference/completions/create)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Mistral AI
|
||||
|
||||
Mistral AI is a French startup founded in 2023, specializing in open-source large language models (LLMs). Created by former Meta and Google DeepMind researchers, it focuses on efficient, customizable AI solutions that promote transparency. Its flagship models, Mistral Large and Mixtral, offer state-of-the-art performance with lower resource requirements, gaining significant attention in the AI field.
|
||||
@@ -0,0 +1,7 @@
|
||||
# Hugging Face Models
|
||||
|
||||
Hugging Face has a wide range of pre-trained models that can be used for a variety of tasks, including language understanding and generation, translation, chatbots, and more. Anyone can create an account and use their models, and the models are organized by task, provider, and other criteria.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Hugging Face](https://huggingface.co/models)
|
||||
@@ -0,0 +1,7 @@
|
||||
# MongoDB Atlas
|
||||
|
||||
MongoDB Atlas is a fully managed cloud-based NoSQL database service by MongoDB. It simplifies database deployment and management across platforms like AWS, Azure, and Google Cloud. Using a flexible document model, Atlas automates tasks such as scaling, backups, and security, allowing developers to focus on building applications. With features like real-time analytics and global clusters, it offers a powerful solution for scalable and resilient app development.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@MongoDB Atlas Vector Search](https://www.mongodb.com/products/platform/atlas-vector-search)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Multimodal AI Usecases
|
||||
|
||||
Multimodal AI integrates various data types for diverse applications. In human-computer interaction, it enhances interfaces using speech, gestures, and facial expressions. In healthcare, it combines medical scans and records for accurate diagnoses. For autonomous vehicles, it processes data from sensors for real-time navigation. Additionally, it generates images from text and summarizes videos in content creation, while also analyzing satellite and sensor data for climate insights.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Multimodal AI
|
||||
|
||||
Multimodal AI refers to artificial intelligence systems capable of processing and integrating multiple types of data inputs simultaneously, such as text, images, audio, and video. Unlike traditional AI models that focus on a single data type, multimodal AI combines various inputs to achieve a more comprehensive understanding and generate more robust outputs. This approach mimics human cognition, which naturally integrates information from multiple senses to form a complete perception of the world. By leveraging diverse data sources, multimodal AI can perform complex tasks like image captioning, visual question answering, and cross-modal content generation.
|
||||
@@ -0,0 +1,7 @@
|
||||
# Ollama Models
|
||||
|
||||
Ollama supports a wide range of language models, including but not limited to Llama, Phi, Mistral, Gemma and more.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Ollama Models](https://ollama.com/library)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Ollama SDK
|
||||
|
||||
Ollama SDK can be used to develop applications locally.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Ollama SDK](https://ollama.com)
|
||||
@@ -0,0 +1,7 @@
|
||||
# Ollama
|
||||
|
||||
Ollama is an open-source tool for running large language models (LLMs) locally on personal computers. It supports various models like Llama 2, Mistral, and Code Llama, bundling weights, configurations, and data into a single package. Ollama offers a user-friendly interface, API access, and integration capabilities, allowing users to leverage AI capabilities while maintaining data privacy and control. It's designed for easy installation and use on macOS and Linux, with Windows support in development.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Ollama](https://ollama.com)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Open AI Assistant API
|
||||
|
||||
OpenAI Assistant API is a tool provided by OpenAI that allows developers to integrate the same AI used in ChatGPT into their own applications, products or services. This AI conducts dynamic, interactive and context-aware conversations useful for building AI assistants in various applications. In the AI Engineer Roadmap, mastering the use of APIs like the Open AI Assistant API is a crucial skill, as it allows engineers to harness the power and versatility of pre-trained algorithms and use them for their desired tasks. AI Engineers can offload the intricacies of model training and maintenance, focusing more on product development and innovation.
|
||||
@@ -0,0 +1,7 @@
|
||||
# Open AI Embedding Models
|
||||
|
||||
Open AI embedding models refer to the artificial intelligence variants designed to reformat or transcribe input data into compact, dense numerical vectors. These models simplify and reduce the input data from its original complex nature, creating a digital representation that is easier to manipulate. This data reduction technique is critical in the AI Engineer Roadmap because it paves the way for natural language processing tasks. It helps in making precise predictions, clustering similar data, and producing accurate search results based on contextual relevance.
|
||||
|
||||
Visit the following resources to learn more:
|
||||
|
||||
- [@official@Open AI Embedding Models](https://platform.openai.com/docs/guides/embeddings)
|
||||
@@ -0,0 +1,3 @@
|
||||
# Open AI Embeddings API
|
||||
|
||||
Open AI Embeddings API is a powerful system that is used to generate high-quality word and sentence embeddings. With this API, it becomes a breeze to convert textual data into a numerical format that Machine Learning models can process. This conversion of text into numerical data is crucial for Natural Language Processing (NLP) tasks that an AI Engineer often encounters. Understanding and harnessing the capabilities of the Open AI Embeddings API, therefore, forms an essential part of the AI Engineer's roadmap.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Open AI Models
|
||||
|
||||
Open AI Models are a set of pre-designed, predefined models provided by OpenAI. These models are trained using Machine Learning algorithms to perform artificial intelligence tasks without any need of explicit programming. OpenAI's models are suited for various applications such as text generation, classification and extraction, allowing AI engineers to leverage them for effective implementations. Therefore, understanding and utilizing these models becomes an essential aspect in the roadmap for an AI engineer to develop AI-powered solutions with more efficiency and quality.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Open AI Playground
|
||||
|
||||
Open AI Playground is an interactive platform, provided by OpenAI, that enables developers to experiment with and understand the capabilities of OpenAI's offerings. Here, you can try out several cutting-edge language models like GPT-3 or Codex. This tool is crucial in the journey of becoming an AI Engineer, because it provides a hands-on experience in implementing and testing language models. Manipulating models directly helps you get a good grasp on how AI models can influence the results based on input parameters. Therefore, Open AI Playground holds significance on the AI Engineer's roadmap not only as a learning tool, but also as a vital platform for rapid prototyping and debugging.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Open-Source Embeddings
|
||||
|
||||
Open-source embeddings, such as Word2Vec, GloVe, and FastText, are essentially vector representations of words or phrases. These representations capture the semantic relationships between words and their surrounding context in a multi-dimensional space, making it easier for machine learning models to understand and process textual data. In the AI Engineer Roadmap, gaining knowledge of open-source embeddings is critical. These embeddings serve as a foundation for natural language processing tasks, ranging from sentiment analysis to chatbot development, and are widely used in the AI field for their ability to enhance the performance of machine learning models dealing with text data.
|
||||
@@ -0,0 +1,3 @@
|
||||
# Open vs Closed Source Models
|
||||
|
||||
Open source models are types of software whose source code is available for the public to view, modify, and distribute. They encourage collaboration and transparency, often resulting in rapid improvements and innovations. Closed source models, on the other hand, do not make their source code available and are typically developed and maintained by specific companies or teams. They often provide more stability, support, and consistency. Within the AI Engineer Roadmap, both open and closed source models play a unique role. While open source models allow for customization, experimentation and a broader understanding of underlying algorithms, closed source models might offer proprietary algorithms and structures that could lead to more efficient or unique solutions. Therefore, understanding the differences, advantages, and drawbacks of both models is essential for an aspiring AI engineer.
|
||||
@@ -0,0 +1,3 @@
|
||||
# OpenAI API
|
||||
|
||||
OpenAI API is a powerful language model developed by OpenAI, a non-profit artificial intelligence research organization. It uses machine learning to generate text from a given set of keywords or sentences, presenting the capability to learn, understand, and generate human-friendly content. As an AI Engineering aspirant, familiarity with tools like the OpenAI API positions you on the right track. It can help with creating AI applications that can analyze and generate text, which is particularly useful in AI tasks such as data extraction, summarization, translation, and natural language processing.
|
||||
@@ -0,0 +1,3 @@
|
||||
# OpenAI Assistant API
|
||||
|
||||
OpenAI Assistant API is a tool developed by OpenAI which allows developers to establish interaction between their applications, products or services and state-of-the-art AI models. By integrating this API in their software architecture, artificial intelligence engineers can leverage the power of advanced language models developed by the OpenAI community. These integrated models can accomplish a multitude of tasks, like writing emails, generating code, answering questions, tutoring in different subjects and even creating conversational agents. For an AI engineer, mastery over such APIs means they can deploy and control highly sophisticated AI models with just a few lines of code.
|
||||
@@ -0,0 +1,3 @@
|
||||
# OpenAI Functions / Tools
|
||||
|
||||
OpenAI, a leading organization in the field of artificial intelligence, provides a suite of functions and tools to enable developers and AI engineers to design, test, and deploy AI models. These tools include robust APIs for tasks like natural language processing, vision, and reinforcement learning, and platforms like GPT-3, CLIP, and Codex that provide pre-trained models. Utilization of these OpenAI components allows AI engineers to get a head-start in application development, simplifying the process of integration and reducing the time required for model training and tuning. Understanding and being adept at these tools forms a crucial part of the AI Engineer's roadmap to build impactful AI-driven applications.
|
||||
@@ -0,0 +1,3 @@
|
||||
# OpenAI Models
|
||||
|
||||
OpenAI is an artificial intelligence research lab that is known for its cutting-edge models. These models, like GPT-3, are pre-trained on vast amounts of data and perform remarkably well on tasks like language translation, question-answering, and more without needing any specific task training. Using these pre-trained models can give a massive head-start in building AI applications, as it saves the substantial time and resources that are required for training models from scratch. For an AI Engineer, understanding and leveraging these pre-trained models can greatly accelerate development and lead to superior AI systems.
|
||||
@@ -0,0 +1,3 @@
|
||||
# OpenAI Moderation API
|
||||
|
||||
OpenAI Moderation API is a feature or service provided by OpenAI that helps in controlling or filtering the output generated by an AI model. It is highly useful in identifying and preventing content that violates OpenAI’s usage policies from being shown. As an AI engineer, learning to work with this API helps implement a layer of security to ensure that the AI models developed are producing content that aligns with the ethical and moral guidelines set in place. Thus, it becomes a fundamental aspect of the AI Engineer Roadmap when dealing with user-generated content or creating AI-based services that interact with people.
|
||||
@@ -0,0 +1,3 @@
|
||||
# OpenAI Vision API
|
||||
|
||||
OpenAI Vision API is an API provided by OpenAI that is designed to analyze and generate insights from images. By feeding it an image, the Vision API can provide information about the objects and activities present in the image. For AI Engineers, this tool can be particularly useful for conducting Computer Vision tasks effortlessly. Using this API can support in creating applications that need image recognition, object detection and similar functionality, saving AI Engineers from having to create complex image processing algorithms from scratch. Understanding how to work with APIs, especially ones as advanced as the OpenAI Vision API, is an essential skill in the AI Engineer's roadmap.
|
||||
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user