diff --git a/.known-issues/sphinx.conf b/.known-issues/sphinx.conf
index cd04ab53..b288c5eb 100644
--- a/.known-issues/sphinx.conf
+++ b/.known-issues/sphinx.conf
@@ -24,3 +24,10 @@
 ^.*WARNING: 'mermaid': Unknown option keys: .*$
 # Ignore unknown pygments lexer names
 ^.*WARNING: Pygments lexer name .* is not known$
+# Ignore documents that are not included in any toctree
+^.*WARNING: document isn't included in any toctree$
+# Ignore extensions that do not declare whether they are safe for parallel reading
+^.*WARNING: the .* extension does not declare if it is safe for parallel reading, assuming it isn't - please ask the extension author to check and make it explicit$
+# Ignore the serial read fallback that results
+^.*WARNING: doing serial read$
+#
\ No newline at end of file
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 2a3bfc2c..bf828b25 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -8,7 +8,7 @@ Please check the [Contributing guidelines](https://github.com/opea-project/docs/
 
 Thank you for being a part of this journey. We can't wait to see what we can achieve together!
 
-# Additional Content
+## Additional Content
 
 - [Code of Conduct](https://github.com/opea-project/docs/tree/main/community/CODE_OF_CONDUCT.md)
 - [Security Policy](https://github.com/opea-project/docs/tree/main/community/SECURITY.md)
diff --git a/community/index.rst b/community/index.rst
index 32ecd38c..4be54063 100644
--- a/community/index.rst
+++ b/community/index.rst
@@ -46,38 +46,32 @@ support systems:
 Contributing Guides
 *******************
 
-.. toctree::
-   :maxdepth: 1
+- :doc:`Contribution Guide <CONTRIBUTING>`
 
-   CONTRIBUTING
-   codeowner
-   SECURITY
+- :doc:`OPEA Project Code Owners <codeowner>`
+
+- :doc:`Reporting a Vulnerability <SECURITY>`
 
 Roadmaps
 ********
 
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   ../roadmap/*
+- :doc:`OPEA 2024 - 2025 Roadmap <../roadmap/2024-2025>`
+- :doc:`OPEA CI/CD Roadmap <../roadmap/CICD>`
 
 Project Governance
 ******************
 
-.. toctree::
-   :maxdepth: 1
+- :doc:`Technical Charter (the “Charter”) for OPEA a Series of LF Projects, LLC <charter>`
+
+- :doc:`Technical Steering Committee (TSC) <TSC>`
+
+- :doc:`Contributor Covenant Code of Conduct <CODE_OF_CONDUCT>`
 
-   charter
-   TSC
-   CODE_OF_CONDUCT
-   SECURITY
+- :doc:`Reporting a Vulnerability <SECURITY>`
 
 RFC Proposals
 *************
 
-.. toctree::
-   :maxdepth: 1
+- :doc:`RFC Archive <rfcs>`
 
-   rfcs
diff --git a/conf.py b/conf.py
index 5cfb662a..b4038254 100644
--- a/conf.py
+++ b/conf.py
@@ -89,7 +89,7 @@
     # Toc options
     'collapse_navigation': False,
     'sticky_navigation': True,
-    'navigation_depth': 4,
+    'titles_only': True,
 }
diff --git a/developer-guides/index.rst b/developer-guides/index.rst
index 2b9ee21f..52c9a766 100644
--- a/developer-guides/index.rst
+++ b/developer-guides/index.rst
@@ -1,25 +1,26 @@
 .. _developer_guides:
 
-Developer Guides
+Developer Guide
 ################
 
-Coding Guides
+Coding Guide
 *************
 
-.. toctree::
-   :maxdepth: 1
-
-   OPEA_API
+- :doc:`OPEA API Service Spec (v1.0) <OPEA_API>`
 
-Documentation Guides
-********************
+Documentation Guide
+********************
 
-.. toctree::
-   :maxdepth: 1
-
-   doc_guidelines
-   graphviz
-   docbuild
+- :doc:`Documentation Guidelines <doc_guidelines>`
+- :doc:`Drawings Using Graphviz <graphviz>`
+- :doc:`OPEA Documentation Generation <docbuild>`
diff --git a/examples/AgentQnA/AgentQnA_Guide.rst b/examples/AgentQnA/AgentQnA_Guide.rst
index a4889928..ff8ac7c0 100644
--- a/examples/AgentQnA/AgentQnA_Guide.rst
+++ b/examples/AgentQnA/AgentQnA_Guide.rst
@@ -9,7 +9,7 @@ AgentQnA Sample Guide
 Overview
 ********
 
-This example showcases a hierarchical multi-agent system for question-answering applications. 
+This example showcases a hierarchical multi-agent system for question-answering applications.
 
 Purpose
 *******
@@ -49,6 +49,7 @@ Single Node
 +++++++++++++++
 .. toctree::
    :maxdepth: 1
+   :glob:
 
    Xeon Scalable Processor <deploy/xeon>
    Gaudi <deploy/gaudi>
diff --git a/examples/ChatQnA/ChatQnA_Guide.rst b/examples/ChatQnA/ChatQnA_Guide.rst
index a9663741..aa18dc71 100644
--- a/examples/ChatQnA/ChatQnA_Guide.rst
+++ b/examples/ChatQnA/ChatQnA_Guide.rst
@@ -45,7 +45,7 @@
 The ChatQnA example is designed to be a simple, yet powerful, demonstration of
 the RAG architecture. It is a great starting point for developers looking to
 build chatbots that can provide accurate and up-to-date information to users.
 
-To facilitate sharing of individual services across multiple GenAI applications, use the GenAI Microservices Connector (GMC) to deploy your application. Apart from service sharing , it also supports specifying sequential, parallel, and alternative steps in a GenAI pipeline. In so doing, it supports dynamic switching between models used in any stage of a GenAI pipeline. For example, within the ChatQnA pipeline, using GMC one could switch the model used in the embedder, re-ranker, and/or the LLM.
+To facilitate sharing of individual services across multiple GenAI applications, use the GenAI Microservices Connector (GMC) to deploy your application. Apart from service sharing, it also supports specifying sequential, parallel, and alternative steps in a GenAI pipeline. In so doing, it supports dynamic switching between models used in any stage of a GenAI pipeline. For example, within the ChatQnA pipeline, using GMC one could switch the model used in the embedder, re-ranker, and/or the LLM.
 
 Upstream Vanilla Kubernetes or Red Hat OpenShift Container Platform (RHOCP) can be used with or without GMC, while use with GMC provides additional features.
@@ -204,21 +204,22 @@ The gateway serves as the interface for users to access. The gateway routes inco
 Deployment
 **********
 
-Here are some deployment options depending on your hardware and environment.
-It includes both single-node and orchestrated multi-node configurations.
-Choose the one that best fits your requirements.
+Here are some deployment options depending on your hardware and environment.
+These include both single-node and orchestrated multi-node configurations.
+Choose the one that best fits your requirements.
 
 Single Node
 ***********
 
 .. toctree::
    :maxdepth: 1
+   :glob:
 
    Xeon Scalable Processor <deploy/xeon>
    Gaudi AI Accelerator <deploy/gaudi>
    Nvidia GPU <deploy/nvidia>
    AI PC <deploy/aipc>
-
+
 ----
 
 Kubernetes
@@ -305,7 +306,7 @@ Here is another example of exporting metrics data from a TGI microservice (insid
 
   scrape_configs:
     - job_name: "tgi"
-
+
     static_configs:
      - targets: ["llm-dependency-svc.default.svc.cluster.local:9009"]
@@ -330,7 +331,7 @@ The TGI metrics can be accessed at:
 
 .. code-block:: bash
 
-   http://${host_ip}:9009/metrics 
+   http://${host_ip}:9009/metrics
 
 Set Up the Grafana Dashboard
 ============================
@@ -373,7 +374,7 @@ Run the Grafana server, without hanging-up the process:
 
 Log in to Grafana using the default credentials:
 
-.. code-block:: 
+.. code-block::
 
    username: admin
    password: admin
diff --git a/examples/CodeGen/CodeGen_Guide.rst b/examples/CodeGen/CodeGen_Guide.rst
index 2b1da1bc..2dda3dd3 100644
--- a/examples/CodeGen/CodeGen_Guide.rst
+++ b/examples/CodeGen/CodeGen_Guide.rst
@@ -9,12 +9,12 @@ Codegen Sample Guide
 Overview
 ********
 
-The CodeGen example uses specialized AI models that went through training with datasets that
-encompass repositories, documentation, programming code, and web data. With an understanding
-of various programming languages, coding patterns, and software development concepts, the
-CodeGen LLMs assist developers and programmers. The LLMs can be integrated into the developers'
-Integrated Development Environments (IDEs) to have more contextual awareness to write more
-refined and relevant code based on the suggestions.
+The CodeGen example uses specialized AI models trained on datasets that encompass
+repositories, documentation, programming code, and web data. With an understanding
+of various programming languages, coding patterns, and software development concepts, the
+CodeGen LLMs assist developers and programmers. The LLMs can be integrated into the
+developers' Integrated Development Environments (IDEs), where their contextual awareness
+helps them suggest more refined and relevant code.
 
 Purpose
 *******
@@ -29,8 +29,8 @@
 How It Works
 ************
 
-The CodeGen example uses an open-source code generation model with Text Generation Inference (TGI)
-for serving deployment. It is presented as a Code Copilot application as shown in the diagram below.
+The CodeGen example uses an open-source code generation model with Text Generation Inference (TGI)
+for serving deployment. It is presented as a Code Copilot application as shown in the diagram below.
 
 .. figure:: /GenAIExamples/CodeGen/assets/img/codegen_architecture.png
    :alt: CodeGen Architecture Diagram
@@ -41,5 +41,6 @@ Here are some deployment options, depending on your hardware and environment:
 
 .. toctree::
    :maxdepth: 1
+   :glob:
 
    Gaudi AI Accelerator <deploy/gaudi>
diff --git a/examples/index.rst b/examples/index.rst
index c7d72fb3..b88a56b9 100644
--- a/examples/index.rst
+++ b/examples/index.rst
@@ -7,23 +7,12 @@ GenAIExamples are designed to give developers an easy entry into generative AI,
 
 .. toctree::
    :maxdepth: 1
+   :glob:
 
-   AgentQnA/AgentQnA_Guide
-   ChatQnA/ChatQnA_Guide
-   CodeGen/CodeGen_Guide
+   /GenAIExamples/README
 
-----
 
 We're building this documentation from content in the
 :GenAIExamples_blob:`GenAIExamples` GitHub repository.
 
-.. rst-class:: rst-columns
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   /GenAIExamples/README
-   examples.rst
-   /GenAIExamples/*
diff --git a/faq.md b/faq.md
index b5186d9a..9f8c69f5 100644
--- a/faq.md
+++ b/faq.md
@@ -1,65 +1,65 @@
-# OPEA Frequently Asked Questions
+# OPEA Frequently Asked Questions
 
-## What is OPEA's mission?
+## What is OPEA's mission?
 OPEA’s mission is to offer a validated enterprise-grade GenAI (Generative Artificial Intelligence) RAG reference implementation. This will simplify GenAI development and deployment, thereby accelerating time-to-market.
 
-## What is OPEA?
+## What is OPEA?
 The project currently consists of a technical conceptual framework that enables GenAI implementations to meet enterprise-grade requirements. The project offers a set of reference implementations for a wide range of enterprise use cases that can be used out-of-the-box. Additionally, the project provides a set of validation and compliance tools to ensure the reference implementations meet the needs outlined in the conceptual framework. This enables new reference implementations to be contributed and validated in an open manner. Partnering with the LF AI & Data places it in the perfect spot for multi-partner development, evolution, and expansion.
 
-## What problems are faced by GenAI deployments within the enterprise?
+## What problems are faced by GenAI deployments within the enterprise?
 Enterprises face a myriad of challenges in the development and deployment of GenAI. The development of new models, algorithms, fine-tuning techniques, detecting and resolving bias, and how to deploy large solutions at scale continues to evolve at a rapid pace. One of the biggest challenges enterprises come up against is a lack of standardized software tools and technologies from which to choose. Additionally, enterprises want the flexibility to innovate rapidly, extend functionality to meet their business needs while ensuring the solution is secure and trustworthy. The lack of a framework that encompasses both proprietary and open solutions impedes enterprises from charting their destiny. This results in an enormous investment of time and money, impacting the time-to-market advantage. OPEA answers the need for a multi-provider, ecosystem-supported framework that enables the evaluation, selection, customization, and trusted deployment of solutions that businesses can rely on.
 
-## Why now?
+## Why now?
 The major adoption and deployment cycle of robust, secure, enterprise-grade GenAI solutions across all industries is in its early stages. Enterprise-grade solutions will require collaboration in the open ecosystem. The time is now for the ecosystem to come together and accelerate GenAI deployments across enterprises by offering a standardized set of tools and technologies while supporting three key tenets – openness, security, and scalability. This will require the ecosystem to work together to build reference implementations that are performant, trustworthy, and enterprise-grade ready.
 
-## How does it compare to other options for deploying Gen AI solutions within the enterprise?
+## How does it compare to other options for deploying Gen AI solutions within the enterprise?
 There is no alternative that brings the entire ecosystem together in a vendor-neutral manner and delivers on the promise of openness, security, and scalability. This is our primary motivation for creating the OPEA project.
 
-## Will OPEA reference implementations work with proprietary components?
+## Will OPEA reference implementations work with proprietary components?
 Like any other open-source project, the community will determine which components are needed by the broader ecosystem. Enterprises can always extend the OPEA project with other multi-vendor proprietary solutions to achieve their business goals.
 
-## What does OPEA acronym stand for?
+## What does the OPEA acronym stand for?
 Open Platform for Enterprise AI.
 
-## How do I pronounce OPEA?
+## How do I pronounce OPEA?
 It is pronounced ‘OH-PEA-AY.’
 
 ## What initial companies and open-source projects joined OPEA?
 AnyScale, Cloudera, DataStax, Domino Data Lab, HuggingFace, Intel, KX, MariaDB Foundation, MinIO, Qdrant, Red Hat, SAS, VMware by Broadcom, Yellowbrick Data, Zilliz.
 
-## What is Intel contributing?
-OPEA is to be defined jointly by several community partners, with a call for broad ecosystem contribution, under the well-established LF AI & Data Foundation. As a starting point, Intel has contributed a Technical Conceptual Framework that shows how to construct and optimize curated GenAI pipelines built for secure, turnkey enterprise deployment. At launch, Intel contributed several reference implementations on Intel hardware across Intel® Xeon® 5, Intel® Xeon® 6, and Intel® Gaudi® 2, which you can see in a GitHub repo here. Over time we intend to add to that contribution, including a software infrastructure stack to enable fully containerized AI workload deployments, as well as potentially implementations of those containerized workloads.
+## What is Intel contributing?
+OPEA is to be defined jointly by several community partners, with a call for broad ecosystem contribution, under the well-established LF AI & Data Foundation. As a starting point, Intel has contributed a Technical Conceptual Framework that shows how to construct and optimize curated GenAI pipelines built for secure, turnkey enterprise deployment. At launch, Intel contributed several reference implementations on Intel hardware across Intel® Xeon® 5, Intel® Xeon® 6, and Intel® Gaudi® 2, which you can see in a GitHub repo here. Over time we intend to add to that contribution, including a software infrastructure stack to enable fully containerized AI workload deployments, as well as potentially implementations of those containerized workloads.
 
 ## When you say Technical Conceptual Framework, what components are included?
 The models and modules can be part of an OPEA repository or be published in a stable, unobstructed repository (e.g., Hugging Face) and cleared for use by an OPEA assessment. These include:
 
-* Ingest/Data Processing
-* Embedding Models/Services
-* Indexing/Vector/Graph data stores
-* Retrieval/Ranking
-* Prompt Engines
-* Guardrails
-* Memory systems
+* Ingest/Data Processing
+* Embedding Models/Services
+* Indexing/Vector/Graph data stores
+* Retrieval/Ranking
+* Prompt Engines
+* Guardrails
+* Memory systems
 
-## What are the different ways partners can contribute to OPEA?
-There are different ways partners can contribute to this project:
+## What are the different ways partners can contribute to OPEA?
+There are different ways partners can contribute to this project:
 
-* Join the project and contribute assets in terms of use cases, code, test harness, etc.
-* Provide technical leadership
-* Drive community engagement and evangelism
-* Offer program management for various projects
-* Become a maintainer, committer, and adopter
-* Define and offer use cases for various industry verticals that shape OPEA project
-* Build the infrastructure to support OPEA projects
+* Join the project and contribute assets in terms of use cases, code, test harness, etc.
+* Provide technical leadership
+* Drive community engagement and evangelism
+* Offer program management for various projects
+* Become a maintainer, committer, and adopter
+* Define and offer use cases for various industry verticals that shape the OPEA project
+* Build the infrastructure to support OPEA projects
 
-## Where can partners see the latest draft of the Conceptual Framework spec?
-A version of the spec is available in the documentation (["docs"](https://github.com/opea-project/docs)) repository within this project.
+## Where can partners see the latest draft of the Conceptual Framework spec?
+A version of the spec is available in the documentation (["docs"](https://github.com/opea-project/docs)) repository within this project.
 
-## Is there a cost for joining?
-There is no cost for anyone to join and contribute to the OPEA project.
+## Is there a cost for joining?
+There is no cost for anyone to join and contribute to the OPEA project.
 
 ## Do I need to be a Linux Foundation member to join?
-Anyone can join and contribute. You don’t need to be a Linux Foundation member.
+Anyone can join and contribute. You don’t need to be a Linux Foundation member.
 
 ## Where can I report a bug or vulnerability?
 Vulnerability reports and bug submissions can be sent to [info@opea.dev](mailto:info@opea.dev).
\ No newline at end of file
diff --git a/index.rst b/index.rst
index 4c6b3f37..c9cd72b2 100644
--- a/index.rst
+++ b/index.rst
@@ -63,20 +63,22 @@ Source code for the OPEA Project is maintained in the
 
 .. toctree::
-   :maxdepth: 1
    :hidden:
+   :maxdepth: 2
+   :glob:
 
-   Documentation Home <self>
-   introduction/index
-   getting-started/README
-   examples/index
-   microservices/index
-   deploy/index
-   eval/index
+   Home <self>
+   Overview <introduction/index>
+   Getting Started <getting-started/README>
+   Tutorial <tutorial>
+   Projects <projects>
    developer-guides/index
    community/index
    release_notes/index
    CONTRIBUTING
-   faq
+   Q&A <faq>
+   Repo <https://github.com/opea-project>
+
 
 .. _OPEA Project GitHub repository: https://github.com/opea-project
+
diff --git a/introduction/index.rst b/introduction/index.rst
index a9bab2ab..7e9db559 100644
--- a/introduction/index.rst
+++ b/introduction/index.rst
@@ -84,7 +84,4 @@ Links to:
 * Get Involved with the OPEA Open Source Community
 * Browse the OPEA wiki, mailing lists, and working groups: https://wiki.lfaidata.foundation/display/DL/OPEA+Home
 
-.. toctree::
-   :maxdepth: 1
-
-   ../framework/framework
+- :doc:`Open Platform for Enterprise AI (OPEA) Framework Draft Proposal <../framework/framework>`
diff --git a/projects.rst b/projects.rst
new file mode 100644
index 00000000..f5f59f9a
--- /dev/null
+++ b/projects.rst
@@ -0,0 +1,12 @@
+OPEA Projects
+##########################
+
+OPEA provides an end-to-end solution built from several projects, each serving a different function.
+
+.. toctree::
+   :maxdepth: 1
+
+   GenAI Examples <examples/index>
+   GenAI Microservices <microservices/index>
+   Deploying GenAI <deploy/index>
+   Evaluating GenAI <eval/index>
\ No newline at end of file
diff --git a/release_notes/index.rst b/release_notes/index.rst
index e36dc137..2b7300b6 100644
--- a/release_notes/index.rst
+++ b/release_notes/index.rst
@@ -1,6 +1,6 @@
 .. _release_notes:
 
-Release Notes
+Release Note
 #############
 
 Release plan & guide.
diff --git a/tutorial.rst b/tutorial.rst
new file mode 100644
index 00000000..dd090f50
--- /dev/null
+++ b/tutorial.rst
@@ -0,0 +1,10 @@
+OPEA Tutorial
+##########################
+
+The following tutorials are provided:
+
+- :ref:`ChatQnA_Guide`
+- :ref:`AgentQnA_Guide`
+- :ref:`CodeGen_Guide`
+
+If you want to learn more, please refer to :doc:`/GenAIExamples/README`.
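
A note for reviewers on the new `.known-issues/sphinx.conf` entries: each non-comment line in that file is a line-anchored regular expression matched against the Sphinx build log to suppress warnings accepted as known issues. The Python sketch below only illustrates that matching behavior; the `filter_known_issues` helper and the sample log lines are hypothetical stand-ins for illustration, not part of this patch or of the actual build tooling.

```python
import re

# Mirrors the three patterns added to .known-issues/sphinx.conf in this patch.
KNOWN_ISSUE_PATTERNS = [
    re.compile(r"^.*WARNING: document isn't included in any toctree$"),
    re.compile(
        r"^.*WARNING: the .* extension does not declare if it is safe for "
        r"parallel reading, assuming it isn't - please ask the extension "
        r"author to check and make it explicit$"
    ),
    re.compile(r"^.*WARNING: doing serial read$"),
]

def filter_known_issues(log_lines):
    """Yield only the log lines not covered by a known-issue pattern."""
    for line in log_lines:
        if not any(pattern.match(line) for pattern in KNOWN_ISSUE_PATTERNS):
            yield line

# Illustrative log excerpt, not real build output:
sample_log = [
    "docs/example.rst: WARNING: document isn't included in any toctree",
    "WARNING: doing serial read",
    "docs/other.rst: WARNING: undefined label: 'chatqna_guide'",
]
print(list(filter_known_issues(sample_log)))
# -> ["docs/other.rst: WARNING: undefined label: 'chatqna_guide'"]
```

Because each pattern is anchored with `^` and `$`, a reworded Sphinx warning stops matching and resurfaces in the build output rather than being silently hidden, which is the intended failure mode for this kind of filter.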