doc: Fix headings (#706)
Only one H1 heading (the document title) is allowed. The rest must be H2 or
deeper, so adjust them accordingly.

Signed-off-by: David B. Kinder <[email protected]>
dbkinder authored Sep 19, 2024
1 parent ef90fbb commit f6ae4fa
Showing 3 changed files with 20 additions and 20 deletions.
10 changes: 5 additions & 5 deletions comps/embeddings/predictionguard/README.md
@@ -6,30 +6,30 @@ This embedding microservice is designed to efficiently convert text into vectori

**Note** - The BridgeTower model implemented in Prediction Guard can actually embed text, images, or text + images (jointly). For now this service only embeds text, but a follow-on contribution will enable the multimodal functionality.

# 🚀 Start Microservice with Docker
## 🚀 Start Microservice with Docker

## Setup Environment Variables
### Setup Environment Variables

Set up the following environment variables first:

```bash
export PREDICTIONGUARD_API_KEY=${your_predictionguard_api_key}
```

## Build Docker Images
### Build Docker Images

```bash
cd ../../..
docker build -t opea/embedding-predictionguard:latest -f comps/embeddings/predictionguard/Dockerfile .
```

## Start Service
### Start Service

```bash
docker run -d --name="embedding-predictionguard" -p 6000:6000 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY opea/embedding-predictionguard:latest
```

# 🚀 Consume Embeddings Service
## 🚀 Consume Embeddings Service

```bash
curl localhost:6000/v1/embeddings \
  ...
```
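
The diff viewer truncates the request body above. For reference, a complete call might look like the following sketch; the `text` field name is an assumption based on the input schema used by comparable OPEA embedding microservices, not something shown in this diff:

```bash
# Assumed request shape; the exact JSON schema is not visible in the truncated diff.
curl http://localhost:6000/v1/embeddings \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "Hello, world!"}'
```
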
14 changes: 7 additions & 7 deletions comps/llms/text-generation/predictionguard/README.md
@@ -1,27 +1,27 @@
# Introduction
# Prediction Guard Introduction

[Prediction Guard](https://docs.predictionguard.com) allows you to utilize hosted open-access LLMs, LVMs, and embedding functionality with seamlessly integrated safeguards. In addition to providing scalable access to open models, Prediction Guard allows you to configure factual consistency checks, toxicity filters, PII filters, and prompt injection blocking. Join the [Prediction Guard Discord channel](https://discord.gg/TFHgnhAFKd) and request an API key to get started.

# Get Started
## Get Started

## Build Docker Image
### Build Docker Image

```bash
cd ../../..
docker build -t opea/llm-textgen-predictionguard:latest -f comps/llms/text-generation/predictionguard/Dockerfile .
```

## Run the Predictionguard Microservice
### Run the Predictionguard Microservice

```bash
docker run -d -p 9000:9000 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY --name llm-textgen-predictionguard opea/llm-textgen-predictionguard:latest
```

# Consume the Prediction Guard Microservice
## Consume the Prediction Guard Microservice

See the [Prediction Guard docs](https://docs.predictionguard.com/) for available model options.

## Without streaming
### Without streaming

```bash
curl -X POST http://localhost:9000/v1/chat/completions \
@@ -37,7 +37,7 @@ curl -X POST http://localhost:9000/v1/chat/completions \
}'
```
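
The diff elides most of the request body between the two hunks above. A plausible non-streaming request is sketched below; the model name and sampling parameters are illustrative assumptions, not values confirmed by this diff:

```bash
# Illustrative sketch only; model name and parameter fields are assumptions.
curl -X POST http://localhost:9000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Hermes-2-Pro-Llama-3-8B",
        "query": "Tell me a joke.",
        "max_new_tokens": 100,
        "temperature": 0.7,
        "stream": false
      }'
```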

## With streaming
### With streaming

```bash
curl -N -X POST http://localhost:9000/v1/chat/completions \
  ...
```
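
For the streaming case, the same request with `stream` set to true is the likely shape; `curl -N` (visible in the truncated line above) disables output buffering so tokens appear as they arrive. This mirrors the assumed non-streaming request and is likewise a sketch:

```bash
# Illustrative sketch; fields mirror the assumed non-streaming request above.
curl -N -X POST http://localhost:9000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Hermes-2-Pro-Llama-3-8B",
        "query": "Tell me a joke.",
        "max_new_tokens": 100,
        "stream": true
      }'
```
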
16 changes: 8 additions & 8 deletions comps/lvms/predictionguard/README.md
@@ -4,44 +4,44 @@

Visual Question Answering is one of the multimodal tasks empowered by LVMs (Large Visual Models). This microservice supports visual Q&A by using a LLaVA model available via the Prediction Guard API. It accepts two inputs: a prompt and an image. It outputs the answer to the prompt about the image.

# 🚀1. Start Microservice with Python
## 🚀1. Start Microservice with Python

## 1.1 Install Requirements
### 1.1 Install Requirements

```bash
pip install -r requirements.txt
```

## 1.2 Start LVM Service
### 1.2 Start LVM Service

```bash
python lvm.py
```

# 🚀2. Start Microservice with Docker (Option 2)
## 🚀2. Start Microservice with Docker (Option 2)

## 2.1 Setup Environment Variables
### 2.1 Setup Environment Variables

Set up the following environment variables first:

```bash
export PREDICTIONGUARD_API_KEY=${your_predictionguard_api_key}
```

## 2.1 Build Docker Images
### 2.2 Build Docker Images

```bash
cd ../../..
docker build -t opea/lvm-predictionguard:latest -f comps/lvms/predictionguard/Dockerfile .
```

## 2.2 Start Service
### 2.3 Start Service

```bash
docker run -d --name="lvm-predictionguard" -p 9399:9399 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY opea/lvm-predictionguard:latest
```

# 🚀3. Consume LVM Service
## 🚀3. Consume LVM Service

```bash
curl -X POST http://localhost:9399/v1/lvm \
  ...
```
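
The request body is cut off by the diff viewer here. Given that the service accepts a prompt and an image (see the introduction above), a plausible call is sketched below; the `image` and `prompt` field names are assumptions, not confirmed by this diff:

```bash
# Assumed request shape; replace <base64-image> with an actual base64-encoded image.
curl -X POST http://localhost:9399/v1/lvm \
  -H 'Content-Type: application/json' \
  -d '{"image": "<base64-image>", "prompt": "What do you see in this image?"}'
```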
