feat: organize for release
AlmogBaku committed May 9, 2024
1 parent ca69f6a commit 9e3247f
Showing 8 changed files with 283 additions and 10 deletions.
122 changes: 122 additions & 0 deletions .github/workflows/release.yaml
@@ -0,0 +1,122 @@
name: Release

on:
  workflow_dispatch:
    inputs:
      prerelease:
        default: true
  push:
    branches:
      - master

permissions:
  contents: write

jobs:
  tests:
    name: "Run tests"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ruff pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Test with pytest
        run: |
          pytest --ignore=tests/example.py --doctest-modules --junitxml=junit/test-results.xml
  version:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.changelog.outputs.version }}
      tag: ${{ steps.changelog.outputs.tag }}
      changelog: ${{ steps.changelog.outputs.changelog }}
      clean_changelog: ${{ steps.changelog.outputs.clean_changelog }}
      skipped: ${{ steps.changelog.outputs.skipped }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Conventional Changelog Action
        id: changelog
        uses: TriPSs/conventional-changelog-action@v5
        with:
          release-count: '1'
          skip-version-file: 'true'
          skip-commit: 'true'
          skip-git-pull: 'true'
          git-push: 'false'
          fallback-version: '0.1.0'
  release:
    name: "Release and publish the version"
    needs: [ tests, version ]
    runs-on: ubuntu-latest
    permissions:
      id-token: write # IMPORTANT: this permission is mandatory for trusted publishing
      contents: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract repository name

      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ needs.version.outputs.version }}

      - name: Update changelog
        shell: bash
        run: |
          git config user.name github-actions
          git config user.email [email protected]
          touch CHANGELOG.md
          echo -e "${{ needs.version.outputs.changelog }}\n\n$(cat CHANGELOG.md)" > CHANGELOG.md
          git add CHANGELOG.md docs/reference.md
          git commit -m "chore(release): ${{ needs.version.outputs.version }}"
          git push
      - name: Tag
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.git.createRef({
              owner: context.repo.owner,
              repo: context.repo.repo,
              ref: 'refs/tags/${{ needs.version.outputs.tag }}',
              sha: context.sha
            })
      - name: Release on GitHub
        uses: softprops/action-gh-release@v1
        with:
          tag_name: ${{ needs.version.outputs.tag }}
          files: dist/*
          body: |
            Released to ghcr.io/${{ github.repository }}:${{ needs.version.outputs.version }}
            ---
            ${{ needs.version.outputs.clean_changelog }}
          prerelease: ${{ inputs.prerelease }}
          name: Version ${{ needs.version.outputs.version }}
          generate_release_notes: false
4 changes: 3 additions & 1 deletion Dockerfile
@@ -11,11 +11,13 @@ FROM python:3.11-slim
WORKDIR /app

# Copy the build directory from the previous stage
-COPY --from=build-stage /app/dist /app/dist
+COPY --from=build /app/dist /app/dist
COPY server /app/server
COPY server/requirements.txt /app/server

WORKDIR /app/server
RUN pip install -r requirements.txt
ENV DIST_DIR=/app/dist/

# Command to run on container start
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
87 changes: 86 additions & 1 deletion README.md
@@ -1,4 +1,89 @@
# ![Logo](./public/sandbox.svg)
# LLM Playground

-This playground can help you quickly experiment with the LLM model.
+LLM Playground is a versatile environment for experimenting with different large language models (LLMs). It lets you run quick evaluations and comparisons directly in your browser, with no project setup or Jupyter notebooks required. It supports a variety of LLMs, including OpenAI models, through configurable endpoints.

<picture>
<source media="(prefers-color-scheme: dark)" srcset="./screenshot-dark.png">
<img alt="LLM Playground screenshot" src="./screenshot.png">
</picture>

## Features

- **Flexible Configuration**: Use environment variables, a settings YAML file, or a `.env` file.
- **Support for Multiple Vendors**: Compatible with OpenAI and other LLM providers through the [LiteLLM Proxy](https://docs.litellm.ai/docs/simple_proxy).
- **Easy to Use**: Designed for straightforward setup and minimal overhead.

## Getting Started

### Prerequisites

- Docker installed on your machine.

### Installation

To get started with LLM Playground, you can use Docker to pull and run the container:

```bash
docker pull ghcr.io/almogbaku/llm-playground
docker run -p 8080:8080 ghcr.io/almogbaku/llm-playground
```

This will start the LLM Playground on port 8080.

## Configuration

LLM Playground allows various configuration methods including environment variables, a `.env` file, or a `settings.yml` file.

### Configuration Options

- `openai_api_key`: Your OpenAI API key.
- `openai_organization`: Your OpenAI organization ID.
- `openai_base_url`: Base URL for the OpenAI API.
- `models`: Configuration for the models and endpoints.
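
The `.env` route mentioned above is not shown elsewhere in this README; a minimal file might look like the following (key names mirror the documented options; the values are placeholders):

```
openai_api_key=sk-your-key-here
openai_organization=org-your-org-id
```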

### Models Configuration

Configure your models using one of the following methods:

1. **Direct Configuration**: Specify models directly in the `models.models` parameter.
2. **API Provider URLs**: Set `models.urls` to fetch models from an LLM-Playground-compatible API (it must return an array of [`Model`](server/src/types.py) objects).
3. **OpenAI API URLs**: Set `models.oai_urls` to fetch models from an OpenAI compatible API.

Each model can be configured with a `base_url` if it does not utilize OpenAI or is not fetched from `models.oai_urls`.
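
When a model omits `vendor`, the server falls back to deriving a label from the `base_url` hostname (see `server/main.py`). A standalone sketch of that fallback:

```python
from urllib.parse import urlparse

def infer_vendor(base_url: str) -> str:
    # Take the hostname, drop the final dot-separated label (the TLD),
    # then keep the right-most remaining label: "api.example.com" -> "example".
    hostname = urlparse(base_url).hostname
    return hostname.rsplit(".", 1)[0].rsplit(".", 1)[-1]

print(infer_vendor("https://api.example.com/v1"))  # -> example
```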

### Example Configuration

Here is a more detailed example using an environment variable setup:

```bash
export OPENAI_API_KEY="your-openai-api-key"

export MODELS_MODELS_0_NAME="LLama3"
export MODELS_MODELS_0_DESCRIPTION="Facebook's Llama3 Model"
export MODELS_MODELS_0_TYPE="chat"
export MODELS_MODELS_0_MAX_TOKENS="32000"
export MODELS_MODELS_0_VENDOR="Facebook"
```

For multiple models, repeat the pattern, incrementing the index in the `MODELS_MODELS_#_` prefix (e.g. `MODELS_MODELS_1_NAME`).

### YAML Configuration Example

```yaml
openai_api_key: "your-openai-api-key"
models:
  urls: [ "http://example.com/api/models" ]  # Fetch models from an LLM-Playground compatible API
  oai_urls: [ "http://example.com/api/openai-models" ]  # Fetch models from an OpenAI compatible API
  models:
    - name: "llama3"
      description: "Facebook's Llama3 Model"
      type: "chat"
      base_url: "https://api.example.com"
      max_tokens: 32000
      vendor: "Facebook"
```
## Usage
Once deployed, access LLM Playground by visiting `http://localhost:8080`. Choose from the available models to start your experiments and comparisons.
Binary file added screenshot-dark.png
Binary file added screenshot.png
33 changes: 25 additions & 8 deletions server/main.py
@@ -1,5 +1,6 @@
 from contextlib import asynccontextmanager
 from typing import List, AsyncIterator
+from urllib.parse import urlparse

 import httpx
 from fastapi import FastAPI
@@ -21,6 +22,8 @@
 async def lifespan(app: FastAPI):
     global default_client, models, settings

+    oai_default_base_url = "https://api.openai.com/v1"
+
     default_client = AsyncOpenAI(
         api_key=settings.openai_api_key,
         base_url=settings.openai_base_url,
@@ -29,31 +32,44 @@ async def lifespan(app: FastAPI):

     if ((settings.openai_base_url is None)
             and settings.openai_api_key
-            and (settings.models.oai_urls is None or ("https://api.openai.com/v1" not in settings.models.oai_urls))):
+            and (settings.models.oai_urls is None or (oai_default_base_url not in settings.models.oai_urls))):
         if settings.models.oai_urls is None:
             settings.models.oai_urls = []
-        settings.models.oai_urls.append("https://api.openai.com/v1")
+        settings.models.oai_urls.append(oai_default_base_url)

     for url in settings.models.oai_urls or []:
         cli = AsyncOpenAI(
             api_key=settings.openai_api_key,
             base_url=url,
         )
         resp = await cli.models.list()
+        base_url = url.rsplit("/models")[0] if url.startswith(settings.openai_base_url or "https://api.openai.com/v1") else None
+        vendor = "OpenAI" if "openai" in url else urlparse(url).hostname.rsplit(".", 1)[0].rsplit(".", 1)[-1]

         models += [
-            Model(name=model.id, system_prompt=True, type='chat', vendor='openai')
+            Model(name=model.id, system_prompt=True, type='chat', vendor=vendor, base_url=base_url)
             for model in resp.data if model.id.startswith("gpt")
         ]
         models += [
-            Model(name=model.id, type='completions', vendor='openai')
+            Model(name=model.id, type='completions', vendor=vendor, base_url=base_url)
             for model in resp.data if "instruct" in model.id
         ]

     for url in settings.models.urls or []:
-        models += [Model(**model) for model in httpx.get(url).json()]
+        for model_data in httpx.get(url).json():
+            model = Model(**model_data)  # parse first; raw JSON dicts have no attributes
+            if model.base_url is None:
+                raise ValueError(f"Model {model.name} is missing a base_url.")
+            if model.vendor is None:
+                model.vendor = urlparse(model.base_url).hostname.rsplit(".", 1)[0].rsplit(".", 1)[-1]
+            models.append(model)

     if settings.models.models:
-        models += settings.models.models
+        for model in settings.models.models:
+            if model.base_url is None:
+                raise ValueError(f"Model {model.name} is missing a base_url.")
+            if model.vendor is None:
+                model.vendor = urlparse(model.base_url).hostname.rsplit(".", 1)[0].rsplit(".", 1)[-1]
+            models.append(model)

     yield
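
The default-endpoint guard at the top of this hunk can be sketched as a standalone function (hedged; the function and parameter names here are illustrative, not from the repo):

```python
OAI_DEFAULT_BASE_URL = "https://api.openai.com/v1"

def ensure_default(oai_urls, api_key, base_url):
    # Append the public OpenAI endpoint only when an API key is configured,
    # no custom base_url overrides it, and it is not already listed.
    if base_url is None and api_key and (oai_urls is None or OAI_DEFAULT_BASE_URL not in oai_urls):
        oai_urls = (oai_urls or []) + [OAI_DEFAULT_BASE_URL]
    return oai_urls

print(ensure_default(None, "sk-test", None))  # -> ['https://api.openai.com/v1']
```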

@@ -94,9 +110,10 @@ async def get_models():


 def client(model: Model) -> AsyncOpenAI:
-    if model.api_key:
+    if model.api_key or model.base_url:
         return AsyncOpenAI(
-            api_key=model.api_key,
+            api_key=model.api_key or settings.openai_api_key,
+            base_url=model.base_url or settings.openai_base_url,
         )

     return default_client
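
The precedence that the revised `client()` implements (a model-level key or URL forces a dedicated client, with global settings filling any missing field) can be shown in isolation; names below are illustrative:

```python
def resolve_credentials(model_api_key, model_base_url, default_api_key, default_base_url):
    # A per-model override (key or URL) wins; the global settings back-fill
    # whichever field the model leaves unset.
    if model_api_key or model_base_url:
        return (model_api_key or default_api_key,
                model_base_url or default_base_url)
    return (default_api_key, default_base_url)

print(resolve_credentials(None, "https://api.example.com", "sk-global", None))
# -> ('sk-global', 'https://api.example.com')
```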
46 changes: 46 additions & 0 deletions server/requirements.txt
Original file line number Diff line number Diff line change
@@ -0,0 +1,46 @@
annotated-types==0.6.0
anyio==4.3.0
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
distro==1.9.0
dnspython==2.6.1
email_validator==2.1.1
fastapi==0.111.0
fastapi-cli==0.0.2
h11==0.14.0
httpcore==1.0.5
httptools==0.6.1
httpx==0.27.0
idna==3.7
itsdangerous==2.2.0
Jinja2==3.1.4
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
openai==1.25.2
orjson==3.10.3
pydantic==2.7.1
pydantic-extra-types==2.7.0
pydantic-settings==2.2.1
pydantic_core==2.18.2
Pygments==2.18.0
python-dotenv==1.0.1
python-multipart==0.0.9
PyYAML==6.0.1
regex==2024.4.28
requests==2.31.0
rich==13.7.1
shellingham==1.5.4
sniffio==1.3.1
starlette==0.37.2
tiktoken==0.6.0
tqdm==4.66.4
typer==0.12.3
typing_extensions==4.11.0
ujson==5.9.0
urllib3==2.2.1
uvicorn==0.29.0
uvloop==0.19.0
watchfiles==0.21.0
websockets==12.0
1 change: 1 addition & 0 deletions server/src/types.py
@@ -13,6 +13,7 @@ class Model(BaseModel):
     max_tokens: Optional[int] = None
     vendor: Optional[str] = None
     api_key: Optional[str] = Field(None, hidden=True)
+    base_url: Optional[str] = Field(None, hidden=True)


 class ModelsConfig(BaseModel):
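
For reference, a stdlib-only approximation of the `Model` schema (the real class in `server/src/types.py` is a pydantic `BaseModel` with `hidden=True` field metadata; the diff above is truncated, so the field list here is partly inferred from usage in `server/main.py`, e.g. `system_prompt`):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Model:
    name: str
    type: str = "chat"
    description: Optional[str] = None
    system_prompt: bool = False        # inferred from usage in server/main.py
    max_tokens: Optional[int] = None
    vendor: Optional[str] = None
    api_key: Optional[str] = None      # hidden in the real schema
    base_url: Optional[str] = None     # hidden in the real schema

m = Model(name="llama3", base_url="https://api.example.com")
print(m.type)  # -> chat
```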
