
fix: Quality of Life issues #429

Merged 10 commits on Apr 24, 2024
5 changes: 5 additions & 0 deletions .github/workflows/e2e.yaml
@@ -1,6 +1,10 @@
name: e2e
on:
pull_request:
types:
- ready_for_review
- review_requested
- synchronize
paths:
# Catch-all
- "**"
@@ -39,6 +43,7 @@ concurrency:
jobs:
e2e:
runs-on: ai-ubuntu-big-boy-8-core
if: ${{ !github.event.pull_request.draft }}

steps:
- name: Checkout Repo
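The new `if: ${{ !github.event.pull_request.draft }}` guard above can be sanity-checked locally against a sample webhook payload. This is a minimal sketch, not part of the PR: the file path and payload are illustrative, and it only mirrors the boolean logic of the expression.

``` shell
# Sketch of the draft-PR guard: the e2e job runs only when
# pull_request.draft is false in the event payload.
cat > /tmp/event.json <<'EOF'
{"pull_request": {"draft": false}}
EOF

if grep -q '"draft": false' /tmp/event.json; then
  echo "e2e would run"
else
  echo "e2e skipped (draft PR)"
fi
```

With `"draft": true` in the payload, the same check prints the skipped branch instead.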
23 changes: 22 additions & 1 deletion .github/workflows/pytest.yaml
@@ -1,5 +1,26 @@
name: pytest
on: [pull_request]
on:
pull_request:
paths:
# Ignore updates to the .github directory, unless it's this current file
- "!.github/**"
- ".github/workflows/pytest.yaml"

# Ignore docs and website things
- "!**.md"
- "!docs/**"
- "!adr/**"
- "!website/**"
- "!netlify.toml"

# Ignore updates to generic github metadata files
- "!CODEOWNERS"
- "!.gitignore"
- "!LICENSE"

# Ignore LFAI-UI things (for now?)
- "!src/leapfrogai_ui/**"
- "!packages/ui/**"

# Declare default permissions as read only.
permissions: read-all
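The include/ignore patterns above can be mirrored in a small shell helper to reason about which changed files would trigger the pytest workflow. The function and sample paths are illustrative, not part of the PR, and this is a rough first-match approximation: GitHub actually evaluates `paths` patterns in order, with the last matching pattern winning.

``` shell
# Hypothetical sketch of the pytest trigger rules: ignored paths print "no",
# everything else (including this workflow file itself) prints "yes".
would_trigger() {
  case "$1" in
    .github/workflows/pytest.yaml) echo yes ;;          # explicitly re-included
    .github/*|*.md|docs/*|adr/*|website/*|netlify.toml) echo no ;;
    CODEOWNERS|.gitignore|LICENSE) echo no ;;
    src/leapfrogai_ui/*|packages/ui/*) echo no ;;       # LFAI-UI, ignored for now
    *) echo yes ;;
  esac
}

would_trigger "src/leapfrogai_api/main.py"   # yes
would_trigger "docs/index.md"                # no
```

Note that a change to `.github/workflows/pytest.yaml` still triggers the run even though the rest of `.github/` is ignored, because the negated directory pattern is paired with an explicit include for the file itself.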
4 changes: 4 additions & 0 deletions .gitignore
@@ -18,6 +18,10 @@ build/
*.whl
.model/
*.gguf
.env
.ruff_cache
.branches
.temp

# local model and tokenizer files
*.bin
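The new ignore entries can be verified with `git check-ignore` in a throwaway repository. The temporary-directory setup below is illustrative; in the actual repo you would simply run `git check-ignore` from the working tree.

``` shell
# Sketch: confirm the newly ignored paths are matched by the .gitignore rules.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf '.env\n.ruff_cache\n.branches\n.temp\n' > .gitignore
touch .env .temp
# Prints each path that is ignored, one per line: .env then .temp
git check-ignore .env .temp
```

`git check-ignore -v` additionally shows which `.gitignore` line matched each path, which is handy when rules from multiple ignore files interact.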
34 changes: 22 additions & 12 deletions README.md
@@ -1,4 +1,4 @@
![LeapfrogAI Logo](https://github.com/defenseunicorns/leapfrogai/raw/main/docs/imgs/leapfrogai.png)
![LeapfrogAI](https://github.com/defenseunicorns/leapfrogai/raw/main/docs/imgs/leapfrogai.png)

[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/defenseunicorns/leapfrogai/badge)](https://api.securityscorecards.dev/projects/github.com/defenseunicorns/leapfrogai)

@@ -17,7 +17,14 @@
- [Usage](#usage)
- [UDS (Latest)](#uds-latest)
- [UDS (Dev)](#uds-dev)
- [CPU](#cpu)
- [GPU](#gpu)
- [Local Dev](#local-dev)
- [API](#api-1)
- [Backend: llama-cpp-python](#backend-llama-cpp-python)
- [Backend: text-embeddings](#backend-text-embeddings)
- [Backend: vllm](#backend-vllm)
- [Backend: whisper](#backend-whisper)
- [Community](#community)

## Overview
@@ -118,24 +125,25 @@ If you want to make some changes to LeapfrogAI before deploying via UDS (for exa
Make sure your system has the [required dependencies](https://docs.leapfrog.ai/docs/local-deploy-guide/quick_start/#prerequisites).

For ease, it's best to create a virtual environment:
```

``` shell
python -m venv .venv
source .venv/bin/activate
```

Each component is built into its own Zarf package. You can build all of the packages you need at once with the following `Make` targets:

```
``` shell
make build-cpu # api, llama-cpp-python, text-embeddings, whisper
make build-gpu # api, vllm, text-embeddings, whisper
make build-all # all of the backends
```

**OR**

You can build components individually using teh following `Make` targets:
You can build components individually using the following `Make` targets:

```
``` shell
make build-api
make build-vllm # if you have GPUs
make build-llama-cpp-python # if you have CPU only
@@ -146,15 +154,17 @@ make build-whisper
Once the packages are created, you can deploy either a CPU or GPU-enabled deployment via one of the UDS bundles:

#### CPU
```

``` shell
cd uds-bundles/dev/cpu
uds create .
uds deploy k3d-core-slim-dev:0.18.0
uds deploy uds-bundle-leapfrogai*.tar.zst
```

#### GPU
```

``` shell
cd uds-bundles/dev/gpu
uds create .
uds deploy k3d-core-slim-dev:0.18.0 --set K3D_EXTRA_ARGS="--gpus=all --image=ghcr.io/justinthelaw/k3d-gpu-support:v1.27.4-k3s1-cuda" # be sure to check if a newer version exists
@@ -167,7 +177,7 @@ The following instructions are for running each of the LFAI components for local

It is highly recommended to make a virtual environment to keep the development environment clean:

```
``` shell
python -m venv .venv
source .venv/bin/activate
```
@@ -187,7 +197,7 @@ uvicorn leapfrogai_api.main:app --port 3000 --reload

To run the llama-cpp-python backend locally (starting from the root directory of the repository):

```
``` shell
python -m pip install src/leapfrogai_sdk
cd packages/llama-cpp-python
python -m pip install .
@@ -199,7 +209,7 @@ lfai-cli --app-dir=. main:Model
#### Backend: text-embeddings
To run the text-embeddings backend locally (starting from the root directory of the repository):

```
``` shell
python -m pip install src/leapfrogai_sdk
cd packages/text-embeddings
python -m pip install .
@@ -210,7 +220,7 @@ python -u main.py
#### Backend: vllm
To run the vllm backend locally (starting from the root directory of the repository):

```
``` shell
python -m pip install src/leapfrogai_sdk
cd packages/vllm
python -m pip install .
@@ -222,7 +232,7 @@ python -u src/main.py
#### Backend: whisper
To run the whisper backend locally (starting from the root directory of the repository):

```
``` shell
python -m pip install src/leapfrogai_sdk
cd packages/whisper
python -m pip install ".[dev]"