fix: Quality of Life issues (#429)
* fix a couple README typos and missing gitignores

* fixing a couple issues with the pipelines running on unrelated files

* e2e only runs when pr is ready

* e2e only runs when pr is ready

* add README to e2e ignore

* one last update to e2e

* already ignores md

* update comment
gphorvath authored Apr 24, 2024
1 parent 48c3e51 commit 5f5444b
Showing 4 changed files with 52 additions and 12 deletions.
5 changes: 5 additions & 0 deletions .github/workflows/e2e.yaml
@@ -1,6 +1,10 @@
name: e2e
on:
pull_request:
types:
- ready_for_review
- review_requested
- synchronize
paths:
# Catch-all
- "**"
@@ -42,6 +46,7 @@ concurrency:
jobs:
e2e:
runs-on: ai-ubuntu-big-boy-8-core
if: ${{ !github.event.pull_request.draft }}

steps:
- name: Checkout Repo
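Taken together, the e2e changes amount to a draft-PR guard: the workflow only re-fires on `ready_for_review`, `review_requested`, and `synchronize`, and the job itself bails out while the PR is still a draft. A minimal standalone sketch of the pattern (runner and step names here are illustrative, not taken from the diff):

```yaml
name: e2e
on:
  pull_request:
    # Re-fire when a draft is promoted or a review is requested,
    # in addition to the usual push-to-branch (synchronize) event.
    types:
      - ready_for_review
      - review_requested
      - synchronize

jobs:
  e2e:
    runs-on: ubuntu-latest  # illustrative; the real workflow uses a custom runner
    # Even if one of the events above fires, skip while the PR is a draft.
    if: ${{ !github.event.pull_request.draft }}
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4
```

The `if` guard is belt-and-suspenders on top of the event types: `synchronize` still fires on pushes to a draft PR, so the job-level condition is what actually keeps drafts from running e2e.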
23 changes: 22 additions & 1 deletion .github/workflows/pytest.yaml
@@ -1,5 +1,26 @@
name: pytest
on: [pull_request]
on:
pull_request:
paths:
# Ignore updates to the .github directory, unless it's this current file
- "!.github/**"
- ".github/workflows/pytest.yaml"

# Ignore docs and website things
- "!**.md"
- "!docs/**"
- "!adr/**"
- "!website/**"
- "!netlify.toml"

# Ignore updates to generic github metadata files
- "!CODEOWNERS"
- "!.gitignore"
- "!LICENSE"

# Ignore LFAI-UI things (no Python)
- "!src/leapfrogai_ui/**"
- "!packages/ui/**"

# Declare default permissions as read only.
permissions: read-all
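The negation patterns above rely on GitHub Actions evaluating `paths` filters in order, with a later match overriding an earlier one; that is why `!.github/**` followed by `.github/workflows/pytest.yaml` excludes the whole directory but re-includes this one file. A reduced sketch of the same re-include trick (the explicit `"**"` catch-all is illustrative, not part of the diff):

```yaml
on:
  pull_request:
    paths:
      # Patterns are evaluated top to bottom; later matches win.
      - "**"                             # start from everything...
      - "!docs/**"                       # ...drop documentation...
      - "!.github/**"                    # ...drop workflow/metadata changes...
      - ".github/workflows/pytest.yaml"  # ...but re-include this workflow itself
```

With this filter, a PR touching only `docs/` or other `.github/` files skips pytest, while editing `pytest.yaml` itself still triggers a run.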
4 changes: 4 additions & 0 deletions .gitignore
@@ -18,6 +18,10 @@ build/
*.whl
.model/
*.gguf
.env
.ruff_cache
.branches
.temp

# local model and tokenizer files
*.bin
32 changes: 21 additions & 11 deletions README.md
@@ -1,4 +1,4 @@
![LeapfrogAI Logo](https://github.com/defenseunicorns/leapfrogai/raw/main/docs/imgs/leapfrogai.png)
![LeapfrogAI](https://github.com/defenseunicorns/leapfrogai/raw/main/docs/imgs/leapfrogai.png)

[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/defenseunicorns/leapfrogai/badge)](https://api.securityscorecards.dev/projects/github.com/defenseunicorns/leapfrogai)

@@ -17,7 +17,14 @@
- [Usage](#usage)
- [UDS (Latest)](#uds-latest)
- [UDS (Dev)](#uds-dev)
- [CPU](#cpu)
- [GPU](#gpu)
- [Local Dev](#local-dev)
- [API](#api-1)
- [Backend: llama-cpp-python](#backend-llama-cpp-python)
- [Backend: text-embeddings](#backend-text-embeddings)
- [Backend: vllm](#backend-vllm)
- [Backend: whisper](#backend-whisper)
- [Community](#community)

## Overview
@@ -118,14 +125,15 @@ If you want to make some changes to LeapfrogAI before deploying via UDS (for exa
Make sure your system has the [required dependencies](https://docs.leapfrog.ai/docs/local-deploy-guide/quick_start/#prerequisites).

For ease, it's best to create a virtual environment:
```

``` shell
python -m venv .venv
source .venv/bin/activate
```

Each component is built into its own Zarf package. You can build all of the packages you need at once with the following `Make` targets:

```
``` shell
make build-cpu # api, llama-cpp-python, text-embeddings, whisper
make build-gpu # api, vllm, text-embeddings, whisper
make build-all # all of the backends
@@ -135,7 +143,7 @@ make build-all # all of the backends

You can build components individually using the following `Make` targets:

```
``` shell
make build-api
make build-vllm # if you have GPUs
make build-llama-cpp-python # if you have CPU only
@@ -146,15 +154,17 @@ make build-whisper
Once the packages are created, you can deploy either a CPU or GPU-enabled deployment via one of the UDS bundles:

#### CPU
```

``` shell
cd uds-bundles/dev/cpu
uds create .
uds deploy k3d-core-slim-dev:0.18.0
uds deploy uds-bundle-leapfrogai*.tar.zst
```

#### GPU
```

``` shell
cd uds-bundles/dev/gpu
uds create .
uds deploy k3d-core-slim-dev:0.18.0 --set K3D_EXTRA_ARGS="--gpus=all --image=ghcr.io/justinthelaw/k3d-gpu-support:v1.27.4-k3s1-cuda" # be sure to check if a newer version exists
@@ -167,7 +177,7 @@ The following instructions are for running each of the LFAI components for local

It is highly recommended to make a virtual environment to keep the development environment clean:

```
``` shell
python -m venv .venv
source .venv/bin/activate
```
@@ -191,7 +201,7 @@ The instructions for running the basic repeater model (used for testing the API)

To run the llama-cpp-python backend locally (starting from the root directory of the repository):

```
``` shell
python -m pip install src/leapfrogai_sdk
cd packages/llama-cpp-python
python -m pip install .
@@ -203,7 +213,7 @@ lfai-cli --app-dir=. main:Model
#### Backend: text-embeddings
To run the text-embeddings backend locally (starting from the root directory of the repository):

```
``` shell
python -m pip install src/leapfrogai_sdk
cd packages/text-embeddings
python -m pip install .
@@ -214,7 +224,7 @@ python -u main.py
#### Backend: vllm
To run the vllm backend locally (starting from the root directory of the repository):

```
``` shell
python -m pip install src/leapfrogai_sdk
cd packages/vllm
python -m pip install .
@@ -226,7 +236,7 @@ python -u src/main.py
#### Backend: whisper
To run the whisper backend locally (starting from the root directory of the repository):

```
``` shell
python -m pip install src/leapfrogai_sdk
cd packages/whisper
python -m pip install ".[dev]"
