docs: added first docs page (moved from README), and markdown support
ErikBjare committed Oct 27, 2023
1 parent 9e1efd2 commit 32b86b3
Showing 8 changed files with 144 additions and 25 deletions.
3 changes: 2 additions & 1 deletion .github/workflows/docs.yml
@@ -19,7 +19,7 @@ jobs:
       - name: Setup Python
         uses: actions/setup-python@v2
         with:
-          python-version: '3.x'
+          python-version: '3.11'

- name: Install dependencies
run: |
@@ -54,6 +54,7 @@ jobs:
       - name: Deploy to GitHub Pages
         uses: peaceiris/actions-gh-pages@v3
         with:
           github_token: ${{ secrets.GITHUB_TOKEN }}
           publish_dir: ./docs/_build
           publish_branch: gh-pages
+          destination_dir: docs
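The hunk above pins Python to 3.11 and publishes the Sphinx build into a `docs/` subdirectory of the `gh-pages` branch via peaceiris/actions-gh-pages. For orientation, a minimal sketch of what the whole workflow plausibly looks like — the trigger, step names, and install/build commands are assumptions; only the `setup-python` and deploy fields shown in the diff are from the commit:

```yaml
name: Docs
on:
  push:
    branches: [master]

jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          pipx install poetry
          poetry install
      - name: Build docs
        run: poetry run make -C docs html
      - name: Deploy to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./docs/_build
          publish_branch: gh-pages
          destination_dir: docs
```

`destination_dir: docs` is what makes the site land under `.../gptme/docs/`, matching the documentation link added to the README below.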
2 changes: 1 addition & 1 deletion Makefile
@@ -22,4 +22,4 @@ precommit:
 	make lint
 
 docs:
-	make -C docs html
+	poetry run make -C docs html
27 changes: 6 additions & 21 deletions README.md
@@ -42,10 +42,11 @@ A local alternative to ChatGPT's "Advanced Data Analysis" (previously "Code Interpreter")
 - 🤖 Support for many models
   - Including GPT-4 and **any model that runs in llama.cpp**
 
-In progress:
+🚧 In progress:
 
-- 📝 Handles long contexts through summarization, truncation, and pinning. (🚧 WIP)
-- 💬 Optional web UI and API for conversations. (🚧 WIP)
+- 📝 Handle long contexts intelligently through summarization, truncation, and pinning.
+- 💬 Web UI and API for conversations.
+- 🌐 Browse, interact, and automate the web from the terminal.
 
 ## 🛠 Use Cases
 
@@ -95,26 +96,10 @@ gptme --server
 
 And browse to http://localhost:5000/ to see the web UI.
 
-### 🖥 Local Models
+## 📚 Documentation
 
-To run local models, you need to install and run the [llama-cpp-python][llama-cpp-python] server. To ensure you get the most out of your hardware, make sure you build it with [the appropriate hardware acceleration][hwaccel].
+For more information, see the [documentation](https://erikbjare.github.io/gptme/docs/).
 
-For macOS, you can find detailed instructions [here][metal].
-
-I recommend the WizardCoder-Python models.
-
-[llama-cpp-python]: https://github.com/abetlen/llama-cpp-python
-[hwaccel]: https://github.com/abetlen/llama-cpp-python#installation-with-hardware-acceleration
-[metal]: https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md
-
-```sh
-MODEL=~/ML/wizardcoder-python-13b-v1.0.Q4_K_M.gguf
-poetry run python -m llama_cpp.server --model $MODEL --n_gpu_layers 1 # Use `--n_gpu_layer 1` if you have a M1/M2 chip
-
-# Now, to use it:
-export OPENAI_API_BASE="http://localhost:8000/v1"
-gptme --llm llama
-```
 
 ## 🛠 Usage
 
3 changes: 2 additions & 1 deletion docs/conf.py
@@ -13,7 +13,8 @@
 # -- General configuration ---------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
 
-extensions = []
+extensions = ['myst_parser']
+
 
 templates_path = ['_templates']
 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
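Adding `myst_parser` to `extensions` is what gives Sphinx its Markdown support, letting the new `local-models.md` build alongside the `.rst` sources. A minimal sketch of such a `conf.py` — the `extensions`, `templates_path`, and `exclude_patterns` values are from the diff; `project` and the explicit `source_suffix` mapping are illustrative additions:

```python
# docs/conf.py -- minimal Sphinx configuration with Markdown support.
# `project` and `source_suffix` are illustrative; the rest mirrors the diff.
project = "gptme"

extensions = [
    "myst_parser",  # parse .md sources via MyST Markdown
]

# myst-parser registers the .md suffix on its own; spelling both out
# makes the suffix-to-parser mapping explicit.
source_suffix = {
    ".rst": "restructuredtext",
    ".md": "markdown",
}

templates_path = ["_templates"]
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
```

With this in place, any `.md` file listed in a toctree (like `local-models` below) is picked up by `make -C docs html`.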
2 changes: 2 additions & 0 deletions docs/index.rst
@@ -10,6 +10,8 @@ Welcome to gptme's documentation!
    :maxdepth: 2
    :caption: Contents:
 
+   local-models
+
 
 
 Indices and tables
22 changes: 22 additions & 0 deletions docs/local-models.md
@@ -0,0 +1,22 @@
+🖥 Local Models
+===============
+
+To run gptme with local models, you need to install and run the [llama-cpp-python][llama-cpp-python] server. To ensure you get the most out of your hardware, make sure you build it with [the appropriate hardware acceleration][hwaccel].
+
+For macOS, you can find detailed instructions [here][metal].
+
+I recommend the WizardCoder-Python models.
+
+[llama-cpp-python]: https://github.com/abetlen/llama-cpp-python
+[hwaccel]: https://github.com/abetlen/llama-cpp-python#installation-with-hardware-acceleration
+[metal]: https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md
+
+```sh
+MODEL=~/ML/wizardcoder-python-13b-v1.0.Q4_K_M.gguf
+poetry run python -m llama_cpp.server --model $MODEL --n_gpu_layers 1  # use `--n_gpu_layers 1` if you have an M1/M2 chip
+
+# Now, to use it:
+export OPENAI_API_BASE="http://localhost:8000/v1"
+gptme --llm local
+```
+

109 changes: 108 additions & 1 deletion poetry.lock

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions pyproject.toml
@@ -43,6 +43,7 @@ mypy = "*"
 ruff = "*"
 black = "*"
 sphinx = "^7.2.6"
+myst-parser = "^2.0.0"
 
 [tool.poetry.extras]
 server = ["llama-cpp-python", "flask"]
