diff --git a/.github/codecov.yml b/.github/codecov.yml
index a32384ed..c1582d7e 100644
--- a/.github/codecov.yml
+++ b/.github/codecov.yml
@@ -1,6 +1,6 @@
 # TODO: set up flags, but would require splitting the `make test` command for each flag, or set the flag for each provider/model
 # https://docs.codecov.com/docs/flags
- 
+
 # https://docs.codecov.com/docs/components
 component_management:
   individual_components:
diff --git a/README.md b/README.md
index 7b906126..c06f12c5 100644
--- a/README.md
+++ b/README.md
@@ -152,7 +152,7 @@ You can find more [Demos][docs-demos] and [Examples][docs-examples] in the [docu
 - 💻 Code execution
   - Executes code in your local environment with the [shell][docs-tools-shell] and [python][docs-tools-python] tools.
 - 🧩 Read, write, and change files
-  - Makes incremental changes with the [patch][docs-tools-patch] tool. 
+  - Makes incremental changes with the [patch][docs-tools-patch] tool.
 - 🌐 Search and browse the web.
   - Can use a browser via Playwright with the [browser][docs-tools-browser] tool.
 - 👀 Vision
diff --git a/docs/bot.md b/docs/bot.md
index ae2f3bea..e36a0639 100644
--- a/docs/bot.md
+++ b/docs/bot.md
@@ -7,7 +7,7 @@ The `gptme-bot` composite action is a GitHub Action that automates the process o
 
 ## Usage
 
-To use the `gptme-bot` composite action in your repo, you need to create a GitHub Actions workflow file that triggers the action in response to comments on issues or pull requests. 
+To use the `gptme-bot` composite action in your repo, you need to create a GitHub Actions workflow file that triggers the action in response to comments on issues or pull requests.
 
 Here is an example workflow file that triggers the action in response to comments on issues:
 
@@ -35,8 +35,8 @@ jobs:
           allowlist: "erikbjare"
 ```
 
-The `gptme-bot` action will then run the `gptme` command-line tool with the command specified in the comment, and perform actions based on the output of the tool. 
+The `gptme-bot` action will then run the `gptme` command-line tool with the command specified in the comment, and perform actions based on the output of the tool.
 
 If a question was asked, it will simply reply.
 
-If a request was made it will check out the appropriate branch, install dependencies, run `gptme`, then commit and push any changes made. If the issue is a pull request, the bot will push changes directly to the pull request branch. If the issue is not a pull request, the bot will create a new pull request with the changes. 
+If a request was made it will check out the appropriate branch, install dependencies, run `gptme`, then commit and push any changes made. If the issue is a pull request, the bot will push changes directly to the pull request branch. If the issue is not a pull request, the bot will create a new pull request with the changes.
diff --git a/docs/examples.rst b/docs/examples.rst
index 5fef1c97..34bb0ad6 100644
--- a/docs/examples.rst
+++ b/docs/examples.rst
@@ -64,4 +64,3 @@ Generate docstrings for all functions in a file:
    gptme --non-interactive "Patch these files to include concise docstrings for all functions, skip functions that already have docstrings. Include: brief description, parameters." $@
 
 These examples demonstrate how gptme can be used to create simple yet powerful automation tools. Each script can be easily customized and expanded to fit specific project needs.
-
diff --git a/docs/finetuning.md b/docs/finetuning.md
index f10fdf63..06080e03 100644
--- a/docs/finetuning.md
+++ b/docs/finetuning.md
@@ -55,7 +55,7 @@ TODO...
 ## Model suggestions
 
 - HuggingFaceH4/zephyr-7b-beta
-- teknium/Replit-v2-CodeInstruct-3B 
+- teknium/Replit-v2-CodeInstruct-3B
   - I had issues with this one on M2, but would be good to have some 3B model as an example used in testing/debug.
 
 [oa-datasets]: https://projects.laion.ai/Open-Assistant/docs/data/datasets
diff --git a/docs/providers.md b/docs/providers.md
index fddca4dd..84706d17 100644
--- a/docs/providers.md
+++ b/docs/providers.md
@@ -40,7 +40,7 @@ export OPENROUTER_API_KEY="your-api-key"
 
 ## Local/Ollama
 
-There are several ways to run local LLM models in a way that exposes a OpenAI API-compatible server. 
+There are several ways to run local LLM models in a way that exposes a OpenAI API-compatible server.
 Here's we will cover how to achieve that with `ollama`.
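
The providers.md hunk above ends right where the doc promises to cover the `ollama` setup. For reference, a minimal sketch of such a setup (assumptions: a recent `ollama` build that exposes an OpenAI-compatible API under `/v1` on its default port 11434, and `mistral` as a stand-in model name; this sketches only the server side, not the exact gptme invocation, which falls outside the hunk):

```sh
# Start the ollama server (listens on localhost:11434 by default)
ollama serve &

# Pull a model to serve
ollama pull mistral

# Sanity-check the OpenAI-compatible endpoint with a chat completion request
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "Say hello"}]}'
```

Any OpenAI-compatible client should then work by pointing its base URL at `http://localhost:11434/v1`; consult the full providers.md for the gptme-specific flags.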