
Force then squashing #3

Merged
merged 1 commit into from
Sep 29, 2023

Conversation

unaidedelf8777
Owner

Describe the changes you have made:

Reference any relevant issue (Fixes #000)

  • I have performed a self-review of my code:

I have tested the code on the following OS:

  • Windows
  • MacOS
  • Linux

AI Language Model (if applicable)

  • GPT4
  • GPT3
  • Llama 7B
  • Llama 13B
  • Llama 34B
  • Huggingface model (Please specify which one)

@unaidedelf8777 unaidedelf8777 merged commit d983560 into main Sep 29, 2023
unaidedelf8777 added a commit that referenced this pull request Sep 29, 2023
…he base interpreter class or anything in the core folder was needed.

Update README from base/main

merge rebased branch to main. (#2)

* fix: stop overwriting boolean config values

Without the default set to None, any boolean CLI flag that isn't passed reverts to its default state even if it is configured in the config.yaml file.
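The fix described above can be sketched with `argparse`. This is a minimal illustration with hypothetical names (`auto_run`, the `config` dict), not the project's actual code: by giving the boolean flag a default of `None` instead of `False`, an absent flag can be distinguished from an explicit `False`, so the value loaded from config.yaml is only overridden when the user actually passes the flag.

```python
import argparse

config = {"auto_run": True}  # value loaded from config.yaml (illustrative)

parser = argparse.ArgumentParser()
# default=None (not False): absence of the flag no longer means "False"
parser.add_argument("--auto-run", dest="auto_run",
                    action="store_true", default=None)

args = parser.parse_args([])  # user passed no flags

# Only override the config value when the flag was actually given
if args.auto_run is not None:
    config["auto_run"] = args.auto_run

print(config["auto_run"])  # → True: the config.yaml value survives
```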

* The Generator Update (English docs)

* Improved --conversations, --config

---------

quality of life and error messages

errors and stuff again

re-add readline method because doc formatting removed it somehow

fix readline method of wrapper

added file upload and download functionality

finalized upload and download commands. tested stuff

visual

Improved --conversations, --config

The Generator Update (English docs)

fix: stop overwriting boolean config values

Without the default set to None, any boolean CLI flag that isn't passed reverts to its default state even if it is configured in the config.yaml file.

Update WINDOWS.md

Warns the user to re-launch cmd windows after installing llama locally

Fix ARM64 llama-cpp-python Install on Apple Silicon

This commit updates the `MACOS.md` documentation to include detailed steps for correctly installing `llama-cpp-python` with ARM64 architecture support on Apple Silicon-based macOS systems. The update provides:

- A prerequisite check for Xcode Command Line Tools.
- Step-by-step installation instructions for `llama-cpp-python` with ARM64 and Metal support.
- A verification step to confirm the correct installation of `llama-cpp-python` for ARM64 architecture.
- An additional step for installing server components for `llama-cpp-python`.

This commit resolves the issue described in `ARM64 Installation Issue with llama-cpp-python on Apple Silicon Macs for interpreter --local OpenInterpreter#503`.
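The verification step mentioned above can be approximated with a short Python check (a sketch, not the wording of `MACOS.md` itself): confirm the interpreter is running natively as `arm64` rather than as `x86_64` under Rosetta, since wheels built by an x86_64 interpreter will have the wrong architecture.

```python
import platform

# On native Apple Silicon this reports "arm64"; under Rosetta it
# reports "x86_64", which would produce a mismatched build.
arch = platform.machine()
print(arch)

if arch != "arm64":
    print("Warning: not running natively on Apple Silicon; "
          "llama-cpp-python may be built for the wrong architecture.")
```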

Broken empty message response

fix crash on unknown command on call to display help message

removed unnecessary spaces

Update get_relevant_procedures.py

Fixed a typo in the instructions to the model

The Generator Update

The Generator Update

The Generator Update - Azure fix

The Generator Update - Azure function calling

The Generator Update - Azure fix

Better debugging

Better debugging

Proper TokenTrimming for new models

Generator Update Fixes (Updated Version)

Generator Update Quick Fixes

Added example JARVIS Colab Notebook

Added example JARVIS Colab Notebook

Skip wrap_in_trap on Windows

fix: allow args to have choices and defaults

This allows non-boolean args to define possible options and default values, which were ignored previously.
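A non-boolean argument with both `choices` and a `default`, as this fix enables, looks like the following `argparse` sketch (the `--model` name and values are illustrative, not taken from the project):

```python
import argparse

parser = argparse.ArgumentParser()
# choices restricts the accepted values; default applies when the
# flag is omitted entirely
parser.add_argument("--model",
                    choices=["gpt-4", "gpt-3.5-turbo"],
                    default="gpt-4")

args = parser.parse_args([])
print(args.model)  # → gpt-4
```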

feat: add semgrep code scanning via --safe flag

This reintroduces the --safe functionality from OpenInterpreter#24.

--safe has 3 possible values: `auto`, `ask`, and `off`.

Code scanning is opt-in.
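One way to model an opt-in flag with those three values is the `argparse` sketch below. The exact signature is an assumption based on the commit text, not the PR's implementation: defaulting to `"off"` keeps scanning opt-in, while `nargs="?"` with `const="auto"` lets a bare `--safe` mean `auto`.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--safe",
                    nargs="?",            # value is optional
                    const="auto",         # bare --safe means "auto"
                    choices=["auto", "ask", "off"],
                    default="off")        # opt-in: off unless requested

args = parser.parse_args([])  # no flag: scanning stays off
print(args.safe)  # → off
```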

fix: default to 'off' for scan_code attribute

fix: toggle code_scan based on auto_run setting; update --scan docs

revert: undo default and choices change to cli.py

This is being removed from this PR in favor of a standalone fix in OpenInterpreter#511

feat: cleanup code scanning and convert to safe mode

docs: fix naming of safe_mode flag in README

fix: pass debug_mode flag into file cleanup for code scan

fix: remove extra tempfile import from scan_code util

Fixed first message interruption error

Holding `--safe` docs for pip release

fix: stop overwriting safe_mode config.yaml setting with default in args

Fixed `%load` magic command

But I think we should deprecate it in favor of `--conversations`.

Generalized API key error message

Better model validation, better config debugging

Better config debugging

Better config debugging

Better config debugging

Better --config

Cleaned up initial message

Generator Update Quick Fixes II

Force then squashing (#3)