
Commit

Fix merge conflicts
unaidedelf8777 committed Sep 29, 2023
2 parents 33319da + 7a3f54d commit 63b48e4
Showing 28 changed files with 883 additions and 86 deletions.
3 changes: 3 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,3 @@
{
"python.analysis.typeCheckingMode": "basic"
}
22 changes: 22 additions & 0 deletions README.md
@@ -51,6 +51,10 @@ https://github.com/KillianLucas/open-interpreter/assets/63927363/37152071-680d-4

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing)

#### Along with an example implementation of a voice interface (inspired by _Her_):

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK)

## Quick Start

```shell
@@ -93,6 +97,15 @@ This combines the power of GPT-4's Code Interpreter with the flexibility of your

## Commands

**Update:** The Generator Update (0.1.5) introduced streaming:

```python
message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
print(chunk)
```

### Interactive Chat

To start an interactive chat in your terminal, either run `interpreter` from the command line:
@@ -107,6 +120,15 @@ Or `interpreter.chat()` from a .py file:
interpreter.chat()
```

**You can also stream each chunk:**

```python
message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
print(chunk)
```

### Programmatic Chat

For more precise control, you can pass messages directly to `.chat(message)`:
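As a minimal sketch of such a call (the message text here is illustrative, not taken from the README):

```python
import interpreter

# Pass a message directly; Open Interpreter runs it through the chat loop
interpreter.chat("What operating system are we on?")
```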
56 changes: 47 additions & 9 deletions docs/MACOS.md
@@ -4,42 +4,80 @@ When running Open Interpreter on macOS with Code-Llama (either because you did
not enter an OpenAI API key or you ran `interpreter --local`) you may want to
make sure it works correctly by following the instructions below.

Tested on **macOS Ventura 13.5** with **M2 Pro Chip**.
Tested on **macOS Ventura 13.5** with **M2 Pro Chip** and **macOS Ventura 13.5.1** with **M1 Max**.

I use conda as a virtual environment manager, but you can use whichever you prefer. If you go with conda, you'll find the Apple M1 version of Miniconda here: [Link](https://docs.conda.io/projects/miniconda/en/latest/)

```
```bash
conda create -n openinterpreter python=3.11.4
```

**Activate your environment:**

```
```bash
conda activate openinterpreter
```

**Install open-interpreter:**

```
```bash
pip install open-interpreter
```

**Uninstall any previously installed llama-cpp-python packages:**

```
```bash
pip uninstall llama-cpp-python -y
```

**Install llama-cpp-python with Apple Silicon support:**
## Install llama-cpp-python with Apple Silicon support

### Prerequisites: Xcode Command Line Tools

Before running the `CMAKE_ARGS` command to install `llama-cpp-python`, make sure you have the Xcode Command Line Tools installed; they include the compilers and build tools essential for compiling the package from source. You can check whether they are already installed by running:

```bash
xcode-select -p
```

If this command returns a path, the Xcode Command Line Tools are already installed. If not, you'll get an error message, and you can install them by running:

```bash
xcode-select --install
```

Follow the on-screen instructions to complete the installation. Once installed, you can proceed with installing an Apple Silicon compatible `llama-cpp-python`.

---
### Step 1: Installing llama-cpp-python with ARM64 Architecture and Metal Support


```bash
CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
```

### Step 2: Verifying Installation of llama-cpp-python with ARM64 Support

After completing the installation, you can verify that `llama-cpp-python` was correctly installed with ARM64 architecture support by running the following command:

```bash
lipo -info /path/to/libllama.dylib
```

Replace `/path/to/` with the actual path to the `libllama.dylib` file. You should see output similar to:

```bash
Non-fat file: /Users/[user]/miniconda3/envs/openinterpreter/lib/python3.11/site-packages/llama_cpp/libllama.dylib is architecture: arm64
```

If the architecture is indicated as `arm64`, then you've successfully installed the ARM64 version of `llama-cpp-python`.

### Step 3: Installing Server Components for llama-cpp-python


```bash
pip install 'llama-cpp-python[server]'
```
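Once the server extras are installed, one way to sanity-check them is to launch llama-cpp-python's bundled OpenAI-compatible server (the model path below is a placeholder for wherever your local model file lives):

```bash
python3 -m llama_cpp.server --model /path/to/your/model.gguf
```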
4 changes: 4 additions & 0 deletions docs/WINDOWS.md
@@ -40,3 +40,7 @@ To resolve this issue, perform the following steps.
```
Alternatively, if you want to include GPU support, follow the steps in [Local Language Models with GPU Support](./GPU.md)
6. Make sure you close and re-launch any cmd windows that were running `interpreter`.
30 changes: 26 additions & 4 deletions interpreter/cli/cli.py
@@ -2,6 +2,7 @@
import subprocess
import os
import platform
import pkg_resources
import appdirs
from ..utils.display_markdown_message import display_markdown_message
from ..terminal_interface.conversation_navigator import conversation_navigator
@@ -77,8 +78,15 @@
"name": "use_containers",
"nickname": "uc",
"help_text": "optionally use a Docker Container for the interpreters code execution. this will seperate execution from your main computer. this also allows execution on a remote server via the 'DOCKER_HOST' environment variable and the dockerengine api.",
"type": bool
}
"type": bool,
},
{
"name": "safe_mode",
"nickname": "safe",
"help_text": "optionally enable safety mechanisms like code scanning; valid options are off, ask, and auto",
"type": str,
"choices": ["off", "ask", "auto"]
},
]

def cli(interpreter):
@@ -90,12 +98,16 @@ def cli(interpreter):
if arg["type"] == bool:
parser.add_argument(f'-{arg["nickname"]}', f'--{arg["name"]}', dest=arg["name"], help=arg["help_text"], action='store_true', default=None)
else:
parser.add_argument(f'-{arg["nickname"]}', f'--{arg["name"]}', dest=arg["name"], help=arg["help_text"], type=arg["type"])
choices = arg.get("choices")
default = arg.get("default")

parser.add_argument(f'-{arg["nickname"]}', f'--{arg["name"]}', dest=arg["name"], help=arg["help_text"], type=arg["type"], choices=choices, default=default)

# Add special arguments
parser.add_argument('--config', dest='config', action='store_true', help='open config.yaml file in text editor')
parser.add_argument('--conversations', dest='conversations', action='store_true', help='list conversations to resume')
parser.add_argument('-f', '--fast', dest='fast', action='store_true', help='(deprecated) runs `interpreter --model gpt-3.5-turbo`')
parser.add_argument('--version', dest='version', action='store_true', help="get Open Interpreter's version number")

# TODO: Implement model explorer
# parser.add_argument('--models', dest='models', action='store_true', help='list available models')
@@ -105,7 +117,8 @@ def cli(interpreter):
# This should be pushed into an open_config.py util
# If --config is used, open the config.yaml file in the Open Interpreter folder of the user's config dir
if args.config:
config_path = os.path.join(appdirs.user_config_dir(), 'Open Interpreter', 'config.yaml')
config_dir = appdirs.user_config_dir("Open Interpreter")
config_path = os.path.join(config_dir, 'config.yaml')
print(f"Opening `{config_path}`...")
# Use the default system editor to open the file
if platform.system() == 'Windows':
@@ -133,6 +146,10 @@ def cli(interpreter):
if attr_value is not None and hasattr(interpreter, attr_name):
setattr(interpreter, attr_name, attr_value)

# If both auto_run and safe_mode are enabled, safe_mode wins and disables auto_run
if interpreter.auto_run and interpreter.safe_mode != "off":
interpreter.auto_run = False

# Default to CodeLlama if --local is on but --model is unset
if interpreter.local and args.model is None:
# This will cause the terminal_interface to walk the user through setting up a local LLM
@@ -143,6 +160,11 @@
conversation_navigator(interpreter)
return

if args.version:
version = pkg_resources.get_distribution("open-interpreter").version
print(f"Open Interpreter {version}")
return

# Deprecated --fast
if args.fast:
# This will cause the terminal_interface to walk the user through setting up a local LLM
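Based on the option definitions added above, the new flags would be invoked like this (a sketch inferred from this diff, not from separately verified docs):

```bash
# Enable code scanning; with safe_mode on, auto_run is forced off
interpreter --safe_mode ask

# Print the installed Open Interpreter version and exit
interpreter --version
```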
27 changes: 6 additions & 21 deletions interpreter/code_interpreters/create_code_interpreter.py
@@ -2,24 +2,9 @@
import os
import uuid
import weakref

import appdirs
from .languages.applescript import AppleScript
from .languages.html import HTML
from .languages.javascript import JavaScript
from .languages.python import Python
from .languages.r import R
from .languages.shell import Shell

LANGUAGE_MAP = {
"python": Python,
"bash": Shell,
"shell": Shell,
"javascript": JavaScript,
"html": HTML,
"applescript": AppleScript,
"r": R,
}
from .language_map import language_map


# Global dictionary to store the session IDs by the weak reference of the calling objects
SESSION_IDS_BY_OBJECT = weakref.WeakKeyDictionary()
@@ -87,17 +72,17 @@ def create_code_interpreter(language, use_containers=False):

try:
# Retrieve the specific CodeInterpreter class based on the language
CodeInterpreter = LANGUAGE_MAP[language]
CodeInterpreter = language_map[language]

# Retrieve the session ID for the current calling object, if available
session_id = SESSION_IDS_BY_OBJECT.get(caller_object, None) if caller_object else None

if not use_containers:
if not use_containers or session_id is None:
return CodeInterpreter()

session_path = os.path.join(
appdirs.user_data_dir("Open Interpreter"), "sessions", session_id
)
appdirs.user_data_dir("Open Interpreter"), "sessions", session_id)

if not os.path.exists(session_path):
os.makedirs(session_path)
return CodeInterpreter(session_id=session_id, use_docker=use_containers)
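For illustration, a minimal sketch of calling the factory shown above (the import path is assumed from this file's location in the repo):

```python
from interpreter.code_interpreters.create_code_interpreter import create_code_interpreter

# Without containers (or without a tracked session), a plain interpreter is returned
python_interpreter = create_code_interpreter("python")

# With containers, execution is tied to a per-session directory under the user data dir
docker_interpreter = create_code_interpreter("python", use_containers=True)
```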
17 changes: 17 additions & 0 deletions interpreter/code_interpreters/language_map.py
@@ -0,0 +1,17 @@
from .languages.python import Python
from .languages.shell import Shell
from .languages.javascript import JavaScript
from .languages.html import HTML
from .languages.applescript import AppleScript
from .languages.r import R


language_map = {
"python": Python,
"bash": Shell,
"shell": Shell,
"javascript": JavaScript,
"html": HTML,
"applescript": AppleScript,
"r": R,
}
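A quick sketch of how this map is meant to be used; note that "bash" and "shell" resolve to the same class:

```python
from interpreter.code_interpreters.language_map import language_map

# Look up the interpreter class for a language name
ShellInterpreter = language_map["bash"]
assert language_map["bash"] is language_map["shell"]
```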
4 changes: 4 additions & 0 deletions interpreter/code_interpreters/languages/applescript.py
@@ -2,6 +2,10 @@
from ..subprocess_code_interpreter import SubprocessCodeInterpreter

class AppleScript(SubprocessCodeInterpreter):

file_extension = "applescript"
proper_name = "AppleScript"

def __init__(self, **kwargs):
super().__init__(**kwargs)
self.start_cmd = os.environ.get('SHELL', '/bin/zsh')
3 changes: 3 additions & 0 deletions interpreter/code_interpreters/languages/html.py
@@ -4,6 +4,9 @@
from ..base_code_interpreter import BaseCodeInterpreter

class HTML(BaseCodeInterpreter):
file_extension = "html"
proper_name = "HTML"

def __init__(self):
super().__init__()

3 changes: 3 additions & 0 deletions interpreter/code_interpreters/languages/javascript.py
@@ -2,6 +2,9 @@
import re

class JavaScript(SubprocessCodeInterpreter):
file_extension = "js"
proper_name = "JavaScript"

def __init__(self, **kwargs):
super().__init__(**kwargs)
self.start_cmd = "node -i"
6 changes: 5 additions & 1 deletion interpreter/code_interpreters/languages/python.py
@@ -4,13 +4,17 @@
import re

class Python(SubprocessCodeInterpreter):

file_extension = "py"
proper_name = "Python"

def __init__(self, **kwargs):
super().__init__(**kwargs)
if 'use_docker' in kwargs and kwargs['use_docker']:
self.start_cmd = "python3 -i -q -u"
else:
self.start_cmd = sys.executable + " -i -q -u"

def preprocess_code(self, code):
return preprocess_python(code)
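Since `file_extension` and `proper_name` are class-level attributes, they can be read without spawning an interpreter subprocess; a quick sketch:

```python
from interpreter.code_interpreters.languages.python import Python

print(Python.file_extension)  # "py"
print(Python.proper_name)     # "Python"
```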

4 changes: 4 additions & 0 deletions interpreter/code_interpreters/languages/r.py
@@ -2,6 +2,10 @@
import re

class R(SubprocessCodeInterpreter):

file_extension = "r"
proper_name = "R"

def __init__(self, **kwargs):
super().__init__(**kwargs)
self.start_cmd = "R -q --vanilla" # Start R in quiet and vanilla mode
8 changes: 6 additions & 2 deletions interpreter/code_interpreters/languages/shell.py
@@ -4,9 +4,12 @@
import os

class Shell(SubprocessCodeInterpreter):

file_extension = "sh"
proper_name = "Shell"

def __init__(self, **kwargs):
super().__init__(**kwargs)

# Determine the start command based on the platform
if platform.system() == 'Windows':
self.start_cmd = 'cmd.exe'
@@ -39,7 +42,8 @@ def preprocess_shell(code):
code = add_active_line_prints(code)

# Wrap in a trap for errors
code = wrap_in_trap(code)
if platform.system() != 'Windows':
code = wrap_in_trap(code)

# Add end command (we'll be listening for this so we know when it ends)
code += '\necho "## end_of_execution ##"'
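For context, `wrap_in_trap` (not shown in this diff) presumably installs a Bash `trap ... ERR` around the user's code; an illustrative shape, where the error marker string is an assumption and only `## end_of_execution ##` appears in the code above:

```bash
# Illustrative only: report failing commands via an ERR trap
trap 'echo "## execution_error ##"' ERR

echo "user code would run here"

echo "## end_of_execution ##"
```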
