
refactor: refactor tools, codeblock, and tooluse #113

Merged: ErikBjare merged 17 commits into master from dev/refactor-tools on Sep 9, 2024

Conversation

ErikBjare (Owner) commented on Sep 9, 2024

Trying to reduce the size of gptme/tools/__init__.py

TODO

  • check that things still work somewhat
  • make sure docs look ok after
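For context, a minimal sketch of the per-tool layout such a refactor points toward, assuming a ToolSpec-style container; the field names and the helper below are illustrative assumptions, not the actual gptme API:

    # Illustrative sketch only; field names are assumptions, not gptme's real API.
    from __future__ import annotations

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ToolSpec:
        """Metadata and entry point for a single tool (tmux, python, browser, ...)."""
        name: str
        desc: str
        instructions: str = ""
        examples: str = ""
        execute: Callable[..., str] | None = None

    # Each tool module (gptme/tools/tmux.py, gptme/tools/python.py, ...) exports one spec ...
    tool_tmux = ToolSpec(
        name="tmux",
        desc="Run commands in a persistent tmux session.",
    )

    # ... and gptme/tools/__init__.py shrinks to collecting and looking up specs.
    all_tools: list[ToolSpec] = [tool_tmux]

    def get_tool(name: str) -> ToolSpec | None:
        return next((t for t in all_tools if t.name == name), None)

Splitting things out this way also touches the docs item in the TODO above, since each tool's documentation has to keep rendering correctly once its definition moves out of __init__.py.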


Resolved review threads (now outdated) on: gptme/tools/__init__.py, gptme/tools/browser.py, gptme/tools/chats.py, gptme/tools/python.py, gptme/tools/tmux.py
codecov-commenter commented on Sep 9, 2024

❌ 1 test failed:

Tests completed: 71 | Failed: 1 | Passed: 70 | Skipped: 0

Top failed test by shortest run time: tests.test_eval::test_eval_cli
Stack trace (31.3s run time):
@pytest.mark.slow
    def test_eval_cli():
        runner = CliRunner()
        test_set = ["hello"]
        result = runner.invoke(
            main,
            [
                *test_set,
            ],
        )
        assert result
        assert result.exit_code == 0
>       assert "correct file" in result.output
E       AssertionError: assert 'correct file' in '=== Running evals ===\n=== Completed test hello ===\n=== Completed test hello ===\n=== Completed test hello ===\n\n=== Finished ===\n\n\n\n=== Model Results ===\n\nResults for model: openai/gpt-4o\nCompleted 1 tests in 0.00s:\n- hello in 0.00s (gen: 0.00s, run: 0.00s, eval: 0.00s)\n\nResults for model: anthropic/claude-3-5-sonnet-20240620\nCompleted 1 tests in 0.00s:\n- hello in 0.00s (gen: 0.00s, run: 0.00s, eval: 0.00s)\n\nResults for model: openrouter/meta-llama/llama-3.1-405b-instruct\nCompleted 1 tests in 0.00s:\n- hello in 0.00s (gen: 0.00s, run: 0.00s, eval: 0.00s)\n\n\n=== Model Comparison ===\nModel                                          hello\n---------------------------------------------  --------\nopenai/gpt-4o                                  ❌ 0.00s\nanthropic/claude-3-5-sonnet-20240620           ❌ 0.00s\nopenrouter/meta-llama/llama-3.1-405b-instruct  ❌ 0.00s\n\nResults saved to .../gptme/eval_results/eval_results_20240909_202435.csv\n'
E        +  where '=== Running evals ===\n=== Completed test hello ===\n=== Completed test hello ===\n=== Completed test hello ===\n\n=== Finished ===\n\n\n\n=== Model Results ===\n\nResults for model: openai/gpt-4o\nCompleted 1 tests in 0.00s:\n- hello in 0.00s (gen: 0.00s, run: 0.00s, eval: 0.00s)\n\nResults for model: anthropic/claude-3-5-sonnet-20240620\nCompleted 1 tests in 0.00s:\n- hello in 0.00s (gen: 0.00s, run: 0.00s, eval: 0.00s)\n\nResults for model: openrouter/meta-llama/llama-3.1-405b-instruct\nCompleted 1 tests in 0.00s:\n- hello in 0.00s (gen: 0.00s, run: 0.00s, eval: 0.00s)\n\n\n=== Model Comparison ===\nModel                                          hello\n---------------------------------------------  --------\nopenai/gpt-4o                                  ❌ 0.00s\nanthropic/claude-3-5-sonnet-20240620           ❌ 0.00s\nopenrouter/meta-llama/llama-3.1-405b-instruct  ❌ 0.00s\n\nResults saved to .../gptme/eval_results/eval_results_20240909_202435.csv\n' = <Result okay>.output

tests/test_eval.py:20: AssertionError

To view individual test run time comparison to the main branch, go to the Test Analytics Dashboard
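For reference, the failing check boils down to invoking the eval CLI through Click's CliRunner and asserting on its captured output. A self-contained way to reproduce it locally might look like the sketch below; the import path gptme.eval.main is an assumption about where the Click entry point lives:

    # Sketch for reproducing the failure locally; the import path is an assumption.
    from click.testing import CliRunner

    from gptme.eval.main import main  # assumed location of the eval CLI entry point

    def run_eval_smoke_test() -> None:
        runner = CliRunner()
        result = runner.invoke(main, ["hello"])
        print(result.output)
        assert result.exit_code == 0
        # The CI run exits cleanly but every model row shows "❌ 0.00s",
        # so the expected "correct file" marker never appears in the output.
        assert "correct file" in result.output

    if __name__ == "__main__":
        run_eval_smoke_test()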

Resolved review threads (now outdated) on: gptme/tools/patch.py (two threads)
ErikBjare changed the title from "refactor: refactor tools, streamline amending __doc__ for tools, refactor type_o_tooluse to ToolUse.from_type" to "refactor: refactor tools, codeblock, and tooluse" on Sep 9, 2024
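The old title references folding a standalone conversion helper into a classmethod on ToolUse. A hedged illustration of that pattern follows; the Codeblock and ToolUse fields and the from_codeblock name are assumptions, not the actual gptme definitions:

    # Illustration of the "free function becomes a classmethod" refactor; names are assumptions.
    from __future__ import annotations

    from dataclasses import dataclass

    @dataclass
    class Codeblock:
        lang: str      # e.g. "python", "patch", "append somefile.txt"
        content: str

    @dataclass
    class ToolUse:
        tool: str
        args: list[str]
        content: str

        @classmethod
        def from_codeblock(cls, codeblock: Codeblock) -> ToolUse:
            # Previously a module-level helper; as a classmethod the parsing
            # logic lives next to the type it constructs.
            tool, *args = codeblock.lang.split()
            return cls(tool=tool, args=args, content=codeblock.content)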
Resolved review threads (now outdated) on: gptme/tools/base.py, gptme/tools/patch.py, gptme/codeblock.py
ErikBjare merged commit 0cad5ca into master on Sep 9, 2024 (5 of 6 checks passed)
ErikBjare deleted the dev/refactor-tools branch on September 9, 2024, 20:16
ErikBjare mentioned this pull request on Sep 9, 2024