How should prompt-chaining work? #106

Closed · ErikBjare opened this issue Sep 6, 2024 · 0 comments
Labels: bug (Something isn't working), enhancement (New feature or request)

@ErikBjare (Owner):
I thought that `gptme "first prompt" - "second prompt"` would wait for the first prompt to completely finish (i.e., exhaust its turn without issuing another command), but it turns out the second prompt runs as soon as the first prompt's tools have been executed.

Example:

$ gptme 'what is 2+2' - 'what is the last answer times 23'
User: what is 2+2
Assistant: To calculate 2+2, I can use Python:

```python
2 + 2
```
Out[1]: System:
Executed code block.

Result:
```
4
```
User: what is the last answer times 23
Assistant: To calculate the last answer (4) times 23, I'll use Python again:

```python
4 * 23
```

What could be a good test case so we can test for this in CI? Something where the assistant needs both a tool result and a follow-up comment/summary (or further tool calls) before it can answer the second prompt, and where the second answer should contain an expected flag whose presence we can check for.
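A rough sketch of what such a CI test could look like (the prompts, the `CHAINOK` flag string, and the timeout are made up for illustration; it only assumes `gptme` can be invoked with chained prompts separated by `-`, exactly as in the example above). The first prompt requires a tool result plus a short explanation, and the second prompt can only produce the flag value once the first answer is complete:

```python
import subprocess


def test_prompt_chaining_waits_for_first_answer():
    # Run gptme with two chained prompts separated by " - ", as in the example above.
    # The second prompt can only be answered correctly after the first prompt's
    # tool execution has finished and its result (256) is part of the conversation.
    result = subprocess.run(
        [
            "gptme",
            "compute 2**8 with python, then briefly explain the result",
            "-",
            "multiply the previous result by 3 and reply with the word CHAINOK followed by the number",
        ],
        capture_output=True,
        text=True,
        timeout=120,  # arbitrary; LLM + tool execution can be slow
    )
    output = result.stdout + result.stderr
    # The flag and the derived value should only appear if the second prompt
    # actually saw the finished first answer.
    assert "CHAINOK" in output
    assert "768" in output
```

If chaining fires too early, the second prompt would run before the first answer exists, so the value check (and likely the flag) would fail.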

@ErikBjare added the bug (Something isn't working) and enhancement (New feature or request) labels on Sep 6, 2024