
About raw prompt display option in OntoGPT command line #181

Closed
yy20716 opened this issue Aug 23, 2023 · 2 comments · Fixed by #183

yy20716 commented Aug 23, 2023

Hello @caufieldjh and @cmungall,

I'm writing to ask whether OntoGPT could offer an additional command line option to display the raw prompt that is passed to the language models. Certain portions of the prompts appear to be stored in the output YAML file, as evident in this example, but it's unclear whether this represents the complete prompt.

I also noticed the "--show-prompt" option in the cli.py file, though it seems to be specific to SPINDOCTOR, if I've interpreted it correctly. Could you kindly clarify whether such an option already exists? Thank you for your valuable support.

@caufieldjh (Member) commented:

Hi @yy20716 - you're right, the output generally truncates the prompt text or omits the input text entirely, on the assumption that the output contains the text in its own field. The extraction approach is also recursive, so while this is the initial prompt in the spaghetti example above:

From the text below, extract the following entities in the following format:

label: <the name of the recipe>
description: <a brief textual description of the recipe>
categories: <a semicolon separated list of the categories to which this recipe belongs>
ingredients: <a semicolon separated list of the ingredients plus quantities of the recipe>
steps: <a semicolon separated list of the individual steps involved in this recipe>


Text:
SIMPLE SPAGHETTI

DIRECTIONS
On medium heat melt the butter and sautee the onion and bell peppers.
Add the hamburger meat and cook until meat is well done.
Add the tomato sauce, salt, pepper and garlic powder.
Salt, pepper and garlic powder can be adjusted to your own tastes.
Cook noodles as directed.
Mix the sauce and noodles if you like, I keep them separated.

INGREDIENTS
UNITS: US
1
small onion (chopped)
1
bell pepper (chopped)
2
tablespoons garlic powder
3
tablespoons butter
1
teaspoon salt
1
teaspoon pepper
2
(15 ounce) cans tomato sauce
1
(16 ounce) box spaghetti noodles
1 - 1 1⁄2
lb hamburger meat.

This prompt is then also created and queried:

Split the following piece of text into fields in the following format:

food_item: <the food item>
amount: <the quantity of the ingredient, e.g. 2 lbs>


Text:
1 small onion (chopped)

then this:

Split the following piece of text into fields in the following format:

food: <the food item>
state: <the state of the food item (e.g. chopped, diced)>


Text:
small onion

and this:

Split the following piece of text into fields in the following format:

value: <the value of the quantity>
unit: <the unit of the quantity, e.g. grams, cups, etc.>


Text:
1

and so on.
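The chain of prompts above can be sketched as follows. This is a minimal illustration, not OntoGPT's actual implementation: the `build_prompt` helper and the field dictionaries are hypothetical, showing only how a recursive extraction scheme would yield a top-level prompt plus one "split" prompt per nested field.

```python
def build_prompt(intro, fields, text):
    """Assemble one extraction prompt from field -> description pairs.

    Hypothetical helper for illustration; mirrors the prompt shapes
    quoted in the comment above, not OntoGPT's real code.
    """
    lines = [intro, ""]
    for name, desc in fields.items():
        lines.append(f"{name}: <{desc}>")
    lines += ["", "Text:", text]
    return "\n".join(lines)

# Top-level prompt over the whole recipe text.
recipe_prompt = build_prompt(
    "From the text below, extract the following entities in the following format:",
    {
        "label": "the name of the recipe",
        "ingredients": "a semicolon separated list of the ingredients plus quantities of the recipe",
    },
    "SIMPLE SPAGHETTI ...",
)

# Each extracted ingredient then becomes the input text of its own sub-prompt.
ingredient_prompt = build_prompt(
    "Split the following piece of text into fields in the following format:",
    {
        "food_item": "the food item",
        "amount": "the quantity of the ingredient, e.g. 2 lbs",
    },
    "1 small onion (chopped)",
)

print(ingredient_prompt)
```

Each nested field that is itself a compound object (the food item, the amount) would trigger a further sub-prompt in the same way, which is why only logging every query reveals the full set of prompts.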

That being said, it would certainly be useful to see all those prompts, so I'll add the --show-prompt option to other CLI commands.

@caufieldjh caufieldjh linked a pull request Aug 24, 2023 that will close this issue
@caufieldjh (Member) commented:

OK, the --show-prompt option can now be used with most CLI commands. Note that it writes the prompt to the logger, so it needs to be combined with a verbosity setting like -vvv.
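For reference, an invocation might look like the following. This is a sketch: the template name `recipe` and input file are placeholders, and only `--show-prompt` and `-vvv` are the options discussed in this thread; check `ontogpt --help` for the exact flags in your installed version.

```shell
# Hypothetical invocation; substitute your own template and input file.
# -vvv raises logger verbosity so prompts emitted by --show-prompt are visible.
ontogpt -vvv extract -t recipe -i recipe.txt --show-prompt
```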
