
llm models --options should show supported attachment types, too #612

Closed
simonw opened this issue Nov 6, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

simonw (Owner) commented Nov 6, 2024

Refs:

simonw added the enhancement (New feature or request) label Nov 6, 2024
simonw (Owner, Author) commented Nov 6, 2024

Had to sort them alphabetically to get the tests to pass reliably.
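A model's attachment types live in a set, so iteration order isn't stable between runs; sorting makes the rendered list (and the test assertions against it) deterministic. A minimal sketch of the idea, with illustrative names rather than the actual implementation:

```python
# Sketch only: attachment types are a set of MIME type strings, so
# Python's set iteration order varies between processes. Sorting
# before rendering keeps CLI output and test output deterministic.
attachment_types = {"image/png", "image/gif", "image/webp", "image/jpeg"}

if attachment_types:
    print("  Attachment types:")
    print("    " + ", ".join(sorted(attachment_types)))
```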

simonw closed this as completed in 12df1a3 Nov 6, 2024
simonw (Owner, Author) commented Nov 6, 2024

Example output (from the cog-maintained docs):

llm/docs/usage.md, lines 250 to 282 at 12df1a3:

OpenAI Chat: gpt-4o (aliases: 4o)
  Options:
    temperature: float
      What sampling temperature to use, between 0 and 2. Higher values like
      0.8 will make the output more random, while lower values like 0.2 will
      make it more focused and deterministic.
    max_tokens: int
      Maximum number of tokens to generate.
    top_p: float
      An alternative to sampling with temperature, called nucleus sampling,
      where the model considers the results of the tokens with top_p
      probability mass. So 0.1 means only the tokens comprising the top 10%
      probability mass are considered. Recommended to use top_p or
      temperature but not both.
    frequency_penalty: float
      Number between -2.0 and 2.0. Positive values penalize new tokens based
      on their existing frequency in the text so far, decreasing the model's
      likelihood to repeat the same line verbatim.
    presence_penalty: float
      Number between -2.0 and 2.0. Positive values penalize new tokens based
      on whether they appear in the text so far, increasing the model's
      likelihood to talk about new topics.
    stop: str
      A string where the API will stop generating further tokens.
    logit_bias: dict, str
      Modify the likelihood of specified tokens appearing in the completion.
      Pass a JSON string like '{"1712":-100, "892":-100, "1489":-100}'
    seed: int
      Integer seed to attempt to sample deterministically
    json_object: boolean
      Output a valid JSON object {...}. Prompt must mention JSON.
  Attachment types:
    image/gif, image/jpeg, image/png, image/webp
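That block is regenerated rather than hand-edited: cog executes Python embedded in the markdown and splices its output into the file. A rough sketch of the pattern, assuming Click's test runner; the exact generator code in llm's docs may differ:

```
<!-- [[[cog
from click.testing import CliRunner
from llm.cli import cli

# cog makes the `cog` module available to this block automatically.
result = CliRunner().invoke(cli, ["models", "list", "--options"])
cog.out(result.output)
]]] -->
<!-- [[[end]]] -->
```

Running `cog -r docs/usage.md` then refreshes the block whenever model options or attachment types change.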

simonw added a commit that referenced this issue Nov 6, 2024
simonw added a commit that referenced this issue Nov 14, 2024
simonw added a commit that referenced this issue Nov 17, 2024
simonw added a commit that referenced this issue Nov 18, 2024
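As a usage note: the same attachment type information is exposed on model objects in the Python API, so a quick check doesn't require parsing CLI output. A small sketch, assuming a plugin or key providing gpt-4o is configured:

```python
import llm

model = llm.get_model("gpt-4o")
# attachment_types is a set of supported MIME type strings;
# sorted() gives the same stable ordering the CLI output uses.
print(", ".join(sorted(model.attachment_types)))
# -> image/gif, image/jpeg, image/png, image/webp
```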