This repository has been archived by the owner on May 13, 2024. It is now read-only.

fix: temperature to float
Chris Lemke committed Mar 1, 2023
1 parent 6a175d1 · commit 8517939
Showing 3 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion README.md
```diff
@@ -46,7 +46,7 @@ With the keyword `cfi` you can generate images by DALL·E 2. Just type in a desc
 ## Configure the workflow (optional) 🦾
 You can tweak the workflow to your liking. The following parameters are available. Simply adjust them in the [workflow's configuration](https://www.alfredapp.com/help/workflows/user-configuration/).
 - **OpenAI model**: Following models are available: `Ada`, `Babbage`, `Curie`, `Davinci` (ascending quality). Default: `Davinci`.
-- **Temperature**: The temperature determines how greedy the generative model is. If the temperature is high, the model can output words other than the highest probability with a fairly high probability. The generated text will be more diverse, but there is a higher probability of grammar errors and the generation of nonsense. Default: `0`.
+- **Temperature**: The temperature determines how greedy the generative model is (between 0 and 2). If the temperature is high, the model can output words other than the highest probability with a fairly high probability. The generated text will be more diverse, but there is a higher probability of grammar errors and the generation of nonsense. Default: `0`.
 - **Maximum tokens**: The maximum number of tokens to generate in the completion. Default: `50`.
 - **Top-p**: Top-p sampling selects from the smallest possible set of words whose cumulative probability exceeds probability p. In this way, the number of words in the set can be dynamically increased and decreased according to the nearest word probability distribution. Default: `1`.
 - **Frequency penalty**: A value between `-2.0` and `2.0`. The frequency penalty parameter controls the model’s tendency to repeat predictions. Default: `0`.
```
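For context, the parameters listed above map one-to-one onto the text-completion request the workflow sends. A minimal sketch, assuming the pre-1.0 `openai` Python package that the workflow's source targets; the prompt string and the final `print` are illustrative, not the workflow's actual code:

```python
import os

import openai  # pre-1.0 SDK, as used by this workflow

openai.api_key = os.getenv("api_key")

# Each workflow setting above becomes one request parameter.
response = openai.Completion.create(
    model=os.getenv("model") or "text-davinci-003",                  # OpenAI model
    prompt="Summarize what the temperature parameter does.",         # illustrative prompt
    temperature=float(os.getenv("temperature") or 0.0),              # 0 to 2, higher = more diverse
    max_tokens=int(os.getenv("max_tokens") or 50),                   # maximum tokens
    top_p=int(os.getenv("top_p") or 1),                              # top-p (nucleus) sampling
    frequency_penalty=float(os.getenv("frequency_penalty") or 0.0),  # -2.0 to 2.0
)
print(response.choices[0].text.strip())
```

With the old `int()` parsing, only whole-number temperatures could reach a call like this, which is what the change below enables.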
4 changes: 2 additions & 2 deletions info.plist
```diff
@@ -976,7 +976,7 @@ With the keyword `cfi` you can generate images by DALL·E 2. Just type in a desc
 ## Configure the workflow (optional) 🦾
 You can tweak the workflow to your liking. The following parameters are available. Simply adjust them in the [workflow's configuration](https://www.alfredapp.com/help/workflows/user-configuration/).
 - **OpenAI model**: Following models are available: `Ada`, `Babbage`, `Curie`, `Davinci` (ascending quality). Default: `Davinci`.
-- **Temperature**: The temperature determines how greedy the generative model is. If the temperature is high, the model can output words other than the highest probability with a fairly high probability. The generated text will be more diverse, but there is a higher probability of grammar errors and the generation of nonsense. Default: `0`.
+- **Temperature**: The temperature determines how greedy the generative model is (between 0 and 2). If the temperature is high, the model can output words other than the highest probability with a fairly high probability. The generated text will be more diverse, but there is a higher probability of grammar errors and the generation of nonsense. Default: `0`.
 - **Maximum tokens**: The maximum number of tokens to generate in the completion. Default: `50`.
 - **Top-p**: Top-p sampling selects from the smallest possible set of words whose cumulative probability exceeds probability p. In this way, the number of words in the set can be dynamically increased and decreased according to the nearest word probability distribution. Default: `1`.
 - **Frequency penalty**: A value between `-2.0` and `2.0`. The frequency penalty parameter controls the model’s tendency to repeat predictions. Default: `0`.
```
```diff
@@ -1295,7 +1295,7 @@ As soon as OpenAI releases the ChatGPT API, we will integrate it into this workf
 <true/>
 </dict>
 <key>description</key>
-<string>The temperature determines how greedy the generative model is. If the temperature is high, the model can output words other than the highest probability with a fairly high probability. The generated text will be more diverse, but there is a higher probability for grammar errors and the generation of nonsense.</string>
+<string>The temperature determines how greedy the generative model is (between 0 and 2). If the temperature is high, the model can output words other than the highest probability with a fairly high probability. The generated text will be more diverse, but there is a higher probability for grammar errors and the generation of nonsense.</string>
 <key>label</key>
 <string>Temperature</string>
 <key>type</key>
```
8 changes: 4 additions & 4 deletions workflow/src/text_completion.py
```diff
@@ -10,7 +10,7 @@
 
 openai.api_key = os.getenv("api_key")
 __model = os.getenv("model") or "text-davinci-003"
-__temperature = int(os.getenv("temperature") or 0)
+__temperature = float(os.getenv("temperature") or 0.0)
 __max_tokens = int(os.getenv("max_tokens") or 50)
 __top_p = int(os.getenv("top_p") or 1)
 __frequency_penalty = float(os.getenv("frequency_penalty") or 0.0)
```
```diff
@@ -51,9 +51,9 @@ def stdout_write(output_string: str) -> None:
 
 def env_value_checks() -> None:
     """Checks the environment variables for invalid values."""
-    if __temperature < 0:
+    if __temperature < 0 or __temperature > 2.0:
         stdout_write(
-            f"🚨 'Temperature' must be ≥ 0. But you have set it to {__temperature}."
+            f"🚨 'Temperature' must be ≤ 2.0 and ≥ 0. But you have set it to {__temperature}."
         )
         sys.exit(0)
```

```diff
@@ -81,7 +81,7 @@ def env_value_checks() -> None:
 def make_request(
     model: str,
     prompt: str,
-    temperature: int,
+    temperature: float,
     max_tokens: int,
     top_p: int,
     frequency_penalty: float,
```
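Why the `int()` to `float()` change matters: Python's `int()` rejects fractional strings, so a temperature such as `0.7` set in the workflow configuration would have raised a `ValueError` before any request was made. A quick illustrative check (not part of the repository):

```python
# Parsing behaviour that motivated the fix (illustrative, not workflow code).
raw = "0.7"  # a reasonable temperature value from the workflow configuration

try:
    int(raw)
except ValueError as err:
    print(err)  # invalid literal for int() with base 10: '0.7'

temperature = float(raw)        # the new parsing accepts fractional values
assert 0 <= temperature <= 2.0  # and the new check keeps it in the documented 0-2 range
print(temperature)              # 0.7
```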
