Exclude max_tokens from request if it's None (#14334)

We found that a request with `max_tokens=None` results in the following error from Anthropic:

```
HTTPError: 400 Client Error: Bad Request for url: https://oregon.staging.cloud.databricks.com/serving-endpoints/corey-anthropic/invocations. 
Response text: {"error_code":"INVALID_PARAMETER_VALUE","message":"INVALID_PARAMETER_VALUE: max_tokens was not of type Integer: null"}
```

This PR excludes `max_tokens` from the request payload if it's `None`.
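
The fix applies the same pattern in all three files: optional parameters are added to the request dict only when they are actually set. A minimal standalone sketch of the idea (hypothetical `build_payload` helper, not the actual LangChain code):

```python
from typing import Any, Dict, List, Optional


def build_payload(
    prompt: str,
    temperature: float = 0.0,
    n: int = 1,
    stop: Optional[List[str]] = None,
    max_tokens: Optional[int] = None,
) -> Dict[str, Any]:
    """Build a request payload, omitting optional parameters that are unset."""
    data: Dict[str, Any] = {"prompt": prompt, "temperature": temperature, "n": n}
    if stop:
        data["stop"] = stop
    if max_tokens is not None:
        data["max_tokens"] = max_tokens
    return data


assert "max_tokens" not in build_payload("hello")  # omitted instead of sent as null
assert build_payload("hello", max_tokens=256)["max_tokens"] == 256
```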
harupy authored Dec 6, 2023
1 parent 86b08d7 commit 5efaedf
Showing 3 changed files with 12 additions and 11 deletions.
4 changes: 2 additions & 2 deletions libs/langchain/langchain/chat_models/mlflow.py

```diff
@@ -115,13 +115,13 @@ def _generate(
             "messages": message_dicts,
             "temperature": self.temperature,
             "n": self.n,
-            "stop": stop or self.stop,
-            "max_tokens": self.max_tokens,
             **self.extra_params,
             **kwargs,
         }
         if stop := self.stop or stop:
             data["stop"] = stop
+        if self.max_tokens is not None:
+            data["max_tokens"] = self.max_tokens
         resp = self._client.predict(endpoint=self.endpoint, inputs=data)
         return ChatMlflow._create_chat_result(resp)
```
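
The conditional assignment matters because the payload is serialized to JSON before it reaches the serving endpoint, and a key whose value is `None` becomes an explicit `null`, which is exactly what the error above rejects. A quick standard-library illustration:

```python
import json

# Serializing the old-style payload sends an explicit null for max_tokens...
print(json.dumps({"messages": [], "max_tokens": None}))
# -> {"messages": [], "max_tokens": null}

# ...whereas conditionally setting the key leaves it out entirely.
data: dict = {"messages": []}
max_tokens = None
if max_tokens is not None:
    data["max_tokens"] = max_tokens
print(json.dumps(data))
# -> {"messages": []}
```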
15 changes: 7 additions & 8 deletions libs/langchain/langchain/llms/databricks.py

```diff
@@ -334,13 +334,14 @@ class Config:
 
     @property
     def _llm_params(self) -> Dict[str, Any]:
-        params = {
+        params: Dict[str, Any] = {
             "temperature": self.temperature,
             "n": self.n,
-            "stop": self.stop,
-            "max_tokens": self.max_tokens,
-            **(self.model_kwargs or self.extra_params),
         }
+        if self.stop:
+            params["stop"] = self.stop
+        if self.max_tokens is not None:
+            params["max_tokens"] = self.max_tokens
         return params
 
     @validator("cluster_id", always=True)
@@ -457,11 +458,9 @@ def _call(
         request: Dict[str, Any] = {"prompt": prompt}
         if self._client.llm:
             request.update(self._llm_params)
-            request.update(self.model_kwargs or self.extra_params)
-        else:
-            request.update(self.model_kwargs or self.extra_params)
+        request.update(self.model_kwargs or self.extra_params)
         request.update(kwargs)
-        if stop := self.stop or stop:
+        if stop:
             request["stop"] = stop
 
         if self.transform_input_fn:
```
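
Besides gating `stop` and `max_tokens`, the Databricks change applies `self.model_kwargs or self.extra_params` once, unconditionally, rather than duplicating the update in both branches of the `if self._client.llm` check. A rough stand-in for the resulting request assembly (simplified sketch with a hypothetical `assemble_request` helper, not the actual class):

```python
from typing import Any, Dict, List, Optional


def assemble_request(
    prompt: str,
    llm_params: Optional[Dict[str, Any]] = None,    # analogous to _llm_params
    model_kwargs: Optional[Dict[str, Any]] = None,  # user-supplied extras
    stop: Optional[List[str]] = None,
    **kwargs: Any,
) -> Dict[str, Any]:
    request: Dict[str, Any] = {"prompt": prompt}
    if llm_params:                      # only when talking to an LLM serving endpoint
        request.update(llm_params)
    request.update(model_kwargs or {})  # applied once, for both branches
    request.update(kwargs)
    if stop:
        request["stop"] = stop
    return request
```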
4 changes: 3 additions & 1 deletion libs/langchain/langchain/llms/mlflow.py

```diff
@@ -106,12 +106,14 @@ def _call(
             "prompt": prompt,
             "temperature": self.temperature,
             "n": self.n,
-            "max_tokens": self.max_tokens,
             **self.extra_params,
             **kwargs,
         }
         if stop := self.stop or stop:
             data["stop"] = stop
+        if self.max_tokens is not None:
+            data["max_tokens"] = self.max_tokens
+
         resp = self._client.predict(endpoint=self.endpoint, inputs=data)
         return resp["choices"][0]["text"]
```
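
With these changes, a wrapper that leaves `max_tokens` unset no longer sends `"max_tokens": null` to the endpoint. A usage sketch, assuming the `Databricks` LLM wrapper with its `endpoint_name` parameter and that Databricks credentials are available from the environment or a workspace notebook:

```python
from langchain.llms import Databricks

# max_tokens defaults to None; after this PR the key is simply omitted from the
# request body, so Anthropic-backed serving endpoints no longer reject the call
# with INVALID_PARAMETER_VALUE.
llm = Databricks(endpoint_name="corey-anthropic")  # endpoint name taken from the error above
print(llm("What is MLflow?"))
```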
