As I understand it, when using TrainableModule, tool selection is handled internally, whereas with OnlineChatModule it is delegated to the API. When I deploy a model myself and expose it as an API, using OnlineChatModule raises an error, which appears to be caused by the vLLM deployment.

In practice, one usually deploys a large-model API privately and then uses OnlineChatModule. If the deployed API does not support tool selection, function calling becomes impossible. Would it be possible to control how tools are handled via a parameter, without modifying the already-deployed model API?
We recommend deploying the large-model service with TrainableModule, and then connecting to it via TrainableModule().deploy_method(xx, url=xx).