[Bug] Still about the ZhiPu API #3909
Comments
🥰 Description of requirements
I now strongly suspect that the earlier glm-4v-plus API calling problem #3863 was caused by the model not being tagged with <file>, similar to the following; the current glm-4v-plus model card does not carry this tag.
Suggestion: add a ZHIPU_MODEL_LIST environment variable so that tags can be added via custom configuration, and update the default tags as soon as possible. Reference document: Document
🧐 Solution
Suggestion: add a ZHIPU_MODEL_LIST environment variable so that tags can be added via custom configuration, and update the default tags as soon as possible. Reference document: Document
📝 Supplementary information
No response |
Thank you for raising an issue. We will look into the matter and get back to you as soon as possible. |
A similar issue was discussed in a closed thread: "The custom model gpt-4-turbo-2024-04-09 does not support image upload". The solution involved ensuring the model supports image recognition and configuring it accordingly in the settings. To address your request, you can add a ZHIPU_MODEL_LIST environment variable so that custom models, along with the tags they need, can be defined through configuration.
Here is the updated code:
import { ModelProviderCard } from '@/types/llm';
// Function to get custom models from environment variable
const getCustomModels = () => {
const customModels = process.env.ZHIPU_MODEL_LIST;
return customModels ? JSON.parse(customModels) : [];
};
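// Note (assumption): JSON.parse will throw if ZHIPU_MODEL_LIST is set but is not valid JSON,
// and the parsed value is not checked against the expected model-card shape; wrapping the
// call in try/catch and falling back to [] would make this sketch more robust.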
// Default models
const defaultModels = [
{
description:
'GLM-4-AllTools 是专门为支持智能体和相关任务而进一步优化的模型版本。它能够自主理解用户的意图,规划复杂的指令,并能够调用一个或多个工具(例如网络浏览器、代码解释器和文本生图像)以完成复杂的任务。',
displayName: 'GLM-4-AllTools',
enabled: true,
functionCall: true,
id: 'glm-4-alltools',
tokens: 128_000,
},
{
description: '高智能旗舰:性能全面提升,长文本和复杂任务能力显著增强',
displayName: 'GLM-4-Plus',
enabled: true,
functionCall: true,
id: 'glm-4-plus',
tokens: 128_000,
},
{
description: '高智能模型:适用于处理高度复杂和多样化的任务',
displayName: 'GLM-4-0520',
enabled: true,
functionCall: true,
id: 'glm-4-0520',
tokens: 128_000,
},
{
description: '旧版旗舰:发布于2024年1月16日,目前已被 GLM-4-0520 取代', // deprecated on 2025-06
displayName: 'GLM-4',
functionCall: true,
id: 'glm-4',
tokens: 128_000,
},
{
description: '性价比最高的版本,综合性能接近GLM-4,速度快,价格实惠',
displayName: 'GLM-4-Air',
enabled: true,
functionCall: true,
id: 'glm-4-air',
tokens: 128_000,
},
{
description: 'GLM-4-Air 的高性能版本,效果不变,推理速度达到其2.6倍',
displayName: 'GLM-4-AirX',
functionCall: true,
id: 'glm-4-airx',
tokens: 8192,
},
{
description: '超长输入:专为处理超长文本和记忆型任务设计',
displayName: 'GLM-4-Long',
enabled: true,
functionCall: true,
id: 'glm-4-long',
tokens: 1_024_000,
},
{
description: '适用简单任务,速度最快,价格最实惠的版本',
displayName: 'GLM-4-Flash',
enabled: true,
functionCall: true,
id: 'glm-4-flash',
tokens: 128_000,
},
{
description:
'视频和图像理解:具备视频内容和多图片的理解能力',
displayName: 'GLM-4V-Plus',
enabled: true,
id: 'glm-4v-plus',
tokens: 8192,
vision: true,
fileUpload: true, // Added file upload support
},
{
description:
'图像理解:具备图像理解能力和推理能力',
displayName: 'GLM-4V',
enabled: true,
id: 'glm-4v',
tokens: 2048,
vision: true,
},
{
description:
'CodeGeeX是一款强大的AI编程助手,提供智能问答和代码补全功能,支持多种编程语言,帮助开发者提高编程效率。',
displayName: 'CodeGeeX-4',
id: 'codegeex-4',
tokens: 128_000,
},
{
description:
'支持基于人设的角色扮演、超长多轮的记忆、千人千面的角色对话,广泛应用于情感陪伴、游戏智能NPC、网红/明星/影视剧IP分身、数字人/虚拟主播、文字冒险游戏等拟人对话或游戏场景。',
displayName: 'CharGLM-3',
id: 'charglm-3',
tokens: 4096,
},
{
description: '心理模型:具备专业咨询能力,帮助用户理解情感并应对情绪问题',
displayName: 'Emohaa',
id: 'emohaa',
tokens: 8192,
},
];
// Merge default models with custom models from environment variable
const chatModels = [...defaultModels, ...getCustomModels()];
const ZhiPu: ModelProviderCard = {
chatModels,
checkModel: 'glm-4-flash',
id: 'zhipu',
name: 'ZhiPu',
};
export default ZhiPu;
This code will allow you to add custom models via the ZHIPU_MODEL_LIST environment variable. |
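For illustration only, here is what a ZHIPU_MODEL_LIST value could look like under the JSON.parse approach sketched above (the JSON shape mirrors the model cards in that sketch and is an assumption, not LobeChat's documented *_MODEL_LIST syntax):
// Hypothetical value, set here in code only to keep the example self-contained.
process.env.ZHIPU_MODEL_LIST = JSON.stringify([
  {
    displayName: 'GLM-4V-Plus',
    enabled: true,
    fileUpload: true, // the tag this issue asks for
    id: 'glm-4v-plus',
    tokens: 8192,
    vision: true,
  },
]);
// getCustomModels() would then return this extra card, which is merged into chatModels.
console.log(getCustomModels());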
As for why I don’t use one api and configure it in |
It is not only glm-4v-plus that needs this tag; glm-4v needs it too, and perhaps all vision models do. |
@dosu, please help me change the label of this issue. |
You cannot use one-api and configure it there. To set the file upload tag on ZhiPu's vision models, update the provider configuration as follows:
const ZhiPu: ModelProviderCard = {
chatModels: [
{
description: 'GLM-4V-Plus has the ability to understand video content and multiple pictures, and is suitable for multi-modal tasks. ',
displayName: 'GLM-4V-Plus',
enabled: true,
id: 'glm-4v-plus',
pricing: {
currency: 'CNY',
input: 10,
output: 10,
},
tokens: 8192,
vision: true,
fileUpload: true, // Added file upload support
},
{
description: 'GLM-4V provides powerful image understanding and reasoning capabilities, supporting a variety of visual tasks. ',
displayName: 'GLM-4V',
id: 'glm-4v',
pricing: {
currency: 'CNY',
input: 50,
output: 50,
},
tokens: 2048,
vision: true,
fileUpload: true, // Added file upload support
},
// ... other models
],
checkModel: 'glm-4-flash',
description: 'Zhipu AI provides an open platform for multi-modal and language models, supporting a wide range of AI application scenarios, including text processing, image understanding, and programming assistance. ',
id: 'zhipu',
modelsUrl: 'https://open.bigmodel.cn/dev/howuse/model',
name: 'ZhiPu',
url: 'https://zhipuai.cn',
};
export default ZhiPu;
Additionally, make sure ZHIPU_MODEL_LIST is correctly set in the LLM environment configuration:
export const llmEnv = getLLMConfig({
ENABLED_ZHIPU: !!process.env.ZHIPU_API_KEY,
ZHIPU_API_KEY: process.env.ZHIPU_API_KEY,
ZHIPU_MODEL_LIST: process.env.ZHIPU_MODEL_LIST,
});
This configuration includes the ZHIPU_MODEL_LIST environment variable.
The label has been changed. |
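As a rough illustration of why these tags matter, here is a minimal sketch (the helper and the exact field names are assumptions for illustration, not LobeChat's actual implementation) of gating image attachment on a model card's capability flags:
// Minimal local shape for this sketch; the real chat-model card type in the repo may differ.
interface SketchModelCard {
  id: string;
  vision?: boolean;
  fileUpload?: boolean;
}

// A model without the vision/fileUpload tags would be treated as text-only,
// which is the "cannot upload images" behavior reported in this issue.
const canAttachImage = (models: SketchModelCard[], modelId: string): boolean => {
  const model = models.find((m) => m.id === modelId);
  return Boolean(model?.vision && model?.fileUpload);
};

// With the cards above, glm-4v-plus carries both tags:
canAttachImage(ZhiPu.chatModels, 'glm-4v-plus'); // → true under this sketch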
@arvinxx Could you take a look? I'm worried this issue will get pushed down and you won't see it. |
@BiFangKNT ZHIPU_MODEL_LIST should already be supported, right? |
It isn't mentioned in the documentation. And that's only the first point; the second is that the official default tags should be updated: so far ZhiPu's vision models still don't have the upload capability by default, and the multimodal model glm-4-alltools doesn't even support vision by default. |
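A minimal sketch of the defaults this comment is asking for, assuming glm-4-alltools really does accept image input as claimed here (the flag names follow the cards used earlier in this thread and are not confirmed against the repo):
// Hypothetical default tags the commenter is requesting; purely illustrative.
const requestedDefaults = [
  {
    displayName: 'GLM-4-AllTools',
    functionCall: true,
    id: 'glm-4-alltools',
    tokens: 128_000,
    vision: true, // requested: the multimodal model should expose vision by default
  },
  {
    displayName: 'GLM-4V-Plus',
    fileUpload: true, // requested: vision models should allow upload by default
    id: 'glm-4v-plus',
    tokens: 8192,
    vision: true,
  },
];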
I tried it and it didn't work. Setting this environment variable does not take effect. |
@arvinxx I've tracked it down: the ZhiPu API does not support IPv6, so an IPv6-only URL cannot be passed to it; however, the image can be passed by converting it to base64. A check could be added to the relevant logic so that, for ZhiPu-family models, images are converted to base64 first; alternatively, a toggle for base64 conversion could be added to the ZhiPu model settings. The test script was as follows:
The returned result was as follows:
And I still suggest adding the ZHIPU_MODEL_LIST environment variable. I also have another question: |
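The commenter's test script and its output were not captured above. Purely as a sketch of the workaround being proposed (fetching an image and converting it to a base64 data URI before sending it to a ZhiPu-family model; the isZhipuFamily check and data-URI handling are assumptions, not the project's actual logic):
// Minimal sketch: convert an image URL to a base64 data URI so it can be sent
// inline instead of as a URL the ZhiPu API would have to fetch itself.
const toBase64DataUri = async (imageUrl: string): Promise<string> => {
  const res = await fetch(imageUrl);
  const mime = res.headers.get('content-type') ?? 'image/png';
  const buffer = Buffer.from(await res.arrayBuffer());
  return `data:${mime};base64,${buffer.toString('base64')}`;
};

// Hypothetical gate: only rewrite the URL for ZhiPu-family models.
const isZhipuFamily = (modelId: string) => /^(glm|codegeex|charglm|emohaa)/i.test(modelId);

const prepareImageForModel = async (modelId: string, imageUrl: string) =>
  isZhipuFamily(modelId) ? toBase64DataUri(imageUrl) : imageUrl;

// Usage: await prepareImageForModel('glm-4v-plus', 'https://example.com/cat.png');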
This issue has been closed. If you have any questions, please feel free to comment. |