This project implements continuous ChatGPT dialogue based on ConversationId and can be integrated with just a few lines of code.
ChatGPTSharp is available as a NuGet package.
Use ConversationId for continuous conversations.
```csharp
ChatGPTClientSettings settings = new ChatGPTClientSettings();
settings.OpenAIToken = File.ReadAllText("KEY.txt");
settings.ModelName = "gpt-4o";
settings.ProxyUri = "http://127.0.0.1:1081";

var client = new ChatGPTClient(settings);
client.IsDebug = true;

// Attach a local image for the Vision model (local files only).
var chatImageModels = new List<ChatImageModel>()
{
    ChatImageModel.CreateWithFile(@"C:\Users\aiqin\Pictures\20231221155547.png", ImageDetailMode.Low)
};

var systemPrompt = "";
var msg = await client.SendMessage("Please describe this image", systemPrompt: systemPrompt, images: chatImageModels);
Console.WriteLine($"{msg.Response} {msg.ConversationId}, {msg.MessageId}");

// Pass the returned ConversationId and MessageId to continue the same conversation.
msg = await client.SendMessage("Have you eaten today?", msg.ConversationId, msg.MessageId);
Console.WriteLine($"{msg.Response} {msg.ConversationId}, {msg.MessageId}");
```
- Removed the obsolete Vision model check.
- Added a setting to disable token calculation.
- Added support for sending images with the Vision model and pre-computing image tokens (local files only).
- Improved the message token algorithm to align with the official API.
- Added more default token-count data for official models, plus automatic conversion of '16k' in model names to the maximum token count.
- As model token limits keep growing, added support for unlimited MaxResponseTokens and MaxPromptTokens: setting them to 0 removes the limit.
- Added support for gpt-3.5-turbo-16k.
- Removed the old token algorithm code and added netstandard2.0 support, so the library can now also be used with .NET Framework.
- Added support for the GPT-4 model and corrected the maximum token counts for the 8k and 32k variants.
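Based on the note above about unlimited tokens, a minimal sketch of lifting both limits; this assumes `MaxResponseTokens` and `MaxPromptTokens` are settable alongside the other `ChatGPTClientSettings` properties shown in the quick-start example:

```csharp
ChatGPTClientSettings settings = new ChatGPTClientSettings();
settings.OpenAIToken = File.ReadAllText("KEY.txt");
settings.ModelName = "gpt-4o";
settings.MaxResponseTokens = 0; // 0 removes the response token limit
settings.MaxPromptTokens = 0;   // 0 removes the prompt token limit
var client = new ChatGPTClient(settings);
```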
Changelog for earlier versions.
- The ChatGPTClient initialization method now accepts a request timeout setting, and the default timeout has been changed from 20 seconds to 60 seconds.
- Now uses TiktokenSharp to calculate token counts, fixing inaccurate token calculation.
- Fixed the token algorithm.
- Temporarily removed the token algorithm, which could throw exceptions for certain string combinations; it will be restored after further testing.
- Added SendMessage parameters sendSystemType and sendSystemMessage to control how system messages are inserted into the conversation.
- Added a local GPT-3 token algorithm, ported from the JavaScript library gpt-3-encoder.
This codebase references node-chatgpt-api.