Change Logs
- Message list now accepts `RunId`
- Upgraded to `Microsoft.Extensions.AI` version 9.0.1, which resolves the "Method not found: '!!0" error when used alongside other SDKs with different versions.
- .NET 9 support added.
  ⚠️ Support for .NET 6 and .NET 7 has ended.
- Fixed utility library issues and synced with the latest version.
- Fixed an issue with the `Store` parameter being included in requests by default, causing errors with Azure OpenAI models. The parameter is now optional and excluded from serialization unless explicitly set.
- Added support for `Microsoft.Extensions.AI` `IChatClient` and `IEmbeddingGenerator` (more information will be coming soon to the Wiki).
- Added missing `Temperature` and `TopP` parameters to `AssistantResponse`.
- Added missing `Store` parameter to `ChatCompletionCreateRequest`.
- Breaking Changes:
  - ⚠️ `CreatedAt` parameter renamed to `CreatedAtUnix` and converted to `long` instead of `int`. Added `CreatedAt` parameter as `DateTimeOffset` type, which will automatically convert Unix time to `DateTime`.
- Realtime API implementation is completed. As usual this is the first version and it may contain bugs. Please report any issues you encounter.
- Realtime Sample
- Compatibility Enhancement: You can now use this library alongside the official OpenAI library and/or Semantic Kernel within the same project. The name changes in this update support this feature.
- Namespace and Package ID Update: The namespace and PackageId have been changed from `Betalgo.OpenAI` to `Betalgo.Ranul.OpenAI`.
- OpenAI Naming Consistency: We've standardized the use of "OpenAI" throughout the library, replacing any instances of "OpenAi" or other variations.
- Migration Instructions: IntelliSense should assist you in updating your code. If it doesn't, please make the following changes manually:
  - Switch to the new NuGet package: `Betalgo.Ranul.OpenAI` instead of `Betalgo.OpenAI`.
  - Update all namespaces from `OpenAI` to `Betalgo.Ranul.OpenAI`.
  - Replace all occurrences of "OpenAi", "Openai", or any other variations with "OpenAI".
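For reference, the package swap and namespace update look roughly like this (a sketch; the exact sub-namespaces used in your project may differ):

```csharp
// Before (NuGet package Betalgo.OpenAI):
// using OpenAI;
// using OpenAI.Managers;

// After (NuGet package Betalgo.Ranul.OpenAI):
using Betalgo.Ranul.OpenAI;
using Betalgo.Ranul.OpenAI.Managers;
```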
- Need Help?: If you encounter any issues, feel free to reach out via our Discord channel, Reddit channel, or GitHub discussions. We're happy to assist.
- Feedback Welcomed: If you notice any mistakes or missing name changes, please create an issue to let us know.
- Utilities Library Status: Please note that the Utilities library might remain broken for a while. I will focus on fixing it after completing the real-time API implementation.
- Fixed incorrect Azure URLs.
- Token usage response extended with `PromptTokensDetails`, `audio_tokens` and `cached_tokens`.
- Model list extended with `Gpt_4o_2024_08_06` and `Chatgpt_4o_latest`.
- Moved `strict` parameter from `ToolDefinition` to `FunctionDefinition`.
- Added support for o1 reasoning models (`o1-mini` and `o1-preview`).
- Added `MaxCompletionTokens` for chat completions.
- Added support for `ParallelToolCalls` for chat completions.
- Added support for `ServiceTier` for chat completions.
- Added support for `ChunkingStrategy` in Vector Store and Vector Store Files.
- Added support for `Strict` in `ToolDefinition`.
- Added support for `MaxNumberResults` and `RankingOptions` for `FileSearchTool`.
- Added support for `ReasoningTokens` for token usage.
- Added support for `ResponseFormatOneOfType` for `AssistantResponse`.
- Added support for Structured Outputs, here is the link for samples: Wiki, Structured Outputs
- Updated Models with new GPT-4o mini model.
- Fixed Azure Assistant URLs.
- Updated library logo.
- Added support for tool resources in Assistant response.
- Introduced `IsDelta` into `BaseResponseModel`, which can help to determine if incoming data is part of the delta.
- Assistant Stream now returns the `BaseResponse` type, but responses can be cast to the appropriate types (`RunStepResponse`, `RunResponse`, `MessageResponse`). The reason for this change is that we realized the stream API returns multiple different object types rather than a single object type.
- The Base Response now has a `StreamEvent` field, which can be used to determine the type of event while streaming.
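A rough sketch of consuming the new stream shape (the stream variable and the event-name strings are illustrative assumptions based on OpenAI's Assistants event naming, not verbatim SDK members):

```csharp
await foreach (var response in assistantStream)
{
    // StreamEvent identifies the concrete payload type of each chunk,
    // so the BaseResponse can be pattern-matched to the right type.
    switch (response.StreamEvent)
    {
        case "thread.run.created":
            if (response is RunResponse run) { /* handle run */ }
            break;
        case "thread.run.step.created":
            if (response is RunStepResponse step) { /* handle run step */ }
            break;
        case "thread.message.delta":
            if (response is MessageResponse message) { /* handle message delta */ }
            break;
    }
}
```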
- Added Stream support for submitToolOutputsToRun, createRun, and createThreadAndRun
- With this update, we are now in sync with OpenAI's latest API changes. We shouldn't have any missing features as of now. 🎉
- Updated Assistant tests, added sample for CreateMessageWithImage
- Azure Assistant endpoints are updated, since the documentation still references an earlier version (Assistant v1). I am not sure whether Azure supports all Assistant v2 features, so feedback is much appreciated.
- Fixed error handling and response parsing for audio transcription result in text mode.
- Fixed Culture issue for number conversions (Audio Temperature and Image N)
- Removed file_ids from Create Assistant
- Added Support for Chat LogProbs
- Fixed File_Id Typo in file VisionImageUrl
- Updated File purpose enum list
- Assistant (Beta) feature is now available in the main package. Be aware there might still be bugs due to the beta status of the feature and the SDK itself. Please report any issues you encounter.
- Use `"UseBeta": true` in your config file, `serviceCollection.AddOpenAIService(r => r.UseBeta = true);`, or `new OpenAiOptions { UseBeta = true }` in your service registration to enable Assistant features.
- Expect more frequent breaking changes around the Assistant API due to its beta nature.
- All Assistant endpoints are implemented except for streaming functionality, which will be added soon.
- The Playground has samples for every endpoint usage, but lacks a complete implementation for the Assistant APIs. Refer to Assistants overview - OpenAI API for more details.
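Putting the registration options above together, enabling the beta surface looks roughly like this (a sketch; the API key values are placeholders):

```csharp
var serviceCollection = new ServiceCollection();

// Option 1: enable beta via the registration callback.
serviceCollection.AddOpenAIService(options =>
{
    options.ApiKey = "your-api-key";
    options.UseBeta = true; // unlocks Assistant (Beta) endpoints
});

// Option 2: construct the options object manually.
var openAiService = new OpenAIService(new OpenAiOptions
{
    ApiKey = "your-api-key",
    UseBeta = true,
});
```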
- Special thanks to all contributors for making this version possible!
- Fixed a bug with multiple tools calling in stream mode.
- Added error handling for streaming.
- Added usage information for streaming (use `StreamOptions = new() { IncludeUsage = true }` to get usage information).
- Added `timestamp_granularities[]` for Create transcription to provide the timestamp of every word.
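A small sketch of requesting usage while streaming (model and message are placeholders; the exact property names are assumed from the SDK's usual conventions):

```csharp
var request = new ChatCompletionCreateRequest
{
    Messages = new List<ChatMessage> { ChatMessage.FromUser("Hello!") },
    Model = Models.Gpt_3_5_Turbo,
    // Ask the server to append a final chunk carrying token usage.
    StreamOptions = new() { IncludeUsage = true },
};

await foreach (var chunk in openAiService.ChatCompletion.CreateCompletionAsStream(request))
{
    // Usage arrives on the final chunk when IncludeUsage is set.
    if (chunk.Usage != null)
    {
        Console.WriteLine($"Total tokens: {chunk.Usage.TotalTokens}");
    }
}
```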
- Fixed incorrect mapping for batch API error response.
- Added support for Batch API
- Added support for new models `gpt-4-turbo` and `gpt-4-turbo-2024-04-09`, thanks to @ChaseIngersol
- Added support for .NET 8.0 thanks to @BroMarduk
- Utilities library updated to work with only .NET 8.0
- Fixed a bug that was causing binary images to be sent as base64 strings, thanks to @yt3trees
- Fixed a bug that was blocking CreateCompletionAsStream on some platforms. #331
- Fixed a bug that was causing an error with multiple tool calls, now we are handling index parameter #493, thanks to @David-Buyer
- Fixed again🥲 incorrect model naming: `moderation` models and the `ada embedding 2` model
- Fixed function calling streaming bugs thanks to @David-Buyer @dogdie233 @gavi @Maracaipe611
- Breaking Change: While streaming (`CreateCompletionAsStream`), there were some unexpected incoming data chunks like `:pings` or `:events`, etc. @gavi discovered this issue. We are now ignoring these chunks. If you were using them, you need to set `justDataMode` to false.
- Added support for new models: `TextEmbeddingV3Small`, `TextEmbeddingV3Large`, `Gpt_3_5_Turbo_0125`, `Gpt_4_0125_preview`, `Gpt_4_turbo_preview`, `Text_moderation_007`, `Text_moderation_latest`, `Text_moderation_stable`
- Added optional dimension and encoding for embedding, thanks to @shanepowell
- Fixed the response format of AudioCreateSpeechRequest.
- Updated Azure OpenAI version to `2023-12-01-preview`, which now supports DALL·E 3.
- Added the ability to retrieve header values from the base response, such as ratelimit, etc. Please note that this feature is experimental and may change in the future.
- Semi-Breaking change:
- The SDK will now attempt to handle 500 errors and other similar errors from the OpenAI server. Previously, an exception was thrown in such cases. Now, the SDK will try to read the response and return it as an error message. This change provides more visibility to developers and helps them understand the cause of the error.
- Let's start with breaking changes:
- OpenAI has replaced function calling with tools. We have made the necessary changes to our code. This is not a major change; now you just have a wrapper around your function calling, which is named as "tool". The Playground provides an example. Please take a look to see how you can update your code.
This update was completed by @shanepowell. Many thanks to him.
- Now we support the Vision API, which involves passing message contents to the existing chat method. It is quite easy to use, but it was not yet covered in the OpenAI API documentation.
  This feature was completed by @belaszalontai. Many thanks to them.
- Added support for "Create Speech" thanks to @belaszalontai / @szabe74
- Added support for DALL·E 3, thanks to @belaszalontai and @szabe74
- Added support for GPT-4-Turbo/Vision thanks to @ChaseIngersol
- Models are updated with the latest additions.
- Reverting a breaking change, which is itself a breaking change (only for 7.3.0):
  - Reverting the usage of `EnsureStatusCode()`, which caused the loss of error information. Initially, I thought it would help in implementing HTTP retry tools, but now I believe it is a bad idea for two reasons:
    - You can't simply retry if the request wasn't successful, because it could fail for various reasons. For example, you might have used too many tokens in your request, causing OpenAI to reject the response, or you might have tried to use a nonexistent model. It would be better to use the Error object in your retry rules. All responses are already derived from this base object.
    - We will lose error response data.
- Updated Moderation categories as reported by @dmki.
- Breaking Changes:
  - Introduced the use of `EnsureStatusCode()` after making requests. Please adjust your code accordingly for handling failure cases. Thanks to @miroljub1995 for reporting.
  - Previously, we used to override paths in the base domain, but this behavior has now changed. If you were using `abc.com/mypath` as the base domain, we used to ignore `/mypath`. This will no longer be the case, and the code will now respect `/mypath`. Thanks to @Hzw576816 for reporting.
- Added ChatGPT fine-tuning support thanks to @aghimir3
- Default Azure OpenAI version increased thanks to @mac8005
- Fixed Azure OpenAI Audio endpoint thanks to @mac8005
- Added error handling for PlatformNotSupportedException in PostAsStreamAsync when using HttpClient.Send, now falls back to SendRequestPreNet6 for compatibility on platforms like MAUI, Mac. Thanks to @Almis90
- We now have a function caller describe method that automatically generates function descriptions. This method is available in the utilities library. Thanks to @vbandi
- This release was a bit late and took longer than expected due to a couple of reasons. The feature was quite big, and I couldn't cover all possibilities. However, I believe I have covered most of the function definitions (with some details missing). Additionally, I added an option to build it manually. If you don't know what I mean, you don't need to worry. I plan to cover the rest of the function definition in the next release. Until then, you can explore this by playing in the Playground or in the source code. This version also supports using other libraries to export your function definition.
- We now have support for functions! Big cheers to @rzubek for completing most of this feature.
- Additionally, we have made bug fixes and improvements. Thanks to @choshinyoung, @yt3trees, @WeihanLi, @N0ker, and all the bug reporters. (Apologies if I missed any names. Please let me know if I missed your name and you have a commit.)
- Bugfix https://github.com/betalgo/openai/pull/302
- Added support for Function role https://github.com/betalgo/openai/issues/303
- Function Calling: We're releasing this version to bring in a new feature that lets you call functions faster. But remember, this version might not be perfectly stable and we might change it a lot later. A big shout-out to @rzubek for helping us add this feature. Although I liked his work, I didn't have enough time to look into it thoroughly. Still, the tests I did showed it was working, so I decided to add his feature to our code. This lets everyone use it now. Even though I'm busy moving houses and didn't have much time, seeing @rzubek's help made things a lot easier for me.
- Support for New Models: This update also includes support for new models that OpenAI recently launched. I've also changed the naming style to match OpenAI's. Model names will no longer start with 'chat'; instead, they'll start with 'gpt_3_5' and so on.
- The code now supports .NET 7.0. Big cheers to @BroMarduk for making this happen.
- The library now automatically disposes of the HttpClient when it's created by the constructor. This feature is thanks to @BroMarduk.
- New support has been added for using more than one instance at the same time. Check out this link for more details. Thanks to @remixtedi for bringing this to my attention.
- A lot of small improvements have been done by @BroMarduk.
- Breaking Changes 😢
  - I've removed 'GPT3' from the namespace, so you might need to modify some aspects of your project. But don't worry, it's pretty simple! For instance, instead of writing `using OpenAI.GPT3.Interfaces`, you'll now write `using OpenAI.Interfaces`.
  - The order of the OpenAI constructor parameters has changed. It now takes 'options' first, then 'httpClient'.

    ```csharp
    // Before
    var openAiService = new OpenAIService(httpClient, options);
    // Now
    var openAiService = new OpenAIService(options, httpClient);
    ```
- Updated Azure OpenAI default API version to the preview version to support ChatGPT. Thanks to all issue reporters.
- Added support for an optional chat `name` field, thanks to @shanepowell
- Breaking Change: `FineTuneCreateRequest.PromptLossWeight` converted to float, thanks to @JohnJ0808
- Mostly bug fixes
- Fixed Moderation functions. https://github.com/betalgo/openai/issues/214 thanks to @scolmarg @AbdelAzizMohamedMousa @digitalvir
- Added File Stream support for Whisper, Thanks to @Swimburger
- Fixed Whisper default response type, Thanks to @Swimburger
- Performance improvements and code cleanup, again thanks to @Swimburger 👏
- Code cleanup, thanks to @WeihanLi
- Released an update message about the NuGet Package ID change
- Breaking Changes:
  - ~~I am going to update the library namespace from `Betalgo.OpenAI.GPT3` to `OpenAI.GPT3`.~~ Reverted namespace change, maybe next time. This is the first time I am trying to update my NuGet packageId. If something is broken, please be patient. I will be fixing it soon.
  - Small typo change on model name: `Model.GPT4` to `Model.GPT_4`
- `ServiceCollection.AddOpenAIService();` now returns `IHttpClientBuilder`, which means it allows you to play with the HttpClient object. Thanks to all the reporters and @LGinC. Here is a little sample:

  ```csharp
  serviceCollection.AddOpenAIService()
      .ConfigurePrimaryHttpMessageHandler(s => new HttpClientHandler
      {
          Proxy = new WebProxy("1.1.1.1:1010"),
      });
  ```
- Breaking Changes: Typo fixed in Content Moderation CategoryScores, changing `Sexualminors` to `SexualMinors`. Thanks to @HowToDoThis.
- Tokenizer changes thanks to @IS4Code.
- Performance improvement
- Introduced a new method `TokenCount` that returns the number of tokens instead of a list.
- Breaking Changes: Removed overridden methods that were basically string conversions. I think these methods were not used much and it is fairly easy to do these conversions outside of the method. If you disagree, let me know and I can consider adding them back.
- Added .NET Standard support, massive thanks to @pdcruze and @ricaun
- Breaking change: `ChatMessage.FromAssistance` is now `ChatMessage.FromAssistant`. Thanks to @Swimburger
- The Tokenizer method has been extended with `cleanUpCREOL`. You can use this option to clean up Windows-style line endings. Thanks to @gspentzas1991
- Removed Microsoft.AspNet.WebApi.Client dependency
- The action build device has been updated to ubuntu due to suspicions that the EOL of the vocab.bpe file had been altered in the last few Windows builds.
- Added support for TextEmbeddingAdaV2 Model.
- Introduced support for Whisper.
- Grateful thanks to @shanepowell for contributing RetrieveFileContent.
- Resolved an issue that was causing problems with the tokenizer. A clean build should hopefully address this.
- Added support for skip options validation
- We have all been waiting for this moment. Please enjoy the ChatGPT API
- Added support for Chat GPT API
- Fixed Tokenizer Bug, it was not working properly.
- Breaking Changes
  - Renamed `Engine` keyword to `Model` in accordance with OpenAI's new naming convention.
  - Deprecated `DefaultEngineId` in favor of `DefaultModelId`.
  - `DefaultEngineId` and `DefaultModelId` are no longer static.
- Added support for Azure OpenAI, a big thanks to @copypastedeveloper!
- Added support for Tokenizer, inspired by @dluc's https://github.com/dluc/openai-tools repository. Please consider giving the repo a star.
- These two changes are recent additions, so please let me know if you encounter any issues.
- Updated documentation links from beta.openai.com to platform.openai.com.
- Sad news, we have Breaking Changes.
  - `SetDefaultEngineId()` replaced by `SetDefaultModelId()`
  - `RetrieveModel(modelId)` will not use the default Model anymore. You have to pass modelId as a parameter.
  - I implemented Model overwrite logic:
    - If you pass a modelId as a parameter it will overwrite the Default Model Id and object modelId
    - If you pass your modelId in your object it will overwrite the Default Model Id
    - If you don't pass any modelId it will use the Default Model Id
    - If you didn't set a Default Model Id, the SDK will throw a null argument exception
    - Parameter Model Id > Object Model Id > Default Model Id
    - If you find this complicated please have a look at the implementation: OpenAI.SDK/Extensions/ModelExtension.cs -> ProcessModelId()
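The precedence rule above can be sketched as a tiny helper (hypothetical code for illustration, not the SDK's actual `ProcessModelId()`):

```csharp
// Parameter Model Id > Object Model Id > Default Model Id
string ResolveModelId(string? parameterModelId, string? objectModelId, string? defaultModelId)
{
    return parameterModelId
        ?? objectModelId
        ?? defaultModelId
        ?? throw new ArgumentNullException(nameof(defaultModelId),
               "Set a default model id or pass one explicitly.");
}
```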
- New method introduced: `GetDefaultModelId()`
- Some name changes replacing the legacy `engine` keyword with the new `model` keyword
- Started to use the latest Completion endpoint. This is expected to solve fine-tuning issues. Thanks to @maiemy and other reporters.
- Bug-fix: `ImageEditRequest.Mask` is now optional, thanks to @hanialaraj (if you are using an edit request without a mask, your image has to be RGBA; RGB is not allowed)
- Bug-fix: now we are handling the logprobs response properly, thanks to @KosmonikOS
- Code clean-up, thanks to @KosmonikOS
- Bug-fix: added `JsonIgnore` for `stop` and `stopAsList`, thanks to @Patapum
- Breaking change.
  - `EmbeddingCreateRequest.Input` was a string-list type; now it is a string type. I have introduced an `InputAsList` property instead of `Input`. You may need to update your code according to the change. Both `Input` (string) and `InputAsList` (string list) are available for use.
- Added string and string-list support for some of the properties:
  - CompletionCreateRequest --> Prompt & PromptAsList / Stop & StopAsList
  - CreateModerationRequest --> Input & InputAsList
  - EmbeddingCreateRequest --> Input & InputAsList
- Added support for new models (davinciv3 & edit models)
- Added support for Edit endpoint.
- (Warning: edit endpoint works with only some of the models, I couldn't find documentation about it, please follow the thread for more information: https://community.openai.com/t/is-edit-endpoint-documentation-incorrect/23361 )
- Some objects were created as classes instead of records in the last version. I changed them to records. This will be a breaking change for some of you.
- With this version I think we cover all of the OpenAI APIs.
- In the next version I will be focusing on code cleanup and refactoring.
- If I don't need to release a bug-fix for this version, I will also be updating the library to .NET 7 in the next version, as I promised.
- OpenAI made a surprise release yesterday and announced the DALL·E API. I had other things to do, but I couldn't resist. Because I was rushing, some method and class names may change in the next release. Until that day, enjoy your creative AI.
- This library now fully supports all DALL·E features.
- I tried to complete the Edit API too, but unfortunately something was wrong with the documentation; I need to ask some questions in the community forum.
- Bug-fixes
- FineTuneCreateRequest suffix JSON property name changed from "Suffix" to "suffix"
- CompletionCreateRequest user JSON property name changed from "User" to "user" (thanks to @shaneqld); also, it is now a nullable string
- I have good news and bad news
- Moderation feature implementation is done. Now we support Moderation.
- Updated some request and response models to catch up with changes in OpenAI API
- The new version has some breaking changes. Because we are in the fall season, I needed to do some cleanup. Sorry for the breaking changes, but most of them are just renamings. I believe they can be resolved before your coffee finishes.
- I am hoping to support Edit Feature in the next version.
- Thanks to @c-d and @sarilouis for their contributions to this version.
- Now we support Embedding endpoint. Thanks to @sarilouis
- Bug fixes and updates for Models
- Code clean-up
- Removed deprecated Answers, Classifications, and Search endpoints https://community.openai.com/t/answers-classification-search-endpoint-deprecation/18532. They will still be available until December via the web API. If you still need them, please do not update to this version.
- Code clean-up
- Organization ID is no longer a required value, thanks to @samuelnygaard
- Removed deprecated Engine Endpoint and replaced it with Models Endpoint. Now Model response has more fields.
- Regarding OpenAI Engine naming, I had to rename the Engine enum and static fields. They are quite similar, but you have to replace them with the new ones. Please use the Models class instead of the Engine class.
- To support fast engine name changing, I have created a new method, `Models.ModelNameBuilder()`; you may consider using it.