
fetch, ESM & streamCompletion util #45

Closed
wants to merge 2 commits

Conversation


@gfortaine gfortaine commented Jan 4, 2023

  • chore(upgrade): upgrade to TypeScript 4.9
  • chore(upgrade): enable ESM support
  • chore(upgrade): add streamCompletion helper function (credits to @schnerd & @rauschma; see the usage sketch after this list)
  • chore(upgrade): update README for fetch support
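
For context, a minimal sketch of how a helper like streamCompletion is typically consumed (the exact export name, import path, and signature are defined by this PR, so treat this as illustrative only):

import { Configuration, OpenAIApi } from "openai";
// `streamCompletion` is the async-generator helper added by this PR;
// its import path and exact shape are assumed here.

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// Request a streamed completion; with axios this needs responseType: "stream".
const completion = await openai.createCompletion(
  { model: "text-davinci-003", prompt: "Say hello", stream: true },
  { responseType: "stream" }
);

// The helper yields each SSE "data:" payload (handling [DONE] internally).
for await (const message of streamCompletion(completion.data)) {
  const parsed = JSON.parse(message);
  process.stdout.write(parsed.choices[0].text);
}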

@gfortaine
Author

Related openai/openai-openapi#10

@gfortaine gfortaine mentioned this pull request Jan 4, 2023
@gfortaine gfortaine force-pushed the master branch 3 times, most recently from e8a3330 to 4ef8ed1 on January 4, 2023 23:59
@gfortaine
Author

Here is the package 🎉: https://www.npmjs.com/package/@fortaine/openai

@gfortaine gfortaine force-pushed the master branch 3 times, most recently from 045daf7 to d413829 on January 5, 2023 23:23
@gfortaine gfortaine force-pushed the master branch 8 times, most recently from 79d4747 to be8bdbc on January 17, 2023 14:07
@gfortaine gfortaine changed the title from "V4 Upgrade : axios 1.x, ESM, TypeScript 4.9 & streamCompletion util" to "V5 Upgrade : fetch, ESM, TypeScript 4.9 & streamCompletion util" on Jan 17, 2023
@gfortaine gfortaine force-pushed the master branch 2 times, most recently from 75effed to 85b4bea on January 17, 2023 23:12
@schnerd
Collaborator

schnerd commented Jan 18, 2023

Thank you for all the contributions! Just wanted to let you know that we've seen this and will respond soon; things have been a little busy, and we're having trouble carving out time to fully evaluate all the changes included here.

@leerob

leerob commented Jan 18, 2023

Thank you for making this PR! Very excited for it, as it will allow the client to be used on Edge compute platforms (e.g. Vercel Edge Functions).

@gfortaine gfortaine force-pushed the master branch 3 times, most recently from e9bd8e9 to 15e9aae on January 23, 2023 12:35
@gfortaine gfortaine force-pushed the master branch 4 times, most recently from e598ce1 to d3e0148 on January 29, 2023 23:47
@gfortaine
Author

gfortaine commented Jan 30, 2023

@schnerd @leerob @DennisKo this PR should be fully compatible with Vercel's Edge Runtime now 🚀 A few comments:

  • This package is now pure ESM
  • Ajv SerDes has been removed for the time being, and ObjectSerializer has been fixed (the generator was emitting "any" types, which caused some nasty serialization bugs, for example when passing an array for prompt in createCompletion; see the sketch after this list)
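
For instance, an illustrative call using the array form of prompt that previously broke serialization (assumes an OpenAIApi instance named openai, as usual):

// Both prompts are completed in one request; the serializer must keep
// `prompt` as a string array rather than coercing it.
const completion = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: ["Write a haiku about the sea.", "Write a haiku about the sky."],
  max_tokens: 32,
});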

@louisgv

louisgv commented Jan 31, 2023

Really looking forward to this PR - it will enable the openai API to be used in background service workers for web extensions!

I feel like bundling isomorphic-fetch and http might bloat the bundle. The ideal setup for me would be a constructor that allows a custom axios instance to be passed down (see the sketch below).
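
For illustration, a sketch of that setup, assuming the generated client keeps its usual (configuration, basePath, axios) constructor signature:

import axios from "axios";
import { Configuration, OpenAIApi } from "openai";

// Hypothetical: the caller supplies a pre-configured instance instead of
// the library bundling its own HTTP stack.
const customAxios = axios.create({ timeout: 30_000 });

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY }),
  undefined,
  customAxios
);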

@danielrhodes

This adds a number of dependencies that make it harder to run in other contexts, for example a Cloudflare Worker (which doesn't run Node). Just switching out axios for fetch would be great on its own.

@cfortuner

Any update here?

Would love to see this merged soon!

@rogerahuntley

Patiently awaiting this ❤️

@ericlewis

Patiently awaiting this ❤️

I have a lazy, automated version of this: @ericlewis/openai. It is updated every hour by a VERY simple script, shared here for transparency:

#!/usr/bin/env bash

# Write an npm auth token so `npm publish` can authenticate in CI.
touch .npmrc && echo "//registry.npmjs.org/:_authToken=\${NPM_TOKEN}" >> .npmrc

openai_version=$(npm view openai version)
ericlewis_version=$(npm view @ericlewis/openai version)

# Compare the versions and only republish when upstream has changed
if [[ "$openai_version" == "$ericlewis_version" ]]; then
  echo "Versions match, skipping."
else
  npm install change-package-name typescript@4 --save-dev
  # Swap axios for redaxios, a fetch-based stand-in with a similar API
  npm uninstall axios
  npm install redaxios
  npx change-package-name @ericlewis/openai

  # Stub out the axios type imports and pull in redaxios in their place
  sed -i "s/import type { AxiosPromise, AxiosInstance, AxiosRequestConfig } from 'axios';/type AxiosPromise<T = any> = Promise<{data: T, status: number, statusText: string, request?: any, headers: any, config: any}>;\ntype AxiosInstance = any;\ntype AxiosRequestConfig = any;\nimport globalAxios from 'redaxios';/g" *.ts
  sed -i "s/import type { AxiosInstance, AxiosResponse } from 'axios';/type AxiosInstance = any;/g" *.ts
  # Drop axios-specific generics that redaxios doesn't provide
  sed -i "s/<T = unknown, R = AxiosResponse<T>>//g" *.ts
  sed -i "s/return axios.request<T, R>(axiosRequestArgs);/return axios.request(axiosRequestArgs);/g" *.ts
  # Raise the compile target and enable esModuleInterop for the redaxios import
  sed -i 's/"target": "es6",/"target": "es2021",\n    "esModuleInterop": true,/g' tsconfig.json
  sed -i '/import globalAxios from '\''axios'\'';/d' *.ts

  npm run build
  npm publish
fi

@ericlewis ericlewis mentioned this pull request Mar 2, 2023
@gfortaine gfortaine changed the title from "V5 Upgrade : fetch, ESM, TypeScript 4.9 & streamCompletion util" to "fetch, ESM & streamCompletion util" on Mar 5, 2023
@gfortaine
Author

@leerob @DennisKo @louisgv @danielrhodes @cfortuner @rogerahuntley @ericlewis cc @schnerd Here it is 🎉:

import { Configuration, OpenAIApi } from "openai";
import fetchAdapter from "@haverstack/axios-fetch-adapter";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  baseOptions: {
    adapter: fetchAdapter
  }
});

const openai = new OpenAIApi(configuration);


@dqbd

dqbd commented Mar 6, 2023

import fetchAdapter from "@haverstack/axios-fetch-adapter";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  baseOptions: { 
    adapter: fetchAdapter
  }
});

Note: It doesn't seem like @vespaiach/axios-fetch-adapter supports streaming (cc @gfortaine)
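
For reference, here is roughly what streaming over plain fetch involves and what an adapter would need to expose (a sketch assuming a runtime with a global fetch and readable response bodies, not code from this PR):

const res = await fetch("https://api.openai.com/v1/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "text-davinci-003",
    prompt: "Say hello",
    stream: true,
  }),
});

if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

// Each chunk holds one or more SSE events of the form "data: {...}\n\n".
const reader = res.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value));
}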

@s123121

s123121 commented Mar 24, 2023

Is there an ETA for this? How can I help move this forward?

@dustinlacewell

This is sorely needed. Can someone speak to what is needed and where help can best be directed?

@dan-kwiat

dan-kwiat commented Mar 31, 2023

@s123121 @dustinlacewell in the meantime you can use openai-edge (supports streaming)

@dustinlacewell

@s123121 @dustinlacewell in the meantime you can use openai-edge (supports streaming)

It's not helpful for third-party libraries that use openai though. (Though it is very welcome, thanks for making it.)

@dosstx

dosstx commented May 28, 2023

@s123121 @dustinlacewell in the meantime you can use openai-edge (supports streaming)

It's not helpful for third-party libraries that use openai though. (Though it is very welcome, thanks for making it.)

About to install and try this out. Are you saying it won't work for non-Next apps like Nuxt, etc?

@dan-kwiat

@dosstx did you try it? openai-edge has no dependencies, so it works anywhere. If you're running in an environment without a global fetch defined, you can pass your own since v0.6:

import { Configuration, OpenAIApi } from "openai-edge"
const configuration = new Configuration({
  apiKey: "your-key-here"
})
// The third argument is a custom fetch implementation (e.g. Nuxt's $fetch).
const openai = new OpenAIApi(configuration, null, $fetch)

@dosstx

dosstx commented May 29, 2023

@dan-kwiat Using Nuxt and setting up the code like so, I get an empty response object back despite a 200 status:

response from server:

Response {
  [Symbol(realm)]: { settingsObject: {} },
  [Symbol(state)]: {
    aborted: false,
    rangeRequested: false,
    timingAllowPassed: false,
    requestIncludesCredentials: false,
    urlList: [],
    body: { stream: undefined, source: null, length: null }
  },
  [Symbol(headers)]: HeadersList {
    cookies: null,
    [Symbol(headers map)]: Map(4) {
      'access-control-allow-origin' => [Object],
      'content-type' => [Object],
      'cache-control' => [Object],
      'x-accel-buffering' => [Object]
    },
    [Symbol(headers map sorted)]: null
  }
}

server/api/chat.post.js (the server API route that calls the OpenAI API):

import { Configuration, OpenAIApi } from "openai-edge"

const rConfig = useRuntimeConfig()

const configuration = new Configuration({
  apiKey: rConfig.OPENAI_API_KEY,
})
const openai = new OpenAIApi(configuration)

export const config = {
  runtime: "edge",
}

export default defineEventHandler(async (event) => {

  try {
    const completion = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Who won the world series in 2020?" },
        {
          role: "assistant",
          content: "The Los Angeles Dodgers won the World Series in 2020.",
        },
        { role: "user", content: "Where was it played?" },
      ],
      max_tokens: 7,
      temperature: 0,
      stream: true,
    })

    const response = new Response(completion.body, {
      headers: {
        "Access-Control-Allow-Origin": "*",
        "Content-Type": "text/event-stream;charset=utf-8",
        "Cache-Control": "no-cache, no-transform",
        "X-Accel-Buffering": "no",
      },
    })
    console.log('server: ', response)
    return response
  } catch (error) {
    console.error(error)

    return new Response(JSON.stringify(error), {
      status: 400,
      headers: {
        "content-type": "application/json",
      },
    })
  }
})

@dustinlacewell

@s123121 @dustinlacewell in the meantime you can use openai-edge (supports streaming)

It's not helpful for third-party libraries that use openai though. (Though it is very welcome, thanks for making it.)

About to install and try out this. Are you saying it won't work for non-Next apps like Nuxt, etc?

I was speaking of third-party packages that use openai as a dependency. There's no good way of forcing those packages to use openai-edge, as far as I know.

@rattrayalex
Collaborator

Thank you for putting this together!

Great news: we have a new, fully rewritten version v4.0.0 coming that supports ESM, uses fetch, and has conveniences for streaming (with more coming soon).

Please give it a try and let us know what you think in the linked discussion!
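
For example, streaming in v4 looks roughly like this (a sketch based on the v4 beta at the time; see the linked discussion for authoritative examples):

import OpenAI from "openai";

// v4 reads OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

const stream = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Say hello" }],
  stream: true,
});

// Each chunk carries an incremental delta of the reply.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}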

@rattrayalex rattrayalex added the fixed in v4 Issues addressed by v4 label Jul 10, 2023