
Uncaught (in promise) Error: failed to call OrtRun(). error code = 6 #54

Closed
ldenoue opened this issue Mar 24, 2023 · 9 comments
Labels
bug Something isn't working

Comments

@ldenoue

ldenoue commented Mar 24, 2023

Describe the bug

Uncaught (in promise) Error: failed to call OrtRun(). error code = 6.
    at e.run (ort-web.min.js:6:454860)
    at e.run (ort-web.min.js:6:444208)
    at e.OnnxruntimeWebAssemblySessionHandler.run (ort-web.min.js:6:447139)
    at o.run (inference-session-impl.js:91:44)
    at x (models.js:52:32)
    at A (models.js:147:34)
    at Function.forward (models.js:936:22)
    at O (models.js:202:29)
    at Function.runBeam (models.js:927:22)
    at Function.generate (models.js:558:41)

How to reproduce
Try on this audio file in Chrome for macOS

Environment

  • Transformers.js version: latest from npm
  • Browser (if applicable): Chrome
  • Operating system (if applicable): MacOS
file.webm


@ldenoue ldenoue added the bug Something isn't working label Mar 24, 2023
@ldenoue ldenoue changed the title [Bug] Title goes here. Uncaught (in promise) Error: failed to call OrtRun(). error code = 6 Mar 24, 2023
@xenova
Collaborator

xenova commented Mar 24, 2023

Could you post the code you used?

I'm not currently at my computer, so I can't run it yet, but one potential issue is that it's a .webm video file, which might be messing with the audio extraction part of the code.

@ldenoue
Author

ldenoue commented Mar 24, 2023

Here is what I'm using in my worker file:

importScripts('https://cdn.jsdelivr.net/npm/@xenova/transformers/dist/transformers.min.js');

async function speech_to_text(data) {
    if (!data.model)
        data.model = 'openai/whisper-tiny.en';
    let pipe = await pipeline('automatic-speech-recognition', data.model);
    return await pipe(data.audio, {
        max_new_tokens: Infinity,
        top_k: 0,
        do_sample: false,
        chunk_length_s: 30,
        stride_length_s: 5,
        return_timestamps: true,
        force_full_sequences: false,
        return_chunks: true,
        callback_function: function () {
            /*const decodedText = pipe.tokenizer.decode(beams[0].output_token_ids, {
                skip_special_tokens: true,
            })*/
            self.postMessage({
                type: 'update',
                data: '' //decodedText.trim()
            });
        }
    });
}

self.addEventListener('message', async (event) => {
    const data = event.data;

    let result = await speech_to_text(data);
    self.postMessage({
        type: 'result',
        data: result
    });
});

@xenova
Collaborator

xenova commented Mar 24, 2023

Okay, one problem is that you are creating the model each time the function is run.

Note that let pipe = await pipeline('automatic-speech-recognition', data.model); allocates new memory for a model, which, due to how ONNX Runtime manages memory, is not released unless the .dispose() method is called.

I would recommend creating a factory/singleton class to handle this, to ensure that only one model is created.
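A minimal sketch of such a singleton (the stub `pipeline` below stands in for the real transformers.js factory loaded via importScripts so the snippet is self-contained; the `TranscriptionPipeline` name is illustrative, not part of the library):

```javascript
// Stub for transformers.js's `pipeline` factory, included only so this
// sketch runs on its own; in the worker, the real one comes from importScripts.
async function pipeline(task, model) {
    return { task, model, dispose: async () => {} };
}

// Singleton wrapper: the pipeline is created once and reused on every
// message, instead of allocating new model memory on each call.
class TranscriptionPipeline {
    static model = 'openai/whisper-tiny.en';
    static instance = null;

    static async getInstance(model = TranscriptionPipeline.model) {
        if (this.instance === null) {
            // Cache the promise itself, so concurrent callers
            // share a single in-flight model load.
            this.instance = pipeline('automatic-speech-recognition', model);
        }
        return this.instance;
    }
}
```

The worker's message handler would then call `await TranscriptionPipeline.getInstance()` instead of `pipeline(...)` directly.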

@ldenoue
Author

ldenoue commented Mar 25, 2023

@xenova do you think it's okay if I just call pipe.dispose() ?
I'd rather free the memory as soon as the transcription is done, because my web app needs memory for other tasks (video encoding).

let pipe = await pipeline('automatic-speech-recognition',data.model);
let result = await pipe(data.audio, {...});
pipe.dispose();
return result;

@xenova
Collaborator

xenova commented Mar 25, 2023

Yep, .dispose() will work 👍

It's an asynchronous function, so, if you want to be 100% sure the resources are cleaned up before progressing, remember to await its result.

Here's some example usage: https://github.com/xenova/transformers.js/blob/main/tests/index.js
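For the create-run-dispose approach, a hedged sketch (again with a stub `pipeline` so it runs standalone; the real transformers.js pipeline is a callable object that also exposes an async `dispose()`, which is the shape mimicked here):

```javascript
// Stub mirroring the shape of a transformers.js pipeline: a callable
// object carrying an async dispose(); included only to keep this runnable.
async function pipeline(task, model) {
    const pipe = async (audio, options) => ({ text: '(transcript)' });
    pipe.dispose = async () => { pipe.disposed = true; };
    return pipe;
}

// Create, run, and tear down the model in one shot, awaiting dispose()
// in a finally block so memory is released even if transcription throws.
async function transcribeOnce(audio, model = 'openai/whisper-tiny.en') {
    const pipe = await pipeline('automatic-speech-recognition', model);
    try {
        return await pipe(audio, { return_timestamps: true });
    } finally {
        await pipe.dispose();
    }
}
```

The try/finally keeps the dispose guaranteed, which matters when the rest of the app (e.g. video encoding) is competing for the same memory.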

@ldenoue
Author

ldenoue commented Mar 25, 2023

I noticed that the tiny.en model doesn't have the issue, only the tiny and base models.
Perhaps a memory issue, then, since I expect tiny (multilingual) to be bigger than tiny.en?

@xenova
Collaborator

xenova commented Mar 25, 2023

Hmmm, perhaps. I'll do some testing tonight 👍

@xenova
Collaborator

xenova commented Mar 30, 2023

I made some updates to the ONNX models (re-exported after HF made some fixes), and I tested all 6 models on my side (tiny/base/small, English-only and multilingual); everything seemed to work correctly!

I'll close the issue for now, but feel free to reopen or open a new issue if you have any problems/questions!

@xenova xenova closed this as completed Mar 30, 2023
@ldenoue
Author

ldenoue commented Mar 30, 2023

Confirmed it now works.
