[audioworklet] Expose fetch() in AudioWorkletGlobalScope. #1439
Comments
WG: let's take this up at TPAC. Not obvious that we want to add network data transfers to audio thread as opposed to doing these elsewhere (main thread, Worker) and message or otherwise serialize the data into the AW thread. |
I won't be at TPAC. What you have to realize is that I think worklets have an opaque origin at the moment, so everything would essentially be a CORS request. We probably want to change that as I doubt that's workable. We also need to be careful about who decides on CSP, the referrer policy, the referrer, and the client for the worklet. There's some precedent with style sheets, modules, and workers, but those are also all different in subtle ways. |
(Also, the WG will need to figure out a testing story. We have tests for |
@padenot noted that a fetch() operation can potentially block the worklet (rendering) thread. That can directly affect the performance of audio rendering, causing glitches, so perhaps AudioWorklet should stay away from fetch() or XHR. The remote data can be fetched/processed on the main thread and passed to the worklet scope via MessagePort anyway. |
Not blocking per se, but very heavy, I suppose. I sure hope that all the implementations implement fetch properly with multiple threads. One of the issues is that allowing |
I think this issue started from my use case, so I'll comment that I'm not necessarily pushing for fetch() or XHR; it is just something I looked at to instantiate WASM in the AudioWorklet. For Csound, we have a 3.5 MB .wasm binary that needs to be loaded and then instantiated within the AudioWorklet. Loading the .wasm in the main thread and transferring it via MessagePort to the AudioWorklet was the first option I tested, and I quickly realized it wasn't possible to test since port wasn't yet implemented. I next tried to use Fetch and XHR within the AudioWorklet code so that I could load the .wasm there and call WebAssembly.instantiate(), but found that neither was available, and inquired whether they were supposed to be or not. My thought was that the fetch would happen when the CsoundProcessor.js file was added as a module (via addModule()) but before an instance of the Processor was created and attached to the audio graph. Once MessagePort lands in Chrome Canary, trying to pass the WASM binary data via the port will be the first thing I try. If that works and is the recommended solution, I'd be happy to use that and move on. |
A good point. I previously stood up for this, but my concern was more about the awkward layering: the MessagePort belongs to AWN/AWP, but somehow it would be used for a "global" purpose (i.e. loading the WASM binary as a global function library). @kunstmusik Having the Csound WASM module running on AudioWorklet sounds really exciting, considering the Faust port is already in the works. (CLM, PureData, and ChucK someday?) Would love to see how AudioWorklet changes the paradigm of computer music programming. |
@padenot - Perhaps it would be good to differentiate phases of the lifecycle here; if I understand correctly, it seems you're referring to fetch only when an AudioWorklet is already attached to the audio graph? I see a possibility of fetch being part of the module load time, sort of like how importScripts() blocks when used in loading Workers but doesn't come into play while the worker is working. One thing to note about recommending MessagePort is that using it for my use case requires something of a two-phased initialization of the AudioWorklet. It would go something like this:
If fetch() could be done in the AudioWorklet, the construction would be single-pass (a rough sketch of both approaches follows below):
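As an editor's illustration of the two approaches being discussed (not from the original comment; the processor name, file names, and message shapes are made up):

```js
// --- Two-phase initialization via MessagePort (the workable path) ---
// main.js
const context = new AudioContext();
await context.audioWorklet.addModule('csound-processor.js');
const node = new AudioWorkletNode(context, 'csound-processor');

// Phase 1: fetch the .wasm on the main thread and transfer the bytes.
const wasmBinary = await fetch('csound.wasm').then((r) => r.arrayBuffer());
node.port.onmessage = (e) => {
  // Phase 2: attach to the graph only after the processor reports readiness.
  if (e.data === 'wasm-ready') node.connect(context.destination);
};
node.port.postMessage({ type: 'load-wasm', wasmBinary }, [wasmBinary]);
```

```js
// csound-processor.js (runs in AudioWorkletGlobalScope)
class CsoundProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.port.onmessage = async (e) => {
      if (e.data.type === 'load-wasm') {
        const { instance } = await WebAssembly.instantiate(e.data.wasmBinary);
        this.csound = instance; // heavy work done once, before audio really starts
        this.port.postMessage('wasm-ready');
      }
    };
  }
  process(inputs, outputs) {
    if (!this.csound) return true; // render silence until the module is ready
    // ...call into the WASM instance to fill `outputs`...
    return true;
  }
}
registerProcessor('csound-processor', CsoundProcessor);
```

The hypothetical single-pass variant would instead do the fetch() and WebAssembly.instantiate() inside csound-processor.js itself, before calling registerProcessor(), which is exactly what this issue asks to make possible.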
For reference, see the Faust examples at http://faust.grame.fr/modules-worklet/. If you look at any of the -processor.js files and go to the bottom, you'll see the WASM instantiation before registerProcessor. The binary data for their WASM has been encoded into JS; the idea of fetch() in the Worklet, then, would be to replace the text-encoded binary embedded in the files with direct loading of the .wasm. @hoch - Yes, the Csound community is pretty glad to have it on the Web, and will be even happier once we can finally leave ScriptProcessorNode behind. :) Also, I had not thought about the two-phase initialization required by using MessagePort in my first message above; I don't know whether fetch in AudioWorklet is the way to go, but a simpler initialization would be nice to have. |
For sure, it is convenient for your use case, but I can see fetch() easily having a negative effect while the worklet thread is busy handling numerous process() calls from other processors. Being able to handle multiple processors concurrently is the primary goal of the AudioWorklet. Fetching and loading a 3.5 MB WASM binary in the worklet scope will definitely pause the audio rendering if there are other nodes in play. This doesn't matter if Csound/AudioWorklet is your sole purpose, but developers can use it in many different ways (e.g. dynamically loading a Csound module while the other part of the WebAudio graph is being rendered). For these reasons, using MessagePort after the resource-heavy fetching/loading makes sense in general. Also, even if it requires a "two-phase" setup, it's a one-time thing. |
I think eventually when we get WebAssembly/JS module integration you'll be able to just directly import wasm into the worklet. So in the medium-term future I think we'll have a good solution for you. |
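For context, a guess at what that might eventually look like under the proposed WebAssembly/ES module integration (the syntax is speculative and not implemented anywhere at the time of this thread; function names are illustrative):

```js
// csound-processor.js — speculative future syntax, not currently supported
import * as csound from './csound.wasm'; // host would compile and instantiate the module

registerProcessor('csound-processor', class extends AudioWorkletProcessor {
  process(inputs, outputs) {
    // call exported WASM functions directly, e.g. csound.performKsmps(...)
    return true;
  }
});
```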
@hoch - Could you clarify: to what degree can AudioWorklets block each other? Thinking of edge cases, if I write an infinite loop in my processor.js file that gets evaluated during addModule(), does that block other AudioWorklets in the same tab, or all audio worklets browser-wide? (Is the thread that runs code given to addModule() the audio thread?) What if I have an infinite loop in my AWP's process() callback? (Thinking outside of Csound, with live-coding JS systems I can imagine making a coding mistake that causes things to become unstable like this.) If this is just a concern about not interfering with the audio graph within the current tab, then I'd say: let me be responsible for what happens, even if it means I might shoot myself in the foot. That'd be my preference, at least. Another question: can I post messages to MessagePort when the AWN is instantiated but before it is attached to the audio graph? Or will I have to do:
|
@domenic That would certainly be nice, but wouldn't the module load then have the exact same issue of potentially blocking as using fetch? (Assuming this is all going to be done at the same point in the lifecycle when the addModule() Promise is resolved.) |
Perhaps; I'm not as familiar with that. What it would avoid is allowing fetching of arbitrary data at arbitrary points in the lifecycle, which would be possible if fetch() itself were exposed. |
Let me clarify a few things:
After the construction of the super class (AWP/AWN), the port can be used right away in the constructor:
```js
constructor() {
  super();
  this.port.start();
  this.port.postMessage('hey');
}
```
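If that constructor is the processor's, the corresponding AudioWorkletNode's port on the main thread would see the message; a minimal sketch (the processor name is assumed):

```js
// main thread
const node = new AudioWorkletNode(context, 'my-processor');
node.port.onmessage = (e) => console.log(e.data); // logs 'hey' sent from the processor
node.port.postMessage('hello from the main thread');
```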
Sorry about a slightly off-topic question, but will AWP.port.onmessage() also be running on the same thread as AWP.process(), or will it be on the control thread? And if the latter, is the control thread == the main thread? Thanks! |
Yes.
Yes. |
Excellent, then non-AudioParam synthesis parameters can also be updated directly without threading concerns. |
Yeap. That's the design goal of AudioWorklet. |
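For example, a non-AudioParam parameter set through the port can be read directly in process(), since both callbacks run on the rendering thread (a minimal sketch; the processor name and message shape are illustrative):

```js
registerProcessor('gain-ish', class extends AudioWorkletProcessor {
  constructor() {
    super();
    this.amount = 1;
    // Runs on the rendering thread, so no synchronization is needed.
    this.port.onmessage = (e) => { this.amount = e.data.amount; };
  }
  process(inputs, outputs) {
    const input = inputs[0], output = outputs[0];
    for (let ch = 0; ch < output.length; ++ch) {
      for (let i = 0; i < output[ch].length; ++i) {
        output[ch][i] = (input[ch] ? input[ch][i] : 0) * this.amount;
      }
    }
    return true;
  }
});
```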
@hoch Thanks for the clarifications. At this point, I think I've covered all of the details of what I'm looking to do and there isn't much more to add to this thread from my side. I'll wait for MessagePort to land as that seems like the next thing to test to get this all working. |
@hoch can we close this or assign to V.next? |
SGTM. |
Sorry -- which of the above choices sounds good? |
If it were up to me, I would close this and wait for ES6 |
Closing until we have a clear, sanctioned approach to fetching ancillary resources in a WorkletGlobalScope. |
@hoch see w3c/css-houdini-drafts#506 (though that's for dynamic import only). |
@annevk Thanks! I guess the WG needs to revisit this when all the tools are available in WorkletGlobalScope. |
Whilst I understand that WASM is outside the scope of the web-audio-api, it is a good case study because WASM can involve significant common code; whether that common code is WASM or JS is irrelevant. This common-code paradigm highlights the need to be able to inject externally defined code into the AudioWorkletProcessor's AudioWorkletGlobalScope, which would allow an AudioWorkletProcessor-derived class to access such common code.

When attempting to integrate WASM I referenced @kunstmusik's Csound repo (https://github.com/kunstmusik/csound-audioworklet.git), among others. It is pretty clear that attempting to use common code (such as WASM or regular JS) is a little cumbersome with the current AudioWorklet specification. Faust gets around this by essentially copying and pasting the WASM binary, in the form of base64 text, into their AudioWorkletProcessor-derived JS files.

I can see the point of not allowing fetch in the AudioWorkletProcessor: it would make the incoming/outgoing hardware-related DMA audio buffers overflow/underflow unless there was significant overhead in managing this, which would be a headache. Similarly, it may seem a little too much to allow fetch to execute during the addModule() promise execution.

I think a way around this may be to allow the main thread to fetch and validate whatever common code is required by the AudioWorkletProcessor(s). For this reason, perhaps we need a method on the main thread to allow the injection of a function or class into the AudioWorkletGlobalScope for each AudioWorkletProcessor. @hoch mentions the eval() concept in issue #1474; clearly there has been a lot of discussion on this topic before. Here is an example scenario which illustrates the concept:
If this were acceptable, then what would that AudioWorkletGlobalScope method be? |
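Purely as a thought experiment, such an injection method might be shaped like this (nothing like it exists in any spec; the method name and signature are invented for illustration):

```js
// main thread — hypothetical API sketch only
const context = new AudioContext();
await context.audioWorklet.addModule('csound-processor.js');

// Fetch and compile the shared code on the main thread...
const wasmModule = await WebAssembly.compileStreaming(fetch('csound.wasm'));

// ...then hand it to the AudioWorkletGlobalScope under a chosen global name.
// `addGlobal()` is an invented method, used here only to illustrate the idea.
await context.audioWorklet.addGlobal('csoundModule', wasmModule);
```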
@flatmax Let me summarize this issue and the resolution first: fetch() is inherently a non-trivial operation (accessing the network stack, allocating memory, cloning data, etc.), so doing it directly on the rendering thread is not a good idea. That's why we shut this issue down, with agreement from the Worklet folks. So we let the main thread handle the fetching task, then pass the blob to the AudioWorkletGlobalScope with these two options:
I hope option 2 can resolve this WASM module loading issue. |
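As one concrete illustration of the "fetch on the main thread, hand the bytes to the processor" pattern, the fetched data can be passed at construction time via processorOptions on AudioWorkletNodeOptions (a sketch; the processor and file names are made up, and this is not necessarily the exact option referred to above):

```js
// main thread: do the heavy fetch here, never on the rendering thread
const wasmBinary = await fetch('csound.wasm').then((r) => r.arrayBuffer());
const node = new AudioWorkletNode(context, 'csound-processor', {
  processorOptions: { wasmBinary }, // structured-cloned into the processor constructor
});
```

```js
// csound-processor.js
class CsoundProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super(options);
    this.wasmBinary = options.processorOptions.wasmBinary; // bytes arrive with the constructor
  }
  process(inputs, outputs) {
    return true;
  }
}
registerProcessor('csound-processor', CsoundProcessor);
```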
https://fetch.spec.whatwg.org/#headers-class
Developers are having trouble loading the WASM binary in AudioWorkletGlobalScope, mainly because the fetch() API is not exposed. The use case makes sense to me, but I am not 100% sure of its implications. In terms of the spec change, this would be as simple as adding 'AudioWorklet' to the interface.
@domenic @annevk WDYT?