downloadprogress event clarification #20
Comments
webmachinelearning/prompt-api#4 is related. Some precedent from XHR: …
I think this precedent is reasonable. What do you think of the following pseudo-spec? If the state is not "after-download", then: …

If state is "readily", then never fire any downloadprogress events.

This doesn't take care of the edge case you mention, but I think that is handled best by expanding the monitor class to have some way of monitoring installation / loading progress, per webmachinelearning/prompt-api#4.
This might not be 0% if something else caused the download to begin already. XHR doesn't have this possibility. The first event should just reflect the current state of the download.
Is there a race here?

```js
const languageDetectorCapabilities = await ai.languageDetector.capabilities();
if (languageDetectorCapabilities.available === "after-download") {
  const detector = await ai.languageDetector.create();
}
```

Will the state for the …
2 is the only version that avoids a race, but it's awkward to implement because the span of a JS task is only known to the renderer process. It would be simpler to say that you will get a 100% event. What happens if the XHR response was cached? Do you immediately get a 100% event?
I think it might still be worthwhile, for predictability, to always fire the 0% event immediately. This helps very slightly with the cross-context fingerprinting issues: it makes it harder to tell the difference between "the user at 5.52% through the download" vs. "the user who downloaded 5.52% + whatever in 50 ms".
My intention (which has not yet met implementation reality, so thanks for engaging) is that we keep a copy of the model's state in the renderer process, and only update it via queued tasks. In particular, the task that resolves the promise returned by capabilities() …

Note that things that happen due to the action of one renderer process should not broadcast updates to all other renderer processes. You should only get …

I think this design avoids all races. It matches your (2), I guess? I don't understand what's hard to implement, as I would assume the usual way to update state is via the browser queuing tasks, which ensures that any JS tasks running between those browser tasks always see consistent state.
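(As a rough mental model of that design — not real browser machinery, and all names below are invented for illustration: state is only mutated from queued tasks, so any synchronous run of JS between those tasks sees one consistent snapshot.)

```js
// Invented illustration, not a real API: a renderer-side cache of availability.
const cachedModelState = { availability: "after-download" };

// The browser process "posts a task" to update the cache. Modeled here with a
// macrotask so the update can never interleave with JS that is already running.
function postStateUpdate(newAvailability) {
  setTimeout(() => {
    cachedModelState.availability = newAvailability;
  }, 0);
}

// Any code that runs within a single task and reads cachedModelState.availability
// more than once will see the same value, because updates only land between tasks.
```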
You get both the 0% and 100% events. (In separate queued tasks, although that would be hard to observe.)
It's not hard, but it requires extra state tracking. It might not be what you meant, but what I meant by 2 was that

```js
const translatorCapabilities1 = await ai.translator.capabilities();
// do something for 5 seconds (yeah I know that's bad)
const translatorCapabilities2 = await ai.translator.capabilities();
console.log(translatorCapabilities1.available == translatorCapabilities2.available);
```

will always log true. The browser cannot naturally tell that both calls to capabilities() …
I don't think we should have a task-bound cache to handle such cases. Any time you have an await, …
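(A small sketch of that caveat, reusing the capabilities() calls from the earlier example: once an await separates the two reads, they run in different tasks, so they can legitimately disagree.)

```js
const before = await ai.translator.capabilities();
// Anything that yields the current task; here we just wait a while.
await new Promise((resolve) => setTimeout(resolve, 5_000));
const after = await ai.translator.capabilities();
// before.available could be "after-download" while after.available is "readily",
// if a download finished in the meantime.
console.log(before.available, after.available);
```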
Oh yeah! But you still have this weird thing where, when you call …
I feel like that falls out naturally if you only update the renderer-process state via tasks. Is there an alternative architecture you were thinking of where it's awkward?
When you get a result from capabilities(), is the state stored on the factory object?
Yes, I think the factory object is a natural place to store the state when you update the renderer process via posted tasks in my model.
The problem is that the response has to be invalidated at the end of the task (otherwise consistency has to be maintained indefinitely). There is no mechanism for that. Or I guess you just maintain consistency for the duration of the context, with the data only being updated when …

I don't see a problem speccing or implementing that, but it seems odd and avoidable if we just promise to always deliver an initial progress event instead.
Yes, that's the intent. To be clear, the data is updated whenever either …
I think that is what we should do, per #20 (comment), where I said …

Let me try to spell out the case that I think we're talking about, with my proposed model: …
I think I agree with all of that, although 4.ii and 5.ii are odd because it says "This goes in parallel to" but then a and b are sequential. Also, we're always firing the events even when it's already downloaded. I thought you were arguing against that. If you're not, then I'm happy.
"In parallel to" means "on a background thread" or "in the browser process": https://html.spec.whatwg.org/#in-parallel
In all my above examples, the model's state as seen by the … Let's add the following scenario: …

In this case I think not firing any downloadprogress events …
As I write the spec for this, I'm having second thoughts. It doesn't really make sense to consult the cached availability for the model in create(). Let's put this on hold until I can get a reasonable spec written, hopefully over the next few days.
I'd like us to make a few things clear about downloadprogress.

Part of me wants to say that if the create() call's promise is not resolvable immediately, then we should queue one downloadprogress event immediately. It could be 0%, it could be 100%. Without this, a page has to show the download progress defaulting to 0% until it receives the first event.

It's an edge case, but let's say something already triggered a model download, and either the download has finished and installation is happening (which may be slow), or installation already failed for some reason and will fail again. There's no download to do, just waiting for installation. The page should show the download at 100%. If the page defaults to 0% and displays that in the UI even though no download is occurring, that is confusing. If the installation fails, to the user it looks like the download failed without making any progress.
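(To make that concern concrete, here is a sketch of page code that relies on one guaranteed initial event rather than defaulting its UI to 0%. It assumes the monitor-option shape discussed above; showDownloadUI, showInstallingUI, and updateProgressBar are hypothetical page helpers, and e.loaded / e.total are assumed ProgressEvent-style fields.)

```js
const detector = await ai.languageDetector.create({
  monitor(m) {
    m.addEventListener("downloadprogress", (e) => {
      const fraction = e.loaded / e.total;
      if (fraction >= 1) {
        // Download already complete (or nothing to download); we are just
        // waiting on installation, so don't show a 0% download bar.
        showInstallingUI();
      } else {
        showDownloadUI();
        updateProgressBar(fraction);
      }
    });
  },
});
```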