Support restarting download when storing large resources in Cache #713

Closed
wanderview opened this issue Jun 17, 2015 · 11 comments

@wanderview
Member

Recently I noticed that a website was using SW+Cache to offline its app shell, but it was using FileSystem API to store its videos:

http://richapps.de/taking-the-web-offline-service-worker/

What features might be missing to force the use of FileSystem?

It seems one might be the ability to start downloading a large resource, stream it into the Cache, survive a network interruption, and then continue the download once the network is restored.

I believe Cache currently rejects the cache.put() or cache.add() if the body errors out. (Or rather, it's currently vaguely spec'd.) There is definitely no way to continue a partially stored download, though.
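
For illustration, the failure mode looks roughly like this (cache name and handler are made up; the exact rejection behaviour is the vaguely spec'd part):

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.open('videos').then(cache =>
      fetch(event.request).then(response => {
        // put() consumes the clone's body as it streams in; if the network
        // drops mid-stream, the put() promise rejects and the partial data
        // is discarded rather than kept for a later resume.
        cache.put(event.request, response.clone()).catch(err => {
          console.log('caching failed, partial download discarded', err);
        });
        return response;
      })
    )
  );
});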

Thoughts?

@jakearchibald @annevk

@flaki

flaki commented Jun 21, 2015

cc @brittanystoroz

@annevk
Member

annevk commented Jun 21, 2015

Can the current cache API be made to handle that kind of scenario? I reckon we want a very similar setup to whatever we do for HTTP range internally. See also whatwg/fetch#38

@wanderview
Member Author

Cache would have to grow the concept of partial entries, like our internal HTTP cache has. I'm not sure what the API would look like, though.

@annevk
Member

annevk commented Jun 24, 2015

Can you explain the data model? Or maybe @jakearchibald can figure this out?

@jakearchibald
Contributor

Hah, I've been staring at this issue for like 20 mins before your comment appeared. Ok, brain dumping:

So the cache does have the concept of an incumbent record, which is used to store a backup complete entry while the main entry is incomplete and may yet fail. That takes care of cache matches while the entry is still being streamed to the cache. The cache should yield the in-progress entry, which is handy for larger responses. But I'm not convinced we should keep the partial entry around if streaming failed.

cache.put(request, response) should probably fail if the response is 206. I'm not against the cache returning 206 (or 416) responses if cache.match(request) is called and the request has a range header.
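
Purely as an illustration of what that match behaviour could look like from a service worker (the 206/416 slicing is the hypothetical part, nothing a cache does today):

self.addEventListener('fetch', event => {
  if (event.request.headers.has('range')) {
    event.respondWith(
      caches.match(event.request).then(response => {
        // Hypothetically: a cached complete 200 could be sliced into a 206
        // for the requested range, or a 416 if the range is unsatisfiable.
        return response || fetch(event.request);
      })
    );
  }
});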

As for downloading a large resource, I think we need a way of doing it that allows the service worker to terminate and come back with an event when the download is complete. Would that solve this issue? E.g.:

cache.backgroundDownloadThisPlease(request);
// then later…
self.addEventListener('backgrounddownloadcomplete', event => {
  event.cache;
  event.request;
  event.response;
});

The background downloader is welcome to do ranges and combine them; it may show a download notification to the user indicating progress (optional?).

@jakearchibald
Contributor

If we want to go low-level, we could have something like response.merge(otherResponse) which takes 2 partial responses and returns a new partial response (multipart/byteranges if the content is non-contiguous), or a complete 200 response if the merge forms a complete response. But I don't know if that's useful.
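
As a sketch of how that might be used (merge() doesn't exist, and storing partials in the cache is itself the open question; the offsets are made up):

async function resumeDownload(cache, url) {
  // A previously stored partial response, e.g. a 206 covering bytes 0-999999.
  const storedPartial = await cache.match(url);
  // Fetch the missing tail as another partial response.
  const remainder = await fetch(url, {
    headers: { Range: 'bytes=1000000-' }
  });
  // Contiguous partials would merge into a complete 200; non-contiguous
  // parts would produce a multipart/byteranges partial response.
  const merged = storedPartial.merge(remainder); // hypothetical merge()
  await cache.put(url, merged);
}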

@wanderview
Member Author

cache.backgroundDownloadThisPlease(request);

Is this the same thing as background sync?

What about something like cache.append(request, response), which does a match on request and then appends response to its body? It would reject if the headers don't match, if it isn't a range request, etc. This could also allow the status code to be updated from 206. Just a thought.
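
Roughly, as a sketch (append() doesn't exist; the resume offset is made up):

async function continueDownload(cache, url) {
  // Fetch the bytes that are missing from the stored partial entry.
  const tail = await fetch(url, {
    headers: { Range: 'bytes=1000000-' }   // made-up resume offset
  });
  // Hypothetical: append() would match url, check the headers line up,
  // append the 206 body to the stored body, and could upgrade the stored
  // status from 206 to 200 once the entry is complete. It would reject on
  // a header mismatch, a non-range response, etc.
  await cache.append(url, tail);
}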

Alternatively, we could do nothing and make the SW do the merging in script. Once Streams are more widely available, this could be done in a memory-efficient way. It would just be a fair amount of boilerplate to write: match all your partial responses, create a new response with a pipe body, and then concatenate the partial responses into the new pipe.
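
Something like this, once Streams are available (how the partials get keyed and stored is hand-waved, since that's the part the cache doesn't support):

async function mergePartials(cache, url) {
  // Two previously stored partial responses; how partials would be keyed
  // and stored is exactly the open question in this thread.
  const part1 = await cache.match(url + '?part=1');
  const part2 = await cache.match(url + '?part=2');

  const { readable, writable } = new TransformStream();

  // Pipe the first body, keep the pipe open, then pipe the second body.
  const piping = part1.body.pipeTo(writable, { preventClose: true })
    .then(() => part2.body.pipeTo(writable));

  // Store the concatenation as a single complete 200. put() reads from the
  // pipe as the partials stream through, so nothing needs to be fully
  // buffered in memory.
  await Promise.all([
    cache.put(url, new Response(readable, {
      headers: { 'Content-Type': part1.headers.get('Content-Type') }
    })),
    piping
  ]);
}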

@jakearchibald
Contributor

Is this the same thing as background sync?

Background sync keeps the SW open to get a fulfill/reject response from the promise passed to waitUntil, whereas backgroundDownloadThisPlease would allow the SW (or even the whole browser) to be shut down while the download continues.
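
For comparison, a minimal background sync sketch, which only lives as long as the promise passed to waitUntil:

// Registration, from a page:
navigator.serviceWorker.ready.then(reg => reg.sync.register('retry-downloads'));

// In the service worker: the SW is kept alive only until this promise
// settles, so it isn't a good fit for a multi-gigabyte download.
self.addEventListener('sync', event => {
  if (event.tag === 'retry-downloads') {
    event.waitUntil(
      fetch('/pending').then(response => {
        // short-lived work while the SW stays open
      })
    );
  }
});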

What about something like cache.append(request, response)

I guess if the response gave up halfway through, the downloaded content would still be added to the existing partial response? This means we have to work out how to handle a cache containing partials, but that's possible.

jakearchibald added this to the Version 2 milestone Oct 28, 2015
@jakearchibald
Contributor

https://github.com/WICG/background-fetch is now a thing 😄
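
Rough sketch of the shape it takes (the surface has shifted between drafts, so treat this as illustrative rather than the final API):

// From a page or service worker, hand the download off to the browser:
async function startDownload(registration) {
  return registration.backgroundFetch.fetch('movie', ['/movie.mp4'], {
    title: 'Downloading movie',
    downloadTotal: 500 * 1024 * 1024
  });
}

// In the service worker, once the browser finishes the fetch (which can
// outlive the SW, and even the browser, being closed):
self.addEventListener('backgroundfetchsuccess', event => {
  event.waitUntil((async () => {
    const cache = await caches.open('movies');
    for (const record of await event.registration.matchAll()) {
      await cache.put(record.request, await record.responseReady);
    }
  })());
});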

@DanielBaulig

Has the concept of caching partial content responses been revisited? I'm specifically interested in the following:

  • storing partial content responses in a cache,
  • appending additional partial responses to the cache entry,
  • and retrieving partial content responses using requests with Range headers

We have a use case that requires all three of these and don't see any way to implement it with the current API semantics.

There's an example here that shows how to read a partial content response from a complete entry, but this sadly isn't sufficient for us.
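
For context, that approach boils down to something like the following sketch, which assumes a complete 200 entry is already stored (cache name and header parsing are illustrative):

self.addEventListener('fetch', event => {
  const rangeHeader = event.request.headers.get('range');
  if (!rangeHeader) return;
  event.respondWith((async () => {
    const cached = await caches.match(event.request.url);
    if (!cached) return fetch(event.request);
    const match = /bytes=(\d+)-(\d*)/.exec(rangeHeader);
    if (!match) return fetch(event.request);
    const full = await cached.arrayBuffer();   // buffers the whole entry
    const start = Number(match[1]);
    const end = match[2] ? Number(match[2]) + 1 : full.byteLength;
    // Slice the complete body into a 206 for the requested range.
    return new Response(full.slice(start, end), {
      status: 206,
      headers: {
        'Content-Type': cached.headers.get('Content-Type'),
        'Content-Range': 'bytes ' + start + '-' + (end - 1) + '/' + full.byteLength
      }
    });
  })());
});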

@jakearchibald
Contributor

Closing this in favour of https://github.com/WICG/background-fetch. We can look at storing partial responses in another issue, but I feel like making request/response storable in IDB might be a better first step, as it would enable these kinds of things to be built more easily.
