Standardize support for streaming uploads #28
@vayam that wouldn't allow streaming a download to tus-unaware clients though...
@jonhoo I see your point. The flows I have in mind:

- Upload client
- Download client (standard HTTP client - browser/curl): waits until the file is downloaded
- Advanced downloader, for clients interested in downloading whatever is available: receives the available bytes
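The split between the plain download client and the advanced downloader could be sketched as a server-side decision like the following. This is a hypothetical illustration, not tusd's implementation; the function name and the `wants_available_only` flag are made up, and `Entity-Length` is the header discussed in this thread:

```python
def serve_headers(offset: int, entity_length: int, wants_available_only: bool) -> dict:
    """Pick response headers for a GET on a partially uploaded file.

    offset: bytes received so far; entity_length: final size of the file.
    (Hypothetical sketch; names are illustrative, not part of the tus spec.)
    """
    if wants_available_only:
        # Advanced downloader: hand back only the bytes we have right now,
        # and advertise the eventual full size separately.
        return {
            "Status": 200,
            "Content-Length": str(offset),
            "Entity-Length": str(entity_length),
        }
    # Plain HTTP client (browser/curl): advertise the full length and
    # keep the connection open, streaming until the upload completes.
    return {
        "Status": 200,
        "Content-Length": str(entity_length),
    }

# A plain client downloading a 100-byte file of which 70 bytes have arrived:
print(serve_headers(70, 100, wants_available_only=False)["Content-Length"])  # "100"
```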
The flows you indicate correspond mostly to the kind of flow I had in mind too. Some points though:
How about something like this?
Even range-based requests should work the same as a plain GET, because standard video players would do range-based requests.
Agreed. No Custom Request Headers
Shouldn't it be
Yes that would work on a server that supports Range based requests.
Not true. All standard implementations (Akamai, S3) return 206. It would be nice to have GET with a range and without be consistent, because if the same URL is passed to a video player, it will do range requests if the server supports them, and for the video to play it has to return 206. If the server supports range requests and you know how many bytes were actually received, you can do a byte-range request.
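As a concrete illustration of what a ranged 206 response carries, here is a small parser for the `Content-Range` header a client would inspect to learn how many bytes it actually got. This is a hedged sketch (the function name is ours), following the `bytes first-last/total` shape defined for HTTP range responses:

```python
import re

def parse_content_range(value: str):
    """Parse a Content-Range header like 'bytes 0-69/100'.

    Returns (first_byte, last_byte, entity_length); entity_length is None
    when the total size is unknown ('bytes 0-69/*').
    """
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+|\*)", value)
    if m is None:
        raise ValueError(f"unparseable Content-Range: {value!r}")
    first, last, total = m.groups()
    return int(first), int(last), None if total == "*" else int(total)

# A 206 response covering the first 70 bytes of a 100-byte upload:
print(parse_content_range("bytes 0-69/100"))  # (0, 69, 100)
```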
@felixge @jonhoo The more I think all we need is a better name for
No, as we discussed in #26, I do see your point that now HEAD and GET return different values, so perhaps swapping them around might be appropriate. That is, say that
Ah, ok, I wasn't aware. Fair enough - 206 it is then. I'm not sure I agree with that interpretation of the standard, but in this case it might be better to follow the de facto standard.
Are you sure about this?
That example there seems good to me. Making the last number of
Entity-Length already means the length of the full object
Yes
Ah, sorry, I misread. The end result of doing it that way would be that the default will always be to download the full file, even if that means waiting for the server to receive all the bits. Services that want only the available bits would then use a
The reason we have
Okay, fair enough, but it doesn't seem to be needed for anything download-related?
Yes, that is correct.
Ok, to summarize:
Did I miss anything?
I still think

Also, the bits about being able to request only certain ranges of a file (that is, bits that the server already has) should probably be mentioned in the spec. Apart from that I think you have everything.
Yes
Not sure. Not entirely convinced with
Yes
Nope
Can you come up with a valid byte-range request wherein the server responds back with "bytes received so far", without breaking our set assumptions?
@vayam not sure I understand your question? The server already includes
Ah, fair enough. If I still believe
How about
Sounds good to me. We should clarify that it is the bytes
@jonhoo what's the benefit of
In my opinion,
I did some experiments with tusd and the brewtus node server. It is not trivial to implement. Here is my test: upload to the tus.io demo site using tuspy, then issue a download while the upload is in progress.
@felixge, @jonhoo I suggest we keep it simple. Make
@vayam in a way the timeout you show above has the same end result as having
@jonhoo the issue is

@felixge, how about we get rid of
By default
@felixge, @jonhoo can we discuss this on IRC? I am usually available mornings EST.
Wouldn't it then make more sense, as we decided above, to include

They could either choose to do a streaming download by requesting the whole file and not timing out, or they could choose to do a non-streaming download by only downloading bytes 0-70 using
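The non-streaming variant described above could be driven by a tiny helper that turns the offset learned from a prior HEAD request into a Range header. A minimal sketch; the helper name is made up, and it assumes byte ranges are inclusive (so 71 received bytes map to `bytes=0-70`):

```python
def range_for_available(offset: int) -> str:
    """Build a Range header asking only for the bytes the server already
    has, given the offset learned from a prior HEAD request.

    HTTP byte ranges are inclusive on both ends, so an offset of 71
    received bytes maps to 'bytes=0-70'.
    """
    if offset <= 0:
        raise ValueError("no bytes available yet")
    return f"bytes=0-{offset - 1}"

print(range_for_available(71))  # bytes=0-70
```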
I have been in two minds about this. Both have advantages and disadvantages. For now, I will throttle
@vayam sorry for the lack of activity on the project lately. A few things have changed on my end, which means I won't be able to continue with the project for a while :/. Is there any chance you might be interested in taking over the project? If so, @kvz and @tim-kos would be happy to help with anything you might need.
@vayam added you on skype, looking forward to having a chat!
This seems to be an interesting idea in conjunction with long-running uploads, but it also adds additional problems when looking at uploads with unknown size (see #16) and non-contiguous data streams (see #3). Until we have finished the original aim of tus, uploading data, I would like to move this to the backlog.
As @felixge pointed out in #26, it would be good to have a standardized way of providing URL endpoints where a client can retrieve a file that is currently being uploaded, with the connection staying open until the entire file has been sent.
Following the decision in #26 to replace `Offset` with `Content-Length`, clients will by default be getting only the bytes that have been uploaded at the time of the request. A conforming client might be able to detect the `Entity-Length` header and keep the connection open to stream more bytes, but it would be good to define the protocol in such a way that "normal" HTTP clients would be able to request a file being uploaded and receive the entire file too.

One way of achieving this might be to change the default behavior of HEAD and GET requests to serve `Content-Length = Entity-Length` by default and stream the file to the client, but add a request flag for a client to send if they wish to only get the uploaded bytes and not wait for the rest. Something like `Accept: incomplete`, except with a more appropriate header field (`Accept` is only for content types).