Attach add-on sometimes reorders output #1893
Comments
@markspeters I suggest setting the binary type to arraybuffer and avoiding the blob type altogether. Mixing synchronous and asynchronous code at this stage unfortunately desyncs the websocket chunks, which are guaranteed to arrive in order by the protocol. Imho introducing another sync semantic on top of the websocket chunks (which already do this with frame ids) is a waste of bandwidth. @Tyriar Any thoughts on this? Is there a specific reason for blob support here that I don't see? I think we should drop blob support completely (I already removed it in my upcoming UTF8 support, which works fine without it).
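For reference, forcing ArrayBuffer delivery is a one-line change on the client socket; a minimal sketch (the URL and surrounding wiring are illustrative, not part of the addon):

```js
// Ask the browser to deliver binary frames as ArrayBuffers instead of Blobs.
// Set this before any messages arrive, i.e. before attaching the terminal.
const socket = new WebSocket('ws://example.com/terminal'); // illustrative endpoint
socket.binaryType = 'arraybuffer';
// ...then hand the socket to the attach addon as usual.
```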
Thanks @jerch. Just for context, I'm not using this for in-band syncing. Every message I'm sending, I'm sending the same way and synchronously. The backend is Tomcat's implementation of JSR 356, and in my case I'm transmitting these packets the exact same way, through RemoteEndpoint.Basic's sendBinary method, which is synchronous. They arrive in order in the client (as far as the Chrome debugger shows, anyway); it's just that the shorter message happens to be handed to the client as an ArrayBuffer while the longer one arrives as a Blob. I'll have to do some digging into the websocket spec and Tomcat's implementation to see if I have any control over this. Hopefully just setting the binaryType on the client side works as you suggest.
Yep, that was sufficient - thanks for saving me from my own workaround. Looks like the different versions of Chrome I was testing on defaulted to different binaryTypes. I must've been wrong about it giving me both ArrayBuffers and Blobs; it must've just been loading the later, smaller blob first. I had taken the WS spec to mean that binaryType was only ever a hint, but that part was talking about using it as a hint for storage characteristics. Given this is under the client's control, I get your argument for just dropping support for blob rather than fixing this issue. Maybe warn when attaching to a blob-binaryType websocket?
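For what it's worth, such a warning could be a simple check when the socket is handed to the addon. A rough sketch (hypothetical helper and message text, not the addon's actual code):

```js
// Hypothetical guard: complain if the socket will deliver Blobs, since those
// have to be read asynchronously and are easy to mishandle.
function warnOnBlobBinaryType(socket) {
  if (socket.binaryType === 'blob') {
    console.warn(
      'attach: socket.binaryType is "blob"; consider setting it to "arraybuffer" ' +
      'so binary frames can be handled synchronously.'
    );
  }
}
```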
I don't really know anything about the blob binary type. @jerch, do you think we should add a warning to the code or just mention it in the jsdoc?
Hmm, I'm not that familiar with that part of the code; it just happens that I had to deal with this for the upcoming UTF8 support too. I had to read the spec myself to get an idea of what's going on there.

The main difference between the two binary types is that an ArrayBuffer can be accessed synchronously, while a Blob has to be read asynchronously (e.g. via the FileReader API).

Now on our xterm.js demo we have a few requirements around how binary data is handled. In a second step I want to re-enable UTF8 binary data transport, but this time with our custom UTF8 decoder, which works in a streaming fashion and can correctly decode partially transmitted unicode chars. Getting direct access to the raw bytes from JS again calls for the arraybuffer binary type.

To sum this up: imho we should drop the blob binary type and stick with arraybuffer (and plain strings).
Edit: Not sure yet how the zmodem addon will deal with this; I think it relies on arbitrary binary data being sent, so a string or UTF8-only transport would fail. Hmm, this needs some further thinking.
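The practical difference between the two binary types can be seen with a tiny snippet; a minimal sketch (byte values are arbitrary, browser context assumed):

```js
// ArrayBuffer: the bytes are readable synchronously, right where the frame is handled.
const buf = new Uint8Array([104, 105]).buffer;       // "hi"
console.log(new Uint8Array(buf)[0]);                 // 104, available immediately

// Blob: the content must be read asynchronously (e.g. via FileReader),
// so handling gets deferred to a later task in the event loop.
const blob = new Blob([buf]);
const reader = new FileReader();
reader.onload = () => console.log(new Uint8Array(reader.result)[0]); // 104, but later
reader.readAsArrayBuffer(blob);
```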
@jerch As I understand it, your current concern w.r.t. ArrayBuffer is binary frames that are split in the middle of a multi-byte UTF-8 char, so they can't be decoded properly? I don't think that's a situation I'll hit given the nature of the backend I'm talking to, but I understand the concern for the general case. Switching to text frames would be an option for us, though. Doing some digging in the TextDecoder docs, it looks like it does support decoding in chunks via the `stream` option:

```js
const msg1 = new ArrayBuffer(3);
const msg2 = new ArrayBuffer(3);
new Uint8Array(msg1).set([65, 0xF0, 0x9F]);
new Uint8Array(msg2).set([0x98, 0x80, 66]);

const decoder = new TextDecoder("utf-8");
const output1 = decoder.decode(msg1, {stream: true});
const output2 = decoder.decode(msg2, {stream: true});

console.log(output1);
console.log(output2);
```

If you run this, the smiley will be correctly decoded as part of the second output. So maybe this concern would be mitigated by keeping the TextDecoder instance around and using the `stream` option.
@markspeters
@jerch I think we can close this one off?
Yes - this should not be the case anymore with v4. The code responsible for the reordering got removed (the culprit was the File API with its event-loop-triggered read callback).
The attach add-on sometimes reorders data despite the frames being received in-order over the socket.
Details
Steps to reproduce
Expected: The data from these two messages to be displayed in order.
Actual: They're sometimes reordered.
My hack fix
The issue appears to be that FileReader's readAsArrayBuffer is asynchronous, so subsequent messages can be received/decoded/displayed while the FileReader is still loading.
I'm able to work around this by attaching an ID to each displayable message as it comes in, and using that in displayData to check whether there are outstanding messages (and buffering if there are). I'm patching the JS, not the TS, but hopefully the idea comes across.
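A minimal sketch of that idea (not the actual patch; `term`, `socket`, and `decoder` stand in for the terminal, the attached WebSocket, and a TextDecoder, and `displayData` is simplified):

```js
let nextId = 0;          // id handed out in arrival order, before any async work
let nextToDisplay = 0;   // id the terminal is currently waiting for
const pending = new Map();

function displayData(id, str) {
  pending.set(id, str);
  // Flush everything that is now contiguous; chunks still being read stay buffered.
  while (pending.has(nextToDisplay)) {
    term.write(pending.get(nextToDisplay));
    pending.delete(nextToDisplay);
    nextToDisplay++;
  }
}

socket.onmessage = (ev) => {
  const id = nextId++;                                // stamp in arrival order
  if (ev.data instanceof ArrayBuffer) {
    displayData(id, decoder.decode(ev.data));         // synchronous path
  } else {
    const reader = new FileReader();                  // asynchronous path
    reader.onload = () => displayData(id, decoder.decode(reader.result));
    reader.readAsArrayBuffer(ev.data);
  }
};
```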
This seems to be working great; the only concern I have is recoverability if the FileReader fails to read a chunk - with this change it'll buffer subsequent data till the end of time.
Thanks for your work on this fantastic software.