Process messages immediately after they are sent in chunked streaming #30737
Conversation
```diff
 while (start < bytes.length) {
     int end = bytes.length;
-    for (int i = start; i < bytes.length - separator.length; i++) {
+    for (int i = start; i < end; i++) {
```
I modified this logic to work regardless of the position of the delimiter.
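For context, here is a small, self-contained sketch of the kind of separator scan this comment describes (this is *not* the actual Quarkus code; the `split` helper and its shape are made up for illustration). The point is that the scan still finds a delimiter sitting at the very end of the buffer and does not emit a spurious trailing empty chunk:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

final class SeparatorSplit {

    // Split a byte buffer on a separator, handling a delimiter anywhere in the
    // buffer, including at the very end (which then does not produce a trailing
    // empty chunk).
    static List<byte[]> split(byte[] bytes, byte[] separator) {
        List<byte[]> chunks = new ArrayList<>();
        int start = 0;
        while (start < bytes.length) {
            int end = bytes.length; // default: the chunk runs to the end of the buffer
            for (int i = start; i + separator.length <= bytes.length; i++) {
                if (Arrays.equals(bytes, i, i + separator.length, separator, 0, separator.length)) {
                    end = i; // separator found: the current chunk ends just before it
                    break;
                }
            }
            chunks.add(Arrays.copyOfRange(bytes, start, end));
            start = end + separator.length; // continue scanning after the separator
        }
        return chunks;
    }
}
```

With a `"\n"` separator, `"foo\nbar"` yields two chunks and `"foo\n"` yields just one, which is the behaviour the comment is after.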
```diff
@@ -141,7 +141,6 @@ public void testStreamJsonMultiFromMulti() {
 
     private void testJsonMulti(String path) {
         Client client = ClientBuilder.newBuilder().register(new JacksonBasicMessageBodyReader(new ObjectMapper())).build();
-        ;
```
This is unrelated to the changes, but it was a leftover.
@FroMage mind taking a look as well?
```java
.subscribe()
.withSubscriber(AssertSubscriber.create(3))
// wait for 3 ticks plus some half tick ms of extra time (this should not be necessary, but CI is slow)
.awaitItems(3, Duration.ofMillis((TICK_EVERY_MS * 3) + (TICK_EVERY_MS / 2)));
```
Shouldn't this be a duration of (TICK_EVERY_MS * 2) ...? The 1st item is issued at time 0, the 2nd at 200 ms, and the 3rd at 400 ms, so a wait time of 200 ms * 2.5 should be correct.
This does not have to be precise, taking into account that there are more things involved (REST call, subscriptions, etc.).
In my experience, 200 ms of extra slack is enough to reproduce the issue and verify it is indeed fixed.
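As a rough, standalone illustration of the timing being discussed (this is not the actual test: the ticking `Multi` and the `TICK_EVERY_MS = 200` value are assumptions made only for the sketch):

```java
import java.time.Duration;

import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.helpers.test.AssertSubscriber;

public class TickTimingSketch {

    static final long TICK_EVERY_MS = 200; // assumed value, matching the 200 ms in the discussion

    public static void main(String[] args) {
        // Items arrive at roughly t=0, t=200 ms and t=400 ms, so ~2.5 ticks already covers
        // them; the extra half tick in the await below is only slack for slow CI machines.
        AssertSubscriber<Long> subscriber = Multi.createFrom().ticks()
                .startingAfter(Duration.ZERO)
                .every(Duration.ofMillis(TICK_EVERY_MS))
                .select().first(3)
                .subscribe().withSubscriber(AssertSubscriber.create(3));

        subscriber.awaitItems(3, Duration.ofMillis((TICK_EVERY_MS * 3) + (TICK_EVERY_MS / 2)));
    }
}
```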
Will this also be backported to the 2.16.x branch?
@FroMage please review this
Is it possible to merge this, please?
It's possible, but I really want a review from @FroMage on this one
It sounds fair to send a newline after each element… except that we don't know if there will be a next one, at this stage.
So won't a stream of one string be indistinguishable from a stream of one string plus one empty string, from the client's perspective?
Tbh, I don't follow what you're asking or requesting in your comment. Is there a use case you foresee where these changes won't work? In terms of functionality, this will work the same as before; the only thing that changes is when the message is processed.
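To make the separator-vs-terminator question above concrete, here is a toy illustration (plain `String.split`, not the actual wire handling in the PR) of why a trailing `\n` is invisible to a client that treats it as a separator:

```java
import java.util.Arrays;

public class NewlineAmbiguitySketch {
    public static void main(String[] args) {
        // With '\n' treated as a separator, a trailing newline adds no element,
        // so one string and "one string plus an empty string" look identical:
        System.out.println(Arrays.toString("foo".split("\n")));      // [foo]
        System.out.println(Arrays.toString("foo\n".split("\n")));    // [foo] -- trailing empty string dropped
        System.out.println(Arrays.toString("foo\nbar".split("\n"))); // [foo, bar]
    }
}
```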
@FroMage do you need some input or help to test this PR? I could add some additional tests if you have any scenarios in mind.
Issue #30690 has been updated with a reproducer for Quarkus 3.0.1.Final.
@FroMage can you have a look at this one, please?
Sorry for the delay.
It's the names that are super confusing.
My concern was whether we would be able, from the client's perspective, to deal with adding `\n` at the end of each element, even when we don't know there's a next one.

Consider the following data sent and how they're received:

- Sending nothing → the client receives nothing; that's a 0-element reception of no chunks
- Sending `"foo"` → the client receives one element with data `foo`
- Sending `"foo\nbar"` → the client receives two elements with data `foo` and `bar`
- Sending `"foo\n"` → the client receives two elements with data `foo` and the empty string
So, previously, `\n` was a separator between elements, and now it's a line terminator. This could lead to bad issues.
Now, it happens that we're streaming differently in different cases:

- `APPLICATION_NDJSON_TYPE` and `APPLICATION_STREAM_JSON_TYPE` will stream using `ChunkedStreamingMultiSubscriber`, and since both types are JSON types, we can only stream JSON values, where Strings are delimited by `""`, so a trailing empty line will never be considered a valid value and will hopefully be discarded by the client. I haven't checked the specs, though, so I'm not sure it's even valid.
- Any other streaming case that isn't SSE will use `StreamingMultiSubscriber` (which is a supertype of `ChunkedStreamingMultiSubscriber`, and frankly that is confusing, because the names are just confusing), which allows JSON or regular HTTP streaming. Now, in this case, adding `\n` as a line terminator would be very problematic, especially for regular HTTP streaming. But this class doesn't add line separators in this PR, so it's fine.
Frankly, this is much more complex and delicate than it appears at first glance from this PR. I haven't checked the two specs whose behaviour this modifies, and I hope it won't affect clients with a trailing empty chunk. I am reasonably confident it should not, given that JSON doesn't allow empty values and would treat it as whitespace, but I hope a JS client would not produce an error. I hope the PR authors checked the specs :)
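As a small sanity check of the "a trailing newline is just whitespace to a JSON parser" point, a sketch using Jackson (not something taken from the PR):

```java
import com.fasterxml.jackson.databind.MappingIterator;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TrailingNewlineSketch {
    public static void main(String[] args) throws Exception {
        // Parse an NDJSON-style payload that ends with '\n' and print the values:
        // the trailing newline is treated as insignificant whitespace, so no extra
        // (empty) value shows up after "four".
        ObjectMapper mapper = new ObjectMapper();
        MappingIterator<String> values =
                mapper.readerFor(String.class).readValues("\"one\"\n\"two\"\n\"3\"\n\"four\"\n");
        while (values.hasNextValue()) {
            System.out.println(values.nextValue()); // one, two, 3, four
        }
    }
}
```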
I've just checked that when sending …
Which client? With what mime type? Using chunked encoding?
REST Client Reactive:

```java
@GET
@Path("/stream/string")
@Produces(RestMediaType.APPLICATION_STREAM_JSON)
@RestStreamElementType(MediaType.APPLICATION_JSON)
Multi<String> readString();
```

Server side:

```java
@Path("/stream")
public static class StreamingResource {

    @GET
    @Path("/string")
    @Produces(RestMediaType.APPLICATION_STREAM_JSON)
    @RestStreamElementType(MediaType.APPLICATION_JSON)
    public String readString() {
        return "\"one\"\n\"two\"\n\"3\"\n\"four\"\n";
    }
}
```
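For completeness, a rough sketch (not part of the PR) of how the streamed result could be collected and checked on the caller side; `client` here is assumed to be the REST Client Reactive proxy for the interface above, obtained e.g. via `@RestClient` injection:

```java
// Assumes `client` implements the Multi<String> readString() interface shown above.
List<String> items = client.readString()
        .collect().asList()
        .await().atMost(Duration.ofSeconds(5));

// With the server payload "\"one\"\n\"two\"\n\"3\"\n\"four\"\n", the expectation is
// [one, two, 3, four] -- no trailing empty element for the final newline.
System.out.println(items);
```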
Yeah, those are not the clients I was most worried about. But I guess we'll hear about JS clients if it breaks :)
Oh, I see now, sorry for my misunderstanding!
My fault, I often make my answers too short to be properly understood, sorry :)
Failing Jobs - Building 1187551

Full information is available in the Build summary check run.

Failures

⚙️ JVM Tests - JDK 17 #
- Failing: extensions/opentelemetry/deployment
- Skipped: extensions/micrometer-registry-prometheus/deployment, extensions/micrometer/deployment, extensions/quartz/deployment and 28 more

📦 extensions/opentelemetry/deployment ✖
The test failures are unrelated. Merging.
Fix #30690