
[BUG] Vert.x "response has already been written" should be prevented #3157

Closed
michael72 opened this issue Sep 7, 2023 · 5 comments

Comments

@michael72

Tapir version: v1.7.3

Scala version: 2.13.11

I use Tapir with Vert.x and Future to serve a remote webserver that receives a huge JSON file via PUT over a slow connection - everything works fine when the file is smaller.
For reasons I haven't quite fathomed yet, an io.netty.channel.StacklessClosedChannelException: null occurs, which in turn leads to

java.lang.IllegalStateException: Response has already been written
at io.vertx.core.http.impl.Http1xServerResponse.end(Http1xServerResponse.java:421) ~[availability-simulator.jar:0.1.0-SNAPSHOT]
at io.vertx.core.http.impl.Http1xServerResponse.end(Http1xServerResponse.java:409) ~[availability-simulator.jar:0.1.0-SNAPSHOT]
at io.vertx.core.http.impl.Http1xServerResponse.end(Http1xServerResponse.java:478) ~[availability-simulator.jar:0.1.0-SNAPSHOT]
at sttp.tapir.server.vertx.VertxFutureServerInterpreter.$anonfun$endpointHandler$5(VertxFutureServerInterpreter.scala:73)

Looking at the code:

.foreach { ex =>
  logger.error("Error while processing the request", ex)
  if (rc.response().bytesWritten() > 0) rc.response().end()
  rc.fail(ex)
}

it seems like the same problem as in this Stack Overflow question, which according to this answer can be solved by checking that the response has not already ended before calling end():

if (!req.response().ended()) {
    req.response().end(response.body());
}

I.e. the following should probably fix the needless "response has already been written" error:

.foreach { ex =>
  logger.error("Error while processing the request", ex)
  if (rc.response().bytesWritten() > 0 && !rc.response().ended()) rc.response().end()
  rc.fail(ex)
}

=> The interpreter code was copy-pasted between the backends, so the same problem should also occur with the Vert.x interpreters for Cats Effect and ZIO.
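
A possible way to avoid repeating the guard in all three interpreters would be a small shared helper along these lines (only a sketch; the helper name and placement are my own illustration, not existing tapir code):

import io.vertx.ext.web.RoutingContext
import org.slf4j.LoggerFactory

object VertxErrorHandling {
  private val logger = LoggerFactory.getLogger(getClass)

  // Log the error, end the response only if something was already written
  // and it has not been ended yet, then fail the routing context.
  def failRequest(rc: RoutingContext, ex: Throwable): Unit = {
    logger.error("Error while processing the request", ex)
    val response = rc.response()
    if (response.bytesWritten() > 0 && !response.ended()) response.end()
    rc.fail(ex)
  }
}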

@adamw
Member

adamw commented Sep 7, 2023

Do you know what response is being written? Maybe some Vert.x-global timeout kicks in and closes the connection, or is the connection closed on the client side?
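
If it's a server-side timeout, one thing that could be checked is the idle timeout configured on the Vert.x HTTP server, e.g. something along these lines (just a sketch, not necessarily how your server is set up; the port and timeout values are made up):

import io.vertx.core.Vertx
import io.vertx.core.http.HttpServerOptions
import io.vertx.ext.web.Router
import java.util.concurrent.TimeUnit

object ServerWithLongerIdleTimeout {
  def main(args: Array[String]): Unit = {
    val vertx = Vertx.vertx()
    val router = Router.router(vertx)
    // ... attach the tapir-generated routes to `router` here ...

    // Raise the idle timeout (0 would disable it) to rule out a server-side timeout:
    val options = new HttpServerOptions()
      .setIdleTimeout(300)
      .setIdleTimeoutUnit(TimeUnit.SECONDS)

    vertx.createHttpServer(options).requestHandler(router).listen(8080)
  }
}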

Either way, since you made a detailed investigation, maybe you would be open to opening a PR with a fix?

@michael72
Author

michael72 commented Sep 8, 2023

Hello @adamw,

Since the error logs occur only as an "afterthought" - all internal operations were successful and the response (accept) had also already been written - it is really only the logs that are a bit annoying here. Of course it would still be nice to find out what happens: it could either be an internal timeout or the amount of data being processed. I tried with a better internet connection and transfer rate and the same error logs appear again. When I shorten the JSON file to 2/3 of its original size, it works fine.

OK - it is probably worth mentioning that, because the data file being processed is quite large and we have to save memory, I went from jsonBody[List[Item]] to inputStreamBody in the API (the generated documentation itself still uses the jsonBody endpoint). I also tried going via fileBody - same errors. I also tried closing the stream manually - same errors.
With jsonBody itself or with stringBody (which has the drawback of needing a lot more memory) it all works fine - no error logs.
So I guess it could be a problem with how I use the API, or maybe some internal configuration issue - either inside tapir or in Vert.x.
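
For reference, a minimal sketch of the two variants I am comparing (the Item case class, the path and the circe codecs are placeholders from my project, not tapir code):

import sttp.tapir._
import sttp.tapir.generic.auto._
import sttp.tapir.json.circe._
import io.circe.generic.auto._
import java.io.InputStream

object Endpoints {
  case class Item(id: String, value: Int)

  // Works fine, but keeps the whole decoded payload in memory:
  val uploadJson: PublicEndpoint[List[Item], Unit, Unit, Any] =
    endpoint.put.in("items").in(jsonBody[List[Item]])

  // The memory-saving variant I switched to; this is where the
  // "response has already been written" logs show up:
  val uploadStream: PublicEndpoint[InputStream, Unit, Unit, Any] =
    endpoint.put.in("items").in(inputStreamBody)
}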

Maybe also worth mentioning: the actual size of the JSON file is 100 MB (about 40,000 items), and the processing time to check the content until "accept" is sent is 3.5 seconds.

As for the actual bugfix: I could add the code as mentioned - not sure (yet) what to do about the code duplication though ;-) - and right now I would need a couple of days to find the time for it.

@michael72
Author

Hm - one more question about the code - I think I don't get it here, here or here:

the end() function itself returns a Future[Void] - is that future simply ignored in the code? Same goes for the fail code afterwards?
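
Just to illustrate the question, the returned future could in principle be observed rather than dropped, e.g. along these lines (only a sketch, not what the interpreter currently does):

import io.vertx.core.http.HttpServerResponse
import org.slf4j.LoggerFactory

object ResponseEnd {
  private val logger = LoggerFactory.getLogger(getClass)

  def endLogged(response: HttpServerResponse): Unit = {
    // end() returns an io.vertx.core.Future[Void]; at least log when it fails
    // instead of silently discarding the result.
    response
      .end()
      .onFailure((t: Throwable) => logger.warn("Failed to end the response", t))
  }
}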

@adamw
Member

adamw commented Sep 8, 2023

As for the actual bugfix: I could add the code as mentioned - not sure (yet) what to do about the code duplication though ;-) - and right now I would need a couple of days to find the time for it.

No problem :) Yeah there is some code duplication, but that's the price you pay for integrating with 3 different runtimes.

the end() function itself returns a Future[Void] - is that future simply ignored in the code? Same goes for the fail code afterwards?

It seems so; I guess at that point we're not interested in when the response will actually finish writing? But of course there might be something wrong with that.

@michael72
Author

OK - the bug with the doubled error message is definitely fixed in 1.8.0. I also tested with our product, where the error was reproducible before.

Thanks so much!
