
Netty warnings #3225

Closed
rucek opened this issue Oct 4, 2023 · 7 comments · Fixed by #3579

Comments

@rucek
Contributor

rucek commented Oct 4, 2023

Tapir version: 1.7.6

Scala version: 2.13.10

Describe the bug

When using a Netty-based server, the following warnings are logged on every request/response:

15:47:10.632 [KQueueEventLoopGroup-2-1] WARN  io.netty.bootstrap.ServerBootstrap - Unknown channel option 'SO_TIMEOUT' for channel '[id: 0x27f0b934, L:/127.0.0.1:8080 - R:/127.0.0.1:59292]'
15:47:10.657 [KQueueEventLoopGroup-2-3] WARN  i.n.channel.kqueue.KQueueEventLoop - events[1]=[200, -1] had no channel!

How to reproduce?

Use adopt-tapir to generate an example with Netty, run with sbt run and make a sample request.

Additional information

Tested on an Intel MBP with macOS 14.0

@rucek rucek changed the title [BUG] Netty warnings Netty warnings Oct 4, 2023
@adamw adamw added the Netty label Oct 17, 2023
@peterschrott

Ran into the exact same issue.

I get the same logs when using curl (curl --request GET --url 'http://localhost:8080/hello?name=test'). When using the generated Swagger UI, the warning turns into:

11:28:51.922 [KQueueEventLoopGroup-2-1] WARN  io.netty.bootstrap.ServerBootstrap - Unknown channel option 'SO_TIMEOUT' for channel '[id: 0x54df3e2f, L:/127.0.0.1:8080 - R:/127.0.0.1:54061]'

and then after 20 seconds (see the timestamps):

11:29:11.979 [KQueueEventLoopGroup-2-2] WARN  i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.timeout.ReadTimeoutException: null

I found that in NettyBootstrap.scala the SO_TIMEOUT option is set from the default socket timeout of 60 seconds (NettyConfig.defaultNoStreaming).

When providing a custom Netty configuration without a socket timeout, the warning vanishes:

import sttp.tapir.server.netty.{NettyConfig, NettyFutureServer}

private val nettyConfig = NettyConfig.defaultNoStreaming.copy(socketTimeout = None)
NettyFutureServer(nettyConfig).port(8080).addEndpoints(Endpoints.all).start()

@peterschrott

UPDATE:

I was actually wrong. Adding the Netty configuration only fixes:

11:28:51.922 [KQueueEventLoopGroup-2-1] WARN  io.netty.bootstrap.ServerBootstrap - Unknown channel option 'SO_TIMEOUT' for channel '[id: 0x54df3e2f, L:/127.0.0.1:8080 - R:/127.0.0.1:54061]'

But:

11:29:11.979 [KQueueEventLoopGroup-2-2] WARN  i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.timeout.ReadTimeoutException: null

still occurs.

@nima-taheri-hs

nima-taheri-hs commented Nov 22, 2023

In our company we upgraded from tapir 1.2.x to 1.9.x to take advantage of the graceful shutdown feature, but these warnings started showing up all of a sudden, so we reverted to 1.2.x.

@peterschrott

UPDATE:

I went back through all the versions from 1.9.1 and found that this was introduced in v1.6.1 (changes).

With the custom Netty settings proposed above, all warnings vanish when requestTimeout is also set to None:

private val nettyConfig = NettyConfig.defaultNoStreaming.copy(socketTimeout = None, requestTimeout = None) 
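
For reference, here is a minimal self-contained sketch of this workaround; the hello endpoint below is a hypothetical stand-in for the adopt-tapir generated endpoints, not code from this thread:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import sttp.tapir._
import sttp.tapir.server.netty.{NettyConfig, NettyFutureServer}

object Main extends App {
  // Hypothetical endpoint, standing in for the generated Endpoints.all
  val hello = endpoint.get
    .in("hello").in(query[String]("name"))
    .out(stringBody)
    .serverLogicSuccess[Future](name => Future.successful(s"Hello, $name!"))

  // Disable both timeouts; the trade-off is that idle or slow connections
  // are no longer closed automatically by the server
  val nettyConfig = NettyConfig.defaultNoStreaming.copy(socketTimeout = None, requestTimeout = None)

  Await.result(NettyFutureServer(nettyConfig).port(8080).addEndpoint(hello).start(), Duration.Inf)
}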

I don't know the underlying consequences of this, or even whether the warnings are harmful. I just wanted to mention it here.

If I have time, I will investigate further.

@amorfis
Member

amorfis commented Jan 2, 2024

Thoughts and findings after some investigation:

The problem stems from kqueue being the default eventLoopConfig setting in NettyConfig on non-Linux systems. KQueue does not support the socket timeout option, hence the SO_TIMEOUT warning. The socket timeout should be set to None when using kqueue.

I was able to reproduce the second warning: WARN i.n.channel.kqueue.KQueueEventLoop - events[1]=[200, -1] had no channel! It also shows up only when kqueue is chosen.

A workaround would be to use nio as the eventLoopConfig and None for socketTimeout - none of the above problems appears with this setting.

NettyConfig could log a meaningful warning message when kqueue or nio is configured together with a socket timeout.
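
As a rough illustration, such a check could look like the following hypothetical helper (not part of Tapir, shown only to sketch the idea):

// Hypothetical helper, not part of Tapir: the kqueue and nio transports ignore
// SO_TIMEOUT, so a configured socketTimeout only surfaces as a runtime warning
def warnIfSocketTimeoutUnsupported(config: NettyConfig): Unit =
  if (config.socketTimeout.isDefined)
    println("WARN: socketTimeout is ignored by the kqueue/nio transports; set it to None to avoid Netty's SO_TIMEOUT warning")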

@peterschrott

@amorfis thanks for sharing your findings. I guess you are talking about eventLoopConfig, not eventGroupConfig?

I set the eventLoopConfig in NettyConfig as follows:

NettyConfig.default.eventLoopConfig(EventLoopConfig.nio)

but the warning regarding SO_TIMEOUT is still logged:

2024-01-17 12:56:08.301 WARN  [nioEventLoopGroup-2-1] io.netty.bootstrap.ServerBootstrap - Unknown channel option 'SO_TIMEOUT' for channel '[id: 0xa73fb9a0, L:/[0:0:0:0:0:0:0:1]:5013 - R:/[0:0:0:0:0:0:0:1]:51517]'
2024-01-17 12:56:28.504 WARN  [nioEventLoopGroup-2-2] io.netty.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.timeout.ReadTimeoutException: null

Now they are logged from the nioEventLoopGroup thread instead of KQueueEventLoopGroup.

Am I missing something?

(tested locally on macOS 14.2.1 (23C71))

@amorfis
Member

amorfis commented Jan 22, 2024

@peterschrott Yes, you are right - thank you for pointing out my mistake.

You are right that nio also does not support the SO_TIMEOUT setting. So for now, the only option to get rid of those warnings seems to be configuring nio with socketTimeout set to None.
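
Putting it together, a sketch of the resulting configuration (assuming EventLoopConfig lives at NettyConfig.EventLoopConfig, and reusing the adopt-tapir Endpoints.all from earlier in the thread):

import sttp.tapir.server.netty.{NettyConfig, NettyFutureServer}
import sttp.tapir.server.netty.NettyConfig.EventLoopConfig // assumed location

// Combine both workarounds from this thread: NIO transport + no socket timeout.
// Per the earlier comments, requestTimeout = None may also be needed to avoid
// the ReadTimeoutException warning.
val config = NettyConfig.defaultNoStreaming
  .copy(socketTimeout = None)
  .eventLoopConfig(EventLoopConfig.nio)

NettyFutureServer(config).port(8080).addEndpoints(Endpoints.all).start()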

@kciesielski kciesielski linked a pull request Apr 5, 2024 that will close this issue