IPv6 outgoing connections not working #7637
Comments
Can reproduce. Have had this issue for months :/
I can spin a test build to remove the custom fallback handler if you fancy a go?
That would be great! Will the installation procedure be the same as described here? https://wiki.servarr.com/radarr/installation
I am using the Helm chart from TrueCharts (hence Docker) and have the same issues; this is the chart: https://truecharts.org/docs/charts/stable/radarr
Should have this at least partly fixed / working, combined with the IPv6 fallback logic.
I upgraded to v4.3.2.6857, which includes this commit. Unfortunately the problem remains, with the same stack trace. Do you need further information? Thanks!
Issue remains in v4.4.2.6956. @bakerboy448, could you please remove the tags you added so this gets attention again?
Fresh round of trace logs, anyone?
I don't think anything except line numbers has changed since I opened this issue:
Just ran into the same issue while migrating my home lab to IPv6-only. It doesn't affect only connections to my download client but indexers as well. The Radarr version used is 4.3.2.6857.
I just ran into this myself on an IPv6-only system. Is there a workaround some people have found until this is fixed?
Got pretty much the same error. I can curl the Prowlarr/Sonarr APIs on all hosts/Docker instances, but when entering an HTTPS IPv6 URL, e.g. https://[ipv6:test:0123], it says it is invalid; same with https://[ipv6]:9696.
Edit: Prowlarr log attached. I removed my IPv6 prefix because it is routable. It looks like they can authenticate, but then things fall apart: somehow the HTTP requests return 400 instead of 200. Regards
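For what it's worth, .NET's System.Uri itself accepts bracketed IPv6 literals, so the "invalid" rejection reported above likely comes from validation above the URI parser. A quick sketch, using a documentation address in place of the redacted one:

```csharp
using System;

// 2001:db8::1 is a documentation address standing in for the redacted prefix.
var ok = Uri.TryCreate("https://[2001:db8::1]:9696", UriKind.Absolute, out var uri);
Console.WriteLine($"valid={ok}, host={uri?.Host}, port={uri?.Port}");
```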
I spent some time looking at this a few months back and came to the realisation that the Happy Eyeballs algorithm has not been implemented. This means that if, for one reason or another, IPv6 connectivity is lost, it will never be retried until Radarr is restarted. In a k8s environment that happens often, as pods are moved between hosts.
The above line is the offending one.
The suggested patch for an override is simple, but I don't know the codebase well enough to add such a setting to config.xml.
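To illustrate what that fallback could look like, here is a minimal sketch (my own, not Radarr's handler) of a SocketsHttpHandler ConnectCallback that tries IPv6 first and falls back to IPv4 on every attempt. Real Happy Eyeballs (RFC 8305) races the two families in parallel, but even this sequential version avoids caching a dead IPv6 path until restart:

```csharp
using System.Net.Http;
using System.Net.Sockets;

var handler = new SocketsHttpHandler
{
    // Sequential approximation of Happy Eyeballs: try IPv6 first, then IPv4,
    // on *every* connection attempt, so a temporary loss of IPv6 connectivity
    // is retried instead of sticking until the process restarts.
    ConnectCallback = async (context, cancellationToken) =>
    {
        foreach (var family in new[] { AddressFamily.InterNetworkV6, AddressFamily.InterNetwork })
        {
            var socket = new Socket(family, SocketType.Stream, ProtocolType.Tcp) { NoDelay = true };
            try
            {
                await socket.ConnectAsync(context.DnsEndPoint, cancellationToken);
                return new NetworkStream(socket, ownsSocket: true);
            }
            catch (SocketException)
            {
                socket.Dispose(); // this family failed; fall through to the next one
            }
        }
        throw new HttpRequestException($"No reachable address for {context.DnsEndPoint.Host}");
    }
};

using var client = new HttpClient(handler);
```

A forceIPv6 override, as suggested above, would amount to skipping the IPv4 entry in that list.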
I wanted to add the forceIPv6 flag to the constructor of the
Is there an existing issue for this?
Current Behavior
My download client is IPv6 only. When I try to set it up, I get the following HTTP 400 reply:
Trace log at the end of the issue.
What's more, when I disable IPv4 on the machine running Radarr, searching for movies returns an HTTP 503 with the stack trace in the response, also attached below.
Using Wireshark, I can see DNS requests for api.radarr.video and responses for both A and AAAA records. When both IPv4 and IPv6 are enabled, the connection is established using the IPv4 address from the A record. With only IPv6 enabled, no connection is established and the error is Network is unreachable.
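For reference, a minimal sketch of the same check done from .NET rather than Wireshark (api.radarr.video is the host from the capture above; Dns.GetHostAddressesAsync returns results for both record types):

```csharp
using System;
using System.Net;

var addresses = await Dns.GetHostAddressesAsync("api.radarr.video");
foreach (var address in addresses)
    Console.WriteLine($"{address.AddressFamily}: {address}"); // InterNetwork = A, InterNetworkV6 = AAAA
```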
On the same machine, Sonarr is able to connect to the IPv6-only download client.
curl also works as expected, even preferring IPv6 over IPv4 addresses.
Expected Behavior
When dual stack is available, outgoing connections should be preferred via IPv6 instead of IPv4.
When only IPv6 is available and the remote host has an IPv6 address, a connection should be established.
Steps To Reproduce
or
Environment
What branch are you running?
Master
Trace Logs?
The "No data available" for remote IPv6-only hosts:
The "Network is unreachable" when trying to connect to dual stack remote hosts but from IPv6-only machine:
I suspect the relevant line is
NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.onConnect(SocketsHttpConnectionContext context, CancellationToken cancellationToken) in D:\a\1\s\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:line 272
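Radarr's actual onConnect isn't reproduced here, but as a hypothetical illustration of the failure mode, a ConnectCallback pinned to IPv4 fails in exactly this way on an IPv6-only host:

```csharp
using System.Net.Http;
using System.Net.Sockets;

var handler = new SocketsHttpHandler
{
    // Hypothetical, NOT Radarr's onConnect: pinning the socket to
    // AddressFamily.InterNetwork restricts connections to A-record addresses,
    // so an IPv6-only host gets SocketException: "Network is unreachable".
    ConnectCallback = async (context, cancellationToken) =>
    {
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        await socket.ConnectAsync(context.DnsEndPoint, cancellationToken);
        return new NetworkStream(socket, ownsSocket: true);
    }
};
```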
The issue referenced in the source code is still open, but on closer investigation there are merged PRs that claim to fix it?
AB#3824