iroh-net: regression: blob downloads freeze at a low percentage completed #2951

Closed
arilotter opened this issue Nov 19, 2024 · 1 comment · Fixed by #3062
Assignees: flub
Labels: bug (Something isn't working), c-iroh, perf (performance related issues), regression
Milestone: v0.31.0

Comments

@arilotter commented Nov 19, 2024

After upgrading past 0.27, my iroh network started to see blob downloads that never complete.
We bisected the problematic change down to #2782.
That PR reliably causes roughly 1 in 16 of our blob downloads to freeze and then fail.
I've sent trace logs over to @b5 via sendme; they're a few gigabytes in size.

0.28 did not fix this issue, even though #2876 was flagged as a potential fix.
Issue #2852 might be related, since it was opened around the same time.

@flub flub self-assigned this Nov 27, 2024
@ramfox ramfox moved this to 📋 Backlog in iroh Nov 27, 2024
@flub flub added c-iroh bug Something isn't working regression perf performance related issues labels Dec 9, 2024
@flub flub added this to the v0.30.0 milestone Dec 9, 2024
@ramfox ramfox moved this to 👍 Ready in iroh Dec 10, 2024
@ramfox ramfox modified the milestones: v0.30.0, v0.31.0 Dec 16, 2024
@rklaehn (Contributor) commented Jan 3, 2025

Not sure if this is the same issue, but we have been seeing timeouts when using the transfer example https://github.com/n0-computer/iroh/blob/main/iroh/examples/transfer.rs in relay-only mode.

Tests with #3077 show that it fixes the issue, but that change still has to make its way to main. It should be in 0.31, though.

github-merge-queue bot pushed a commit that referenced this issue Jan 3, 2025
…MagicSock (#3062)

## Description

This refactors how datagrams flow from the MagicSock (AsyncUdpSocket) to
the relay server and back. It also vastly simplifies the actors involved in
communicating with a relay server.

- The `RelayActor` manages all connections to relay servers.
  - It starts a new `ActiveRelayActor` for each relay server needed.
  - An `ActiveRelayActor` exits when it is unused.
    - The exception is the home relay's `ActiveRelayActor`, which never exits.
  - Each `ActiveRelayActor` uses a relay `Client`.
- The relay `Client` is now a `Stream` and `Sink` directly connected to
the `TcpStream` connected to the relay server. This eliminates several
actors previously used here in the `Client` and `Conn`.
- Each `ActiveRelayActor` tries to maintain a connection with the
relay server.
- If connections fail, exponential backoff is used for reconnections.
- When `AsyncUdpSocket` needs to send datagrams:
  - It (now) puts them on a queue to the `RelayActor`.
- The `RelayActor` ensures the correct `ActiveRelayActor` is running and
forwards datagrams to it.
  - The `ActiveRelayActor` sends datagrams directly to the relay server.
- The relay receive path is now:
  - Whenever the `ActiveRelayActor` is connected, it reads from the underlying
`TcpStream`.
  - Received datagrams are placed on an mpsc channel that bypasses the
`RelayActor` and goes straight to the `AsyncUdpSocket` interface (see the
sketch after this list).
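
For illustration, here is a minimal sketch of the wiring described above, with datagrams flowing from the `AsyncUdpSocket` side through the `RelayActor` on the send path and bypassing it on the receive path. This is not the actual iroh code: the `Datagram` type, the function names, and the single-relay setup are simplifications assumed for the example.

```rust
// Minimal sketch of the send/receive wiring described above; not the actual
// iroh implementation. `Datagram` and the function names are hypothetical.
use tokio::sync::mpsc;

struct Datagram {
    remote: std::net::SocketAddr,
    payload: Vec<u8>,
}

// Send path: the AsyncUdpSocket side queues datagrams to the RelayActor,
// which forwards them to the ActiveRelayActor responsible for the relay.
// (The real RelayActor picks and spawns the right ActiveRelayActor; this
// sketch pretends there is only one.)
async fn relay_actor(
    mut from_socket: mpsc::Receiver<Datagram>,
    to_active_relay: mpsc::Sender<Datagram>,
) {
    while let Some(dgram) = from_socket.recv().await {
        if to_active_relay.send(dgram).await.is_err() {
            break; // the ActiveRelayActor is gone
        }
    }
}

// Receive path: the ActiveRelayActor reads datagrams off the relay
// connection and puts them straight onto a channel read by the
// AsyncUdpSocket interface, bypassing the RelayActor.
async fn active_relay_recv_loop(
    mut from_relay_conn: mpsc::Receiver<Datagram>,
    to_socket: mpsc::Sender<Datagram>,
) {
    while let Some(dgram) = from_relay_conn.recv().await {
        if to_socket.send(dgram).await.is_err() {
            break; // the socket side was dropped
        }
    }
}
```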

Along the way many bugs are fixed.  Some of them:

- The relay datagram send and receive queues now behave more correctly
when they are full, so the `AsyncUdpSocket` behaves better.
  - There is still a bug where the send queue does not wake up all the
tasks that might be waiting to send; this needs a followup: #3067.
- The `RelayActor` now avoids blocking. This means it can still react to
events when the datagrams queues are full and reconnect relay servers
etc as needed to unblock.
- The `ActiveRelayActor` also avoids blocking. Allowing it to react to
connection breakage and the need to reconnect at any time.
- The `ActiveRelayActor` now correctly handles connection errors and
retries with backoff.
- The `ActiveRelayActor` no longer queues unsent datagrams forever,
but flushes them every 400ms (see the sketch after this list).
- This also stops the send queue into the `RelayActor` from completely
blocking.
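
As a rough sketch of the two behaviours above, flushing queued datagrams at least every 400ms and reconnecting with exponential backoff, the loops could look something like the following. This is illustrative only, not the actual `ActiveRelayActor`; `send_to_relay` and `try_connect` are hypothetical placeholders.

```rust
// Illustrative sketch only; not the actual ActiveRelayActor code.
use std::time::Duration;
use tokio::{sync::mpsc, time};

const FLUSH_INTERVAL: Duration = Duration::from_millis(400);

// Collect outgoing datagrams and flush them at least every 400ms instead of
// queueing unsent datagrams forever.
async fn flush_loop(mut outgoing: mpsc::Receiver<Vec<u8>>) {
    let mut pending: Vec<Vec<u8>> = Vec::new();
    let mut ticker = time::interval(FLUSH_INTERVAL);
    loop {
        tokio::select! {
            maybe = outgoing.recv() => match maybe {
                Some(dgram) => pending.push(dgram),
                None => break, // sender side closed, stop the loop
            },
            _ = ticker.tick() => {
                if !pending.is_empty() {
                    send_to_relay(std::mem::take(&mut pending)).await;
                }
            }
        }
    }
}

async fn send_to_relay(_batch: Vec<Vec<u8>>) {
    // placeholder: write the batch to the relay connection
}

// Reconnect to the relay server with exponential backoff after the
// connection breaks, capping the delay at some maximum.
async fn reconnect_with_backoff() {
    let mut delay = Duration::from_millis(100);
    let max_delay = Duration::from_secs(10);
    while !try_connect().await {
        time::sleep(delay).await;
        delay = (delay * 2).min(max_delay);
    }
}

async fn try_connect() -> bool {
    // placeholder: attempt to (re)establish the relay connection
    true
}
```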


## Breaking Changes

### iroh-relay

- `Conn` is no longer public.
- The `Client` is completely changed.  See module docs.

## Notes & open questions

- Potentially the relay `Conn` and `Client` no longer need to be two
separate things? `Client` is convenient because it implements only one
`Sink` interface, while `Conn` is also a frame sink, which means that on
`Conn` you often need rather awkward syntax when calling things like
`.flush()` or `.close()`.
- Maybe a few items from the `ActiveRelayActor` can be moved back into
the relay `Client`, though that would probably require some gymnastics.
The current structure of `ActiveRelayActor` is fairly reasonable and
handles things correctly, though it does have a lot of client knowledge
baked in. Being able to reason about the client as a stream + sink is
what enabled me to write a good `ActiveRelayActor` (see the generic
sketch after this list), so I'm fairly happy that this code makes sense
as it is.
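
For context, "client as a stream + sink" refers to driving a connection type that implements both `futures::Stream` and `futures::Sink` from a single select loop. A generic sketch of that pattern follows; the types and the `drive` function are hypothetical and not the iroh-relay `Client` API.

```rust
// Generic sketch of driving a combined Stream + Sink from one loop; the
// types here are hypothetical and not the iroh-relay Client API.
use futures::{Sink, SinkExt, Stream, StreamExt};
use tokio::sync::mpsc;

async fn drive<C, Frame>(mut conn: C, mut outgoing: mpsc::Receiver<Frame>)
where
    C: Stream<Item = Frame> + Sink<Frame> + Unpin,
{
    loop {
        tokio::select! {
            // Inbound frames arriving from the relay connection.
            inbound = conn.next() => match inbound {
                Some(frame) => handle_inbound(frame),
                None => break, // connection closed by the other side
            },
            // Outbound frames queued by the rest of the system.
            outbound = outgoing.recv() => match outbound {
                Some(frame) => {
                    if conn.send(frame).await.is_err() {
                        break; // send failed; the caller reconnects with backoff
                    }
                }
                None => break, // nothing left to send
            },
        }
    }
}

fn handle_inbound<Frame>(_frame: Frame) {
    // placeholder: hand the frame to whoever needs it
}
```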

If all goes well this should:
Closes #3008 
Closes #2971 
Closes #2951

## Change checklist

- [x] Self-review.
- [x] Documentation updates following the [style
guide](https://rust-lang.github.io/rfcs/1574-more-api-documentation-conventions.html#appendix-a-full-conventions-text),
if relevant.
- [x] Tests if relevant.
- [x] All breaking changes documented.

---------

Co-authored-by: Friedel Ziegelmayer <[email protected]>
@flub flub closed this as completed in #3062 Jan 3, 2025
@github-project-automation github-project-automation bot moved this from 👍 Ready to ✅ Done in iroh Jan 3, 2025