Performance optimizations #774
-
1. Replace the inner BTreeMap with another non-ordered structure

The torrent entry (the info stored for each torrent in the tracker):

```rust
pub struct Torrent {
    pub(crate) peers: std::collections::BTreeMap<peer::Id, Arc<peer::Peer>>,
    pub(crate) downloaded: u32,
}
```

A couple of rules would allow us to use a more straightforward and potentially faster data structure for the Peer List:
Maybe a faster alternative could be a fixed array like arrayvec. Aquatic is using them. In fact:

```rust
pub enum PeerMap<I: Ip> {
    Small(SmallPeerMap<I>),
    Large(LargePeerMap<I>),
}
```

(A rough sketch of this two-tier idea follows below, after the pros and cons.)

Pros:
Cons:
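A minimal sketch of the two-tier idea with made-up types and an arbitrary promotion threshold (this is not aquatic's actual code): small swarms live in a flat vector that is cheap to scan, and a swarm is promoted to a hash map once it outgrows that representation. A fixed-capacity array such as arrayvec could replace the Vec in the Small variant to avoid heap allocation entirely.

```rust
use std::collections::HashMap;

type PeerId = [u8; 20];

// Hypothetical threshold: swarms larger than this use the "large" representation.
const SMALL_PEER_MAP_CAPACITY: usize = 4;

pub enum PeerMap<P> {
    Small(Vec<(PeerId, P)>),
    Large(HashMap<PeerId, P>),
}

impl<P> PeerMap<P> {
    pub fn new() -> Self {
        PeerMap::Small(Vec::with_capacity(SMALL_PEER_MAP_CAPACITY))
    }

    pub fn insert(&mut self, id: PeerId, peer: P) {
        if let PeerMap::Small(peers) = self {
            // Update an existing peer in place with a linear scan.
            if let Some(entry) = peers.iter_mut().find(|(pid, _)| *pid == id) {
                entry.1 = peer;
                return;
            }
            // Still small: just push.
            if peers.len() < SMALL_PEER_MAP_CAPACITY {
                peers.push((id, peer));
                return;
            }
            // Promote to the large representation once the swarm outgrows it.
            let promoted: HashMap<PeerId, P> = peers.drain(..).collect();
            *self = PeerMap::Large(promoted);
        }
        if let PeerMap::Large(map) = self {
            map.insert(id, peer);
        }
    }
}

fn main() {
    let mut swarm: PeerMap<&str> = PeerMap::new();
    swarm.insert([1u8; 20], "peer-1");
    swarm.insert([2u8; 20], "peer-2");
}
```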
-
2. Replace the outer BTreeMap with a DashMap

This is being implemented by @mickvandijke: It's not merged because DashMap does not allow iteration over all torrents, and the API needs that feature. On the other hand, it was implemented with a memory consumption limit, and the other repositories don't have that feature, so we should:
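For context, a minimal sketch of what the outer map could look like with DashMap (the types and field names are illustrative, not the actual implementation). DashMap shards its entries internally, so inserts and updates on different torrents can proceed in parallel without one outer RwLock:

```rust
use std::collections::BTreeMap;
use std::net::SocketAddr;

use dashmap::DashMap;

type InfoHash = [u8; 20];
type PeerId = [u8; 20];

#[derive(Default)]
struct TorrentEntry {
    peers: BTreeMap<PeerId, SocketAddr>,
    downloaded: u32,
}

fn main() {
    // No Arc<RwLock<...>> around the whole map: DashMap locks per internal shard.
    let torrents: DashMap<InfoHash, TorrentEntry> = DashMap::new();

    // Insert or update a torrent entry; only the affected shard is locked.
    torrents.entry([0u8; 20]).or_default().downloaded += 1;

    // Update the peer list of an existing torrent.
    torrents
        .get_mut(&[0u8; 20])
        .unwrap()
        .peers
        .insert([1u8; 20], "203.0.113.1:6881".parse().unwrap());
}
```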
-
3. Split the repository into two repositories for IPv4 and IPv6

That's another improvement I've seen in the aquatic repo:

```rust
#[derive(Clone)]
pub struct TorrentMaps {
    ipv4: TorrentMapShards<Ipv4AddrBytes>,
    ipv6: TorrentMapShards<Ipv6AddrBytes>,
}
```

Clients only get peers that are using the same IP version, so we have to filter out peers that are not using the same IP version as the client making the request. Splitting would be faster for standard tracker requests, but the API endpoints would be slower if we need to merge results from both repositories.
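A hedged sketch of the idea with simplified, made-up types (not aquatic's or the tracker's actual code): keep two independent maps and route each announce to the map matching the client's address family, so responses never need per-request filtering by IP version.

```rust
use std::collections::{BTreeMap, BTreeSet};
use std::net::{Ipv4Addr, Ipv6Addr, SocketAddr};

type InfoHash = [u8; 20];

#[derive(Default)]
struct Repository {
    ipv4: BTreeMap<InfoHash, BTreeSet<(Ipv4Addr, u16)>>,
    ipv6: BTreeMap<InfoHash, BTreeSet<(Ipv6Addr, u16)>>,
}

impl Repository {
    // Each announce only touches the map for the client's IP version, so the
    // returned peers are always of the same family as the client.
    fn announce(&mut self, info_hash: InfoHash, addr: SocketAddr) -> usize {
        match addr {
            SocketAddr::V4(a) => {
                let peers = self.ipv4.entry(info_hash).or_default();
                peers.insert((*a.ip(), a.port()));
                peers.len()
            }
            SocketAddr::V6(a) => {
                let peers = self.ipv6.entry(info_hash).or_default();
                peers.insert((*a.ip(), a.port()));
                peers.len()
            }
        }
    }

    // API endpoints that report totals have to merge both maps, which is the
    // downside mentioned above.
    fn total_torrents(&self) -> usize {
        let keys: BTreeSet<&InfoHash> = self.ipv4.keys().chain(self.ipv6.keys()).collect();
        keys.len()
    }
}

fn main() {
    let mut repo = Repository::default();
    repo.announce([0u8; 20], "203.0.113.1:6881".parse().unwrap());
    repo.announce([0u8; 20], "[2001:db8::1]:6881".parse().unwrap());
    assert_eq!(repo.total_torrents(), 1);
}
```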
-
4. Preallocate capacity for data structures

Pros:
Cons:

If you change any of the current data structures, consider reserving capacity. For example, the peer list could be a vector with a capacity of 74 peers.
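A tiny sketch of what reserving capacity could look like if the peer list became a vector; the 74 figure comes from the comment above, and the types are illustrative:

```rust
use std::net::SocketAddr;

// Assumed maximum number of peers kept per swarm, per the comment above.
const MAX_PEERS_PER_TORRENT: usize = 74;

fn new_peer_list() -> Vec<SocketAddr> {
    // One allocation up front instead of repeated growth while peers announce.
    Vec::with_capacity(MAX_PEERS_PER_TORRENT)
}

fn main() {
    let peers = new_peer_list();
    assert!(peers.capacity() >= MAX_PEERS_PER_TORRENT);
}
```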
-
5. Sharding for repositories

If I'm not wrong, aquatic has:

https://github.com/greatest-ape/aquatic/blob/master/documents/aquatic-architecture-2024.svg

I think each "Swarm Worker" has its own independent torrent repository. It's a kind of load balancer for requests. All requests must stick to the same worker; otherwise, they could get a different peer list. However, I think that should not be a problem. I'm not sure if that's the way it works, though; I have yet to read the code deeply. Maybe @greatest-ape can confirm.
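As a rough illustration only (the shard count and routing are assumptions, and this is not aquatic's code): requests could be routed to one of N independent repositories by info-hash, so each shard has its own lock or even its own worker thread.

```rust
use std::collections::BTreeMap;
use std::sync::Mutex;

type InfoHash = [u8; 20];

// Hypothetical shard count.
const NUM_SHARDS: usize = 8;

struct ShardedTorrents {
    shards: Vec<Mutex<BTreeMap<InfoHash, u32>>>,
}

impl ShardedTorrents {
    fn new() -> Self {
        Self {
            shards: (0..NUM_SHARDS).map(|_| Mutex::new(BTreeMap::new())).collect(),
        }
    }

    // Info-hashes are already uniformly distributed, so the first byte is enough
    // to pick a shard. All requests for the same torrent hit the same shard,
    // which keeps each peer list consistent.
    fn shard_for(&self, info_hash: &InfoHash) -> &Mutex<BTreeMap<InfoHash, u32>> {
        &self.shards[usize::from(info_hash[0]) % NUM_SHARDS]
    }

    fn increment_downloads(&self, info_hash: InfoHash) {
        let mut shard = self.shard_for(&info_hash).lock().unwrap();
        *shard.entry(info_hash).or_insert(0) += 1;
    }
}

fn main() {
    let repo = ShardedTorrents::new();
    repo.increment_downloads([0xAB; 20]);
}
```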
-
6. Use crossbeam channels with one thread owning the repository

https://docs.rs/crossbeam-channel/latest/crossbeam_channel/

Instead of sharing ownership of the repository between all the threads handling requests, there is only one thread that owns the repository, and it receives requests via a channel. This assumes that:
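As an illustration, a minimal sketch of the pattern, assuming crossbeam_channel and made-up request types (a real tracker request would carry full peer data). Because only one thread owns the repository, no Arc, Mutex, or RwLock is needed around it:

```rust
use std::collections::BTreeMap;
use std::thread;

use crossbeam_channel::{unbounded, Sender};

type InfoHash = [u8; 20];
type PeerId = [u8; 20];

// Hypothetical request type.
enum Request {
    Announce {
        info_hash: InfoHash,
        peer_id: PeerId,
        reply: Sender<Vec<PeerId>>,
    },
}

fn spawn_repository_actor() -> Sender<Request> {
    let (tx, rx) = unbounded::<Request>();

    thread::spawn(move || {
        // Exclusively owned by this thread.
        let mut torrents: BTreeMap<InfoHash, Vec<PeerId>> = BTreeMap::new();

        for request in rx {
            match request {
                Request::Announce { info_hash, peer_id, reply } => {
                    let peers = torrents.entry(info_hash).or_default();
                    if !peers.contains(&peer_id) {
                        peers.push(peer_id);
                    }
                    let _ = reply.send(peers.clone());
                }
            }
        }
    });

    tx
}

fn main() {
    let repo = spawn_repository_actor();
    let (reply_tx, reply_rx) = unbounded();
    repo.send(Request::Announce { info_hash: [0; 20], peer_id: [1; 20], reply: reply_tx })
        .unwrap();
    println!("peers: {}", reply_rx.recv().unwrap().len());
}
```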
-
7. Use
-
8. External service for the repository: use KeyDB

See #754. The repository could be a wrapper of an external service like KeyDB.

Pros:
Cons:
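Since KeyDB speaks the Redis protocol, a hedged sketch using the redis crate could look like this (the key layout, address, and function names are assumptions, not an agreed design):

```rust
use redis::Commands;

// Keep each torrent's peer list in a KeyDB/Redis set keyed by info-hash,
// so the tracker state lives outside the tracker process.
fn announce(
    con: &mut redis::Connection,
    info_hash_hex: &str,
    peer: &str,
) -> redis::RedisResult<Vec<String>> {
    let key = format!("torrent:{info_hash_hex}:peers");
    // SADD returns the number of members added; we ignore it here.
    let _: i64 = con.sadd(&key, peer)?;
    con.smembers(&key)
}

fn main() -> redis::RedisResult<()> {
    // Assumed setup: KeyDB listening on the default Redis port.
    let client = redis::Client::open("redis://127.0.0.1:6379/")?;
    let mut con = client.get_connection()?;
    let peers = announce(&mut con, "aabbcc", "203.0.113.1:6881")?;
    println!("{peers:?}");
    Ok(())
}
```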
-
9. Use other lock implementations

For example: https://github.com/Amanieu/parking_lot
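A minimal sketch of swapping the std::sync locks for parking_lot ones; parking_lot guards are not poisoned, so there is no Result to unwrap when locking:

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

use parking_lot::RwLock;

type InfoHash = [u8; 20];

fn main() {
    let torrents: Arc<RwLock<BTreeMap<InfoHash, u32>>> =
        Arc::new(RwLock::new(BTreeMap::new()));

    // No .unwrap(): parking_lot locks cannot be poisoned.
    torrents.write().insert([0u8; 20], 1);
    let count = torrents.read().len();
    println!("torrents: {count}");
}
```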
-
10. Use a different web framework

This is only possible for the HTTP Tracker and the Tracker API.

https://www.techempower.com/benchmarks/#section=data-r16&hw=ph&test=plaintext
-
Hi @Power2All @greatest-ape, I found this challenge: https://mas-bandwidth.com/writing-highly-scalable-backends-in-udp/. There are some suggestions for writing highly scalable backends in UDP.
-
I've been thinking and researching how to improve the tracker's performance. I would like to summarize what I've learned so that people who want to contribute to this topic can have a starting point.
First, I will explain what we have right now and why. Then, I will introduce some changes we have been working on and other performance changes that could be implemented following other trackers' implementations, especially the aquatic tracker, which seems to be the fastest.
Structure for torrent repository implementations
In production code we use one torrent repository implementation, which is hardcoded. However, we maintain other alternatives for benchmarking. There are several torrent repository implementations. All of them match a pattern like this:
Current implementation
UPDATE: The current implementation has changed. It now uses a SkipMap! But the nested collection structure is still used. The inner collection for peer lists uses a BTreeMap.
Currently, we are using a `BTreeMap` of torrents where the key is the infohash. For each entry in the `BTreeMap`, we keep the torrent statistics and another `BTreeMap` with the peer list. That's all! In short, it would be something like this:
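Roughly, with illustrative names rather than the real types:

```rust
use std::collections::BTreeMap;

type InfoHash = [u8; 20];
type PeerId = [u8; 20];

struct Peer; // peer address, id, stats, etc. omitted

// Each entry holds the torrent statistics plus another map with the peer list.
struct TorrentEntry {
    peers: BTreeMap<PeerId, Peer>,
    downloaded: u32,
}

// The outer map of torrents, keyed by info-hash.
type Torrents = BTreeMap<InfoHash, TorrentEntry>;
```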
The real implementation is more verbose because:

- It uses `Arc<Mutex>` to safely share the data structures between threads (`Arc`) and to be able to mutate (`Mutex`) the state.

These are the relevant types:
By replacing aliases and generics, you would have something like:
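Something along these lines, assuming std::sync primitives and simplified types (the real code uses its own aliases, so treat this as an approximation):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Mutex, RwLock};

type InfoHash = [u8; 20];
type PeerId = [u8; 20];

struct Peer; // simplified

struct TorrentEntry {
    // Inner collection: the peer list.
    peers: BTreeMap<PeerId, Arc<Peer>>,
    downloaded: u32,
}

// Outer collection: the torrent map, shared between request-handling threads
// (Arc), guarded by a RwLock, with each entry guarded by its own Mutex.
type Torrents = Arc<RwLock<BTreeMap<InfoHash, Mutex<TorrentEntry>>>>;
```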
That is the torrent repository.
Type of requests
To improve performance, we must know what types of requests we serve with this data structure.
Tracker:

- `announce`: This is a write-and-read operation into the repository. The peer is inserted or updated in the torrent's peer list. This is the most frequent request and the main one. The tracker's main purpose is to keep a list of peers interested in the same torrent (such as seeders or leechers).
- `scrape`: This is a read-only operation.
- `torrent list` and `torrent details`: These are like indirect `scrape` requests via the REST API.

Why this implementation
`RwLock` for the outer collection (torrents)

It allows us to have "a number of readers or at most one writer at any point in time". All `announce` requests are simultaneously a write and a read operation. If we only had `announce` requests, it would not make sense to use a `RwLock` because all clients would try to lock for writing. But since we also have a `scrape` request, this allows multiple scrape requests to run simultaneously. However, `announce` and `scrape` requests cannot be handled simultaneously.

`BTreeMap` for the outer collection (torrents)

The torrent repository is a `BTreeMap`. We use a `BTreeMap` because we need an ordered list of torrents for the API. Unlike standard `scrape` requests, the API allows one to get all torrents without providing a list of infohashes. That means we need to iterate over all torrents to implement the API endpoint (with pagination).

We are considering alternative data structures like DashMap. This structure allows parallel inserts/updates. The `RwLock` is needed for insertion because we add a new node to the tree, not modify a preexisting node.

There is an open question: why do we want the API to include an endpoint to get all torrents, and what does that mean? Currently, the API returns the torrents ordered by infohash (with pagination). Still, if you request the same page after a while, you can get a different result because torrents could be removed. In fact, since the DashMap does not allow iterating over all torrents, we are considering using an alternative structure (BoxCar) to keep all the torrents (including inactive torrents) so that the API can return consistent results. That may be a change we will introduce in the future even if, in the end, we don't use the DashMap. See this discussion for details about this topic.
`Mutex` for the inner collection (peer list)

Actors must acquire a lock to read/write a torrent entry. I don't know why a `Mutex` was used originally, but it does not allow multiple readers. For `announce` requests that's OK, but `scrape` requests don't need to be exclusive.

`BTreeMap` for the inner collection (peer list)

The peer list is not actually a list but another `BTreeMap`. I don't know why a `BTreeMap` was used originally. I guess it could have been just an array or vector because, as far as I know, we don't need to return an ordered list of peers.

Please comment on this thread if you have some questions (or answers).
cc @da2ce7 @mickvandijke is all that correct? Anything else to add?
Now, I will add some comments with proposals for improvement.
Proposals
Performance issues
For issues related to optimization, use the label optimization.
This list keeps track of what has been done related to this discussion.
- `SkipMap` instead of a `BTreeMap` #777

Resources