Error connection id could not be verified
#14
It seems there are a couple of IPs making a lot of requests. From: 2024-11-15T11:50:26.058130Z (the initial record time in the logs). This is the number of records containing the error per IP:
I've tried with another UDP tracker client and it works:
I've added a new UDP tracker to the demo on a new port:

```shell
$ cargo run --bin udp_tracker_client announce 144.126.245.19:6868 9c38422213e30bff212b30c360d26f9a02136422
   Compiling torrust-tracker-client v3.0.0-develop (/home/josecelano/Documents/git/committer/me/github/torrust/torrust-tracker/console/tracker-client)
    Finished `dev` profile [optimized + debuginfo] target(s) in 7.78s
     Running `/home/josecelano/Documents/git/committer/me/github/torrust/torrust-tracker/target/debug/udp_tracker_client announce '144.126.245.19:6868' 9c38422213e30bff212b30c360d26f9a02136422`
sending connection request...
connection_id: ConnectionId(I64(-9193186782767578663))
sending announce request...
{
  "AnnounceIpv4": {
    "transaction_id": -888840697,
    "announce_interval": 300,
    "leechers": 0,
    "seeders": 1,
    "peers": []
  }
}
```

Either the clients are not sending the connection ID correctly, or there is a bug when the load is high. I'm going to enable the trace level to check whether the clients are sending the connection ID they get in the connect response.

cc @da2ce7
I think there is another possible cause: if announce requests take longer than 2 minutes to be processed, the connection ID has already expired by the time we verify it.
Let's move to symmetric-encrypted connection IDs so that we can give an error response saying whether the ID expired (or is just invalid).
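As a rough illustration of the idea (not the actual PR implementation): pack the issue time into the connection ID itself, protected by a server-side key, so verification can distinguish "expired" from "invalid". This std-only sketch uses a keyed checksum via `DefaultHasher` purely for illustration; a real implementation would encrypt or authenticate with a proper symmetric cipher. All names and constants here are hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::net::SocketAddr;
use std::time::{SystemTime, UNIX_EPOCH};

// Connection IDs expire after 2 minutes, as discussed above.
const EXPIRY_SECS: u64 = 120;

fn now_secs() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

// Keyed checksum binding the issue time to the client's socket address.
// NOT cryptographically secure: a real implementation would use a block
// cipher (e.g. AES) or an HMAC under a server-side secret key.
fn checksum(key: u64, addr: &SocketAddr, issue_time: u32) -> u32 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    addr.hash(&mut h);
    issue_time.hash(&mut h);
    h.finish() as u32
}

// Pack the issue time (high 32 bits) and checksum (low 32 bits) into an i64.
fn make_connection_id(key: u64, addr: &SocketAddr) -> i64 {
    let issue_time = now_secs() as u32;
    (((issue_time as u64) << 32) | checksum(key, addr, issue_time) as u64) as i64
}

// Returns Ok(()) if the ID is authentic and fresh, or the reason it failed,
// so the error response can say *why* verification failed.
fn verify_connection_id(key: u64, addr: &SocketAddr, id: i64) -> Result<(), &'static str> {
    let issue_time = ((id as u64) >> 32) as u32;
    let sum = id as u64 as u32;
    if sum != checksum(key, addr, issue_time) {
        return Err("invalid connection id");
    }
    if now_secs().saturating_sub(issue_time as u64) > EXPIRY_SECS {
        return Err("connection id expired");
    }
    Ok(())
}

fn main() {
    let key: u64 = 0x5eed_5eed_5eed_5eed;
    let addr: SocketAddr = "203.0.113.7:6881".parse().unwrap();
    let id = make_connection_id(key, &addr);
    assert!(verify_connection_id(key, &addr, id).is_ok());
    // A forged or bit-flipped ID is rejected with a reason:
    assert_eq!(verify_connection_id(key, &addr, id ^ 1), Err("invalid connection id"));
    println!("ok");
}
```

With this layout the server keeps no per-client state: the ID carries its own issue time, and the checksum ties it to the requesting address.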
@josecelano I have created a draft (unfinished for now) implementation of the encrypted connection ID:
Hi @da2ce7, cool, I will take a look tomorrow. We have to check the performance; I think the hashing for the connection ID was one of the things that took a large percentage of the total request-processing time.
Hi @da2ce7, I have updated the tracker demo after merging your PR. I've checked the logs, and we are still getting many errors. However, now we know the exact reason:
It would be convenient to include the unencrypted issue time in the logs so we know the exact value. Anyway, I'm going to continue with my debug plan. I want to check whether these failing requests have a previous connect request.
@josecelano I will try to make another PR that makes the log messages nicer for this case.
Hi @da2ce7, I'm getting many announce request errors from my BitTorrent client. I think it is because the demo tracker is not responding in time. The server CPU has been under 50% for the last 14 days. I suppose the problem is that we have a lot of request errors in the logs for this issue, and that makes the UDP tracker slower because it has to write the errors to the log file. I think we should either:
Could it be that some clients don't make the connect request, or make it incorrectly?
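For reference, BEP 15 requires the client to first send a connect request carrying the magic protocol ID, and to echo the returned connection ID verbatim in every subsequent announce; a client that skips this step or reuses a stale ID would produce exactly these verification errors. A minimal sketch of the packet layout (helper names are hypothetical, not from the tracker codebase):

```rust
// BEP 15 magic constant identifying a UDP tracker connect request.
const PROTOCOL_ID: u64 = 0x0000_0417_2710_1980;
const ACTION_CONNECT: u32 = 0;

// Build the 16-byte connect request a well-behaved client must send first.
fn build_connect_request(transaction_id: u32) -> [u8; 16] {
    let mut buf = [0u8; 16];
    buf[0..8].copy_from_slice(&PROTOCOL_ID.to_be_bytes());
    buf[8..12].copy_from_slice(&ACTION_CONNECT.to_be_bytes());
    buf[12..16].copy_from_slice(&transaction_id.to_be_bytes());
    buf
}

// Parse the 16-byte connect response: (action, transaction_id, connection_id).
// The connection_id must be echoed verbatim in announce requests.
fn parse_connect_response(buf: &[u8; 16]) -> (u32, u32, i64) {
    let action = u32::from_be_bytes(buf[0..4].try_into().unwrap());
    let transaction_id = u32::from_be_bytes(buf[4..8].try_into().unwrap());
    let connection_id = i64::from_be_bytes(buf[8..16].try_into().unwrap());
    (action, transaction_id, connection_id)
}

fn main() {
    let req = build_connect_request(0xDEAD_BEEF);
    assert_eq!(&req[0..8], &PROTOCOL_ID.to_be_bytes());
    // Simulate a server response carrying connection_id = 42:
    let mut resp = [0u8; 16];
    resp[4..8].copy_from_slice(&0xDEAD_BEEFu32.to_be_bytes());
    resp[8..16].copy_from_slice(&42i64.to_be_bytes());
    assert_eq!(parse_connect_response(&resp), (0, 0xDEAD_BEEF, 42));
    println!("ok");
}
```

A client that announces with a connection ID other than the one parsed here is exactly the misbehavior being hypothesized above.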
Here is what the AI says. Given your requirements, a Counting Bloom Filter (CBF) or a similar probabilistic data structure can be adapted to handle this scenario effectively. Here's how you could implement a solution that addresses your concerns: Solution Overview:
Implementation Details:
Advantages:
Challenges:
Implementation Steps:
By employing this hierarchical approach with Counting Bloom Filters, you can effectively manage IP-based errors at different levels of granularity, protecting your network's performance while minimizing the impact on innocent IPs.
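The AI's detailed steps are elided in this thread; as a rough, hypothetical illustration of the data structure it names, here is a minimal std-only counting Bloom filter (size, number of hashes, and the non-cryptographic hash scheme are all arbitrary choices, not the tracker's implementation):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Minimal counting Bloom filter: supports insert, remove, and a
// may-contain test with false positives but no false negatives.
struct CountingBloomFilter {
    counters: Vec<u8>,
    num_hashes: u64,
}

impl CountingBloomFilter {
    fn new(size: usize, num_hashes: u64) -> Self {
        Self { counters: vec![0; size], num_hashes }
    }

    // Derive the k-th counter index for an item by seeding the hasher.
    fn index<T: Hash>(&self, item: &T, seed: u64) -> usize {
        let mut h = DefaultHasher::new();
        seed.hash(&mut h);
        item.hash(&mut h);
        (h.finish() as usize) % self.counters.len()
    }

    fn insert<T: Hash>(&mut self, item: &T) {
        for seed in 0..self.num_hashes {
            let i = self.index(item, seed);
            self.counters[i] = self.counters[i].saturating_add(1);
        }
    }

    // Unlike a plain Bloom filter, counters allow deletion, so entries for
    // an IP can be aged out once its error rate drops.
    fn remove<T: Hash>(&mut self, item: &T) {
        for seed in 0..self.num_hashes {
            let i = self.index(item, seed);
            self.counters[i] = self.counters[i].saturating_sub(1);
        }
    }

    fn may_contain<T: Hash>(&self, item: &T) -> bool {
        (0..self.num_hashes).all(|seed| self.counters[self.index(item, seed)] > 0)
    }
}

fn main() {
    let mut cbf = CountingBloomFilter::new(1024, 3);
    cbf.insert(&"198.51.100.7");
    assert!(cbf.may_contain(&"198.51.100.7"));
    cbf.remove(&"198.51.100.7");
    assert!(!cbf.may_contain(&"198.51.100.7"));
    println!("ok");
}
```

For the hierarchical scheme the AI describes, one filter could track offending IPs and another offending socket addresses, with counters decremented on a timer so innocent IPs recover.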
Hi @da2ce7, I've downloaded 3 minutes of tracker logs.

Socket addresses ordered by number of log records (first 10):

```shell
$ grep -oP "\d+\.\d+\.\d+\.\d+:\d+" ./tracker_logs.txt | sort | uniq -c | sort -nr | head
84398 0.0.0.0:6969
4632 *.142.69.160:53234
3311 *.248.77.85:38473
2581 *.241.177.250:43822
2276 *.182.117.236:43124
2216 *.86.246.61:1024
1895 *.118.235.66:6881
1846 *.38.196.30:6816
1845 *.181.47.215:6817
1818 *.181.47.215:6812
```

IP addresses ordered by number of log records (first 10):

```shell
$ grep -oP "\d+\.\d+\.\d+\.\d+" ./tracker_logs.txt | sort | uniq -c | sort -nr | head
85291 0.0.0.0
13435 *.38.196.30
12789 *.181.47.215
5052 *.65.200.220
4869 *.119.120.243
4785 *.79.3.85
4632 *.142.69.160
3895 *.248.77.85
3734 *.165.243.54
3403 *.236.91.11
```

Socket addresses for a single IP (the one with the second-highest number of log records), ordered by number of log records (first 10):

```shell
$ grep -oP "*.38.196.30:.\d+" ./tracker_logs.txt | sort | uniq -c | sort -nr | head
1846 *.38.196.30:6816
1755 *.38.196.30:6812
1728 *.38.196.30:6814
1710 *.38.196.30:6811
1689 *.38.196.30:6817
1602 *.38.196.30:6813
1566 *.38.196.30:6818
1539 *.38.196.30:6815
```

I've also checked one socket address. The client is making the connect request, but it's not using the received connection ID, ConnectionId(I64(-2325979768337461225)), in the announce request.

I guess this is a bad client implementation. I can email you the logs if you want to check other things.
I wondered whether the client producing the errors was newtracke, but it seems it's not. It seems they are using these IPs:
I will check if we receive a request from one of those two IPs.
I've been reviewing the demo tracker logs, and there are a lot of errors verifying the connection ID: