Index page incredibly slow after update to 1.21.8 #29769
Can you force-reload this page and try again? |
I force-reloaded (in Chrome) and it didn't have any effect. I also loaded it in Firefox (which I don't use for anything except testing, so it had never hit the site before) and it had the same performance problem. |
Quick update, my coworker appears to not have this issue even after hard refreshes. I STILL have the issue however, even after hitting it locally on the server itself with Edge. I have a feeling this is specific to administrative users. |
Can you provide some backend logs and web browser console -> network screenshots? |
Sure, here are the contents of the log after clearing it and refreshing the main page a few times:
|
We're experiencing the same thing, although for some users it already started in 1.21.1 |
I know that in one particular older version (not sure offhand if it was 1.21.1) main page performance dropped drastically, but I believe it was the org/repo listing that took forever at that time. |
Looks promising! |
Just noting that after updating to 1.21.9 this morning, it didn't make any difference on this page load issue for me. |
If the heatmap is involved, maybe this issue is related: #21045 |
Possibly, but I doubt it. The heatmap (despite a fair number of repos on our instance) wasn't an issue before 1.21.8; something changed in that release (and it seems to affect only admins) that has made the main page almost unusable at this point. Also noting (probably expected) that performance in the latest 1.21.10 is unchanged. |
Maybe we need a trace-level log to detect where handling the request takes a long time. |
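For reference, trace-level logging is configured in app.ini; a minimal sketch (see the Gitea logging documentation for the full set of keys):

```ini
; minimal sketch: raise the log level to Trace, then restart the service
[log]
LEVEL = Trace
```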
I stopped the service, cleared the log, set to trace level, and after starting the service refreshed the main page and stopped it again. Hopefully this helps! |
Thank you. It's very helpful. |
Just a quick update on this: I've updated our instance to 1.22.0 and this is still an issue. Before the update I also converted the entire database collation to the recommended Latin1_General_CS_AS (to no effect). |
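For anyone else on MSSQL wanting to verify the collation, a check along these lines can be used (a sketch; `gitea` is a placeholder database name, substitute your own):

```sql
-- Sketch: report the current collation of the Gitea database on MSSQL
SELECT DATABASEPROPERTYEX('gitea', 'Collation') AS DatabaseCollation;
```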
Can you post the backend logs from visiting the dashboard after enabling database logging? https://docs.gitea.com/next/administration/logging-config?_highlight=log_sql#the-xorm-logger And if you can access the database, please do a count on the `action` table. |
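The count itself is a one-liner against the activity table (assuming the default table name `action`):

```sql
-- Number of activity-feed rows
SELECT COUNT(*) FROM action;
```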
I've attached the requested log, cleared it first followed by service start, refreshed dashboard, stopped service. The count of the action table is 139,460. |
An interesting bit to add to this: I noticed yesterday, when I had filled the entire feed with commits that were only mine, that the index page loaded in a second or two (normal). Today, now that the feed has other commits (synced commits for mirrors in orgs), it's back to being extremely slow to refresh. |
The slow dashboard also happened on Gitea 1.21.9; some logs that may be useful:
|
Just adding that this is still an issue with 1.22.1. |
I see this too in my Gitea 1.22.1 on Ubuntu with PostgreSQL. |
The "cache context is expired, may be misused..." log entry I see in #31752. |
I have the same issue with the slow home page / user page on my Gitea instance. Since the heatmap is suspected, I tested the performance with |
@lixfel do you see slow queries in gitea.log? |
Yes (MariaDB backend). Time of the query roughly matches reported page load time for (logged in)
I have also found these queries (just once, not for every slow page load):
|
I'm using Gitea with PostgreSQL, so I can't check the statements on my setup. Just try to analyze the statements with ANALYZE (https://mariadb.com/kb/en/analyze-statement/) and post the output here. |
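For example, roughly like this (a sketch only; the SELECT is a simplified stand-in using the filter and sort columns visible in the plans below, while the real statement should be copied from gitea.log):

```sql
-- Sketch: run MariaDB's ANALYZE with JSON output against the slow statement
ANALYZE FORMAT=JSON
SELECT * FROM `action`
WHERE act_user_id = 1 AND is_deleted = 0
ORDER BY created_unix DESC
LIMIT 5;
```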
Context: ~122k actions (I have in the meantime set up the recommended old-action cleanup, hence the slightly shorter times).

1. Statement:
{
"query_optimization": {
"r_total_time_ms": 0.224562772
},
"query_block": {
"select_id": 1,
"r_loops": 1,
"r_total_time_ms": 14692.04853,
"filesort": {
"sort_key": "`action`.created_unix desc",
"r_loops": 1,
"r_total_time_ms": 0.310724109,
"r_limit": 5,
"r_used_priority_queue": true,
"r_output_rows": 6,
"r_sort_mode": "sort_key,rowid",
"temporary_table": {
"nested_loop": [
{
"table": {
"table_name": "repository",
"access_type": "index",
"possible_keys": ["PRIMARY"],
"key": "IDX_repository_is_fork",
"key_length": "1",
"used_key_parts": ["is_fork"],
"r_loops": 1,
"rows": 101,
"r_rows": 101,
"r_table_time_ms": 0.128251931,
"r_other_time_ms": 0.164203069,
"r_engine_stats": {
"pages_accessed": 1
},
"filtered": 100,
"r_filtered": 100,
"using_index": true
}
},
{
"table": {
"table_name": "action",
"access_type": "ref",
"possible_keys": [
"IDX_action_user_id",
"IDX_action_au_r_c_u_d",
"IDX_action_r_u_d"
],
"key": "IDX_action_r_u_d",
"key_length": "19",
"used_key_parts": ["repo_id", "user_id", "is_deleted"],
"ref": ["gitea.repository.id", "const", "const"],
"r_loops": 101,
"rows": 129,
"r_rows": 258.2079208,
"r_table_time_ms": 14677.53835,
"r_other_time_ms": 12.82903396,
"r_engine_stats": {
"pages_accessed": 78573,
"pages_read_count": 8730,
"pages_read_time_ms": 49065154.15
},
"filtered": 27.69502831,
"r_filtered": 4.022393497,
"attached_condition": "`action`.act_user_id = 1"
}
}
]
}
}
}
}

2. Statement:
{
"query_optimization": {
"r_total_time_ms": 0.46650906
},
"query_block": {
"select_id": 1,
"r_loops": 1,
"r_total_time_ms": 15869.32961,
"filesort": {
"sort_key": "`action`.created_unix desc",
"r_loops": 1,
"r_total_time_ms": 0.49666901,
"r_limit": 5,
"r_used_priority_queue": true,
"r_output_rows": 6,
"r_sort_mode": "sort_key,rowid",
"temporary_table": {
"nested_loop": [
{
"table": {
"table_name": "user",
"access_type": "const",
"possible_keys": ["PRIMARY"],
"key": "PRIMARY",
"key_length": "8",
"used_key_parts": ["id"],
"ref": ["const"],
"r_loops": 0,
"rows": 1,
"r_rows": null,
"r_engine_stats": {
"pages_accessed": 1
},
"filtered": 100,
"r_filtered": null
}
},
{
"table": {
"table_name": "repository",
"access_type": "range",
"possible_keys": ["PRIMARY", "IDX_repository_is_private"],
"key": "IDX_repository_is_private",
"key_length": "2",
"used_key_parts": ["is_private"],
"r_loops": 1,
"rows": 67,
"r_rows": 67,
"r_table_time_ms": 0.365640891,
"r_other_time_ms": 0.387999654,
"r_engine_stats": {
"pages_accessed": 135
},
"filtered": 100,
"r_filtered": 76.11940299,
"index_condition": "repository.is_private = 0",
"attached_condition": "!<in_optimizer>(repository.owner_id,repository.owner_id in (subquery#4))"
}
},
{
"table": {
"table_name": "repository",
"access_type": "eq_ref",
"possible_keys": ["PRIMARY"],
"key": "PRIMARY",
"key_length": "8",
"used_key_parts": ["id"],
"ref": ["gitea.repository.id"],
"r_loops": 51,
"rows": 1,
"r_rows": 1,
"r_table_time_ms": 0.300514901,
"r_other_time_ms": 0.032468465,
"r_engine_stats": {
"pages_accessed": 104
},
"filtered": 100,
"r_filtered": 100,
"using_index": true
}
},
{
"table": {
"table_name": "action",
"access_type": "ref",
"possible_keys": [
"IDX_action_user_id",
"IDX_action_au_r_c_u_d",
"IDX_action_r_u_d"
],
"key": "IDX_action_r_u_d",
"key_length": "19",
"used_key_parts": ["repo_id", "user_id", "is_deleted"],
"ref": ["gitea.repository.id", "const", "const"],
"r_loops": 51,
"rows": 129,
"r_rows": 507.8627451,
"r_table_time_ms": 15852.30487,
"r_other_time_ms": 13.85372599,
"r_engine_stats": {
"pages_accessed": 77944,
"pages_read_count": 9094,
"pages_read_time_ms": 53190293.67
},
"filtered": 100,
"r_filtered": 3.899463341,
"attached_condition": "`action`.act_user_id = 1 and `action`.is_private = 0"
}
}
],
"subqueries": [
{
"query_block": {
"select_id": 4,
"r_loops": 1,
"r_total_time_ms": 0.055215297,
"nested_loop": [
{
"table": {
"table_name": "user",
"access_type": "ALL",
"possible_keys": ["PRIMARY"],
"r_loops": 1,
"rows": 31,
"r_rows": 31,
"r_table_time_ms": 0.028239505,
"r_other_time_ms": 0.019582574,
"r_engine_stats": {
"pages_accessed": 1
},
"filtered": 100,
"r_filtered": 87.09677419,
"attached_condition": "`user`.`type` = 0 and `user`.visibility in (0,2)"
}
}
]
}
}
]
}
}
}
}

3. Statement:
{
"query_optimization": {
"r_total_time_ms": 0.382885099
},
"query_block": {
"select_id": 1,
"r_loops": 1,
"r_total_time_ms": 15281.56887,
"nested_loop": [
{
"table": {
"table_name": "user",
"access_type": "const",
"possible_keys": ["PRIMARY"],
"key": "PRIMARY",
"key_length": "8",
"used_key_parts": ["id"],
"ref": ["const"],
"r_loops": 0,
"rows": 1,
"r_rows": null,
"r_engine_stats": {
"pages_accessed": 1
},
"filtered": 100,
"r_filtered": null
}
},
{
"table": {
"table_name": "repository",
"access_type": "ALL",
"possible_keys": ["PRIMARY", "IDX_repository_is_private"],
"r_loops": 1,
"rows": 101,
"r_rows": 101,
"r_table_time_ms": 0.149008663,
"r_other_time_ms": 0.449722574,
"r_engine_stats": {
"pages_accessed": 4
},
"filtered": 66.33663177,
"r_filtered": 50.4950495,
"attached_condition": "repository.is_private = 0 and !<in_optimizer>(repository.owner_id,repository.owner_id in (subquery#4))"
}
},
{
"table": {
"table_name": "repository",
"access_type": "eq_ref",
"possible_keys": ["PRIMARY"],
"key": "PRIMARY",
"key_length": "8",
"used_key_parts": ["id"],
"ref": ["gitea.repository.id"],
"r_loops": 51,
"rows": 1,
"r_rows": 1,
"r_table_time_ms": 0.315848614,
"r_other_time_ms": 0.031174901,
"r_engine_stats": {
"pages_accessed": 104
},
"filtered": 100,
"r_filtered": 100,
"using_index": true
}
},
{
"table": {
"table_name": "action",
"access_type": "ref",
"possible_keys": [
"IDX_action_user_id",
"IDX_action_au_r_c_u_d",
"IDX_action_r_u_d"
],
"key": "IDX_action_r_u_d",
"key_length": "19",
"used_key_parts": ["repo_id", "user_id", "is_deleted"],
"ref": ["gitea.repository.id", "const", "const"],
"r_loops": 51,
"rows": 129,
"r_rows": 507.8627451,
"r_table_time_ms": 15273.24721,
"r_other_time_ms": 7.312031735,
"r_engine_stats": {
"pages_accessed": 77933,
"pages_read_count": 8640,
"pages_read_time_ms": 51436849.48
},
"filtered": 100,
"r_filtered": 3.899463341,
"attached_condition": "`action`.act_user_id = 1 and `action`.is_private = 0"
}
}
],
"subqueries": [
{
"query_block": {
"select_id": 4,
"r_loops": 1,
"r_total_time_ms": 0.049603218,
"nested_loop": [
{
"table": {
"table_name": "user",
"access_type": "ALL",
"possible_keys": ["PRIMARY"],
"r_loops": 1,
"rows": 31,
"r_rows": 31,
"r_table_time_ms": 0.025692178,
"r_other_time_ms": 0.020458218,
"r_engine_stats": {
"pages_accessed": 1
},
"filtered": 100,
"r_filtered": 83.87096774,
"attached_condition": "`user`.`type` = 0 and `user`.visibility in (0,1)"
}
}
]
}
}
]
}
} |
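Reading the three plans above, almost all of the runtime sits in the `action` ref lookups: IDX_action_r_u_d is used to fetch rows by (repo_id, user_id, is_deleted), but an r_filtered of roughly 4% means about 96% of the fetched rows are then discarded by the `act_user_id` condition. A purely hypothetical index one might experiment with locally (explicitly not the fix that was eventually merged) would also cover that filter and the sort column:

```sql
-- Hypothetical experiment only: extend the existing access pattern so the
-- act_user_id filter and created_unix ordering are resolved from the index
CREATE INDEX idx_action_dashboard_test
    ON `action` (repo_id, user_id, is_deleted, act_user_id, created_unix);
```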
We're running a relatively small instance in Docker, backed by SQLite, and have also been observing periodic, extremely long delays for quite some time (I don't remember the exact version, but I do remember it was the one where go-git was replaced in favor of the git binary, which is why back then I chalked the added delay up to further optimizations being needed in interacting with the git binary). Having now enabled trace logging on 1.22.2, I can confirm the pattern observed by others above. Please note that the following log entries are complete and have only been annotated with comments and anonymized.
|
One more data point to add to this: my gut feeling is that the SQL queries themselves are probably not slow as such (the same behavior is consistently reported by many people in this discussion across a variety of DB backends), but rather are the "victims" of some other activity slowing down the instance (perhaps filesystem access to update recently changed aspects of the git repositories when showing the dashboard). So I tried massively increasing the RAM allocated to the VM running Gitea, and I can report that I have not observed slowdowns as bad as before since then. Before the increase the VM had 16 GB of RAM, with never more than 2 GB active and the rest used as FS cache; with that configuration we had regular delays exceeding 35 seconds, sometimes even exceeding 1 minute(!). Now the VM has 64 GB of RAM; the active size remains below 2 GB and the rest is used as FS cache. With this configuration we've only had very few reports of slowness in general, and those that were reported usually remained below 10 seconds; only a select few (3 to be exact) took longer, and not a single one exceeded 13 seconds. |
But in this case you can switch to PostgreSQL. It will perform better when Gitea is used concurrently. |
True, but I doubt it would make a difference in the context of this specific issue, as others who have reported the same behavior are using PostgreSQL. This is one of the reasons why I suspect the root cause of this issue to be something other than the DB backend. The behavior change with extra FS cache alone partially supports this assumption. |
I come to a different conclusion than you: the data presented supports the hypothesis that the SQL queries themselves are the problem. My reports on this issue were on a dedicated server with typically 16+ GiB of free/file-cache memory, and the slow times persisted across multiple page loads (when all data should be in the filesystem cache; the database is significantly smaller). A single load of a frequently displayed page should never require reading multiple gigabytes of data. Edit: I suspect the reduction in slow page load times after the memory increase has more to do with larger caches (due to more available memory) in the database software. |
And this is a major difference from the situation I observed, which is why I came to a different conclusion. Edit: But it could of course very well be that we are actually observing two different issues that just happen to surface under similar circumstances. |
In our case it made no difference whether we reloaded immediately after the slow dashboard load; it ALWAYS loaded incredibly slowly. The frustrating part now is that after many months of problematic dashboard loading (at least all of this year), out of nowhere the slow dashboard loading seems to have stopped happening. This was a week or two before upgrading to 1.22.2 when that came out, so unfortunately I don't have a way to repro this now. |
#32127 should be a step toward resolving the problem. |
Any updates? |
Updated to 1.22.4. It's better but not perfect yet, with a 4.4 s page load time instead of 15 s. |
Please upload some logs and pprof diagnostic reports. |
I did not manage to get pprof running (the port 6060 used by ENABLE_PPROF just returned 404 to me; see the sketch after this comment), but I did get a relevant log excerpt:
Context: 108k actions total, 29.5k undeleted actions from my user. Using ANALYZE SELECT on the query:
Removing the |
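Regarding the pprof 404: ENABLE_PPROF serves Go's standard net/http/pprof handlers, so the profiles presumably live under /debug/pprof/ rather than at the server root; a sketch, assuming the default listener on localhost:6060:

```shell
# Sketch: list the available profiles, then grab a 30-second CPU profile
curl http://localhost:6060/debug/pprof/
go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'
```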
This problem should be fixed by #31821 and will be released in v1.23.
Can confirm that the issue seems to be resolved all around for me. |
Closing for now. Please file a new issue if a similar problem occurs. |
Description
I updated from 1.21.7 to 1.21.8 and now the index page takes almost 10 seconds to load; it's insanely slow. It seems to be the heatmap that is holding everything up?
Gitea Version
1.21.8
Can you reproduce the bug on the Gitea demo site?
No
Log Gist
No response
Screenshots
Git Version
2.42.0, Wire Protocol Version 2 Enabled
Operating System
Windows Server 2019
How are you running Gitea?
As a Windows service using the provided binary.
Database
MSSQL