Tracker: Serialization failure: 1213 Deadlock found when trying to get lock #6398
I was able to reproduce this on my local machine while trying to import a big log file (I guess it was Piwik 2.5).
thanks for reproducing
Thanks for the report, we should investigate this for sure.
Hello, thanks for the future fix.
Any news on this point? Every night my import_logs script fails because of this error. Thanks.
Do you get any other errors in your web server log or MySQL error log? I don't know what can cause this issue so far.
Nothing else beyond what has been mentioned. In the Piwik output I have: Fatal error: Internal Server Error. And in the Apache log: Hope that helps.
I can also reproduce the error in 2.9.1. I tried to import a log file of about 60 MB. This is the output of mysql.err (MariaDB 5.5.34): 141120 15:22:20
I get the same error messages with Piwik 2.9.1 and Oracle's mysql-community-server-5.5.33 on openSUSE 12.2 (x86_64), and I seem to observe gaps in the imported data.
The error messages no longer appear when I change the (non-default) importer option |
My solution is to "downgrade" (reinstall) to Piwik 2.6.1, where --recorders=24 is not a problem!
I have used a number of versions of Piwik previously and am seeing this problem with 2.9.0. Previously I collected about 12 months' worth of data, and I am running a Bash script to process my archives, consolidate the data into reasonable "chunks", and then upload them with the Python script. Once I see the error on my main database, it then seems to repeat a lot; it is as if I have blocked the pipe and everything backs up. I have tried a number of things to experiment and see if I can find a root cause:
I suspect that, for whatever reason, this is the database being in a bad state; all the MySQL information about this kind of thing suggests that it is about how the data is being pushed in, causing MySQL to "protect" its data (my words). Furthermore, pietsch's "fix" suggests that the database end of the connection has reached a state where it can accept data but needs the rate of input to slow down.

So a question for Piwik: does the database just accept data when uploading with the Python script, or do the PHP code and the database together manipulate it in the upload session?

My current approach is to slow my rebuild right down and also to stop the continuing data collection from my live source; in other words, I have one stream of data going into the database. My worry is that once a break occurs, that will be that, and I will be back to uploading 20 records every 10 minutes or so. zero-universe's suggestion is most wise at the moment. What sort of information should I collect to benefit this forum, assuming I see the error again? Kev
Any updates on this topic?
@zero-universe nope
Same here (v2.9.1): reducing recorders from 12 to 2 helped, but it's only a temporary fix. I import hourly, and with only 2 threads it takes too much time (115 req/sec instead of 280-350 req/sec).
Had to reduce recorders to 1 ... 👎 |
The same affects me (2.9.1) |
The same occurs on version 2.10.0.
Same here. Reducing recorders to 1 helps. Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
We are also having this problem in this environment: CentOS 7. And like others here, reducing the number of recorders to 1 appears to have eliminated the problem.
+1 here. Also had to reduce recorders to 1. Debian Wheezy.
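The workaround the comments above converge on is dropping the importer's concurrency to a single recorder thread. A minimal sketch of what that invocation might look like, with the command built in Python for illustration; the URL and log path are placeholders, and `--recorders` is the import_logs.py option being discussed:

```python
# Sketch: build the import_logs.py command line with a single recorder
# thread, the workaround reported above. URL and paths are placeholders.
import shlex

cmd = shlex.split(
    "python import_logs.py"
    " --url=https://piwik.example.org"  # placeholder Piwik/Matomo base URL
    " --recorders=1"                    # one writer avoids concurrent row locks
    " /var/log/apache2/access.log"      # placeholder access log
)
print(cmd)
```

With a single recorder, tracker requests are serialized, which avoids two connections holding locks on the same visit rows at the cost of import throughput.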
Hi, I wrote the transaction patch in 2.4.0. I will try to find a solution; I already have an idea that different data handling within Piwik might be the problem. While the analysis is ongoing, please try to set "bulk_requests_use_transaction = 0" in config/global.ini
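For reference, the suggested workaround as a config fragment. The section name is an assumption here (in current Matomo this setting sits under the tracker settings); check your own global.ini before applying:

```ini
; Workaround suggested above: disable the transaction wrapped around
; bulk tracking requests. Section name assumed to be [Tracker].
[Tracker]
bulk_requests_use_transaction = 0
```

Disabling the transaction trades atomicity of the bulk request for shorter lock hold times, which is why it can make the deadlocks disappear.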
You are a genius! |
Any update on this issue? I have also noticed that if you use MySQLi instead of PDO, the deadlock errors go away, but the number of records/sec drops significantly.
I'm moving this to 2.12.0 and hopefully we can fix it there. Maybe @medic123de will have a solution for it :-) |
Hi everyone, we have merged a pull request which is supposed to address this issue. It may not be completely fixed, but we'll need your feedback to confirm this. In the meantime, we'll mark it as closed. To use the beta release, please see: http://piwik.org/faq/how-to-update/faq_159/
Hi FYI, I am just seeing this with Matomo 3.5.0 and using
|
Maybe I missed a bit.
|
Hi! If you were asking me:
|
Yes. I have to admit I'm not using the Python script for this, so maybe our custom loader has helped with this issue as well. But I had to make those changes to get it to work even with our loader. I'll look again soon and try to reproduce with import_logs.py; I don't have the free time to look at this right now, though. I think maybe the report invalidation transaction should be moved out of this path: make another API call out of it that just sets the value and commits, and get it out of the mix altogether.
…nd multiple recorders. (matomo-org#12733)

* visitorGeolocator: output actual changes in debug mode (matomo-org#12478); closes matomo-org#12477
* Revert "visitorGeolocator: output actual changes in debug mode (matomo-org#12478)" (matomo-org#12480). This reverts commit 19a7654.
* Fix SystemIntegration test (matomo-org#12726). Found `piwik`
* This addresses the various deadlock issues when using transactions and bulk load. Issue matomo-org#6398
* Fix archive test to work with multi-recorder fix.
* Minor changes and renames
Does anyone have any suggestions to fix this issue in the meantime? I see that there's an open pull request, but it looks like it may not be likely to fix the issue. "bulk_requests_use_transaction=0" seems to do very little, if anything. If it helps, I'm encountering the exact same deadlock issue using bulk tracking rather than the importer, so it doesn't seem to be related to the log importer.
Just created an issue, see #14619: Tracker mode: configure MySQL transaction isolation level to READ UNCOMMITTED, to avoid gap locks.
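For anyone experimenting with the isolation-level idea, the server-side change would look roughly like the fragment below. Whether Matomo behaves correctly under READ UNCOMMITTED is exactly what #14619 discusses, so treat this as an experiment rather than a fix:

```ini
# my.cnf / mariadb.cnf sketch: relax the isolation level so tracker
# reads stop taking gap locks. Verify against issue #14619 before using.
[mysqld]
transaction-isolation = READ-UNCOMMITTED
```

The same effect can be tested without a restart via `SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;` on an existing server (applies to new connections only).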
Hey everyone, if you still experience this issue please comment. |
Reproduced in 4.7.0 running in a Docker container:
The error happened when I was running
Thanks for contributing to this issue. As it has been a few months since the last activity and we believe this is likely not an issue anymore, we will now close this. If that's not the case, please do feel free to either reopen this issue or open a new one. We will gladly take a look again! |
I see a variant of the original error frequently (from the tracker, not importing).
I'm running the docker image (currently …). Some more information:
|
@gg-kialo Thanks for the report. Would you be able to create a new bug report for your issue? Your issue might be more of an edge case than the original (closed) issue here, e.g. it might be related to the MariaDB Galera cluster, or to something else like the request payload (do you know if the issue is reproducible?). We'd still like to fix it eventually, but we aren't experiencing it internally at this time, so a new issue would be helpful 👍
So I use the tracking API.
My code separates actions into groups of 500-800 (depending on session length).
Each group is put into a queue, and threads send these groups to Piwik to be tracked.
Currently 15 threads send these groups simultaneously.
Why I do this:
In my particular setup, I did some testing, and around 500 was the sweet spot for Piwik's ability to handle bulk tracks (fewer didn't decrease the time much, and more started increasing the time significantly).
I also noticed that the time it takes to track these hits doesn't seem to increase much if you have multiple bulk requests at the same time.
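The batching scheme described above can be sketched as follows. This is a minimal illustration, not the commenter's actual code; `send_bulk` is a stand-in for the real HTTP POST to Piwik's bulk tracking endpoint:

```python
# Sketch of the described pipeline: split actions into groups of ~500,
# queue the groups, and let N worker threads send each group as one
# bulk tracking request.
import queue
import threading

GROUP_SIZE = 500    # the "sweet spot" mentioned above
NUM_THREADS = 15

def make_groups(actions, size=GROUP_SIZE):
    """Split a flat list of tracked actions into bulk-request groups."""
    return [actions[i:i + size] for i in range(0, len(actions), size)]

def send_bulk(group):
    # Placeholder: in reality, POST {"requests": group} to matomo.php
    pass

def worker(q):
    while True:
        group = q.get()
        if group is None:          # sentinel: no more work for this thread
            q.task_done()
            return
        send_bulk(group)
        q.task_done()

def run(actions):
    q = queue.Queue()
    threads = [threading.Thread(target=worker, args=(q,))
               for _ in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for group in make_groups(actions):
        q.put(group)
    for _ in threads:
        q.put(None)                # one sentinel per thread
    q.join()

run(list(range(1200)))             # 1200 fake actions
```

Note that this is precisely the access pattern that triggers the deadlock: 15 concurrent writers can end up updating the same `log_visit` rows in different orders.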
The problem:
I recently upgraded from 2.2.2 to 2.7.0, and now when I run the same importer, I start getting error 500s from the tracker requests.
Looking at the Apache error logs, there seem to be deadlock issues:
[Tue Oct 07 13:00:01 2014] [error] [client 130.14.24.60] Error in Piwik (tracker): Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
In query: UPDATE piwikdev_log_visit SET idvisitor = ?, visit_total_time = ?, visit_last_action_time = ?, visit_exit_idaction_url = ?, visit_total_actions = visit_total_actions + 1, custom_var_k1 = ?, custom_var_v1 = ? WHERE idsite = ? AND idvisit = ?
Parameters: array
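Since MySQL's own message says "try restarting transaction", one client-side mitigation is to retry a request that failed with error 1213. A hedged sketch, generic rather than Piwik code; `do_request` is a placeholder for whatever issues the tracker call, and `RuntimeError` stands in for the real error type:

```python
# Sketch: retry a request when the server reports MySQL error 1213
# (deadlock), with exponential backoff between attempts.
import time

MAX_RETRIES = 5

def send_with_retry(do_request, max_retries=MAX_RETRIES):
    for attempt in range(max_retries):
        try:
            return do_request()
        except RuntimeError as exc:  # stand-in for the real HTTP/DB error
            if "1213" not in str(exc) or attempt == max_retries - 1:
                raise                # not a deadlock, or out of retries
            time.sleep(0.1 * 2 ** attempt)

# Demo: a request that deadlocks twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("SQLSTATE[40001]: 1213 Deadlock found")
    return "ok"

print(send_with_retry(flaky))  # succeeds on the third attempt
```

Retrying only papers over the contention; reducing the number of concurrent writers (as described above) addresses the cause.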