mattab opened this issue on Mar 21, 2010 · 6 comments
Labels: Bug, Critical
Currently, archive.sh simply loops over the periods and triggers archiving for all websites at once: https://github.com/piwik/piwik/blob/master/misc/cron/archive.sh

Because there are memory leak issues during archiving (see #766), when a Piwik install has hundreds or thousands of websites, memory usage peaks and can reach ridiculous values.

Until #766 is addressed, and as a useful safeguard, we should loop over each website and trigger archiving separately for each. This is a quick fix that will help many users get past the memory usage issues.
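To make the proposal concrete, here is a minimal sketch of what the per-site loop could look like. This is a sketch only: the base URL, the token_auth value, and the use of SitesManager.getAllSitesId and VisitsSummary.getVisits as archiving triggers are assumptions following the Piwik HTTP API conventions, not the actual archive.sh code.

```sh
#!/bin/sh
# Sketch of per-site archiving (assumed base URL and token_auth below).
PIWIK_URL="http://example.org/piwik"   # assumption: your Piwik base URL
TOKEN="your_token_auth"                # assumption: a valid token_auth

# Fetch the list of site IDs, one per line.
SITE_IDS=$(curl -s "$PIWIK_URL/index.php?module=API&method=SitesManager.getAllSitesId&format=CSV&token_auth=$TOKEN")

for ID in $SITE_IDS; do
  # Guard against non-numeric noise in the API response.
  case "$ID" in
    ''|*[!0-9]*) continue ;;
  esac
  for PERIOD in day week month year; do
    # One HTTP request per site and period: each request runs in its own
    # PHP process, so leaked memory is released when the request ends.
    curl -s "$PIWIK_URL/index.php?module=API&method=VisitsSummary.getVisits&idSite=$ID&period=$PERIOD&date=last52&format=XML&token_auth=$TOKEN" > /dev/null
  done
done
```

The point of the per-request split is that PHP frees all memory when a request finishes, so a leak while archiving one site cannot accumulate across the whole site list.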
(In [2025]) Fixes #1227: the archive script now loops over all websites and triggers a separate request for each archive, helping with the archiving memory-exhaustion issue (refs #766).
Thanks for your work on this. Overall I would say it's better. It's still achingly slow running over the sites, and my most active site (15k visits / 20k actions per day, out of around 150k per day) is taking 2 GB to process the yearly stats.
I tried the new archive.sh script, but it didn't work for me: the list of site IDs seems to be UTF-encoded, so it doesn't pass the is-numeric test. The convertToUnicode=0 parameter on the API call doesn't seem to do anything here.
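If the stray bytes are a UTF byte-order mark prepended to the API response (an assumption; the comment doesn't say what the extra bytes actually are), one possible workaround is to delete everything that is not a digit or a newline before the numeric test:

```sh
# Possible workaround (assumption: the non-numeric bytes are a UTF BOM or
# similar noise at the start of the response): strip every character that
# is not a digit or a newline, then run the numeric check as before.
SITE_IDS=$(curl -s "$PIWIK_URL/index.php?module=API&method=SitesManager.getAllSitesId&format=CSV&token_auth=$TOKEN" \
  | tr -cd '0-9\n')
```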