Bulk load Piwik logs with documented API: improved tracking performance, allow performance testing #5554
This would make Piwik easy to prefer over Analytics, as admins could then import years' worth of logs and data into Piwik and see trends more easily. I would be able to get the people at work to use Piwik instead of Google Analytics if I could import our common Apache logs into Piwik.
Will tackle in conjunction with #762.
Attachment: Proposed plugin attached.
Really interesting!
Before putting the plugin in core, I think we should do the cleanup of hook names and come up with consistent naming. Please post your updated versions here - really looking forward to having a better look.
This would be extraordinarily helpful when tracking web services. It would make tracking very easy in cases like this:
vipsoft, did you try this plugin with real traffic and see if it was working? How does it deal with cookies? Would it work in the real case of a traffic replay from Apache logs?
I'm using this plugin with a Python script that parses through Apache logs with real traffic. It seems to ignore the timestamp, and all visits are recorded at the current server time. Has anyone encountered this?
set, I think this is a bug in the code. Also, I expect it doesn't deal well with cookies and would therefore lead to invalid statistics. This is an alpha, not-ready-for-production release.
See an example of a PHP function that creates a request to piwik.php.
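The PHP example itself isn't preserved in this thread. As a rough equivalent, here is a minimal Python sketch of assembling a piwik.php tracking URL; the host, path, and site ID are placeholders, not real values from the thread.

```python
# Minimal sketch: build a GET request URL for the piwik.php tracking
# endpoint. Hostname and idsite below are hypothetical examples.
from urllib.parse import urlencode

def build_tracking_url(base, idsite, url, action_name="", urlref=""):
    """Assemble a piwik.php tracking request URL for one page view."""
    params = {
        "idsite": idsite,            # site ID as configured in Piwik
        "url": url,                  # URL of the tracked page view
        "action_name": action_name,  # page title, may be empty
        "urlref": urlref,            # referrer URL, if any
        "rec": 1,                    # ask the tracker to record the hit
    }
    return base + "?" + urlencode(params)

tracking_url = build_tracking_url("http://example.org/piwik/piwik.php",
                                  1, "http://example.org/")
```

Fetching that URL (e.g. with `urllib.request.urlopen`) would record the hit, the same way the piwik.js tag does from the browser.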
Will this module not create race conditions due to parallel writes to the logfile? Unlike Apache, we don't keep the file open the whole time - the file gets opened/closed every time piwik.php is requested and writes data to it. Normal file system implementations don't care about simultaneous writes, so we're going to lose some log lines on high-load Piwik sites. Even if there were a locking mechanism in the file system, it would probably hurt server load with heavy stat(), wait() and I/O.
Moving to post-1.0 as it requires significant time / testing.
halfdan, I think we will have to lock the file before writing a new line into it, then release the lock. Hopefully there is a large margin before this ends up causing performance issues.
I just did a quick test: with 200 concurrent requests, 1000 total requests, calling a PHP file that acquires a lock and appends 8k to a log file. If the lock is non-blocking, it fails to be acquired in approx 50% of requests, and 1000 requests take ~0.15s. So the conclusion is that we are safe to write to the log file as long as we lock; this is not going to be the bottleneck. I used this script with ab.
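The locked-append approach tested above can be sketched as follows. This is a Python illustration (the actual tracker is PHP, where `flock()` plays the same role), using POSIX advisory locks via the standard `fcntl` module:

```python
# Sketch of append-with-lock, as discussed above: an exclusive lock
# keeps concurrent writers from interleaving log lines. POSIX only.
import fcntl

def append_log_line(path, line):
    """Append one log line under an exclusive, blocking lock, so that
    parallel piwik.php requests never interleave partial lines."""
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

With a blocking lock, writers queue briefly instead of failing, which matches the "safe as long as we lock" conclusion above.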
Random thought: to be safe implementing this change, it would be nice to research the code coverage status of our tracker unit tests. I believe it could be pretty easy to get close to 100%, but we just don't know what is not tested by our current integration tests.
Once we implement bulk loading, hopefully we can get rid of the Tracker/DB custom classes and use Zend all the time?
I need this (just mentioning it to follow the ticket changes :-) )
Hello guys, I'm currently working on an import script. It takes a very long time to import the logfiles from just a single day. Do you know any way to speed that up?
It would be even better to have the chance to disable some plugins when using that function. Thanks
I assume during import, there is no browser-triggered archiving or scheduled tasks. It's been a while since I've looked at the tracker performance (the biggest issue for me was a memleak), so it would be timely to review, since Piwik 1.2 will have first-party cookies. It would be nice if the importer could also parse a custom log format (if defined), or handle the tables created by mod_log_dbd or mod_log_sql.
The KSVisitorImport plugin can easily be extended with various importer plugins. Currently it can import two types of Apache logs and has a dummy class for importing Google data (although this data is not really imported). Additionally, I started to integrate an AJAX-based progress bar to show that the import is still alive, since it runs for quite a long time. Regards
I have been experimenting with KSVisitorImport. In the imports I have run, my system shows low CPU usage and low IO, so I suspect the performance bottleneck is network IO, probably DNS lookups. If this theory is correct, then DNS caching may yield a performance boost.
The gethostbyaddr() call in the Provider plugin can be a bottleneck. There are some interesting ideas in the user-contributed notes for http://php.net/gethostbyaddr that we could (1) test for during installation to see which methods work
Thanks for the tip. Commenting out the gethostbyaddr() call resulted in a 60x performance increase. It should be possible to substantially speed up the DNS lookups by using multiplexing (see http://wezfurlong.org/blog/2005/may/guru-multiplexing), i.e. doing a bunch of DNS lookups in parallel. So instead of waiting 1-3 seconds for each lookup, you run 10 at a time in parallel, and then it only costs you 0.1-0.3 seconds per lookup. I'm going to experiment with this.
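The multiplexing idea above can also be approximated with a plain thread pool, since each lookup is blocking network I/O. A hedged Python sketch (the comment's own experiments would be PHP; worker count of 10 mirrors the example above):

```python
# Sketch: run reverse-DNS lookups 10 at a time so one slow DNS answer
# does not stall a sequential log import. Thread pool stands in for
# the socket-level multiplexing described in the linked article.
import socket
from concurrent.futures import ThreadPoolExecutor

def reverse_lookup(ip):
    """Resolve one IP to a hostname; return None if resolution fails."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return None

def bulk_reverse_lookup(ips, workers=10):
    """Map each IP to its hostname (or None), `workers` lookups at a time."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(ips, pool.map(reverse_lookup, ips)))
```

With 1-3s per lookup and 10 in flight, the amortized cost per record drops roughly tenfold, matching the estimate in the comment above.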
jdhildeb, this is very interesting. We will very soon (in a few months) load logs in parallel, and I hadn't taken into consideration that this might be a bottleneck. Alternatively, we can do the DNS lookup on the piwik.php hit and log it. This might be easier and would not cause issues with DNS slowness until there are many requests per second. Also, the DNS server should be cached locally on the server itself (not sure what this is called again...)
Hi guys, to avoid problems with the anon plugin we should do the following: first hit: next hit with same IP. What do you think? Regards
Kay, no - this is what a DNS cache server is for.
Is there somebody working on an importer? It sounds like it.
You're right, but the import shows that this would improve performance drastically, especially when additional information like GeoIP and provider data (and perhaps other calculated data) is cached at this point, since it doesn't need to be recalculated within a few minutes ;) as it is relatively static.
re: comment:48 Right now, gethostbyaddr() is called for a new visit. I'm opposed to moving it to piwik.php because it would mean calling gethostbyaddr() on every request.
I've created ticket #2152 to investigate performance enhancements for gethostbyaddr().
re: comment:51 Before we add more caching to the tracker, we need to follow up on #735 (tracker mem leak). Caching GeoIP and provider information may have limited benefit because these plugins are only called for a new visit (session).
Replying to matt:
Matt, what do you mean by "in parallel"? What are your plans and approach? If Piwik is logging visits from multiple websites, this will not likely be a bottleneck, as each webserver request for piwik.php (ultimately calling visit->handle() and then invoking plugins) is handled by a different process or thread (depending on webserver and configuration), so the DNS lookups are inherently parallelized. However, in the case of KSVisitImport (or presumably any other importer) the hits are read from an Apache log file and all processed sequentially in a single PHP request. In this model we must wait for one DNS lookup to complete before the next record can be imported. This is the bottleneck I'm bumping into. The optimization I have in mind is along the lines of:
jdhildeb, the idea of the bulk import is to keep using piwik.php as it is, but instead of logging to the DB, it will write a "pretty log" to a file, a bit like an Apache log but enriched with all the data piwik.php receives.
Alternatively, we could disable the DNS lookup since it is not so useful and is costly.
Hi, I'm new to Piwik, I just installed it today. We have a big concern about logging the requests: we can't rely on MySQL, not because of the daily amount of requests, but because of the huge amount we get during odd peaks, like TV advertising.

All we need are two scripts: one to print/export requests very fast, and another to load those requests. That way we can choose the method we will use to import data: syslogd, queues, etc. It is important to have the timestamp and the requester IP. We think we will need to set the archiving to always run with a 10-minute delay, to make sure the queue has no past queries that will not be processed. Is that right/possible?

We saw some discussion of MySQL in the forums, and we do like MySQL and its performance, but if you have thousands of connections at the same time, you have to multiply that by the amount of memory configured per connection in MySQL and your max_connections: you can easily run out of slots or run out of memory.

Of course we need more knowledge of Piwik; maybe with the API it would be easy. We would appreciate any good ideas for those two scripts, and we will keep you guys informed if we make any progress. So far, our best guess is to hack piwik.php to send a 200 header followed by an ob_end_flush and leave the rest running in the background, and somehow log the MySQL query or the parameters sent to the PHP script in a queue; later, run the MySQL query or reproduce the PHP request against the original piwik.php using some kind of trick in PHP FastCGI to pass the original parameters.

We think we will also have to install lots of Piwik virtual hosts to be able to spread each website across different databases and tables. Maybe we can use MySQL partitioning for that, let's try, but it would be great if each website had its own tables. Our main goal is to change no, or minimal, lines of Piwik code. Thanks a lot for the initiative :)
lorieri, please contact me at matt att piwik.org and we can discuss these things. We are hoping to work on Scalable Piwik in the next few months and will create a brainstorm group. Email me with your traffic details and use case. Cheers
Hi guys, I'm not sure if I can give the exact number, but you can see the pattern in slide #19 of that presentation: http://www.slideshare.net/rgaiser/r7-no-aws-qcon-sp-2011
I also experimented with importing data, perhaps that helps you:
Hi, thanks |
Replying to vipsoft: I think Piwik should resolve names only after the archive run, and aggregate similar IPs.
From email
See attached file.
The API to import bulk visits and pages in Piwik is now implemented! See: #3134. It's now possible to import e.g. 100 visits or 1000 pageviews in one HTTP POST request.
The feature is implemented. We have integration tests as well (ImportLogsTest). See the Tracking API reference at http://piwik.org/docs/tracking-api/reference/, in particular the section "Advanced: Bulk Tracking Requests".
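As a rough illustration of that bulk format, here is a hedged Python sketch: the documented payload is a JSON body carrying one tracking query string per hit, optionally with a token_auth (the URL and token below are hypothetical placeholders; consult the Tracking API reference for the authoritative format).

```python
# Sketch: build the JSON body for a bulk tracking POST to piwik.php,
# following the "Advanced: Bulk Tracking Requests" docs. Values below
# (URL, site ID, token) are illustrative only.
import json
import urllib.request

def build_bulk_payload(query_strings, token_auth=None):
    """One JSON body carrying many piwik.php tracking requests at once."""
    body = {"requests": query_strings}   # one query string per hit
    if token_auth:
        body["token_auth"] = token_auth  # needed e.g. to override IP/date
    return json.dumps(body).encode("utf-8")

def send_bulk(piwik_php_url, payload):
    """POST the payload to the tracker; returns the HTTP response."""
    req = urllib.request.Request(
        piwik_php_url, data=payload,
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

payload = build_bulk_payload(
    ["?idsite=1&url=http%3A%2F%2Fexample.org%2F&rec=1",
     "?idsite=1&url=http%3A%2F%2Fexample.org%2Fabout&rec=1"])
# send_bulk("http://example.org/piwik/piwik.php", payload)  # not run here
```

Batching hits this way is what lets an importer replay e.g. 1000 pageviews in one request instead of 1000.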
We're working on Redis for storing tracker requests, see #6075.
Currently data can be pushed in the database using the piwik.php script, called from the piwik.js tag.
For example an HTTP request that stores a page view in Piwik looks like:
```
http://piwik.org/demo/piwik.php?url=http%3A%2F%2Fpiwik.org%2F&action_name=&idsite=1&res=1440x900&h=16&m=22&s=20&fla=1&dir=0&qt=0&realp=0&pdf=1&wma=1&java=1&cookie=1&title=Piwik%20-%20Web%20analytics%20-%20Open%20source&urlref=
```
We want to improve the way Piwik logs data at tracking time.
- Piwik will now create ‘access logs’, similar to Apache access logs, containing all the request details (URL, the ‘piwik’ cookie which depends on #409, user agent, referrer URL, IP, language, POST data, other $_SERVER values, etc.).
- Every 10s (or 30s or 1min) the ‘Tracking Bulkloader’ will be triggered by the Maintenance task (see #1184).
– it will connect to the DB once, then read all log lines and process visits in memory,
– creating flat files for the DB updates,
– eventually using memcache for visits/pages/options/cookies data store,
– then bulk inserts/updates visits/pages/conversions and cookies
– The process would log reports: `2010-09-08 04:03:02 – Loaded 1405 visits, 45000 page views, 345 goals – Duration 32s – Logs since 2010-09-08 04:02:52`. How do we handle timeout errors? Memory errors?
- Persists the status of ‘bulk logs loaded’ in the Piwik option table
- disconnect DB
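The "bulk inserts/updates" step above hinges on batching many rows into one statement. A minimal, hedged sketch of building such an extended INSERT (table and column names are illustrative, not the real Piwik schema):

```python
# Sketch: collapse many buffered visit rows into one parameterized
# extended INSERT, the kind of statement the 'Tracking Bulkloader'
# would issue once per batch instead of once per page view.
def build_bulk_insert(table, columns, rows):
    """Return (sql, params) for a multi-row INSERT with %s placeholders."""
    placeholders = "(" + ", ".join(["%s"] * len(columns)) + ")"
    sql = "INSERT INTO {} ({}) VALUES {}".format(
        table,
        ", ".join(columns),
        ", ".join([placeholders] * len(rows)))
    params = [value for row in rows for value in row]  # flatten row-major
    return sql, params

sql, params = build_bulk_insert(
    "log_visit", ["idsite", "ip"],
    [[1, "1.2.3.4"], [1, "5.6.7.8"]])
```

One round trip per batch (plus a single connect/disconnect, as in the steps above) is what makes the bulk path so much cheaper than per-hit inserts.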
The speed of tracking data load would be greatly improved and would compensate for the performance loss resulting from the cookie loss in #409.
The log replay script would work in several modes:
- replay a single visit using a HTTP request containing the input data (the content of one log line from the log file).
– The script API to record stats would become public and be documented. This will allow any user to record stats in Piwik from any source (mobile phones, php apps, desktop apps, etc.).
- replay a set of visits given a log filename: the script will read the log file and load it into the DB
This way, we can use the log replay script in the following use cases:
- Tracker performance improvement
described above: Every 10s (or 30s or 1min) the ‘Tracking Bulkloader’ loads all logs at once, which is much faster than connecting to the DB/updates/inserts at every page view.
– There will also need to be a new ‘Super user’ setting
– Enable tracking logs bulk loader
– Record visits in DB every 10s
– Enable Memcached wrapper
- Performance testing
replaying existing real logs makes it easier to test performance changes across releases and when doing code updates. See #2000
Notes
- Maybe the tracking code has to be modified to remove the logic that ‘selects’ or tries to ‘match a visitor md5config to a previous visit’, as I believe it can be done using the cookie store (#409).
- Can we use existing code to parse Apache logs in an efficient way, one that would work with several log formats automatically (including Windows IIS formats)?
- see also the [DB schema](http://dev.piwik.org/trac/wiki/DatabaseSchema) even if it’s outdated, most fields and tables are unchanged.
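On the log-parsing question in the notes above: the Apache "combined" format at least is easy to handle with a regex. A hedged Python sketch (one pattern per format would be needed; IIS and custom LogFormat strings are not covered here):

```python
# Sketch: parse one line of the Apache "combined" log format.
# Other formats (IIS W3C, custom LogFormat) would need their own patterns.
import re

COMBINED = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

def parse_line(line):
    """Return a dict of the fields a replay script needs, or None."""
    m = COMBINED.match(line)
    return m.groupdict() if m else None

hit = parse_line('1.2.3.4 - - [08/Sep/2010:04:02:52 +0000] '
                 '"GET /index.html HTTP/1.1" 200 1234 '
                 '"http://example.org/" "Mozilla/5.0"')
```

Each parsed dict carries exactly the fields the replay script needs to rebuild a piwik.php request: IP, timestamp, URL, referrer, and user agent.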