High CPU consumption in the app #1457
Reverting 2dae035 makes it even worse.
I've tried using https://github.com/klauspost/compress/gzip but it doesn't seem to make a big difference. @davkal, have the requests to …
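(For context: klauspost's package is meant as an API-compatible replacement for the standard library's compress/gzip, so trying it is essentially an import swap. A minimal sketch of that swap, with an illustrative payload, not Scope's actual encoding path:)

```go
package main

import (
	"bytes"
	"fmt"

	gzip "github.com/klauspost/compress/gzip" // was: "compress/gzip"
)

func main() {
	var buf bytes.Buffer
	// Same API as the stdlib: NewWriter, Write, Close.
	w := gzip.NewWriter(&buf)
	if _, err := w.Write([]byte("report payload")); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
	fmt.Println("compressed bytes:", buf.Len())
}
```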
We are seeing over 100% CPU usage in the app, with 5 probes connected. Performance seems to be OK, though. This is with :latest from yesterday.
You mean that the app is usable?
Yes they have. Whenever you click on the search field, all topologies are fetched. But only then, no recurring timer.
@davkal I meant without search (this is master)
Not from the frontend. Just checked.
Yes, the app is usable. We are using m3.large instances.
@tomwilkie Could this be a bug in #1418?
@janwillies Can you check the number of env variables in your containers?
170 per container, times ~100.
@janwillies Cool. Then truncating them might help.
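(A hypothetical sketch of that mitigation: cap each environment variable's value before it is embedded in a report. `maxEnvLen` and `truncateEnv` are illustrative names, not Scope's actual code:)

```go
package main

import "fmt"

const maxEnvLen = 64 // assumed cap; the real limit would be a tuning choice

// truncateEnv caps each "KEY=value" entry so huge env vars don't bloat reports.
func truncateEnv(env []string) []string {
	out := make([]string, len(env))
	for i, e := range env {
		if len(e) > maxEnvLen {
			e = e[:maxEnvLen] + "..."
		}
		out[i] = e
	}
	return out
}

func main() {
	env := []string{"PATH=/a/very/long/value/that/keeps/going/and/going/and/going/forever"}
	fmt.Println(truncateEnv(env))
}
```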
@janwillies reports that the app is still consuming 129% CPU with 5 nodes.
I am also seeing similar issues.
Attaching pprof data from our cluster on AWS, which is seeing the high CPU issue:
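(For anyone reproducing this: profiles like the one attached are typically captured from the app's Go pprof endpoint, e.g. `go tool pprof http://localhost:4040/debug/pprof/profile`. A minimal sketch of exposing that endpoint, assuming the app serves on :4040 as the profile filename later in this thread suggests:)

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serving the default mux makes CPU/heap profiles available for capture.
	log.Fatal(http.ListenAndServe("localhost:4040", nil))
}
```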
This is the profile from @idcrosby. Nothing too different from the previous profiles in this issue, except for the futex contention on notesleep.
@janwillies @willejs @idcrosby Scope 0.17 is out and should improve the app's CPU consumption by at least 50%. Please let us know if the performance is acceptable now.
@2opremio I tried various compression levels with kinvolk-archives@37cc006 and I got the following:
So if Scope were using … I have not checked the decompression speed in the app.
Good job. I thought we were using the default compression. Also, it may be worth checking other compression libraries.
The list of compression packages in the standard library is at https://golang.org/pkg/compress/. bzip2 is not suitable because it supports only decompression, not compression (see golang/go#4828). File size: 1662592 bytes (msgpack, uncompressed)
I saved a msgpack file (by using kinvolk-archives@37cc006), then used gunzip on the file. Unfortunately, a lower compression level seems to imply slower decompression. This is important since this bug is about the app. In this table, flate with level=7 seems overall better for my msgpack file. But this is variable: when running the test another time, gzip with level=6 was the best overall.
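(A rough sketch of the kind of comparison described above, assuming a saved msgpack report on disk. It times gzip only — the flate variant is analogous — and the file name and level set are illustrative:)

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"io/ioutil"
	"time"
)

func main() {
	// Illustrative input: a saved msgpack report like the one described above.
	data, err := ioutil.ReadFile("report.msgpack")
	if err != nil {
		panic(err)
	}
	for _, level := range []int{1, 6, 7, 9} {
		var buf bytes.Buffer

		start := time.Now()
		w, err := gzip.NewWriterLevel(&buf, level)
		if err != nil {
			panic(err)
		}
		w.Write(data)
		w.Close()
		compress := time.Since(start)

		start = time.Now()
		r, err := gzip.NewReader(bytes.NewReader(buf.Bytes()))
		if err != nil {
			panic(err)
		}
		io.Copy(ioutil.Discard, r) // decompress and discard, timing only
		r.Close()
		decompress := time.Since(start)

		fmt.Printf("level=%d size=%d compress=%v decompress=%v\n",
			level, buf.Len(), compress, decompress)
	}
}
```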
We want the middle ground between a small compression size, a fast compression time and a fast decompression time. Tests suggest that the default compression level is better than the maximum compression level: although the reports are 4% bigger and decompress slower, they compress 33% faster. See discussion on weaveworks#1457 (comment)
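(A minimal sketch of the change that commit message describes, assuming a gzip writer somewhere in the report-encoding path; `compressReport` is an illustrative name, not Scope's actual function:)

```go
package main

import (
	"bytes"
	"compress/gzip"
)

func compressReport(report []byte) ([]byte, error) {
	var buf bytes.Buffer
	// was: gzip.NewWriterLevel(&buf, gzip.BestCompression)
	w, err := gzip.NewWriterLevel(&buf, gzip.DefaultCompression)
	if err != nil {
		return nil, err
	}
	if _, err := w.Write(report); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```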
On the service, with 4 nodes (probes), the app is using over 100% CPU.
Profile: pprof.localhost:4040.samples.cpu.001.pb.gz
The profile shows 80% CPU (maybe I was seeing garbage collection spikes?)
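(One way to test the garbage-collection theory is to run the app with GODEBUG=gctrace=1, or to sample GC statistics from inside the process. A sketch of the latter, not Scope code:)

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Periodically print GC counters; spikes in pause totals alongside
	// CPU spikes would support the GC theory.
	for {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		fmt.Printf("numGC=%d totalPause=%v heap=%dMB\n",
			m.NumGC, time.Duration(m.PauseTotalNs), m.HeapAlloc>>20)
		time.Sleep(5 * time.Second)
	}
}
```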
Perhaps the cache removed in #1447 would help?