Higher CPU continues in 0.118 #40292
Could be related to this issue: #39890
There isn't much we can do to help without a py-spy profile.
Maybe you can tell me how to install it...
I've tried…
Which debug do you need?
Try turning on full debug mode (default: debug) and see if there is anything interesting. Without a py-spy profile…
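For reference, "full debug mode" is set through the logger integration; a minimal sketch of the relevant configuration.yaml entry:

```yaml
# configuration.yaml — full debug logging for everything.
# Very verbose; only leave it on while reproducing the problem.
logger:
  default: debug
```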
Is there a way to send you this debug log?
If you want to send it privately, [email protected] would be fine.
I've sent you a WeTransfer link because…
Try disabling this template, as it's watching all states and you have a massive amount of state changed events happening. You could replace it with an automation that runs only every few seconds, which should help.
Also change…
This will avoid creating a list every time a sensor updates.
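As an illustration of the suggestion above, a minimal sketch of a time-pattern automation that recomputes a combined value every 10 seconds instead of templating against every state change; the entity names are placeholders, not taken from this thread:

```yaml
# Hypothetical replacement for an always-updating template sensor:
# recompute the value on a fixed interval instead of on every state_changed event.
automation:
  - alias: Update power total every 10 seconds
    trigger:
      platform: time_pattern
      seconds: "/10"
    action:
      service: input_number.set_value
      data_template:
        entity_id: input_number.power_total
        value: "{{ (states('sensor.power_1') | float) + (states('sensor.power_2') | float) }}"
```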
I had tried that yesterday and there was no difference, so I turned it back on... But I will make an automation of that...
I can tell you what I see is inefficient, but I'm pretty much shooting in the dark without a py-spy profile.
I'll try to change all of that.
It looks like you are getting about 300 state changed events per second 😮
I'd focus on reducing the state changed events to only what you need.
What do you mean? Should I delete some sensors? I didn't see these problems in 0.114.x...
I have disabled my… and this appears to be less frequent.
I have found this… in the logs from the MariaDB add-on.
If you disable all your template entities, does the issue go away?
It is likely the MySQL load is caused by the processing of all those state changed events. I'm not sure what changed between 0.114 and 0.115 that would cause the increase. It may be caused by a change in the integration that is generating these events.
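Not a suggestion made in this thread, but a common way to reduce how much of that event stream MariaDB has to process is to exclude the noisiest entities from the recorder; a sketch with placeholder entity names:

```yaml
# Hypothetical recorder filter: excluded entities still update in the UI,
# but their state changes are no longer written to the database.
recorder:
  exclude:
    entities:
      - sensor.power_1
      - sensor.power_2
```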
I get the same error with… (I did have to add the…)
Do you have something that is constantly gathering logbook data? There are at least 5 separate logbook API requests running in your py-spy.
The next thing to watch for is the…
It corresponds to my subjective perception: utility meters + integration sensors eat a huge amount of resources.
And I have a lot of them.
Sometimes I see warnings from the integration sensor in my log.
Would you please open a new issue for that one?
Will do tomorrow...
Now I see this in my logs… Do I have to open a new issue?
Yes please. It likely just means the connection wasn't closed cleanly. I doubt it actually affects anything.
Thanks @bdraco
@gieljnssns Can you give the above a shot? ^
@bdraco …
Thanks. The latency in the event loop is better because it doesn't have to suspend to execute the subprocess. Unfortunately, the overhead of parsing the netlink messages is using more CPU time, since the Python version is slower than the C version and there must be a large number of neighbors on your host. The overall result was likely better for me since I don't have as many neighbors on my host.
Potentially something like this might be faster… I'll do some testing today if I don't run out of time. cc @mudape
For testing…
It looks like this is what we need to limit the query: svinota/pyroute2#723
Well, it looks like that is only supported on newer kernels, so there is no way to get a single record on older kernels, which means the only option is to dump the whole table and filter. That is unfortunate, as I don't think we can make this faster without a kernel upgrade. Even on 4.4.59, get isn't implemented.
I did find some more polling that shouldn't be happening in the latest profiles. I've cleaned that up in 0.119dev.
HACS beta 1.8.0 has been tagged (https://github.com/hacs/integration/releases) with the performance cleanups I mentioned above.
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. |
The problem
I don't know if it is a core or frontend problem or something else, but I'm seeing higher CPU use in general, especially when clicking on a card in the Lovelace view.
My normal CPU use was 2 to 3%; now it is 6 to 8%.
When I want to show the history, the fans of my NUC immediately spin up and I always see a spinning circle.
And the CPU use peaks at about 18%.
These are some screenshots from Glances when clicking on history in Lovelace:
I had debug on for…, but I'm not seeing anything special.
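The specific component that had debug enabled is lost above; for illustration, scoped debug logging looks roughly like this, with the integration names as placeholders:

```yaml
# configuration.yaml — debug only the integrations of interest instead of everything.
logger:
  default: warning
  logs:
    homeassistant.components.recorder: debug
    homeassistant.components.history: debug
```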
I've tried the py-spy approach, but I cannot get it installed.
Environment
Problem-relevant configuration.yaml
I don't know
Traceback/Error logs
Additional information
If you need some more info, ask...