
CPU metrics are incorrect on Windows machines with more than 64 cores #40926

Closed
faec opened this issue Sep 20, 2024 · 18 comments · Fixed by elastic/elastic-agent-system-metrics#192 or #41965

@faec
Contributor

faec commented Sep 20, 2024

On Windows, Metricbeat measures CPU use via the Windows API call GetSystemTimes. Each metrics interval, it fetches the CPU numbers, and compares them to the previous measurement to determine CPU load during that interval. On most systems this includes CPU time "including all threads in all processes, on all processors". However, on systems with more than 64 cores, it returns only the data for the current processor group of up to 64 cores.
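
For illustration, here is a minimal Go sketch of that delta computation. This is not Metricbeat's actual code, only the documented kernel32 call wired up by hand; note that in the GetSystemTimes contract, kernel time includes idle time.

package main

import (
	"fmt"
	"syscall"
	"time"
	"unsafe"
)

var (
	kernel32           = syscall.NewLazyDLL("kernel32.dll")
	procGetSystemTimes = kernel32.NewProc("GetSystemTimes")
)

// getSystemTimes wraps the Win32 call; each FILETIME holds a running
// total in 100ns ticks.
func getSystemTimes() (idle, kernel, user syscall.Filetime, err error) {
	r, _, e := procGetSystemTimes.Call(
		uintptr(unsafe.Pointer(&idle)),
		uintptr(unsafe.Pointer(&kernel)),
		uintptr(unsafe.Pointer(&user)),
	)
	if r == 0 {
		err = e
	}
	return
}

func main() {
	idle0, kern0, user0, _ := getSystemTimes()
	time.Sleep(time.Second) // one metrics interval
	idle1, kern1, user1, _ := getSystemTimes()

	// Deltas between two samples. On a >64-core machine each sample only
	// covers the caller's current processor group, so a group switch
	// between samples can make these negative.
	idleD := idle1.Nanoseconds() - idle0.Nanoseconds()
	kernD := kern1.Nanoseconds() - kern0.Nanoseconds()
	userD := user1.Nanoseconds() - user0.Nanoseconds()

	busy := (kernD - idleD) + userD // kernel time includes idle time
	fmt.Printf("CPU busy: %.1f%%\n", 100*float64(busy)/float64(kernD+userD))
}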

This has two consequences on high-core machines:

  • Reported CPU metrics include only CPU cores in the same processor group as the Metricbeat process.
  • When the Windows scheduler moves Metricbeat from one processor group to another (which is unpredictable but happens regularly), GetSystemTimes returns data from a different set of cores. If the new processor group has a lower CPU total than the previous one, Metricbeat will report negative numbers for some CPU metrics.

The most visible symptom is occasional negative values in CPU-related metrics, often appearing in pairs of adjacent data points.

This seems to apply to ~all Metricbeat versions, and all versions of Windows that support more than 64 CPU cores.

@faec added the bug and Team:Elastic-Agent-Data-Plane labels on Sep 20, 2024
@elasticmachine
Collaborator

Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)

@nimarezainia
Contributor

fyi @flexitrev

@cmacknz
Member

cmacknz commented Oct 21, 2024

It is not immediately clear how to solve this, and the solution does not seem like it will be simple, especially without being able to use the Task Manager implementation as a reference.

I suspect we will need something like a thread or process per processor group with their processor group affinities set appropriately so that they get scheduled into different processor groups.
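
A rough Go sketch of that idea, for illustration only (the GROUP_AFFINITY layout and kernel32 calls follow the Win32 docs, but this is an assumption-laden sketch, not a proposed patch):

package main

import (
	"fmt"
	"runtime"
	"syscall"
	"unsafe"
)

// groupAffinity mirrors the Win32 GROUP_AFFINITY struct.
type groupAffinity struct {
	Mask     uintptr // KAFFINITY: bitmask of allowed processors in the group
	Group    uint16  // processor group number
	Reserved [3]uint16
}

var (
	kernel32                         = syscall.NewLazyDLL("kernel32.dll")
	procGetActiveProcessorGroupCount = kernel32.NewProc("GetActiveProcessorGroupCount")
	procGetActiveProcessorCount      = kernel32.NewProc("GetActiveProcessorCount")
	procGetCurrentThread             = kernel32.NewProc("GetCurrentThread")
	procSetThreadGroupAffinity       = kernel32.NewProc("SetThreadGroupAffinity")
)

// pinToGroup moves the current OS thread into the given processor group.
func pinToGroup(group uint16) error {
	thread, _, _ := procGetCurrentThread.Call() // pseudo-handle, no cleanup needed
	nProcs, _, _ := procGetActiveProcessorCount.Call(uintptr(group))
	ga := groupAffinity{
		Mask:  (1 << nProcs) - 1, // every processor in the group (all ones for a full group of 64)
		Group: group,
	}
	r, _, err := procSetThreadGroupAffinity.Call(thread, uintptr(unsafe.Pointer(&ga)), 0)
	if r == 0 {
		return err
	}
	return nil
}

func main() {
	runtime.LockOSThread() // keep this goroutine on one OS thread while pinned
	defer runtime.UnlockOSThread()

	groups, _, _ := procGetActiveProcessorGroupCount.Call()
	for g := uint16(0); g < uint16(groups); g++ {
		if err := pinToGroup(g); err != nil {
			fmt.Println("pin failed:", err)
			continue
		}
		// A GetSystemTimes call made here reports this group's totals.
		fmt.Printf("now running in processor group %d\n", g)
	}
}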

@gabriellandau

> However, on systems with more than 64 cores, it returns only the data for the current processor group of up to 64 cores.

Is this documented anywhere? I don't see it in the GetSystemTimes docs.

My reading of this MS doc suggests our threads can switch between processor groups. Here's Python pseudocode to do this. I don't have a >64T system to test with.

# Module-level state; persists between calls
last_system_times = {}


def get_cpu_time_deltas():
    idle_time_delta, kernel_time_delta, user_time_delta = 0, 0, 0
    
    # Backup original affinity
    GetThreadGroupAffinity(GetCurrentThread(), &saved_group_affinity)

    # Enumerate each NUMA node
    for node in range(GetNumaHighestNodeNumber() + 1):  # highest node number is inclusive
        # Enumerate all the processor groups for this NUMA node
        # (nodes can share a group; deduplicate groups to avoid double-counting)
        for group_affinity in GetNumaNodeProcessorMask2(node):
        
            # Switch to this processor group
            SetThreadGroupAffinity(GetCurrentThread(), &group_affinity)
            
            # Retrieve metrics for this processor group
            GetSystemTimes(&idle, &kernel, &user)
            
            # Retrieve old values
            if group_affinity.group in last_system_times:
                last_idle, last_kernel, last_user = last_system_times[group_affinity.group]
                
                # Compute deltas
                idle_time_delta += (idle - last_idle)
                kernel_time_delta += (kernel - last_kernel)
                user_time_delta += (user - last_user)
            
            # Store latest values
            last_system_times[group_affinity.group] = (idle, kernel, user)
            
    # Restore original affinity
    SetThreadGroupAffinity(GetCurrentThread(), &saved_group_affinity)
    
    return idle_time_delta, kernel_time_delta, user_time_delta

@VihasMakwana
Contributor

@cmacknz @flexitrev @pierrehilbert

I ran two instances of the Metricbeat binary on my personal Windows machine (8 cores): one with performance counters enabled and the other with performance counters disabled (the latest build from the main branch).
Here are the results:

The left side of each dashboard is the build with performance counters; the right side is the latest build from main.

  • Average CPU usage of my system (screenshot)
  • Idle CPU % (per core) (screenshot)
  • System CPU % (per core) (screenshot)
  • User CPU % (per core) (screenshot)

There is a clear correlation between the two sets of data, with the values being identical across both versions.

@pierrehilbert
Collaborator

I don't think the values are identical, but rather similar.
Did you check the values directly in ES to compare them?

@VihasMakwana
Contributor

@pierrehilbert I think I know why. When I started the two Metricbeat binaries, there was a time difference between them, and the CPU data collection interval was set to 5 seconds. As a result, the two Metricbeat instances likely didn't capture the CPU times at the exact same moment.

Let me retest for 30 minutes with the interval reduced to 1s and compare the results.

@VihasMakwana
Contributor

VihasMakwana commented Nov 26, 2024

@pierrehilbert

> Did you check the values directly in ES to compare them?

Yes.

@VihasMakwana
Contributor

@pierrehilbert updated results. I ran the two binaries at (almost) exactly the same time:
(screenshots of the updated dashboards)

The results are better than before. What do you think?

@flexitrev

I don't see any performance impact that would worry me. If performance counters give us better, more reliable data, there is no reason we shouldn't use them.

@cmacknz
Member

cmacknz commented Nov 26, 2024

Agreed, this looks similar at a high level.

@cmacknz
Member

cmacknz commented Nov 26, 2024

Since this change looks like it is happening in elastic-agent-system-metrics, which has no mechanism to tie things to a stack release, I would like some way for us to switch between the two methods, ideally exposed in the configuration of the system/metrics input.

This will let us update to the latest version of elastic-agent-system-metrics everywhere (because of a CVE-related dependency update, for example) without worrying about also changing underlying behavior, and it will give us a way to switch back to the old method immediately if there is some unexpected problem to work around.

Let's avoid the problems we had with turning off the introduction of the degraded state in system metrics, and build a proper feature flag for this from the beginning.

@VihasMakwana
Contributor

> Since this change looks like it is happening in elastic-agent-system-metrics, which has no mechanism to tie things to a stack release, I would like some way for us to switch between the two methods, ideally exposed in the configuration of the system/metrics input.
>
> This will let us update to the latest version of elastic-agent-system-metrics everywhere (because of a CVE-related dependency update, for example) without worrying about also changing underlying behavior, and it will give us a way to switch back to the old method immediately if there is some unexpected problem to work around.
>
> Let's avoid the problems we had with turning off the introduction of the degraded state in system metrics, and build a proper feature flag for this from the beginning.

I initially implemented something similar:

  • Use performance counters by default.
  • If any errors are encountered, fall back to the current implementation.

But your point makes more sense. A flag would be much preferable.
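
For illustration, the switch could be shaped something like this. The option name use_performance_counters and every type below are hypothetical, not the shipped flag:

package cpu

// cpuSampler abstracts the two collection strategies behind one interface.
type cpuSampler interface {
	Sample() (idle, kernel, user float64, err error)
}

// pdhSampler would hold the PDH query handles; stubbed here.
type pdhSampler struct{}

func (pdhSampler) Sample() (float64, float64, float64, error) {
	return 0, 0, 0, nil // performance-counter path
}

// systemTimesSampler is the legacy GetSystemTimes path; stubbed here.
type systemTimesSampler struct{}

func (systemTimesSampler) Sample() (float64, float64, float64, error) {
	return 0, 0, 0, nil // GetSystemTimes path
}

// MonitorConfig sketches the opt-out flag.
type MonitorConfig struct {
	UsePerformanceCounters bool `config:"use_performance_counters"`
}

// newCPUSampler picks the strategy once, at metricset construction time,
// so flipping the flag restores the old behavior without a code change.
func newCPUSampler(cfg MonitorConfig) cpuSampler {
	if cfg.UsePerformanceCounters {
		return pdhSampler{}
	}
	return systemTimesSampler{}
}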

@flexitrev

I realize I jumped the gun by commenting on performance here, since the binaries were being run at the same time and measuring the same thing (thanks @strawgate). What I'd like to make sure of is that we are not introducing a performance impact. Can we benchmark the two approaches?

@VihasMakwana
Contributor

> What I'd like to make sure of is that we are not introducing a performance impact. Can we benchmark the two approaches?

Yes. I'll keep you posted on it.

@VihasMakwana
Contributor

VihasMakwana commented Nov 27, 2024

Here are the results of running the Metricbeat binaries on a 180-core machine.
(screenshot)

The left side of the dashboard is the build with performance counters; the right side is the latest build from main.

The right side only reports CPUs from the current processor group, while the left side uses performance counters and reports metrics for every core installed on the system.


Total CPU usage reported by both Metricbeats:

The one on the left uses performance counters and agrees with Task Manager.
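
For reference, here is a minimal Go sketch of the performance-counter approach using the PDH API directly. The counter path is the documented \Processor Information(_Total)\% Processor Time, whose instances span all processor groups; the wiring below is an illustrative assumption, not the actual implementation:

package main

import (
	"fmt"
	"syscall"
	"time"
	"unsafe"
)

var (
	pdh                             = syscall.NewLazyDLL("pdh.dll")
	procPdhOpenQuery                = pdh.NewProc("PdhOpenQueryW")
	procPdhAddEnglishCounter        = pdh.NewProc("PdhAddEnglishCounterW")
	procPdhCollectQueryData        = pdh.NewProc("PdhCollectQueryData")
	procPdhGetFormattedCounterValue = pdh.NewProc("PdhGetFormattedCounterValue")
)

const pdhFmtDouble = 0x00000200 // PDH_FMT_DOUBLE

// pdhFmtCounterValue mirrors PDH_FMT_COUNTERVALUE with the double variant.
type pdhFmtCounterValue struct {
	CStatus     uint32
	_           uint32 // padding before the 8-byte union
	DoubleValue float64
}

func main() {
	var query, counter uintptr
	// Error checks omitted for brevity; each call returns ERROR_SUCCESS (0) on success.
	procPdhOpenQuery.Call(0, 0, uintptr(unsafe.Pointer(&query)))

	// The _Total instance aggregates across every logical processor,
	// regardless of processor group.
	path, _ := syscall.UTF16PtrFromString(`\Processor Information(_Total)\% Processor Time`)
	procPdhAddEnglishCounter.Call(query, uintptr(unsafe.Pointer(path)), 0, uintptr(unsafe.Pointer(&counter)))

	// Rate counters need two collections before they produce a value.
	procPdhCollectQueryData.Call(query)
	time.Sleep(time.Second)
	procPdhCollectQueryData.Call(query)

	var value pdhFmtCounterValue
	procPdhGetFormattedCounterValue.Call(counter, pdhFmtDouble, 0, uintptr(unsafe.Pointer(&value)))
	fmt.Printf("total CPU: %.1f%%\n", value.DoubleValue)
}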

@VihasMakwana
Contributor

We're not introducing any performance impact.
The memory usage of Metricbeat running under Elastic Agent stays constant; nothing really stands out between the two versions.

The CPU usage of Metricbeat stays the same before and after my implementation. Here's the graph:
(screenshot)

The "dip" in the graph is when I turned off the agent, so no data was reported during that time; you can ignore it.
The CPU usage to the left of the dip is with the 8.17.0-SNAPSHOT elastic-agent, and to the right is my implementation using performance counters.

@VihasMakwana
Contributor

Reopening until verified once more.
