
EventCounter vs PerformanceCounter documentation & guidance #346

Closed
MedAnd opened this issue Jun 18, 2019 · 11 comments
Labels
documentation Documentation related issue question Further information is requested
Comments

@MedAnd

MedAnd commented Jun 18, 2019

  1. On .NET Full Framework, a PerformanceCounter value seems to survive as long as at least one producer or one listener exists. Should all producers & listeners of the PerformanceCounter crash or restart (including node restarts), the value is lost. Does EventCounter running on the .NET Core runtime behave in the same manner?

  2. Official guidance states that PerformanceCounters should not be created & immediately used, due to the latency of enabling the counters. Is the same true for EventCounter running on the .NET Core runtime?

  3. What is the envisaged pattern to re-load an EventCounter on server restart (in light of the previous question about the delay between creation and usage)?

  4. Lastly, any guidance on limits & resource utilization for how many EventCounter counters can be monitored per host on Windows / Linux? For example:
    4.1 Can we collect/report on 1000 individual EventCounter counters every 10 seconds? What is the data volume (e.g., 1 GB / day) compared to PerformanceCounter?
    4.2 What is the impact on host / container CPU and memory whilst monitoring 1000 individual EventCounter counters every 10 seconds?

@jorive jorive added documentation Documentation related issue question Further information is requested labels Jun 18, 2019
@noahfalk
Member

On .NET Full Framework, a PerformanceCounter value seems to survive as long as at least one producer or one listener exists. Should all producers & listeners of the PerformanceCounter crash or restart (including node restarts), the value is lost. Does EventCounter running on the .NET Core runtime behave in the same manner?

The short answer: we want to make it easy to use EventCounter connected to a persistent store, and I expect most scenarios will do so, but EventCounter, isolated from all other parts of the end-to-end workflow, has very minimal intrinsic persistence.
The much longer answer: to understand what EventCounter can do, it's helpful to understand what a baseline world looks like without EventCounter, and then see what problems we can solve by adding it. In our baseline world we've got a tracing system like ETW, LTTng, or EventPipe, which lets software component X emit log messages while some other software component Y, usually in another process, listens to those messages. We can operate on the messages in real time or save them somewhere and read them back later, depending on how we configure the tracing system. Now let's assume we want to model a performance counter - component X has some metric that changes over time and we need to communicate those (time, value) datapoints to component Y. Simple enough: component X starts logging messages to the tracing system that encode the metric:
Metric A, time=1, value=19
Metric A, time=2, value=14
Metric A, time=3, value=21
...
Component Y parses the log messages and, voilà, a poor man's perf counter. However, you can probably see places where this solution wouldn't work so well:
a) The components had to both understand what the format of the log messages was. What if other components wanted to add their own counters or read the counters in the same logging session? All components would need to agree on the transmission format. In addition to encoding time and value there is also some basic metadata like the name of a counter and the measurement units.
b) What if component X is updating the metric a million times per second but component Y only needs the value once per minute? It would be very expensive to send all those messages and not use them. We need a way to throttle X so it doesn't transmit more messages than necessary and needlessly consume CPU/IO.
c) If we throttle X from 1 million updates per second down to one per minute, then what is the best way to summarize the 60 million values? In some cases arbitrary sampling might be OK, but sometimes we might want a different statistic, like the sum of all the values in the interval or the maximum value.
These are the problems EventCounter is intended to help solve. Within the process emitting the counters you declare an instance of EventCounter, configure some metadata, and then call WriteMetric() every time you want to make an update. As long as there is no listener, EventCounter discards the data immediately. If at some point a listener connects, it will communicate to the EventCounter what the desired update rate is. EventCounter will then batch up the updates in memory, consolidate them by computing summary statistics, and send updates of those statistics at the agreed-upon transmission rate.
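A minimal sketch of that producer-side pattern might look like the following (the event source and counter names here are illustrative, not from this thread; `DisplayName`/`DisplayUnits` are optional metadata properties on the counter types):

```csharp
using System.Diagnostics.Tracing;

// Hypothetical event source publishing one push-style counter.
[EventSource(Name = "Demo-MyService")]
sealed class MyServiceEventSource : EventSource
{
    public static readonly MyServiceEventSource Log = new MyServiceEventSource();

    private readonly EventCounter _requestTime;

    private MyServiceEventSource()
    {
        // Metadata (name, display units) is configured once up front.
        _requestTime = new EventCounter("request-time", this)
        {
            DisplayName = "Request Processing Time",
            DisplayUnits = "ms"
        };
    }

    // Called on every request. The value is discarded unless a listener has
    // subscribed, in which case it is aggregated in memory and summary
    // statistics are emitted at the listener's chosen interval.
    public void RequestCompleted(double elapsedMs) => _requestTime.WriteMetric(elapsedMs);
}
```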

To return to the original question that was asked, how does persistence play a role here? There is a tiny amount - when component X called WriteMetric() to update the counter, that value was not immediately transmitted. Instead it was stored in memory so that it could be aggregated with other updates. The listener decides what the duration of this persistence is; typical values are likely to range from 1 second to 10 minutes, but that isn't required. If the process emitting the counters terminates then all in-memory data is naturally lost, but updates that were already emitted to the logging system may or may not be persisted depending on how the end-to-end flow has been set up. It is relatively easy to configure ETW, LTTng, or EventPipe to log into a file, so that is one way the data might be persisted. More commonly, though, performance counter data is persisted by transmitting the updates to a time-series database. I imagine that most people using counters in other contexts already have some form of persistence and probably a graphical viewer, so I see our role as making it easy to ingest the counter messages from the logging system into the user's persistent store.

Your interest in persistence on the Twitter thread seems a little different from what I expect most users are looking for. Typically I expect users to want persistence so they can do diagnostics / analytics over the historical data. If I understand correctly, you are looking to restore the value back into the memory of a service instance so that it can survive reboots, and arguably the data is functional at this point - the correctness of future counter values depends on the restoration of this state. Trying to do this with EventCounters would likely run into two issues:
a) There is the small, but non-zero, window where updates are being buffered in memory before they are emitted and potentially serialized. During a process reboot this data could easily be lost. Also, the typical configuration for our tracing systems is best-effort delivery, not guaranteed delivery. This ensures that if the logging pipeline stalls for whatever reason it won't block the application. In most cases that is what users want from a diagnostics system, but it isn't a good property if losing an update means your persisted data is now out of sync.
b) Presumably you don't want arbitrary persistence; you want persistence whose locality and lifetime exactly matches the SF Reliable dictionary you are storing metadata about. If at some point you deleted that dictionary, stood up a new instance of the service, or migrated to another node, I assume you want that counter to update at the same time. Anything built into the core of the .NET runtime isn't going to have automatic knowledge about SF or these external events, so you would have to figure out how to keep it synchronized, probably by detecting changes to the reliable dictionary and then mirroring them into the corresponding counter storage. If the counter storage is significantly different from the SF Reliable dictionary, that is likely to be a complicated and error-prone synchronization. On the other hand, if the counter is in the ReliableDictionary you get synchronization automatically, and if it is an adjacent SF Reliable data structure you can probably easily manage the collection of data structures as a consistent unit.

Official guidance states that PerformanceCounters should not be created & immediately used, due to the latency of enabling the counters. Is the same true for EventCounter running on the .NET Core runtime?

On the counter-emitting side, your code can update the counter value immediately after creating it. On the counter-receiving side, the listener needs to specify an update rate when it begins listening. I think we have a hard-coded lower bound of 1 second right now. In theory we could lower that bound in the future, but EventCounter doesn't store any data until a listener indicates interest. If the counter producer hasn't logged any data between when listening started and when the first update is transmitted, then the listener is getting statistics from a sample of size 0. Not an error, but probably not useful.
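On the receiving side, an in-process EventListener expresses that interest by passing the `EventCounterIntervalSec` argument when enabling the source. A sketch, assuming the hypothetical source name from earlier:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics.Tracing;

// Sketch of an in-process listener that requests counter updates once per second.
sealed class CounterListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource source)
    {
        if (source.Name == "Demo-MyService")   // hypothetical source name
        {
            EnableEvents(source, EventLevel.LogAlways, EventKeywords.All,
                new Dictionary<string, string> { ["EventCounterIntervalSec"] = "1" });
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs e)
    {
        // Counter updates arrive as "EventCounters" events whose payload is a
        // dictionary of summary statistics (Name, Mean, Min, Max, Count, ...).
        if (e.EventName == "EventCounters")
        {
            var payload = (IDictionary<string, object>)e.Payload[0];
            Console.WriteLine($"{payload["Name"]}: mean={payload["Mean"]}");
        }
    }
}
```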

What is the envisaged pattern to re-load an EventCounter on server restart (in light of the previous question about the delay between creation and usage)?

There are two tasks to do:

  1. Determine what the initial counter value should be - the app author has full control over this part. I would imagine a typical pattern is that an application has some form of durable state (say a database, file system, REST service, or non-volatile data structure) and the author wants counters that are metadata about that state. The easy route is when the counter metadata can be stored in-band with the state it describes, using the same primitives for storage and retrieval. If that isn't possible, the app author has to pick some other available durable storage and figure out a scheme to keep the two persistent stores synchronized.
  2. Update your EventCounter with that value - this part is a few lines of code, but exactly what the lines are depends on what kind of counter you want. EventCounters is actually the umbrella term we use for four related counter types (so far). I'll guess you might find the PollingCounter option most useful in this case. PollingCounter leaves it up to your app to compute, any way you want, the value that will be emitted to listeners; the counter infrastructure handles the message formatting, managing listeners as they subscribe/unsubscribe, and fetching the value to match each listener's preferred update rate. You specify a callback function and it is called automatically whenever it is time to send a listener an update. For example:
    class MyEventSource : EventSource
    {
        ...
        PollingCounter _countOfDictionaryEntries;

        public MyEventSource()
        {
            // PollingCounter needs a reference to its parent EventSource, so it
            // is created in the constructor (a field initializer can't use `this`).
            _countOfDictionaryEntries = new PollingCounter(
                "Dictionary Entry Count", this, () => g_reliableDictionary.Count);
        }
    }

Lastly, any guidance on limits & resource utilization for how many EventCounter counters can be monitored per host on Windows / Linux? For example:
4.1 Can we collect/report on 1000 individual EventCounter counters every 10 seconds? What is the data volume (e.g., 1 GB / day) compared to PerformanceCounter?
4.2 What is the impact on host / container CPU and memory whilst monitoring 1000 individual EventCounter counters every 10 seconds?

I don't have exact numbers to quote, but I can offer my basic mental model. We've got plans to do some real performance investigation - this is just educated guesswork, and I make no promises you would see these numbers if you experimented today:

  1. The size of metric data when it is stored. This isn't specific to EventCounters but rather to your chosen persistence mechanism. Assuming you were using a time series database you'd probably expect counter metadata to be stored efficiently once for the entire series and then each datapoint is sizeof(value) + sizeof(timestamp). There is likely potential for very effective compression if the times/values follow predictable patterns or you are willing to sacrifice precision for storage size. But naive uncompressed storage of a (32 bit timestamp + 32 bit value) * 1000 counters * 8640 updates/day is ~69MB/day if I did my math right.
  2. Process VM usage to store counter data while it is being aggregated - currently all the stats we offer can be computed in O(1) space. These are things like Min, Max, Count, Sum, SumOfSquares, and Mean. For efficiency we buffer a small number of recent updates before doing the calculation (I think it was 10?) and the values are doubles, plus some overhead for object headers and tracking multiple series, so maybe ~150 bytes * # counters * # listeners. Then you also have per-counter metadata that is mostly strings, so perhaps another 100-200 bytes/counter. Ballpark guess for 1000 counters with 1 listener: 350 KB? If you used the polling variety of counters then none of the statistics are needed any more and the size might drop in half.
  3. CPU overhead to aggregate the statistics - For the polling variety counters this is zero, but if you use the counters which aggregate it for you my best guess is the InterlockedCompareExchange to get the counter into the buffer probably dominates the overall time. Maybe 100 cycles per counter update and your app controls how frequently it makes updates.
  4. CPU overhead to format and emit an event - depends in part on the tracing system we are using (ETW, EventPipe, LTTng) and the speed of the eventual IO, but maybe ~1000 cycles per counter per emitted update? 1M cycles every 10 seconds would put you at ~0.003% of a modern single CPU thread, or a latency <= 1 ms.
  5. IO bandwidth on the tracing channel - depends on the size of your descriptive metadata strings. We don't make any effort to compress them, so all the strings get transmitted with every update. I'll guess a typical counter update has a ~300 byte payload, so 1000 of them every 10 sec is 30 KB/sec. Depending on how you had the tracing channel set up this could be direct memory IO, in which case it looks negligible, but if you were directly persisting the events to a file or a low-bandwidth network connection that might not work as well.
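As a sanity check, the storage estimate in point 1 and the CPU fraction in point 4 can be reproduced with a throwaway calculation (not part of the thread; the 3 GHz clock is an assumed figure for "a modern single CPU thread"):

```csharp
using System;

class CounterMath
{
    static void Main()
    {
        // Point 1: naive uncompressed storage, 32-bit timestamp + 32-bit value
        // per datapoint, 1000 counters, one update every 10 seconds.
        long updatesPerDay = 24 * 60 * 60 / 10;              // 8640
        long bytesPerDay = (4 + 4) * 1000L * updatesPerDay;
        Console.WriteLine(bytesPerDay);                      // 69120000 -> ~69 MB/day

        // Point 4: 1000 counters * ~1000 cycles each, every 10 seconds,
        // as a fraction of an assumed ~3 GHz core.
        double cyclesPerSecond = 1000 * 1000 / 10.0;
        Console.WriteLine(cyclesPerSecond / 3e9);            // ~3.3e-5 -> ~0.003%
    }
}
```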

Overall my sense is that unless you go nuts with huge numbers of counters or lots of parallel listeners, or run an app on some very constrained hardware, the performance overheads of counters probably won't be a meaningful concern. Anecdotally, I can tell you that the ASP.NET team turns on all the default runtime and ASP.NET counters, emitting once per second, when they do performance benchmarking for TechEmpower. It is a pretty sensitive benchmark and they don't have enough measurement precision to discern any difference in the results.

Sorry it got a little long there, but hopefully that was more useful than the Twitter-sized answer : ) Cheers!

@MedAnd
Author

MedAnd commented Jun 20, 2019

Thank you for the detailed response @noahfalk ... I've read it several times and wanted to confirm my layman's understanding + interpretations are correct, apologies in advance if they are not 🙂:

  • There is very, very little overhead in using EventCounter, and it is being designed to be cross-platform, high-performance, and low in resource utilization from the get-go
  • No native EventCounter persistence store (able to survive a server restart) is planned, however forwarding of the metrics to persistence stores such as Application Insights / Azure Monitor will be made very easy
  • Unlike PerformanceCounters, EventCounters will not be able to maintain their state even if listeners were attached before the emitting process that created the EventCounters restarted. The state of the EventCounter in memory belongs to the emitting process... my interpretation of below:

Within the process emitting the counters you declare an instance of EventCounter, configure some metadata, and then start calling WriteMetric() every time you want to make an update. As long as there is no listener EventCounter discards the data immediately. If at some point a listener connects it will communicate to the EventCounter what the desired update rate is. EventCounter will then batch up the updates in-memory, consolidate them by computing summary statistics, and then send updates of those statistics at the agreed upon transmission rate.

The listener decides what the duration of this persistence is, typical values are likely to range from 1 second to 10 minutes but that isn't required. If the process emitting the counters terminates then all in-memory data is naturally lost, but updates that were already emitted to the logging system may or may not be persisted depending on how the end-to-end flow has been set up. It is relatively easy to configure ETW, Lttng, or EventPipe to log into a file so that is one way the data might be persisted.

Some questions remain around my specific scenario:

I'm still not sure whether it would be possible & practical to wire up re-loading of state from a durable persistence store? For example, the passages below imply that even if, on start-up, one were to reload state from somewhere, the value is discarded unless a listener is already present and attached, which is almost impossible to guarantee without blocking. In the PerformanceCounter design this is not an issue, as either an emitter or a listener keeps the current value alive in memory. Say my code emits a PerformanceCounter NumberOfItems64 and sets the raw value to 5. Even if there is no listener attached, the value is 5, and when a listener does attach, it will get the value of 5 and report that to Azure Monitor.

EventCounter doesn't store any data until a listener indicates interest. If the counter producer hasn't logged any data in between when listening started and when the first update is transmitted then the listener is getting statistics from a sample of size 0. Not an error but probably not useful.

As long as there is no listener EventCounter discards the data immediately.

PS. If my understanding of the above is correct, whilst I understand many of the trade-offs in favor of the new design, the nature and properties of EventCounters are very different from PerformanceCounters. That is, by design you do not want to use memory-mapped files in the new EventCounter approach as they are not cross-platform, but there is a loss of functionality, as PerformanceCounters do remain alive, at least in memory, as long as either one listener or one emitter is attached and running... couldn't we combine the new design and keep (even if opt-in) the properties of the old design (memory-mapped files) 🙂 ?

@noahfalk
Member

I've read it several times and wanted to confirm my layman's understanding + interpretations are correct, apologies in advance if they are not

No worries at all. I'm actually using this opportunity to figure out if my descriptions are good and where people might get confused. I've got a TODO item to write documentation for some of this new work and you are giving me a dry-run at it with free fast feedback ; )

I'm still not sure whether it would be possible & practical to wire up re-loading of state from a durable persistence store? For example, the passages below imply that even if, on start-up, one were to reload state from somewhere, the value is discarded unless a listener is already present and attached, which is almost impossible to guarantee without blocking.

The PollingCounter variation of the counters should make this fairly easy, at least in terms of interacting with the counters. Let's say you have some particular value, double MyData.g_myValue, that you would like all the performance counter viewers to observe. On startup you initialize it however you want, persist it as long as you want, and periodically you change its value based on activity that is happening in your app. To publish this value so that all the counter viewers can see it, you declare a counter for it:

class MyEventSource : EventSource
{
    ...
    PollingCounter _myCounter;

    public MyEventSource()
    {
        _myCounter = new PollingCounter("The-best-counter-ever", this, () => MyData.g_myValue);
    }
}

If a listener hooks up, it could ask to receive updates for your counter every 10 seconds. This will cause the callback () => MyData.g_myValue to be invoked every 10 seconds. Whatever value you currently have stored will be returned from the callback, timestamped, formatted as a log message, emitted to the log stream, and then read by the listener or stored for later viewing. If I understood you correctly, this is the behavior you were looking for.

When I was referring to discarding state when there is no listener, I was talking about the EventCounter (or IncrementingEventCounter) variation of the counters. Those two counters have a push model where you update the counter when an event occurs, and the counter then computes stats that summarize all the updates you made during that time period. For example, say you are doing some transactions and you want to record how long the transactions took. At the end of each transaction you might invoke myEventCounter.WriteMetric(transactionTime). When there is no listener, this data is ignored. Later a listener shows up and asks for updates once a minute. Now the event counter starts saving all the transactionTimes that occur during the minute, and at the end it emits a log message which says there were 20 transactions during this interval, min time was 0.5 sec, max was 3.1 sec, average was 1.2 sec, etc. All of these statistics depend only on the data that was collected in that one-minute period, so at the end of the minute, after the log message is recorded, the stats are reset to 0 and the next minute's worth of data starts aggregating.

but there is a loss of functionality as PerformanceCounters do remain alive, at least in memory, as long as either 1 listener or 1 emitter is attached and running... couldn't we combine the new design and keep (even if opt-in) the properties of the old design (memory mapped files) 🙂 ?

In the example above you could always replace MyData.g_myValue with a piece of memory that is backed by a memory-mapped file. There is also nothing preventing a listener from remembering the last logged value it observed, regardless of whether the emitting process has terminated and no new updates are being sent. If you used one of those approaches, does that get you the properties of the solution you are looking for?
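The memory-mapped-file idea could be sketched roughly like this (purely illustrative; the file path, layout, and type names are assumptions, not anything from the thread):

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

// A counter value whose backing store is a memory-mapped file, so the last
// written value survives process restarts and can be re-read on startup.
sealed class PersistentCounterValue : IDisposable
{
    private readonly MemoryMappedFile _file;
    private readonly MemoryMappedViewAccessor _view;

    public PersistentCounterValue(string path)
    {
        _file = MemoryMappedFile.CreateFromFile(path, FileMode.OpenOrCreate,
                                                mapName: null, capacity: sizeof(double));
        _view = _file.CreateViewAccessor(0, sizeof(double));
    }

    public double Value
    {
        get => _view.ReadDouble(0);
        set => _view.Write(0, value);
    }

    public void Dispose() { _view.Dispose(); _file.Dispose(); }
}

// Usage with a PollingCounter: the callback simply reads the mapped value.
//   var stored = new PersistentCounterValue("counter.bin");
//   var counter = new PollingCounter("restored-counter", eventSource, () => stored.Value);
```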

@MedAnd
Author

MedAnd commented Jun 24, 2019

So in the new .NET Core approach, and to continue with the Service Fabric scenario / example / wish-list 🙂, I will have an EventSource inside of which I have a PollingCounter instance called DeadLetteredMessagesPollingCounter.

On microservice start-up I initialize the value of the DeadLetteredMessagesPollingCounter with the dead lettered messages count I read from a durable & persisted store. A Log Analytics agent running either on Linux or Windows (roadmap) will then be able to subscribe to the DeadLetteredMessagesPollingCounter, configured via the Azure Portal, specifying how often the callback-report loop occurs?

In the same microservice, each time a dead lettered message scenario is encountered, application logic needs to increment the g_myValue variable / object. Moreover, on each successful re-try of a dead lettered message, application logic needs to decrement the value of g_myValue. Wondering if this means application code also has to coordinate access to g_myValue, or are you planning to provide some level of support for this within the EventCounter types, including PollingCounter?

For example from the old PerformanceCounter documentation we have the following choice:

The Increment, IncrementBy, and Decrement methods use interlocks to update the counter value. This helps keep the counter value accurate in multithreaded or multiprocess scenarios, but also results in a performance penalty. If you do not need the accuracy that interlocked operations provide, you can update the RawValue property directly for up to a 5 times performance improvement. However, in multithreaded scenarios, some updates to the counter value might be ignored, resulting in inaccurate data.

PS. A thought on above, would be most valuable to formalize & document the initialize scenario/pattern for PollingCounter, hopefully with examples in C#.

@noahfalk
Member

I've got this flagged to come back to, just wanted to give you the heads up that there is a flurry of activity I have to attend to getting changes wrapped for Preview7. Once that calms down (couple days?) I'll be back to this : )

@noahfalk
Member

A Log Analytics agent running either on Linux or Windows (roadmap) will then be able to subscribe to the DeadLetteredMessagesPollingCounter, configured via the Azure Portal, specifying how often the callback-report loop occurs?

Yes, with a healthy dollop of hand-waving : ) We are still pretty early exploring the Azure integration part of the puzzle so I won't have anything concrete, but my super rough goal is you deploy the app to Azure, you configure something saying you want the counters (maybe via portal? maybe something in the app?), and then the counters show up in Azure logs/graphs/reports some place that makes sense.

Wondering if this means application code also has to coordinate access to g_myValue or are you planning to provide some level of support for this within the EventCounter types, including PollingCounter?

In PollingCounter you have total control over g_myValue so you get to decide what synchronization primitives, if any, are going to be used.
With EventCounter, the runtime owns the storage of the counter data, and the current policy is to use interlocked operations for the updates. If you had a counter that updates frequently enough that the choice of interlocked operations impacts overall app performance (100,000+ updates/sec?), then I'd probably suggest switching to a PollingCounter to gain more control over it and be able to optimize it.
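For the dead-letter scenario discussed above, app-owned synchronization around a PollingCounter's value might look like this sketch (the class and counter names are invented for illustration; the interlocked pattern mirrors what the old PerformanceCounter Increment/Decrement methods did):

```csharp
using System.Threading;

// Application-owned state for a PollingCounter: the app coordinates access
// itself, here with interlocked operations so concurrent increments and
// decrements are never lost.
static class DeadLetterStats
{
    private static long _count;

    public static void MessageDeadLettered() => Interlocked.Increment(ref _count);
    public static void MessageRetriedOk()    => Interlocked.Decrement(ref _count);

    // Read by the PollingCounter callback, e.g.:
    //   new PollingCounter("dead-lettered-messages", eventSource,
    //                      () => DeadLetterStats.Current);
    public static long Current => Interlocked.Read(ref _count);
}
```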

A thought on above, would be most valuable to formalize & document the initialize scenario/pattern for PollingCounter, hopefully with examples in C#.

We are definitely planning to get this stuff documented as part of our 3.0 work. If you are looking for examples some already exist inside the runtime: https://github.com/dotnet/coreclr/blob/37ff0f54f4259e2e9629c62dfc7602c37ee3a97a/src/System.Private.CoreLib/src/System/Diagnostics/Eventing/RuntimeEventSource.cs#L56. I expect we'll make something more simplified for demo purposes though.

@MedAnd
Author

MedAnd commented Jun 28, 2019

To continue the journey - or to go another round, depending on your point of view 🙂 - as interlocked operations won't be part of PollingCounter, documentation and an advanced sample would be the next best thing. An advanced PollingCounter-with-interlocked-operations C# sample, please 😄

Regarding consumers: given that EventCounter is built upon ETW when hosting on Windows, if you modify the Log Analytics agent to consume EventCounter traces, in theory the Log Analytics agent (or similar consumers) will then be able to consume and forward any ETW trace? I think the Log Analytics agent currently does not support consuming ETW traces, as per the below image from Azure. I hope your scope includes any ETW trace, as opposed to only ETW traces of the EventCounter schema?

[image: Azure portal screenshot of the Log Analytics agent data sources]

If the above is way down your road-map though, I would like to implement my own monitoring microservice to consume ETW traces (including EventCounter) which other processes on the node emit.

Wondering if you plan to also provide documentation & samples for how to write high performance consumers for EventCounter? For example I have been investigating / planning to use KrabsETW which is used in production by the Office 365 Security team.

PS. Hidden Treasure: Intrusion Detection with ETW (Part 2)

@noahfalk
Member

An Advanced PollingCounter with interlocked operations C# sample please 😄

Request noted : ) #368

given EventCounter is built upon ETW if hosting on Windows,

ETW is one option but EventPipe is a new 2nd option. I can't predict which one we would try to use in this scenario yet. It could depend on what Log Analytics already has in place and whether we are building a single xplat solution or two different Windows/non-Windows solutions.

I hope your scope includes any ETW trace as opposed to specific ETW traces of schema type EventCounter?

Certainly I don't want to scope it smaller than it needs to be, but there are a few factors that might be an issue if we aimed for the bigger scope:
a) Interpreting the non-counter data - The counter data has a constrained format. ETW data as a whole doesn't necessarily have events in known or self-describing formats.
b) Displaying the data - the types of UI for displaying counters (charts, graphs, simple tables) are fairly predictable. In the space of all ETW event data, an event could represent anything. Generic list/table views are possible, but often aren't a useful representation for a lot of the data that is traditionally logged to ETW. For example, if you had a sampling profiler that collected 50,000 callstacks, displaying them as a 50,000-row table wouldn't be ideal.
c) Data size - counters alone tend to be on the order of bytes or KBs per second. Full ETW traces are often 3+ orders of magnitude larger. That vast difference in scale might mean it needs a different storage solution, or even a different business pricing model. Totally speculating here, but maybe counters are free as part of various other offerings, while full traces have to be charged by the GB.

Whether these will ultimately be an issue I don't know; it's just what comes to mind.

If the above is way down your road-map though

I don't have a great sense of the timing because I need to reach out to partner teams I haven't worked with much before, and I don't know what their timetables are going to be. From the outside I often hear that Microsoft seems like an atomic unit, but if you think of it as a group of 100,000 employees, it makes sense that statistically any given person only works with a tiny fraction of the company, and that fraction changes over time. Thankfully it's still a friendly and supportive bunch. I'd be pretty surprised if it was less than six months, but I have no idea what the upper limit is.

Wondering if you plan to also provide documentation & samples for how to write high performance consumers for EventCounter?

Yeah, we'll need to. One current example that probably isn't bad is the dotnet-counters command line tool. It is a simple viewer that prints the counter values to the console. Full source is available here: https://github.com/dotnet/diagnostics/tree/master/src/Tools/dotnet-counters

For example I have been investigating / planning to use KrabsETW which is used in production by the Office 365 Security team.

You should be able to use any parser though I'm not familiar with that one specifically. The one we use is called TraceEvent available here: https://github.com/microsoft/perfview/tree/master/src/TraceEvent
This parser is used by our SDK tools, by PerfView, and by Visual Studio. In addition to parsing ETW generically, it also has a fair amount of built-in knowledge about the events generated by .NET, and various higher-level abstractions useful in a variety of diagnostic tools. TraceEvent also parses the EventPipe nettrace format and a limited portion of LTTng's CTF format, if you ever anticipate wanting to go xplat.

Cheers!

@MedAnd
Author

MedAnd commented Jun 29, 2019

Roger... really appreciate the above engagement & think it's time for me to digest the info further and incorporate it into my R&D! For completeness, and in the hope it influences the work yet to come, from the Hidden Treasure: Intrusion Detection with ETW (Part 2) article the following stands out:

The TDH APIs are what all ETW APIs ultimately call. While they offer a great deal of power, they’re still Win32-style APIs that are cumbersome to use. TraceEvent is a library used by the PerfView tool and has the benefits of being a well-designed .NET API. Unfortunately, it doesn’t perform well for scenarios where we want to keep memory usage to a minimum. System.Diagnostics.Tracing has the advantage of being part of the .NET BCL but we’ve observed intermittent exceptions and unexpected behavior in the past. Additionally, it suffers from the same memory consumption issue that TraceEvent does.

In response to these challenges, Office 365 Security chose to implement our own API with three primary goals:

  • Intuitive and flexible API
  • High performance – filtering events in the native layer
  • Available both in .NET and native C++

Have linked to Tracing and Counters Interest Group - Announcements

@tommcdon tommcdon added this to the 5.0 milestone Sep 12, 2019
@MedAnd
Author

MedAnd commented Oct 20, 2019

Great to see this is scheduled for the 5.0 milestone!

@noahfalk
Member

noahfalk commented Nov 6, 2019

Going to close this issue because the doc work is being tracked by #515 and this issue was primarily about answering questions. If there is anything I missed just let me know we can reopen/open a new issue as appropriate.

@noahfalk noahfalk closed this as completed Nov 6, 2019
@ghost ghost locked as resolved and limited conversation to collaborators Jun 27, 2023