RFC: Tracing #14618
Comments
To clarify/for the record: this is about logging for basic components of the Crystal runtime. We cannot use the sophisticated stdlib logging facilities here, because tracing must work before the stdlib is initialized.
Single-line log messages are great. But I'd suggest using a standardized format instead of a kind of custom one. Dumping the trace unconditionally to stderr could mix with application output going to the same file descriptor. We need to be able to isolate these different purposes from each other, i.e. write traces to a different location.
It's also meant to have minimal impact on program execution, so that edge cases and race conditions can still be reproduced (not every time; it will still have some impact).
Good point about a standard format that could be easily ingested by external tools and databases (nice)! I hope I can find a better format than JSON, which would hurt human readability (my main use case).
Being able to trace to a file instead of stderr would be ❤️. I'm just afraid of losing the atomic write guarantee of pipes (for writes up to `PIPE_BUF` bytes).
It should always be an option to pass a file descriptor to dump the trace into. And that file descriptor can be a pipe, so it has that guarantee. Should be easy enough to set up externally that the pipe dumps into a file.
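The "pipe that dumps into a file" setup described above can be sketched in Ruby (filenames and trace lines here are made up for illustration): create a pipe, let a thread drain the read end into a file, and write single-line records to the write end, keeping the atomic-write guarantee for lines up to `PIPE_BUF` bytes.

```ruby
# Sketch: drain a pipe into a file so a tracing process can write to the
# pipe's fd and still get atomic single-line writes (up to PIPE_BUF bytes).
# The file name "trace.log" and the trace lines are illustrative.
r, w = IO.pipe

drainer = Thread.new do
  File.open("trace.log", "w") do |f|
    # Copy everything arriving on the pipe into the file until EOF.
    IO.copy_stream(r, f)
  end
end

# Stand-in for the traced program writing single-line trace records:
w.write("gc.malloc size=64 atomic=1\n")
w.write("sched.spawn fiber=0x5678\n")
w.close        # closing the write end lets the drainer reach EOF and finish
drainer.join

puts File.read("trace.log")
```

Each `write` of a full line stays atomic on the pipe, so concurrent writers cannot interleave partial lines, while the file on disk receives everything in arrival order.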
This is great to have and will reduce the need to do manual printf when debugging. It is probably not needed in the first implementation, but at some point it would be nice if the handling of the events were pluggable with a pretty locked-down interface (i.e. no strings except for things like fiber names; just records with enums and measurements in the recording steps). I'm thinking of shards providing support for something like https://github.com/mikesart/gpuvis/wiki , or pushing the values into hdrhistogram to be processed later, or whatever, without having to monkeypatch anything. That also changes the perspective a bit, as it immediately makes the discussion about event generation and processing/presenting as separate activities, where the latter is up to the user. But perhaps that can be added later, as long as the door isn't closed to it by any implementation choices.
Researching formats:
I'd thus like to have a couple of formats. By default, a format that is as readable as possible. I propose to follow the OpenTSDB line protocol:
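For reference, an OpenTSDB line-protocol record is a `put` command followed by a metric name, a timestamp, a value, and `key=value` tags (the values below are illustrative, not actual Crystal trace output):

```text
put sys.cpu.user 1356998400 42.5 host=webserver01 cpu=0
```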
Then have a JSON format:

```json
{
  "event": "<section.operation>",
  "timestamp": <timestamp>,
  "thread_id": "0x1234",
  "thread_name": "<name>",
  "fiber_id": "0x5678",
  "fiber.name": "<name>",
  "key": "value"
}
```

Notes:
Did you consider the OpenTelemetry Protocol (OTLP)? OTLP/HTTP maybe? Line-based output would still be helpful for quick debugging, as OTLP can be slightly tricky to set up.
Yes, that's "logfmt".
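For illustration, a logfmt-style trace line (the field names here are hypothetical, loosely mirroring the JSON proposal above) is trivial to parse, which is part of its appeal. A Ruby sketch:

```ruby
# Sketch: parse a logfmt-style trace line into a Hash.
# The line format is an assumption for illustration, not the final
# Crystal trace format.
def parse_trace_fields(text)
  text.split(' ').each_with_object({}) do |pair, h|
    key, _, value = pair.partition('=')
    h[key] = value
  end
end

line = "gc.malloc thread=0x1234 fiber=0x5678 time=123456789 size=64 atomic=1"
# First token is the event name, the rest are key=value pairs:
event, rest = line.split(' ', 2)
fields = parse_trace_fields(rest)

puts event            # => gc.malloc
puts fields["size"]   # => 64
```

A format like this stays greppable and copy-pasteable while remaining machine-parseable without a dedicated library.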
Reading the OTLP spec, it looks far too complex for the bare metal level we want here. It's not about tracing an application, but tracing the core runtime, down to each GC malloc for example. |
Implements tracing of the garbage collector and the scheduler as per #14618.

Tracing is enabled by compiling with `-Dtracing`, then individual tracing must be enabled at runtime with the `CRYSTAL_TRACE` environment variable, a comma-separated list of sections to enable, for example:

- ` ` (empty value) or `none` to disable any tracing (default)
- `gc`
- `sched`
- `gc,sched`
- `all` to enable everything

The traces are printed to the standard error by default, but this can be changed at runtime with the `CRYSTAL_TRACE_FILE` environment variable, for example `trace.log`. You can also redirect the standard error to a file (e.g. `2> trace.log` in a UNIX shell).

Example tracing calls:

```crystal
Crystal.trace :sched, "spawn", fiber: fiber
Crystal.trace :gc, "malloc", size: size, atomic: 1
```

**Technical note:** tracing happens before the stdlib is initialized, so the implementation must rely on some `LibC` methods directly (i.e. read environment variables, write to a file descriptor) and can't use the core/stdlib abstractions.

Co-authored-by: Johannes Müller <[email protected]>
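The `CRYSTAL_TRACE` semantics described in the PR (empty or `none` disables tracing, `all` enables everything, otherwise a comma-separated list of sections) can be sketched as follows. This is a Ruby illustration of the parsing logic only, not the actual Crystal implementation:

```ruby
# Sketch of CRYSTAL_TRACE parsing semantics (illustrative, in Ruby).
KNOWN_SECTIONS = [:gc, :sched]  # trace sections from the PR

def enabled_sections(env_value)
  case env_value.to_s.strip
  when "", "none" then []               # tracing disabled (default)
  when "all"      then KNOWN_SECTIONS   # enable everything
  else
    # Comma-separated list; unknown section names are ignored.
    env_value.split(',').map(&:strip).map(&:to_sym) & KNOWN_SECTIONS
  end
end

puts enabled_sections("gc,sched").inspect  # => [:gc, :sched]
puts enabled_sections("gc").inspect        # => [:gc]
puts enabled_sections(nil).inspect         # => []
```

Ignoring unknown section names (rather than raising) is an assumption here; the real implementation may behave differently.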
I propose to introduce a mechanism to trace GC and scheduler operations as they happen, recording when they happen and what they did, sometimes even measuring how long the operation took (e.g. `GC.malloc`).

This has proven invaluable for understanding multi-threaded synchronization issues that would have been impossible to reproduce using a debugger (you can't step manually and hope to reach a race condition: you need brute force and then analysis). In addition, tracing the GC can lead to interesting stats about how much time is spent on GC (malloc, collect, ...), or reveal how many allocations happened and when. Tracing both the GC and the scheduler, we can cross-analyze the lifetime of a program.
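As an example of the kind of analysis this enables, a trace log could be aggregated offline. A Ruby sketch, using a hypothetical line format along the lines discussed in the comments above:

```ruby
# Sketch: aggregate malloc events from a trace log into simple stats.
# The trace line format "<event> key=value key=value ..." is assumed
# for illustration.
lines = [
  "gc.malloc thread=0x1234 size=64 atomic=1",
  "gc.malloc thread=0x1234 size=128 atomic=0",
  "sched.spawn fiber=0x5678",
]

mallocs = lines.select { |l| l.start_with?("gc.malloc") }
total_bytes = mallocs.sum { |l| l[/\bsize=(\d+)/, 1].to_i }

puts "allocations: #{mallocs.size}"  # => allocations: 2
puts "total bytes: #{total_bytes}"   # => total bytes: 192
```

In practice the lines would come from the trace output rather than a literal array; the point is that a line-oriented format makes this kind of post-processing a few lines of scripting.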
The tracing itself is relatively simple: each trace is a line with its section (`gc`, `sched`), the operation (`malloc`, `enqueue`, ...), then context metadata (thread, fiber, time, duration) and eventually metadata about the operation itself. The trace is meant to be easy to parse and read, to be grepped, searched, copy-pasted, you name it.

I propose to have the tracing enabled with a compile-time flag (`-Dtracing`) and to have the feature built right into the stdlib. It could be implemented in the perf-tools shard, but it would be harder to patch itself in (especially into the GC collection) and harder to maintain when the stdlib changes. It would also be harder to use, as you would have to add the shard & require it before you can start using it.

Once compiled with tracing enabled, each section must be enabled manually (by default they're all disabled) using the `CRYSTAL_TRACE` environment variable, which is a list of section names separated by a comma (`,`). For example `CRYSTAL_TRACE=gc,sched` would log everything, while `CRYSTAL_TRACE=gc` would only log the GC.

Evolutions
We could augment the tracing to report more/less data about the program, for example only output GC collection stats (with more before/after details). We could also listen to a signal (and/or a keyboard shortcut) to stop the world and print details about the threads and schedulers (that one would fit nicely into perf-tools for starters).
Technical notes
See #14599 and #14659 for a proposed implementation and technical discussions.