Aggregating stream_frame #163
Hello @incfex, thanks for opening an issue. IIUC, you're basically asking for an aggregated version of the stream_frame event. For the ACK-frame special case, maybe we can add an endpoint discriminator, for example (though I'm not very happy with the semantics of "owner" here).
I am, however, a bit confused by your motivation: aren't you still logging all of the frames in the aggregated event anyway?
Sorry for the confusion. My current situation is: on the SERVER side, it is sending a lot of STREAM frames and receiving a similar amount of ACK frames.

This is basically what I am asking for, but only for ACK or STREAM frames, and with a little twist: qlog currently requires each STREAM frame to be logged as its own event. Compare my current example of logging all the STREAM frames individually with an example of logging them via the aggregated event. This approach could also apply to ACK frames, becoming a simpler version of the existing aggregation events. What do you think of this?
Thanks for the additional information, that's much clearer now. While I agree with the use case for an aggregated event, it has downsides. For example, you'd have to check whether per-frame values deviate from the default, and you'd have to keep multiple of these events "alive"/buffered, one for each frame type you might care about (and demux frame data to the correct one). It would also require considerably more advanced logic in tools.

I feel that would work in your specialized use case (only STREAM/ACK, many similar frame instances), but IMO that's not what should be standardized: this would be better suited for a "custom", application-specific qlog event, which is perfectly allowed by the spec as well.

This is partly because, IIUC, one of the main reasons to do this is to reduce log verbosity. qlog has a long history of trying different ways to combat this (e.g., a convoluted CSV-alike setup in draft-01 with columns). For draft-02, we took the decision to care less about (textual) verbosity/repetition, since this can be dealt with using either compression or a binary serialization format (see #30 for some earlier discussion and #144 for the current issue on this). I think moving away from pure JSON is the more general fix here.

So what I'd propose is to add something like this:
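The concrete proposal was elided from this capture of the thread. As a rough illustration only of the buffering/demuxing complexity described above, here is a minimal Python sketch; the event name `frames_aggregated` and all field names are assumptions for illustration, not part of the qlog spec:

```python
# Hypothetical sketch: buffer frames per type and emit one aggregated
# qlog-style event per frame type. Event and field names are invented
# for illustration; they are not defined by the qlog spec.
class FrameAggregator:
    def __init__(self):
        self.buffers = {}  # frame_type -> list of frame dicts

    def on_frame(self, frame):
        # Demux each frame to the buffer for its type.
        self.buffers.setdefault(frame["frame_type"], []).append(frame)

    def flush(self, reference_time=0.0):
        # Emit one aggregated event per buffered frame type.
        events = []
        for frame_type, frames in self.buffers.items():
            events.append({
                "time": reference_time,
                "name": "transport:frames_aggregated",  # hypothetical name
                "data": {
                    "frame_type": frame_type,
                    "count": len(frames),
                    "frames": frames,
                },
            })
        self.buffers.clear()
        return events


agg = FrameAggregator()
agg.on_frame({"frame_type": "stream", "stream_id": 0, "length": 1200})
agg.on_frame({"frame_type": "stream", "stream_id": 0, "length": 1200})
agg.on_frame({"frame_type": "ack", "acked_ranges": [[0, 10]]})
events = agg.flush()
print(len(events))  # one aggregated event per frame type
```

Note that a real implementation would also need a flush policy (on packet boundaries, timers, or buffer limits), which is exactly the extra complexity being debated.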
I think @marten-seemann probably also has some opinions on this.
Thanks for your response! I agree that this would require more advanced logic in tools to achieve.

About this: if a website has video playback or file download, it will produce bulk runs of consecutive STREAM/ACK frames, and having video on a website is not that uncommon these days.
After applying the custom aggregated event:
Hello @incfex, thanks for keeping us updated on this. IIUC, you have now created a custom aggregated event.

During the recent discussion on this at the IETF, I proposed not adding events like this to the base specification. As such, it would help me to get some insight into your specific use case: why keeping qlog size down (if that is indeed the main motivation) is crucial to your setup, and why that's difficult to achieve by using compression instead. I'm not arguing that additional size optimizations would be useless, but I am debating whether the complexity of adding these events to the qlog standard is worth the benefits.

Relatedly, I wonder how you are using the logged information concretely. No longer having specific timestamps and packet associations for individual ACK/STREAM frames reduces debuggability in some ways. How are you currently processing the qlogs in practice to, e.g., find bugs/inefficiencies, or how do you plan to do so? Thank you in advance for this additional insight!
Absolutely, here is the qlog format we are currently using.
I have watched that session and learnt a lot from it. I now agree that if we are trying to make qlog a logging format, sticking to the wire format is the way to go. However, writing qlog hammers our disk I/O, increasing I/O latency and thus lowering our bandwidth. Also, our QUIC server runs in edge computing, where the disk space to store qlogs and the bandwidth to transfer them cost a lot, so we cannot use verbose JSON directly. On-the-fly/streaming compression does help, and we are using it now to further reduce the size of our qlogs.
Reasons as explained before:
Our current observation is that implementing qlog on a machine that currently runs QUIC fine may cause a performance impact, and to mitigate this you have to increase the hardware budget.
We are just starting to use qlog and trying to see what we can use it for. Specific timestamps can be recovered using time deltas, just like in other events. Currently we have a parser that converts our aggregated qlog back into standard qlog. There might be some places I did not explain clearly; if you have any questions, just ask. This also helps us a lot.
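The de-aggregation parser described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the aggregated event name, the `time_delta` convention, and the choice to expand into `transport:packet_sent` events are all hypothetical, since the actual custom format was not shown in this thread:

```python
# Hypothetical sketch of a "de-aggregation" parser: expand one custom
# aggregated event (invented name/fields) back into standard per-frame
# qlog events, recovering timestamps from per-frame time deltas.
def expand_aggregated(event):
    base_time = event["time"]
    expanded = []
    for frame in event["data"]["frames"]:
        # Assumed convention: each buffered frame carries a time delta
        # relative to the aggregated event's timestamp.
        expanded.append({
            "time": base_time + frame.get("time_delta", 0),
            "name": "transport:packet_sent",
            "data": {"frames": [
                {k: v for k, v in frame.items() if k != "time_delta"}
            ]},
        })
    return expanded


aggregated = {
    "time": 1000.0,
    "name": "transport:frames_aggregated",  # hypothetical name
    "data": {"frames": [
        {"frame_type": "stream", "stream_id": 0, "length": 1200, "time_delta": 0},
        {"frame_type": "stream", "stream_id": 0, "length": 1200, "time_delta": 3},
    ]},
}
events = expand_aggregated(aggregated)
print([e["time"] for e in events])  # → [1000.0, 1003.0]
```

The key point, matching the comment above, is that per-frame timing is not lost by aggregation as long as deltas are recorded alongside each frame.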
This custom optimization seems completely fine to do if people want to add the complexity. However, for the reasons outlined, I don't see a reason to try to design this into the base qlog specification. Perhaps the original poster has had time to learn from deployment since the issue was opened, but absent any follow-up I think this can safely be punted to a qlog extension.
Closing as timed out.
I am currently implementing qlog for video streaming using quiche. Due to the nature of video streaming, there are a lot of back-and-forth STREAM and ACK frames, and they are not very useful to log individually. From the server vantage point, ACKs can be aggregated by either the packets_acked or the frames_processed event. However, I did not find a way to aggregate the STREAM frames the server sends out. Is there a good way to solve this problem?
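For context, the two existing events the question mentions for aggregating ACKs can be sketched roughly like this. The field layout below is approximated from the qlog QUIC/recovery event definitions and may not match the draft exactly; consult the specification for the authoritative schema:

```python
import json

# Rough sketches of the two existing qlog events mentioned above for
# aggregating ACK information on the server side. Field names are
# approximations of the qlog event definitions, not authoritative.
packets_acked = {
    "time": 250.0,
    "name": "recovery:packets_acked",
    # Many acknowledged packets collapse into a single event.
    "data": {"packet_numbers": [5, 6, 7, 8]},
}

frames_processed = {
    "time": 250.0,
    "name": "transport:frames_processed",
    # Several processed frames can be listed in one event.
    "data": {"frames": [
        {"frame_type": "ack", "acked_ranges": [[5, 8]]},
    ]},
}

# Neither event covers outgoing STREAM frames, which is the gap the
# question asks about.
print(json.dumps(packets_acked)[:40])
```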