breaking changes for v0.4.0 #47
Labels: enhancement
Status: Open
scottlamb added a commit that referenced this issue on Apr 28, 2022:

As listed in #47. This improves throughput in the client/h264 benchmark by about 20%, I think because we copy fewer bytes around. `CodecItem` went from 256 bytes to 176.
scottlamb added a commit that referenced this issue on Apr 30, 2022:

I left alone `VideoFrame::new_parameters`, as I'll likely eliminate it as described in #47.
scottlamb added a commit that referenced this issue on May 11, 2022:

Part of #47. The main benefit is that `VideoFrame::into_data` can cheaply return a `Vec<u8>` that the caller can mutate. In particular, they could convert H.264 data from Annex B to AVC form or skip non-VCL NALs. Neither of these transformations adds bytes, so they're both possible without allocation. Audio and message frames still use `Bytes` internally, as that allows them to avoid copying when the frame is wholly contained in a packet; I think this is much more common than for video. I didn't add an `AudioFrame::into_data` or `VideoFrame::into_data`.
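For illustration of why those rewrites fit in place: a 4-byte Annex B start code and a 4-byte AVC length prefix occupy the same space, so the conversion can overwrite the buffer it was given. A minimal sketch, not Retina's API, assuming every NAL unit is preceded by a 4-byte start code (real streams may also use 3-byte start codes):

```rust
/// Rewrites 4-byte Annex B start codes (00 00 00 01) into 4-byte big-endian
/// AVC length prefixes, in place, without growing the buffer.
fn annex_b_to_avc_in_place(data: &mut [u8]) {
    // Record the offset of every start code. Emulation prevention bytes
    // guarantee this pattern can't occur inside a NAL unit's payload.
    let mut starts = Vec::new();
    let mut i = 0;
    while i + 4 <= data.len() {
        if data[i..i + 4] == [0, 0, 0, 1] {
            starts.push(i);
            i += 4;
        } else {
            i += 1;
        }
    }
    // Overwrite each start code with the length of the NAL unit that follows it.
    for (n, &start) in starts.iter().enumerate() {
        let end = starts.get(n + 1).copied().unwrap_or(data.len());
        let nal_len = (end - start - 4) as u32;
        data[start..start + 4].copy_from_slice(&nal_len.to_be_bytes());
    }
}
```

With `VideoFrame::into_data` returning an owned `Vec<u8>`, a caller could run something like this directly on the buffer it already holds.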
scottlamb added a commit that referenced this issue on May 11, 2022:

For #47. Along the way, I simplified the `aac::Depacketizer` impl a bit, eliminating a clone and some redundancy.
Work underway on the `next` branch.

Breaking changes I'm considering:

- `Box<str>` instead of `String`.
- `CodecItem::Error` or the like. We could just make `CodecItem` `#[non_exhaustive]` rather than figure out the details now (see the sketch after this list). (edit: just did the `#[non_exhaustive]` for now.)
- A `Session::teardown` method.
- Remove `pub` from a few items that got it without much thought. (They're already `#[doc(hidden)]`.)
- Replace `VideoFrame::new_parameters: Option<Box<VideoParameters>>` with a boolean. You can obtain the latest parameters via `Stream::parameters`, so there's no good reason for the preemptive clone or for the extra bytes in every `VideoFrame`. Except:
  - When `retina::client::Demuxed::poll_next` returns `Pending`, `Stream::parameters` currently may reflect changes in the upcoming frame. This is probably confusing / worth avoiding.
  - `Demuxed` doesn't offer easy access to the streams, but there's no reason it shouldn't.
- Replace `retina::codec::Parameters` with `retina::codec::ParametersRef`, to avoid having to clone the parameters when returning them from `retina::client::Stream::parameters` (see the sketch after this list).
- Have `Stream::setup` take an options struct to allow customizing depacketizer options. Useful for e.g. annexb support #44.
- `PacketContext` to be paired with a new type `StreamContext`, rather than just `RtspConnectionContext`. I'd expose `StreamContext` as an accessor on the stream rather than include it in `MessageFrame`. Notably, `MessageFrame` has a pair of `PacketContext`s, so this should shrink `MessageFrame`'s size by four socket addresses, at the cost of the caller having to pull more stuff together to get enough information to find the packet in a pcap file.
- Change `VideoFrame::data()` and friends from returning a `&Bytes` to returning a `&[u8]`, maybe adding a `VideoFrame::take_data()` that returns a `Vec<u8>`. If the caller wants to actually modify the bytes in place for some reason, this would be better. I don't think there's any way to go from a `Bytes` back to a `BytesMut`, even if there are no other references. (I see an issue, "0.5 release prevents going back from Bytes to BytesMut" tokio-rs/bytes#350, that seems to be saying the same.)
- `PacketItem` should expose RTCP compound packets, not just sender reports.

I'm open to others.
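On the `#[non_exhaustive]` item above: marking the enum that way forces downstream crates to include a wildcard match arm, so a variant such as a future `CodecItem::Error` can be added without a semver break. A minimal sketch with simplified, hypothetical variant payloads (the real `CodecItem` variants carry full frame structs):

```rust
// Simplified, hypothetical stand-in for the real enum.
#[non_exhaustive]
pub enum CodecItem {
    VideoFrame(Vec<u8>),
    AudioFrame(Vec<u8>),
    MessageFrame(Vec<u8>),
}

// In a downstream crate, matches must now include a catch-all arm; within the
// defining crate itself the attribute has no effect.
fn describe(item: &CodecItem) -> &'static str {
    match item {
        CodecItem::VideoFrame(_) => "video",
        CodecItem::AudioFrame(_) => "audio",
        CodecItem::MessageFrame(_) => "message",
        _ => "something added in a later release",
    }
}
```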
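And on the `Parameters` → `ParametersRef` item: the idea is a borrowing enum, so `Stream::parameters` can hand out references to parameters the stream already owns instead of cloning them on every call. A rough sketch using hypothetical, simplified types:

```rust
// Hypothetical, simplified types illustrating the borrowing-enum pattern.
pub struct VideoParameters { /* SPS/PPS, pixel dimensions, ... */ }
pub struct AudioParameters { /* codec config, sample rate, ... */ }

enum Parameters {
    Video(VideoParameters),
    Audio(AudioParameters),
}

pub enum ParametersRef<'a> {
    Video(&'a VideoParameters),
    Audio(&'a AudioParameters),
}

pub struct Stream {
    parameters: Option<Parameters>,
}

impl Stream {
    // Returns a cheap borrow; a caller that wants an owned copy can clone
    // explicitly at the call site.
    pub fn parameters(&self) -> Option<ParametersRef<'_>> {
        match &self.parameters {
            Some(Parameters::Video(v)) => Some(ParametersRef::Video(v)),
            Some(Parameters::Audio(a)) => Some(ParametersRef::Audio(a)),
            None => None,
        }
    }
}
```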
Deferred

I'm going to punt on returning anything borrowed from the `Session`/`Demuxed`. This can't be done with the `futures::stream::Stream` interface (as Rust doesn't have GATs yet / it's unclear if `Stream` can be extended to support them anyway). I think switching to just an inherent method `async fn next(&mut self) -> ...Item` might be worthwhile, which would let us borrow from a ring buffer (see #6, and the first sketch after this section). But I don't want to make the improvements already implemented wait for that, and folks say "version numbers are cheap".

- `PacketItem` and `CodecItem` should include a `&Stream` to save the caller having to look it up. They should have it handy anyway.
- `CodecItem::VideoFrame` could also return the `params: Option<&VideoParameters>` (likewise audio/message media) rather than requiring the caller match on a `Parameters` enum.
- A `Buf` that borrows from the internal buffers, rather than copying into a contiguous buffer. If we borrow, we don't have to make a new `Vec` of chunks for each frame. We'll make it easy for the caller to copy into a new contiguous `Vec` as before, or to copy into some other structure (like a single `Vec` for multiple frames, a ring buffer, etc.).

Other deferred stuff, because again version numbers are cheap, and these aren't as fully baked as the stuff above the fold:

- Switch from `log` to `tracing` (see the sketch after this section). This would allow the caller to supply context for interpreting the log messages. Motivating example: Moonfire NVR maintains all its streams from the same tokio thread pool. When Retina logs a message about a stream, it's unclear which stream it applies to. Retina at least knows what URL we're talking with, but Moonfire NVR can wrap that in better context like the name of the camera and main/sub stream, and `tracing` seems like a popular way to augment log messages in this way. Retina might also put in its own spans for each session. Using `tracing` properly at the Moonfire NVR level would involve moving stuff into fields rather than just the message and rethinking the log format. Not quite ready to take that on.
- At the `CodecItem` level: I don't like that right now an item's `loss` field doubles as the number of packets lost (not necessarily consecutively) and information about whether the current frame is "complete". I wonder if it's better to represent these as orthogonal concepts. Sometimes we know the loss was in a previous frame; sometimes we're uncertain; sometimes we know we're missing part of the current frame (a prefix, one or more interior regions, or a suffix). I'm not totally sure what the best representation is, but it might involve losing the `loss` field. Having a separate `CodecItem::Loss` would also give a way to represent richer information about the loss, like the packet number range and the times of the packets before/after the loss (see the sketch after this section).
- `PlayOptions::enforce_timestamps_with_max_jump_secs`: it's already optional, but I'm not sure it belongs at this level at all. The caller can do this validation if they care to, and it may not be used often enough to be worth advertising in this library. They may instead want to do something more sophisticated than trusting the RTP timestamps at all, given that many cameras don't use the monotonic clock to produce them. If I come up with one really battle-tested reliable strategy, it might belong in this library, but I don't think `enforce_timestamps_with_max_jump_secs` is that.
- Make the `crate::Timestamp` step optional/customizable. Also requires more design effort.
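A sketch of the inherent-method idea from the first paragraph above, using hypothetical, simplified types: an `async fn next(&mut self)` can return an item that borrows from the demuxer's internal buffer, which the `futures::stream::Stream` trait can't express without GATs.

```rust
// Hypothetical, simplified types; not Retina's API.
struct Demuxed {
    buf: Vec<u8>, // stand-in for a ring buffer of depacketized frame data
}

struct Item<'a> {
    data: &'a [u8],
}

impl Demuxed {
    // The returned `Item` borrows `self.buf`, so no per-frame allocation or
    // copy is needed. `Stream::poll_next` can't hand out such a borrow of `self`.
    async fn next(&mut self) -> Option<Item<'_>> {
        // ... read from the connection and fill `self.buf` here ...
        if self.buf.is_empty() {
            None
        } else {
            Some(Item { data: self.buf.as_slice() })
        }
    }
}
```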
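On the `log` → `tracing` bullet: a sketch of how a caller such as Moonfire NVR could attach per-stream context once Retina emits `tracing` events. `run_retina_session` is a hypothetical placeholder for whatever code drives the session; the span's fields are attached to every event emitted while it is entered.

```rust
use tracing::Instrument;

// Hypothetical placeholder for the code that sets up the RTSP session and
// consumes frames; any `tracing` events emitted inside inherit the span.
async fn run_retina_session() {
    tracing::info!("connected");
}

async fn supervise_stream(camera_name: &str, kind: &str) {
    // e.g. camera = "driveway", kind = "main" or "sub".
    let span = tracing::info_span!("stream", camera = %camera_name, kind = %kind);
    run_retina_session().instrument(span).await;
}
```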
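And on the loss-reporting bullet, one hypothetical shape for the "orthogonal concepts" idea, purely illustrative and not a settled design: loss becomes its own item carrying the richer details, while frames separately say whether and how they are incomplete.

```rust
use std::ops::RangeInclusive;

// Hypothetical types sketching the separation; none of this is Retina's API.
struct Loss {
    /// RTP sequence numbers known or believed to be missing.
    sequence_range: RangeInclusive<u16>,
    /// RTP timestamps of the packets received immediately before/after the gap,
    /// when known.
    before: Option<u32>,
    after: Option<u32>,
}

/// Which part of the current frame (if any) the loss affected.
enum FrameCompleteness {
    Complete,
    MissingPrefix,
    MissingInterior,
    MissingSuffix,
    Unknown,
}
```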