
Add running scripts on connect/disconnect #436

Merged 6 commits into alvr-org:master on Jan 21, 2021

Conversation

DavisNT (Contributor) commented Jan 19, 2021

Added settings for running scripts on connect and disconnect (including SteamVR shutdown).

Previously, non-graceful closing of the TCP control connection was not detected (as no packets were sent or received), so application-level sending of keep-alive messages has been implemented (as it uses new ServerControlPacket and ClientControlPacket values, it is not compatible with previous client versions).
Previously tokio::net::TcpStream supported enabling OS-level TCP keep-alive, but this functionality was removed in tokio-rs/tokio#1767.
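A minimal loopback sketch of the application-level keep-alive idea (an assumed shape, not the actual ALVR code; the function and payload names are illustrative): each side periodically writes a small packet, so a dead TCP connection makes a write fail within seconds instead of going unnoticed.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};

// Hypothetical helper: send one keep-alive packet. In the PR this would be
// driven by a timer every 5 seconds, in both directions of the connection.
fn send_keepalive(stream: &mut TcpStream) -> std::io::Result<()> {
    stream.set_nodelay(true)?; // TCP_NODELAY: don't batch small packets
    stream.write_all(b"{ \"keepalive\": true }")
}

// Loopback demo: returns the bytes the peer receives.
fn demo() -> std::io::Result<Vec<u8>> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let mut client = TcpStream::connect(listener.local_addr()?)?;
    let (mut server_side, _) = listener.accept()?;
    send_keepalive(&mut client)?;
    drop(client); // close the socket so read_to_end terminates
    let mut received = Vec::new();
    server_side.read_to_end(&mut received)?;
    Ok(received)
}
```

The real implementation uses tokio's async TcpStream rather than the blocking std one shown here.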

* Add back functionality similar to the "On connect command-line" and "On disconnect command-line" settings (and implement it). It should NOT be possible to pass parameters to the script/executable.
* Added application-level sending of TCP packets every 5 seconds in both directions of the control connection (if the TCP connection gets broken, this should force it to be closed within seconds).
* Enabled TCP nodelay on both ends of the control connection.
* Run the disconnect script on SteamVR shutdown and pass the environment variable ACTION (with possible values "connect", "disconnect" and "shutdown", without quotes) to the script.
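A sketch of how such a script launch could look (the helper name is hypothetical, not the PR's actual code): the whole setting value is treated as a single executable path, with no argument splitting, and the context is passed only through the ACTION environment variable.

```rust
use std::process::{Command, Output};

// Hypothetical helper: run the user-configured script with ACTION set.
// Because the setting is passed as the program name only, no parameters
// can be injected through it.
fn run_script(command_line: &str, action: &str) -> std::io::Result<Output> {
    Command::new(command_line)
        .env("ACTION", action) // "connect", "disconnect" or "shutdown"
        .output()
}
```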
zmerp (Member) commented Jan 19, 2021

Thanks. The Rust code looks very good! Unfortunately I cannot accept the keepalive code as is, because you invalidated the packets' signature. I added some "reserved" fields and variants to allow limited extensibility without breaking backward compatibility. Basically, I want clients and servers on v14.X.X to be forward/backward compatible with one another. To be honest I'm making it a bigger deal than I should, but this was the policy we adopted for when to bump the major version number.

Fortunately, I see that you can simply replace NetworkKeepAlive with Reserved (containing an empty string).

Just so you know, in the branch audio-rewrite, I'm sending and receiving audio with the control socket (using the "ReservedBuffer"). On v15 it will be moved to a separate streaming socket.
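The compatibility scheme being described might look roughly like this (a sketch with approximate names, not the exact ALVR definitions): reserved variants give v14.x peers a place to carry payloads whose meaning is defined later, and unknown payloads are ignored rather than treated as protocol errors.

```rust
// Sketch of a control packet enum with reserved variants for forward
// compatibility within the v14.x series (names approximate).
#[derive(Debug, PartialEq)]
enum ServerControlPacket {
    Restarting,
    Shutdown,
    Reserved(String),        // free-form text payload for future use
    ReservedBuffer(Vec<u8>), // binary payload (e.g. interim audio data)
}

fn handle(packet: ServerControlPacket) -> &'static str {
    match packet {
        ServerControlPacket::Restarting => "restarting",
        ServerControlPacket::Shutdown => "shutdown",
        // An older peer that doesn't understand a reserved payload just
        // ignores it, so both directions stay compatible.
        ServerControlPacket::Reserved(_)
        | ServerControlPacket::ReservedBuffer(_) => "ignored",
    }
}
```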

DavisNT (Contributor, Author) commented Jan 20, 2021

@zarik5 Thanks! 🙂 Can I use Reserved("KeepAlive") instead of NetworkKeepAlive? I think that would be more descriptive (than Reserved("")) and would consume only a few more bytes of bandwidth (every 5 seconds).

Please don't use TCP for any realtime data (audio, video, controls)! As far as I know, all IP telephony applications use UDP for realtime data (audio and video); I just tested three very popular apps and confirmed this.
In case of temporary packet loss (e.g. a Wi-Fi glitch), TCP will buffer up the data and resend it all (or close the connection).

What should the behavior of audio be if connectivity is broken for 0.5 seconds? I would guess that the audio stream during the connectivity loss should simply be discarded and streaming should resume from the point in time where connectivity is restored; UDP does exactly that. TCP would either struggle and resend all the buffered data or break the connection. Resent data would need to be discarded on the client side (or a delay between audio and video would be introduced); a broken connection would need to be re-established quickly, as every dozen milliseconds adds to user frustration (and by default TCP sends connect retries after whole seconds). In either case, coping with TCP issues will add complexity to the application without any real benefit (at least none known to me). TCP might also increase audio delays after connectivity issues (while the restoration of connectivity is detected and data is resent).
TL;DR: As far as I know, TCP for realtime data will introduce lots of problems (the use of UDP by RTP (the IP telephony standard) and by major proprietary messaging apps in a way confirms this idea).
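The dropped-not-retransmitted behavior argued for above is what plain UDP sockets give by default. A minimal loopback sketch (illustrative only, unrelated to the ALVR sockets): each datagram stands alone, so a lost one is simply gone and the stream continues with the next.

```rust
use std::net::UdpSocket;

// Send one datagram over loopback and return what the receiver got.
// UDP does no buffering or retransmission: a datagram lost in transit
// would never be resent, and the next one would arrive independently.
fn udp_round_trip(payload: &[u8]) -> std::io::Result<Vec<u8>> {
    let receiver = UdpSocket::bind("127.0.0.1:0")?;
    let sender = UdpSocket::bind("127.0.0.1:0")?;
    sender.send_to(payload, receiver.local_addr()?)?;

    let mut buf = [0u8; 1500]; // one MTU-sized datagram
    let (len, _src) = receiver.recv_from(&mut buf)?;
    Ok(buf[..len].to_vec())
}
```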

What is the reason behind TCP for audio?

zmerp (Member) commented Jan 20, 2021

What is the reason behind TCP for audio?

Basically it's just a temporary necessity, since the current UDP streaming socket port is still occupied by the legacy code and I don't want to require the user to whitelist another port. Actually, working with TCP first gives me room to experiment with the actual data stream before worrying about the stream reconstruction algorithm. The first iteration of the audio rewrite will be able to deal with latency caused by queued packets, but it could be slow to recover. After that I will implement a statistics-based latency minimization model that can deal with latency spikes a lot faster and lower the overall latency considerably.

Can I use Reserved("KeepAlive") instead of NetworkKeepAlive?

That's ok, but keep in mind that the Reserved variants will also be shared with other things until v15, like haptics. To deal with this, I thought we should put the data in JSON form. So for the keepalive this could be something like Reserved("{ }") or Reserved("{ \"keepalive\": null }"). Actually, to keep the code robust we should still check for invalid JSON, but using valid JSON for "correct" packets seems cleaner. To encode data into JSON (and to check whether a string is valid JSON) I use serde_json.

DavisNT (Contributor, Author) commented Jan 21, 2021

I have removed NetworkKeepAlive and replaced it with Reserved("{ \"keepalive\": true }".into()) (true instead of null, as it looks more descriptive and takes the same number of bytes on the wire).
Does this PR look good? Or should I consolidate the 6 commits into 2: adding the ability to run scripts, and adding network keep-alives?

Regarding audio streaming:
I think opening a new port is not a problem for most users (as they have already opened other ports).
I strongly suggest using a new UDP port for the new (non-legacy) realtime data streams! As far as I know, virtually all relevant literature (e.g. https://ccrma.stanford.edu/workshops/nsd2012/lectures/03-RT-Audio-Streaming.pdf and https://gafferongames.com/post/udp_vs_tcp/ ) suggests using UDP for realtime data streams.
Unfortunately there is no way to reliably compensate for all the problems caused by TCP for realtime streams (and even a halfway decent compensation would be much more complicated than a custom audio-streaming protocol).

zmerp (Member) commented Jan 21, 2021

Ok, your code LGTM! Merging now.

About the audio, I'll do some tests. If I don't see problems then I will still ship it with TCP. Soon I'll add the option to switch to UDP.

zmerp merged commit c0a371f into alvr-org:master on Jan 21, 2021
zmerp (Member) commented Jan 22, 2021

While merging audio-rewrite with master I took the chance to fix some issues with your code. I removed the scripts from OpenvrConfig, since that holds settings needed specifically by OpenVR at startup, i.e. the settings that require a restart when changed (the idea is to remove most of them by the end of the Rust rewrite). I saw that you put a copy of the disconnection code at shutdown, so it could be executed twice. I moved it inside the StreamCloseGuard, which always runs when the control flow exits the connection pipeline (whether by proper disconnection or by execution canceling). If you want we can add a separate shutdown script, but I think it's unnecessary.
Also, I made the connect/disconnect scripts non-advanced, since right now the advanced settings are things like "don't change anything if you don't want to break everything" 🙂
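The StreamCloseGuard idea can be sketched as an RAII guard (a simplified illustration with approximate names, not the actual ALVR code): by running the disconnect logic in Drop, it fires exactly once no matter how the connection pipeline exits.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Stand-in for "the disconnect script ran" so the demo is observable.
static DISCONNECT_RAN: AtomicBool = AtomicBool::new(false);

// Guard held for the lifetime of a connection; Drop runs on every exit
// path (clean disconnect, early return, or task cancellation).
struct StreamCloseGuard;

impl Drop for StreamCloseGuard {
    fn drop(&mut self) {
        // In ALVR this would launch the user's "on disconnect" script.
        DISCONNECT_RAN.store(true, Ordering::SeqCst);
    }
}

fn connection_pipeline() {
    let _guard = StreamCloseGuard;
    // ... streaming work; however this function exits, _guard is dropped.
}
```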

Anyway, what do you think of the current Rust code? Do you think the architecture is good? Do you think using async was appropriate?
