Streamer2 #967

Closed · wants to merge 23 commits

Conversation

dingodoppelt (Contributor)

This code calls ffmpeg to stream a stereo mix directly from the server, either to a streaming service such as Icecast or to a file.

@pljones (Collaborator) left a comment

OK, it will need to be made a conditional compile so it is only built when not on WIN32:

#ifndef WIN32
....code....
#endif

throughout.

The .qm files... Can you try git reset HEAD~ whatever.qm and see if that fixes it?

void CJamStreamer::OnStarted() {
    // create a pipe to ffmpeg called "pipeout" to be able to write the PCM data to it
    pcm = 0; // it is necessary to initialise pcm
    QString command = "ffmpeg -y -f s16le -ar 48000 -ac 2 -i - " + strStreamDest;
@hoffie (Member) commented on Feb 7, 2021

It might be worth considering hardening the ffmpeg call, since it executes a command and interpolates a variable into the invocation. I can't find a popen() alternative which takes an array of arguments instead of a string (i.e. something that relates to popen() the way exec*() relates to system()). Unless I'm missing something, the only remaining hardening measure would be proper quoting + escaping of the variable. One might argue that the variable is currently supposed to come from a trusted context (the command line), but things can change in unexpected ways. Improving this would also ensure that special characters will not break things, even in well-intentioned environments.

Contributor

Perhaps use QProcess?

Member

Perhaps use QProcess?

Good idea! This should solve the potential security impact and might also get us cross-platform support for free. :)

Collaborator

Hm. I'm thinking you're going to want to know the path to the ffmpeg binary, ideally from either a setting or a command line option. Better not to just launch whatever executable happens to get picked up when "ffmpeg" is run.

@dingodoppelt (Contributor, Author) commented on Feb 9, 2021

the only remaining hardening measure would be proper quoting + escaping of the variable

How would I go about doing this? At the moment I just read the output part of the ffmpeg command line (which can be anything that ffmpeg understands, like "-f mp3 icecast://source::/", or something completely different; I don't know in advance how many arguments will be given in different use cases).

Perhaps use QProcess?

Thanks for the advice! I looked into it but can't see how I could alter my code (I'm not a programmer ;) to do the same as before using QProcess. (popen creates a pipe which I can then stream the PCM data to; does QProcess offer this?) Any hints on what I should be looking for?
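
For reference, a minimal sketch of how QProcess could fill the role popen() plays here: the program and its arguments are passed as a list (no shell is involved, so no quoting or escaping is needed), and the process object is itself a QIODevice whose write() goes to the child's stdin, which is what the PCM data needs. This is an illustrative example only, not code from this PR; the helper name and the naive space splitting are made up, and Qt::SkipEmptyParts assumes Qt 5.14 or newer.

    // Illustrative sketch only (not the PR's actual code): start ffmpeg via QProcess
    // with an argument list and write raw PCM to its stdin, much like popen()'s pipe.
    #include <QProcess>
    #include <QStringList>

    void StartStreamSketch(const QString& strStreamDest, const QByteArray& pcmBlock) // hypothetical helper
    {
        // fixed raw-PCM input arguments; strStreamDest holds the user-supplied output part
        QStringList args{ "-y", "-f", "s16le", "-ar", "48000", "-ac", "2", "-i", "-" };
        args << strStreamDest.split(' ', Qt::SkipEmptyParts); // naive space splitting, for illustration only

        QProcess ffmpeg;
        ffmpeg.start("ffmpeg", args); // no shell, so no quoting/escaping of strStreamDest is required
        if (!ffmpeg.waitForStarted())
        {
            return; // ffmpeg not found or failed to start
        }

        ffmpeg.write(pcmBlock);     // QProcess is a QIODevice: this goes to ffmpeg's stdin
        ffmpeg.closeWriteChannel(); // real code would keep writing one block per audio frame instead
        ffmpeg.waitForFinished();
    }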

Better not to just launch whatever executable happens to get picked up when "ffmpeg" is run.

I've thought about that before and came to the conclusion that it doesn't matter for security, since you can call anything named "ffmpeg" anyway. Allowing the user to specify which program to call doesn't make it more secure, in my opinion, since you'd still be able to call anything you want. It would only be useful if you wanted to call a particular build of ffmpeg besides the one already installed system-wide, but I couldn't think of a use case where you would want to do that.

@dingodoppelt (Contributor, Author)

Well, it doesn't crash for me

It only does for me when I connect a client, but yes, it seems ffmpeg never gets started.

Maybe we should just stick with the popen version...

I think so, too

Member

I'm not seeing a segfault either, but I have found three problems which I have fixed in a PR to @npostavs' branch against this PR. :)

npostavs#1

@dingodoppelt (Contributor, Author)

Awesome! Thanks a lot for your work :)
Would that mean I can get rid of the conditional compile for WIN32 as well? I can't check since I have no Windows machine here.

Member

Would that mean I can get rid of the conditional compile for WIN32 as well?

I think so, yes, but one should test.

I can't check since I have no Windows machine here.

Neither can I (Linux-only here). A first smoke test would probably be removing the ifdefs; the CodeQL CI build should catch whether it still compiles. Someone should still run a real test with ffmpeg if possible.

@dingodoppelt (Contributor, Author)

It doesn't compile on Windows whatsoever. I've put the conditional compile back in.

@dingodoppelt (Contributor, Author)

The .qm files... Can you try git reset HEAD~ whatever.qm and see if that fixes it?

That fixed it, thanks!

@pljones (Collaborator) commented on Feb 10, 2021

Is this ready for final review?

hoffie and others added 4 commits February 13, 2021 00:28
Signed-off-by: Christian Hoffmann <[email protected]>
- QProcess automatically populates argv[0]. Explicitly passing
  "ffmpeg" as first argument will end up as argv[1] and will break
  ffmpeg.
- Increase ffmpeg log level to "error" so that startup errors are
  really output.
- Add support for space-delimited multiple parameters for the
  --streamto argument.

Signed-off-by: Christian Hoffmann <[email protected]>
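
The first and third points above can be illustrated with a small sketch (hypothetical helper, not the code added by these commits; Qt::SkipEmptyParts again assumes Qt 5.14 or newer): QProcess supplies the program name as argv[0] itself, so the argument list contains only the actual ffmpeg options, and a multi-word --streamto value is split into separate list entries before being appended.

    // Hedged sketch of the argument handling described in the commit message above.
    #include <QProcess>
    #include <QStringList>

    QProcess* StartFfmpegSketch(const QString& strStreamto, QObject* pParent) // hypothetical helper
    {
        QStringList args{ "-loglevel", "error",                // make startup errors visible
                          "-f", "s16le", "-ar", "48000", "-ac", "2", "-i", "-" };
        args << strStreamto.split(' ', Qt::SkipEmptyParts);    // space-delimited extra parameters

        QProcess* pFfmpeg = new QProcess(pParent);             // parent owns the process object
        // Do NOT put "ffmpeg" into args: QProcess already passes the program name as argv[0];
        // duplicating it would reach ffmpeg as argv[1] and break the invocation.
        pFfmpeg->start("ffmpeg", args);
        return pFfmpeg;
    }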
@dingodoppelt (Contributor, Author)

Do you have a specific way to reproduce this? Maybe even without Icecast, but with local files or anything one could easily replicate?

@hoffie: I can't think of how I could replicate the Icecast issue with a local file.
With an Icecast server running, you just launch the Jamulus server and connect a client so the stream starts. Then connect to the stream to listen. You'll find that the first time you connect you'll hear sound. Then disconnect from the stream and rejoin, and you'll find the stream won't start anymore; it only works once. To listen to the stream again you either disconnect all clients from the server (which closes ffmpeg when the server goes idle and restarts ffmpeg when the first client connects again) or restart the server altogether.

@pljones (Collaborator) commented on Mar 12, 2021

So you're saying the setup is:

  • a running Jamulus instance, with a potential session stream going to a per-session ffmpeg instance
  • a running Icecast instance listening for audio data and client connections
  • an ffmpeg instance configured to send its input from Jamulus on to the Icecast instance

ffmpeg will only be running during a session. Jamulus and Icecast are running all the time.

(1) Right so far?

Then a Jamulus client connects to the Jamulus server and ffmpeg connects to Icecast.

Then an Icecast client connects to the Icecast server.

At this point the Icecast client can hear the Jamulus session.

(2) Right so far?

The Icecast client disconnects.

Another Icecast client connects but now cannot hear audio.

(3) Right so far?

If that's all correct, I'd say "check the logs" to see what's reported as going on.

@hoffie (Member) commented on Mar 18, 2021

@dingodoppelt In addition to what @pljones asked, can you confirm that this behavior does not occur when using some other, non-Jamulus icecast client?

I can't really think of anything which would influence this from within Jamulus or the QProcess vs. no-QProcess handling.

@dingodoppelt (Contributor, Author)

can you confirm that this behavior does not occur when using some other, non-Jamulus icecast client?

@hoffie: yes, my code from the original PR works.

@hoffie (Member) commented on Mar 20, 2021

I have just installed icecast and tried to use this feature (10400f5, the latest state of this PR). After some fiddling with icecast config/authentication, it worked, even with multiple clients on icecast and/or on Jamulus. So, I'm not able to reproduce the issue you are describing.

Can you share some more details (icecast/ffmpeg version and config)?

$ icecast -v
Icecast 2.4.4

$ ffmpeg -v
ffmpeg version n4.3.2

I used the default icecast config and configured a single listener mountpoint.
I used ./Jamulus -s -n --streamto "-f mp3 icecast://source:[email protected]:8000/stream"

Some further things I've observed:

  • --streamto does not fail when Jamulus is run in client mode, yet it does not work there. I think we should fail in src/main.cpp.
  • When ffmpeg crashes (e.g. due to an icecast server restart), streaming will stop with no way to restart it besides re-connecting all clients and/or restarting the Jamulus server. Not sure if/how this could be improved. We could think about restarting on failure, but this adds more complication and risks crash loops.
  • The current code still uses the ffmpeg-focused approach which requires command line splitting. I think the plan was to go for the generic approach and add an appropriate example script (with an ffmpeg invocation example). This will still have to be done.
  • We should find out why it worked for me and did not for you.

@pljones (Collaborator) commented on Mar 21, 2021

  • --streamto does not fail when Jamulus is run in client mode, yet it does not work there. I think we should fail in src/main.cpp.
  • The current code still uses the ffmpeg-focused approach which requires command line splitting. I think the plan was to go for the generic approach and add an appropriate example script (with an ffmpeg invocation example). This will still have to be done.
  • We should find out why it worked for me and did not for you.

👍

  • When ffmpeg crashes (e.g. due to an icecast server restart), streaming will stop with no way to restart it besides re-connecting all clients and/or restarting the Jamulus server. Not sure if/how this could be improved. We could think about restarting on failure, but this adds more complication and risks crash loops.

Personally, I'd leave this out for now. I need to get a proposal written for better server management. For example, at the moment, the SIGUSR1/2 for the jam recorder isn't portable. If the jam server runs out of space, it simply raises an exception to alert the server operator (that is, it deliberately crashes the server). Neither is particularly nice.

So I'm hoping to have something like a separate "management" thread that non-core services can communicate with. That should enable better and deeper status monitoring and control without impacting the main server too much.

(I'm hoping a similar approach can be taken with the client, too, to allow a nicer "headless" mode there.)

(I live in hope that my day-job workload will ease a bit so I have some brain-space left to work out the details, along with everything else!)

@olibac commented on Mar 25, 2021

Would it be possible to redirect every client channel arriving at the server to ReaRoute? (This is related to #1305, which @ann0see mentioned a few days ago.)
The same way every channel's output is redirected to a file when recording, maybe it could be redirected to ReaRoute (or any virtual cable) so you can use Reaper (or any DAW) to mix every channel, add effects (EQ, compressor, etc.) and deliver the final stereo mix to a streaming service, or record the live session with a better-sounding mix.
I think it could be useful for audio techs, as they would have more control over the "tracks" in order to improve the quality of the final mix.

@pljones (Collaborator) commented on Mar 25, 2021

The client can direct both of its two outputs (i.e. the left and right channels) wherever you want.

It only has the stereo mix. That's not going to change in the lifetime of "Jamulus" as "Jamulus". If you need a multi-channel, locally mixed solution, you'll need to look elsewhere.

@olibac commented on Mar 25, 2021

@pljones Thank you for the response. @ann0see explained to me that the mixing is done on the server and the client only has the stereo mix; that's why I am asking if it can be done on the server (not the client). The same way every channel on the server is redirected to a file when recording audio, maybe it could be redirected to a virtual cable in order to be able to mix in a DAW.

@pljones (Collaborator) commented on Mar 25, 2021

The server isn't attached to any audio hardware currently. Doing so would slow it down, which would mean the number of clients a single server could support would drop. The jam recorder outputs each audio file unsynchronised. If you're trying to get multichannel realtime audio for mixing externally, you need it synchronised, which means keeping in time with an external clock.

Again, that's unlikely to happen. I've already said, someone could take the existing jam recorder quite easily and replace each file with a stream. What each stream was attached to would then be up to the user. But they wouldn't be "in time with each other".

@olibac commented on Mar 25, 2021

Thanks for the clarification about synchronization. And thanks for the efforts too.

@WolfganP

I did some tests with Reastream VST plugins on the server side (see #146 (comment) and subsequent comments) but never had the time to make a workable prototype with the server. (I even went as far as developing a Wireshark dissector for the protocol, https://github.com/WolfganP/reastream-wireshark, but that was the end of it due to other time-consuming tasks.)

@hoffie (Member) commented on Apr 29, 2021

@dingodoppelt Are you still planning to work on this? If not, would you be ok with someone else taking over?

@dingodoppelt (Contributor, Author)

@hoffie
I actually worked on the streamer by introducing a protocol message to turn streaming on and off on the fly, but that commit went into my old code since I have problems getting the changes made to this pull request to work. I am using my streaming code on about 20 servers and have combined it with a feature to chat with the outside world, for which I had to introduce a second protocol message. That is not something I'd like to have merged, since it is too much of a "special use case" and hacked together to have something working for me. I'd rather see something like an API (accessible through a management thread) and dedicated protocol messages for controlling things.
I really think that extensions like this are better integrated into something like @pljones proposed:

So I'm hoping to have something like a separate "management" thread that non-core services can communicate with. That should enable better and deeper status monitoring and control without impacting the main server too much.

I thought that this PR might be better off being held back until something like a management thread is introduced; that would make the stream, or a chat with the outside world, modules or plugins. I don't really know if it is wise to merge an in-between state, since ffmpeg would have to go first (and I'm still not a programmer ;)
We should first make clear what is really necessary to have a stream, and possible future extensions, working without messing up the code too much.

@hoffie (Member) commented on May 10, 2021

Thanks for your reply. In my opinion, it should be possible to bring this into a mergeable state with little effort. #967 (comment) outlines what should be done, IMO.

I don't know how much demand for this feature there is, but I think we should either move this forward for the next+1 (3.9.0) release or close it for now.

While I agree with @pljones that having such a management interface would be really nice, I doubt that it will be as simple as this PR, so it might take some more time.

What do people think? Is there demand? (Maybe show via a thumbs-up reaction on this comment)

If there is demand and you (@dingodoppelt) don't plan to move this any further, then someone else could possibly step in and try to finish it?

@ann0see (Member) commented on Sep 5, 2021

I think this could go to a feature branch and be discussed later. A management interface etc. could be created - based on #1975

dtinth added a commit to dtinth/jamulus that referenced this pull request Sep 17, 2021
A stereo mix server is a local TCP server that outputs the mixed sound as s16le stereo @ 48000 Hz.

The output can be streamed to ffmpeg for sending to (e.g.) an Icecast server:

     nc localhost $STEREO_MIX_PORT \
       | ffmpeg -f s16le -ar 48000 -ac 2 -i - \
                -acodec libmp3lame -ab 128k -f mp3 $ICECAST_URL

The implementation for `CServer::MixStream` comes from the Streamer2 PR:
jamulussoftware#967

By making the stream system pull-based (consumers ask Jamulus for data),
rather than push-based (Jamulus sends data to a streaming server), the
implementation is greatly simplified.

* No need to implement ways to toggle streaming on/off, as streaming
  starts when a client connects and ends when the client disconnects.
* No need to deal with launching, monitoring, and killing sub-processes.
* Works with all OSes.

Co-authored-by: dingodoppelt <[email protected]>
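
Purely for illustration, here is a hypothetical sketch of the pull-based idea described in that commit message (invented class and method names, not dtinth's actual implementation): a small QTcpServer hands each mixed s16le block to whichever consumers are currently connected, so streaming starts and stops with the consumer connection and no sub-process has to be managed.

    // Hypothetical sketch of the pull-based stereo mix server idea (invented names,
    // not the referenced commit): consumers such as "nc | ffmpeg" connect and
    // receive the raw PCM blocks.
    #include <QHostAddress>
    #include <QList>
    #include <QTcpServer>
    #include <QTcpSocket>

    class CStereoMixServerSketch : public QTcpServer
    {
    public:
        explicit CStereoMixServerSketch(quint16 iPort) { listen(QHostAddress::LocalHost, iPort); }

        // called from the mixing loop with one block of interleaved s16le stereo samples @ 48 kHz
        void PushMixBlock(const QByteArray& mixBlock)
        {
            for (QTcpSocket* pSocket : consumers)
            {
                pSocket->write(mixBlock); // consumers pull the mix; nothing happens if none are connected
            }
        }

    protected:
        void incomingConnection(qintptr sockDescriptor) override
        {
            auto* pSocket = new QTcpSocket(this);
            pSocket->setSocketDescriptor(sockDescriptor);
            consumers.append(pSocket);
            connect(pSocket, &QTcpSocket::disconnected, this, [this, pSocket]() {
                consumers.removeAll(pSocket); // streaming simply stops when the consumer leaves
                pSocket->deleteLater();
            });
        }

    private:
        QList<QTcpSocket*> consumers;
    };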
@ann0see (Member) commented on Jan 2, 2022

@dingodoppelt I have now pushed it to a feature branch (https://github.com/jamulussoftware/jamulus/tree/feature/ffmpeg-streamer2) and am therefore closing this. I think @dtinth is working on something related to this.

@ann0see closed this on Jan 2, 2022
@ann0see (Member) commented on Mar 4, 2022

As soon as the management interface (JSON-RPC) is merged, the feature branch could be revived: https://github.com/jamulussoftware/jamulus/blob/feature/ffmpeg-streamer2/README.md
