Store a client session's Pings and Overall Delays for later analysis #756
Comments
Salut, we are a three-piece band, all living in Munich, but we have three different ISPs. We suffer from the "slow-down" effect. Our current setup is:
What we have tried:
Meanwhile I learned about buffer bloat (https://www.bufferbloat.net/projects/bloat/wiki/). We used http://www.dslreports.com/speedtest to check the additional delay introduced by buffer bloat. Results:
I believe that the high additional delay for uploads at my place and for downloads at the bass player's place leads to the observed difficulties. Our next try will be to set up a private Jamulus server on the m-net environment.

Edit 2020-12-08: A private server on the m-net environment is not possible because m-net uses DS-Lite (Dual-Stack Lite) and provides only public IPv6 addresses.

I believe that ping RTT alone is not sufficient to judge whether or not the overall delay will be small enough. Buffer bloat results from other users would be helpful.

Cheers

Edit: Links repaired, setup info updated (again).
The problem with buffering is really caused by the choices network operators make in balancing their traffic. "Bufferbloat" is mostly just polite language for overloaded routers. We can get into a cat-and-mouse game of shaping our traffic to minimize buffering at a router, and that game gets harder if the problematic router is several hops away. Managing the buffering at a router should not be a user issue.

I am experiencing this kind of problem in several areas of my city. Some neighborhoods have low-latency (<20 ms) access to my server. Other users have an additional 40+ ms. It is even worse if the path crosses more than one network operator. I am still gathering data to see whether this is an old-neighborhood vs. new-neighborhood issue or a poor-community vs. wealthy-community issue.

I am not in favor of putting network diagnostic tools into Jamulus (to play buffer bloat games). It would be appropriate to have tools to report/record when we have poor network service. This would be helpful in discussing service issues with the network operator. (N.B. these problems also cause poor video-conferencing experiences. However, most videoconferencing users just accept the problems as if they were a problem with the weather.)
Bonjour, Helmuth:
gene96817:
@pcar75 Thanks for the reply. Two things that you probably already know:

- The challenge with indicator lights: when we are trying to quickly indicate a change in status, we need a fast-on of the new status and a slow-off. It is easy to end up with fast changes that are hard to read (see the sketch below).
- Regarding buffer backup: if we detect that condition quickly, it is helpful to stop sending packets to let the backup clear. Yes, that will cause noise in the audio, but that noise will (should?) be better for musicians than having the sound become more and more delayed. (I don't know our internal machinery, so I can only point out this basic principle.)
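To make the fast-on/slow-off idea concrete, here is a minimal sketch (hypothetical class and names, not Jamulus code): the light latches a problem instantly but only clears after a quiet hold-off period, so even very short events stay readable.

```cpp
#include <chrono>

// Hypothetical indicator with fast-on / slow-off behaviour: a problem
// turns the light on instantly, but it only clears after a hold-off,
// so even very short events stay visible long enough to read.
class StatusLight {
public:
    explicit StatusLight(std::chrono::milliseconds holdOff) : holdOff_(holdOff) {}

    void report(bool problem) {
        if (problem)
            lastProblem_ = std::chrono::steady_clock::now();  // fast-on: latch now
    }

    bool isOn() const {  // slow-off: stays on until the hold-off elapses
        return std::chrono::steady_clock::now() - lastProblem_ < holdOff_;
    }

private:
    std::chrono::milliseconds holdOff_;
    std::chrono::steady_clock::time_point lastProblem_{};  // epoch => off at start
};
```

With, say, `StatusLight light{std::chrono::seconds{2}};`, even a single dropped packet would stay visible for two seconds.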
Same for me. Lack of documentation is currently a very weak point for me. As musicians and engineers we are used to reading block diagrams and signal flows, and at the moment the only Jamulus block diagram I am aware of is "Figure 2: Simplified Jamulus block diagram" in Volker's (otherwise very well structured and comprehensive) case study from 2015 (http://llcon.sourceforge.net/PerformingBandRehearsalsontheInternetWithJamulus.pdf). After reading the paper, I was (mistakenly) thinking that there is only one mix for all clients, which as of today is obviously wrong, as there is a "Personal Mix" per client. If possible I would like to contribute to creating a more current diagram, which should contain audio as well as control flows; if Volker reads this, maybe he can direct me on how to achieve this (and maybe also towards a somewhat more streamlined description in the wiki).

Edit 2020-12-06: I was not aware of Issue #64 - the diagrams there are exactly what I was missing. They need to be taken up into the wiki (if they are not already - sorry for my inability to check everything :-)
Highly appreciated. I would really be happy if, within this community, we were able to drill down into latency problems as far as possible. In our trio we have only three different views and access methods for researching the problem, but in the community there are tons of other possibilities. Maybe this issue is not the right place to discuss everything related to latency, but perhaps we can make a start. My approach is to provide the community with the info I have, hoping that other colleagues can add their knowledge. In our case I hope there are other musicians on the cable network (VF/KDG) who can give their input.
... which for me is no wonder, because in verbal communication you can easily stop and ask for repetition or clarification - impossible when playing music in real time. Last but not least, my appeal to the responsible persons: please do not overload the Jamulus software and GUI - the more functions (maybe relevant only for a small number of users) are introduced, the higher the risk of bugs due to the number of platforms to be supported. As already said, there are tools outside this platform to investigate problems. Cheers
There was a previous discussion on diagrams at #64, but the tension between detailed techie flows and easy-to-understand pics for non-techies was never resolved, so they were not included in the wiki/documentation.
Reading #758, I need to say that for our trio, with the given ISP situation (which we cannot currently change), JamKazam was not working well. For us, latency, not bandwidth, is the bottleneck! To me the only solution appears to be using distributed computing power (i.e. audio compression/decompression) to overcome the latency problems, and therefore I support the Jamulus approach.

@WolfganP Thanks for referring me to #64, I will study it. As an electrical engineer/drummer/sound engineer/software developer and project manager (retired!), I am convinced that the techie/non-techie conflict can be solved if the doc gives the relevant information to everyone - most likely one simple sheet is not enough - linking should make it happen.
I found the diagrams in #64 useful for a very, very high-level view of how the pieces relate to each other. They don't have enough detail to give insights for troubleshooting problems. If the coders could provide the very techie flow, I am sure there are others (as in this thread) who could help provide higher-level abstractions for the technical users.
@pcar75 This discussion about bufferbloat has been sitting in my mind. While the referenced article is pragmatic, it is the wrong way to deal with buffer backups (which they call the bloating of a buffer). I would be interested in a discussion about buffer backups in the client-server path and in creating a mechanism for quenching the buffer backup. If this is interesting, we could design an enhancement to the data stream to detect and quench the backup; a rough sketch follows below. Assuming the Internet path is only momentarily backed up, this would accelerate recovery and minimize the interruption of the audio stream. (This is the best I can contribute to this topic. I am a protocol guy, not a coder.)
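As one possible shape for such a mechanism - purely a sketch, not the existing Jamulus protocol - the receiver could watch the one-way delay derived from sender timestamps and signal a quench when it grows well past its observed minimum (the same idea delay-based schemes such as LEDBAT use). All names and the threshold are assumptions:

```cpp
#include <cstdint>

// Hypothetical backup detector: audio packets carry a sender timestamp.
// If the one-way delay keeps growing above its observed minimum, a queue
// is filling somewhere on the path; ask the sender to back off briefly.
class BackupDetector {
public:
    // Returns true when a quench message should be sent back to the sender.
    bool onPacket(int64_t senderMs, int64_t receiverMs) {
        const int64_t delay = receiverMs - senderMs;  // includes clock offset
        if (!haveBase_ || delay < baseDelay_) {
            baseDelay_ = delay;  // minimum ever seen ~ "queues empty" baseline
            haveBase_ = true;
        }
        return (delay - baseDelay_) > kQuenchThresholdMs;
    }

private:
    static constexpr int64_t kQuenchThresholdMs = 30;  // assumed threshold
    int64_t baseDelay_ = 0;
    bool haveBase_ = false;
};
```

Note that the sender and receiver clocks need not be synchronized: the constant offset is absorbed into the baseline, and only growth above the minimum matters.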
We could add a more technical page on jamulus.io, maybe in the upcoming knowledge base. In fact, I think we lack technical documentation. Please open an issue on https://github.com/jamulussoftware/jamuluswebsite/issues
@ann0see: To whom do you address this? If to me, OK, I am willing to open an issue; please help me with the title, thx. And what about the "upcoming knowledge base"? What is supposed to be found there with respect to the current wiki?
Yes, you should open an issue with the feature request template. Fill in the template and describe that you would like to have more detailed technical information documented somewhere for troubleshooting purposes (at least that's what I read from skimming through your comments). This information would not necessarily be for the end user but for people who want or need to understand the technical details. Therefore we'd not include it in the main wiki but on an external page in an upcoming blog-like section on jamulus.io
Last night a group of us had a difficult time because of delay. We changed our own settings to make matters the best possible, and we played, because that is what musicians do (regardless of our superb data analysis, networking and interprocess communication skills).
@DavidSavinkoff There have been several calls (in different issues) for telemetry and metrics to be shown by Jamulus. What I'm unclear on though is how that would actually help anyone improve their sound. Essentially you can only:
What would it matter if Jamulus told you 100 things about what was happening? You can only take action on those 4 things, and you may well not be able to change 3 and 4. I dunno. This seems an obvious point to me, but maybe I'm wrong.
@gilgongo Thanks for correcting the title :-) I think the intention is not to receive a "live" indicator of problems which can then be solved immediately, but a kind of summary of what happened during a session. Of course the basic settings need to be correct beforehand and the essential hints (e.g. no WiFi) observed. In our rehearsal last night we encountered several very ugly distortions in the bass channel, which we had to disregard/tolerate so as not to break off the rehearsal. We were concentrating on our music, unable to check the visible indicators in the Jamulus GUI in parallel. Had there been a summary as proposed, we would have been able to check for the root cause after finishing a piece (input level overload? buffer underrun? CPU exhausted? jitter in the overall delay?) and try to remove the problem. See also #150. So the point is not to record 100 parameters but only the few known to be essential for good audio transport (a sketch of what such a summary might hold follows below). If it turns out that you cannot change them, then you need to live with the problem anyway, but at least you know what happened. Maybe this approach is only relevant for "techie" people who are used to analyzing and removing problems...
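For illustration, such a summary could be just a handful of counters dumped after disconnecting - a sketch with assumed names, not actual Jamulus internals:

```cpp
#include <cstdio>

// Hypothetical per-session tallies of the few events known to matter,
// printed once after disconnecting instead of watched live while playing.
struct SessionSummary {
    unsigned inputOverloads  = 0;  // input level hit the ceiling
    unsigned bufferUnderruns = 0;  // jitter buffer ran empty
    unsigned bufferOverruns  = 0;  // jitter buffer dropped a packet
    unsigned cpuOverloads    = 0;  // audio processing missed its deadline

    void print() const {
        std::printf("overloads=%u underruns=%u overruns=%u cpu=%u\n",
                    inputOverloads, bufferUnderruns, bufferOverruns, cpuOverloads);
    }
};
```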
OK so a history of events during a session (so a bit like what Jack gives you with Xruns and stuff, together with ping times).
OK, so let's say you had a time-stamped record of all your client's under/overruns, CPU usage %, delay and ping status. What would you actually do as a result, other than adjust buffer settings (in Jamulus and/or your drivers) or look for CPU- or bandwidth-hogging things? The latter you don't really need a log for, do you? Fundamentally, there are only two things you can change to see if things get better (assuming you are on Ethernet and have a decent interface). So my point is: if you hear sound problems, why do you need any data or flashing lights at all to take action?
Simply because when we play we cannot monitor computers. If there are dropouts/distortions etc., they need to be tolerated. Afterwards, research into the root cause can be started - of course while everybody is playing, but then it is not the music but the technical environment that is in focus.
Sorry to labour the point, but what would be an example of a "root cause" for which you would do anything other than try adjusting the jitter buffers or looking for bandwidth/CPU hogs? Surely pretty much all the information you had would end up with you taking one of those two actions. So why not just cut out all the "research" and take that action? It's not as if you have any real choices. Or, to put it another way: do you think there is a pattern between what you hear in your signal (pops? glitches? dropouts?) and what you would see in an event log that would make you take action different from what you do today? Maybe it's because I'm not an engineer - but I don't understand how more data in this context is actually worth having.
Yes, I think there could be a pattern. If, e.g., during an actual rehearsal I at times play "louder than normal" and this causes input level overload which results in audio problems - why would I need to fiddle with buffer settings? Having more data simply saves time. And yes, I admit that for engineers, trial and error to eliminate problems is horror.
I see (would the log need to record dB level as well, then?). So maybe it's just a mindset thing. Analysing data to establish a pattern in order to decide whether I can do anything seems less satisfying to me than doing something and seeing if it makes a difference - particularly if there are only a couple of things available to do. But I can see that others think differently on that. And if all this data is safely recorded in a log that people like me can ignore, that's fine :-)
Thanks for bringing this thread back to its primary focus. Most of you may have great or very good overall delays, but my experience with our drummer (hosting the private server) here in Montréal, QC, Canada shows an overall delay in the 60-75 ms range, with 'snap, crackle, pop'. For every test we do, it is mostly a subjective matter to gauge whether we made an improvement, however small. That is why I suggested this modification to the software, so we can test and gauge with more objectivity. Besides the obvious Jamulus controls and basic environment conditions, there is a lot we can consider. Examples are:
Things I've already tested - the Jamulus client parameters under the following conditions:
Our drummer tested different machine configurations on his LAN with me:
I still think my initial suggestion could help people who have less-than-great delay and sound, and who are willing and able to investigate a little...
If it is possible to determine and record the input level [dB] then fine, but for me it would be sufficient to record the event of an overload. |
@gilgongo mentioned: My answer is that I believe there are diminishing returns on the effort to improve delays. I'm sure that the documentation will eventually give all the advice that can be given on improving delays, and that Jamulus will eventually address all reasonable possibilities for reducing delays. Once all that is done, what about the telemetry functions? (I hope the answer would be: what telemetry functions?) I've been trying to reduce delays myself, and here are a few things I have discovered on Linux:
Please also have a look at this input: #781 (comment) |
Sorry to be catching up with the discussion. I am 10 hours behind Europe. If nothing else, the pings and delays we collect can provide quantitative feedback on changes we make to diagnose bad audio problems. |
If I have understood the "buffer bloat" problem correctly: it is about buffer overflow but without packet loss - the buffer size is simply increased shortly before it would overflow. This causes high jitter in the RTT.
In normal buffer operation, the buffer is always partially filled. Then, when the normal conditions vary too much, there is buffer underflow (the buffer went empty) or buffer overflow, where one or more packets are lost. To prevent underflow, we introduce a little delay to keep the buffer from going empty. The normal fluctuation of packets in the buffer is what we use to compensate for jitter. The cause of jitter is momentary congestion in the data path.

Because of this, I feel the imagery of buffer bloat suggests a possible misconception about the cause and effect of delay and jitter. A buffer can only help compensate for jitter in one direction of the traffic. For a discussion of jitter management, we have to think of RTT as a forward jitter problem and a return jitter problem; path congestion does not have to be symmetric.

When we think about traffic flow through a multihop path in the network, if some router buffers begin to fill up and do not quickly drain, then the image of buffer bloat makes more sense, because what should have been normal momentary buffering is now persistent. This is relatively benign for TCP traffic (because of TCP flow control) but bad for time-sensitive UDP traffic, because late delivery of a UDP packet is almost as bad as dropping a packet. In fact, with buffer backup in the network path, dropping UDP packets is usually a good strategy for flushing the buffer and getting caught up with normal delivery times.
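For a concrete, standard definition of one-way jitter, RTP's estimator from RFC 3550 smooths the variation in packet transit times. A minimal sketch (the class name and timestamp plumbing are assumptions):

```cpp
#include <cmath>

// One-way interarrival jitter as defined in RFC 3550:
// D = (recv_j - recv_i) - (send_j - send_i);  J += (|D| - J) / 16
class JitterEstimator {
public:
    void onPacket(double sendTime, double recvTime) {
        const double transit = recvTime - sendTime;
        if (haveLast_) {
            const double d = std::abs(transit - lastTransit_);
            jitter_ += (d - jitter_) / 16.0;  // smoothed running estimate
        }
        lastTransit_ = transit;
        haveLast_ = true;
    }
    double jitter() const { return jitter_; }  // same unit as the timestamps

private:
    double jitter_ = 0.0;
    double lastTransit_ = 0.0;
    bool haveLast_ = false;
};
```

Since only differences of transit times enter D, the unknown clock offset between sender and receiver cancels out - which is exactly why forward and return jitter must be measured separately.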
I just found more information on buffer bloat (3 decades of computer science research). |
Maybe I have not been clear/detailed enough (it was a bit late, sorry :-). This is exactly what I wanted to refer to, not the Jamulus implementation. http://cacm.acm.org/magazines/2012/1/144810-bufferbloat/fulltext (linked in the Linux article given above) is worth reading. I have already provided links to the Bufferbloat project (and speedtest) above (#756 (comment)), and I have stated my opinion there that the delay problems we encounter can most likely not be solved by using a public server, due to the asymmetric delay and buffer bloat (upstream vs. downstream) - maybe, however, with a private server at the ISP customer encountering the lowest delay and buffer bloat. But currently I do not see how this is supposed to be feasible, due to DS-Lite.
100% agreed, but I still believe that logging a few key parameters to a file, at a place where it doesn't annoy people who are not interested in it, could help analyze problems occurring during a live session. I think the logging should be enabled only on demand, and the file location should be configurable somewhere in the settings panel.
While what @DavidSavinkoff has pointed out is true, I agree with @drummer1154 about collecting some data about the situation as seen by Jamulus. When I am in musician mode, I don't have any network diagnostic equipment running. If some bad behavior occurs, it is very valuable to have a clue about what Jamulus was seeing. It is also good to have the data if a less technical user reports a problem. It is really hard to deal with intermittent problems, and the solution is not for users to be constantly running diagnostic tools to capture an intermittent problem. Also, when I am in musician mode, I don't want to have all the diagnostic tools around. It is my belief that a good application should provide data about why it is beginning to fail or why it failed. Nothing is more frustrating than an application misbehaving without clues. In those cases, I cannot distinguish between a buggy app and a bad environment.
I really like the idea of having live insight & logs about ping, buffer numbers and overall delays.
I think this would be a great feature on the server side too - like seeing in the list which client has problems or a bad setup. But that would be a separate ticket, I presume.
The beauty of the client-side facility is that you would have useful data regardless of which server you are connected to.
Okay, for the client side it wouldn't be too hard to:
If a line had an average of 1000 chars, logging two lines per second gives ~7.2 MB per hour. Do you really need an on/off button plus a size limit for that? What parameters are you interested in? Ping & overall delay should be very easy to get in that context. Would that be a first version of interest?
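Something like this minimal sketch is what I have in mind (hypothetical names, plain standard C++ rather than the Qt classes Jamulus actually uses; assumes the client already knows its current ping and overall delay in ms):

```cpp
#include <chrono>
#include <cstddef>
#include <fstream>
#include <string>

// Hypothetical on-demand CSV logger: one row per sample, stops at a
// size cap so it can never fill the disk during a long session.
class DelayLogger {
public:
    DelayLogger(const std::string& path, std::size_t maxBytes)
        : out_(path, std::ios::app), maxBytes_(maxBytes) {
        out_ << "unix_ms,ping_ms,overall_delay_ms\n";  // header per open; fine for a sketch
    }

    // Call e.g. twice per second from wherever ping/overall delay are known.
    void log(int pingMs, int overallDelayMs) {
        if (!out_ || bytes_ >= maxBytes_)
            return;  // cap reached (or file unwritable): silently stop
        const auto now = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        out_ << now << ',' << pingMs << ',' << overallDelayMs << '\n';
        bytes_ += 32;  // rough per-row size; exact accounting isn't needed here
    }

private:
    std::ofstream out_;
    std::size_t maxBytes_;
    std::size_t bytes_ = 0;
};
```

At roughly 30 bytes per row and two rows per second, that is closer to 0.2 MB per hour than 7.2, so the size cap is mostly a safety net for forgotten sessions.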
Thanks for the interest. Personally, I would like to capture the ping, the overall delay, the jitter buffer auto on/off status, and its local and server values, with a timestamp.
Oops, sorry. I wanted to close my comment, not the issue...
See also #961 |
Bonjour,
I would like to suggest the development of a facility on the client side to store the ping times and overall delays into a file (text or CSV) while connected, so it would be possible later to analyse those results of server and client setups with tools such as a spreadsheet application, for data or graph comparison. A possible file layout is sketched below.
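For example, a layout like the following would load directly into any spreadsheet (the column names and values are only a suggestion):

```
timestamp,ping_ms,overall_delay_ms
2020-12-01T20:15:03.250,12,48
2020-12-01T20:15:03.750,14,51
```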
Hints:
Context:
We are currently running a series of tests designed to tweak performance and quality of our sessions, and it is difficult to play music and try to average the values of Pings and Overall Delays to help gauge the benefits.
Our current setup is:
Our internet speeds are similar: ping 10-13 ms, jitter 2-4 ms, 33+ Mbps download and 10+ Mbps upload.
Besides testing for the best hours/days to hold the session, due to the vagaries of regional internet loads, we are ...
Thanks for considering this suggestion.
Merci