This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


Store a client session's Pings and Overall Delays for later analysis #756

Closed
pcar75 opened this issue Dec 3, 2020 · 38 comments

@pcar75

pcar75 commented Dec 3, 2020

Bonjour,
I would like to suggest a client-side facility to store the Ping times and Overall Delays to a file (text or CSV) while connected, so that it would later be possible to analyse those results across server and client setups with tools such as a spreadsheet application, for data or graph comparison.

Hints :

  • Add a path-filename to store the data into.
  • Add an On/Off button to activate, or a Clear button to deactivate, the path-filename (à la Server Recording fashion)
  • Add a time-stamp every X refreshes stored
  • Specify a maximum time to store data so files don't grow indefinitely
  • If the client GUI refresh rates (about 0.5 seconds or less in my experience) are not appropriate for storing data (I/O load, performance), specify a slower refresh rate for storing, or possibly store to a memory structure (e.g. a named pipe) and write to the path-filename when the maximum time is reached, the Off/Clear button is pressed, or a Disconnect/Exit happens.
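A minimal sketch of what such a facility could look like. Everything here is hypothetical: `read_stats` stands in for whatever internal call would expose the two values, and Jamulus itself is C++/Qt, so Python is used only to illustrate the shape of the feature (slower sampling than the GUI refresh, timestamped CSV rows, a maximum duration).

```python
import csv
import io
import time

# Hypothetical sketch of the requested client-side facility: sample Ping and
# Overall Delay at a slower rate than the GUI refresh and append timestamped
# rows to a CSV stream.  read_stats stands in for whatever internal call
# would expose the two values; none of these names exist in Jamulus today.
def log_session(read_stats, out, interval_s=2.0, max_duration_s=3600.0):
    writer = csv.writer(out)
    writer.writerow(["elapsed_s", "ping_ms", "overall_delay_ms"])
    start = time.time()
    # Stop after max_duration_s so the file cannot grow indefinitely.
    while time.time() - start < max_duration_s:
        ping_ms, delay_ms = read_stats()
        writer.writerow([round(time.time() - start, 1), ping_ms, delay_ms])
        time.sleep(interval_s)
```

With `interval_s=2.0` this samples far less often than the roughly 0.5 s GUI refresh mentioned above, keeping the I/O load negligible.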

Context :
We are currently running a series of tests designed to tweak performance and quality of our sessions, and it is difficult to play music and try to average the values of Pings and Overall Delays to help gauge the benefits.

Our current setup is :

  • Drummer with Jamulus server on a laptop: Windows 7 Pro (64-bit) with SSD drive, Behringer Q802USB, old native driver
  • Guitar with Jamulus client on a tower: Windows 10 Pro (64-bit) with SSD and HD drives, Behringer Q802USB, old native driver
    Our internet speeds are similar: ping 10-13 ms, jitter 2-4 ms, 33+ Mbps download and 10+ Mbps upload.

Besides testing for best hours/days to do the session due to the vagaries of internet loads regionally, we are ...

  • Testing whether the -F and/or -T server arguments offer better results
  • Testing whether my Windows 10 standard configuration or SafeModeNetworked configuration is better
  • Testing whether a USB3.0 key JamulusOS client is better than a Windows one.
  • Testing different buffers size and timing configuration
  • Mixes of those tests

Thanks for considering this suggestion.
Merci

@drummer1154

drummer1154 commented Dec 3, 2020

Salut,

we are a 3-piece band, all living in Munich, but we have three different ISPs. We suffer from the "slow-down" effect.

Our current setup is:

  • dr (me), Roland TD-20 via SPDIF, Lexicon Omega; iMac (Early 2008), Catalina; ISP=T-Com (VDSL2 17a G.Vector (ITU G.993.5), 50/10Mbit/s)
  • b/g, interface BOSS GT-1B/BOSS GT-1; iMac (Retina 5K, 27-inch, 2017), Big Sur; ISP=VF/Kabel Deutschland (200/12Mbit/s)
  • tp+key, Universal Audio Arrow; iMac (2020), Catalina; ISP=m-net (Surf&Fon-Flat 100 fiber (in reality VDSL2 30a (ITU G.993.2) DS-Lite), 100/40Mbit/s)

What we have tried:

  • Public Servers (Central or others with low ping RTT), result: Overall Delay 40ms+ for b => hard to concentrate
  • Private server on my local network: 64bit Win8.1 i3-3225 3.3GHz 16GB, result: Overall Delay 50ms+ for b => slow down

Meanwhile I learned about buffer bloat (https://www.bufferbloat.net/projects/bloat/wiki/). We used http://www.dslreports.com/speedtest to check the additional delay introduced by the buffer bloat. Results:

I believe that the high additional delay for uploads on my place and for downloads on the bass player's place lead to the observed difficulties. Next try will be to set up a Private Jamulus Server on the m-net environment. Edit 2020-12-08: A Private Server on m-net environment is not possible because m-net uses DS-Lite (Dual-Stack Lite) and provides only public IPv6 addresses.

I believe that ping RTT alone is not sufficient to judge whether or not the overall delay will be small enough. Buffer bloat results from other users would be helpful.

Cheers
Helmuth

Edit: Links repaired, setup info updated (again).

@gene96817

The problem with buffering is really caused by the choices network operators make in balancing their traffic. It is mostly just polite language for overloaded routers. We can get into a cat-and-mouse game to shape our traffic to minimize buffering at a router. This kind of game gets harder if the problematic router is several hops away. Unfortunately, managing the buffering at a router should not be a user issue.

I am experiencing this kind of problem in several areas of my city. Some neighborhoods have low-latency (<20 ms) access to my server. Other users have an additional 40+ ms. It is even worse if the path crosses more than one network operator. I am still gathering data to see if this is an old-neighborhood vs. new-neighborhood issue or a poor-community vs. wealthy-community issue.

I am not in favor of putting network diagnostic tools into Jamulus (to play buffer bloat games). It would be appropriate to have tools to report/record when we have poor network service. This would be helpful in discussing service issues with the network operator. (N.B. these problems are also causing poor video conferencing experience. However, most videoconferencing users just accept the problems as if it is a problem with the weather.)

@pcar75
Author

pcar75 commented Dec 3, 2020

Bonjour,

Helmuth :

  • Thanks for bringing attention to this part of the possible latency problem, I will research it.

gene96817 :

  • Thanks for further comments on this problem. However, it is not a case of adding "network diagnostic tools"; the tools are already there, i.e. the rapidly refreshed indicators like the LED lights and values of Jitter Buffer [auto] in the Client window, and the Audio Stream Rate, Ping Time and Overall Delay in the Client > Settings window. The idea is simply to record what is already available, i.e. mostly the Ping Time and Overall Delay ... provided, of course, that this processing does not cause undue impact on Jamulus's prime function.
    I too am not in favor of 'feature bloating' this great software, but I will do anything I can to reduce the latency and jitter, and better the sound and timing quality of our music.

@gene96817

@pcar75 Thanks for the reply. Two things that you probably already know.

** The challenge with indicator lights: when we are trying to quickly indicate a change in status, we need a fast-on for the new status and a slow-off. It is easy to end up with fast changes that are hard to read.

** regarding buffer backup, if we detect that condition quickly, it is helpful to stop sending packets to let the backup clear. Yes, that will cause noise in the audio but that noise will (should?) be better for musicians than to have the sound be more and more delayed. (I don't know our internal machinery so I can only point out this basic principle.)

@drummer1154

drummer1154 commented Dec 3, 2020

I don't know our internal machinery so I can only point out this basic principle

Same to me. Lack of documentation is currently a very weak point for me. As musicians and engineers we are used to reading block diagrams and signal flows. And at the moment the only Jamulus block diagram I am aware of is in "Figure 2: Simplified Jamulus block diagram" of Volker's (otherwise very well structured and comprehensive) case study from 2015 (http://llcon.sourceforge.net/PerformingBandRehearsalsontheInternetWithJamulus.pdf). After reading the paper, I was (mistakenly) thinking that there is only one mix for all clients, which as of today is obviously wrong as there is a "Personal Mix" per client.

If possible I would like to contribute to creating a more current diagram which should contain audio as well as control flows; if Volker reads this maybe he can direct me how to achieve this (and maybe also a bit more streamlined description in the wiki).

Edit 2020-12-06: I was not aware of Issue #64 - the diagrams there are exactly what I was missing. They need to be taken up into the wiki (if they are not already - sorry for my inability to check everything :-)

bringing attention to this part of possible latency problem, I will research this

Highly appreciated. I would really be happy if - within this community - we would be able to drill down latency problems as far as possible. In our trio, we have only 3 different views and access methods to research the problem. But in the community there are tons of other possibilities. Maybe this issue is not the right place to discuss everything related to latency. But perhaps we can make a start. My approach is to provide to the community the info which I have, hoping that there are other colleagues who can add their knowledge. In our case I hope there are other musicians on cable network (VF/KDG) who can give their input.

most videoconferencing users just accept the problems as if it is a problem with the weather

... which for me is no wonder because in verbal communication you can easily stop and ask for repetition or clarification - impossible when playing music in real time.

Last but not least, my appeal to the responsible persons: Please do not overload the Jamulus SW and GUI - the more functions (maybe only relevant to a small number of users) are introduced, the higher the risk of bugs due to the number of platforms to be supported. As already said, there are tools outside this platform to investigate problems.

Cheers
Helmuth

@WolfganP

WolfganP commented Dec 3, 2020

If possible I would like to contribute to creating a more current diagram which should contain audio as well as control flows; if Volker reads this maybe he can direct me how to achieve this (and maybe also a bit more streamlined description in the wiki).

There was a previous discussion on diagrams at #64 but the tension between detailed techie flows and easy to understand pics for non techs was never resolved for them to be included in the wiki/documentation.

@drummer1154

Reading #758 I need to say that for our trio with the given ISP situation (which we cannot currently change) JamKazam was not working well for us. For us latency, not bandwidth, is the bottleneck! For me the only solution appears to be using distributed computing power (i.e. audio compression/decompression) to overcome the latency problems and therefore I support the Jamulus approach.

@WolfganP Thanks for referring me to #64, I will study it. As an electrical engineer/drummer/sound engineer/software developer and project manager (retired!) I am convinced that the techie/non-techie conflict can be solved if the doc gives the relevant information to everyone - most likely one simple sheet is not enough - linking should make it happen.

@gene96817

I found the diagrams in #64 useful for a very very high-level view of how the pieces relate to each other. It doesn't have enough detail to give insights for troubleshooting problems. If the coders could provide the very techie flow, I am sure there are others (as in this thread) who could help provide higher-level abstractions for the technical users.

@gene96817

@pcar75 This discussion about bufferbloat has been sitting in my mind. While the referenced article is pragmatic, it is the wrong way to deal with buffer backups (which they call the bloating of a buffer). I would be interested in a discussion about buffer backups in the client-server path and creating a mechanism for quenching the buffer back up. If this is interesting, we could design an enhancement to the data stream to detect and quench the backup. Assuming the Internet path is only momentarily backed up, this would accelerate recovery to minimize the interruption of the audio stream. (This is the best I can contribute to this topic. I am a protocol guy, not a coder.)

@ann0see
Member

ann0see commented Dec 4, 2020

I am convinced that the techie/non-techie conflict can be solved if the doc is giving the relevant information to everyone - most likely one simple sheet is not enough

We could add a more technical page on jamulus.io maybe in the upcoming knowledge base. In fact, I think we lack technical documentation.

Please open an issue on https://github.com/jamulussoftware/jamuluswebsite/issues

@drummer1154

@ann0see: To whom do you address this? If to me, OK, I am willing to open an issue; please help me with the title, thx. And what about the "upcoming knowledge base"? What is supposed to be found there, compared with the current wiki?

@ann0see
Member

ann0see commented Dec 6, 2020

If to me, OK, I am willing to open an Issue

Yes, you should open an issue with the feature request template. Fill the template and describe that you would like to have more detailed, technical information documented somewhere for detailed troubleshooting purposes (at least that's what I read from skimming through your comments).

This information would not necessarily be for the end user but for people who want or need to understand the technical details. Therefore we'd not include it in the main wiki but on an external page in an upcoming blog-like section on jamulus.io.

@ghost

ghost commented Dec 7, 2020

Last night a group of us had a difficult time because of delay. We changed our own settings to make matters the best possible, and we played because that is what musicians do (regardless of our superb data analysis, networking and interprocess communications skills).
Jamulus does not require Data capture or Network protocol or Statistical analysis functions (maybe someone can write a plugin for WireShark and write R language scripts instead).

This was referenced Dec 9, 2020
@gilgongo gilgongo changed the title Store a client session's Pings and Overall Delays for latter analysis Store a client session's Pings and Overall Delays for later analysis Dec 15, 2020
@gilgongo
Member

@DavidSavinkoff There have been several calls (in different issues) for telemetry and metrics to be shown by Jamulus. What I'm unclear on though is how that would actually help anyone improve their sound.

Essentially you can only:

  1. Adjust buffer settings (in Jamulus and/or your drivers)
  2. Turn off bandwidth or CPU hogging apps
  3. Maybe use Ethernet not wi-fi
  4. Maybe use a different audio device

What would it matter if Jamulus told you 100 things about what was happening? You can only take action on those 4 things, and you may well not be able to change 3 and 4.

I dunno. This seems an obvious point to me but maybe I'm wrong.

@drummer1154

drummer1154 commented Dec 15, 2020

@gilgongo Thanks for correcting the title :-)

I think the intention is not to receive a "live" indicator of problems which can then be solved immediately, but a kind of summary of what happened during a session. Of course the basic settings need to be correct beforehand and the essential hints (e.g. no WiFi) observed.

In our rehearsal last night we encountered several very ugly distortions in the bass channel, which we had to disregard/tolerate so as not to break the rehearsal. We were concentrating on our music, unable to check the visible indicators in the Jamulus GUI in parallel. Had there been a summary as proposed, we would have been able to check the root cause after finishing a piece (Input level overload? Buffer underrun? CPU exhausted? Jitter in the overall delay?) and try to remove the problem. See also #150.

So the point is not to record 100 parameters but only the few ones known to be essential for a good audio transport. If it turns out that you cannot change them then you need to live with the problem anyway, but at least you know what happened. Maybe this approach is only relevant for "techie" people who are used to analyze and remove problems...

@gilgongo
Member

gilgongo commented Dec 15, 2020

OK so a history of events during a session (so a bit like what Jack gives you with Xruns and stuff, together with ping times).

(Input level overload? Buffer underrun? CPU exhausted? Jitter in the overall delay?) and try to remove the problem

OK so let's say you had a time-stamped record of all your client's under/overruns, CPU usage % and delay and ping status. What would you actually do as a result other than adjust buffer settings (in Jamulus and/or your drivers), or look for CPU or bandwidth hogging things? The latter you don't really need a log for, do you?

Fundamentally, there are only two things you can change to see if things get better (assuming you are on Ethernet and have a decent interface). So my point is: if you hear sound problems, why do you need any data or flashing lights at all to take action?

@drummer1154

Simply because when we play we cannot monitor computers. If there are dropouts/distortions etc., they need to be tolerated. Afterwards, research for the root cause can be started - of course while everybody is playing, but then the focus is on the technical environment, not the music.

@gilgongo
Member

gilgongo commented Dec 15, 2020

Afterwards a research for the root cause can be started

Sorry to labour the point, but what would be an example of a "root cause" for which you would do anything other than try adjusting the jitter buffers or looking for bandwidth/CPU hogs? Surely pretty much all the information you had would end up with you taking one of those two actions.

So why not just cut out all the "research" and take that action? It's not as if you have any real choices.

Or put it another way: do you think there is a pattern in what you hear in your signal (pops? glitches? dropouts?) and what you would see in any event log that would make you take action that is different from what you do today?

Maybe it's because I'm not an engineer - but I don't understand how more data in this context is actually worth having.

@drummer1154

Yes, I think there could be a pattern. If, e.g., during an actual rehearsal I at times play "louder than normal" and this causes an input level overload which results in audio problems of some kind - why would I need to fiddle with buffer settings?

Having more data simply saves time. And yes, I admit that for engineers trial and error to eliminate problems is horror.

@gilgongo
Member

I see (would the log need to record dB level as well then?).

So maybe it's just a mindset thing then. Analysing data to establish a pattern in order to decide whether I can do anything seems less satisfying to me than doing something and seeing if it makes a difference - particularly if there are only a couple of things available to do. But I can see that others think differently on that.

And if all this data is safely recorded in a log that people like me can ignore, that's fine :-)

@pcar75
Author

pcar75 commented Dec 15, 2020

Thanks for bringing back this thread to its primary focus.

Most of you may have great or very good overall delays but my experience with our drummer (hosting the private server) here in Montréal, QC, Canada is showing the overall delay in the 60-75 ms range, with 'snap, crackle, pop'.
Wanting (really needing!) to play music, we are determined to wring every millisecond off that delay and improve the quality.

For every test we do, it is mostly a subjective matter to gauge whether we made an improvement, however small. That is why I suggested this modification to the software, so we can test and gauge with more objectivity.

Besides the obvious Jamulus controls and basic environment conditions, there is a lot we can consider. Examples are :

  • shut down the router's WiFi
  • disconnect the TV from the router (even if it is turned off)
  • reboot everything (computer, router and cable-modem) before a session
  • test with public servers ( 2 found in Montréal, sub-20 ms ping )
  • Verify with my ISP about QoS (quality of service), hours/days bandwidth profile, buffer bloat.
  • Verify if upper (costlier) internet tier package would help.
  • Research and test server in the cloud with different providers.

Things I've already tested: the Jamulus client parameters under the following conditions:

  • My windows 10 Pro regular user (baseline)
  • A new user: under Windows 10 it does not substantially diminish the bloatware already installed :-(
    • As an aside, I guess most software installs for 'default' or 'public' users without asking. Also, being wary of useless/unused programs, I tried uninstallers like IObit, for example, but realised that their free versions can't even uninstall themselves cleanly :-/
  • Booting in SafeModeNetworked (seems a little better)
  • Booting from a USB key with Jamulus OS (seems a little better)

Our drummer tested different machine configurations on his LAN with me:

  • A Windows 10 laptop with SSD with client / and server
  • Another Windows laptop with HD with client
  • A newer Windows 10 desktop with server / and with client
  • ...

I still think my initial suggestion could help people who have less-than-great delay and sound, and are willing and able to investigate a little ...

@drummer1154

would the log need to record db level as well then?

If it is possible to determine and record the input level [dB] then fine, but for me it would be sufficient to record the event of an overload.

@ghost

ghost commented Dec 15, 2020

@gilgongo mentioned:
What I'm unclear on though is how that would actually help anyone improve their sound.
Essentially you can only:
Adjust buffer settings (in Jamulus and/or your drivers)
Turn off bandwidth or CPU hogging apps
Maybe use Ethernet not wi-fi
Maybe use a different audio device
What would it matter if Jamulus told you 100 things about what was happening?

My answer is that I believe there are diminishing returns on the effort to improve delays. I'm sure that the documentation will eventually give all of the advice that can be given on improving delays, and that Jamulus will eventually address all reasonable possibilities to reduce delays. Once all is done, what about the telemetry functions? (I hope the answer would be: What telemetry functions?)

I've been trying to reduce delays myself, and here are a few things I have discovered in linux:

  • I follow the advice you have above ... and
  • Jamulus client QjackCtl Settings: buffer size = 128 : periods per buffer = 3 : jitter buffer = auto
  • Jamulus client QjackCtl Settings: buffer size = 64 : periods per buffer = 2 : jitter buffer = auto : Enable Small Network Buffers
  • Jamulus client Must connect to a local (same computer) Jamulus server as: localhost (or 127.0.0.1)
  • Jamulus server settings (no real time in kernel): sudo nice --adjustment=-1 Jamulus -n -d -s -T -F
  • Jamulus server settings: sudo ifconfig eth0 txqueuelen 16 (attempt at reducing buffer bloat on computer)
  • Play with router settings for QoS etc.
  • Compile optimize Jamulus specifically: -march=<> -mtune=<> -mfpmath=sse (modified Makefile)
  • strip -s Jamulus
  • Share all Jamulus settings with bandmates.
  • Learn how to perform music with network delays.

@gilgongo
Member

@DavidSavinkoff

The docs attempt to tell you everything you reasonably need to know about reducing latency and preserving quality. If you have any suggestions on that, please raise them here.

@pcar75

so we can test and gauge with more objectivity

I love you guys ... :-)

@drummer1154

Please also have a look at this input: #781 (comment)

@gene96817

Sorry to be catching up with the discussion. I am 10 hours behind Europe.
I care about this discussion because it would be helpful to know what to do if there is bad audio quality. The causes of bad audio are late or lost packets. We can't fix all the causes. It would be good to be able to triage the problem to (1) the client device, (2) the client's local network or gateway router, (3) the client's internet uplink, or (4) the Internet data path. We (as users/clients) can do a lot about 1, 2, and 3. Too many people are trying things without a troubleshooting strategy and then think it is magic or unreliable.

If nothing else, the pings and delays we collect can provide quantitative feedback on changes we make to diagnose bad audio problems.

@drummer1154

If I have understood the "buffer bloat" problem correctly: It is about buffer overflow but without packet loss - the buffer size is simply increased shortly before it overflows. This causes high jitter in the RTT.

@gene96817

In normal buffer operations, the buffer is always partially filled. Then, when the normal conditions vary too much, there is buffer underflow (the buffer went empty) or buffer overflow where one or more packets are lost. To prevent underflow, we introduce a little delay to keep the buffer from going empty. The normal fluctuation of packets in the buffer is what we want to compensate for jitter. The cause for jitter is momentary congestion in the data path. Because of this explanation, I feel the imagery of buffer bloat suggests a possible misconception in the cause and effect of delay and jitter.

Actually, a buffer can only help compensate for jitter in one direction of the traffic. For a discussion of jitter management, we have to think of RTT as a forward jitter problem and a return jitter problem. Path congestion does not have to be symmetric.

When we think about traffic flow through a multihop path in the network, if some router buffers begin to fill up and do not quickly drain, then the image of buffer bloat makes more sense. This is because what should have been normal momentary buffering is now persistent. This is relatively benign for TCP traffic (because of TCP flow control) but bad for time-sensitive UDP traffic, because late delivery of a UDP packet is almost as bad as dropping a packet. In fact, with buffer backup in the network path, dropping a UDP packet is usually a good strategy for flushing the buffer to help get caught up with normal delivery times.
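The point that a buffer only compensates jitter in one direction can be illustrated with a toy simulation. This is purely illustrative, not Jamulus's internal machinery: frames are produced one per tick, network jitter delays each arrival by a few ticks, and playback starts `depth` ticks late.

```python
import random

# Toy model of a receive-side jitter buffer (not Jamulus internals).
# Frames are produced one per tick; network jitter delays each frame's
# arrival by 0..max_jitter_ticks.  Playback starts `depth` ticks late and
# consumes one frame per tick; a frame that has not arrived yet by its
# playout tick is an underrun (an audible dropout).
def simulate(depth, n_frames=1000, max_jitter_ticks=3, seed=1):
    rng = random.Random(seed)
    arrivals = [i + rng.randint(0, max_jitter_ticks) for i in range(n_frames)]
    underruns = 0
    for i in range(n_frames):
        playout_tick = i + depth          # playback delayed by buffer depth
        if arrivals[i] > playout_tick:    # frame missing at playout time
            underruns += 1
    return underruns
```

With `depth` equal to the maximum jitter, underruns drop to zero; with no buffer, most frames miss their slot. The trade is exactly the one described above: added latency for fewer dropouts, and only for this one direction of traffic.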

@ghost

ghost commented Dec 16, 2020

I just found more information on buffer bloat (3 decades of computer science research).
https://www.coverfire.com/articles/queueing-in-the-linux-network-stack/
I skimmed through the article, following links as I encountered them, and found The Buffer Bloat Project, active mailing lists, and more. (Note that this Linux article would apply to any modern operating system.)
After reading this, it became clear to me that Jamulus should Not store Pings etc. for further analysis, because Jamulus is not a tool for learning about networking. This Ping analysis is not required because the research and implementation are already part of modern networking and operating systems. There is no need to rediscover what is already well known and deployed. Also, there is a big industry around teleconferencing and networked gaming applications (as latency-sensitive as Jamulus). If there is a need for tweaking a network or computer, I expect there to be a separate tool for that job.
Setting up Jamulus should be simple, with only a few settings that can be trusted to work as best as practicable.

@drummer1154

When we think about traffic flow through a multihop path in the network, if some router buffers begin to fill up and do not quickly drain, then the image of buffer bloat makes more sense.

Maybe I have not been clear/detailed enough (it was a bit late, sorry :-). This is exactly what I wanted to refer to, not the Jamulus implementation. http://cacm.acm.org/magazines/2012/1/144810-bufferbloat/fulltext (linked in the Linux article given above) is worth reading. To the Buffer Bloat Project (and speedtest) I have already provided links above (#756 (comment)), and I have stated my opinion there that the delay problems we encounter can most likely not be solved by using a public server, due to the asymmetric delay and buffer bloat (upstream vs. downstream); maybe, however, with a private server on the ISP connection encountering the lowest delay and buffer bloat. But currently I do not see how this is supposed to be feasible due to DS-Lite.

Setting up Jamulus should be simple, with only a few settings that can be trusted to work as best as practicable.

100% agreed, but I still believe that logging a few key parameters to a file, at a place where it doesn't annoy people who are not interested in it, could help analyze problems occurring during a live session. I think the logging should be enabled only on demand, and the file location should be configurable somewhere in the settings panel.

@gene96817

While what @DavidSavinkoff has pointed out is true, I agree with @drummer1154 about collecting some data about the situation as seen by Jamulus. When I am in musician mode, I don't have any network diagnostic equipment running. If some bad behavior occurs, it is very valuable to have a clue about what Jamulus was seeing. It is also good to have the data if a less technical user reports a problem. It is really hard to deal with intermittent problems, and the solution is not for users to be constantly running diagnostic tools to capture an intermittent problem. Also, when I am in musician mode, I don't want to have all the diagnostic tools around. It is my belief that a good application should provide data about why it is beginning to fail or why it failed. Nothing is more frustrating than an application misbehaving without clues. In those cases, I cannot distinguish between a buggy app and a bad environment.

@nefarius2001
Contributor

nefarius2001 commented Jan 8, 2021

I really like the idea of having live insight & logs about ping, buffer numbers and overall delays.

I would like to suggest the development of a facility on the client side...

I think this would be a great feature on the server side also. Like seeing in the list which client has problems or bad setups. But that would be a separate ticket, I presume.

@gene96817

The beauty of the client-side facility is that you would have useful data regardless of which server you are connected to.

@nefarius2001
Contributor

Okay, for client side it wouldn't be too hard to:

  • add a parameter --statisticslog "path/to/file"
  • have some parameters logged on GUI update

if a line had an average of 1000 chars, logging two lines per second gives ~7.2 MB per hour. Do you really need an On/Off button plus a size limit on that?

What parameters are you interested in? Ping and overall delay should be very easy to get in that context. Would that be a first version of interest?
Or are you really into dB / overload / buffer-underflow statistics / others?
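The ~7.2 MB figure above is straightforward to verify, and a much shorter line (a timestamp plus two or three numeric fields, say ~30 characters) would shrink it to well under 1 MB per hour:

```python
# Log growth for the estimate above: chars per line x lines per second x
# seconds per hour, expressed in megabytes.
def mb_per_hour(chars_per_line, lines_per_second=2):
    return chars_per_line * lines_per_second * 3600 / 1_000_000

print(mb_per_hour(1000))  # 7.2   (the worst-case estimate above)
print(mb_per_hour(30))    # 0.216 (a compact timestamp,ping,delay row)
```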

@pcar75
Author

pcar75 commented Jan 11, 2021

Thanks for the interest.

Personally, I would like to capture the ping, the overall delay, the Jitter Buffer Auto on/off status, and its local and server values, with a timestamp.
(BTW, in what units are the Jitter Buffer values measured?)

  • If these were selected, I doubt 1000 chars/record would be needed.
  • I think that a limit should be maintained (hard-close or wrap-around log) because you can't take space for granted ... for example: a USB key of limited capacity/configuration with Jamulus OS, an Arduino or Raspberry Pi device, or simply a client temporarily forgotten and left running.
  • ...
    Just a thought, but from my old coding days, it seems to me that this feature must surely have already been coded as part of a test version at some point ...

@pcar75 pcar75 closed this as completed Jan 11, 2021
@pcar75 pcar75 reopened this Jan 12, 2021
@pcar75
Author

pcar75 commented Jan 12, 2021

Oops, sorry. I wanted to close my comment, not the issue ...

@gilgongo
Member

See also #961

@gilgongo
Member

Hi All - until we have a reasonably well-defined specification for what we'd like to have, I'll move this to a discussion for now. Once we've agreed a spec we can raise a work ticket for the backlog.


@jamulussoftware jamulussoftware locked and limited conversation to collaborators Feb 19, 2021


7 participants