Fallback to less precise frame pacing on non-Windows platforms #6941
Conversation
Wouldn't it be better if different choices were offered to the user when using a fixed timestep (i.e. accurate vs. power friendly)? I think it's one of those cases where a single code path does not work for all requirements, and you'll probably have another issue opened in a few months saying that it's not accurate enough and they want accuracy at the cost of power.
You're very right. I don't have the time to do more for now, so this would basically be a workaround while we figure out either a real fix or add a switch like you're suggesting.
@mrhelmut Thanks again for addressing this! As mentioned in the related issue, I did originally keep the sleeps in place for other platforms before it was decided to remove them (this was in my first PR #4207). But I did find that the frame timing accuracy could at least be improved by being more cautious with the use of `Thread.Sleep`:

```csharp
if (sleepTime >= 1.0)
    System.Threading.Thread.Sleep(1);
```

The reasoning for these changes is as follows:
While these changes will still spin the CPU a little, in my testing the CPU usage was still very low. Hope this is useful!
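For context, here is a minimal sketch of the kind of hybrid wait this suggestion implies: sleep in 1ms steps while at least 1ms remains, then spin for the final fraction of a millisecond. The `WaitUntil` helper and its parameters are illustrative, not MonoGame's actual code.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class FramePacing
{
    // Hybrid wait: sleep in 1 ms steps while at least 1 ms remains,
    // then busy-wait for the final fraction of a millisecond.
    public static void WaitUntil(Stopwatch timer, TimeSpan targetElapsed)
    {
        while (timer.Elapsed < targetElapsed)
        {
            double sleepTime = (targetElapsed - timer.Elapsed).TotalMilliseconds;
            if (sleepTime >= 1.0)
                Thread.Sleep(1); // coarse, power-friendly wait
            // otherwise fall through and spin; this burns a little CPU,
            // but only for less than 1 ms per frame
        }
    }
}
```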
Thanks for the insights! I'm going to test this out. If CPU usage really is that low, it could be a good way forward! Worst case, I'll propose a switch, though I believe it's going to work. If we're in a situation where the sleep time is close to 1ms, it means we're already CPU or GPU bound anyway. You might be right about 1ms being the most widespread resolution for the sleep timer; when I experimented with frame pacing it always snapped to the closest ms almost perfectly (at least on .NET and desktop Mono).
Alright, I did some more extensive testing on .NET Core 2.2. Here's where I'm at.

So it seems that, from a cross-platform perspective, nothing beats `Thread.Sleep`. The perfect situation would be to find ways to make […]. I would suggest making […]. Another note: we can slightly improve […].
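For reference, the sort of numbers being compared here can be reproduced with a small `Stopwatch` probe that measures how long `Thread.Sleep(1)` really takes; this is a rough sketch, not the harness used for the results above.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class SleepResolutionProbe
{
    static void Main()
    {
        var sw = new Stopwatch();
        double worst = 0, total = 0;
        const int iterations = 200;

        for (int i = 0; i < iterations; i++)
        {
            sw.Restart();
            Thread.Sleep(1); // ask for 1 ms
            double ms = sw.Elapsed.TotalMilliseconds;
            total += ms;
            if (ms > worst) worst = ms;
        }

        // On a 1 ms system timer this typically prints ~1-2 ms on average;
        // on a 15.6 ms timer it can be an order of magnitude worse.
        Console.WriteLine($"Thread.Sleep(1): avg {total / iterations:F2} ms, worst {worst:F2} ms");
    }
}
```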
I'm in favor of this. If we can do better than `Thread.Sleep`, like on Windows, that's great, but in case we can't, we absolutely need users to be able to choose what's more important to them.
The only thing I have to add here is that maybe we should err more on the side of performance and not try to be as cooperative with other processes. If we do…

We then tune […]
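A hedged sketch of what a per-platform tunable might look like; the constant name, values, and helper below are hypothetical, not something agreed on in this thread.

```csharp
using System.Runtime.InteropServices;
using System.Threading;

static class PlatformTiming
{
    // Hypothetical per-platform estimate of the worst-case Thread.Sleep(1)
    // duration in milliseconds; a larger value means we sleep less and spin more.
    public static readonly double SleepPrecisionMs =
        RuntimeInformation.IsOSPlatform(OSPlatform.Windows) ? 2.0 : 4.0;

    // Sleep only when the worst-case oversleep still fits in the remaining time.
    public static bool TrySleep(double remainingMs)
    {
        if (remainingMs < SleepPrecisionMs)
            return false; // caller should spin for the remainder
        Thread.Sleep(1);
        return true;
    }
}
```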
Nice work @mrhelmut 👍 From your results, I'm guessing that your timer resolution is most likely 1ms while you're running these tests. In this case, […]

Just to prove something, could you please try benchmarking one other method?

```csharp
if (sleepTime >= 2.0)
    System.Threading.Thread.Sleep(1);
```

When the timer resolution is <= 1ms, this should give the same results as […]. The 2ms threshold comes from the fact that when the timer resolution is 1ms, `Thread.Sleep(1)` can actually block for up to about 2ms, so it is only safe to sleep when at least 2ms remain.
This would definitely be nice. And I am also in favour of a switch for other platforms, otherwise we are going backwards in terms of accuracy. The user should be able to choose between power saving and accuracy.
Agree. I'm actually setting the timer resolution to the maximum (which is typically 0.5ms) in my game. While it won't increase frame timing accuracy, it can reduce CPU usage (because the loop can sleep more and spin less).

EDIT: Additional note though: this is only really relevant when using a fixed time step. And because increasing the timer resolution increases power usage, it may not be a good default behaviour for all cases.
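For reference, raising the Windows timer resolution from managed code is commonly done through `timeBeginPeriod`/`timeEndPeriod` in winmm.dll; getting all the way down to 0.5ms requires the undocumented `NtSetTimerResolution` instead. A minimal, Windows-only sketch:

```csharp
using System.Runtime.InteropServices;

static class WinTimerResolution
{
    // winmm.dll exports; only meaningful on Windows.
    [DllImport("winmm.dll", ExactSpelling = true)]
    private static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll", ExactSpelling = true)]
    private static extern uint timeEndPeriod(uint uMilliseconds);

    // Request a 1 ms timer resolution; every Begin must be paired with an End.
    public static void Raise()   => timeBeginPeriod(1);
    public static void Restore() => timeEndPeriod(1);
}
```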
@MichaelDePiazzi Ah! I misunderstood your initial thoughts about […]. And you're right, the default resolution reported by […]. Which makes […].

I've added Vsync timing to the table for comparison. I've also tested on macOS (Mono 5.20) and the results are very comparable to Windows (and within Vsync performance range), so I'm assuming that macOS's default timer resolution is close to 1ms as well.

@tomspilman This is interesting and pretty close to Michael's proposal. With some testing on all platforms, we could empirically define constants for each of them to get a somewhat precise default behavior. This is actually close to implementing our own […].
Just be aware that you may not be able to guarantee the timer resolution will be any specific value, unless: […]
But based on my experience (with Windows setups at least), it's typically going to be 1ms (the video drivers and/or the graphics API seem to be setting it). So this is why stuttering is typically not an issue, as 1ms is good enough to hide noticeable inaccuracies. However, I have come across setups where it doesn't get set to a finer resolution and remains at something like 10ms or 15.6ms. This is when stuttering is reported as a problem, as […]. So unless you can either query or set the timer resolution, I feel the best trade-off for most platforms will actually be […]
I believe that you are very right here. It would probably be complex to figure out a fixed threshold for each platform if the accuracy varies so widely. We can always query/set that on Windows, but most other platforms probably won't allow it. So […]
I dug into the Mono source; it's using […]. iOS seems to have a 1ms precision as well. I can't test Android, but I'm expecting this to be different on each device.
For consoles we should just spinlock and burn CPU time to be 100% accurate.
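A busy-wait along those lines is straightforward; a sketch, with `Thread.SpinWait` used between checks to be slightly gentler on the core.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class SpinPacing
{
    // Burn CPU until the target frame time is reached: accurate to the
    // Stopwatch resolution, at the cost of keeping one core fully busy.
    public static void SpinUntil(Stopwatch timer, TimeSpan targetElapsed)
    {
        while (timer.Elapsed < targetElapsed)
            Thread.SpinWait(64); // brief pause between time checks
    }
}
```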
I pushed a revision. This should fix the majority of cases.
Looks good to me! 👍
This is a lot better. Thanks @mrhelmut :)
Workaround for #6925, but reintroduces potential stuttering on non-Windows platforms (assuming it's better to have that issue back than to max out CPU usage).
Fixes #6925