Describe the project you are working on
Related to #5449.
The Godot editor 🙂
Describe the problem or limitation you are having in your project
Currently, when V-Sync is disabled, Godot renders as fast as possible. This provides the highest possible framerate, at the cost of high GPU power draw. Lowering GPU power draw has several benefits: reduced noise and heat output, improved battery life, and more power budget for the CPU to use (on integrated graphics). Also, with electricity prices rising across the globe, reducing system power usage has become even more important lately.
The high GPU usage (in GPU-bottlenecked scenarios) also increases input lag as a result of pipeline stalls, while a framerate that varies with scene complexity makes input lag less consistent over time.
There are also occasional issues where V-Sync is not effective even though it should be. Implementing this feature would help alleviate those scenarios to an extent.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
Limit rendered framerate to (roughly) the monitor refresh rate by default when V-Sync is disabled or fails to kick in.
This proposal suggests using a slightly higher FPS limit than the monitor's refresh rate (see below). This means it normally has no effect when the V-Sync mode is Enabled or Adaptive. It does have an effect when the V-Sync mode is Mailbox, so we may want to consider disabling this automatic FPS limit if the Mailbox V-Sync mode is used.
An alternative mode that provides a VRR-optimized framerate cap is also proposed, but not the default as VRR usage cannot be detected on the application side.
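As a sketch of what a VRR-optimized cap could compute, a commonly cited heuristic (popularized by Blur Busters' G-SYNC guidance) caps a few frames below the refresh rate so the renderer never outruns the variable refresh rate window. The exact formula here is an assumption for illustration, not part of this proposal:

```gdscript
extends Node


func _ready():
	var refresh := DisplayServer.screen_get_refresh_rate()
	if refresh <= 0.0:
		refresh = 60.0  # screen_get_refresh_rate() returns -1 if the rate can't be detected.

	# Assumed heuristic: stay slightly below the refresh rate so a VRR display
	# never falls back to V-Sync behavior (e.g. ~138 FPS on a 144 Hz monitor).
	Engine.max_fps = int(refresh - refresh * refresh / 3600.0)
```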
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
This proposal follows the rename suggested in #5449.
Refactor Application > Run > Max FPS into an enum with the following values (names to be bikeshed; derived from the behaviors described in this proposal):

- Unlimited: current behavior, no FPS limit.
- Monitor Refresh Rate (default): limits FPS to slightly above the highest connected monitor's refresh rate.
- VRR Optimized: limits FPS to slightly below the monitor's refresh rate, for variable refresh rate displays.
- Custom: uses the value of the Max FPS Custom setting below.

Move the max FPS project setting's numeric value to Application > Run > Max FPS Custom, which is only used if Application > Run > Max FPS is set to Custom.
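A rough sketch of how the effective FPS cap could be resolved from such an enum (the value names and helper are illustrative, not actual engine code):

```gdscript
enum MaxFPSMode { UNLIMITED, MONITOR_REFRESH_RATE, VRR_OPTIMIZED, CUSTOM }


# Illustrative helper mapping the enum to a concrete cap; 0 means "no limit".
func get_effective_max_fps(mode: int, custom_fps: int) -> int:
	var refresh := DisplayServer.screen_get_refresh_rate()
	if refresh <= 0.0:
		refresh = 60.0  # Fallback when the refresh rate can't be detected.
	match mode:
		MaxFPSMode.MONITOR_REFRESH_RATE:
			return int(refresh) + 1
		MaxFPSMode.VRR_OPTIMIZED:
			# Assumed VRR heuristic: cap slightly below the refresh rate.
			return int(refresh - refresh * refresh / 3600.0)
		MaxFPSMode.CUSTOM:
			return custom_fps
		_:
			return 0
```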
Matching run-time properties should also be exposed on the Engine singleton for this setting, so that this behavior can be toggled at run-time in a settings menu.
As for choosing the FPS limit value:
In my experience, the best way to achieve this is to use a slightly higher value than the monitor's refresh rate. For example, on a 144 Hz monitor, an FPS limit of 145 can be used (DisplayServer.screen_get_refresh_rate() + 1).
In multi-monitor setups, all screens' refresh rates should be queried, then the highest refresh rate should be taken into account. This is an easy solution that handles moving the project window across screens at run-time. This also prevents the framerate cap from ever being too low in multi-monitor setups that have different refresh rates. Alternatively, we could use the monitor the project started on as a base, but this is more difficult to change at run-time as the window is moved.
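A minimal sketch of the multi-monitor query described above, assuming the Godot 4 DisplayServer API and the Max FPS rename from #5449:

```gdscript
extends Node


func _ready():
	# Take the highest refresh rate across all connected screens, so the cap
	# is never too low when the window is moved to a faster monitor.
	var max_refresh := 0.0
	for screen in DisplayServer.get_screen_count():
		max_refresh = maxf(max_refresh, DisplayServer.screen_get_refresh_rate(screen))
	if max_refresh <= 0.0:
		max_refresh = 60.0  # screen_get_refresh_rate() returns -1 if unknown.
	Engine.max_fps = int(max_refresh) + 1
```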
If this enhancement will not be used often, can it be worked around with a few lines of script?
Yes, but this is not something most project developers will think about naturally.
For example, this autoload script can be added to a project:
```gdscript
extends Node


func _ready():
	# Note: This doesn't account for multi-monitor setups with different refresh rates.
	Engine.target_fps = DisplayServer.screen_get_refresh_rate() + 1
```
Is there a reason why this should be core and not an add-on in the asset library?
It can technically be an add-on, but this functionality must be built-in to achieve the desired goal (namely, reducing GPU power draw and GPU-bottleneck-induced input lag when V-Sync is disabled).
I'm generally in favor of this proposal, but it may require refinements to the FPS limiting code for best results. I tried to implement this in GDScript and was not happy with the amount of microstutter I was seeing, likely due to the Windows platform code oversleeping (or possibly a lack of triple buffering in OpenGL). The timer on Windows is infamously low resolution and you may end up oversleeping by as much as 15 ms, and even if you force it to have better resolution, I wouldn't trust it to sleep less than 2 ms.
May be worth looking into how NVIDIA and RivaTuner handle limiting the FPS?
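For what it's worth, dedicated frame limiters commonly work around coarse timer granularity with a hybrid wait: sleep in chunks while far from the frame deadline, then busy-wait the final stretch. A rough GDScript sketch of the idea (the 2 ms spin margin and 145 FPS target are assumptions; a real implementation would live in the engine's main loop, not in _process):

```gdscript
extends Node

const TARGET_FPS := 145
const SPIN_MARGIN_USEC := 2000  # Busy-wait the last ~2 ms to avoid oversleeping.

var _last_frame_usec := 0


func _process(_delta):
	var frame_usec := 1_000_000 / TARGET_FPS
	var deadline := _last_frame_usec + frame_usec
	# Sleep in 1 ms chunks while we're far from the deadline...
	while Time.get_ticks_usec() < deadline - SPIN_MARGIN_USEC:
		OS.delay_usec(1000)
	# ...then spin for the remainder for sub-millisecond accuracy.
	while Time.get_ticks_usec() < deadline:
		pass
	_last_frame_usec = Time.get_ticks_usec()
```

The busy-wait trades a little CPU time for pacing accuracy, which is why tools like RTSS produce smoother frame times than a naive sleep-based cap.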