
Improve support for refresh rate discovery and control #8031

Open

juj opened this issue Jun 23, 2022 · 24 comments

Comments

@juj

juj commented Jun 23, 2022

The display industry has been moving quickly, on both desktop and mobile, toward a large variety of displays that refresh at rates above 60 Hz. This trend started with gaming and e-sports displays that run at 120 Hz, 144 Hz, and 240 Hz, and now reaches up to 360 Hz, with the trend going even higher. The Android ecosystem is embracing 90 Hz and 120 Hz displays. Apple has shipped iPads with 120 Hz displays since 2017, and this improvement has now reached the iPhone lineup as well.

What makes the iPhone especially interesting here is that it also supports slowing refresh rates down to 24 Hz and 10 Hz to conserve battery, something that the current web requestAnimationFrame API seems unable to exploit.

On the web, the high CPU overhead of WebGL has been a hindrance to establishing graphically rich gaming experiences, but this blocker is being addressed with the development of the new WebGPU specification, which brings higher-performing CPU-to-GPU work dispatch capabilities to the web. At Unity, we see a growing number of game developers who are looking to target new types of games on the web with WebGPU, and they are reporting a variety of issues across the board that stand in the way of this being possible.

One of these limitations is the lack of robust frame rate control on the web. Today, web browsers have the requestAnimationFrame function, which enables animation to be latched on to the vsync rate of the display. However, there are two limitations with this API at the moment:

  1. there is at present no browser-provided API to discover the expected refresh rate that requestAnimationFrame will tick at, which, we claim, prevents robustly targeting this blooming, diverse ecosystem of displays with different refresh rates.
  2. likewise, there is no browser-provided API to retarget/configure a desired target animation rate for the requestAnimationFrame function. This causes issues for developers attempting to implement smooth animation and/or battery-saving measures in their games.

The hasty reader might conclude that these are trivial to implement manually in JS code: one can benchmark the rAF rate, manually count elapsed milliseconds, and skip rAF callbacks that arrive too early. We thought so too at first, but the more we look into these issues, the more we are convinced that JavaScript-based implementations are wholly inadequate, and a platform-provided API is in order.
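
For concreteness, here is a minimal sketch of the kind of JS-side measurement being discussed (the variable and function names are hypothetical); the sections below explain why this approach is fragile:

// Naively estimate the rAF interval by smoothing measured frame deltas.
// Heavy JS or GPU load running at the same time will skew the estimate.
let last = 0, estimatedIntervalMs = 1000 / 60;
function probe(t) {
  if (last) {
    const dt = t - last;
    // exponential moving average of the measured delta-time
    estimatedIntervalMs = estimatedIntervalMs * 0.9 + dt * 0.1;
  }
  last = t;
  requestAnimationFrame(probe);
}
requestAnimationFrame(probe);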

Why would you even want to do that?

The use cases that developers report for needing these capabilities are diverse. Here are some gaming-related ones, given that Unity's domain is generally gaming oriented:

  • by far the most common reason developers report is that the game rendering is heavy and cannot sustain e.g. a 60 Hz, 120 Hz or higher refresh rate. Attempting to do so results in stuttering as frame update times fluctuate, and people want to remove this stuttering: the game might either manually let users cap to a given Hz rate, or automatically attempt to run at the native refresh rate and, upon failing, decimate frame output to 1/2, 1/3, or some other fraction of the native rate.
  • the game might dynamically switch between noninteractive cutscenes and interactive gameplay, and want to restrict cutscenes to e.g. 30 Hz or 20 Hz because e.g. the game's 3D model animation keyframes or a video have been authored at that rate,
  • a game developer might want to reduce microstuttering by quantizing frame delta times to expected frame arrival times, rather than implementing a naive variable delta time step scheme (this is generally important for VR),
  • the game might want to drop back to a lower refresh rate to conserve battery, either always or in game sections that are known to be noninteractive,
  • game logic might expect a specific fixed tick rate, so the program wants to smoothly match that rate (including, but not limited to, emulators, DOSBox, archive.org),
  • a game might want to detect whether rendering is achieving the 100% output rate possible for the display, in order to decide whether dynamic resolution/geometry/other rendering quality should be sustained, scaled up, or scaled down, or to report this to the user (e.g. an e-sports title reporting to the user that they are not reaching the optimal/expected results),
  • a benchmark application might want to detect whether content is achieving the fastest possible animation rate (browsers don't allow vsync-less rendering, hence 3D rendering benchmarks on the web might implement a "perfect score" output on fixed rendering workloads),
  • an application might want to differentiate between the device throttling the display output rate (due to dynamic conditions illustrated in the next section) and the game rendering not being able to reach the display rate,
  • a canvas-based video player might want to implement precise frame rate timing with pullup and/or pulldown logic to minimize judder.

Discovering display refresh rate

It is true that one can implement a "rAF benchmark" that discovers the interval between rAF() ticks, but there are major issues.

  • the main issue is that benchmarking rAF is next to impossible if significant JS CPU or WebGL rendering load occurs at the same time as the benchmarking - heavy load will skew and spoil the benchmark results. This means that one would have to restrict such benchmarks to run only e.g. at startup or at other times when no JS or GPU load is present.
  • moreover, the device and OS/GPU driver ecosystem has become so feature-rich that display refresh rates can vary at runtime (e.g. variable refresh rate displays, power-saving modes, or the window moving to a display with a different refresh rate).

Because of these dynamically changing conditions, a robust JS-side implementation would have to continuously benchmark the rAF tick rate, since it can change at any time. However, because such benchmark results are not reliable while the game is rendering, at best the game would have to make regular pauses in rendering in order to discover the refresh rate, which is not feasible for many game types for UX reasons. This is a kind of exploration-vs-exploitation dilemma: one needs to occasionally switch between the two activities, pausing to discover the current frame rate while still leaving time to enjoy smooth animation at that refresh rate.

To resolve this challenge, it would be ideal if the web browser instead continuously advertised the target refresh rate that the rAF() events should be firing at. This way web sites would not need to implement difficult-to-manage benchmarks (and get them wrong); they could simply read the current rate off a property. A strawman example:

interface mixin AnimationFrameProvider {
  unsigned long requestAnimationFrame(FrameRequestCallback callback);
  undefined cancelAnimationFrame(unsigned long handle);
  readonly attribute double currentDisplayRefreshRateInterval; // *new: describes the current msec interval between the display's subsequent refreshes.
};
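
If such a property existed, consuming it would be trivial. A hypothetical usage sketch (the property is the strawman above, not an existing API):

function frame(t) {
  // Strawman property from above; it does not exist in any browser today.
  const refreshIntervalMs = window.currentDisplayRefreshRateInterval;
  console.log(`display refresh interval: ${refreshIntervalMs.toFixed(3)} ms ` +
              `(${(1000 / refreshIntervalMs).toFixed(1)} Hz)`);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);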

Configuring display refresh rate

There is an obvious "poor man's" method for controlling the refresh rate of the display from JavaScript: one can tally up the elapsed milliseconds in rAF callbacks, and skip processing anything in the rAF handler until the tally has reached a desired number of milliseconds.
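
A minimal sketch of that method (targetIntervalMs and render are hypothetical names):

const targetIntervalMs = 1000 / 30; // desired cap, e.g. 30 fps
let last = 0, tally = 0;
function frame(t) {
  if (last) tally += t - last;
  last = t;
  if (tally >= targetIntervalMs) {
    tally %= targetIntervalMs; // carry the remainder over to reduce drift
    render();                  // the application's own render function
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);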

Another ad hoc method is to use setTimeout or setInterval to schedule animation, rather than manually decimated requestAnimationFrame calls. However, web browser authors are likely already familiar with how bad an idea that is.

These types of manual timing attempts are unfortunately rather imprecise, and become a source of stuttering for the page - compare this with the native eglSwapInterval and D3D11 SyncInterval mechanisms, which provide reliable guarantees.

Game developers understand the concept of decimating frame rate intervals well, and from that experience they typically never (or only rarely) want to restrict application frame rates to rates that do not evenly divide the display refresh rate. This means that if a display is running at 90 Hz, they might want to target either 45 fps or 30 fps - but if the display refresh rate were 60 Hz, they would never attempt to run the game at 45 fps, since that would result in stuttering.
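
In code, picking such a matching rate amounts to choosing an integer divisor of the display refresh rate. A small sketch (the helper name is hypothetical):

// Given the display refresh rate and a desired cap, pick the nearest rate that
// divides the display rate evenly and does not exceed the cap.
function pickPresentationHz(displayHz, desiredMaxHz) {
  const divisor = Math.max(1, Math.ceil(displayHz / desiredMaxHz));
  return displayHz / divisor;
}
pickPresentationHz(90, 45); // -> 45
pickPresentationHz(90, 50); // -> 45
pickPresentationHz(60, 45); // -> 30 (never 45 fps on a 60 Hz display)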

Game developers tend to be extremely sensitive to stuttering, to say the least. A major part of the game optimization process in the industry revolves around reducing sources of stuttering. Hence these types of "poor man's" solutions are a constant source of complaints from game developers to Unity, and are perceived as poor engineering on Unity's behalf.

Ideally, web browser specifications would provide an API to request the target refresh rate interval at which requestAnimationFrame ticks should trigger. This way browser compositors would be able to do a better job of producing stutter-free animation under the desired conditions.

Previously I have been a proponent of adding "swap interval" support to the requestAnimationFrame API; however, given the highly dynamic nature of mobile and laptop display refresh rate behavior, maybe a more appropriate API would be one where a developer chooses a target max rate instead, and the browser ensures that the rAF callback is not called at a rate exceeding that max.

Paired with the ability to query window.currentRequestAnimationFrameRateInterval, that would enable application developers to customize their content to animate at a given max rate, e.g. 30hz, and hence conserve battery and/or reduce stuttering in their content.

The mandatory fingerprinting conversation

Every discussion about adding a new web spec comes with the old conversation about fingerprinting. Yes, a window.currentRequestAnimationFrameRateInterval would be fingerprintable, but fingerprinters don't suffer from the exploration-vs-exploitation dilemma that games have, so the rAF rate benchmarks that are unsuitable for game developers are wholly suitable for fingerprinters, and as such, rAF fingerprinting benchmarks probably already exist as parts of fingerprinting suites today.

A new window.currentRequestAnimationFrameRateInterval property would probably not change that - it would likely just reduce the CPU consumption of such suites by a tiny bit.

API proposal

Ideally, I would like the API that solves both of these aspects to look something like:

interface AnimationFrameRequest {
  // msecs between consecutive rAF calls that the browser is currently achieving; changes
  // after calling setMaxAnimationRateInterval().
  readonly attribute double currentContentRefreshRateInterval;
  // Ask the browser to cap animation callbacks to occur at most this often from now on.
  // The actual achieved rate may be less than this, depending on the Hz rate of the display.
  // E.g. asking for 1000/45 msecs might acquire a 45 Hz rate if the display is running at 90 Hz,
  // but if the display slowed down to 60 Hz, then the animation rate would revert to
  // 30 Hz.
  undefined setMaxAnimationRateInterval(double msecs);
};

interface mixin AnimationFrameProvider {
  AnimationFrameRequest requestAnimationFrame(FrameRequestCallback callback);
  undefined cancelAnimationFrame(AnimationFrameRequest request);
  readonly attribute double currentDisplayRefreshRateInterval; // describes the current msec interval between the display's subsequent refreshes.
};

This way multiple sources of content on a web page (e.g. a game but also some ads running on the side that also use rAF) could independently make requests of how often their content should animate, without these requests conflicting with each other.

However, that kind of API as-is would break backwards compatibility (requestAnimationFrame used to return a long), so the appropriate API would probably have to look a bit different - this is just to give the idea.
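
To illustrate the intent, here is a hypothetical usage sketch of the proposal above (none of this is an existing API, and the render function names are made up):

// A game and an ad on the same page make independent rate requests.
function gameFrame(t) {
  const req = requestAnimationFrame(gameFrame);
  req.setMaxAnimationRateInterval(1000 / 30); // tick this callback at most at 30 Hz
  drawGame(t);                                // hypothetical game render function
}
function adFrame(t) {
  const req = requestAnimationFrame(adFrame);
  req.setMaxAnimationRateInterval(1000 / 10); // the ad is happy with 10 Hz
  drawAd(t);                                  // hypothetical ad render function
}
requestAnimationFrame(gameFrame);
requestAnimationFrame(adFrame);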

Thanks for reading through the long post! 🙏

CC @kenrussell @kainino0x @kdashg @Kangz

@Kaiido
Member

Kaiido commented Jun 24, 2022

Given that rAF is a one-off call I don't think the setMaxAnimationRateInterval on AnimationFrameRequest idea would work, there is no "rate" from the point of view of this call, the rate only comes from the loop.

Web-animation-2 might be a better place for this, see for instance https://github.com/WebKit/explainers/tree/main/animation-frame-rate and in particular this paragraph.
There was also #5025, but I think it's moving in the same direction anyway.

@kenrussell
Member

This proposal should be discussed further. The fact is that requestAnimationFrame is a global operation - it's not tied to a specific element. Limiting the entire page's rAF rate seems like one of the only feasible controls. Many customers have expressed need to limit the frame rate of imperative animations, rather than browser-controlled ones as in Web Animations.

Other people who've given this a lot of thought and prototyped solutions for it include @khushalsagar @zhenyao @fserb . Could any of you offer thoughts regarding earlier work with Meet and other partners?

@zhenyao

zhenyao commented Aug 16, 2022

What about adopting something similar to https://github.com/WebKit/explainers/tree/main/animation-frame-rate?

@juj
Author

juj commented Mar 27, 2023

Hi, I wonder if there might have been any progress on this front in the past months? CC @jakearchibald @graouts @smfr

A Unity user brought up this limitation again in a recent communication, where they are targeting mobile devices and noted that the 60hz/90hz/120hz device landscape is getting fragmented at this point.

@khushalsagar
Contributor

Sorry for the extremely late reply here. I'm generally supportive of tackling this problem space. In fact, I had put up a very similar proposal a while back here. It lets authors specify a frame rate choice but was missing the hooks to discover the frame rates possible (and detect when that changes if the window is moved to a different monitor).

The use-case which motivated this proposal was video conferencing sites. Generally video streams are at 24/30 Hz and are the main content the user is looking at. Ideally the web page would render at a frame rate matching the video, but animations in the web content can push frame production to a higher rate. We wanted the ability to lower the frame rate (both to conserve battery and to give resources to other workloads on the device), but without knowing how integral the animation is to the web page's current state, it becomes a difficult decision. Heuristics help but can be brittle, and sites can end up falling off the optimized path in surprising ways.

The feedback I got at the time was that it's better for this to be a higher level primitive, something like "prioritize-power-over-frame-rate", instead of explicitly setting the exact frame rate. The reasoning being that the precise frame rate is a function of:

  • The frame rates the device can support, which can be discrete (like 60/90/120Hz) or a range with variable refresh rate (like 60 to 120Hz).
  • The browser will pick based on all content associated with the tab, which includes cross-origin iframes.
  • It's hard for sites to figure out what frame rate they can consistently hit.

So providing this control with the low-level primitives mentioned here (the frame rate capabilities of the device and a precise desired frame rate) requires more detailed thinking about how sites would use it, given the above factors.

And a higher level primitive of "it's ok to throttle my animations" would still be better for sites that don't need the precise control like games do.

@zhenyao

zhenyao commented Jun 9, 2023 via email

@khushalsagar
Contributor

The frame rate has to apply to the whole Document; it's quite difficult to lower it for just the canvas element. But if the site's primary content is the canvas element (which will be the case for games), it can choose to set the Document's frame rate based on what the canvas needs.

@zhenyao

zhenyao commented Jun 9, 2023 via email

@khushalsagar
Contributor

If a canvas context has a preferred framerate, we can treat it like video

That's an interesting thought. You could specify a frame rate for just the canvas element. And if the only source of invalidations is this element, then we let it decide the frame rate for the whole Document.

If you wanted to generalize further, you could specify a frame rate for any DOM element. If invalidations are limited to the subtree of this element (and no other node in the subtree has a frame rate preference), then you assume that's the frame rate preference for this DOM subtree. And you decide the global frame rate for the Document based on the preference of each sub-tree + which sub-tree is actively animating. This is actually how I'd expect us to deal with iframes, which is conceptually a bunch of DOM subtrees providing their preferred frame rate.

While I like the above idea in theory, it'll make the API/implementation more complicated. And it doesn't seem like it's worth the effort given the current use-cases? A per Document preference is easier to begin with and we can always extend to preference per DOM sub-tree later on.

@juj
Author

juj commented Jun 9, 2023

Is that possible for WebGL to have an optional context creation attribute, indicating a preferred framerate of updates?

A model where content could only set a preferred frame rate at context creation time would unfortunately not be adequate, for many of the reasons listed in the first comment (the display refresh rate can change at runtime, battery power conditions can change at runtime, etc.), but also for at least two more reasons specific to this proposal:

  • many games may wish to have varying update rates throughout the lifetime of the content. An esports game might want to render at the maximum available Hz when in-game, but between rounds, when paused, or when browsing menus, it might be desirable to cap the frame rate to something slower.
  • some engines are set up so that the engine initializes first along with the renderer, and the game(s) are then downloaded and initialized on top of the engine; the game would want to set up the refresh rate, but the engine has already had to set up the WebGL context beforehand, making it too late to change anything.

Having a presentation rate property tied to the Canvas object would sound fine (to help reconcile multiple canvases having different animation rates), but it would be best for it to be a property that is freely read-writable, independent of WebGL/WebGPU context creation.

@michaelwasserman

Relatedly, w3c/window-management#47 proposes adding per-display refresh rates to the ScreenDetailed interface.

@mturitzin

Wanted to heavily +1 this issue from the perspective of Figma (I work on Figma's rendering team).

@juj explains this really well in the issue description. The specific case we want this the most is for time-slicing of WebGL rendering work. This is most similar to this case listed above:

a game might want to detect whether rendering is achieving the 100% output rate possible for the display, in order to decide whether dynamic resolution/geometry/other rendering quality should be sustained, scaled up, or scaled down, or to report this to the user

Basically, we want to submit an amount of time-sliced GPU work per frame that will not reduce the framerate below the maximum possible for the display. This is hard to do when we don't know what the maximum is (and that can change after page load, etc., etc., as described in the issue description).

@floooh

floooh commented Sep 26, 2023

Just discovered this issue and would like to add a big fat +1.

As a minimal solution, if the timestamp in the requestAnimationFrame() callback were incremented by a fixed amount that is a multiple of the (inverse) display refresh rate, instead of a jittery measured time, this would pretty much instantly fix all microstuttering in WebGL/WebGPU content (assuming it uses this timestamp for animation timing), while being fully backward compatible and Spectre/Meltdown-safe (since it is not a measured time at all, and has a much lower "precision" than performance.now() in any browser).

In general, animated content is not interested in when exactly the frame callback is called within the current frame (which may jitter by hundreds of microseconds or up to 1 or 2 milliseconds even on native platforms), but only in the time interval that's between two frames becoming visible, and on a traditional monitor this duration is fixed (unless the display mode changes or the window is moved to another monitor).

For rendering animated content in games (and I guess anywhere else), there is basically no need for any time duration that's not exactly a multiple of the (inverse) display refresh rate.

In addition to such a "fixed increment" rAF timestamp, it would be good to have a separate API call to query the display refresh rate of the monitor a canvas (or generally DOM node) currently lives on (assuming that this would also be the "base frequency" which requestAnimationFrame() is triggered with).

As it is now, one has to use filtering/smoothing code to get a non-jittery frame duration, but this has all sorts of issues when combined with the unpredictability of when the frame callback is actually called (for instance the frame loop being frozen when tabs go into the background, browser windows moving to a different monitor, etc.).
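
What a page can do today is roughly the following: if the refresh interval is known (which is the hard part), snap the jittery measured delta to a whole number of refresh intervals and advance the animation clock by that fixed amount. A sketch under that assumption (the function names are hypothetical):

const refreshIntervalMs = 1000 / 60; // assumed known - obtaining this reliably is exactly the problem
let last = 0, animationTimeMs = 0;
function frame(t) {
  if (last) {
    // Snap the jittery measured delta to a whole number of refresh intervals...
    const frames = Math.max(1, Math.round((t - last) / refreshIntervalMs));
    // ...and advance the animation clock by that fixed, jitter-free amount.
    animationTimeMs += frames * refreshIntervalMs;
  }
  last = t;
  updateAndRender(animationTimeMs); // hypothetical per-frame function driven by the smooth clock
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);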

@jesup

jesup commented Dec 11, 2023

I'm broadly supportive of this idea; there's a separate issue of what rate UAs should use for rAF/vsync when the device rate is very high (180, 240, 360 Hz for example, or even whether they should use 120), absent a request for high or fixed rates. If there is a requested rate, does this only apply if the browser is focused? What happens when there are multiple visible browser windows with different requests? Give priority to the focused one, probably, but what if none are focused, or if a window/document which didn't request a rate is focused?

A naive idea for general web content would be to decimate down to a range of either 50-99Hz or 60-119Hz. For general web content, there's very little if any win over 120 hz (and maybe not anywhere near that). The main win above 60Hz is smoother scrolling, but I think by 120 it's pretty much down to noise. And possibly decimate down to even lower on battery or if battery is low/battery-saving is on, etc.

@floooh

floooh commented Dec 12, 2023

there's very little if any win over 120 hz

One nice feature of high refresh rates is that the "visible mouse pointer lag" is automatically reduced for content that's attached to the mouse in HTML canvas objects.

For instance when dragging windows in demos like these: https://floooh.github.io/sokol-html5/imgui-highdpi-sapp.html, this looks much more responsive on a 120Hz refresh rate than on 60Hz because at 120Hz the dragged window doesn't trail the mouse pointer as much as on a 60Hz refresh rate (the underlying issue is of course that the pipeline between receiving input events, issuing draw commands and the content eventually showing up on screen is too long, but since there are so many components involved, this will be hard to optimize).

@juj
Author

juj commented Dec 13, 2023

I'm broadly supportive of this idea; there's a separate issue of what rate UAs should use for rAF/vsync when the device rate is very high (180, 240, 360 Hz for example, or even whether they should use 120), absent a request for high or fixed rates.

Today Firefox and Chrome both implement rAF() on my 360 Hz display at 360fps. I absolutely love that it works for all web sites out of the box, hopefully that would not go away.

If there is a requested rate, does this only apply if the browser is focused?

I think so. Browser focus does not, and should not, matter for animation. If browser application window focus mattered, then it would be impossible to work across several windows on multiple displays at once.

If the browser is minimized or tabbed out, i.e. invisible, then the current browser behavior of completely stopping rAF is good (or, in Firefox's case, the peculiar slow pump).

What happens when there are multiple visible browser windows with different requests?

To me it sounds like those should tick at their respective rates. They are different windows, so they already have different composition chains, and so even today they tick separately (to my understanding).

@greggman

greggman commented Aug 6, 2024

Excuse me if I didn't read all of the above ... but I feel like this proposal for setMaxAnimationRateInterval ignores the reality of the web. Games usually go fullscreen (or run on a console/TV) and so often have some exclusive say over the frame rate; they aren't embedded in iframes or composited with other parts of a webpage.

For example, the HTML5 games on itch.io are all embedded in an iframe. The game has no say in what the iframe, controlled by itch.io, does. Many other online game sites have far more "stuff" around the game - for example, other iframes with ads, and those ads will be making the same but incompatible requests for a frame rate (example, example, example, example).

There are also issues like: the page asks for 60 fps, but the user's device is 75 fps (mine is). So what does the browser do - give 75 fps? (Then there was no point in asking for a max of 60.) Drop one of every 5 frames? That's janky.

That same issue occurs if 2 iframes request frame rates with no good lowest common denominator.

My point being, a game that wants to be on the web will rarely have control over the environment it's being used in. Requesting some frame rate will never be guaranteed. In order to function, the game will have to treat these requests as though they might not work. As such, what is the point of making the request in the first place? Knowing it can't be guaranteed means the game has to do all the things it needed to do without the request anyway.

As for finding out the display's frame rate, to be more web-like it sounds like that should be event/callback based. Otherwise you'd be required to poll it, which is not web-like.

So, off the top of my head

const observer = new DisplayFrameRateObserver((frameRate: number) => {
   console.log("display's current frame rate is:", frameRate);
});

It would get called once on creation, the same as ResizeObserver, and then only if the frame rate changes. It might change for any reason. User moves window to another display, device goes into low-power mode, developer sets a flag in devtools, etc..

I'd be curious whether it should be in fps (e.g. 30, 60, 72, 75, 144, 240, 360) or a time interval (e.g. 0.03333, 0.01667, 0.01389, 0.01333, 0.00694, 0.00417, 0.00278). I think I like the frame rates more than the time intervals personally, but maybe a time interval makes it clearer that the number might not be an integer.
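
A sketch of how a page might consume such an observer (DisplayFrameRateObserver is the hypothetical API sketched above; it does not exist in any browser):

let displayIntervalMs = 1000 / 60; // fallback until the first callback fires
const observer = new DisplayFrameRateObserver((frameRate) => {
  // Fires once on creation (like ResizeObserver) and then whenever the rate changes,
  // e.g. the window moves to another display or the device enters a low-power mode.
  displayIntervalMs = 1000 / frameRate;
  console.log(`display now refreshes every ${displayIntervalMs.toFixed(3)} ms`);
});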

@floooh

floooh commented Aug 7, 2024

I would like both, tbh: the ability to throttle rAF, and a way to get the current interval at which rAF is called (typically this would be the display refresh rate).

IMHO a game should be able to announce a target frame rate to rAF, this would only be a hint though, the rAF callback would be called at a close interval based on the display refresh rate (for instance if a game requests a 30Hz interval, but the display refresh rate is 80Hz, rAF would be called at 40Hz) - for this topic, also see https://developer.apple.com/documentation/metalkit/mtkview/preferredframespersecond?language=objc.

To figure out the frame duration, the timestamp we're already getting in the rAF callback would suffice, as long as it doesn't have jitter. E.g. it should not be a measured time, but simply be increased by a fixed amount derived from the display refresh rate. Since it is not an actually measured time, this should also be Spectre/Meltdown-safe.

Currently the rAF timestamp has considerable jitter and it is necessary to take an average over a couple of dozen frames (or filter otherwise) to remove the jitter. If the rAF timestamp increment wouldn't be a measured time to begin with, but simply a constant derived from the display refresh rate there would be no microstutter in animations which directly use the rAF timestamp. This new rAF behaviour would also be fully backward compatible and all existing web games would automatically benefit.

...when the game's canvas HTML element moves to a different window, the rate at which rAF is called would be recomputed based on the target frame rate and actual display refresh rate. E.g. if the target frame rate is 30Hz, and the canvas moves from an 80Hz display to a 60Hz display, the rate at which the rAF callback is called would be 40Hz initially, and 30Hz on the new display, and the increment of the rAF timestamp would change from 25ms to 33.333ms.
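
One possible interpretation of that mapping, consistent with the numbers above (the helper name is hypothetical):

// Pick the largest integer divisor of the display rate that still meets or exceeds the target.
function effectiveRafHz(displayHz, targetHz) {
  const divisor = Math.max(1, Math.floor(displayHz / targetHz));
  return displayHz / divisor;
}
effectiveRafHz(80, 30); // -> 40 (rAF timestamp increment 25 ms)
effectiveRafHz(60, 30); // -> 30 (increment 33.333 ms)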

@greggman

greggman commented Aug 7, 2024

IMHO a game should be able to announce a target frame rate to rAF, this would only be a hint though.

Since it's a hint, what's the point? The hint may be ignored and you end up with all of the same issues.

Currently the rAF timestamp has considerable jitter and it is necessary to take an average over a couple of dozen frames (or filter otherwise) to remove the jitter. If the rAF timestamp increment wouldn't be a measured time to begin with, but simply a constant derived from the display refresh rate there would be no microstutter in animations which directly use the rAF timestamp.

The browser cannot guarantee the frame rate. So the OS says the refresh rate is 60fps and the browser advances the time returned by rAF at 1/60th, but because of various things (too much work) the actual framerate is 50fps - then your game will run too slow, since you would not be responding to the actual framerate.

The fact that it does jitter as much as it does is arguably a browser bug. But specifying a target won't fix that bug. That bug should be fixed regardless.

If you wanted the behavior you described (a constant time increment), then query the display framerate, call rAF, skip frames if you want to do things less often than the display framerate, and increment your own variable by a constant.

when the game's canvas HTML element moves to a different window,

I want to point out, HTMLCanvasElement is not special, any element can be animated by JavaScript.

@juj
Author

juj commented Aug 7, 2024

Such a DisplayFrameRateObserver would be a fantastic solution. 👍

@floooh

floooh commented Aug 7, 2024

Since it's a hint, what's the point? The hint may be ignored and you end up with all of the same issues.

But at least it gives me something to work with when budgeting my per-frame work. Typically I pick a hardware min-spec where I want to hit a stable 60 Hz frame rate and build my per-frame work around that (let's say 12 ms on min-spec at a target frame rate of 60 Hz, so I have a comfortable 4.6 ms of wiggle room). If rAF is called at 360 Hz, my whole per-frame budgeting is worthless; my 12 ms frame budget will never fit into the available 2.7 ms.

If the rAF-interval isn't throttled and my per-frame work overruns the display refresh interval, then the presentation would happen at the next 360Hz slot, and some frames may be a bit early and some a bit late which then results in stutter again.

Mobile CPUs switching between E- and P-cores or varying the CPU frequency doesn't help either of course, a proper solution would need to implement a 'game mode' in the OS thread scheduler.

So the OS says the refresh rate is 60fps and the browser advances the time returned by rAF at 1/60th...

From what I'm seeing, browsers and operating systems are very good at presenting a frame at exactly the display refresh interval. IME the most reliable way to get absolutely stutter-free animations is to increment your animation timestamp by the display refresh interval (or integer multiples of it). The exact time inside a frame where your per-frame code is called may vary significantly (by 1..2 milliseconds - that's why measuring is pointless), but the time your frame becomes visible is strictly fixed at display refresh intervals (and the time a frame becomes visible is the only time that matters - not when the frame is rendered, or scheduled for presentation, or picked by the swapchain code to be presented next - what matters is when it becomes visible on the display).

...the actual framerate is 50fps

...not in my experience, the actual frame rate would drop to 30Hz (on a 60Hz display) because that's the next possible presentation interval if the game code can't maintain 60Hz (or even worse vary between 30 and 60Hz). The game not being able to maintain its target frame rate is an exceptional worst case scenario though.

(of course if the display refresh rate is 50Hz but is reported by the OS as 60Hz, e.g. a software or even hardware bug, then querying the refresh rate is pointless either way, but I don't know if this is a common enough problem to worry about).

The fact that it does jitter as much as it does is arguably a browser bug.

I'm observing similar time jitter even on native swapchain APIs which give me a presentation time stamp (e.g. DXGI or CADisplayLink - the jitter is smaller than the rAF timestamp jitter in browsers though but it's still enough to matter). I guess it's just the fact that measuring time between frames is more or less worthless on modern operating systems, because thread scheduling is 'fuzzy'.

Some browsers (Firefox(?) and Safari) also still have their rAF timestamp clamped to a full millisecond. And in this case the random jitter is actually important, because it allows to derive the true presentation interval by averaging enough frames.

For instance see this demo when running on Safari: https://floooh.github.io/sokol-html5/imgui-dock-sapp.html

The averaged frame time will eventually stabilize close enough to 16.667 ms to be useful, even though the rAF timestamp only returns full milliseconds - the main problem with this approach is that the average needs time to adjust when the refresh rate changes.

then query the display framerate

Currently I cannot do that, but this would indeed be the 'least required functionality'.

I want to point out, HTMLCanvasElement is not special, any element can be animated by JavaScript

Yes, the theoretical new preferredFramesPerSecond API would need to accept a HTML element as 'target' to be any useful.

skip frames if you want to do things less often than the display framerate, and increment your own variable by a constant.

That would work, but it would require me to split my per-frame code into slices of unpredictable length (e.g. if rAF is always called at 360 Hz on a 360 Hz display without me being able to throttle rAF to a target frame rate, I would need to split my per-frame code into slices of less than 2.7 ms - just skipping frames and then doing a whole 60 Hz frame's worth of work every Nth frame wouldn't work, because my 12 ms frame budget, which comfortably fits into a 60 Hz frame, can't be done in a single 2.7 ms frame).

@juj
Author

juj commented Feb 19, 2025

Today I prototyped a rather ridiculous implementation that aims to benchmark the native refresh rate of the display, to provide a new variable screen.animationFrameInterval that specifies the time between subsequent rAF() calls. Used the following code:

screen.animationFrameInterval = 50/3;
(function() {
  var min, max, t, t0 = performance.now(), dts = Array(60).fill(50/3), idx = 0, i, raf = () => {
    t = performance.now();
    // Keep a rolling history of 60 most recent raf frame delta-times.
    dts[idx = (idx+1)%60] = t - t0;
    t0 = t;
    min = dts[0];
    max = dts[0];
    // Calculate the min and max of the last seen delta-times, and if those are close enough to each other,
    // we can believe we are seeing the actual native display refresh rate.
    for(i = 1; i < 60; ++i) if (dts[i] < min) min = dts[i]; else if (dts[i] > max) max = dts[i];
    if (max < min * 1.05) { // If there's less than 5% fluctuation in frame times, then we hold plausible we are seeing the actual display refresh rate
      var dt = (max+min)/2,
      // Janky spaghetti "pretend that display refresh rate is an exact integer Hz" snapping to avoid fluctuation in the detected times.
        hz = (1e3/dt)|0,
      // Even more janky spaghetti "snap integer Hz refresh rate to close to common marketed gaming display refresh rates, and multiples of 10 Hz":
        hzs = [144, 165, Math.round(hz/10)*10];
      for(i = 0; i < hzs.length; ++i) if (Math.abs(hz-hzs[i]) < 2) hz = hzs[i];
      screen.animationFrameInterval = 1e3/hz; // Convert back to milliseconds per frame.
      console.log(`Raf delta is small enough: ${(max-min).toFixed(2)} ms -> Detected display raf interval ${screen.animationFrameInterval} ms, frame rate=${(1000/screen.animationFrameInterval).toFixed(3)} fps`);
    } else {
      console.log(`Too much delta, ${(max-min).toFixed(2)} ms, in raf rate. Min-max: ${min.toFixed(2)}-${max.toFixed(2)} ms (${(1000/max).toFixed(2)}-${(1000/min).toFixed(2)} fps)`);
    }
    requestAnimationFrame(raf);
  };
  requestAnimationFrame(raf);
})();

The idea of this code is to provide the display refresh rate in a continuously adapting manner (i.e. the value of screen.animationFrameInterval should change when the refresh rate changes), while not getting tripped up by slow/heavy render workloads that might be running in parallel at the same time.


Initial results suggest that the frame delta-times fluctuate greatly already on an empty web page, more than 5% in both Chrome and Firefox on Windows (a 13900K CPU):

[Image: console log of the measured rAF frame delta fluctuations]

but occasionally there is enough signal to guess the refresh rate.


Unity's customers have still been asking to know the exact refresh rate. Two requests that were lodged recently were about

a) being able to reduce micro-stuttering using the scheme that floooh also commented above:

the most reliable way to get absolutely stutter-free animations is to increment your animation timestamp by the display refresh interval (or integer multiples of it). The exact time inside a frame where your per-frame code is called may vary significantly (by 1..2 milliseconds - that's why measuring is pointless), but the time your frame becomes visible is strictly fixed at display refresh intervals (and the time a frame becomes visible is the only time that matters - not when the frame is rendered [...])

b) developers were targeting "cinematic 30Hz" in game. Instead of rendering at every rAF(), on a 60Hz display they would want to render on every second rAF(), and on a 120Hz display they would want to render on every fourth rAF(). But without knowing the rAF() rate, it is not possible to know what the exact skip factor should be.

Historically Unity has used setTimeout() to perform rendering when a non-60 Hz refresh rate has been requested. However, this has been observed to work poorly and to result in a considerable amount of stuttering (as is no surprise). So decimating rAF()s has been the strategy, but it requires that the display refresh rate is guessed right.
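
For reference, a sketch of the rAF decimation described in (b), assuming the display refresh rate has somehow been obtained (which is exactly the missing piece), with a hypothetical render entry point:

const displayHz = 120;  // assumed known - this is the value that is hard to obtain today
const targetHz = 30;    // "cinematic 30 Hz"
const skipFactor = Math.max(1, Math.round(displayHz / targetHz)); // 2 on a 60 Hz display, 4 on 120 Hz
let rafCounter = 0;
function frame(t) {
  if (++rafCounter % skipFactor === 0) {
    renderGameFrame(t); // hypothetical render entry point
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);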

@floooh

floooh commented Feb 20, 2025

@juj just a note: IME the 'snap to a common refresh rate' approach is tricky, because there are so many weird refresh rates out there. Initially I used a similar approach and started to collect a list of refresh rates from users (e.g. see https://github.com/floooh/sokol/blob/b0aa42fa061759908a6c68029703e0988a854b53/sokol_time.h#L274-L287), but eventually ended up ditching that approach and instead just compute the average over enough samples (around 60)...

One problem with the 'averaging approach' is quickly figuring out that the refresh rate has changed (for instance when moving the browser window from a 120 Hz to a 60 Hz display) - and in that case flushing/resetting the sample buffer.

The other problem is that valid display refresh rates may be closer to each other than the average jitter (e.g. 120 and 144 Hz are only 1.39 ms apart, and 72 vs 75 Hz are only about 0.5 ms apart).

I feel like the only thing I need to improve in my 'just take the average' approach is to better detect a change in refresh rate (e.g. when moving a window to another display) and in that case flush the history (to avoid the game running at the wrong speed for the duration of the sliding window). Maybe keep a smaller window of the last 5..10 frame durations, and when that smaller average is outside a 'safe zone', flush the history (it's better to have a slightly jittery frame duration until the history ring buffer fills up again than to have the game run at the wrong speed for a second or so).
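
A small sketch of that scheme (the names and the 10% tolerance are assumptions, not something prescribed in the thread): a long rolling average provides a stable frame duration, and a short recent window detects a refresh rate change and flushes the history:

const LONG_N = 60, SHORT_N = 8, TOLERANCE = 0.1; // 10% band around the long-term average
let history = [];
const avg = arr => arr.reduce((a, b) => a + b, 0) / arr.length;

function addFrameDelta(dtMs) {
  history.push(dtMs);
  if (history.length > LONG_N) history.shift();
  if (history.length >= SHORT_N) {
    const shortAvg = avg(history.slice(-SHORT_N));
    // If recent frames disagree with the long-term average, assume the refresh
    // rate changed (e.g. the window moved to another display) and start over.
    if (Math.abs(shortAvg - avg(history)) > TOLERANCE * avg(history)) {
      history = history.slice(-SHORT_N);
    }
  }
  return avg(history); // current best estimate of the frame duration in ms
}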

@juj
Author

juj commented Feb 20, 2025

Yeah, the whole approach is very stinky.

ended up ditching that approach and instead just compute the average over enough samples (around 60)...

The problem that makes this infeasible for us is that the majority of indie Unity gamedevs develop games that are not fine-tuned to stay within the frame time budget on the target devices, so simply averaging over a history of delta times would result in an incorrect refresh rate being detected.

I don't expect the code above to work well, but in the absence of proper APIs, we'll take half-working jank over always-wrong results.
