Expose 'inertial scrolling state' in wheel events #58
Comments
On second thought, I'm not sure this is sufficient. I'm thinking through the case where the user is two-finger scrolling on a trackpad and you need to determine the moment that they lift their fingers (which could be interpreted as "letting go" of something on-screen). Out-of-band 'gesture-phase' events might be a more flexible solution. The idea would be that scroll events could be "bracketed" by other gesture events which you could subscribe to (or completely ignore). iOS already fires 'gesture[start/change/end]'; I'm not sure whether it's smart to overload those or to invent something new.
The event sequence for a scroll-plus-throw might look something like:
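Something along these lines, as a hedged sketch (the event names other than the 'scrolling-gesture-end' mentioned below are illustrative, not part of any spec):

```
scrolling-gesture-start       (fingers touch the trackpad)
wheel, wheel, wheel, ...      (real events driven by finger motion)
inertial-scroll-start         (fingers lift with non-zero velocity)
wheel, wheel, wheel, ...      (synthesized events with decaying deltas)
inertial-scroll-end
scrolling-gesture-end         (the whole gesture, including momentum, is over)
```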
And the event sequence for a scroll-then-stop-then-lift (zero-velocity fingers at the end of the gesture):
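Again as an illustrative sketch with the same assumed names:

```
scrolling-gesture-start       (fingers touch the trackpad)
wheel, wheel, wheel, ...      (real events driven by finger motion)
scrolling-gesture-end         (fingers stop, then lift; no inertial bracket follows)
```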
(It is important to see that final 'scrolling-gesture-end' so the application knows when it can update its internal physics state.) Applications can determine the previously proposed 'isInertialScrolling' by checking whether wheel events fall inside the inertial bracket.
Two bracketing events are really equivalent to two boolean bits on the event, e.g. … I (from Blink's perspective) have a bit of a preference for the latter because it's a smaller API and it avoids problems / unnecessary overhead in the atomic wheel scrolling case (a USB mouse wheel with no defined start/end).
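As a rough sketch of that equivalence (the 'inertial-scroll-*' event names are the illustrative ones from the sequences above, not a specified API):

```js
// Track whether we are inside the (hypothetical) inertial bracket.
// This is equivalent to carrying a boolean bit on each wheel event.
let inInertialBracket = false;

window.addEventListener('inertial-scroll-start', () => { inInertialBracket = true; });
window.addEventListener('inertial-scroll-end', () => { inInertialBracket = false; });

window.addEventListener('wheel', (event) => {
  const isInertial = inInertialBracket; // the "boolean bit" equivalent
  console.log(event.deltaY, isInertial ? 'synthetic' : 'real');
});
```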
I support adding this, and we'd probably implement it in Blink if it were being implemented in at least one other engine. For macOS we already have this information. Windows may be more challenging, or at least we would probably have meaningful values only for the new breed of high-precision Windows touchpads.
@RByers True, I agree (it's easier on the app side to consume a single event stream, too). Suppose the spec should just make clear exactly when the … Excited that WebKit (#56 (comment)) and Blink are both interested!
From the Firefox side I'd also be interested in adding APIs for this. An …
I'm curious about the status of this issue. I've been playing around with … Unfortunately, … A boolean like … Of course, it would be best if WebKit/Chrome/Firefox would implement this with a CSS property for native elastic scrolling on macOS on any overflowing scroll area, but I'm not holding my breath at this point (since it's a Mac-specific thing).
Yeah, it's kind of nuts that it's been 5 years; I could have written my own browser by now :). (Side note: elastic scrolling + rubberbanding works fine for me in Safari on overscroll elements natively, no JS or extra CSS needed...) I cared about this more not for replicating scrolling (in general, it's best to defer to native facilities for that), but for other interactions that happen to use trackpads, like 3D panning / zooming / etc.
Any updates?
It's nice to know I'm not the only one that would like this. I don't encourage "scroll hijacking", but I think it's still useful to know whether a wheel event was user-initiated or automatic. This is especially true for in-browser game controllers, but it would also make carousel components more robust by allowing us to compute final resting positions based on when the user stopped touching their device, rather than waiting for the inertial scrolling to finish and then recentering the carousel afterwards, which always looks a bit shabby.

In any case, I don't think Apple or the W3C will begin exposing this data anytime soon, so I've begun work on a way around the issue and am looking for others to help collaborate on it. I'm using tensorflow.js to train a model on sequences of delta values and time offsets so that I can present it with new sequences and let it predict whether they are real (user-initiated) or fake (initiated by the operating system).

My approach is quite simple: I use a frontend website with a div that accepts wheel events and ask the user to hold down the shift key while they are physically touching their trackpad. As they scroll, they keep the shift key held down, and when they let go (and the synthetic events begin) they release the shift key. This leaves us with computable sequences of …

So far it's showing promising results, however there are some external factors that still need to be mediated. Most notably, the prediction itself should be moved to a separate thread using a web worker so that it does not interfere with the sequence time offset values; I've observed that the automatic wheel events will issue higher delta values if the system sees that it's been blocked for longer than a normal animation frame duration. Another big help I could use would be on the training end, as I'm using mostly default values for my AI model layers and someone out there might know of a more accurate setup.

Here is a link to the public repository: … And here is a screen recording of the system in action: https://drive.google.com/file/d/1AKM13MYigFjPVKCeQCUZgTMnEfez_etk

During the first half of the video I'm holding shift while "scrolling" on my trackpad; holding shift during the training portion moves the slider all the way to the right to indicate that this is a real wheel event. When I release the shift key the slider goes all the way to the left, indicating these are synthetic events. After a debounced timeout, the slider returns to the middle. In the second half I begin training the system on the sequences it pulled from my training data, which is visualized with TensorBoard. In the last 15 seconds I'm "scrolling" on my trackpad again, but this time I'm not pressing the shift key at all, since the system is now predicting whether the input is real or synthesized. You can see the slider jumps to the right as my scroll events start and then back left when I remove my fingers.

Obviously, without seeing my trackpad this is difficult to visualize, so I'd encourage you to download the repo and try it yourself. I may also re-record the process along with a camera feed pointed at my hands. There is definitely a slight delay (about 500ms) between when I stop touching the trackpad and when the slider starts moving to the left, but I believe adjusting where I set the "windows" on my training data could improve that. I also think training with much more data would help as well.
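To make the data-collection step concrete, here is a rough sketch in plain JavaScript (not code from the repository; the `#capture-area` id and the variable names are illustrative) of recording wheel deltas and time offsets labeled by the shift key:

```js
// Collect (delta, time-offset) samples and label them with the shift key:
// shift held = fingers on the trackpad (real), shift released = inertial (synthetic).
// Assumes a <div id="capture-area"> that receives the wheel events.
const recordedSequences = [];
let current = null;
let lastTime = null;

document.querySelector('#capture-area').addEventListener('wheel', (event) => {
  const now = performance.now();
  const sample = {
    deltaX: event.deltaX,
    deltaY: event.deltaY,
    timeOffset: lastTime === null ? 0 : now - lastTime,
    label: event.shiftKey ? 'real' : 'synthetic',
  };
  lastTime = now;

  // Start a new sequence whenever the label flips (fingers lifted or put down).
  if (!current || current.label !== sample.label) {
    current = { label: sample.label, samples: [] };
    recordedSequences.push(current);
  }
  current.samples.push(sample);
});
```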
@DaveSeidman, I couldn't reliably train the model. If you had any success, could you share a pre-trained one?
Thanks for giving it a try! I haven't played with it much since I commented, but I'll try again soon. I was getting pretty decent results, but with a little delay between taking my fingers off the trackpad and the prediction values beginning to drop. I do think a lot more training is required to speed it up. I also see from my TODOs in the readme that there are a few other things that need cleaning up so that the training data collected isn't affected by things like slowdown created by adding lots of display elements to the DOM. If you'd like to collaborate on this I'm very much open to it! Otherwise it'll probably be a few weeks before I can get back to it in earnest. Happy to see at least one other person is interested in/irritated by the behavior of the trackpad in the browser :)
Split from #56
Current DOM 'wheel' events only expose bare-minimum data (deltaX, deltaY) that don't fully capture how modern scrollwheels and trackpads behave. Exposing additional bits of information would greatly improve the possibilities for interesting and important custom event handling.
For example, on many modern trackpads, "throwing" your fingers results in two distinct phases of scrolling. First, while your fingers are in contact with the trackpad, real scroll events are delivered corresponding to your finger motion. Second, once your finger(s) leave the surface, a stream of scroll events is synthesized corresponding to the velocity of your fingers at the end of the first phase.
For many applications it is vital to be able to distinguish between real and synthetic scroll events (e.g. to perform custom scroll handling with different physics parameters or boundary cases, where the naive system-provided friction falloff will not suffice). Native event APIs expose this information, and I propose that browsers expose it through new WheelEvent properties.
Developers have come up with horrible hacks to work around not having this information (see https://libraries.io/npm/lethargy and weep).
I propose exposing an additional property on delivered DOM wheel events, e.g. a boolean 'isInertialScrolling'.
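A rough sketch of how an application might consume such a property, assuming the boolean shape above (this is a proposal only; no browser currently exposes it, and the helper functions here are illustrative placeholders):

```js
// Placeholder application hooks (illustrative only).
const element = document.scrollingElement ?? document.documentElement;
function panBy(dx, dy) { /* direct, finger-driven panning */ }
function applyCustomInertia(dx, dy) { /* app-specific momentum handling */ }

// Hypothetical usage of the proposed boolean.
element.addEventListener('wheel', (event) => {
  if (event.isInertialScrolling) {
    // Synthetic momentum events: hand off to the app's own physics,
    // or ignore them entirely and run a custom deceleration.
    applyCustomInertia(event.deltaX, event.deltaY);
  } else {
    // Real finger-driven events: track them 1:1.
    panBy(event.deltaX, event.deltaY);
  }
});
```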