Gesture bindings #372
How about this:
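Roughly along these lines, as an illustrative KDL sketch (the gesture names, axis properties, and actions are placeholders, not settled syntax):

```kdl
gestures {
    // Either a single argument binding the whole gesture to one action...
    touchpad-swipe-3 "switch-workspace"

    // ...or per-axis properties, each accepting only the actions that
    // make sense for that axis. Setting both an argument and the
    // properties on one gesture would be a config error.
    touchpad-swipe-4 {
        horizontal "move-column"
        vertical "switch-workspace"
    }
}
```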
You can set either an argument or the properties, but not both. All three have only those variants that make sense for that property (i.e. no vertical-only gestures in the horizontal property). I'm still not sure about encoding the finger count into the "key" name.
Examples: currently existing interactive window resize; future interactive window move. For the resize it even makes sense to be able to bind just the horizontal part (since we're a scrolling WM).
I was looking for a way to set this. The encoding of the finger count into the key name makes sense to me.
How about something like this:
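For instance, mirroring the existing binds syntax (the gesture "key" names here are made up):

```kdl
gestures {
    // Horizontal three-finger swipes focus columns, matching the
    // keyboard binds for the same actions.
    touchpad-swipe-3-left { focus-column-left; }
    touchpad-swipe-3-right { focus-column-right; }
}
```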
So that the touchpad gestures are identical to the keyboard shortcuts for focusing columns. This fixes #466.
I envision the gesture bindings section only for continuous binds, and the regular bind section for discrete binds.
Do you have any thoughts on forwarding continuous gestures to layer-shell based desktop components? (Think being able to pinch in the app menu from anywhere, like on macOS.) I have previously prototyped that for Wayfire in wf-globalgestures, which exposes a custom protocol, but I think I'd be fine with a config-based solution to avoid having Yet Another Protocol, like so (this example would result in that gesture always being forwarded to a client with matching 'namespace' on the current output):
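For illustration, with invented node and property names, that forwarding rule might be written as:

```kdl
gestures {
    touchpad-pinch-4 {
        // Hypothetical action: hand the raw gesture events to the
        // layer-shell client with this namespace on the current output.
        forward-to-layer-shell namespace="app-menu"
    }
}
```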
Hm, interesting idea. Need to look into it when implementing.
Obviously, you should be able to bind a swipe of however many fingers, or a different bind when swiping with a modifier. But I'd like to be able to swipe with just one finger to scroll in the workspace view, and I obviously don't want to intercept a one-finger swipe from clients, so it should only trigger if the swipe would not hit a client. In particular, this means that the event would not be sent to any surface, because the surfaces it did hit (a list which may have a length of zero) all have an input region that doesn't include the location of the event.

If that sounds completely foreign to a reader unfamiliar with Wayland, consider that I also would want to be able to swipe from the left/right screen edges and have that scroll the workspace rather than being sent to the client in my struts. This specific bind can be constructed by never sending touch inputs to toplevels in the struts (which should probably not be the default behaviour, but one I want to be able to configure regardless, because simply picking up my tablet will send touch inputs to strut clients: it doesn't have bezels thick enough to avoid accidental edge touch inputs). But I think in general it is useful to consider "swipe from the left/top/right/bottom edge" a separate gesture that I can bind on its own.

Because of the above consideration, I'd say that the type of gestures that exist for touchscreens is generally a different shape from the ones that exist for touchpads, both of which are objectively just very different from mouse gestures. I think it makes a lot of sense to separate them into sub-sections, which makes them more visually distinct than harder-to-parse longer names that repeat what kind of input method the gesture belongs to. Mouse gestures should, for instance, be able to trigger discrete binds.

There should also be a way to have different gesture binds (particularly touch binds) when a window is fullscreen, because then I can't scroll in my struts: there are no struts. I could for instance have a swipe from the bottom edge un-fullscreen the window. To be real sexy, that should be a smooth animation that transitions back to a maximized one-window column. And when there is no fullscreened window, I should be able to use the same gesture to initiate a workspace switch (but it ought to have a different threshold before it actually triggers a full switch on release).

Perhaps if we support arbitrary named gesture bind modes, this can be implemented using those. Each mode can have additional transitions in its own section, next to the different input devices.
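As a hedged sketch of what those sub-sections and named modes could look like (every gesture, mode, and action name below is hypothetical):

```kdl
gestures {
    // Per-input-device sub-sections instead of repeating the device
    // kind in every gesture name.
    touchpad {
        swipe-1 { scroll-workspace-view; }
    }
    touchscreen {
        // Edge swipes treated as gestures distinct from plain swipes.
        edge-swipe-bottom { open-overview; }
    }

    // A named gesture mode, active while a window is fullscreened,
    // with its own per-device transitions.
    mode "fullscreen" {
        touchscreen {
            edge-swipe-bottom { unfullscreen-window; }
        }
    }
}
```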
The threshold completion action cannot sufficiently implement switching to a "fullscreen gestures mode", because I could still want to fullscreen a window through something that isn't a gesture, and the mode would still need to change.
That 2D gesture exists now: it's called "interactive move". There are many different kinds of gesture actions, mainly distinguished by axis (vertical workspace switch vs. horizontal workspace movement, and even interactive move, which occurs on both axes), plus pinch actions.

For the fairly limited number of gestures and gesture actions, it might be worth going over each pair one-by-one to see whether they make sense, and implementing a way for that action to map onto this gesture. A specific action may actually be implemented in multiple different ways, from completely different gestures. After planning out which gestures can trigger which actions, and also how exactly each action will look when triggered by each gesture, it may be easier to spot patterns and see which properties are actually useful to look at.

For touchscreen binds, I think they could further be split into subsections by how many fingers you want. This is not just because it lets you avoid repeating the finger count, but also because it affects which gestures are available.
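For illustration, finger-count subsections might look like this (hypothetical syntax and action names):

```kdl
touchscreen {
    fingers 1 {
        // One-finger touches overlap with regular client input, so
        // mostly edge swipes make sense at this finger count.
        edge-swipe-left { scroll-workspace-view; }
    }
    fingers 3 {
        swipe-left { focus-column-left; }
    }
}
```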
A lot of actions are meaningless if the gesture didn't occur on a specific object. Because the niri layout is dense, it is hard to trigger a pinch on the dead region unless you have no windows open, but it is nonetheless possible. In some cases where we require a column or window, I think it would make sense to just default to the active window/column for that output, but in others it would be better to invalidate the gesture as a trigger. For some actions, it doesn't matter whether the gesture was triggered on a window or not.

A lot of actions like "swipe in the bottom strut", "swipe in window gaps", or "swipe on the dead region when no window is open" can also be implemented by a client on the background layer. Consider not implementing the distinction for "gesture X on region Y", and offloading it to such a client instead.

To whoever reads this comment while implementing code to validate the config we decided upon: I don't envy you in this moment.
Do all background layer-shell clients have an empty input region? I'm not sure it's an easy case to tell apart from having an input region but ignoring all events. Also consider that with CSD, clients have some input region outside their geometry to give a bit of area to the resize handles.
Yeah, edge swipes are usually a separate gesture. Though there's no built-in libinput support for them, as far as I can tell.
Well... yeah, but not really, because "binds" is a more common name for this; plus we already have that section, and it already contains some discrete mouse and touchpad binds (scrolling).
Then maybe strut gestures are not the right thing entirely, and instead what you want is edge swipes in all cases?
Idk, need to think about it. For example, we could have top edge swipe to unfullscreen and bottom edge swipe to go into the overview, where you will be able to scroll workspaces up and down.
Let's keep things simple please, at least until there's some very compelling reason to complicate them.
Ideally all gestures have continuous movement indication, even if the outcome is discrete (think how the current interactive move rubberbands the window out of the layout, even though the result is a discrete "window ended up on the mouse cursor vs. left in the layout"). Discrete actions with no obvious movement don't really belong to gestures honestly? Like, touch swipe to switch tty feels kinda wrong. They can live as regular binds.
Yeah, also interactive resize when it's in two directions at once.
Also need to see what API libinput offers here (if any) and what can be done easily. I'm afraid we don't have an entire team here to implement arbitrary gestures with arbitrary finger counts, etc.
I'd either ignore, or pick the closest target in this case. See how interactive move picks the closest target drop location.
Yes, this unnecessary complexity and IPC lag is exactly why you don't want gestures and stuff to live in separate clients.
I have a tall display, so I set a very large top strut so I don't have to strain my neck, which has opened up a bit of space at the top. While I've not attached a trackpad to my PC, I had an idea to use the scroll wheel on the background to slide the columns around, very similar to what I imagine is the idea being discussed here.

PS: YaLTeR, I switched to daily driving niri (love it! thank you)
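If the gestures section from this thread materializes, that could conceivably be expressed along these lines (node and action names invented for illustration):

```kdl
gestures {
    mouse {
        // Hypothetical: wheel scrolling over the background / strut
        // area slides the columns instead of going to a client.
        wheel-on-background { scroll-columns; }
    }
}
```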
Add some way to customize gesture bindings. I don't have a concrete design yet, but:
I'm thinking something like a `binds` section, but for gestures. To satisfy 1. and 2., maybe encode the number of fingers explicitly into the "key"? This way, I can add new defaults when this section is missing from the config, and when it is present the user will just need to add new gestures manually.

I don't entirely like this though, looks kinda awkward.
Also, I can see a problem in the future where there may be a 2D gesture, and so you will need to be able to bind either `touchpad-swipe-N` to a 2D gesture, or separate `touchpad-swipe-N-horizontal`/`touchpad-swipe-N-vertical` to 1D gestures. But also maybe that's not a problem and can just be verified during parsing.

Also, this "gestures" section that I have in mind seems to be mainly about continuous gestures (swipe and pinch) and not about discrete gestures like double-resize-click (these seem more fit for the regular binds section).
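As a hedged illustration of that parse-time constraint (gesture and action names invented):

```kdl
gestures {
    // Either bind the 2D gesture as a whole...
    touchpad-swipe-3 { interactive-move; }

    // ...or bind each axis separately. Having these set together with
    // touchpad-swipe-3 above would be rejected during parsing.
    touchpad-swipe-3-horizontal { move-column; }
    touchpad-swipe-3-vertical { switch-workspace; }
}
```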
Also, should it be allowed to bind "vertical" to "horizontal" gestures and vice versa? Maybe not.