
Expose window 'surface' size, distinct from the 'inner' size #2308

Closed
rib opened this issue May 29, 2022 · 36 comments · Fixed by #3890
Labels
C - needs discussion Direction must be ironed out S - api Design and usability

Comments

@rib
Contributor

rib commented May 29, 2022

Edited: to use "surface size" instead of "physical size" as discussed. The edit also tries to clarify the table information.

Currently the general, go-to API for querying the size of a winit window is .inner_size(), which has conflicting requirements: some downstream consumers want to know the size at which to create render surfaces, while others want to know the safe, inner bounds for content (which could sit within a potentially larger surface).

For lower-level rendering needs, such as integration with wgpu or directly with OpenGL / Vulkan etc, what downstream wants to know is the size of the surface they should create, which effectively determines how much memory to allocate for a render target and the width and height of that render target.

Incidentally, 'physical size' is how the Bevy engine refers to the size it tracks, based on reading winit's .inner_size(). This is a good example where it's quite clear which semantics they are primarily looking for (since they pass the size to wgpu to configure render surfaces). In this case Bevy is not conceptually interested in knowing about insets for things like frame decorations or mobile OS safe areas.
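As a sketch of that consumer side (stand-in types, not Bevy's or wgpu's real API), the engine takes whatever winit reports as the surface size and uses it, unmodified, to size the render target:

```rust
// Hypothetical sketch: how an engine like Bevy consumes a winit-style
// "surface" size to size a render target. `Extent2d` and
// `RenderTargetConfig` are illustrative stand-ins, not real winit/wgpu types.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Extent2d {
    width: u32,
    height: u32,
}

#[derive(Debug, PartialEq)]
struct RenderTargetConfig {
    width: u32,
    height: u32,
}

/// The engine only cares about the size in device pixels that the render
/// surface should be allocated at; insets and safe areas are irrelevant here.
fn configure_render_target(surface_size: Extent2d) -> RenderTargetConfig {
    RenderTargetConfig {
        // Zero-sized surfaces are typically invalid, so clamp to at least 1.
        width: surface_size.width.max(1),
        height: surface_size.height.max(1),
    }
}
```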

Conceptually the inner_size() is 'inner' with respect to the 'outer' size, and the outer_size() is primarily applicable to desktop window systems which may have window frames that extend outside the content area of the applications window, and may also be larger than the surface size that's used by the application for rendering. For example on X11 the inner size will technically relate to a separate smaller child window that's parented by the window manager onto a larger frame window.

Incidentally, on Wayland, which was designed to encourage client-side window decorations and to sandbox clients, the core protocol doesn't let you introspect frame details or an outer screen position.

Here's a matrix that tries to summarize the varying semantics for the inner/outer and surface sizes across window systems, to help show why I think it would be useful to expose the surface size explicitly, decoupling it from the inner_size:

| Window System | Outer Size | Inner Size | Surface Size |
|---|---|---|---|
| X11 | Size of the parent frame window owned by the window manager (e.g. with title bar and border around the content). | Size of the nested content window (usually a child of the frame window). This may vary, but I want to highlight that on X11, due to its async nature, there are times where it makes sense to optimistically refer to a pending/requested size (so e.g. .set_inner_size() followed by .inner_size() can return a pending size). | Size of the nested content window, but as the remote, server-side size. In some situations the driver may need to reconcile mismatched surface/window sizes due to async resizing of the window, e.g. using a top-left gravity and no scaling. |
| Wayland | wl_surface size. The protocol has no notion of frames, but clients may draw their own window decorations and impose their own inner inset. | Based on the wl_surface size but application defined. Wayland clients may have client-side decorations whereby an 'inner size' could be smaller than the surface (a bit like a self-imposed safe area as seen on mobile). | Size of the client-side allocated wl_buffer. Notably there may be a scale and rotation between the wl_buffer and the wl_surface to account for the physical display (e.g. a client may render rotated 90 degrees for a display connected with an internal rotation that doesn't match the product from the user's point of view). |
| Windows | Size of the full frame (e.g. with title bar and border around the content). | Content size. | Same as the content size, I think? (This is the window system I'm least familiar with, so I'm not clear on how the compositor maps rendered buffers to windows, or what kinds of transforms it allows, if any.) |
| macOS | NSWindow size. | NSView size. | CAMetalLayer drawableSize. |
| iOS | UIView size. | UIView safeAreaInsets. | CAMetalLayer drawableSize. Note that wgpu appears to ignore sizes given by the application when configuring surfaces, which is lucky considering winit reports an inset safe area that also gets treated as a surface size for configuring render targets. |
| Android | Screen size, possibly transposed according to how the display is physically mounted. | Inset (safe) content area. | SurfaceView size. Note: similar to Wayland, it's possible to have a render-surface transform to account for how a display is physically mounted, to take full advantage of hardware compositing. |
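The wl_buffer/wl_surface relationship in the table can be sketched as follows (assumed semantics: an integer buffer scale plus a 90-degree-step output transform; the types here are illustrative, not libwayland bindings):

```rust
// Sketch: a Wayland client's wl_buffer can differ from the wl_surface size
// by an integer buffer scale and a display transform. A 90/270 degree
// transform swaps width and height.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Size {
    width: u32,
    height: u32,
}

#[derive(Clone, Copy)]
enum Transform {
    Normal,
    Rotated90,
    Rotated180,
    Rotated270,
}

fn buffer_size(surface: Size, buffer_scale: u32, transform: Transform) -> Size {
    let scaled = Size {
        width: surface.width * buffer_scale,
        height: surface.height * buffer_scale,
    };
    match transform {
        // 0 and 180 degree rotations keep the aspect; 90 and 270 swap axes.
        Transform::Normal | Transform::Rotated180 => scaled,
        Transform::Rotated90 | Transform::Rotated270 => Size {
            width: scaled.height,
            height: scaled.width,
        },
    }
}
```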

An example where the conflation of inner and surface size is problematic is Android, where we want to let downstream APIs like Bevy know what size they should allocate render surfaces at, but we also want to let downstream users (e.g. also Bevy, or any UI toolkit such as egui) know the safe inner bounds for rendering content that won't be obscured by things like camera notches or system toolbars. (ref: #2235)

@kchibisov
Member

Hm, I wonder what the problem with inner_size is right now? It does return the size in physical pixels (not logical ones, so all scaling is already accounted for). And inner_size is the inner size that is intended to be used by the renderer, so it doesn't include decorations, etc.

This is also what is being used by downstream consumers.

> An example where the conflation of inner and physical size is problematic is on Android where we want to let downstream APIs like Bevy know what size they should allocate render surfaces but we also want to let downstream users (e.g. also Bevy, or any UI toolkit such as egui) what the safe inner bounds are for rendering content that won't be obscured by things like camera notches or system

The inner size is the size you should be using for drawing. What you want is a safe_area method on the window, so users will know the offset they should apply within their buffer? Also, the safe area should be related to the frame, etc.

@rib
Contributor Author

rib commented May 29, 2022

The main problem is that "inner_size" currently has two different usages (physical and safe size), and yeah there's some potential for splitting the other way like you're suggesting (adding a new api for the safe area).

I'm not sure it's as clear cut as you suggest though...

For example on iOS the inner_size right now is the safe_area not the physical size.

It was also suggested in #2235 that the inner size could represent the safe area on Android.

I think it's fair to say that intuitively, just based on the vocabulary used, 'inner size' could reasonably be expected to return the safe size, perhaps more so than the physical size. The vocabulary right now doesn't seem like a good match for reporting the physical size (and in fact it doesn't consistently report the physical size)

One other technical reason to consider splitting it the other way (e.g. add a physical_size API instead of adding a safe_size API) is that the existing inner_size API already has an associated inner_position which logically makes sense for a safe area, whereas a physical size doesn't logically have a position.

Hope that clarifies my thinking a bit.

@madsmtm madsmtm added S - api Design and usability C - needs discussion Direction must be ironed out labels Jun 23, 2022
@madsmtm
Member

madsmtm commented Jun 23, 2022

I agree that inner_size has been conflated to mean two things, and we should change that.

I'm not sure "physical size" is the better name; winit already uses that in dpi::PhysicalSize. Maybe "content size", "surface size" or "drawable size", to mimic what the underlying OSes call it?

What is expected of the set_inner_size function? I would assume that's also modifying the "physical size", and should be renamed accordingly?

@kchibisov
Member

set_inner_size should set the drawable surface size; the same applies to inner_size.

@rib
Contributor Author

rib commented Jul 27, 2022

Right, another name might help avoid confusion with the existing "PhysicalSize" APIs. Physical size was my first preference, since I think that's what I'd want to call it if there were no conflict, and Bevy is at least one example that also seems to show a preference for the term "physical size" for this.

"surface size" might be a good alternative though, since I'm generally talking about the size that would be used to create a rendering surface, such as an EGL/GLX/WGPU surface.

"content" size could be confused with the "inner" size I think - i.e. that sounds like the area where the application will draw which might be smaller than the surface size if there's a safe area.

Summary of Terminology

This is how I could see the terms being defined:

  • "Outer Size" = The size of the window frame, or most-outer bound of a window. The outer size can conceptually have an associated position that may be in screen/display coordinates.
  • "Inner Size" = The size of the content area within the outer bounds where the application will draw. This may be an inset safe area on mobile devices, or an inset that represents the window frame. The inner size conceptually has an associated offset position within outer coordinates (i.e. any offset is relative to the top left corner of the outer bounds).
  • "Surface Size" = The size that a render target should have. For window systems with server-side decorations this may == the inner size. For window systems with safe areas or client-side decorations it may == the outer size. The surface size does not conceptually have an associated position.

Any API to change the outer or inner size is implicitly going to have to also resize the surface size, and the exact relationship between the surface size and inner/outer sizes may be backend specific.
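Under these definitions the three sizes relate roughly as follows (a sketch of the assumed relationships, which remain backend specific): the outer size adds any server-side frame to the surface size, and the inner size subtracts any safe-area insets from it. With no frame (client-side decorations, mobile) outer == surface; with no safe insets (desktop server-side decorations) inner == surface.

```rust
// Illustrative sketch only; real backends may not compose sizes this simply.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Size {
    width: u32,
    height: u32,
}

#[derive(Clone, Copy, Default)]
struct Insets {
    left: u32,
    right: u32,
    top: u32,
    bottom: u32,
}

/// Outer size = surface size plus any server-side frame decoration.
fn outer_size(surface: Size, frame: Insets) -> Size {
    Size {
        width: surface.width + frame.left + frame.right,
        height: surface.height + frame.top + frame.bottom,
    }
}

/// Inner size = surface size minus any safe-area insets.
fn inner_size(surface: Size, safe: Insets) -> Size {
    Size {
        width: surface.width.saturating_sub(safe.left + safe.right),
        height: surface.height.saturating_sub(safe.top + safe.bottom),
    }
}
```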

@madsmtm
Member

madsmtm commented Aug 10, 2022

I'm fine with that naming scheme, especially if we add a small section in the docs where we specify this terminology.

@rib
Contributor Author

rib commented Nov 14, 2022

Just to note here, I did a fairly sweeping edit of the original issue, to refer to "surface size" instead of "physical size" since I think there was some agreement that would be better terminology (would hopefully avoid confusion with the PhysicalSize type)

@rib rib changed the title Expose 'physical' window size, distinct from the 'inner' size Expose window 'surface' size, distinct from the 'inner' size Nov 14, 2022
@kchibisov kchibisov added this to the Version 0.29.0 milestone Jun 9, 2023
@kchibisov
Member

I think the inner_size must return the actual size of the buffer you should create. So for Android it should be the entire screen, including the safe area. The same API would work on macOS, for example, where they also have notches.

However, we must add a safe_area call to report the safe drawing area of the window, describing the notches and such, or any occluded area. This information could serve as a hint to e.g. offset the window viewport or draw into the buffer with an offset.

Wayland could also get a protocol for that, given that such stuff is exposed in edid iirc, so compositors could deliver it.

How does it sound @rib?

@rib
Contributor Author

rib commented Jul 11, 2023

I think if the 'inner' size effectively becomes the size for surfaces then it ends up being a misleading / inappropriate name.

I believe "inner" in the current API name was originally intended to mean that it was the size of the window inside of the frame. That's still a useful thing to be able to query, but it varies across window systems whether that relates to the size of the surface.

I tried to distinguish these concepts in the table, but maybe the table is over complicated / unclear, I'm not sure.

If there's an API that would be specifically documented to provide the size that surfaces can be allocated at, why would you want to call it the "inner_size" ?

@kchibisov
Member

I'd suggest renaming outer_size to window_size and inner_size to surface_size, and adding a safe_area. What do you think?

@daxpedda
Member

I think it doesn't make sense to include Occluded into this.

The "surface size" is unlikely to change often, but Occluded will. If users are supposed to use the "surface size" to draw, changes to it might warrant recreating buffers and re-configuring the surface, which is computationally expensive; this is how wgpu users currently handle Resized. Occluded, on the other hand, is temporary, and users most likely won't change much except not drawing in the occluded area.

So I think these two concepts deserve to be differentiated.
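The distinction argued above can be sketched as a toy renderer (hypothetical handler names, not winit's actual API): a real surface-size change triggers expensive reconfiguration, while occlusion only toggles whether drawing happens at all.

```rust
// Illustrative sketch of why resize and occlusion deserve separate events.
struct Renderer {
    surface_size: (u32, u32),
    reconfigure_count: u32, // counts expensive surface reconfigurations
    occluded: bool,
}

impl Renderer {
    /// Expensive path: only reconfigure when the surface size really changed.
    fn on_surface_resized(&mut self, new: (u32, u32)) {
        if new != self.surface_size {
            self.surface_size = new;
            self.reconfigure_count += 1; // recreate swapchain/buffers here
        }
    }

    /// Cheap path: just remember the state and skip drawing while occluded.
    fn on_occluded(&mut self, occluded: bool) {
        self.occluded = occluded;
    }

    fn should_draw(&self) -> bool {
        !self.occluded
    }
}
```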

Overall the proposal looks good to me.
Web is missing in the OP, which could be handled by env(safe-area-inset-x), but I'm not sure how I feel about that without #696.

@kchibisov
Member

@daxpedda the occlusion naming here was a bit confusing; it's mostly about notches due to hardware (read: MacBooks or phone cameras). It's commonly called a safe area though (the area where you can draw such that your content won't be obscured by hardware limitations).

I'd have to look through what such APIs usually expose.

Also, maybe we should call it view_size instead of inner size? Because surface_size could be confused with Vulkan/EGL surfaces, while it's not those.

@daxpedda
Member

I like it!
My current preference would be: window_size > view_size > safe_size.

@kchibisov
Member

I think it should be safe_area, since an area could have gaps. I'm not sure how to even expose it yet; I have to look into the APIs we have.

@daxpedda
Member

If we can expose it I would also be in favor of calling it safe_area instead of safe_size.
I can say at least on Web it's not possible to get that information.

@rib
Contributor Author

rib commented Jul 12, 2023

> Also, maybe we should call view_size instead of inner size? Because surface_size could be confused with vulkan/egl surfaces, while it's not them.

Being clear that the size is the size that should be used for creating vulkan / egl surfaces was the reason why I think there should be a "surface_size" API - that would be the unambiguous purpose of the API, to know the size that should be used when creating GPU API surfaces. Referring to that as a "view" size doesn't really mean anything to me sorry - how would you define what a 'view' is?

If it weren't called "surface_size" I would probably want to simply refer to it as the "size" or "window_size" - i.e. the canonical size of the window which can be documented as the size that surfaces should be allocated at.

Is there a reason you want to stop exposing the 'inner' / 'outer' sizes as a way of exposing the geometry of window frames?

I don't have strong opinions about that since I don't really know when I would ever need to know the size of a window frame (except if they are client-side).

Insets could maybe be queryable with an enum, since there can be lots of different insets on mobile platforms.

E.g. for android see:
https://developer.android.com/reference/android/view/WindowInsets#getInsets(int)

It's notable also that it can be desirable for applications to e.g. render underneath an onscreen navigation bar on android (which may be transparent) but they need to be aware that the nav bar won't be sensitive to application input (only for the nav buttons) and so the semantics of different insets are pretty important.
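For illustration, an insets-by-type query could look like the following hypothetical Rust sketch, loosely mirroring Android's WindowInsets.getInsets(int). All names are invented for the example (neither winit nor Android API), and the per-edge max is just one plausible combination policy:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum InsetType {
    StatusBar,
    NavigationBar,
    DisplayCutout,
}

#[derive(Clone, Copy, Debug, Default, PartialEq)]
struct Insets {
    left: u32,
    right: u32,
    top: u32,
    bottom: u32,
}

struct WindowInsets {
    by_type: HashMap<InsetType, Insets>,
}

impl WindowInsets {
    /// Combine the requested inset types per edge (taking the max),
    /// analogous to passing a type mask to getInsets(int).
    fn get(&self, types: &[InsetType]) -> Insets {
        types
            .iter()
            .filter_map(|t| self.by_type.get(t))
            .fold(Insets::default(), |acc, i| Insets {
                left: acc.left.max(i.left),
                right: acc.right.max(i.right),
                top: acc.top.max(i.top),
                bottom: acc.bottom.max(i.bottom),
            })
    }
}
```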

@kchibisov
Member

I think by default winit should allow users to draw into whatever is physically possible, and then they should use insets to offset their content. How does this sound, @rib?

My only "issue" with surface_size is that request_surface_size could be read as resizing the EGL surface, because on Wayland you resize it manually, and that could confuse some folks. A view is usually a view inside the window, like NSView.

How do folks on Android usually handle all of that? Do they use glViewport to offset based on insets and just glClear to fill everything with the color they want? I'd really like surface_size to return the maximum of what you could use for drawing.
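One way to read the glViewport suggestion above: the render target stays at the full surface size, and insets only move the viewport for "safe" content. A minimal sketch (assuming a GL-style bottom-left origin; the function and numbers are illustrative, not a real API):

```rust
// Sketch: compute a "safe" viewport inside a full-surface render target
// from per-edge insets, as one might pass to glViewport.
#[derive(Debug, PartialEq)]
struct Viewport {
    x: i32,
    y: i32,
    width: i32,
    height: i32,
}

fn safe_viewport(
    surface_w: i32,
    surface_h: i32,
    left: i32,
    right: i32,
    top: i32,
    bottom: i32,
) -> Viewport {
    Viewport {
        x: left,
        y: bottom, // GL's viewport origin is at the bottom-left of the surface
        width: surface_w - left - right,
        height: surface_h - top - bottom,
    }
}
```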

github-merge-queue bot pushed a commit to linebender/xilem that referenced this issue Jul 1, 2024
As I understand it, on iOS the outer_size corresponds to the size of the window, meaning when the surface is rendered using inner_size (the safe/non-obscured size) the elements get stretched.

There's an open issue in winit about clarifying/standardizing the
different sizes, but until that's done switching to outer_size fixes the
issue.

The touch positions now also match the rendering

Winit issue: rust-windowing/winit#2308

Fixes #419


<details>
  <summary>Before</summary>

![xilem ios](https://github.com/linebender/xilem/assets/7433263/8bde0880-90e7-498b-b45c-c9964a0ae153)

</details>


<details>
  <summary>After</summary>

![ios after](https://github.com/linebender/xilem/assets/7433263/8f6703d3-bb99-4f42-a941-d936ca5da6cf)


</details>

@bjornkihlberg

I've been banging my head against a problem with trying to control pixels and I just realized this was the issue. So I'm writing my solution here in case anyone runs into this or googles for a solution.

Here's a square rendered with a debug texture sampled in the shader using textureLoad:
The issue is subtle but the horizontal and diagonal lines show that something is off

If my window is set to have size 800x600 and is set to be decorated,

```rust
let window = event_loop
    .create_window(
        Window::default_attributes()
            .with_inner_size(PhysicalSize::new(800, 600))
            .with_decorations(true),
    )
    .unwrap();
```

the real outer size is 800x600 but is reported as 800x635, and the inner size is actually 800x565 but is reported as 800x600.

```rust
dbg!(window.inner_size());
dbg!(window.outer_size());
```

```
[src/main.rs:186:30] window.inner_size() = PhysicalSize {
    width: 800,
    height: 600,
}
[src/main.rs:187:30] window.outer_size() = PhysicalSize {
    width: 800,
    height: 635,
}
```

Solution

```rust
let (actual_surface_width, actual_surface_height) = {
    let inner_size = window.inner_size();
    let outer_size = window.outer_size();
    (
        2 * inner_size.width - outer_size.width,
        2 * inner_size.height - outer_size.height,
    )
};
```

If I use actual_surface_width and actual_surface_height to configure my surfaces, I get correct results:
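For context on why this arithmetic works on the numbers reported above: if the reported inner size equals the requested size while the reported outer size adds the frame height on top, then 2 * inner - outer happens to recover the visually correct surface size. This is a reverse-engineered workaround against the sizes in this comment, not a documented relationship, and may not generalize to other backends:

```rust
// Sketch of the workaround's arithmetic: with reported inner = requested
// and reported outer = requested + frame, 2 * inner - outer = requested - frame.
fn workaround_surface_size(inner: (u32, u32), outer: (u32, u32)) -> (u32, u32) {
    (2 * inner.0 - outer.0, 2 * inner.1 - outer.1)
}
```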

@MarijnS95
Member

@bjornkihlberg what backend and winit version is this on? That seems incorrect regardless of how inner and outer size are interpreted (i.e. the entire thing is 600 pixels high, so it shouldn't ever return 635).

@kchibisov
Member

I bet that was Wayland, and what was reported is correct. On Wayland you're the one controlling the sizes all the way; winit mostly tells you about suggestions from the compositor, and both sizes are correct.

If you end up with visually smaller sizes, it's on you.

@MarijnS95
Member

@kchibisov to then ask the question that I wanted to ask: is there some math that adds the "expected" title bar size to the surface size, even though in the end CSD renders it within the surface? The screenshot has a window 600 pixels in height, so I don't see where this arbitrary outer_size().height = 635 could possibly come from.

Or is that actual wayland window size (if that's even a thing) separate from committed buffer size?

@kchibisov
Member

@MarijnS95 we add them ourselves because outer size is the size of the window including the decorations.

@bjornkihlberg

> @bjornkihlberg what backend and winit version is this on? That seems incorrect regardless of how inner and outer size are interpreted (i.e. the entire thing is 600 pixels high, so it shouldn't ever return 635).

It was on Wayland.

@MarijnS95
Member

> @MarijnS95 we add them ourselves because outer size is the size of the window including the decorations.

@kchibisov then who rendered the decorations within inner_size() in the screenshots above, is that winit's CSD feature?

@kchibisov
Member

Decorations are not within the inner_size; they are added to outer_size since they reside on a separate subsurface, not visible to the user.

@bjornkihlberg could you provide a WAYLAND_DEBUG=1 log while reproducing your issue? If the window is smaller, it's likely your buffer is smaller. Winit has no idea about the window's buffer, so it's expected if they've become desynced.

@MarijnS95
Member

> Decorations are not within the inner_size, they are added to outer_size since they resize on a separate subsurface, not visible to the user.

It looks like we are misunderstanding each other. I pointed out the same, because that is not what seems to be happening in #2308 (comment), and said that this must be some kind of a bug. So we agree :)

@kchibisov
Member

> and said that this must be some kind of a bug. So we agree :)

I mean, they asked for 800x600, they got 800x600, and given that it's Wayland, it cannot be anything other than 800x600. So if they think it's 800x565, that's not true, since they control the size entirely. In any case, the WAYLAND_DEBUG=1 log will pretty much tell everything.

@MarijnS95
Member

@kchibisov download the first screenshot and open it in your favorite image editor. Count or measure the pixels.

The total window size, including decorations, is 802x601. The inner size of the white surface is 800x565.

That doesn't match the returned inner_size() => (800, 600) and outer_size() => (800, 635), does it? This is the issue that @bjornkihlberg pointed out; it's not "if they think that it's 800x565 it's not true" 🙂

@MarijnS95
Member

MarijnS95 commented Nov 3, 2024

Also it looks like GitHub is having a bug with timezones. I just posted #2308 (comment) in reply to #2308 (comment) (which was in reply to #2308 (comment)):


Probably because the servers are in the US, whose time went back one hour from CDT to CST, 10 minutes ago?

@MarijnS95
Member

@kchibisov sure, let's wait for WAYLAND_DEBUG=1. What I'm saying is that yes, they requested 800x600 for inner_size and got that. However, decorations are being rendered within those 800x600: who is responsible for that?

At least it makes sense that if CSD is never rendered to the subsurfaces around the window, Wayland won't show them, and the whole window will always appear to be inner_size() and never outer_size() until those buffers are submitted.

@kchibisov
Member

> @kchibisov download the first screenshot and open it in your favorite image editor. Count or measure the pixels.

The size is entirely controlled by the user, as I said: whatever they draw is what is displayed, and if they draw less, it's their issue, not winit's. inner_size just tells you what to use.

@kchibisov
Member

> @kchibisov sure, let's wait for WAYLAND_DEBUG=1. What I'm saying is that yes, they requested 800x600 for inner_size and got that. However, decorations are being rendered within those 800x600: who is responsible for that?

I completely don't understand you. They requested 800x600 for the inner size, got 800x635 for the outer size, and the decorations are at the (0, -35) location. I have no idea where they get other sizes from; we put decorations at (0, -35) relative to the surface origin, i.e. in negative coordinate space. If the compositor is broken, it could maybe move them into the surface for whatever reason.
