HiDPI scaling finetuning #837
Thank you, this is a nice overview of the complexity of things! It definitely makes it clearer that a lot of the weird edge cases are due to X11 being half-assed about it. We'll be rid of it someday, honest.... I'm also not sure we can dismiss mobile platforms as easy though; mobile devices often have much higher DPIs than desktops, 300+ rather than ~100. Making stuff look nice on both is a non-trivial problem, and I sure don't know how to solve it. For completeness, the definition of "hidpi factor" is
Not to restate what I've already stated at too much length (:-P) but as a user, I am attracted to the Windows style. By default you get Something More Or Less Sane, and if you tell the system you know what you're doing it gets out of your way and lets you handle it. My use case is basically only gamedev though, which is fairly narrow and specialized. |
Android, at least, provides fairly detailed docs on how it handles DPI scaling. That's available here. Regarding X11 DPI scaling: For winit, my general rule of thumb in situations like this is to do what everyone else does. GDK, Qt, Firefox, and Chrome have all faced this problem before; what solutions did they come up with? If they collectively decided to just not handle DPI scaling, that's the solution we should use, but if there are cleaner solutions buried in their source code it would be silly not to find them out and implement them here. I've got thoughts on reducing the reported friction outside of X11 and solutions for it, but those thoughts aren't fully baked so I'll wait until I figure out what exactly I'm trying to say there before I post them here. |
From the point of view of winit, I was rather thinking in terms of API constraints: a mobile platform will very likely have a constant dpi factor during the whole run of the app, and it also imposes the dimensions of the window (most often the whole screen). This makes questions like "How to handle resizing?" or "How to specify the initial window size?" basically nonexistent.
Though if there were no question about X11, isn't that pretty similar to what winit currently does (and wayland and macOS), except everything is handled in "logical pixels" / "points" rather than physical pixels? The main advantage I see is that it allows most of the app code to be invariant under DPI change: only the drawing logic needs to be DPI-aware if all input / UI processing is done in points.
I don't think these are actually what we want to compare winit with. Winit's scope is more similar to projects like FreeGlut or GLFW on this point. Though, for the record, from my experience of having a hidpi laptop, both Qt and GTK (and to some extent Firefox) "solve" it mostly by being configured using env variables, defaulting to not doing any hidpi handling on X11, see https://wiki.archlinux.org/index.php/HiDPI#GUI_toolkits for example. |
I was mainly talking about looking at how they calculate DPI scaling values. If they don't even bother, then I think we should just make X11 DPI support second-class - yes, it breaks multi-monitor DPI scaling, but trying to solve that has caused enough problems that I'm tempted to just not even bother with it any more. Anyhow, my thoughts on reducing API friction:
TL;DR: I think we should drop
Why do I think that's the case? There are two broad classes of applications Winit aims to support: 3D/2D rendering for games, and 2D rendering for GUIs. Broadly speaking, logical pixels are useless for 3D rendering, so I won't spend any time talking about them in that context. It's less obvious why Winit's logical pixels are useless for GUI rendering: it's useful for GUI applications to express units in terms of logical pixels, right? Well, yes. However, if a GUI application wants to render its assets without blurring, it still has to choose the proper assets for the pixel density, and manually align those assets to the physical pixel grid. To do that, the GUI's layout engine has to distribute the fractional part of the logical pixels throughout the output image as described here (admittedly for text justification, but the same principle can be applied to widget layout). It can only do that effectively in particular, safe, areas (mainly whitespace areas); if it doesn't take care to do it in those safe areas, it'll end up stretching out parts of the image that shouldn't be stretched, such as text, with the following result:
Winit's logical pixel model doesn't account for those subtleties, and using it as suggested just distributes fractional pixel values evenly through the rendered image, resulting in artifacts like those shown above. To reiterate, the proper way for an application to account for fractional DPI scaling values is by:
1. choosing assets appropriate for the display's actual pixel density, and
2. aligning those assets to the physical pixel grid, distributing the fractional remainder only in safe (whitespace) areas.
Winit is, by design, so low in the application rendering stack that it cannot assist with either of those things. As such, we shouldn't even pretend to do it, and just provide physical pixel values w/ a scaling factor to downstream applications. Getting rid of
EDIT: To be clear: we shouldn't drop |
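To make the "distribute fractional pixels only in safe areas" point above concrete, here is a minimal, self-contained sketch (not winit code; the function and numbers are purely illustrative) of the kind of error diffusion a layout engine can apply so that only whitespace gaps absorb the fractional part of a non-integer scale factor:

```rust
// Illustrative sketch (not a winit API): distribute the fractional physical pixels
// produced by a non-integer scale factor across "safe" gaps, so each gap lands on
// the physical pixel grid and the total width still adds up.
fn distribute_gaps(logical_gaps: &[f64], scale: f64) -> Vec<u32> {
    let mut physical = Vec::with_capacity(logical_gaps.len());
    let mut carry = 0.0; // accumulated fractional error
    for &gap in logical_gaps {
        let exact = gap * scale + carry;
        let rounded = exact.round();
        carry = exact - rounded; // push the leftover fraction into the next gap
        physical.push(rounded as u32);
    }
    physical
}

fn main() {
    // Four 10-logical-pixel gaps at a 1.25 scale factor are 12.5 physical pixels each.
    // Error diffusion yields [13, 12, 13, 12] instead of blurring every gap.
    println!("{:?}", distribute_gaps(&[10.0, 10.0, 10.0, 10.0], 1.25));
}
```

Text and other non-stretchable content would be laid out at exact physical sizes, with only these gap widths absorbing the rounding error.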
I see your point @Osspial. I'm not yet sure I fully agree, but it makes sense. However, if we go that route, I believe hidpi awareness should be opt-in, maybe with a |
No because winit lacks the ability to tell it to ignore the hidpi scaling factor.
Usually when one clicks on a window one is clicking... well, on something. So your application would need to convert logical to physical pixels or vice versa at some point anyway.
...Or by the application itself, in Qt's case by calling With regards to That said, I'm fine with either opting in or opting out, since unlike converting all logical pixels to physical ones, that's an easy choice to make that only happens in one place. |
This is often a big problem with Linux WMs too, since it can lead to resizing twice immediately at startup, which many don't really like very much. On X11, I think Qt tries hardest to provide proper HiDPI support. It supports both the Xrandr method of scaling and fixed scaling values for all screens. GTK is surprisingly lacking here, only reading Xft.dpi for fonts (which is fine and works for most users) and only allowing a fixed DPI UI scale through an environment variable. I think if winit would provide physical pixels with a way to get the Xft.dpi value through get_hidpi_factor, and a way to override this to use the dynamic xrandr approach instead (maybe WindowExt on linux can provide something like `with_xrandr_dpi(true)`?), it would likely give developers all the tools necessary to implement DPI however they want (and however a user might want it) on X11. In general I think the biggest difference in the change from physical pixels to logical pixels was the automatic window resizing. For Alacritty this is nice since we don't have to do it manually anymore; I think it would be trivial to implement ourselves but very hard to disable. So if this is not done, it seems like the user is mostly in control of handling DPI themselves, which I'd be in favor of generally. All I'd want personally would be a way to get the hidpi factor; handling everything else myself seems straightforward. Note that this is with a pretty simple application that fully tries to comply with DPI, and we still convert everything to physical pixels anyways. |
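To illustrate the `with_xrandr_dpi` idea above, here is a rough sketch of what such an X11-only opt-in could look like. The trait and method below are hypothetical and do not exist in winit; they only mirror the shape of winit's existing platform-specific extension traits:

```rust
use winit::WindowBuilder;

// Hypothetical, X11-only extension trait in the spirit of winit's existing
// `winit::os::unix::WindowBuilderExt`. Nothing here exists in winit today;
// this only sketches the shape of the proposal above.
pub trait X11DpiExt {
    /// Opt in to the dynamic xrandr-based per-monitor DPI instead of the
    /// static Xft.dpi value.
    fn with_xrandr_dpi(self, enabled: bool) -> Self;
}

impl X11DpiExt for WindowBuilder {
    fn with_xrandr_dpi(self, _enabled: bool) -> Self {
        // A real implementation would record the choice in the platform-specific
        // window attributes; this is only a no-op placeholder.
        self
    }
}
```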
I'm not going to answer individual points in a quote-reply manner, because I feel it would only exacerbate misunderstanding about details, rather than actually advance the discussion. The thing I understand is that uses such as gaming really only deal in physical pixels, and as @Osspial explained, advanced UI layout really needs to account for the physical pixels rather than only work in logical pixels. However, I feel most of the discussion is done taking the X11/Windows model as a basis. Please take into account that these intuitions don't map well to macOS/Wayland, especially on the question of automatic resizing. On Wayland, if the dpi factor changes (by moving the window to another monitor), the physical surface is resized so that the logical size does not change, period. On Wayland, it does not make any sense to request a specific physical size. (But on wayland, the hidpi factor is always an integer, so this simplifies stuff). The initial sequence of the creation of a wayland window is something like this:
This all works together because when submitting its contents, the client actually tells the server "I'm drawing on this window using a dpi factor of XX". In this context, if winit is to always expose physical pixels to the user, I see a few possibilities. Assuming winit has some switch to enable or disable DPI-awareness:
I personally think the first option is the most correct, to ensure the best visual outcome for the user & avoid wasting resources rendering in hidpi when displayed on a lowdpi screen. |
This is certainly true for the per-monitor DPI scaling mode that was introduced in Windows 8.1. However it might be worth mentioning that the older system wide DPI scaling mode, introduced in Vista, lets you choose an integer % between 100 and 500. Users can still opt in to this mode today, as seen from the screenshot I just took on Windows 10 1809. What's more, this value doesn't actually get directly used in Windows. Whatever you insert gets multiplied by 96/100 and saved as an integer in the |
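As a worked example of the 96/100 rule described above (the percentages are only illustrative): a user-selected 150% is stored as 150 × 96/100 = 144, i.e. a DPI of 144, which maps back to a scale factor of 144 / 96 = 1.5; likewise 125% becomes 120, i.e. a factor of 1.25.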
Oof, I'm familiar with that field. It's the enum,
That being said, would it be reasonable to just round the DPI scale factor on X11 to the nearest integer (or 0.25 increment)? I believe DPI scaling exists to make UIs appear (about) the same size on monitors of different pixel densities, and that scaling is standardized in the system's compositor to make the UIs of different apps on the same system appear the same size. If X11 doesn't have a standard "blessed" way of getting a nice DPI scale factor, I think we can only make our best guess and hope that's what most other apps are doing.
In support of @Osspial's proposal to drop the logical units from |
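As a concrete illustration of the rounding suggested above, a minimal sketch (a hypothetical helper, not a winit API):

```rust
// Snap a raw X11-derived scale factor to the nearest 0.25 increment.
fn snap_scale_factor(raw: f64) -> f64 {
    (raw * 4.0).round() / 4.0
}

fn main() {
    // e.g. the infamous 1.0866666666 snaps back to 1.0, while 1.4 snaps to 1.5
    assert_eq!(snap_scale_factor(1.0866666666), 1.0);
    assert_eq!(snap_scale_factor(1.4), 1.5);
}
```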
Yeah, that makes sense.
Absolutely, with one notable specificity: "exposing the hidpi factor before creating a window" is a hairy problem. Currently, not all platforms allow you to know in advance on which monitor your window will be displayed, which means that in a multi-dpi setup, you cannot reliably know in advance what dpi factor will be applied to your window. For this reason, I believe there is some uncertainty with regard to window creation: how should winit interpret the size requested by the user?
Yeah, I believe these are worth keeping too. |
Honestly the more I think about it the more Windows's approach, of all things, makes sense at a conceptual level. Just 'cause it has an actual slider you can adjust in the display settings. If hidpi is a tool to scale things up to look good for the user, then it makes perfect sense to be able to tell applications "your window is X size, oh and also, the user has requested you display at 1.5x scale". The goal is to provide information to the application about what the user wants. Apple's auto-scaling approach is a hack to try to do something, anything, for all those iOS applications that assume they're always on a 1080p screen. Under this approach, @Osspial 's proposal to drop logical units sounds reasonable, though it makes me sad 'cause it involves undoing a lot of hard work in winit. :-( This, annoyingly, also sounds like the exact opposite of what Wayland does, so I want to do some extra research there. For the matter of exposing a hidpi factor before creating a window, unless we can always specify which monitor to have a window on I think we're going to have to choose a least-bad solution. Allow the programmer to either ask a window to be created on a specific monitor and accept it won't always work, or do the current approach of letting them see which monitors have which DPI factors and handling it themselves. Or a bit of both. For my use case, the Principle Of Least Surprise would dictate that when a program creates a window, it always gets a window that is the physical size it asks for, and also gets a
This might be the way to close the loop; let a programmer say "I want a window of physical size 800x600 and hidpi factor 1.0", and after it's created winit can say "here is your window of physical size 800x600, but it got created on a screen with hidpi factor 2.0 so have a |
In its approach, wayland kind of flips this concern on its head. The rationale is that the only thing that really matters to the user is "how big do things appear on the screen", and the actual pixel density used to draw them is a detail. This is especially relevant in multi-dpi setups, where logical coordinates reflect a common space independent of the screen actually displaying the window. And there may be no simple mapping between the abstract space and the physical pixels: suppose the user has set up screen mirroring between a lowdpi and a hidpi monitor. Your app now has two different physical sizes at the same time. In this context, from the wayland point of view, it does not make any sense for a client to request a size in physical units, as the app cannot reliably anticipate the relation between the pixels in the buffer it submits and what ends up on screen. So what the client does is submit a buffer of pixel contents to the compositor, along with the information "btw, this buffer was drawn assuming a dpi factor of XX". The dimensions of the buffer define the logical dimensions of the window (by dividing them by the factor the client used), and the compositor then takes care of mapping this to the different monitors as necessary.

Wayland also tries to be somewhat security-oriented; a general axiom is that the compositor does not trust the clients. So the clients will never know where exactly they are displayed on the screens or if they are covered by another window. Not even if there are other windows. To still allow correct hidpi support, the compositor will provide the clients a list of the monitors the window is currently being displayed on, and the dpi factor of each. But the client will not be able to tell the difference between "being half on one screen and half on the other" and "being on both screens because they are mirrored", for example. The reasonable behavior for clients in this context is to draw their contents using the highest dpi factor of the monitors they are being displayed on, and let the compositor downscale the contents on the lower-dpi monitors.

This is what winit currently does: whenever a new monitor displays the window, it recomputes the dpi factor and changes the buffer size accordingly. As such the logical size of the window does not change, but its physical size does.
So, given what I just described I suppose you can see why this would be difficult. The way a wayland-native app would see it is rather "Here is your window of logical size 800x600, btw you were mapped on a screen with a hidpi factor of 2.0, so your contents were upscaled by the compositor. If you want to adapt to it, tell me and submit your next buffer twice as big." I hope this helped clarify the wayland approach to dpi handling, and how it conflicts with what other platforms (especially X11) do. |
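A small sketch of the client-side policy just described (plain Rust, no Wayland library calls; all names are illustrative): pick the highest scale factor among the monitors currently showing the window and derive the buffer size from the logical size:

```rust
// Sketch of the policy described above: draw at the highest scale factor among
// the monitors currently showing the window, and size the buffer accordingly;
// the compositor downscales the contents on the lower-dpi monitors.
fn choose_buffer_params(
    logical_size: (u32, u32),
    scales_of_overlapping_monitors: &[u32],
) -> (u32, (u32, u32)) {
    // Wayland scale factors are integers; fall back to 1 if the window is not
    // currently mapped on any monitor.
    let scale = scales_of_overlapping_monitors.iter().copied().max().unwrap_or(1);
    let buffer_size = (logical_size.0 * scale, logical_size.1 * scale);
    (scale, buffer_size)
}

fn main() {
    // An 800x600 logical window straddling a scale-1 and a scale-2 monitor:
    // submit a 1600x1200 buffer tagged with scale 2.
    println!("{:?}", choose_buffer_params((800, 600), &[1, 2]));
}
```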
I think I'm with @icefoxen on using physical units to create a window. If there is an api that's like

```rust
// the app knows the logical size of its content
let app_resolution = LogicalSize { width: 800.0, height: 600.0 };

// assume no dpi scaling if the hidpi factor is unavailable yet (e.g. on x11)
let hidpi_factor = get_hidpi_factor_before_window_creation().unwrap_or(1.0);

// at this point, this is the best guess we have for how big our app content
// should be on screen
let window_size = app_resolution.to_physical(hidpi_factor);
let wb = WindowBuilder::new().with_dimensions(window_size);

// Later in the main loop...
events_loop.poll_events(|event| {
    if let winit::Event::WindowEvent { event, .. } = event {
        match event {
            winit::WindowEvent::HiDpiFactorChanged(hidpi_factor) => {
                // if we can only get the hidpi factor after window creation, this
                // event should be fired on the first frame and the app can respond
                // to it before the first frame is presented
            }
            _ => (),
        }
    }
});
```

This lets platforms like Wayland shine while supporting other platforms in a reasonable way. This does mean that winit has to pick a single "best" hidpi factor to provide the app. I think this is reasonable, especially if an app cannot really do "better" (like rendering different parts at different DPIs) even if given the whole set of current possible dpi factors for the window. If an app just doesn't care about hidpi at all, it can work entirely in physical units (without even having to say "I'm rendering at a hidpi factor of 1"):

```rust
let window_size = PhysicalSize { width: 800.0, height: 600.0 };
let wb = WindowBuilder::new().with_dimensions(window_size);

// ...

winit::WindowEvent::HiDpiFactorChanged(hidpi_factor) => {
    // just ignore this event
}
```

If all events from winit are using physical units (e.g. window resize), then app developers can choose to opt out by just ignoring all things hidpi. |
@tangmi I get your intent, but I'm afraid there has been some misunderstanding: the
So your example code, rather than "letting wayland shine", is rather an example of what I described earlier as "interpreting initial window dimensions in a platform-dependent way": as a physical size on all platforms and as a logical size on wayland. |
Ah, my apologies. Given that, however, I'm curious how wayland solves this problem... Is wayland strongly associated with any GUI library (qt, maybe?) or is it just the windowing system that has no opinions on what goes in each window? If it's the latter, I'm curious how people are intended to deal with hidpi settings if the creation of a window (and all mouse, touch, etc input) is in logical units. To me, this seems to imply that wayland is very opinionated that all users correctly handle DPI scaling (by doing everything in logical units) and to only dip into physical units when doing the "low level" thing of dealing directly with gpu surfaces. I know I've been flip flopping on this, but if Wayland, macOS, and Windows all deal in logical units, I think that winit could also deal in logical units. I'm not sure about how X11 fits into this... Chromium and i3 do some gnarly looking stuff to calculate the DPI scale factors. |
Wayland is not associated with any GUI library and rather has a pretty low-level approach to graphics: a window is defined by the buffer of its pixel contents, and the clients are 100% responsible for drawing them. All input is done in logical units, but with sub-pixel precision (fixed-point numbers with a resolution of 1/256 of a logical pixel).
Kind of, yes. The idea is that if your app is not hidpi aware, you just treat logical pixels as if they were physical pixels and the display server will upscale your window on hidpi screens. If you are hidpi aware, the only part of your code that needs to deal with physical pixels is your drawing code. This model is greatly facilitated by the fact that the possible dpi factors are only integers. Wayland is arguably pretty opinionated on this point (with the mantra "half-pixels don't exist"), and argues that your screen is either single, double, or triple (etc.) pixel density, and that finer size configuration should be done by application configuration (font size and friends). |
So if you ask for a window of 800x600 pixels on Wayland, and it creates window of 1600x1200 physical pixels and says "this is a 800x600 logical pixel window with a hidpi factor of 2", how do you draw your stuff at full resolution? As I understand, if you give it a buffer of 800x600 pixels and say "this is drawn at a hidpi factor of 1", it will upscale it to fit. Do you then hand it a buffer of 1600x1200 pixels and say "this can be drawn at a hidpi factor of 2" and it draws it on the screen without transformation? Also, more importantly for my use case, how does this interact with the creation of OpenGL/Vulkan contexts? Those API's have their own opinions about viewport sizes that may or may not be the same as Wayland's. |
That is the general idea yes.
To be exact, the display server does not set the size of your window beforehand, but rather infers it from the dimensions and scale factor of the buffer you submit.
When appropriate the display server can suggest you a new size (during a resize action from the user for example), but you are free to submit buffers of the size you choose (think terminal emulators that only resize on specific increments for example). In return, if the buffer you submitted is bigger than the space the display server allocated to you, it'll likely crop your window to make it fit.
|
Regarding OpenGL / Vulkan, the buffers that the client submits to the display server still have a well defined size in number of pixels, so this is not different.
The specification of the dpi factor of the buffer is done independently by the client, using another API than the one used to submit the buffers, so OpenGL/Vulkan simply do not care about it.
|
I just realised I was not precise enough wrt OpenGL/Vulkan: for OpenGL, the EGL wayland interface has an API to set the pixel size of the drawing surface; this is what glutin's
For Vulkan, I am no expert, but if I understood correctly, Vulkan swapchains are created for a given rendering surface size, and this size is the one used for exporting the buffers to the wayland compositor. |
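For reference, a hedged fragment in the style of the earlier examples, assuming the winit 0.19 / glutin API of that era (`get_inner_size`, `to_physical`, `resize`); it shows the typical reaction to a dpi-factor change when driving a GL surface:

```rust
// Hedged sketch (match-arm fragment, not a complete program): on a dpi-factor
// change, resize the GL drawable to the new physical size so the submitted
// buffer matches what the compositor expects.
winit::WindowEvent::HiDpiFactorChanged(dpi_factor) => {
    if let Some(logical_size) = gl_window.get_inner_size() {
        // the logical size is unchanged; only the physical backing size changes
        gl_window.resize(logical_size.to_physical(dpi_factor));
    }
}
```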
How does wayland dpi stuff interact with events, then? I assume everything is in logical coordinates again. Wayland in general sounds excellent from the point of view of the compositor, and like utter hell from the point of view of the application developer who just wants to have some assurance that what is actually getting shown bears some vague resemblance to what they intend to be shown. Unsurprisingly. I've given enough presentations to be really sympathetic to the idea of wanting to mirror one's desktop on your laptop and on a crappy projector and have it actually look similar in both places. (Lots of projectors, even new, expensive, modern ones, have absolutely absurd resolutions like 854x480 or 1280x800.) But with integer-only DPI factors there's no way it's actually going to look and function very much the same in the not-uncommon cases of, say, the monitor and projector having sizes that are non-integer multiples of each other, or having different aspect ratios. All the compositor can hope to do in that case is draw the window at the larger size, and scale it down/mush it around on the smaller display to make it fit as well as it can, even if the result is ugly. Or ask the application to draw it twice at different sizes. In which case, from the point of view of the application, you have to just accept "this is going to get scaled with ugly interpolation no matter what I do and I can't help it, the only thing to do is draw it large and pray it works out okay", or "I need to handle layout and drawing at two different resolutions at once". Neither of which are actually helped by having your window units be in logical pixels. Ugh. Sorry if I'm beating a dead horse, nobody's sicker of thinking about this than I am. But what is an application even supposed to do if it's mirrored on two monitors, one 2560x1440 and the other 1920x1080? Both have the same aspect ratio, both are very common and reasonable modern monitor sizes. Very sane situation to have. One monitor is 4/3 times the size of the other. An integer hidpi factor isn't going to cut it. Again, the only options are "scale with interpolation" or "get the application to draw twice at different sizes". |
Yes, all events are done in logical sizes. I don't get your point for the rest though. If you're mirroring on two monitors with very different dimensions, the display server (be it X11, Wayland, Windows or MacOS) has only two solutions: either the two monitors won't cover the same rectangular surface of the "virtual desktop" (which is what Xorg does IIRC), or it'll have to scale things on one of the monitors, and I don't see how using logical coordinates in the protocol has anything to do with the decision of the display server to do one or the other (and the Wayland protocol does not mandate anything here). In any case, complaining about it here will achieve nothing apart from calming your nerves on me. I didn't design the protocol and have no power to change it. Unless your point is that winit should not support Wayland at all, in which case I don't know what to say to you. Anyway, at this point I just don't care anymore. I think I've pretty clearly exposed the constraints of the Wayland platform, so I'll just let you all decide on the API you want. Ping me when you have decided and I'll shoehorn the Wayland backend into it. |
Yeah. Sorry. That wasn't really directed at you personally as much as me just being frustrated. Sorry it came off that way. 😢 The point I'm trying to make is basically that hidpi is a false solution that doesn't actually solve any of the problems it pretends to solve. Trying to deal with it only makes life more complicated for no gain, because there are so many situations it doesn't cover. I had hoped that Wayland would make the world's graphics ecosystem better, not worse, but that doesn't seem to be happening. |
Okay, I get that you are frustrated, even though I don't fully get the source of your frustration. Given that I'm mostly familiar with how Wayland handles DPI, I'm kinda frustrated by how x11 seems to not handle it, from my point of view. As @anderejd reminds us, our goal here is to find a pragmatic API that works reasonably well on all platforms. Let me make a proposal that attempts that.
How does that sound? |
@vberger That looks good. The one point I'd add would be to change the units in |
Agreed about |
I wasn't aware that was something Wayland could do. If that's the case, I see no reason to not expose that. |
X11 with XI2 also provides subpixel precision for mouse positions. This is useful e.g. for painting in graphics applications, where it can noticeably reduce aliasing. |
If people agree that the way to go is to default to |
IMO it'd be better to roll it in the same release as EL2.0. I'd tend to cluster all breaking changes together rather than do several largely breaking releases in a short timespan. |
I believe that it's possible for input devices to have a higher resolution input space than the screen they're displaying to (e.g. pen digitizers), so
An example is the |
This is actually pretty common even for conventional input devices, in my experience. There's really no reason for any analog input device's precision to be limited to the display resolution, of all things. |
Yeah, as another example, it looks like Wacom wintab's
Additionally UIKit's |
I'm gonna take a stab at implementing this. |
@vberger Actually, thinking about it, does it maybe make more sense to have the window creation API take either a

```rust
pub enum Size {
    Physical(PhysicalSize),
    Logical(LogicalSize),
}

impl From<PhysicalSize> for Size {/* ... */}
impl From<LogicalSize> for Size {/* ... */}

pub struct WindowAttributes {
    pub inner_size: Option<Size>,
    pub min_inner_size: Option<Size>,
    pub max_inner_size: Option<Size>,
    /* ... */
}

impl WindowBuilder {
    pub fn with_inner_size<S>(mut self, size: S) -> WindowBuilder
        where S: Into<Size>
    {/* ... */}

    pub fn with_min_inner_size<S>(mut self, min_size: S) -> WindowBuilder
        where S: Into<Size>
    {/* ... */}

    pub fn with_max_inner_size<S>(mut self, max_size: S) -> WindowBuilder
        where S: Into<Size>
    {/* ... */}
}
```

IMO this would be easier to document/implement than having an optional target DPI parameter, but I'm also the person that came up with it so I may not be the best judge of that. |
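Assuming the `From` impls above, usage could then look like this (a sketch of the proposed API, not of current winit):

```rust
// Usage sketch of the proposal above: the caller picks the unit by picking the
// type, and `Into<Size>` erases the difference at the builder boundary.
let physical = WindowBuilder::new()
    .with_inner_size(PhysicalSize::new(800.0, 600.0));

let logical = WindowBuilder::new()
    .with_inner_size(LogicalSize::new(800.0, 600.0))
    .with_min_inner_size(LogicalSize::new(320.0, 240.0));
```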
@Osspial I guess it makes sense yes. And in that case, such API could also be used for |
@vberger Yep. I'll go ahead on that design, then. |
This helped us very much, thank you for providing it. (Sorry that this comment doesn't really provide any constructive feedback, I hope it's ok anyway.) |
I believe this is all implemented now and can be closed? At least, the biggest issue I have with Wayland scaling is the lack of support for non-integer scale factors, but that's not really something we can solve here. |
I'll close this issue given that the semantics outlined here are established in winit. |
Thanks to previous discussions (#105) and some awesome work by @francesca64 in #548, winit now has a quite fleshed-out dpi-handling story. However, there is still some friction (see #796 notably), so I suggest putting everything on the table to figure out the details.
HiDPI handling on different platforms
I tried to gather information about how HiDPI is handled on different platforms; please correct me if I'm wrong.
Here, "physical pixels" represents the coordinate space of the actual pixels on the screen. "Logical pixels" represents the abstract coordinate space obtained from scaling the physical coordinates by the hidpi factor. Logical pixels generally give a better feeling of "how big the surface is on the screen".
Windows
On Windows an app needs to declare itself as dpi aware. If it does not, the OS will rescale everything to make it believe the screen just has a DPI factor of 1.
The user can configure DPI factors for their screens by increments of 0.25 (so 1, 1.25, 1.5, 1.75, 2 ...)
If the app declares itself as DPI aware, it then handles everything in physical pixels, and is responsible for drawing correctly and scaling its input.
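For reference, a minimal sketch of declaring the older, system-wide form of awareness from code using the `winapi` crate (modern applications usually do this through the application manifest or the newer per-monitor awareness APIs instead):

```rust
// Minimal sketch using the `winapi` crate (with its `winuser` feature enabled):
// declare the process DPI-aware so the OS stops bitmap-stretching the window.
#[cfg(windows)]
fn declare_dpi_aware() {
    unsafe {
        // System-wide (Vista-era) awareness; returns FALSE if awareness was
        // already set, which we ignore here.
        winapi::um::winuser::SetProcessDPIAware();
    }
}
```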
macOS
On macOS, the app can request to be assigned an OpenGL surface of the best size on retina (HiDPI) displays. If it does not, its contents are scaled up when necessary.
DPI factor is generally either 2 for retina displays or 1 for regular displays.
Input handling is done in logical pixels.
Linux X11
The Xorg server doesn't really have any concept of "logical pixels". Everything is done directly in physical pixels, and apps need to handle everything themselves.
The app can implement HiDPI handling by querying the DPI value for the screen it is displayed on and computing the HiDPI factor by dividing it by 96, which is considered the reference DPI for a regular non-HiDPI screen. This means the DPI factor can basically have any float value.
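A minimal sketch of that computation (how the DPI value itself is obtained, e.g. from the Xft.dpi resource, is out of scope here):

```rust
// Derive the hidpi factor from a DPI value reported by the server, relative to
// the 96 DPI reference of a regular non-HiDPI screen.
fn hidpi_factor_from_dpi(dpi: f64) -> f64 {
    dpi / 96.0
}

// e.g. a 120 DPI setting yields a factor of 1.25, and 144 DPI yields 1.5
```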
There are several potential sources for obtaining the DPI of the screen:
xft.dpi
or the gnome service configuration, meaning:

Linux Wayland
Similarly to macOS, most of the handling is done only on logical pixels.
The app can check the HiDPI factor for each monitor (which is always an integer), and decide to draw its content with any (integer) dpi factor it chooses. If the chosen dpi factor does not match the one of the monitor displaying the window, the display server will upscale or downscale the contents accordingly.
Mobile platforms
I don't know about Android or iOS, but there should not be any difficulties coming from them: a device only has a single screen, and generally apps take up the whole screen.
Web platform
I don't know about it.
Current HiDPI model of winit
Currently winit follows an hidpi model similar to Wayland or macOS: almost everything is handled in logical pixels (marked by a specific type for handling it), and hidpi support is mandatory in the sense that the drawing surface always matches the physical dimensions of the app.
The LogicalSize and LogicalPosition types provide helper methods to be converted to their physical counterpart given an hidpi factor.
The app is supposed to track HiDpiFactorChanged(..) events, and whenever this event is emitted, winit has resized the drawing surface to match its new correct physical size, so that the logical size does not change.
Reported friction
As can be expected from the previous description, most of the reported friction with winit's model comes from the X11 platform, for which hidpi awareness is very barebones.
I believe the main source of friction however comes from the fact that linux developers are very used to the x11 model and their expectations do not map to the model winit has chosen: winit forces its users to mostly work using logical coordinates to handle hidpi, and only use physical coordinates when actually drawing. This causes a lot of impedance mismatch, which is a large source of frustration for everyone.
Solutions
There doesn't appear to be a silver bullet that can solve everything, mostly because the X11 model for hidpi handling is so different from the other platforms that unifying them in a satisfactory way seems really difficult.
So while this section is really the open question of this issue, there are some simple "solutions" that I can think of, in no particular order:
1.0866666666, and we can close this issue as WONTFIX or WORKS AS INTENDED
get_xrandr_dpi_for_monitor and get_xft_dpi_config, and let the downstream crates deal with X11 hidpi themselves

Now tagging the persons that may be interested in this discussion: @Osspial @zegentzy @icefoxen @mitchmindtree @fschutt @chrisduerr