How should multiple displays work?

My original plan was to instantiate a separate app_server Desktop object and HWInterface for each screen and to allow moving windows between desktops, so that a window is managed by only one desktop at a time. But that requires decoupling the Desktop's user session from the window drawing logic.

7 Likes

The solution that is already supported by the current API is to have a single desktop that spans all screens. Each monitor is just a view into the big virtual desktop. Internally, the Desktop should exist only once per user; that’s already implemented, and should also work fine with multiple desktops (one for each user) simultaneously.
A desktop would only have one virtual HWInterface that aggregates one HWInterface per screen.
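
As a rough sketch (the class and method names here are invented, not the actual app_server API), the aggregation could look like this:

```cpp
// Hypothetical sketch only; the real app_server HWInterface API differs.
#include <vector>

class HWInterface {
public:
	virtual ~HWInterface() {}
	virtual void FillRect(int left, int top, int right, int bottom) = 0;
};

// One of these per physical screen, drawing into that screen's buffer.
class ScreenHWInterface : public HWInterface {
public:
	void FillRect(int left, int top, int right, int bottom) override
	{
		// ... clip to this screen's frame buffer and draw ...
	}
};

// The single virtual interface the Desktop talks to: it forwards each
// call to every per-screen interface, translated by that screen's
// position in the big virtual desktop.
class VirtualHWInterface : public HWInterface {
public:
	void AddScreen(ScreenHWInterface* screen, int offsetX, int offsetY)
	{
		fScreens.push_back({screen, offsetX, offsetY});
	}

	void FillRect(int left, int top, int right, int bottom) override
	{
		for (const auto& entry : fScreens) {
			entry.screen->FillRect(left - entry.offsetX,
				top - entry.offsetY, right - entry.offsetX,
				bottom - entry.offsetY);
		}
	}

private:
	struct ScreenEntry {
		ScreenHWInterface*	screen;
		int					offsetX;
		int					offsetY;
	};
	std::vector<ScreenEntry> fScreens;
};
```

Each child interface clips to its own buffer, so the virtual interface can simply forward everything with the right offsets.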

1 Like

That would introduce additional overhead for each drawing call and require a lot of boilerplate code. Interest in windows located on multiple screens at the same time seems low.

3 Likes

I think you’ve answered the question of how to treat multiple displays for various scenarios. Let one workspace be 3840x1080, another workspace 1920x2160, a third/fourth workspace 1920x1080, etc. The user sets the workspace. The driver maps the workspace onto one monitor or two (or more).

This way the user, by specifying the workspace size, determines what the behaviour should be when a window crosses a screen boundary (crop or transpose). And you’ve made users of both behaviours happy.
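
For example (the Viewport struct and the numbers are just made up to illustrate the mapping):

```cpp
// Hypothetical sketch: each workspace has a size chosen by the user,
// and the driver shows it through one or more monitor viewports.
struct Viewport {
	int monitor;		// which physical output
	int srcX, srcY;		// top-left of the workspace area it shows
	int width, height;	// size of that area
};

// A 3840x1080 workspace split across two 1920x1080 monitors
// (windows can cross the boundary):
const Viewport kSpanned[] = {
	{0,    0, 0, 1920, 1080},	// left half on monitor 0
	{1, 1920, 0, 1920, 1080},	// right half on monitor 1
};

// A 1920x1080 workspace shown on a single monitor (windows are
// confined to that screen):
const Viewport kSingle[] = {
	{0, 0, 0, 1920, 1080},
};
```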

There were a number of examples of why it would make sense to support spanning a window across multiple screens. I see no technical reason not to support this. As long as you assign each window an HWInterface directly, there is only additional effort when it is actually needed. A workspace always contains the complete screen configuration (of all screens), and the desktop is the base class managing it all. That’s how it’s done now, anyway.

If you see the urge to change that, it at least should follow the logic of the existing API.

The Desktop object currently follows the “god object” anti-pattern. Window graphics management and user session logic should be separated, regardless of whether spanning windows is supported.

2 Likes

The question is why we would copy this behaviour at all? I’ve only seen it implemented badly, and I think that is a systemic problem.

The option to not support this is both far easier and makes more sense: windows are on one monitor and that is it. If you want an application on two monitors, it should open two windows.

Could you elaborate about this a bit more deeply? AFAIU, the Desktop class currently manages the user session, and of course, the windows and resources in that session. It does not manage HWInterfaces or Screens, it just assigns them to the session. How else do you imagine the class hierarchy?

Just don’t think of two monitors as two different things. Just consider a large canvas that happens to span more than one screen (screen borders may be very thin). Why should the OS make this use case so cumbersome for applications? I see little use for a tiny window that is visible on two screens at the same time, but consider something like a flight simulator, a video wall, or just video editing software with a very large timeline (as mentioned above).

For the OS, it only gets difficult if the DPIs of the monitors differ. But that is not the typical use case for such a setup. If that’s your only worry, I see no problem with only allowing windows to span screens that use the same DPI.

5 Likes

Multi-monitor dreaming…

2 Likes

Except they are different things, and the canvas metaphor makes no sense. Why should the OS make it this hard to use two output devices as two output devices?

All of the examples you mentioned work horribly when “just” stretched onto a different screen; it requires very careful setup by users and already breaks easily when the OS does anything, like DPI-based hackery.

Just add proper multi-window support to applications instead; it is easier, works better, and fits much better with Haiku’s GUI style.

3 Likes

It’s been a while, so cold coffee.

5 Likes

The screens might also be different sizes with different resolutions, so you get a non-rectangular canvas. E.g., I usually use my 15" laptop at my desk with an additional external widescreen monitor attached.

2 Likes

Sure, but that’s not an issue; the drawings will still be exactly the same on both screens. A non-rectangular screen is not a problem at all; it’s similar to a window being partially hidden by another window, just on a different level.

1 Like

So it’s just a question of drawing to multiple regions with different clipping rectangles?

Pretty much. We have a class HWInterface that all drawing is done through, and each window has one. If a window is only on one screen, it will have an HWInterface that paints into a single buffer. If it spans multiple screens, it will have an HWInterface that forwards the drawing into its child HWInterfaces with different offsets and clipping. Since each buffer is rectangular by itself, the clipping is pretty simple to do; we even do that already, as you can easily draw a line that starts well outside the viewing area. If the graphics card provides a single buffer that spans both screens, only a single HWInterface is needed for the complete area.
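
The forwarding step could look roughly like this (the types and names here are invented for illustration, not the actual app_server code):

```cpp
// Hypothetical sketch of forwarding one drawing operation into the
// screens it touches; not actual app_server code.
#include <algorithm>
#include <vector>

struct Rect {
	int left, top, right, bottom;
};

struct Screen {
	Rect frame;	// position of the screen in the virtual desktop

	void FillRect(const Rect& rect)
	{
		// ... draw into this screen's own buffer ...
	}
};

// Intersect the rect with each screen's frame, then translate the
// result into that screen's buffer-local coordinates.
void
ForwardFillRect(const Rect& rect, std::vector<Screen>& screens)
{
	for (Screen& screen : screens) {
		Rect clipped = {
			std::max(rect.left, screen.frame.left),
			std::max(rect.top, screen.frame.top),
			std::min(rect.right, screen.frame.right),
			std::min(rect.bottom, screen.frame.bottom)
		};
		if (clipped.left > clipped.right || clipped.top > clipped.bottom)
			continue;	// nothing of the rect lies on this screen

		// Translate from virtual-desktop to screen-local coordinates.
		clipped.left -= screen.frame.left;
		clipped.right -= screen.frame.left;
		clipped.top -= screen.frame.top;
		clipped.bottom -= screen.frame.top;
		screen.FillRect(clipped);
	}
}
```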

However, once you have two screens with different DPIs, you’d have to scale the drawing calls differently, which means that the drawing doesn’t look the same anymore. Since we don’t have a mechanism to provide the app_server with different sharp bitmaps for different resolutions, bitmaps might be drawn blurry in these cases. Even worse, if you are using font hinting, it won’t look as good, especially if your resolution isn’t very high: you cannot hint the fonts optimally for both DPIs, you’ll have to choose one (because otherwise the font width will differ during layout).

But it’s not technically hard to do, it just wouldn’t look perfect in this case, and understandably so. Anyway, I probably won’t be the one implementing this, so I’ll be fine with whatever solution will come that improves how we handle this now.

This is the thing I’d prefer to avoid. It looks really bad. I think I would prefer windows to remain unscaled until they are fully moved to one display.

But then again, I’ll probably set both of my displays to have exactly the same scaling anyway, hopefully that will easily be possible and not require messing with a dozen settings as it is in Windows.

2 Likes

DPI isn’t relevant at all here; it is just a physical property of the display. The same rendering can be used with different DPIs if, for example, the display has the same resolution but is smaller. What does matter for font hinting is the pattern of colored pixels, and this can’t be bypassed with scaling either. So in that case it would indeed not be possible to render it properly hinted for both displays at all, unless you render it twice, or render half the view with different properties.

If we treat two screens as two screens, this problem disappears. I am still unconvinced why you want to copy the way Windows does this; it works badly in practice and we don’t gain any advantage. It just means there are several things we now can’t implement properly.

Edit: add to that monitors with different color depths, remote displays over the network, etc., and this breaks down even more. The canvas metaphor doesn’t make sense at all except for the very limited case of people who have two identical monitors with almost no bezel next to each other… and font hinting still breaks down at the edges even in that case.

Sorry, I cannot follow you. If you want to understand DPI differently in this context, then just replace it with scaling.

Hinting is completely different from colored pixels; it’s just a method of multiplying the (perceived) horizontal resolution. It’s correct that the order of subpixels can differ per monitor, which requires you to draw differently (even if the graphics card may use one big buffer). But that’s just one reason you could not use a combined HWInterface for both displays in this case, but would need two.

The problem with hinting is that it changes the horizontal size requirements of a font. And if you have a different scaling applied on each display, you would need different hinting, and thus different sizes. But since you lay out your window once, fixed, you can only have one truth in that case. You could go the Apple way and disable hinting, which would solve the issue. Or you live with strange letter spacing on the other screen.
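
A toy example of the size mismatch (the numbers are invented):

```cpp
// Toy illustration: hinting rounds glyph advances to whole pixels, so
// the same text measures differently under different scalings.
#include <cmath>
#include <cstdio>

int
main()
{
	const double advance = 6.4;	// unhinted glyph advance at 1x, in px
	const double scale = 1.5;	// scaling of the second screen

	double hintedA = std::round(advance);		// 6 px on screen A
	double hintedB = std::round(advance * scale);	// 10 px on screen B

	// Converted back to the window's layout coordinates, the two
	// screens disagree about how wide the glyph is:
	printf("screen A: %.2f, screen B: %.2f\n", hintedA, hintedB / scale);
	// prints: screen A: 6.00, screen B: 6.67
	return 0;
}
```

The window was laid out with one of those widths, so the other screen ends up with stretched or squeezed letter spacing.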

If I could not convince you with my examples, fine, that’s entirely your choice. I just don’t see any advantage in disallowing windows to span more than one screen. Of course, there are edge cases that cause suboptimal results (just think about different refresh rates), but that only happens if a window spans multiple screens. No one forces you to set up a suboptimal hardware configuration, and no one forces you to make your windows span more than one screen.
However, forbidding it makes the implementation on the application side more complex in case someone actually wants to do that. And that’s not the way APIs should be designed, IMO. Furthermore, you don’t save any complexity by it (how bad it looks is just an implementation detail); you just forbid something for no good reason.

1 Like

Sorry, you seem to be using terms completely differently than me, which creates confusion, so I’ll lay out what I meant.

DPI: Dots per inch, a physical property of the display
hinting: a technique used in font rendering to make a font look nicer for specific physical pixel alignments

Font hinting in Haiku uses RGB subpixel hinting by default, AFAIK; if you render with this, it will look broken on a monitor with a non-RGB pixel order, even if you use the exact same rendering without any bitmap scaling or similar.

Apple can get away with disabling font hinting because they deliberately ship high-resolution displays.

Anyhow, this is not about “forbidding” anything; as you said, windows can span multiple screens if implemented that way.

I am arguing against your proposal to basically pretend two screens are one. It will not “allow” windows to span screens; instead it will make it impossible to render windows correctly on both displays even when they do not cross the screen border. This is a leaky abstraction and just doesn’t work.

Now, if you want to add a way to cross windows onto another screen with bitmap copying/moving a bit higher in the stack as some visual joke, being aware that it may look terrible, go ahead (translating the input_server coordinates if they hit the second screen). But we should implement the multi-monitor support properly first. I don’t want my rendering to be all fucked up because of some edge case of “someone could potentially want to use this workaround”.

1 Like