How should multiple displays work?

The multiplexing HWInterface/DrawingEngine would just divide the virtual screen among several plugged-in HWInterfaces/DrawingEngines. Of course, one of those could be a RemoteHWInterface, too. This would also allow for different scaling factors on different screens. If a drawing operation spans two (or more) HWInterfaces, it would need to be drawn on all affected DrawingEngines.

The only feature that wasn’t planned at all yet is being able to show more than one Workspace at the same time (e.g. on different screens). That would require major rework. Support for multiple screens shouldn’t affect that many parts of the app_server if a solution like the one I mentioned is used. In any case, that was how I planned it years ago. Since I never implemented it, you’re free to choose a different path. However, the current implementation is certainly not too far away from the originally intended plan.

So the multiplexed HWInterface would provide a multiplexing DrawingEngine that holds an array of the actual DrawingEngines and iterates over all of them on each drawing call? Something like this:

void MultiplexedDrawingEngine::FillRect(BRect rect)
{
	for (int i = 0; i < fEngineCount; i++) {
		// Apply this screen's offset/scale before drawing.
		SetupTransform(fEngines[i]);
		fEngines[i]->FillRect(rect);
	}
}

That may cause drawing speed issues when there are many small drawing operations. It will also require yet another table of all drawing commands; drawing operations are already duplicated in many places.

Something like that, but you would want to use areas, i.e. something like this:

void MultiplexedDrawingEngine::FillRect(BRect rect)
{
	for (int i = 0; i < fEngineCount; i++) {
		// Skip engines whose screen area the rect does not touch.
		if (rect.Intersects(fEngineRect[i])) {
			SetupTransform(fEngines[i]);
			fEngines[i]->FillRect(rect);
		}
	}
}

Since drawing commands are sent via Views, which are attached to Windows, the Window will presumably know which display(s) it is on, and so can send its drawing commands directly to those DrawingEngines instead of incurring the overhead of an intersection test on every call.
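
The idea above can be sketched as follows. This is a minimal illustration with stand-in types, not app_server's real code: `Rect`, `DrawingEngine`, and the `Window` class here are simplified assumptions (the real BRect and Window classes differ). The point is that the list of target engines is recomputed only when the window moves or resizes, so per-primitive drawing calls just walk a cached list:

```cpp
#include <cassert>
#include <vector>

// Minimal stand-in for BRect; the real Haiku type has more methods.
struct Rect {
	float left, top, right, bottom;
	bool Intersects(const Rect& other) const
	{
		return left <= other.right && other.left <= right
			&& top <= other.bottom && other.top <= bottom;
	}
};

// Stand-in for a per-screen DrawingEngine with its screen's frame.
struct DrawingEngine {
	Rect bounds;
};

// Hypothetical sketch: the window recomputes which engines it overlaps
// only on move/resize, so drawing calls skip the intersection tests.
class Window {
public:
	explicit Window(const std::vector<DrawingEngine*>& allEngines)
		: fAllEngines(allEngines) {}

	void MoveTo(const Rect& frame)
	{
		fFrame = frame;
		fTargets.clear();
		for (DrawingEngine* engine : fAllEngines) {
			if (fFrame.Intersects(engine->bounds))
				fTargets.push_back(engine);
		}
	}

	const std::vector<DrawingEngine*>& Targets() const { return fTargets; }

private:
	std::vector<DrawingEngine*> fAllEngines;
	std::vector<DrawingEngine*> fTargets;
	Rect fFrame{};
};
```

A window sitting entirely on one screen then dispatches to a single engine, and one straddling the boundary dispatches to both, without any per-call geometry work.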

Revisiting this, I think crossing displays is not necessary at all. Perhaps the main reason it was designed this way was that the “move window” metaphor worked and screens could become a window onto your “big” canvas.

Honestly, I think this just creates more problems and is confusing. For instance: if you model this as a big square with both monitors as views of it, what happens to the area that is not part of any monitor? Why is moving windows somewhere unretrievable a thing? And so on.

The new iPadOS 16 will feature a multi-screen setup that does not allow windows to be half on one screen, and honestly that makes the most sense to me.

Additionally, it would remove a really big pile of edge cases and what-ifs that have no real answer.
When a window switches displays, the BWindow can get a message, redraw its views with the differing font/scaled size, and be done with it.
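
A minimal sketch of that flow, with invented names (`Display`, `DisplayChanged`, and the 72 DPI baseline are my assumptions, not Haiku API; Haiku's actual BWindow hook and scaling model may differ): the server notifies the window of its new display's DPI, and the window recomputes its scale factor and schedules a repaint.

```cpp
#include <cassert>

// Hypothetical stand-in for the information the server would send.
struct Display {
	int dpi;
};

class Window {
public:
	// Assumed baseline: 72 DPI maps to a 1.0 scale factor, matching the
	// traditional BeOS "one point = one pixel" convention.
	static constexpr int kBaseDPI = 72;

	// Called (hypothetically) when the server moves this window to a
	// different display: recompute the scale and request a full redraw.
	void DisplayChanged(const Display& display)
	{
		fScale = static_cast<float>(display.dpi) / kBaseDPI;
		fNeedsRedraw = true;  // would invalidate all attached views
	}

	float Scale() const { return fScale; }
	bool NeedsRedraw() const { return fNeedsRedraw; }

private:
	float fScale = 1.0f;
	bool fNeedsRedraw = false;
};
```

Since the window is only ever on one display at a time under this proposal, a single scale factor per window suffices; no control ever has to be rendered at two DPIs at once.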

From a UI perspective, we could move windows with, for example, a button in the tab bar.

How would you move a window to another screen in this case?

It could be a button in the tab bar, a shortcut, or simply moving the window toward the edge you want and then snapping it over after some threshold.

I can see these use cases…

  • Fullscreen display on several screens for apps like racing simulators or video walls, though the latter may require special graphics cards and drivers.
  • Mixed displays for apps like Artpaint or Medo where you can have fullscreen rendering on a screen and tools on another.

But a window that appears half on one screen and half on another, I can’t imagine the purpose. For moving windows from one screen to another, we could use shortcuts or the Workspaces app, as long as all screens are displayed there. Did I miss something?

Some ideas, perhaps…
Screenshots • DisplayFusion by Binary Fortress Software

On Linux I set my lower-DPI displays to a higher virtual resolution that gives them the same effective DPI as the display with the highest DPI. That way everything stays the right size when moving windows around… but the penalty is that rendering is not pixel perfect on scaled displays. However, when all the displays are already reasonably high DPI and you’re using a modern graphics card, it’s hard to notice the difference. In general you can choose an arbitrary DPI for the array and scale the virtual resolutions of the monitors up or down so that the UI size matches everywhere.
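
The arithmetic behind that trick is simple; here is a sketch with names of my own choosing (`Mode`, `VirtualModeFor` are not from any real API): scale each monitor's native mode by the ratio of the array's target DPI to that monitor's DPI, then let the compositor downscale back to native.

```cpp
#include <cassert>

// Simple stand-in for a display mode.
struct Mode {
	int width, height;
};

// Render a monitor at a virtual resolution such that UI elements have
// the same physical size as on a monitor running at targetDPI.
Mode VirtualModeFor(Mode native, int monitorDPI, int targetDPI)
{
	// Scale factor is targetDPI / monitorDPI; e.g. a 96 DPI monitor in
	// a 144 DPI array renders at 1.5x its native resolution.
	return Mode{
		native.width * targetDPI / monitorDPI,
		native.height * targetDPI / monitorDPI
	};
}
```

For example, a 2560x1440 (WQHD) monitor at 96 DPI in a 144 DPI array would render at 3840x2160, the exact 1.5x factor mentioned below.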

However, it would be much better if the OS could instead render each monitor at native resolution and scale the size of UI elements on each monitor according to the monitor’s DPI and the global scale setting, so that everything renders at the same size on every monitor and is pixel perfect… but I’ve never seen a system that supports it. I guess that’s because, if a window is across two monitors, you might have to render different parts of a single control at different DPIs.

This penalty is too high; a blurry, software-scaled image is too ugly.

As an anecdote, at work I have to use an MDI app on Windows with two monitors (identical resolution, which, outside of docked notebooks, may be the most common two-monitor setup?). If I need two landscape pages side by side, I have to stretch the app’s window to straddle the two screens.

That’s why MDI is a bad thing. And it’s not used in BeOS/Haiku anyway.

I agree wholeheartedly.
OTOH, I could want to stretch a window over two monitors. For example, I may want to see as much as possible of the timeline in an audio or video application.

I agree it isn’t ideal (I said as much). Though I do think for most uses it is acceptable if the display is already high DPI, and that you’d be surprised how well it can work (I scale WQHD up to 4K using an exact 1.5x factor and was surprised that I can barely tell the difference… in any case it’s worth it to have a consistent UI across displays without buying a new monitor).

But what is your suggestion to solve this problem? Have the OS render each window (or fraction of a window when across multiple monitors) and all controls (or fractions of controls) at different DPI? It would be very nice but it sounds difficult. Or just not support one window being present on two monitors at the same time? Or just not solve it (different sized UI on each monitor)?

Switch the window’s DPI when it is moved to a different screen, like Windows does.

Linux does as well in the default configuration (depending on whether you are using Wayland or X11). I guess it is acceptable to say users must buy monitors with matching DPIs or suffer things changing size.
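
A common heuristic for this per-window DPI switching, sketched here with invented names (`Screen`, `WindowDPI`, and the 96 DPI fallback are my assumptions): assign a window the DPI of whichever screen contains its center point, so the value flips exactly once as the window crosses the boundary.

```cpp
#include <cassert>
#include <vector>

// Stand-in for a configured screen: its frame in the virtual layout
// plus its DPI.
struct Screen {
	float left, top, right, bottom;
	int dpi;
};

// Pick the window's DPI from the screen containing its center point.
int WindowDPI(float centerX, float centerY,
	const std::vector<Screen>& screens)
{
	for (const Screen& screen : screens) {
		if (centerX >= screen.left && centerX <= screen.right
			&& centerY >= screen.top && centerY <= screen.bottom)
			return screen.dpi;
	}
	return 96;  // fallback when the center is outside every screen
}
```

With two side-by-side screens of different DPI, dragging a window to the right switches its DPI the moment its center crosses the shared edge, which is when the window would be redrawn at the new scale.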

Let’s not reinvent the wheel. I’d rather have painless monitor detection, setup, and scaling than have to resort to unnatural Desktop gestures for moving windows.

Yes, this is one of the edge cases I mentioned above.
Either you render everything twice and recode apps so that they expect this,
or you take the finalized rendering and convert it.

Both options suck, and there aren’t really any good reasons to do it. Applications that want to take advantage of multiple monitors can do so properly, instead of trying to make the window big across two screens and hoping that somehow works. (This quickly falls apart if one monitor is smaller, has a different orientation, etc. Just not doing this, while still supporting multiple monitors, would be far easier to support in code and to understand as a user.)

You don’t need unnatural gestures. Drag your window to move it around; it’s just that it will only show on one screen at a time and will move, for example, as your mouse cursor moves from one screen to the other.

I think this would indeed remove a lot of complexity, at least initially. Then, if someone is really bothered by it, they can implement all the extra code needed to render a window that’s partially across two screens with different resolutions.

Personally, I would be happy to be able to use any external displays at all with my laptops, and I don’t really see the point of obsessing over minute details of edge cases of high-DPI rendering until we have multiple displays working…
