Well, this is expected, since I didn’t report the rotated size, but how do I do that?
I’ve tried modifying DrawingBuffer() to return a wrapped backbuffer that reports the rotated size, but that results in crashes, and I do not think it is the proper way to fix this anyway. What does the drawing engine use to determine the dimensions of the canvas: the video mode, or the backbuffer’s width and height?
But even if that can be done, I do not like the current method of rotation. For proper screen rotation support, I would consider something that handles all of these:
Some video drivers may support rotation natively (by modesetting?); if so, no software rotation should be done. (For Intel drivers, for example, I found that some registers have a description like “PRI_CTL_[A-C]: Enable/Disable the panel, gamma mode, pixel format, tiling, rotation”, but I’m no driver guy, so I’m not sure about it.)
Direct output? I’m not sure what it is called, but I assume that hardware-accelerated OpenGL writes directly to the screen in hardware, just like hardware cursors do.
Then, we have to:
Expose the rotation settings throughout all relevant components.
Add a GUI in the Screen preferences to adjust it.
React dynamically to the user’s adjustments.
And, maybe off topic: how do I speed up my edit-build-test cycle? Every time I run jam, it has to build my changes (which is fast) and then build a new image (which is slow) before I can boot the new image in QEMU for testing. Is there any way to build a single component (e.g. just app_server) and insert it into the image, without rebuilding the image from scratch?
BTW I’m working on r1beta4 because the master branch failed to build.
The simplest way, and already much faster, is to jam only haiku.hpkg (jam -qj8 haiku.hpkg, for example) and use pkgman install to install that into your running system.
Another option, if you compile from Haiku, is to use jam @install, which will put the files directly into an existing mounted BFS partition. If you build from Linux, you can use jam @update; if that still works, it will only replace the updated files in the built image.
That would be nice, but since not all hardware supports it, I think we will still need a software fallback for the drivers that don’t.
Not really. Hardware cursors are completely handled by the hardware and so they are never really “written” anywhere.
OpenGL in its current implementation is full software rendering and renders to the screen buffer.
So, it depends on when you do the rotation: when drawing into the backbuffer, or when copying the backbuffer into the frontbuffer. The place where it will matter is everything using BDirectWindow, which allows apps to write directly to the backbuffer.
There are also questions about LCD subpixel antialiasing, which needs to be done in a different direction.
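For the copy-to-frontbuffer case, the software fallback essentially amounts to a rotated blit. A minimal sketch, assuming 32-bit pixels and tightly packed rows (the function name and layout are my assumptions, not app_server’s actual API):

```cpp
#include <cstdint>

// Hypothetical software fallback: copy a width x height backbuffer into a
// height x width frontbuffer, rotated 90 degrees clockwise. Assumes 32-bit
// pixels and tightly packed rows; none of this is app_server's real API.
static void
RotateCopy90CW(const uint32_t* src, uint32_t* dst, int width, int height)
{
	for (int y = 0; y < height; y++) {
		for (int x = 0; x < width; x++) {
			// source pixel (x, y) lands at column (height - 1 - y) of row x
			dst[x * height + (height - 1 - y)] = src[y * width + x];
		}
	}
}
```

Doing the rotation once per frame at copy time would keep BDirectWindow clients unaware of it (they still write the unrotated backbuffer), at the cost of an extra pass over the buffer.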
Who knows? We can start from what you have and improve it later. As long as it doesn’t require changing every app to take the screen rotation into account, I think it’s fine?
I was looking into this before, along with multi-monitor support, and I think the way to solve both problems at once is to introduce a MultiplexingHWInterface (we should probably rename HWInterface… but I digress) which could take draw calls and decide which screen they should be sent to, presenting a series of screens as one big virtual screen. As transforms would have to be applied before sending the draw call, adding a “rotate” transform for rotated screens wouldn’t be difficult at all.
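To make that concrete, here is a rough sketch of the coordinate mapping such a multiplexer would apply before dispatching a draw call. Every name here (Screen, Point, ToScreenLocal, the Rotation enum) is hypothetical; nothing like this exists in app_server yet:

```cpp
// Hypothetical sketch of how a multiplexing interface might route a point
// from virtual-desktop coordinates to a per-screen framebuffer, applying a
// rotation for rotated screens. All names are invented for illustration.

enum Rotation { kRotate0, kRotate90, kRotate180, kRotate270 };

struct Screen {
	int originX, originY;   // position of this screen in the virtual desktop
	int width, height;      // raw (unrotated) framebuffer dimensions
	Rotation rotation;      // kRotate90 = desktop view is the framebuffer
	                        // rotated 90 degrees clockwise
};

struct Point { int x, y; };

// Map a virtual-desktop point to raw framebuffer coordinates of one screen.
static Point
ToScreenLocal(const Screen& screen, Point p)
{
	// translate into this screen's desktop-local coordinate space
	int x = p.x - screen.originX;
	int y = p.y - screen.originY;

	switch (screen.rotation) {
		case kRotate90:
			return { y, screen.height - 1 - x };
		case kRotate180:
			return { screen.width - 1 - x, screen.height - 1 - y };
		case kRotate270:
			return { screen.width - 1 - y, x };
		case kRotate0:
		default:
			return { x, y };
	}
}
```

The point is that the rotation lives entirely in this one mapping step, so individual drawing backends (and apps) never need to know about it.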