The VideoStreams architecture can be used even without explicit application support. Something like VideoCortex could be made that allows arbitrarily connecting producers with consumers, creating new video nodes, interacting with Media Kit nodes, etc.
This needs more investigation. In any case, some kind of wrapper between WSI and native Haiku applications is needed.
WSI does not define an inter-process protocol the way Wayland does; VideoStreams takes Wayland's role.
Here comes the final part of the browser engine: "Rendering". A BackingStore holds ownership of a BBitmap that was painted and shared from the WebProcess. Only rendering is done so far; resizing is slow because every time the bits have to be imported into a bitmap on the UIProcess side. It would be nice to have bitmap access from one process to another through app_server; maybe that would be faster.
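For illustration, here is a minimal sketch of what such shared access could look like using kernel areas. The names (`SharedBuffer`, `CreateSharedBuffer`, `CloneSharedBuffer`) are hypothetical, not part of WebKit or the Haiku API:

```cpp
#include <OS.h>

// WebProcess side: allocate the pixel buffer in a shareable kernel area
// instead of private heap memory.
struct SharedBuffer {
	area_id area;   // handle that can be sent to the UIProcess
	uint8* bits;    // locally mapped pixel data
};

SharedBuffer CreateSharedBuffer(int32 width, int32 height)
{
	SharedBuffer buffer = {};
	// B_RGBA32: 4 bytes per pixel; area sizes must be page-aligned.
	size_t size = (size_t(width) * 4 * height + B_PAGE_SIZE - 1)
		& ~size_t(B_PAGE_SIZE - 1);
	buffer.area = create_area("web backingstore", (void**)&buffer.bits,
		B_ANY_ADDRESS, size, B_NO_LOCK, B_READ_AREA | B_WRITE_AREA);
	return buffer;
}

// UIProcess side: map the same memory instead of receiving a copy of the
// bits with every update.
uint8* CloneSharedBuffer(area_id sourceArea)
{
	uint8* bits = NULL;
	if (clone_area("web backingstore (clone)", (void**)&bits,
			B_ANY_ADDRESS, B_READ_AREA, sourceArea) < 0)
		return NULL;
	return bits;
}
```

The UIProcess still has to get those pixels into a BBitmap (e.g. via `ImportBits()`) before it can call `DrawBitmap()`, which is exactly the copy cost discussed here.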
Has that changed in the meantime?
Of course there are probably plenty of ways to do this, but this post describes efficient cross-process rendering.
In this code the BBitmap is allocated on the consumer side, which calls DrawBitmap(). The producer has no direct access to the BBitmap and can't use app_server graphics to draw into the buffer. But in this case that is not a problem: OpenGL, Vulkan, or a software blitter are supposed to be used.
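A minimal sketch of that split, assuming the consumer shares the bitmap's underlying area (via `BBitmap::Area()`) and the producer fills the pixels with a plain software blitter:

```cpp
#include <Bitmap.h>
#include <OS.h>

// Consumer side: owns the BBitmap and does all app_server drawing.
BBitmap* bitmap = new BBitmap(BRect(0, 0, 639, 479), B_RGBA32);
area_id sharedArea = bitmap->Area();
// ... send sharedArea to the producer over a port or BMessage ...
// Later, inside some BView::Draw():
//   view->DrawBitmap(bitmap, BPoint(0, 0));

// Producer side: no BBitmap and no app_server calls, only raw pixels.
void ProducerFillFrame(area_id sharedArea, int32 width, int32 height)
{
	uint32* bits = NULL;
	area_id local = clone_area("frame (clone)", (void**)&bits,
		B_ANY_ADDRESS, B_READ_AREA | B_WRITE_AREA, sharedArea);
	if (local < 0)
		return;
	// Software-blitter stand-in: fill the frame with opaque gray.
	for (int32 i = 0; i < width * height; i++)
		bits[i] = 0xff808080;
	delete_area(local);
}
```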
Adding a BBitmap constructor that accepts an area_id and offset would solve the problem of drawing on an arbitrary buffer using app_server graphics. There is a suspicious B_BITMAP_IS_AREA flag that probably did that on BeOS; on Haiku this flag is never used.
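What such a constructor might look like is sketched below. The signature is purely hypothetical, modeled on the existing BBitmap constructors:

```cpp
// Hypothetical addition to Bitmap.h, not present in the current Haiku API:
// wrap an existing area's memory so app_server drawing can target it.
BBitmap(BRect bounds, color_space colorSpace, area_id area,
	ptrdiff_t areaOffset, int32 bytesPerRow = B_ANY_BYTES_PER_ROW,
	uint32 flags = 0);

// A producer could then draw into a consumer-owned buffer with a BView:
// BBitmap* bitmap = new BBitmap(bounds, B_RGBA32, sharedArea, 0,
//     B_ANY_BYTES_PER_ROW, B_BITMAP_ACCEPTS_VIEWS);
// bitmap->Lock();
// bitmap->AddChild(view);
// view->FillRect(view->Bounds());  // rendered by app_server into the area
// view->Sync();
// bitmap->Unlock();
```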
I'm assuming that since compositing in app_server is not planned to be altered significantly, this won't help us get drop shadows on windows. It is still a welcome idea.
A question arises spontaneously: since there is OpenGL accelerated on the CPU via llvmpipe and Vulkan accelerated on the CPU via lavapipe, would this 3D-accelerated Media Kit modification bring real advantages despite not having a dedicated GPU (driver)?
(Obviously on a latest-generation CPU with many cores, I assume the answer is yes.)
Yes, I suspected that this was already the case; my question was more than anything whether there could be any real advantages to using OpenGL via CPU in this context.
I tried to set up double buffering with intel_extreme and something went wrong. It is supposed to show only the upper part of the framebuffer, not squeeze it vertically by a factor of two. Fixing the intel_extreme driver may also be needed.
ScreenConsumer, based on BWindowScreen, is working with the ati driver on QEMU. Double buffering and buffer swapping are used. It displays an animation of a flying black rectangle on an "∞"-shaped trajectory. Minimal functional dirty-region management is implemented: only the changed parts are repainted. #17261 needs to be fixed to run on real hardware.
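For reference, a minimal sketch of the classic BWindowScreen double-buffering technique this presumably builds on (a standalone example, not the actual ScreenConsumer code). It assumes the driver supports a virtual framebuffer twice the display height, which is exactly what broke with intel_extreme above:

```cpp
#include <WindowScreen.h>

class FlipScreen : public BWindowScreen {
public:
	FlipScreen(status_t* error)
		: BWindowScreen("flip demo", B_32_BIT_640x480, error),
		fFrontIsTop(true) {}

	virtual void ScreenConnected(bool connected)
	{
		if (!connected || !CanControlFrameBuffer())
			return;
		// Allocate a framebuffer two screens tall; the display shows
		// only one half of it at a time.
		SetFrameBuffer(640, 480 * 2);
	}

	void Flip()
	{
		// Present the half that was just drawn, then render into the
		// other (now hidden) half.
		fFrontIsTop = !fFrontIsTop;
		MoveDisplayArea(0, fFrontIsTop ? 0 : 480);
	}

private:
	bool fFrontIsTop;
};
```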