How to transform the app_server code to use compositing

Compositing can initially be implemented without extensive changes to the drawing code or to how updates work. In the Interface Kit we have BWindow and BView. The BWindow has a messaging port to the app_server. Inside app_server, a ServerWindow (one instance per BWindow) receives any drawing commands which BViews send via their owning BWindow. BViews also have a server counterpart, which is called View, unsurprisingly. There is another class called Window, which represents the on-screen object in the server (it holds the clipping, the decorator, ...), while ServerWindow manages the communication and owns one Window object. View objects are owned by Window and mirror the client-side BView hierarchy, as long as those BViews are attached.

Each Window also owns a DrawingEngine, which abstracts the entire drawing interface. The default implementation of all the supported drawing functions lives in the Painter class. Painter itself uses AGG (Anti-Grain Geometry) for the general-purpose drawing algorithms and has a whole bunch of optimized implementations for cases where it can take a shortcut. So each Window instance owns a DrawingEngine and a Painter instance.
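
To make these ownership relationships easier to follow, here is a minimal sketch of the chain in plain C++. This is not the actual Haiku source; the members and method bodies are illustrative only.

```cpp
#include <vector>

// Simplified sketch of the ownership described above -- not the real
// Haiku classes; everything beyond the class names is illustrative.

struct Painter {
    // General-purpose drawing goes through AGG; common cases (solid
    // fills, aligned blits, ...) take optimized shortcut paths.
    void FillRect(float l, float t, float r, float b) { /* ... */ }
};

struct DrawingEngine {
    // Abstracts the whole drawing interface; the default backend
    // forwards everything to a Painter.
    Painter painter;
    void FillRect(float l, float t, float r, float b)
        { painter.FillRect(l, t, r, b); }
};

struct View {
    // Server-side mirror of one attached client BView.
    View* parent = nullptr;
    std::vector<View*> children;
};

struct Window {
    // The on-screen object: clipping, decorator, View hierarchy, and
    // its own DrawingEngine (hence one Painter per window).
    View* topView = nullptr;
    DrawingEngine drawingEngine;
};

struct ServerWindow {
    // One instance per client BWindow; receives drawing commands over
    // the window's messaging port and owns exactly one Window.
    Window window;
};
```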


This is a companion discussion topic for the original entry at https://www.haiku-os.org/blog/stippi/2011-06-15_how_transform_app_server_code_use_compositing/

Hi Stippi. Thanks for the fantastic technical blog post.

Regarding the slow read operation from the frame buffer, there are cases when the user may want this. The obvious use case is capturing screenshots (e.g. for video recording). Another benefit is allowing the app_server to do view compositing and letting the client read the result. For my video editor for Haiku, since I never had the ability to read the app_server frame buffer, I ended up having to manually compose a scene in my own framebuffer (replicating Painter functionality), forcing me to use OpenGL. It’s slow since I have to create an OpenGL framebuffer object, draw there, and read it back for exporting. If I had access to the app_server framebuffer, I could have avoided this duplicate code path. Oh well, when we get HW acceleration, it will probably be for the better for my app (and I also benefit from having GLSL shaders to modify the frame buffer).
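
For readers who have not gone down this road, the workaround above looks roughly like the following sketch. It assumes an OpenGL 3.0+ context is already current; the function name and the RGBA format are placeholders.

```cpp
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <vector>

// Render one frame offscreen and read it back, roughly the code path
// described above. Error handling is trimmed for brevity.
std::vector<unsigned char> RenderFrameOffscreen(int width, int height)
{
    GLuint fbo = 0;
    GLuint color = 0;

    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
        GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_TEXTURE_2D, color, 0);

    // ... draw the composed scene here ...

    // The expensive part: pulling the pixels back out for the encoder.
    std::vector<unsigned char> pixels(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
        pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &color);
    return pixels;
}
```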

I had a similar issue with the Media Kit sound mixer. Ideally, as an app developer, I would have loved to pipe my samples to the sound mixer, then get them back for final processing (e.g. to encode to disk when exporting a project). However, since the Media Kit doesn’t allow accessing the output of the System Mixer, I ended up having to duplicate that code as well. I also wanted to use OpenAL for 3D positional audio effects; sadly, even though Haiku supports OpenAL, it doesn’t allow me to pipe the final sound output back into my application, again forcing me to duplicate that code.

Anyhow, thanks for a great technical blog post.

It should be possible to insert a node between the mixer and the hardware. If this is not working, it is a bug and should be fixed.

The date of this blog post is 2011-06-15, so it was posted about 10 years ago. Anyway, it contains a lot of useful information about the graphics system implementation.

Personally I don’t like compositing because it consumes much more memory and makes the system less responsive. I like a lightweight and responsive Haiku.


Some recent discussion about compositing: https://review.haiku-os.org/c/haiku/+/2119.

> The date of this blog post is 2011-06-15, so it was posted about 10 years ago.

Shows up as 4 hours ago on my system. Moving on …

I reposted the message because it had somehow been unpublished, possibly during a website update. It’s pretty much still up to date.

Regarding your comments: accessing the framebuffer seems possible, as both the “screenshot” tool and BeScreenCapture manage to do it. The API may not be straightforward or well-documented, but you can look into how these apps do it.
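
For reference, a minimal sketch of that path, assuming BScreen::ReadBitmap() is what those tools use; the application signature is just a placeholder.

```cpp
#include <Application.h>
#include <Bitmap.h>
#include <Screen.h>

// Grab the current frame buffer contents into a BBitmap.
int main()
{
    BApplication app("application/x-vnd.example-screengrab");

    BScreen screen;
    BBitmap* shot = new BBitmap(screen.Frame(), screen.ColorSpace());
    if (screen.ReadBitmap(shot, false, NULL) == B_OK) {
        // shot->Bits() now holds the screen contents; save or encode it.
    }
    delete shot;
    return 0;
}
```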

As for view compositing, you can attach a view to a BBitmap and use app_server to draw offscreen if you need to.
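
Something along these lines, assuming a running BApplication; the view name and drawing calls are just examples.

```cpp
#include <Bitmap.h>
#include <View.h>

// Let app_server compose into an offscreen BBitmap via an attached BView.
BBitmap* ComposeOffscreen(BRect bounds)
{
    BBitmap* bitmap = new BBitmap(bounds, B_RGBA32,
        true /* accepts views */);
    BView* view = new BView(bounds, "offscreen", B_FOLLOW_NONE, 0);

    bitmap->AddChild(view);
    bitmap->Lock();
    view->SetHighColor(200, 80, 80);
    view->FillRect(bounds);
    view->Sync();           // wait for app_server to finish drawing
    bitmap->Unlock();

    return bitmap;          // bitmap->Bits() holds the composed pixels
}
```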

For the mixer, I believe it’s also possible to instantiate your own mixer node if you want to reuse our mixer for purposes other than the final mixing step. While it should also be possible to insert a media node between the mixer and the soundcard, I’m not sure if that actually works. It’s possible the soundcard driver does not like its input being unplugged and replugged.
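
A rough sketch of that first idea, with heavy caveats: whether the mixer add-on advertises itself with the B_SYSTEM_MIXER kind is an assumption worth verifying, and the function name is mine.

```cpp
#include <MediaDefs.h>
#include <MediaRoster.h>

// Instantiate a private instance of the mixer add-on via the Media Kit
// dormant node APIs, rather than tapping the running system mixer.
// Assumption: the mixer flavor advertises the B_SYSTEM_MIXER kind.
status_t InstantiatePrivateMixer(media_node& outNode)
{
    BMediaRoster* roster = BMediaRoster::Roster();
    if (roster == NULL)
        return B_ERROR;

    dormant_node_info info;
    int32 count = 1;
    status_t status = roster->GetDormantNodes(&info, &count,
        NULL, NULL, NULL, B_SYSTEM_MIXER);
    if (status != B_OK)
        return status;
    if (count < 1)
        return B_ERROR;

    return roster->InstantiateDormantNode(info, &outNode);
}
```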
