Vulkan and ... Linux

This is a continuation of "I3 and awesome" and "Question about X11".
Haiku has a very small developer team.

Maybe it would be a good idea to port the whole window system to Vulkan, and add it to Linux too.

  1. BeOS would get a link to the new Vulkan driver
  2. more developers would be working on the window manager, themes, and functionality

I would like to get new versions of my environment more quickly.

To do that, libdrm and its kernel counterpart would have to be ported, and then Mesa on top of that… otherwise there is no Vulkan or OpenGL acceleration, so there would be no point in doing it.

@waddlesplash and @kallisti5 are probably the two most knowledgeable guys about this, and @rudolfc as well, since he wrote the old BeOS NVidia driver for early GeForce cards. It's probably 1-3 man-months of work (going from waddlesplash's estimate of 1mo :stuck_out_tongue:)… that's a lot, basically 2-3 years of weekends invested by a single developer. Also, only working sporadically on things is less efficient, as you can't get in a groove etc…

Also, it seems you want to port the windowing system to Linux? I don't think any developers here want to do that, other than Barrett, who is a nice guy but can be hard to get along with at times. (He has something like what you suggest in progress for V/OS.)

I think if it were done properly, patches to get app_server running accelerated on top of Vulkan might be acceptable here though… it would require serious research into what makes sense. In many cases, directly drawing things with the CPU is faster, but for more complex rendering the GPU will always win (rendering webpages, for instance).

I think any future GPU-accelerated GUI framework should take inspiration from WebRender…

Also, looncraz has a work-in-progress software compositor for app_server.

Is a Vulkan graphical system something for Android?

That's not practical at the moment. The way we do 2D rendering was designed for CPUs and is hard to accelerate with the way GPUs operate. For example, take Bézier curves. They are reasonably easy to draw with a CPU: you compute a few x,y coordinates along the curve and connect them with line segments.
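The CPU approach described above can be sketched in a few lines (a toy illustration of the general technique, not Haiku's actual rasterizer):

```python
def cubic_bezier(p0, p1, p2, p3, steps=16):
    """Evaluate a cubic Bezier curve at steps + 1 points using the
    Bernstein form. Connecting successive points with straight lines
    approximates the curve - exactly the kind of sequential work
    that is simple on a CPU."""
    points = []
    for i in range(steps + 1):
        t = i / steps
        u = 1.0 - t
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        points.append((x, y))
    return points

# The first and last points always coincide with the end control points.
pts = cubic_bezier((0, 0), (0, 100), (100, 100), (100, 0), steps=4)
```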

But a GPU does not work this way. Instead, what it can do is basically run a program for each pixel that answers "what color should this pixel be?". The way we do 3D grew up around this model, but it doesn't fit 2D rendering at all.
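That per-pixel model can be mimicked in plain Python (a toy sketch; a real GPU runs a function like this as a fragment shader, for millions of pixels in parallel):

```python
def shade_pixel(x, y, cx=4, cy=4, r=3):
    """Answer 'what color should this pixel be?' for a single pixel:
    white inside a circle of radius r centered at (cx, cy), black outside."""
    inside = (x - cx) ** 2 + (y - cy) ** 2 <= r * r
    return (255, 255, 255) if inside else (0, 0, 0)

# A GPU would evaluate shade_pixel for every pixel simultaneously;
# here we loop over a tiny 9x9 framebuffer sequentially.
framebuffer = [[shade_pixel(x, y) for x in range(9)] for y in range(9)]
```

Note that nothing here walks "along the curve" the way the CPU approach does; each pixel is decided independently, which is why curve rasterization has to be rethought for GPUs.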

Accelerated font rendering, for example, is still a subject of research and not at all mature.

There are some things CPUs are good at, and some things GPUs are good at. Accelerated drawing is not "let's blindly move it all to the GPU side". It's a path full of difficult decisions as to what should be done on the CPU, and what should be done on the GPU. Something that seems to work reasonably well is compositing: the CPU renders each window (or maybe smaller units) into a separate texture, and then the GPU blends these textures together, maybe scaling/rotating them in the process and adding some alpha blending.
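The blending step a compositor performs can be sketched like this (a minimal plain-Python illustration of per-pixel source-over blending; a real compositor does this on the GPU across full window textures):

```python
def blend_over(dst, src, alpha):
    """Source-over blend of one RGB pixel onto another with a constant
    alpha - the core per-pixel operation when stacking windows."""
    return tuple(round(alpha * s + (1 - alpha) * d) for s, d in zip(src, dst))

def composite(base, window, alpha):
    """Blend a 'window' texture over a same-sized 'base' texture."""
    return [
        [blend_over(bp, wp, alpha) for bp, wp in zip(brow, wrow)]
        for brow, wrow in zip(base, window)
    ]

# Two tiny 2x2 "textures" rendered by the CPU, blended at 50% opacity.
desktop = [[(0, 0, 0), (0, 0, 0)], [(0, 0, 0), (0, 0, 0)]]       # black background
window = [[(255, 255, 255), (255, 255, 255)],
          [(255, 255, 255), (255, 255, 255)]]                    # white window
result = composite(desktop, window, 0.5)
```

The appeal of this split is that the expensive, irregular 2D drawing stays on the CPU where it is already fast, while the GPU only does the embarrassingly parallel blend.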

For example, here is a paper from 2017 explaining how text rendering on the GPU can be made to work: still an active research area, as you can see.


I think when I commented, the only available library was Slug, which is closed source but already in fairly wide use… that is rapidly changing though. The following link has links to other resources on the topic: GitHub - azsn/gllabel: GPU Vector Text Rendering (WIP)

Nvidia was talking about GPU glyph rendering 16 years ago… and it was likely faster than CPU rendering even then.
