[GSoC 2017] 3D Hardware Acceleration - Weekly Report 4

You never did any 3D programming, right? Do you have any actual idea what goes into a full-fledged GPU driver? Guess not. And by the way, raytracing is bad for real-time graphics. There are lab-condition test cases, but it simply can’t keep up with a hardware rasterizer. There’s a reason triangle-based rendering won over ray-tracing in real time: it’s non-branching, non-correlated, deterministic, and scales well across many pipes.

EDIT: Besides, Mesa just sits “above” the driver. You need binary blobs (or AMD’s open source drivers) to get things working. Mesa alone gives you software rendering. Granted, this can produce render speeds comparable to an aged 3D graphics card, but you can hardly run anything “interesting” with it.

2 Likes

CPUs, even several of them, can’t compete against a massively parallel GPU where 128, 256 or 512 rendering pipelines are busy at once. It’s definitely another scale, really.

There is no point in hoping that devoting a computer solely to software rendering will bring much more performance than running the same software rendering on the user’s own computer.
At best, the transport cost between the two computers will probably be a bottleneck, just as the link between main memory and GPU memory already is these days.

1 Like

You know, it would still be software rendering, only slower, depending on the throughput of the transport medium. Maybe USB3 or Thunderbolt could be fast enough not to be the bottleneck, but I still ask myself why one would run an extra computer just for this:

  • It would be slower than doing the rendering on localhost
  • It would take more energy (current GPUs have really good power management)
  • It would generate extra noise
  • It would be bulky
  • It would be expensive
  • It would not be an elegant solution
  • It would not be innovative at all
  • And it would still be software rendering.
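
A quick back-of-the-envelope check on that transport point (my numbers, not from this thread, assuming an uncompressed 1080p60 stream):

```cpp
// Sketch: bandwidth needed to stream uncompressed frames from a
// dedicated rendering box back to the host. Assumed figures: 1080p,
// 32-bit RGBA, 60 fps, USB 3.0 at its nominal 5 Gbit/s.
#include <cstdio>

int main()
{
    const double pixels = 1920.0 * 1080.0;
    const double bytesPerPixel = 4.0;   // 32-bit RGBA
    const double fps = 60.0;

    const double streamBytesPerSec = pixels * bytesPerPixel * fps;
    const double usb3BytesPerSec = 5e9 / 8.0;  // raw, before encoding overhead

    printf("frame stream: ~%.0f MB/s\n", streamBytesPerSec / 1e6);  // ~498
    printf("USB 3.0 raw:  ~%.0f MB/s\n", usb3BytesPerSec / 1e6);    // ~625
    return 0;
}
```

So an uncompressed 1080p60 stream alone nearly saturates USB 3.0 even before protocol overhead, and that is without sending any scene data in the other direction.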

Your CPU is maybe <1% busy doing other stuff and 99% busy doing Mesa rendering when you run a 3D app in Haiku. So, as mentioned, moving Mesa to another computer would add some overhead (you need to transfer the data). This may use, say, 1% of the CPU on each side (assuming a quite good implementation).

As a result, your dedicated Mesa computer has exactly as much CPU power available as your original machine did, so there is no net gain, and now you need two computers to run Haiku instead of just one.

The premise is IF it could be done and IF it would yield a desirable performance result.

Yes, it’s a kludge, but IF it netted a performance increase not currently attainable, then isn’t ANYTHING better than nothing? That’s my thinking.

Yes, there is. However, it is a design from the 1990s, oriented towards the 2D acceleration features of the day: blitting rectangles, drawing a mouse cursor with a hardware sprite, video overlays, maybe tracing horizontal and vertical lines, rectangle filling, scrolling. And Haiku uses none of these anyway. So essentially we are left with the modesetting hook, and a partial implementation of multiple display support in some old drivers.
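
To make the shape of that interface concrete, here is a rough sketch of how a consumer loads those hooks, based on my reading of the BeOS/Haiku Accelerant headers (treat the exact signatures as approximate):

```cpp
// Sketch of loading accelerant hooks, after BeOS/Haiku's Accelerant.h.
#include <Accelerant.h>
#include <image.h>

// An accelerant is an add-on exporting one entry point, by convention
// named "get_accelerant_hook", which hands out function pointers.
typedef void* (*GetAccelerantHook)(uint32 feature, void* data);

status_t LoadHooks(image_id accelerantAddon, int deviceFD)
{
    GetAccelerantHook getHook;
    status_t result = get_image_symbol(accelerantAddon,
        "get_accelerant_hook", B_SYMBOL_TYPE_ANY, (void**)&getHook);
    if (result != B_OK)
        return result;

    // The hooks Haiku actually relies on: init and modesetting.
    init_accelerant init
        = (init_accelerant)getHook(B_INIT_ACCELERANT, NULL);
    set_display_mode setMode
        = (set_display_mode)getHook(B_SET_DISPLAY_MODE, NULL);

    // 1990s-style 2D hooks; drivers may return NULL, and app_server
    // does not use them anyway.
    screen_to_screen_blit blit
        = (screen_to_screen_blit)getHook(B_SCREEN_TO_SCREEN_BLIT, NULL);
    fill_rectangle fillRect
        = (fill_rectangle)getHook(B_FILL_RECTANGLE, NULL);

    if (init == NULL || setMode == NULL)
        return B_ERROR;

    (void)blit;
    (void)fillRect;
    return init(deviceFD);
}
```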

However, this interface does not go very far for 3D acceleration, which is a completely different thing.

From there, we see two ways of doing things:

  • One is extending this existing accelerant API
  • The other is developing a separate interface, maybe as ioctls, more similar to what is done in Linux (see the sketch just below)
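
For the second option, here is a purely hypothetical sketch of what such an ioctl interface could look like; none of these names exist in Haiku today, they just mirror the general shape of Linux DRM’s buffer/submit ioctls:

```cpp
// Hypothetical ioctl-style 3D interface (nothing here exists in Haiku).
#include <Drivers.h>   // for B_DEVICE_OP_CODES_END
#include <stdint.h>

// Userland (e.g. Mesa) would allocate GPU buffers and submit command
// streams through the device driver, roughly as Linux DRM does.
struct gpu_buffer_create {
    uint64_t size;      // in:  requested size in bytes
    uint32_t handle;    // out: driver-assigned buffer handle
};

struct gpu_command_submit {
    uint32_t bufferHandle;  // in:  command buffer to execute
    uint32_t length;        // in:  bytes of valid commands
    uint64_t fence;         // out: fence to wait on for completion
};

// Driver-private opcodes, starting after Haiku's reserved range, the
// same way the existing graphics drivers define their control codes.
enum {
    GPU_CREATE_BUFFER = B_DEVICE_OP_CODES_END + 1,
    GPU_SUBMIT_COMMANDS
};

// Usage would then be plain ioctl() calls on the opened device, e.g.:
//   gpu_buffer_create info = { 64 * 1024, 0 };
//   ioctl(fd, GPU_CREATE_BUFFER, &info, sizeof(info));
```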

I don’t know which way is better, not being familiar enough with DRI/DRM to see if it could be made to fit the existing accelerant interface. This interface is also accessible to applications quite directly (BWindowScreen allows pretty much direct access to the accelerant hooks), so it would be possible for Mesa to grab the hooks and talk with the driver directly. This makes things fit well with our existing model and allows deciding who gets access to 3D acceleration at the app_server level.
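
To illustrate the BWindowScreen route, a rough sketch from memory of the Game Kit API (the hook index and casting details are illustrative, not authoritative):

```cpp
// Sketch: an app reaching the accelerant hooks via the Game Kit.
#include <WindowScreen.h>

class DirectAccessWindow : public BWindowScreen {
public:
    DirectAccessWindow(status_t* error)
        : BWindowScreen("direct access", B_8_BIT_640x480, error) {}

    virtual void ScreenConnected(bool connected)
    {
        if (!connected)
            return;
        // CardHookAt() hands out driver hooks by index; a client such
        // as Mesa could grab them here, while app_server still decides
        // which application is connected to the screen at any time.
        graphics_card_hook hook = CardHookAt(10);  // index illustrative
        if (hook != NULL) {
            // Cast to the specific hook's real signature before calling.
        }
    }
};
```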

If we go with a more Linux-like interface, it would be possible for an application to bypass app_server and talk directly with the device driver. I don’t know if this is desirable/useful.

1 Like

@AndrewZ: So where is your god now?

Oh, and the Embree renderer:

It runs at around 4-5 fps on a Core 2 Duo L9300.

3 Likes

As I suggested before, how about BDirectGLWindow? I know it only covers OpenGL acceleration, but isn’t that mostly all the 3D acceleration we’ll need for now? Be Inc. even had an article on it before the company went under: https://www.haiku-os.org/legacy-docs/benewsletter/Issue5-13.html

Miqlas, you are always the rock star!!

That’s one option, but I think we would rather use EGL, which is getting closer and closer to being universal now. It provides a portable way to create and manage OpenGL contexts (i.e. windows).
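
For reference, this is what the portable part looks like: standard EGL bring-up, not Haiku-specific code (the native window type is whatever the platform glue maps it to):

```cpp
// Standard EGL context creation, as in the EGL 1.4 specification.
#include <EGL/egl.h>

bool CreateGLContext(EGLNativeWindowType nativeWindow)
{
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (display == EGL_NO_DISPLAY || !eglInitialize(display, NULL, NULL))
        return false;

    // Ask for a basic RGB + depth configuration usable with desktop GL.
    const EGLint attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_DEPTH_SIZE, 24,
        EGL_NONE
    };
    EGLConfig config;
    EGLint configCount;
    if (!eglChooseConfig(display, attribs, &config, 1, &configCount)
            || configCount == 0)
        return false;

    eglBindAPI(EGL_OPENGL_API);
    EGLSurface surface
        = eglCreateWindowSurface(display, config, nativeWindow, NULL);
    EGLContext context
        = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
    if (surface == EGL_NO_SURFACE || context == EGL_NO_CONTEXT)
        return false;

    return eglMakeCurrent(display, surface, surface, context);
}
```

The same code then works whether the context ends up backed by software rendering or by a future hardware driver.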

BGLView is also going to stay, as you don’t always want a full window for OpenGL content (for example, the preview in the ScreenSaver preferences can be a small BGLView inside a larger window).
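
The embedded case looks like this with the existing API (a minimal sketch; the view name and clear color are of course arbitrary):

```cpp
// Minimal BGLView embedded in a normal window, using Haiku's GLView.h.
#include <GLView.h>
#include <GL/gl.h>

class PreviewView : public BGLView {
public:
    PreviewView(BRect frame)
        : BGLView(frame, "preview", B_FOLLOW_ALL, 0,
            BGL_RGB | BGL_DOUBLE | BGL_DEPTH) {}

    void Render()
    {
        LockGL();                       // acquire the GL context
        glClearColor(0.1f, 0.1f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        SwapBuffers();                  // BGLView handles the buffer flip
        UnlockGL();
    }
};

// The view occupies only part of its parent window, exactly like the
// ScreenSaver preview case:
//   window->AddChild(new PreviewView(BRect(10, 10, 170, 130)));
```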

Implementing BDirectGLWindow is also possible, but I don’t know if the API is a good fit for our implementation (BDirectWindow is already not as direct as it was in BeOS, because our app_server uses double buffering and BeOS’s didn’t).