High-performance 3D on Haiku

This is the type of progress I like to see! Kudos to all involved!

1 Like

I’m not sure I would call software rendered 3D “high performance”. Nice to see progress though.

Software rendering establishes the API that hardware rendering can later be built upon.

1 Like

What, exactly, is holding up hardware 3D rendering on graphics cards? Are there any 100% open-source drivers? Or is there always going to be some “binary blobs” that prevent any open-source OS from taking full advantage of a graphics card?

Has anyone looked into trying to create a custom graphics card via FPGA on a PCIe bus?

AFAIK it’s the device manager redesign, which is being worked on to improve multi-monitor support and so on… something BeOS didn’t really have.

I mean there is the existing port of the Vulkan driver.

FPGA is a non-starter because it’s far too slow; you’d end up with a GPU even slower than the Nvidia fixed-function GPUs that RudolfC’s drivers support. In any case, software rendering is already much faster than this would be. As an example, even on a fast board, the GPU you can fit into an average large FPGA is around late-’90s performance. Beyond that you are talking about thousand-dollar FPGAs… and that would still not get you much further.

Also consider that FPGAs tend to be pretty weak on memory bandwidth (and even then you have to fan that bandwidth out a lot inside the GPU to handle it), while GPUs are at the opposite end of the spectrum: even low-end GPUs today have well over 100 GB/s of bandwidth (note that’s big B, bytes).
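To put that bandwidth point in perspective, here is a rough back-of-the-envelope calculation; the resolution and frame rate are made-up illustrative numbers, not measurements from GLTeapot or any real hardware:

```cpp
#include <cstdio>

// Rough estimate of framebuffer write bandwidth alone, ignoring
// depth-buffer traffic, texture reads and overdraw.
int main()
{
	const double width = 1920;			// pixels (illustrative)
	const double height = 1080;			// pixels (illustrative)
	const double bytesPerPixel = 4;		// 32-bit RGBA
	const double framesPerSecond = 500;	// illustrative target

	double bytesPerFrame = width * height * bytesPerPixel;
	double gbPerSecond = bytesPerFrame * framesPerSecond / 1e9;

	printf("%.1f MB per frame, %.1f GB/s just for color writes\n",
		bytesPerFrame / 1e6, gbPerSecond);
	// Prints roughly 8.3 MB per frame and 4.1 GB/s, and that is the
	// optimistic case before depth, textures or overdraw.
	return 0;
}
```

Even that optimistic figure eats a noticeable slice of the memory bandwidth a typical FPGA board can offer, which is exactly where the approach runs out of headroom.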

Some binary blobs are cross-platform and do not require the device driver to know anything about their contents, for example Nvidia’s GSP firmware. Such binary blobs do not cause any limitations for Haiku compared to other OSes.

1 Like

Ok, I just read some interesting stuff about the nVidia open-source driver for Turing/Ampere cards. My Asus Zephyrus G laptop uses an nVidia GeForce GTX 1660 Ti (which is a Turing chipset), and I found this:

How much can be done with this information? Can we make any/all of this work or is there something that still hogties Haiku?

I already managed to port and run this driver as a userland server, but it needs integration with app_server mode-setting and the Mesa NVK Vulkan driver to be actually useful.
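For anyone wondering what integrating the Mesa NVK Vulkan driver would actually buy applications, here is a minimal sketch of what a Vulkan client does once such a driver is installed. These are standard Vulkan calls, nothing Haiku-specific and nothing taken from this particular port:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

// Minimal sketch: create an instance and list the physical devices.
// With a working NVK driver, the GTX 1660 Ti would show up here.
int main()
{
	VkApplicationInfo appInfo = {};
	appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
	appInfo.pApplicationName = "device-probe";
	appInfo.apiVersion = VK_API_VERSION_1_1;

	VkInstanceCreateInfo createInfo = {};
	createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
	createInfo.pApplicationInfo = &appInfo;

	VkInstance instance;
	if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
		fprintf(stderr, "No Vulkan driver available\n");
		return 1;
	}

	uint32_t count = 0;
	vkEnumeratePhysicalDevices(instance, &count, nullptr);
	std::vector<VkPhysicalDevice> devices(count);
	vkEnumeratePhysicalDevices(instance, &count, devices.data());

	for (VkPhysicalDevice dev : devices) {
		VkPhysicalDeviceProperties props;
		vkGetPhysicalDeviceProperties(dev, &props);
		printf("Found GPU: %s\n", props.deviceName);
	}

	vkDestroyInstance(instance, nullptr);
	return 0;
}
```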

15 Likes

Once that is done, what kind of performance can we expect? Just how fast will the Teapot spin THEN? 1,000,000fps? :rofl:

3 Likes

I am new to Haiku, and I was kinda surprised when I saw mine (a QEMU virtual machine) running at only 300+ fps. I mean, 300 fps is great, but not for something so simple. Then again, it was OpenGL and not Vulkan, but this thread, which explains that there is no GPU acceleration on Haiku, puts those numbers in context!

I hope we can have it soon and that Teapot can be at least 1000fps :wink:

3 Likes

https://www.theregister.com/2024/04/05/amd_mes_open_source/

2 Likes

I read about exactly such a project only a few days ago: New open source GPU is free to all — FuryGPU runs Quake at 60fps, supports modern Windows software | Tom's Hardware
For now there’s only a driver for Windows, but since it’s fully open-source, one for Haiku could be written.
I don’t know whether that thing is faster than software rendering (or at least not slower), however.

1 Like

See: Software Rendering Vs GPU Rendering: Differences, Pros, Cons (mygraphicscard.com)

1 Like

Thanks for the information! When it comes to the screenshot you provided on the other thread, is it indeed hardware accelerated or software rendered?

Software rendering only.

It seems amazing that you can get almost 2,000 fps in GLTeapot with just software rendering. However, you didn’t mention what CPU/motherboard you were running, only the version of Mesa and the graphics card model. Since it’s CPU rendering, we need to know what CPU you’re using! Also, perhaps, the revision of Haiku as well.
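As an aside, anyone posting numbers can confirm which renderer is actually in use from inside the GL context. A small sketch using standard OpenGL queries; on Haiku’s software path the renderer string will typically name Mesa’s llvmpipe or softpipe rasterizer:

```cpp
#include <GL/gl.h>
#include <cstdio>

// Call this with a current OpenGL context, e.g. inside a BGLView
// between LockGL() and UnlockGL().
void PrintRendererInfo()
{
	printf("Vendor:   %s\n", (const char*)glGetString(GL_VENDOR));
	printf("Renderer: %s\n", (const char*)glGetString(GL_RENDERER));
	printf("Version:  %s\n", (const char*)glGetString(GL_VERSION));
}
```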

Now, in this scenario (software rendering), we need to start detailing the aspects of GLTeapot. How many polygons (triangles) are being displayed every frame? Obviously, there is no collision detection, even with multiple pots all spinning at once. Which got me thinking about demos like this:

Or this:

Now, given that OUR (Haiku’s) logo is either a leaf or the Teapot (or the entire Haiku logo), if we were to create our own version of this type of demo, it could showcase:

  1. Collision detection (w/ walls and floor)
  2. Physics (same)
  3. How many triangles are rendered per frame
  4. Collision detection between teapots (or whatever image we used)
  5. Total framerate

In other words, we’d have a metric by which we could measure the functional performance of our systems in a fairly real-world “game” environment. Someone (or a couple of people) threw together those two demos (Amiga and Atari ST) on hardware that, by today’s standards, is pathetic. So why can’t we do something similar and showcase something Haiku can do even better today?

And if/when hardware rendering is available in the future, we can switch between the two to see the difference and then record the percentage improvement!
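To make the proposal concrete, here is a minimal sketch of the simulation step such a demo would need. Each teapot is treated as a bounding sphere; the box size, restitution factor and triangle count are all assumptions for illustration, not existing Haiku code:

```cpp
#include <cstdio>
#include <vector>

// Each teapot is simplified to a bounding sphere bouncing inside an
// axis-aligned box: enough for wall/floor collisions and physics
// (items 1-2) plus a per-frame triangle count (items 3 and 5).
struct Body {
	float pos[3];
	float vel[3];
	float radius;
};

const float kBoxHalf = 10.0f;			// half-extent of the box (assumed)
const int kTrianglesPerTeapot = 6320;	// assumed mesh size, illustrative only

void Step(std::vector<Body>& bodies, float dt)
{
	for (Body& body : bodies) {
		body.vel[1] -= 9.81f * dt;		// gravity on the Y axis
		for (int axis = 0; axis < 3; axis++) {
			body.pos[axis] += body.vel[axis] * dt;
			// Bounce off a wall only while still moving into it,
			// losing a little energy on each impact.
			if (body.pos[axis] - body.radius < -kBoxHalf && body.vel[axis] < 0)
				body.vel[axis] = -body.vel[axis] * 0.9f;
			if (body.pos[axis] + body.radius > kBoxHalf && body.vel[axis] > 0)
				body.vel[axis] = -body.vel[axis] * 0.9f;
		}
	}
	// The demo could overlay these numbers every frame.
	printf("%zu teapots, %zu triangles per frame\n",
		bodies.size(), bodies.size() * (size_t)kTrianglesPerTeapot);
}
```

Pairwise sphere-sphere tests (item 4) are left out for brevity, but they follow the same pattern.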

Someone (or a couple of people) threw together those two demos (Amiga and Atari ST) on hardware that, by today’s standards, is pathetic.

These demos are also pathetic by modern standards. Using algorithms and hardware that are more than ten years old, you can already do far more impressive things in real time.

So one can easily imagine the following demo: a glass Utah teapot filled with red liquid falls inside a transparent box. It crashes into pieces, splashing red liquid inside the box. The liquid makes waves, and the teapot pieces are carried a little by the waves. And of course all the sounds are simulated at runtime.

1 Like

Right… and you can code this in the same amount of time as those two demos were coded in the ’80s, right? I’m talking about making a SIMILAR demo, not one that vastly exceeds them. I’m not talking about what today’s systems CAN do, but about what we DON’T have right now as any kind of benchmark. And GLTeapot is not as technically capable (though it renders a much smoother teapot, because of the type of shading/resolution), because it doesn’t involve any physics, sounds, or “physical” interactions with the environment. I’m simply proposing that a demo, similar to the ones I linked, be created to showcase MORE than what the current GLTeapot demo does.

I just updated my copy of Haiku x86_64 R1/B4 to the latest revision, but GLTeapot still doesn’t have the “disable framerate limitation” option. However, I was able to build the latest version with Jam, which does have it, and… apart from the FPS counter changing so fast that I can’t even tell HOW fast it’s going (it’s something over 1,500 fps, I think), it’s nice to know that unleashing it shows just how fast software rendering can be… which is insanely fast.

That being said, I believe that if we’re going to keep GLTeapot as a demo, it needs to be more informational and useful than it currently is. It tells us NOTHING about how many teapots are spinning, how many triangles are in the teapot, what the maximum/minimum/median framerate is, etc. And I think we need to set a standard that cannot be changed.
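A small sketch of how the maximum/minimum/median framerate could be collected and reported per run; plain C++, not existing GLTeapot code:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Collects per-frame times and reports min/max/median FPS.
class FrameStats {
public:
	void AddFrame(double seconds) { fFrameTimes.push_back(seconds); }

	void Report()
	{
		if (fFrameTimes.empty())
			return;
		std::vector<double> sorted = fFrameTimes;
		std::sort(sorted.begin(), sorted.end());
		double median = sorted[sorted.size() / 2];
		// The slowest frame gives the minimum FPS and vice versa.
		printf("min %.0f / median %.0f / max %.0f fps over %zu frames\n",
			1.0 / sorted.back(), 1.0 / median, 1.0 / sorted.front(),
			sorted.size());
	}

private:
	std::vector<double> fFrameTimes;
};
```

GLTeapot’s draw loop would call AddFrame() with each measured frame time and Report() when the window closes.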

What I mean by that is we need to make the display represent some kind of reality. Spinning a teapot in a tiny window is ridiculous; if it were a game, you couldn’t even see it! So how fast it spins is utterly pointless. I think the window needs to be sized to a standard dimension within the resolution Haiku is displaying, and not resizable on the fly. Yes, on-the-fly window resizing is a cool thing that shows off what Haiku can do functionally, but displaying a spinning GLTeapot at over 2,000 fps in a 1" or smaller window is silly. Also, the framerate goes all over the place (and the graphics glitch) when the mouse pointer moves over the window, so that needs to be fixed.

If we want a windowed display of GLTeapot, it needs to be something like 640x480, 800x600, 1024x768, etc. Something that will fit within the current resolution Haiku is using. If we want full-screen, then the display fills the entire screen. Settings made should be savable, so GLTeapot starts up with the same settings each time. In full-screen mode, the settings can be changed with number keys or something.
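For the fixed-size-window idea, here is a minimal Haiku-style sketch of what that could look like. The 800x600 size, window title and application signature are assumptions, and this is not how GLTeapot is currently structured:

```cpp
#include <Application.h>
#include <GLView.h>
#include <Window.h>

// A non-resizable 800x600 window hosting an OpenGL view,
// so every run renders at the same, comparable size.
class BenchWindow : public BWindow {
public:
	BenchWindow()
		:
		BWindow(BRect(100, 100, 899, 699), "Teapot Bench",
			B_TITLED_WINDOW, B_NOT_RESIZABLE | B_NOT_ZOOMABLE)
	{
		BGLView* view = new BGLView(Bounds(), "gl", B_FOLLOW_ALL, 0,
			BGL_RGB | BGL_DOUBLE | BGL_DEPTH);
		AddChild(view);
	}
};

int main()
{
	BApplication app("application/x-vnd.teapot-bench");
	(new BenchWindow())->Show();
	app.Run();
	return 0;
}
```

Making the window B_NOT_RESIZABLE is what keeps every run comparable; a saved-settings file and a full-screen mode would build on top of this.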

Just some ideas…

See: Gaming on Haiku - #89 by cocobean

GLTeapot is a simple app, but it does show a basic benchmark. The difference in these discussions is that GLTeapot is not a true representation of real 3D graphics benchmarking when considered fully: no high-end polygon counts, Phong shading, bump mapping, or tessellation. At least, not in this current iteration of the app…
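For reference, the “Phong shading” mentioned above boils down to evaluating the classic ambient + diffuse + specular reflection model per pixel (as opposed to the per-vertex lighting a fixed-function demo typically uses). A self-contained sketch of that computation, not tied to GLTeapot or any particular renderer:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Classic Phong lighting: ambient + diffuse + specular.
// All direction vectors are assumed normalized; the result is one
// color channel's intensity.
float PhongIntensity(Vec3 normal, Vec3 toLight, Vec3 toViewer,
	float ambient, float diffuse, float specular, float shininess)
{
	float nDotL = std::max(0.0f, Dot(normal, toLight));

	// Reflect the light direction about the normal: R = 2(N.L)N - L
	Vec3 reflect = { 2 * nDotL * normal.x - toLight.x,
		2 * nDotL * normal.y - toLight.y,
		2 * nDotL * normal.z - toLight.z };
	float rDotV = std::max(0.0f, Dot(reflect, toViewer));

	return ambient
		+ diffuse * nDotL
		+ specular * std::pow(rDotV, shininess);
}
```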

This is where the line in the sand is drawn by most enthusiasts…using 3DMark… etc, etc…