This is the type of progress I like to see! Kudos to all involved!
I'm not sure I would call software-rendered 3D "high performance". Nice to see progress, though.
Software rendering sets the API for hardware rendering to be built upon.
What, exactly, is holding up hardware 3D rendering on graphics cards? Are there any 100% open-source drivers? Or is there always going to be some "binary blobs" that prevent any open-source OS from taking full advantage of a graphics card?
Has anyone looked into trying to create a custom graphics card via FPGA on a PCIe bus?
AFAIK it's the device manager redesign, which is being worked on to improve multi-monitor support etc., which BeOS didn't really support.
I mean there is the existing port of the Vulkan driver.
An FPGA is a non-starter because it's far too slow; you'd end up with a GPU even slower than the Nvidia fixed-function GPUs that RudolfC's drivers support. In any case, software rendering is already much faster than this would be. As an example, even on a fast board, the GPU you can fit into an average large FPGA is around late-90s performance. Beyond that you are talking about thousand-dollar FPGAs… and that would still not get you much further.
Also consider that FPGAs tend to be pretty weak on memory bandwidth (and even then you have to fan that bandwidth out a lot inside the GPU to handle it), while GPUs are at the opposite end of the spectrum, with even low-end GPUs today having well over 100GB/s of bandwidth (note that is big-B bytes).
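To put rough numbers on that: even just touching a 1080p framebuffer a few times per frame adds up quickly, and that's before textures, geometry, and overdraw. A purely illustrative back-of-the-envelope sketch (all constants assumed, not measurements):

```cpp
// Back-of-the-envelope framebuffer bandwidth estimate (illustrative only).
#include <cstdio>

int main()
{
	const double width = 1920, height = 1080;
	const double bytesPerPixel = 4;		// 32-bit color
	const double fps = 60;
	const double passes = 3;			// assumed: clear + depth + color write

	double bytesPerFrame = width * height * bytesPerPixel * passes;
	double gbPerSecond = bytesPerFrame * fps / 1e9;

	std::printf("~%.2f GB/s just to touch the framebuffer\n", gbPerSecond);
	// A real renderer also reads textures and vertex data and overdraws,
	// which is why >100 GB/s on even low-end GPUs matters.
	return 0;
}
```

That works out to roughly 1.5 GB/s for the framebuffer alone, so a realistic scene multiplies that many times over.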
Some binary blobs are cross-platform and do not require the device driver to have any knowledge of their contents, for example the Nvidia GSP firmware. Such binary blobs do not cause any limitations for Haiku compared to other OSes.
OK, I just read some interesting stuff about the nVidia open-source driver for Turing/Ampere cards. My Asus Zephyrus G laptop uses an nVidia GeForce GTX 1660 Ti (which is a Turing chipset), and I found this:
How much can be done with this information? Can we make any/all of this work or is there something that still hogties Haiku?
I already managed to port and run this driver as a userland server, but it needs integration with app_server modesetting and the Mesa NVK Vulkan driver to be actually useful.
Once that is done, what kind of performance can we expect? Just how fast will the Teapot spin THEN? 1,000,000fps?
I am new to Haiku, and I was kind of surprised when I saw mine (a QEMU virtual machine) getting only 300+ fps. I mean, 300fps is great, but not for something so simple. Then again, it was OpenGL and not Vulkan, but this thread explaining that there is no GPU acceleration on Haiku makes sense of what I was seeing!
I hope we can have it soon, and that the Teapot can reach at least 1000fps.
I read about exactly such a project only a few days ago: New open source GPU is free to all - FuryGPU runs Quake at 60fps, supports modern Windows software | Tom's Hardware
For now there's only a driver for Windows, but since it's fully open-source, one for Haiku could be written.
I don't know if that thing is faster than software rendering (or at least not slower), however.
Ref:
- 2401 FPS - Mesa - latest version - on Haiku x86 (32 bit) and x86_64 (64 bit) - Software - Haiku Community (haiku-os.org)
- 1879 FPS - Upgrade Mesa, how feasible? - #213 by cocobean
See: Software Rendering Vs GPU Rendering: Differences, Pros, Cons (mygraphicscard.com)
Thanks for the information! When it comes to the screenshot you provided on the other thread, is it indeed hardware accelerated or software rendered?
Software rendering only.
It seems amazing that you can get almost 2,000fps on GLTeapot via just software rendering. However, you didn't mention what CPU/motherboard you were running, only the version of Mesa and the graphics card model. Since it's CPU rendering, we need to know what CPU you're using! Also, perhaps, the revision of Haiku as well.
Now, in this scenario (software rendering), we need to start detailing the aspects of GLTeapot: how many polygons (triangles) are being displayed every frame? Obviously, there is no collision detection, even with multiple pots all spinning at once. That got me thinking about demos like this:
Or this:
Now, given that OUR (Haiku's) logo is either a leaf or the Teapot (or the entire Haiku logo), if we were to create our own version of this type of demo, it could showcase:
- Collision detection (w/ walls and floor)
- Physics (same)
- How many triangles are rendered per frame
- Collision detection between teapots (or whatever image we used)
- Total framerate
In other words, we'd have a metric by which we could measure the functional performance of our systems in a fairly real-world "game" environment. Someone (or a couple of people) threw together those two demos (Amiga and Atari ST) on hardware that is, by today's standards, pathetic. So why can't we do something similar and showcase something that Haiku can do even better today?
And if/when hardware rendering is available in the future, we can switch between the two to see the difference and then record the percentage improvement!
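To make the idea concrete, here is a rough sketch of what the core loop of such a demo could look like: teapots approximated as bounding spheres bouncing in a box, with per-frame stats printed. Everything here is hypothetical (the struct, the constants, the triangle count), not existing Haiku or GLTeapot code, and rendering/sound would sit on top of it:

```cpp
// Hypothetical benchmark-loop sketch: spheres standing in for teapots,
// floor/wall collisions, pairwise collisions, and simple per-frame stats.
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

struct Body {
	float x, y, z;		// position
	float vx, vy, vz;	// velocity
	float radius;
};

int main()
{
	const float kBox = 10.0f;					// half-extent of the box
	const float kDt = 1.0f / 60.0f;				// fixed simulation step
	const float kGravity = -9.8f;
	const size_t kTrianglesPerTeapot = 6000;	// illustrative; depends on tessellation

	std::vector<Body> bodies;
	for (int i = 0; i < 16; i++) {
		bodies.push_back({float(i % 4) * 2 - 3, 5.0f + i * 0.5f,
			float(i / 4) * 2 - 3, 1.0f, 0.0f, -1.0f, 1.0f});
	}

	for (int frame = 0; frame < 600; frame++) {
		for (Body& b : bodies) {
			// Integrate gravity and velocity.
			b.vy += kGravity * kDt;
			b.x += b.vx * kDt;
			b.y += b.vy * kDt;
			b.z += b.vz * kDt;

			// Collision with floor and walls: reflect and damp.
			if (b.y - b.radius < 0) { b.y = b.radius; b.vy = -b.vy * 0.8f; }
			if (b.x - b.radius < -kBox) { b.x = -kBox + b.radius; b.vx = -b.vx; }
			if (b.x + b.radius > kBox) { b.x = kBox - b.radius; b.vx = -b.vx; }
			if (b.z - b.radius < -kBox) { b.z = -kBox + b.radius; b.vz = -b.vz; }
			if (b.z + b.radius > kBox) { b.z = kBox - b.radius; b.vz = -b.vz; }
		}

		// Teapot-vs-teapot collision: sphere overlap test with a crude response.
		for (size_t i = 0; i < bodies.size(); i++) {
			for (size_t j = i + 1; j < bodies.size(); j++) {
				Body& a = bodies[i];
				Body& c = bodies[j];
				float dx = a.x - c.x, dy = a.y - c.y, dz = a.z - c.z;
				float r = a.radius + c.radius;
				if (dx * dx + dy * dy + dz * dz < r * r) {
					std::swap(a.vx, c.vx);
					std::swap(a.vy, c.vy);
					std::swap(a.vz, c.vz);
				}
			}
		}

		if (frame % 60 == 0) {
			std::printf("frame %d: %zu teapots, %zu triangles/frame\n",
				frame, bodies.size(), bodies.size() * kTrianglesPerTeapot);
		}
	}
	return 0;
}
```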
Someone (or a couple of people) threw together those two demos (Amiga and Atari ST) on hardware that is, by today's standards, pathetic.
These demos are also pathetic by modern standards. Using algorithms and hardware that are more than ten years old, you can:
- Simulate fluid in a teapot (https://www.youtube.com/watch?v=YTtTEOsMmVU)
- Simulate fracturing bodies (https://www.youtube.com/watch?v=eB2iBY-HjYU)
- Simulate sound of splashing (Keenan Crane - Synthesizing the Sounds of Splashing)
- Simulate sound of a fracturing body (https://www.youtube.com/watch?v=nHH8N_lNZzI)
So one can easily imagine the following demo: a glass Utah teapot filled with red liquid falls inside a transparent box. It crashes into pieces, splashing red liquid inside the box. The liquid makes waves, and the teapot pieces are carried a little by the waves. And of course all the sounds are simulated at runtime.
Right… and you can code this in the same amount of time as those two demos were coded in back in the '80s, right? I'm talking about making a SIMILAR demo, not one that vastly exceeds them. I'm not talking about what today's systems CAN do, but about what we DON'T have right now as any kind of benchmark. And GLTeapot is not as technically capable (though it renders a much smoother teapot, because of the type of shading/resolution), because it doesn't involve any physics or sounds or "physical" interactions with the environment. I'm simply proposing that a demo, similar to the ones I linked, be created that would showcase MORE than what the current GLTeapot demo does.
I just updated my copy of Haiku x86_64 R1/B4 to the latest revision, but GLTeapot still doesn't have the "disable framerate limitation" option. However, I was able to Jam the latest version, which has it, and… apart from the FPS counter changing so fast that I can't even tell HOW fast it's going (it's something over 1,500fps, I think), it's nice to know that unleashing it shows just how fast software rendering can be… which is insanely fast.
That being said, I believe that if we're going to keep GLTeapot as a demo, it needs to be more informative and useful than it currently is. It tells us NOTHING about how many teapots are spinning, how many triangles are in the teapot, what the maximum/minimum/median framerate is, etc. And I think we need to set a standard that cannot be changed.
What I mean by that is that we need to make the display represent some kind of reality. Spinning a teapot in a tiny window is ridiculous. If it were a game, you couldn't even see the game! So how fast it spins is utterly pointless. I think the window needs to be sized to a standard dimension within the resolution Haiku is displaying, not resizable on the fly. Yes, resizing windows on the fly is a cool thing that shows what Haiku can do functionally, but displaying a spinning GLTeapot at over 2,000fps in a 1" or smaller window is silly. And the framerate goes all over the place (and the graphics glitch) when your mouse pointer moves over the window, so that needs to be fixed.
If we want a windowed display of GLTeapot, it needs to be something like 640x480, 800x600, 1024x768, etc. Something that will fit within the current resolution Haiku is using. If we want full-screen, then the display fills the entire screen. Settings made should be savable, so GLTeapot starts up with the same settings each time. In full-screen mode, the settings can be changed with number keys or something.
Just some ideas…
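As a concrete example of the min/max/median framerate idea, here is a hypothetical little helper (not existing GLTeapot code; the class name and the sample frame times are made up):

```cpp
// Hypothetical frame-statistics helper for a GLTeapot-like demo: records each
// frame time and reports min/median/max FPS at the end of a run.
#include <algorithm>
#include <cstdio>
#include <vector>

class FrameStats {
public:
	void AddFrame(double seconds) { fFrameTimes.push_back(seconds); }

	void Report() const
	{
		if (fFrameTimes.empty())
			return;
		std::vector<double> sorted(fFrameTimes);
		std::sort(sorted.begin(), sorted.end());
		double fastest = sorted.front();
		double slowest = sorted.back();
		double median = sorted[sorted.size() / 2];
		std::printf("FPS min/median/max: %.1f / %.1f / %.1f over %zu frames\n",
			1.0 / slowest, 1.0 / median, 1.0 / fastest, sorted.size());
	}

private:
	std::vector<double> fFrameTimes;
};

int main()
{
	// Usage example with made-up frame times (in seconds).
	FrameStats stats;
	stats.AddFrame(1.0 / 1800);
	stats.AddFrame(1.0 / 2100);
	stats.AddFrame(1.0 / 400);	// a slow frame, e.g. mouse over the window
	stats.Report();
	return 0;
}
```

Something like this would make the counter readable even when the raw number is flickering in the thousands.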
See: Gaming on Haiku - #89 by cocobean
GLTeapot is a simple app, but it does show a basic benchmark. The difference in these discussions is that GLTeapot is not a true representation of real 3D graphics benchmarking taken in full consideration: no high-end polygon counts, Phong shading, bump mapping, or tessellation. At least, not in this current iteration of the app…
This is where the line in the sand is drawn by most enthusiasts… using 3DMark, etc.