[GSoC 2017] 3D Hardware Acceleration - Weekly Report 4

That is correct. Software ray tracing does not provide OpenGL acceleration. But it is an alternative to Mesa, where development has stalled and which appears to be fairly hard to work on. Porting Embree to Haiku could provide some great demonstrations of real-time 3D in software, and it could inspire someone to work on Mesa. The effort to port Embree could be a week or two of work. It requires knowledge of CMake, C++, and systems internals for the low-level system include files.

We already have Mesa and software 3D rendering using it, and it even runs some (old, simple) games! For me that is a better source of inspiration. I even wrote a recipe for TinyGL to see if it would do better (probably not, but you never know).

Given the fact that no one has worked on Mesa for a long time, maybe some additional demos and inspiration would also be beneficial.

No one has worked on it recently, because it works and there is no need to focus on it at the moment. So devs work on more urgent issues.

Also, it was not that long ago that kallisti5 updated it and made current upstream sources work with Haiku. So you can't say this is an abandoned project.

Ok, it's not an abandoned project, and although it is a cool project which I completely support, it does not yet serve as a substitute for hardware-accelerated 3D graphics, which is really what everyone wants. From what I understand, it will require a large amount of work to bring in updates that will improve speed. Would you agree with that? More related developments in 3D, such as real-time ray tracing, would draw attention to the fact that current CPUs can support many types of real-time 3D graphics. Right now the only interest I see is in this and another thread, and that is only discussion.

Raytracing is unrelated to what we need for 3D acceleration. It's an interesting thing on its own, but unrelated.

Getting hardware acceleration working is a matter of implementing lots of low-level things in the driver (sending commands to the hardware using hardware ring buffers, DMA, ...), and then making this accessible from userland, either through our existing accelerant interface or through something more DRI/DRM-like. And then finally, having Mesa use this new interface to let the hardware do 3D. It is indeed a lot of work, but it does not involve anything 3D-ish (polygons, meshes, vector math, etc.). It's just hardware driver development, and for quite complex hardware. Still, having an up-to-date Mesa is one of the many needed steps on the long way to accelerated 3D.
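
To make the userland side concrete, here is a rough sketch of how a 3D command-submission hook could be grafted onto the existing accelerant interface. This is an illustration only: B_SUBMIT_3D_COMMANDS and submit_3d_commands() are made up, while get_accelerant_hook() is the real entry point every accelerant exports (see Accelerant.h).

```cpp
// Hypothetical sketch: the feature code and the hook below are not part
// of the real Accelerant.h; only get_accelerant_hook() is.
#include <Accelerant.h>

// Hypothetical private feature code for handing a 3D command batch to
// the driver (the value is arbitrary).
enum {
	B_SUBMIT_3D_COMMANDS = 0x80001000
};

// Hypothetical hook: a real driver would copy the batch into a
// memory-mapped hardware ring buffer (set up for DMA) and then advance
// the hardware write pointer so the GPU starts fetching commands.
static status_t
submit_3d_commands(const uint32* batch, uint32 wordCount)
{
	(void)batch;
	(void)wordCount;
	return B_OK;
}

// The standard accelerant entry point: userland (eventually Mesa) asks
// for a feature, and the accelerant returns the matching function
// pointer.
extern "C" void*
get_accelerant_hook(uint32 feature, void* data)
{
	(void)data;
	switch (feature) {
		case B_SUBMIT_3D_COMMANDS:
			return (void*)submit_3d_commands;
		default:
			return NULL;
	}
}
```

All the hard parts (ring buffer setup, DMA, synchronization) hide behind that one hypothetical hook, which is exactly why this is driver work rather than 3D work.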

You are right anyway, enough discussion, back to code everyone! :slight_smile:


I mentioned it before: the devs at MorphOS got TinyGL to work nicely, and it's hardware-accelerated 3D... it's actually very fast indeed. Sure, it's on limited hardware, but it works rather well. It's actually a modified version of TinyGL.
As was said by many in this thread, you can't cater for everyone, so maybe a limited chipset should initially be discussed?

Wait, TinyGL counts as hardware accelerated now? It's a pure software rendering solution. Or maybe we are not talking about the same thing?

And yes, of course drivers will come one at a time, starting with whichever one the person working on it has hardware for.

MorphOS supports 3D hardware acceleration on a selection of cards that are more than 10 years old.

And yes, they are using TinyGL, but not directly: "The MorphOS version of TinyGL is only loosely based on the original implementation. It was rewritten to take full advantage of 3D hardware acceleration."

You guys should check the calendar first, then make a calculation: how many developer-hours would be required to support some absolutely out-of-date GPUs?

And while you can argue that "some old card is still better than nothing", I think the Mesa backend we have could provide sustainable 3D hardware acceleration in the future, rather than a hack built on a "tiny subset of OpenGL, last updated 15 years ago, designed with no hardware acceleration in mind."

I wasn't saying we should use TinyGL; it's an example of what a small group of devs can do once the framework is in place. And I completely agree on the outdated hardware, so, as also said before, maybe target a widely used NVIDIA or AMD card.
Might be easier these days; there aren't too many GPU manufacturers remaining. At any rate, we're talking about it, and that's got to be a positive.

We've got upstream support and it's very active.

Aside from that one modified Quake engine, no games use ray tracing, which is entirely software rendering with no support from any consumer video cards. Haiku, being capable of using any number of CPU cores, is already as suitable for real-time ray tracing as any other operating system that exists.

Until a team of devs can take on the hard work of bringing 3D hardware acceleration, what a single dev could do is try to port OpenSWR. It's a software renderer, sure, but a massively multithreaded one, which performs better than llvmpipe on CPUs with several cores. And it's a renderer for OpenGL, not a proprietary API, and it's already integrated into Mesa.

http://www.openswr.org
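
For what it's worth, if our Mesa build had the swr Gallium driver compiled in, selecting it should just be a matter of an environment variable. A minimal sketch, assuming such a build exists:

```cpp
// Sketch, assuming a Mesa build with the OpenSWR ("swr") Gallium driver
// compiled in. Mesa reads GALLIUM_DRIVER when the GL context is created
// to pick the software rasterizer.
#include <cstdlib>

int main()
{
	// "swr" selects OpenSWR; "llvmpipe" is the usual default.
	setenv("GALLIUM_DRIVER", "swr", 1 /* overwrite */);

	// ...create the OpenGL context and render as usual from here on.
	return 0;
}
```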

What could also be done is to explain/design the needed API between graphics card driver and accelerant. Is there any convention between those components at all?

Well, open-source-wise, this API between 3D hardware and driver is mostly called DRI2 + DRM. Dunno how Vulkan will or could change things on that topic, but it can't be worse than the current situation, I guess.
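
For comparison, this is roughly what the userland side of DRM looks like on Linux: the application opens the device node and talks to the kernel driver through ioctls. A minimal sketch using libdrm (Linux-specific; Haiku would need its own analogue of this):

```cpp
// Minimal libdrm sketch (link with -ldrm). Command submission and
// memory management go through ioctls on the same descriptor.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <xf86drm.h>

int main()
{
	// Each GPU is exposed as a device node managed by the kernel driver.
	int fd = open("/dev/dri/card0", O_RDWR);
	if (fd < 0) {
		perror("open /dev/dri/card0");
		return 1;
	}

	// drmGetVersion() wraps the DRM_IOCTL_VERSION ioctl.
	drmVersionPtr version = drmGetVersion(fd);
	if (version != NULL) {
		printf("kernel driver: %s\n", version->name);
		drmFreeVersion(version);
	}

	close(fd);
	return 0;
}
```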

Not production-ready yet, but also, Mesa's already ported to Haiku.

You're laughing because you THINK you know what I'm talking about. But that's not what I'm talking about at all. I'd given some thought to it, sure, but you are right, that would be rather expensively extensive. For an open source project like Haiku, it doesn't make sense...

But my ā€œforkā€ wouldnā€™t be open-source. So itā€™s more reasonable to consider that option. However, Iā€™m talking about a far more simplistic ā€œvideo cardā€, that uses existing hardware/softwareā€¦ except that hardware does NOTHING but render graphics, even though it isnā€™t designed for that task at all. :smiley:

But let me repeat my previous question, which actually could relate to my current one.

How hard would it be to strip down my "ancient" revision of "Haiku64" to the point where it does nothing but render graphics and send that data to a connected Haiku system? You already have Mesa3D; you would just make an absolutely minimalistic version of Haiku run it across all CPU cores. Surely that would be a fairly decent solution? And there would be no "binary blobs" or proprietary code to navigate.

It's far from an elegant solution (it definitely is a "crazy" type of "solution"), but would it give us the accelerated 3D that we currently don't have? Can it actually be done, and would it work?

Please correct me if I misunderstood something:

You mean there would be an x86 core with a minimal Haiku system on this board, which would do rendering for a host system? Or would it be a separate computer?

And it would do the rendering with Mesa? Or?
This part is not clear to me, sorry.

For the sake of simplicity, it would be a PC running a minimal non-graphical version of Haiku, running Mesa on all cores and operating as a "video card" (connected to a monitor) with an interface to another PC running normal Haiku. I'm curious whether it would result in faster (accelerated) 3D than we currently have, since the one computer would be devoted to nothing but graphics rendering/display.

Indeed, a crude concept, but would it work?
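
One back-of-the-envelope number for that question: before rendering speed even enters the picture, shipping the finished frames back over the link is demanding on its own. A small sketch of the arithmetic, assuming 1080p at 60 Hz with 32-bit color and no compression:

```cpp
// Back-of-the-envelope: link bandwidth needed to stream uncompressed
// frames from the "render PC" back to the host.
// Assumptions: 1920x1080, 32-bit color, 60 frames per second.
#include <cstdio>

int main()
{
	const double width = 1920.0;
	const double height = 1080.0;
	const double bytesPerPixel = 4.0;   // 32-bit RGBA
	const double framesPerSecond = 60.0;

	const double bytesPerSecond =
		width * height * bytesPerPixel * framesPerSecond;

	// ~475 MiB/s, roughly 4 Gbit/s -- well beyond gigabit Ethernet.
	printf("%.1f MiB/s (%.2f Gbit/s)\n",
		bytesPerSecond / (1024.0 * 1024.0),
		bytesPerSecond * 8.0 / 1e9);
	return 0;
}
```

So unless the frames are compressed, or the two machines share something much faster than Ethernet, the transfer alone would eat whatever rendering speedup the second PC provides.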