[GSoC 2017] 3D Hardware Acceleration - Weekly Report 4

OT: He is still around…

Yes, he’s still around, just very busy and with not much time to contribute.

Maybe @Vanne could describe the “Axel” he saw leaving the project, to clear up the confusion…

Sorry, I didn’t mean he left; I guess that was in regard to being paid full time for the job.

He was paid only for a few months in total, out of many years of contributions.

Acceleration was intentionally disabled. Modern CPUs are so fast that the 2D “acceleration” on most cards was actually slower, while not allowing antialiased drawing. So we just stopped using it because it was silly.

3D acceleration is a different thing, of course. I would also prefer something more “native”, using the existing accelerant interface. But this is a very big task. Linux needs a separate team of full-time devs for each supported device family, and we have maybe one or two part-time devs with some of the required knowledge. So let’s be realistic and see if we can reuse some already available code; we’ll figure out how to integrate it properly. This year’s GSoC attempt did not go very well; maybe next year we’ll get another student with a different approach to the problem?


Man, this guy should be paid full time permanently… :+1:

Just to clear up my confusion… are you talking about full-time hiring Axel Dörfler (was he the ‘Axel’ you saw leaving the project?), or the student that failed on his GSoC project?

If devs would only look to the symptoms and not the specific disease, I am quite sure a decent solution to the whole 3D acceleration issue could be found. But it requires thinking “outside the box of conventional wisdom”. Something that would seem utterly ridiculous on its face, but I believe it would work. But until it’s tried, we’ll never know.

Simple description: make your own video card.

But the “how” is part of my “Crazy Concept Ventures”. Something that, in order to implement, requires closed-source forking, development of a hardware platform (accomplished by building said bits of actual unique hardware), people assigned/devoted to individual code/driver development (and paid to do so), etc.

I have $22K at my disposal to see this project started. And more coming in from the rise of Bitcoin ($5K profit from my investment, at the moment). What can I get going for that amount of money?

Sometimes realizing a vision is worth more than the risk of needing that money sometime. I’m willing to take that risk, if my vision is feasible. Don’t even know if it is. But I’m willing to give it a shot, if anyone “crazy enough” is willing to follow. If Haiku, as an actual platform, could become something people look at and say, “Whoa… how’d they do that?!?”, it will have been worth it… and then they buy! Because what Haiku will do, on that platform, has NEVER been done before, because it CAN’T be done on an existing OS (Windows, MacOS, Linux, etc.) without breaking everything. It must be built-in, as the very foundation of the OS. Not a patch-over.

Time to get “crazy”? Or you can continue to complain about non-existent 3D hardware acceleration… amongst other issues on Haiku. :smiley:


I’m sorry but I’m actually laughing out loud at this.

$22K will pay a single dev for a few months. Without any hardware or anything. Making your own hardware needs millions of dollars of investment before you can get anything out of it. Also, no one in the Haiku team has the skills required to design hardware. And, we would STILL need to write drivers for it, so it doesn’t even solve the issue.

People doing video hardware know what they are doing, and we would not do it any better. Some of them do provide us with specifications for their hardware and/or have a support line where we can ask for help.

That being said, someone already tried it. And failed. https://en.wikipedia.org/wiki/Open_Graphics_Project


One alternative to Mesa is to go with an optimized 3D renderer like Intel’s Embree. It is open source code that provides fast 3D ray tracing. While ray tracing does not provide general OpenGL compatibility, it does offer an alternative way of doing very fast 3D rendering. In fact, some games were written using ray tracing as their 3D basis. The Embree code is not hardware specific, but it does require a porting effort: https://embree.github.io/renderer.html
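To give a rough idea of what software ray tracing actually computes (this is not Embree’s API, just a hypothetical minimal ray/sphere intersection test, the basic primitive query such a renderer performs millions of times per frame):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest intersection
    with a sphere, or None if the ray misses. Vectors are (x, y, z) tuples."""
    # Vector from sphere center to ray origin
    oc = tuple(o - c for o, c in zip(origin, center))
    # Quadratic coefficients of |origin + t*direction - center|^2 = radius^2
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# A ray shot down the z axis hits a unit sphere centered 5 units away.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

Embree’s whole point is doing this kind of query very fast on the CPU (with SIMD and acceleration structures), which is why it can be competitive without any GPU involvement.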

This could be useful for games, assuming they are written to use it. But not so useful for generic things like desktop compositing, or accelerating rendering of web pages. There we don’t really have a reason to go with something else than OpenGL.

That is correct. Software ray tracing does not provide OpenGL acceleration. But it is an alternative to Mesa, which has seen little activity and appears to be fairly hard to work on. Porting Embree to Haiku could provide some great demonstrations of real-time 3D in software, and it could inspire someone to work on Mesa. The effort to port Embree could be a week or two of work. It requires knowledge of CMake, C++, and system internals for the low-level system include files.

We already have Mesa and software 3D using it. And it even runs some (old, simple) games already! For me that is a better source of inspiration. I even wrote a recipe for TinyGL to see if it would do better (probably not, but you never know).

Given the fact that no one has worked on Mesa for a long time, maybe some additional demos and inspiration would also be beneficial.

No one has worked on it recently, because it works and there is no need to focus on it at the moment. So devs work on more urgent issues.

Also, it was not that long ago that kallisti5 updated it and made current upstream sources work with Haiku. So you can’t say this is an abandoned project.

Ok, it’s not an abandoned project, and although it is a cool project which I completely support, it does not yet serve as a substitute for hardware-accelerated 3D graphics, which is really what everyone wants. From what I understand, it will require a large amount of work to bring in updates that will improve speed. Would you agree with that? More related developments in 3D, such as real-time ray tracing, would draw attention to the fact that current CPUs can support many kinds of real-time 3D graphics. Right now the only interest I see is in this and one other thread. And that is only discussion.

Raytracing is unrelated to what we need for 3D acceleration. It’s an interesting thing on its own, but unrelated.

Getting hardware acceleration working is a matter of implementing lots of low-level things in the driver (sending commands to the hardware using hardware ring buffers, DMA, …), and then making this accessible from userland, either through our existing accelerant interface or through something more DRI/DRM-like. And then finally, having Mesa use this new interface to let the hardware do 3D. It is indeed a lot of work, but it does not involve anything 3D-ish (polygons, meshes, vector math, etc.). It’s just hardware driver development, for quite complex hardware. Still, having an up-to-date Mesa is one of the many needed steps on the long road to accelerated 3D.
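To make the ring-buffer part of that concrete, here is a toy sketch (hypothetical names, and in Python purely for readability — a real driver does this in kernel memory, with the read/write pointers living in hardware registers and the commands fetched by the GPU via DMA):

```python
class CommandRing:
    """Toy model of a GPU command ring buffer: the driver appends commands
    at the write pointer, the hardware consumes them at the read pointer."""

    def __init__(self, size):
        self.buf = [None] * size
        self.read = 0   # advanced by the "hardware" as it consumes commands
        self.write = 0  # advanced by the driver as it submits commands

    def space_left(self):
        return len(self.buf) - (self.write - self.read)

    def submit(self, cmd):
        if self.space_left() == 0:
            # Real drivers wait on an interrupt/fence here instead of failing.
            raise BufferError("ring full: wait for the GPU to catch up")
        self.buf[self.write % len(self.buf)] = cmd
        self.write += 1  # real hardware: write the new pointer to a register

    def hardware_consume(self):
        """Simulate the GPU fetching and executing one command."""
        if self.read == self.write:
            return None  # ring empty, GPU idle
        cmd = self.buf[self.read % len(self.buf)]
        self.read += 1
        return cmd

ring = CommandRing(4)
ring.submit("DRAW_TRIANGLES")
ring.submit("FLUSH_CACHES")
print(ring.hardware_consume())  # → DRAW_TRIANGLES
```

The hard part on real hardware is everything around this loop: setting up the DMA mappings, synchronizing with interrupts and fences, and doing it per device family — which is why it is driver work rather than 3D math.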

You are right anyway, enough discussion, back to code everyone! :slight_smile:


I mentioned it before: the devs at MorphOS got TinyGL to work nicely, and it’s hardware-accelerated 3D… it’s actually very fast indeed. Sure, it’s on limited hardware, but it works rather well. It’s actually a modified version of TinyGL.
As was said by many in this thread, you can’t cater to everyone, so maybe a limited set of chipsets should initially be discussed?

Wait, TinyGL counts as hardware accelerated now? It’s a pure software rendering solution. Or maybe we are not talking about the same thing?

And yes, of course drivers will come one at a time. Starting with the one whoever works on it has hardware for.