Maybe it's more important to have support for hardware in general than a half-finished web browser, when there are a lot of browsers that could be ported too… Just a matter of priority.
Useless personal comment. Please save your comments for writing something that's of use to someone. I'd imagine having an ATI or NVidia driver for hardware 2D and 3D would go a long way.
Unless you’re planning to become a contributor and write a GPU driver for Haiku yourself, that comment is not useful.
Sounds typical when the wishlist doesn't fit the project goals. I'm just sorry your 15-year-old expectations can't be fulfilled as of today…
We do have 2D hardware support, but these days doing it all on the CPU is in fact faster.
But 3D support, and more generally full-scale GPU support, is another story.
These days GPUs are complex parallel computing units, and supporting them is far harder than supporting a CPU platform. Each GPU generation, sometimes even each model within a generation, can have enough differences to require distinct code.
And that goes for each manufacturer (mainly Intel, NVidia and AMD these days).
Not even the Linux devs do that alone. Actually, the support is largely written by Intel, AMD and NVidia themselves.
So, nobody could actually expect the few Haiku core devs to do it alone.
We may or may not find a way to build some compatibility layer to reuse what the Intel, NVidia and AMD guys do, but we won't ever be able to do it all by ourselves.
Sure, it would be great if we could. But we won't, not without more help.
As an idea, instead of DRM, how about looking at Dano for inspiration, and maybe considering BDirectGLWindow? It hooked directly into a graphics card accelerant to give 3D acceleration, though only to the direct window. Not to mention, it allowed the DirectGLWindow to be on a secondary monitor. Of course, this is just an idea for the future.
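To make the hook idea concrete: Haiku accelerants really do export a single `get_accelerant_hook(feature, data)` entry point that returns function pointers. Here is a self-contained sketch of how a Dano-style direct window could have asked an accelerant for 3D entry points. The `B_3D_*` feature codes and hook names below are hypothetical, made up for illustration; only the lookup pattern itself matches the real accelerant interface.

```c
#include <stddef.h>
#include <stdint.h>

/* Generic function-pointer type returned by the accelerant, as in the
 * real Haiku accelerant API. */
typedef void (*generic_hook)(void);

/* Hypothetical 3D feature codes.  Real accelerants define 2D features
 * such as screen-to-screen blits; these 3D ones do not exist. */
enum {
	B_3D_INIT_CONTEXT = 0x3000,
	B_3D_SWAP_BUFFERS = 0x3001
};

/* Stub implementations standing in for driver code. */
static void init_context(void) { /* set up a hardware 3D context */ }
static void swap_buffers(void) { /* flip front/back buffers */ }

/* The accelerant's single exported entry point: the caller asks for a
 * feature and gets back a function pointer, or NULL if unsupported. */
generic_hook get_accelerant_hook(uint32_t feature, void *data)
{
	(void)data;
	switch (feature) {
		case B_3D_INIT_CONTEXT: return init_context;
		case B_3D_SWAP_BUFFERS: return swap_buffers;
		default:                return NULL;  /* not supported */
	}
}
```

A direct window would query these hooks once at setup, then call them directly, bypassing app_server for the contents of that one window, which is what kept the Dano approach fast.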
CofE, that’s a great idea, at least for ideas on how this might start to function.
And it shows no 2D acceleration for any cards, except possibly the hd_radeon family. Of course it would be great if we had acceleration already, and as you said, there's no way it's going to happen out of nowhere.
There needs to be a framework in place to achieve this. Even MorphOS has a version of accelerated 3D, using a modified version of TinyGL. Sure, they only support limited hardware, but at least that would be a start.
I'm not saying that TinyGL or Mesa is the right way to go. All I am saying is that there needs to be a framework in place to implement accelerated 2D and 3D. Not doing so is like sticking your head in the sand, as sooner or later you will need it.
Not meaning to rub anyone the wrong way; the progress made by the contributors at Haiku is nothing short of amazing.
OT: I missed that. When, how and why? Is there a statement?
OT: He is still around…
Yes, he’s still around, just very busy and with not much time to contribute.
Maybe @Vanne could describe the “Axel” he saw leaving the project, to clear up the confusion…
Sorry, I didn't mean he left; I guess that was in regard to being paid full time on the job.
He was paid only for a few months in total, out of many years of contributions.
Acceleration was intentionally disabled. Modern CPUs are so fast that the 2D “acceleration” on most cards was actually slower, while not allowing antialiased drawing. So we just stopped using it because it was silly.
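To illustrate what "doing it all by CPU" means: a software renderer simply writes pixels into the frame buffer, as in the minimal sketch below (hypothetical code, not Haiku's actual AGG-based renderer, which is far more elaborate). On a modern CPU a loop like this is memory-bandwidth bound and easily beats the command-submission overhead of an old fixed-function 2D blitter, and unlike the blitter it is free to blend or antialias each pixel.

```c
#include <stdint.h>
#include <stddef.h>

/* Fill a solid rectangle into a 32-bit frame buffer, the software way.
 * fb     - pointer to pixel (0,0)
 * stride - frame buffer width in pixels
 * x, y   - top-left corner of the rectangle
 * w, h   - rectangle size in pixels
 * color  - 32-bit pixel value (e.g. ARGB) */
static void fill_rect(uint32_t *fb, int stride, int x, int y,
                      int w, int h, uint32_t color)
{
	for (int row = 0; row < h; row++) {
		uint32_t *p = fb + (size_t)(y + row) * stride + x;
		for (int col = 0; col < w; col++)
			p[col] = color;  /* could just as easily alpha-blend here */
	}
}
```

The point of the comment in the inner loop is exactly the antialiasing argument: a fixed-function 2D engine can only do the operations it was wired for, while the CPU can substitute arbitrary per-pixel math at no structural cost.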
3D acceleration is a different thing, of course. I would also prefer something more "native", using the existing accelerant interface. But this is a huge task. It takes Linux a separate team of full-time devs for each supported device family. And we have maybe one or two part-time devs with some of the required knowledge. So, let's be realistic and see if we can reuse some already available code. We'll see how to integrate it properly. This year's GSoC attempt did not go very well; maybe next year we'll get another student with a different approach to the problem?
Man, this guy should be paid full time permanently…
Just to clear up my confusion… are you talking about full-time hiring Axel Dörfler (was he the ‘Axel’ you saw leaving the project?), or the student that failed on his GSoC project?
If devs would only look to the symptoms and not the specific disease, I am quite sure a decent solution to the whole 3D acceleration issue could be found. But it requires thinking “outside the box of conventional wisdom”. Something that would seem utterly ridiculous on its face, but I believe it would work. But until it’s tried, we’ll never know.
Simple description: make your own video card.
But the “how” is part of my “Crazy Concept Ventures”. Something that, in order to implement, requires closed-source forking, development of a hardware platform (accomplished by building said bits of actual unique hardware), people assigned/devoted to individual code/driver development (and paid to do so), etc.
I have $22K at my disposal to see this project started. And more coming in from the rise of Bitcoin ($5K profit from my investment, at the moment). What can I get going for that amount of money?
Sometimes realizing a vision is worth more than the risk of needing that money sometime. I’m willing to take that risk, if my vision is feasible. Don’t even know if it is. But I’m willing to give it a shot, if anyone “crazy enough” is willing to follow. If Haiku, as an actual platform, could become something people look at and say, “Whoa… how’d they do that?!?”, it will have been worth it… and then they buy! Because what Haiku will do, on that platform, has NEVER been done before, because it CAN’T be done on an existing OS (Windows, MacOS, Linux, etc.) without breaking everything. It must be built-in, as the very foundation of the OS. Not a patch-over.
Time to get “crazy”? Or you can continue to complain about non-existent 3D hardware acceleration… amongst other issues on Haiku.
I’m sorry but I’m actually laughing out loud at this.
$22K will pay a single dev for a few months. Without any hardware or anything. Making your own hardware needs millions of dollars of investment before you can get anything out of it. Also, no one in the Haiku team has the skills required to design hardware. And, we would STILL need to write drivers for it, so it doesn’t even solve the issue.
People doing video hardware know what they are doing, and we would not do it any better. Some of them do provide us with specifications for their hardware and/or have a support line where we can ask for help.
That being said, someone already tried it. And failed. https://en.wikipedia.org/wiki/Open_Graphics_Project
One alternative to MESA is to go with an optimized 3D renderer like Intel’s Embree. It is open source code that provides fast 3D ray tracing. While ray tracing does not provide general OpenGL compatibility, it does provide an alternative way of very fast 3D rendering. In fact there were some games written using ray tracing as the 3D basis. The Embree code is not hardware specific but does require a porting effort. https://embree.github.io/renderer.html .
This could be useful for games, assuming they are written to use it. But it's not so useful for generic things like desktop compositing or accelerating the rendering of web pages. There we don't really have a reason to go with something other than OpenGL.