Well … I have a spec for how to at least attempt that, leveraging our FreeBSD compatibility layer and FreeBSD’s DRM drivers. But I’d need a man-month at least to attempt it in…
It might be well worth the effort, considering how even two man-months of you not doing other work could spark a lot of interest and further contributions if you manage to come up with a reasonable foundation and at least one good popular game working reasonably well most of the time.
Yeah, well, I have no idea if I’ll have an actual man-month of time to try and work on it, even after the release. We’ll see.
If anything along these lines gets done by Christmas, it would still be an amazing year for Haiku. Not that there aren’t enough geeks excited about the upcoming Beta, but having something, even small, to build upon to bring 3D gaming to Haiku would make them go wild.
And for good reason: it’s always great to have options when you enjoy gaming. Too many people are stuck with Windows because of their favourite games, and not everyone finds GNU/Linux to their taste, so a more mature Haiku with a foundation for 3D applications and games would be quite welcome, and a good reason for other developers to get involved. Or so says the idealist in me.
Let us know if you want/need donations for that. If I knew someone was doing concrete work towards getting libdrm and other infrastructure for 3D acceleration ported, I’d throw a few man-hours’ worth at that. Especially if AMD graphics aren’t ignored in favour of Intel etc…
So, I’m guessing we need the additions to the FreeBSD layer (linuxkpi), work to port libdrm itself, libpciaccess, etc.? One thing I noticed is that FreeBSD imported the libdrm code and seems to have written their own build system for it so it integrates well… would Haiku do the same and make Jam files for libdrm? Does accelerated 3D require implementing any of Linux’s KMS infrastructure?
Yes, it’d need linuxkpi; there is already a libpciaccess port and a Mesa port, but it would need heavy modifications to get working with our kernel, and probably linuxkpi needs stuff our FreeBSD compatibility layer doesn’t have. Again, I just don’t know, I’d need to investigate.
After various attempts with GSoC and everything, I think I have a pretty clear idea of the challenges now.
The thing is, we already have our own drivers and interface for graphics cards. One approach is throwing them away and attempting to graft the Linux drivers onto our kernel instead. This will face a few challenges, because our kernel isn’t Linux and is not necessarily similar to it in all aspects. They have, for example, a much more complex way to manage memory (with various allocators for different purposes), whereas we get away with the universal create_area.
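To make the contrast concrete, here is a minimal sketch of what “the universal create_area” looks like on our side (the buffer name and size are made up for illustration; a kernel driver would use the kernel-side variant and pick flags to suit the hardware):

```cpp
// Minimal illustration of Haiku's one-size-fits-all memory primitive.
// The area name and size are arbitrary examples, not from any real driver.
#include <OS.h>
#include <stdio.h>
#include <string.h>

int main()
{
	void* buffer = NULL;
	// One call covers most needs: it reserves address space, commits
	// (and here locks) the pages, and sets the protection flags.
	area_id area = create_area("gpu scratch buffer", &buffer,
		B_ANY_ADDRESS, 1 * 1024 * 1024, B_FULL_LOCK,
		B_READ_AREA | B_WRITE_AREA);
	if (area < 0) {
		fprintf(stderr, "create_area failed: %s\n", strerror(area));
		return 1;
	}

	printf("area %" B_PRId32 " mapped at %p\n", area, buffer);
	delete_area(area);
	return 0;
}
```

On Linux, an equivalent buffer would instead go through one of several allocators (GEM/TTM and friends) depending on whether it needs to be CPU-visible, GPU-local, physically contiguous, and so on.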
In that direction, we can either attempt a port of the Linux drivers ourselves (hamishm started a Linux compatibility layer with that goal, it’s archived on Gerrit now), or attempt to reuse code from FreeBSD and our FreeBSD compatibility layer (but somehow I think stacking a compatibility layer on top of another may not be the best choice).
Once we have the drivers ported, we need to figure out a way to plug app_server into them. Probably a whole new protocol between accelerants and drivers.
The other option is keeping the existing accelerant<>driver interface and extending it with 3D support (most likely in a way compatible with Linux’s DRI, by using the same ioctls). This means more work on the drivers but fewer changes in the upper layers of the stack for us.
KMS is somewhat orthogonal; it’s just about whether the code to set a video mode is kernel-side (in the Linux monolithic kernel tradition) or done in userland (in our case, it is done by app_server). The KMS way has the advantage that you don’t need to start app_server to get the native video mode (so, a native-resolution splash screen). It has the downside that more code is moved into the kernel, running without userland protection nets and debugging facilities.
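For reference, here is a rough sketch of the userland path as it exists today: app_server loads the accelerant add-on and calls its hooks to set a mode. The helper name is made up, error handling is trimmed, and it assumes the accelerant was already initialized via B_INIT_ACCELERANT:

```cpp
// Rough sketch of how a userland client (app_server in practice) drives
// mode setting through an accelerant rather than a kernel KMS ioctl.
// Illustrative only; real code lives in app_server's AccelerantHWInterface.
#include <Accelerant.h>
#include <image.h>

status_t
set_mode_through_accelerant(image_id accelerantImage, display_mode& mode)
{
	// Every accelerant exports a single entry point that hands back
	// function pointers ("hooks") for each supported feature.
	GetAccelerantHook getHook;
	if (get_image_symbol(accelerantImage, B_ACCELERANT_ENTRY_POINT,
			B_SYMBOL_TYPE_ANY, (void**)&getHook) != B_OK)
		return B_ERROR;

	set_display_mode setDisplayMode
		= (set_display_mode)getHook(B_SET_DISPLAY_MODE, NULL);
	if (setDisplayMode == NULL)
		return B_UNSUPPORTED;

	// The mode switch itself happens in userland; the kernel driver only
	// gets involved through whatever ioctls the accelerant issues.
	return setDisplayMode(&mode);
}
```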
I would prefer that we reach what was promised by Gallium3D: very low-level 3D drivers (similar to our current design), with all the heavy lifting done on the userland side using a mix of Mesa/Gallium drivers (for 3D) and accelerants (for modesetting, backlight control, etc.). Probably not the easiest solution to get running, but I would say the best choice in the long term. It would also benefit the *BSDs and others to have something like that, I guess.
FreeBSD’s linuxkpi code is pretty clean and easy to work with. We shouldn’t incur a performance penalty here, as all the most performance-intensive functions (bus space access, etc.) are inlined anyway, so it’s just another layer of inlining the compiler has to do.
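As an illustration of why that layering is essentially free (this is just the shape of such a wrapper, not FreeBSD’s actual code): the Linux-style accessors end up as static inline functions over the host’s own primitives, so the compiler flattens them into direct loads and stores:

```cpp
// Illustrative only: the shape of a linuxkpi-style MMIO wrapper.
// A Linux driver keeps calling readl()/writel(); on the host these
// compile down to its native accessors with no call overhead.
#include <stdint.h>

static inline uint32_t
readl(const volatile void* addr)
{
	// On the host this would forward to its native bus-space read;
	// a plain volatile load stands in for it here.
	return *(const volatile uint32_t*)addr;
}

static inline void
writel(uint32_t value, volatile void* addr)
{
	*(volatile uint32_t*)addr = value;
}
```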
No, not really? Each driver/accelerant pair already has its own custom ioctl set besides the regular ones. Usually this is just “get private data” or the like (as intel_extreme and radeon_hd do), but there’s no necessity it be “just that.”
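For anyone unfamiliar with how those private ioctls look, here is a hedged sketch with made-up names (the real intel_extreme and radeon_hd headers differ in the details): the kernel driver and its accelerant share a small header defining opcodes past B_DEVICE_OP_CODES_END, and the accelerant just issues ioctl() on the device it opened.

```cpp
// Hypothetical private ioctl shared between a kernel driver and its
// accelerant; names and fields are invented for illustration.
#include <OS.h>
#include <Drivers.h>
#include <unistd.h>
#include <sys/ioctl.h>

// Shared header: private opcodes live past the reserved range.
enum {
	MYGPU_GET_PRIVATE_DATA = B_DEVICE_OP_CODES_END + 1,
	MYGPU_EXECBUFFER	// where a DRI-style command path could go
};

struct mygpu_private_data {
	uint32 magic;
	area_id shared_info_area;	// area the accelerant clones and maps
};

// Accelerant side: ask the driver for its shared info.
status_t
get_private_data(int device, mygpu_private_data& data)
{
	data.magic = 'mgpu';
	if (ioctl(device, MYGPU_GET_PRIVATE_DATA, &data, sizeof(data)) != 0)
		return B_ERROR;
	return B_OK;
}
```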
Wayland compositors often have a split similar to what we do in app_server, between the server itself and the accelerants, though they typically call these backends. For instance, see what wlroots does here. So we can just write a drm.accelerant
that uses libdrm (which in turn ioctls the graphics devices directly) as an app_server backend.
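As a very rough sketch of the first thing such a drm accelerant backend would do, here is plain libdrm probing a device and enumerating its connectors (device path and error handling are simplified, and nothing here is Haiku-specific yet):

```cpp
// Minimal libdrm sketch: open a DRM node and list its connectors, the
// kind of probing a hypothetical drm accelerant would do before handing
// display information to app_server.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main()
{
	int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	drmModeRes* resources = drmModeGetResources(fd);
	if (resources == NULL) {
		fprintf(stderr, "not a modesetting-capable DRM device\n");
		close(fd);
		return 1;
	}

	for (int i = 0; i < resources->count_connectors; i++) {
		drmModeConnector* connector
			= drmModeGetConnector(fd, resources->connectors[i]);
		if (connector == NULL)
			continue;
		printf("connector %u: %s, %d modes\n", connector->connector_id,
			connector->connection == DRM_MODE_CONNECTED
				? "connected" : "disconnected",
			connector->count_modes);
		drmModeFreeConnector(connector);
	}

	drmModeFreeResources(resources);
	close(fd);
	return 0;
}
```

Mode setting and buffer management would then go through the same fd, so the accelerant stays a thin translation layer between app_server’s expectations and the DRM ioctls.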
So this model is very much like the way ethernet drivers ported from FreeBSD work on Haiku: They function almost like native ethernet drivers, but with more ioctls that you can use if you know about them. We can do the same thing for graphics drivers, which will allow us to keep our old modesetting drivers alongside these new DRM ones (and continue writing them, if we wanted), while also gaining 3D acceleration in a way that does not place a massive burden on us.
Gallium discovered that you really need the massive memory management stuff in the kernel; there’s just no way around that, which is why we are where we are now, with DRM et al. being these huge hulking pieces of software that are usually over a hundred thousand lines of driver code for one chipset family. It’s just a mess.
FreeBSD, OpenBSD, and DragonFlyBSD all use the linuxkpi wrappers and are getting close to Linux levels of performance, so this model works for them, and they would likely be uninterested in a “lite” driver model. Maybe if we had more manpower we could experiment more, but we don’t, so we should take the more-traveled path on this one.
subbed for goodness…
I have experience with video drivers and will have time in the winter months to collaborate; at least it looks right now like I’ll be having some time off. I wrote a driver for Microstar International many years ago, for laptops with ATI hardware. It’s been a while since I got into that level of code, but it’s not unlike riding a bike: a few crashes and scrapes and you get your balance again… I used to run BeOS way back and am a big open-source fan. Linux is my preferred system, but things can change again. lol
Christopher
Idea:
- Can we now decommission Mesa 7.9.2, even at the cost of breakage on x86_gcc2? Focus on mesa-swpipe usage.
Only if you can get mesa 10+ to build with gcc2.
Since we are at R1B1 and Mesa > 7.9.2 doesn’t support GCC2 anymore, skip it.
FOSS HW-accelerated drivers have dwindled over the years. A well-crafted BSD/Linux compatibility layer is better overall for supporting legacy hardware - if one doesn’t already exist for Haiku.
Remember that the gcc2 side of the OS is there only to provide compatibility with BeOS apps. We don’t need a super modern Mesa there, and I don’t expect any new apps to be written with gcc2 compatibility in mind anymore (and I wouldn’t even be surprised if people start ignoring 32 bit support completely in the next few years).
So, consider this Mesa as a legacy thing, here to satisfy the needs of BeOS apps. New developments can fully ignore it. And I don’t mind shipping only gcc7 versions of GLInfo, Haiku3D and GLTeapot even on 32bit images, to show off these few extra FPS (or spin even more teapots).
Is it possible to apply to Mesa the same hack that was used to build the new ffmpeg version?
What was that, a GCC2-compiled OpenGL wrapper or something? That probably wouldn’t be too hard?
No, it’s correct to keep certain things as-is for Beta1. Invest in 3D HW-accelerated driver development infrastructure for Beta2. Intel/Radeon provide enough open GPU docs for HW support (see: https://gpuopen.com).
I’ve ported AMDgpu and other drivers elsewhere, so whether it’ll work on Haiku just depends on the driver framework provided…
Not as easily, because Mesa has C++ APIs.
But really, what’s the point? Mesa 7 is meant only for BeOS apps. None of them needs any modern GL feature. So Mesa 7 fits the bill perfectly. New apps should be built with gcc7. Why would one spend time hacking on modern Mesa for gcc2 apps that won’t ever use the features anyway?
When I posted about decommissioning Mesa 7, this was in reference to porting software using GCC7 on Haiku x86 with Mesa > 7. This was/is more about using and supporting 3D apps like Blender, OpenSceneGraph, Embree, 3DMov, and others on Haiku x86 (versus Haiku x86_64).
My other comment deals with the accuracy of 3D rendering between Haiku’s mesa_swrast and mesa_swpipe drivers. mesa_swpipe is faster, but has a few problems in 3D rendering not experienced when using the mesa_swrast driver. So this is not so much a Mesa 7 versus Mesa > 7 scenario on Haiku x86, but rather about migrating between those two software 3D drivers: weighing a minor software-rendering performance gain against equivalent (or better) 3D rendering quality/accuracy.
Sorry about any previous confusion. If you want to ‘experiment’, migrate just the core 3D demos/screensavers on Haiku x86 to use the mesa_swpipe driver; Haiku3D is a start.
This will provide Haiku with a nice 3D driver reference point when designing/porting other 3D drivers and infrastructure for 3D HW acceleration purposes…
To pulkomandy’s point, Haiku already has Mesa 17+ to support all the applications you mentioned under the x86 arch. In that sense, Mesa 7 is already decommissioned for GCC7. Mesa 7 is only used to support BeOS compatibility, as it has to compile under gcc2. You can have them both installed on the same system.
Please note that 32-bit Haiku currently has two architectures, gcc2 and gcc7, which are called x86_gcc2 and x86 respectively. Haiku 64-bit is gcc7-only anyway, as BeOS never had 64-bit applications.
Only the hardware drivers provide > GL 3.3 and Vulkan support anyway, so one way or another you need the hardware drivers for modern applications.
Also I seem to remember kallisti5 doing something to allow selecting between LLVMpipe and swrast etc… ? Or perhaps that fell by the wayside. There is also the SWR driver… that may be even faster than LLVMpipe.