This makes sure no progress is made. If someone is interested in fixing the problem and your answer is “nah, just run Linux and wait for someone else to do it” we will not get anywhere.
I already mentioned hamishm’s work on a Linux compatibility layer in this thread. It’s available as a patch series on Gerrit. I suggest starting from that, and trying to use it to get one of the Linux drivers to build. Hamish’s work shows the kind of things that need to be done, but the Linux kernel has a lot of internal APIs, and we would need to implement most of them to get a graphics driver working.
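To make the compat-layer idea concrete, here is a hedged sketch (not the actual patch series) of how such a layer works: Linux kernel APIs are re-implemented in terms of the host’s primitives, so driver sources compile unchanged. A real layer would target Haiku kernel services; the plain C stdlib stands in here.

```c
/* Hypothetical sketch of a Linux compatibility shim (not hamishm's
 * actual code): Linux kernel allocation APIs re-implemented on top of
 * host primitives, so Linux driver code compiles unmodified. */
#include <stdlib.h>
#include <string.h>

typedef unsigned gfp_t;
#define GFP_KERNEL ((gfp_t)0)   /* allocation flags accepted but ignored here */

static inline void *kmalloc(size_t size, gfp_t flags)
{
    (void)flags;                 /* a real shim would honor GFP flags */
    return malloc(size);
}

static inline void *kzalloc(size_t size, gfp_t flags)
{
    void *p = kmalloc(size, flags);
    if (p)
        memset(p, 0, size);      /* kzalloc returns zeroed memory */
    return p;
}

static inline void kfree(const void *p)
{
    free((void *)p);
}
```

Driver code calling kzalloc()/kfree() then builds as-is; the hard part is that graphics drivers lean on hundreds of such internal APIs (locking, workqueues, DMA, interrupts), not just allocators.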
Or, we may wait a little to see if Android gets somewhere with providing a stable API between kernel and drivers. This could reduce our work to implementing just that API.
In the name of progress, I have been using Godot on Linux as a working reference for what should happen in Haiku. I have yet to get Godot compiling on FreeBSD or OpenBSD; I’d prefer those for this purpose because I find Linux abominable, but I’ll use what works as a reference. I’ve even run into problems with Godot on Windows which prevent me from using that as a reference; Linux is only slightly less abominable than Windows. I’ve been learning a lot. My only wish is that there were InRealLife::more_hours_in_a_day(). Is there anybody available with the know-how to implement this functionality?
I have no qualms with angering the gods. It has been my life-long mission and birthright to do so. My biggest reaction of disgust is towards secretiveness in an otherwise open project. Can you articulate a practical reason for why you are saying what you are saying? You seem to be dancing around the topic and protecting your claimed works from having more eyes on the issue. Why is that?
I have made a very simple diagram describing my current understanding of the components of the graphics stack that are interchangeable between Linux and Haiku. As you can see, for certain cards which exploit the Gallium design architecture, it may be possible for a few dedicated individuals to port an accelerated driver from Linux to Haiku.
From what I understand, Hamish (the kiddo working on the GSoC project?) was working on writing the “Hardware agnostic, OS specific code” in the hexagon second from bottom in the above diagram.
(Laugh if you want, but please give feedback)
EDIT: it should be possible to write a converter, applied at compile time, that translates Linux’s system calls to Haiku’s system calls, which means there is theoretically no performance penalty. It would be a Haiku-native driver.
Also, there is the issue of interfacing that driver with Haiku. How? Can we rewrite the app_server to take advantage of the new driver or is that stupid?
Something like your drawing is very good. I look at it like this for Haiku: Hardware → Kernel (DRM) → Driver (ICD) → Accelerant → app_server
You can review video driver architecture breakdowns from the X.org point of view and from AMDgpu-pro development, as well as the Haiku version for video driver development on the wiki (I think it was based on the BeOS book version). You can add a breakout layer for the kernel side (like DRM) versus the userland side, to help visualize where things lay out, more so for development purposes and better collaboration.
Again, I think the wiki has this recent info.
Note:
The Matrox driver was used as an example in the wiki.
With Mesa 17.1.10 on Haiku, setting “export MESA_GL_VERSION_OVERRIDE=4.5COMPAT” will promote the OpenGL 4.5/GLSL 3.30 compatibility-mode features of Mesa. Normally, it defaults to OpenGL 3.0/GLSL 1.30 under Haiku x86 (gcc7) and Haiku x86_64. You can test OpenGL 4.5/GLSL 4.50, but with the expectation that not everything will work (certain GL 4.x extensions may not work properly or at all). Check the Mesa matrix website for which extensions are expected to work with a given driver.
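For reference, the override is just an environment variable: set it in the shell (or a launch script) before starting the GL application, and Mesa picks it up at context creation.

```shell
# Promote Mesa's compatibility-profile features for apps started
# from this shell session (value from the post above):
export MESA_GL_VERSION_OVERRIDE=4.5COMPAT

# Verify it is set before launching a GL application:
echo "$MESA_GL_VERSION_OVERRIDE"
```

Note this only changes what version Mesa advertises; it does not add hardware capability, which is why some GL 4.x extensions may still fail.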
System calls are for userland applications. Drivers run in the kernel. We need to implement the Linux API in some way, we can’t just “translate system calls.”
Thanks for the reply, Waddlesplash, I was unaware of the difference. However, my idea was to implement the driver as a userspace program. This would hopefully make it possible to port X and Wayland applications by copying only what is absolutely necessary to get accelerated graphics, while the app_server (or some hypothetical acceleration server) does what DRM does, which I believe is to provide structured, safe, direct access to graphics hardware, without the possibility of crashes from attempted simultaneous execution of code in the command queue.
Thinking about it in retrospect, it seems kind of dumb.
Mesa already largely is a userland driver; everything in the kernel is there for a reason, be it performance, security, flicker-free boot, etc. Doing that stuff in a different way than Linux just means your hypothetical driver would likely see very few updates in the future from other developers, especially those outside the Haiku project.
You really should get a better understanding of how all this works before proceeding. A lot of what you are saying has holes in it. This is probably a good place to start for understanding Linux’s graphics drivers https://dri.freedesktop.org/docs/drm/gpu/index.html
This is more or less what graphics looks like on Linux today. libdrm and the kernel DRM side of things are mostly what is missing from Haiku. It’s also worth noting that there is no such thing as a 2D accelerator in a modern graphics card, so making something like Glamor for Haiku’s app_server might be useful; then again, maybe not, as Haiku’s API may not lend itself to that.
The GSoC student was Vivek Roy, and unfortunately you can mostly forget about his work. His mentor disappeared after about a month, and all he did was copy-pasting some header files from Linux without much clue of where he was going. Also, this was not archived properly, so we don’t even have his work as a patch we can apply.
Hamish was also a GSoC student, he worked on porting Java to Haiku back in 2012, but also contributed to other things in Haiku, including what I think is currently the most advanced effort for porting 3D drivers to Haiku.
This is not true. The userland side of things is just the OpenGL state machine and shader compiler. The kernel-land side of things does memory management, ring buffer / command queue control, DMA, etc. … it’s why the kernel drivers are pretty large.
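To illustrate one of those kernel-side jobs: command submission is typically a ring buffer that the CPU fills and the GPU drains, with head/tail pointers the driver and hardware keep in sync. A toy, hardware-free sketch of the bookkeeping (hypothetical names; on real hardware the pointers live in MMIO registers):

```c
/* Toy model of a GPU command ring: the kernel driver writes packets at
 * `tail` and the "hardware" consumes from `head`. Hypothetical and
 * simplified, for illustration only. */
#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE 256u           /* entries; power of two for cheap wrap */

struct ring {
    uint32_t buf[RING_SIZE];
    uint32_t head;               /* next entry hardware will read */
    uint32_t tail;               /* next entry driver will write */
};

static bool ring_full(const struct ring *r)
{
    return ((r->tail + 1) % RING_SIZE) == r->head;
}

/* Driver side: queue one command word, fail if the GPU is behind. */
static bool ring_emit(struct ring *r, uint32_t cmd)
{
    if (ring_full(r))
        return false;
    r->buf[r->tail] = cmd;
    r->tail = (r->tail + 1) % RING_SIZE;
    /* a real driver would now write r->tail to the hardware's tail register */
    return true;
}

/* "Hardware" side: consume one command word, if any. */
static bool ring_consume(struct ring *r, uint32_t *out)
{
    if (r->head == r->tail)
        return false;            /* ring empty */
    *out = r->buf[r->head];
    r->head = (r->head + 1) % RING_SIZE;
    return true;
}
```

The kernel owns this structure precisely so that two userland clients cannot interleave half-written packets, which is the “safe, direct access” role described earlier in the thread.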
Yeah, it is true… it’s described this way everywhere it is documented. And as I said, the parts that need to be in the kernel for performance are in the kernel. Mesa describes itself as a userland driver.
Not sure what you’re on about with the state trackers and compiler not being part of the driver… they literally target only the GPU hardware, as the userland side of the driver. Without the userland side of Mesa you certainly don’t have a 3D driver at all.
Mesa is the open source implementation of Khronos’s API specifications. It is not a driver. Also, wouldn’t it be more correct to say “user space” and “kernel space” components of the driver?
Also, I looked at Hamish’s work. I shot him an email with a few questions; hopefully he gets back to me soon. Aside from porting libDRM, does anyone have a strategy for implementing hardware acceleration? Having libDRM is great, and it allows easier porting of applications that are designed to interface with it, but we would have to rewrite Haiku’s applications to interface with libDRM. It’s a poo poo sammich either way.
EDIT: No ill will towards anyone. I just get antsy about vocabulary, because technical stuff gets confusing fast.
Uh, he’s probably pretty busy. Developers generally don’t like it when you bother them privately about code, especially when they haven’t touched said code for years. If he’s around, he’d reply to the public mailing lists; if not, then one of the other developers can (probably myself).
libdrm is the interface between userland and the graphics drivers. Once it’s ported, the Mesa components that use it to talk to graphics hardware will then function (well, probably with a bit more work to communicate with our app_server.) The rest of the OpenGL stack is ported as we have Mesa LLVMpipe working on it already.
So … do you even understand how the OpenGL stack works on Linux/BSD/Android/etc.? Because to me, it looks like you have absolutely no idea what you are talking about here.
Not really. I’d like to eventually contribute to Haiku. If you have time, please educate me or point me toward some talks or books on the subject. I’m learning through practice and reading online material detailing how the Linux graphics stack and Haiku’s graphics stack function. Would Haiku’s applications not have to be rewritten to interface with graphics drivers through libDRM? Would you only rewrite app_server to interface those drivers through libDRM?
My understanding is that the higher-level APIs, such as Mesa, are what’s involved in application development. libDRM is just a middle manager between the API and the kernel. If you write an app to use the Mesa APIs, there shouldn’t be much change needed when the lower-level stuff changes.
No. That is not how libDRM nor how Haiku’s applications work. libDRM is an interface to the kernel-level drivers that is actually pretty similar to the Be accelerant model.
app_server does not need to be rewritten, it already has support for multiple backends and is completely hardware-agnostic; the accelerants take care of this. We would just write another accelerant for libdrm.
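For intuition, the accelerant model works roughly like this: app_server loads the accelerant as an add-on and asks it for one function pointer per operation. A simplified, self-contained sketch (hypothetical names and signatures; the real interface lives in Haiku’s Accelerant.h):

```c
/* Simplified model of the Be/Haiku accelerant interface: the accelerant
 * exports a single entry point that hands back a function pointer per
 * feature. A "libdrm accelerant" would implement these hooks by issuing
 * DRM ioctls instead of programming registers directly. Hypothetical
 * names; see Accelerant.h for the real definitions. */
#include <stddef.h>
#include <stdint.h>

typedef enum {
    HOOK_INIT_ACCELERANT,
    HOOK_SET_DISPLAY_MODE,
} hook_id;

typedef int32_t (*hook_fn)(void *args);

static int32_t my_init(void *args)     { (void)args; return 0; }
static int32_t my_set_mode(void *args) { (void)args; return 0; }

/* The single symbol app_server resolves from the accelerant add-on. */
hook_fn get_accelerant_hook(hook_id id)
{
    switch (id) {
    case HOOK_INIT_ACCELERANT:  return my_init;
    case HOOK_SET_DISPLAY_MODE: return my_set_mode;
    default:                    return NULL;  /* feature not supported */
    }
}
```

Because app_server only ever sees this hook table, swapping a register-banging accelerant for one that forwards to a ported kernel driver would not require touching app_server itself.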
GPU-accelerated graphics stacks are very complex and require one to already essentially be an expert in low-level memory management, high-level graphics APIs, and everything in between: display server architecture, ioctl calls, pipeline state machines, buffer passing, …
So unless you already know a lot about kernel and userspace development and interaction, this is far too advanced to be a starting point. This is why I spent the past year learning kernel development through WiFi drivers instead of starting with this; it’s just too complicated to be anyone’s introduction to these concepts.
Mesa does not provide “Mesa APIs”, it implements OpenGL (and now Vulkan). But yes, we already provide all the APIs necessary; once we get GPU acceleration, the same APIs will simply perform much faster.
Thanks for answering all these inane questions, I really appreciate it. I’ll go away for a while now and read.
I’m not starting here; this is my main interest. I am beginning with some of the low-hanging fruit on Haiku’s to-do list, and after those patches get approved, I’ll inch towards accelerated graphics.