As mentioned above, I did some more testing, and from what I gathered the bug is not caused by the floating point texture itself. Disabling the floating point depth buffer and using a conventional depth buffer shows the same problem. By saving the depth buffer to a texture after glClearBuffer* is called, I can show that glClearBuffer* is plainly ignored in the MESA implementation. Obviously, if glClearBuffer is ignored, the previous frame’s depth buffer contents accumulate.
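For reference, here is a minimal sketch of the check I mean (illustrative code, not the engine’s; it assumes a current GL 3.3 context and an FBO `fbo` with a depth attachment already set up):

```cpp
// Sketch: verify whether glClearBufferfv actually clears the depth attachment.
// 'fbo' and its depth attachment are assumed to exist already.
GLfloat clearDepth = 1.0f;
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClearBufferfv(GL_DEPTH, 0, &clearDepth);   // the call Mesa appears to ignore

GLfloat sample = 0.0f;
glReadPixels(0, 0, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &sample);
if (sample != clearDepth) {
    // glClearBufferfv was ignored: stale depth from the previous frame survives.
    // (The legacy path, glClearDepth(1.0) + glClear(GL_DEPTH_BUFFER_BIT),
    // can be used as a comparison point.)
}
```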
Now, as for reporting this to MESA: that’s a waste of time. 17.1.10 is deprecated and end-of-life. MESA is already at 18.x, with 17.2.x being the bug-fix branch that superseded 17.1.10. If you insist on sticking with 17.1.10, then any attempt to fix this problem is FUTILE, since the fix would go into 17.2.x, NOT 17.1.x.
This decision would be fine with me, but then I’ll stop right here, right now, since it’s a waste of my time and everyone else’s to continue.
There is no “decision” to stick to a specific version; someone just has to provide an updated recipe and make sure it does not break everything. In our experience, Mesa can build but still fail at runtime when some callbacks are not set, so it needs careful testing before we ship a new release.
We will get to it eventually; it will just take some time unless someone steps up and does it.
OpenGL 3.3 is paired to GLSL version 150, not 330. The shader requires #version 150; otherwise it defaults to 110 (from memory). From version 4.2 onwards, GLSL and GL versions are in sync (420) (from memory). Most of the time, MESA & AMD adhere to the spec and will fail on faulty code, while nVidia will allow non-spec code to run.
I forgot to mention that my spec came from the ES 2.0 version of GLSL, where 150 is the first version to use texture() instead of texture2D(), allowing the same shader to be used for both GL 3.3 and ES 2.0. My mistake for not mentioning that.
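To illustrate the version pinning (a hypothetical shader, not taken from the engine): with `#version 150` the generic texture() overloads are available, while leaving the directive out makes the compiler assume 1.10, where only texture2D() exists.

```cpp
// Hypothetical fragment shader showing the point above. Under a strict
// (Mesa/AMD) compiler, a texture2D() call here would typically be rejected,
// while without the #version line the source is treated as GLSL 1.10.
const char* fragmentSrc = R"GLSL(
#version 150
uniform sampler2D diffuse;
in vec2 uv;
out vec4 fragColor;
void main() {
    fragColor = texture(diffuse, uv);
}
)GLSL";
```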
OK, well, if it is actually a Mesa bug, then it should be reported to Mesa so they can fix it.
We don’t “insist” on sticking to 17.1, it’s just that nobody has had time to upgrade to 17.2 or 18. I think someone started working on 18 but didn’t get it past the “crashes at start” stage yet.
In Mesa, you’ll currently get GLSL 3.30 and 1.40; GLSL 1.50 got tossed into the GLSL 3.30 port. Now, with OpenGL ES 3.0, Mesa 17.1 provides the OpenGL 4.5 compatible API, but the llvmpipe/softpipe driver is OpenGL 3.3 with GLSL 3.30 compatibility. Usually, you’ll fall into the ballpark of an OpenGL 3.0 / GLSL 1.30 driver scenario. For shader programming, we safely stick with GLSL 1.10 as a baseline unless using the OpenGL-licensed commercial drivers or Mesa’s 3D-accelerated driver(s) for AMD/Intel/Nvidia graphics products (Mesa 17.1 compatible expectations: OpenGL 4.5, GLSL 4.50, OpenGL ES 3.0, GLSL ES 3.00).
Remember, Haiku has zero 2D/3D graphics HW acceleration (software only rendering/CPU-based).
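If you want to check which of those scenarios a given machine actually lands in, querying the context is enough. A minimal sketch (assuming Mesa’s GL headers and a current context; the exact strings returned are driver-dependent):

```cpp
#include <cstdio>
#include <GL/gl.h>  // assumes Mesa's GL headers are available

// Print what the driver actually reports; on Haiku's software path,
// GL_RENDERER typically identifies llvmpipe.
void printDriverInfo() {
    printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
    printf("GLSL:        %s\n", (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION));
}
```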
That depends. I assume a specific llvmpipe version is coupled to each MESA release, so it might be that a newer MESA contains an llvmpipe without the depth-clear bug, for example.
I compiled a build of Mesa 18.3.0, but have not updated certain Haiku-specific patches. So we can compile Mesa 18.3 on Haiku. Have you tested your program with Mesa <=17.1 on Linux/BSD and seen the same rendering issue?
I can look further into it as time permits. If there is a bug report already, we’ll just use that.
Hardware acceleration and having the APIs working are two totally separate issues. Related? Yes. Mutual requisites? Not exactly. It’s the opposite of what you’re saying: hardware acceleration won’t do any good if the APIs don’t work, while getting the APIs working does not require acceleration. Technically, getting hardware acceleration is not the issue; working APIs are.
Development of graphics applications can happily(ish) progress without acceleration; that only requires the API to work. Conversely, working on acceleration without the APIs in place to test it against is fighting a losing battle.
This is good news. Thank you. Do you have a recipe available? I’d love to work on testing. I have a sneaking suspicion that the outdated Mesa 17.x is messing with the Godot port.
Oh, I’m not focused on the non-acceleration/acceleration issue(s). Mainly, I just need to know whether your program was tested on a Linux/BSD platform with a similar Mesa llvmpipe driver setup.
I never tested llvmpipe on Linux, since all my test systems had either AMD or nVidia chipsets supporting 4.5. I did, though, also test 17.1.0 on Linux, where I needed to override the reported GL version as well to get it working. The depth buffer bug did not show there, but the rendering had artifacts (stripes across the output, hinting at depth-buffer problems). On any other MESA version with AMD/nVidia I had no troubles. I tested with and without the float depth buffer, and the result is the same everywhere (working or not working).
It could very well be that llvmpipe is the actual problem. According to a feature comparison page, llvmpipe is only feature-complete up to 3.3 (hence Core 3.3). That would be fine for my engine: it uses up to 4.5 extensions if present but does not require them to run. 3.2 is the only real limitation, since that is where OpenGL switched to the current core profile model, and basing a modern game engine on <3.2 is stupid.
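For what it’s worth, that “use 4.5 features if present, require only 3.3” approach comes down to probing the extension list at startup. A minimal sketch of such probing (illustrative, not the engine’s actual code; GL_ARB_direct_state_access is just an example extension). The version-override experiment mentioned above can be reproduced with Mesa’s MESA_GL_VERSION_OVERRIDE / MESA_GLSL_VERSION_OVERRIDE environment variables, set before context creation.

```cpp
#include <cstring>

// Sketch: probe for an optional extension on a core profile (GL 3.0+)
// context. glGetStringi must come from your GL loader.
bool hasExtension(const char* name) {
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// Usage (hypothetical): take the 4.5 path only when available.
// if (hasExtension("GL_ARB_direct_state_access")) { /* DSA path */ }
// else                                            { /* 3.3 fallback */ }
```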
Recipe? Yes. I have a recipe and patches for Mesa 18.3.1 for Haiku, enough to get it to compile only. I found the code bugs causing some issues with core Haiku apps or specific Mesa demos misrendering on Haiku, but not on an alternate OS test platform (if a problem is consistent between various test platforms, then it’s a Mesa issue).
Most of the recent bugs are reported to kallisti5 on Trac or HaikuPorts. They currently deal with Haiku’s Haiku3D and GLInfo, and Mesa’s gearbox demo.
The other issues may revolve around the use of the geometry_shader4 and S3TC extensions. Otherwise, I’ve gotten almost every FOSS OpenGL demo/app/game that I’ve tested myself to work under Haiku’s Mesa implementation(s). Fingers crossed…
I looked at Godot 2.1.5 and 3.1-alpha5, which seem to work (per their developers). Still, Mesa implementation fixes resolve some of the ‘smaller’ rendering issues.