[3D graphics] Google's SwiftShader

SwiftShader is a high-performance CPU-based implementation of the Vulkan, OpenGL ES, and Direct3D 9 graphics APIs. Its goal is to provide hardware independence for advanced 3D graphics.

SwiftShader libraries can be built for Windows, Linux, and Mac OS X.
Android and Chrome (OS) build environments are also supported.


But first up (if it’s easy enough): hardware acceleration!

BTW this (CPU-based) could be an initial alternative to graphics-card-based acceleration…


LLVM-based multi-core software OpenGL rendering is already available in Mesa.


Well, we already have a similar CPU-based renderer: Mesa's LLVMpipe software renderer, which also uses the CPU and LLVM to render OpenGL graphics in software.

It appears that SwiftShader is no different here, except that it interestingly comes with a Vulkan software implementation; I’m not sure if it is faster than LLVMpipe. I remember another one, OpenSWR from Intel, but that one requires advanced CPU features such as AVX instructions and probably has Intel-specific quirks, which may exclude some older CPUs.

Unless there are benchmarks suggesting that SwiftShader is faster, or that it’s a better fit for Haiku, we would probably stick with LLVMpipe. But those who want ‘Vulkan’ or ‘Chromium’ on Haiku would probably go for porting SwiftShader anyway.

I’ve never seen any benchmark in which SWR was markedly faster than LLVMpipe… In any case, CPUs just don’t have enough memory bandwidth, enough vector units, or the right cache layout for it to work.

The benefit of AVX instructions is much greater speed: AVX provides at least a 2x speedup, and AVX2 provides additional speedup on top of that. The question I ask is: at what point is software-only rendering “good enough” for basic 3D uses such as gaming and CAD? By maintaining compatibility with older CPUs, we are limiting important use cases. Ideally, users of older CPUs could simply run the older package.

New laptops are coming out with 8 (!) cores. Surely a software pipeline would benefit from additional cores with AVX/AVX2?

Haiku doesn’t support AVX/AVX2/AVX-512 yet, AFAIK.

True, but with the added risk of excluding older CPUs if software like OpenSWR makes it a hard requirement. It could require AVX-512 (which OpenSWR supports), and that would exclude all AMD CPUs, which wouldn’t be good at all for a fallback software rasterizer. Obviously the preferred solution is GPU-based acceleration, but the context here is a “good enough” fallback software rasterizer.

While OpenSWR has this speed benefit, it won’t run on Haiku’s minimum hardware requirements. Unless we raise the minimum supported Intel CPU to Sandy Bridge and the minimum AMD CPU to Jaguar, I don’t see a benefit to a hard AVX requirement, and OpenSWR cannot be built without it. Perhaps this could change after R1, but I would say not now.

It would be worth looking at alternatives like SwiftShader and LLVMpipe (the latter of which optionally supports AVX) to see whether they’re a fit for Haiku as a fallback.

ooooh, interesting! I need to look at how the hybrid kernel “controls” these types of instructions. Maybe a fork is needed :wink:
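
For what the kernel has to “control”: supporting AVX mostly means enabling XSAVE and saving/restoring the YMM state on context switches. A rough userspace check (just a sketch, assuming x86 and the GCC/Clang `cpuid.h` helpers) would look like this:

```cpp
// Sketch only: detect whether AVX is usable, i.e. the CPU has it *and* the
// kernel has enabled XSAVE so YMM registers survive context switches.
#include <cpuid.h>
#include <cstdio>

static bool
os_supports_avx()
{
	unsigned eax, ebx, ecx, edx;
	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return false;

	if (!(ecx & bit_AVX) || !(ecx & bit_OSXSAVE))
		return false;	// CPU lacks AVX, or the kernel never enabled XSAVE

	// XCR0 bits 1 and 2 set => kernel saves/restores XMM and YMM state.
	unsigned lo, hi;
	__asm__ volatile("xgetbv" : "=a"(lo), "=d"(hi) : "c"(0));
	return (lo & 0x6) == 0x6;
}

int
main()
{
	printf("AVX usable: %s\n", os_supports_avx() ? "yes" : "no");
	return 0;
}
```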

In general, AVX and more cores only help slightly… Remember, CPU memory bandwidth has not gone up much in recent years: all you have is about 50 GB/s or so in an ideal case, and even with added cores, that isn’t enough for graphics rendering. Even the slower GPUs have well over 100 GB/s, a reasonably usable GPU is going to have 200+ GB/s, and a high-end one 1 TB/s.

And on top of that, a CPU’s cache is not designed for graphics work.

Investing effort into CPU fallback support is almost a fool’s errand. It should be there, yes, but don’t expect it to actually be usable for real work or anything more than 10+ year old games. It’s great to see things like Blender load, and maybe even do simple work in it, but you can’t expect it to perform.

Even AMD’s on-package GPUs top out at around 1.5 TFLOPS due to memory constraints… all the bandwidth is used up at that point. And that is with a GPU-optimized cache in front of the memory controller.
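
To put that ~50 GB/s budget in perspective with a rough back-of-the-envelope calculation (my own figures, so take them loosely): a single 1920×1080 32-bit framebuffer is about 8 MB, so merely writing it 60 times a second already costs roughly 0.5 GB/s. Add depth-buffer reads and writes, blending (which turns colour writes into read-modify-write), texture fetches, and an overdraw factor of 3–4×, and even a fairly simple scene can be pulling tens of GB/s — before counting vertex data, or the fact that the CPU has to run the shading itself out of the same memory.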

Which is not very reasonable, as our drivers (in particular the intel_extreme one, at least) aren’t even able to do modesetting on these later machines…

But what we can do is have multiple implementations of BGLView. As long as the API and ABI do not change, you can easily replace libGL with a version using another renderer. This is what rudolfc did when he worked on 3D acceleration for GeForce video cards one or two decades ago :slight_smile:
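
As a rough illustration of that approach (a sketch only, not rudolfc’s actual code): an application only ever subclasses BGLView, so as long as that class keeps its API/ABI, whichever libGL provides it can render with LLVMpipe, SwiftShader, or a hardware driver underneath without the app noticing:

```cpp
// Sketch: the app-side view of BGLView. The renderer behind libGL can be
// swapped out as long as this interface stays binary-compatible.
#include <GLView.h>
#include <Rect.h>
#include <GL/gl.h>

class DemoGLView : public BGLView {
public:
	DemoGLView(BRect frame)
		:
		BGLView(frame, "demo", B_FOLLOW_ALL, 0,
			BGL_RGB | BGL_DOUBLE | BGL_DEPTH)
	{
	}

	void
	Render()
	{
		LockGL();
		glClearColor(0.1f, 0.1f, 0.3f, 1.0f);
		glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
		// ...ordinary OpenGL calls go here, whatever the backend...
		SwapBuffers();
		UnlockGL();
	}
};
```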

There is also a port of MiniGL in HaikuPorts that can be used that way (but it has had very little testing, and is not suitable for anything modern).

Technically wouldn’t the proper way to do that be to implement EGL, waffle and libglvnd?

You’d have to add BGLView support to waffle I guess?

EGL would be a wrapper over the native BGLView and the Haiku window system, from what I understand.

Waffle might let us select which GL implementation, or even which API, we want to use — say, if someone decides to come up with an improved alternative to BGLView, etc.
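
To make the EGL idea concrete, the flow below is just the standard Khronos EGL sequence; the interesting part for Haiku would be what EGLNativeWindowType ends up mapping to (a BWindow/BGLView pointer is purely my assumption here):

```cpp
// Sketch: standard EGL window-system glue. On Haiku, the implementation of
// eglGetDisplay()/eglCreateWindowSurface() is what would sit on top of the
// native window system (and possibly BGLView) -- that mapping is the open
// question, not these calls.
#include <EGL/egl.h>

bool
InitGLContext(EGLNativeWindowType nativeWindow)
{
	EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
	if (display == EGL_NO_DISPLAY || !eglInitialize(display, NULL, NULL))
		return false;

	const EGLint attribs[] = {
		EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
		EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
		EGL_NONE
	};
	EGLConfig config;
	EGLint numConfigs;
	if (!eglChooseConfig(display, attribs, &config, 1, &numConfigs)
			|| numConfigs == 0)
		return false;

	// The window-system-specific part: wrap the native window in a surface.
	EGLSurface surface = eglCreateWindowSurface(display, config,
		nativeWindow, NULL);
	if (surface == EGL_NO_SURFACE)
		return false;

	eglBindAPI(EGL_OPENGL_API);
	EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT,
		NULL);
	return context != EGL_NO_CONTEXT
		&& eglMakeCurrent(display, surface, surface, context);
}
```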