Plans for 3D acceleration

So are you saying that, e.g., 3D rendering is done on the server side (the client in X terms) and the result is MP4-compressed and sent to the client (or server in X terms, meaning the machine that is displaying the result)? This would be ideal; otherwise rendering is done on the client machine, and bitmaps are the least of your problems because all the data and drawing commands have to be sent to the client, which is super slow. On X there is a solution for this particular issue (3D) in VirtualGL with TurboVNC or Xpra, though (see for example this comparison video running without and then with VirtualGL: https://www.youtube.com/watch?v=51srRwy1PzY).

VirtualGL can save the day if you use remote desktop on Linux with, e.g., OpenGL-accelerated Chromium, since it forces all the rendering to happen at the server instead of on the thin client you’re viewing with. (I use it to browse the web on an underpowered Atom netbook, for example.)

1 Like

Custom controls are still implemented using drawing primitives (gradients, lines, etc.). Just try the already-working remote app server: it just works and does not feel laggy at all (until you run a Qt app, that is).
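
For illustration, here is roughly what such a control’s Draw() looks like in the BeAPI: just a handful of primitive calls (a gradient fill and a couple of strokes) that the remote app server can forward as draw commands instead of pixels. Only a sketch (the control itself is made up), but BGradientLinear and the BView drawing calls are the real Haiku API:

```cpp
// Hypothetical custom control: everything below is ordinary BeAPI drawing,
// so a remote app server can ship these few calls rather than a bitmap.
#include <GradientLinear.h>
#include <View.h>

class FancyButton : public BView {
public:
	FancyButton(BRect frame)
		: BView(frame, "fancy button", B_FOLLOW_NONE, B_WILL_DRAW) {}

	virtual void Draw(BRect updateRect)
	{
		BRect r = Bounds();

		// Background: a vertical gradient, one primitive call.
		BGradientLinear gradient(r.LeftTop(), r.LeftBottom());
		gradient.AddColor(make_color(230, 230, 230), 0.0f);
		gradient.AddColor(make_color(180, 180, 180), 255.0f);
		FillRect(r, gradient);

		// Border plus a highlight line: a few more primitives.
		SetHighColor(100, 100, 100);
		StrokeRect(r);
		SetHighColor(255, 255, 255);
		StrokeLine(r.LeftTop(), r.RightTop());
	}
};
```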

2 Likes

That would seem to back up my side of things, would it not? I can’t imagine complex BeAPI applications are snappy either, but I could be wrong. I did quite a lot of X11 forwarding a few years back; it’s very handy and works decently, but it doesn’t scale.

  • I will note that the NX protocol is much faster than straight-up X… and maybe Haiku is more similar to that, but I find that very doubtful.

The problem is that, yes, with a simple application it is cheaper to send the draw calls, but for more complex modern applications it is cheaper and far more straightforward to do all the drawing, then ship a buffer of all the work done; at that level you can do whatever you want to it (compression, video encoding, etc.). Web+ for instance is probably complex enough for this to show up. In the end simple applications don’t matter, as they perform similarly with either approach.
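
Just as a rough back-of-envelope illustration (the numbers below are assumptions, not measurements): a video-encoded buffer costs roughly the same however complicated the scene is, while a draw-command stream grows with scene complexity and still has to be rasterized on the viewing machine afterwards.

```cpp
// Rough bandwidth comparison; all figures are illustrative assumptions.
#include <cstdio>

int main()
{
	// Shipping pixels: a 1920x1080 desktop at 30 fps, 4 bytes per pixel.
	const double rawBytesPerSec = 1920.0 * 1080 * 4 * 30;
	const double h264BitsPerSec = 8e6;  // a typical desktop-quality H.264 stream
	printf("raw frames:   %.0f MB/s\n", rawBytesPerSec / 1e6);
	printf("H.264 stream: %.0f Mbit/s (independent of scene complexity)\n",
		h264BitsPerSec / 1e6);

	// Shipping draw calls: assume ~32 bytes per serialized drawing command.
	// A simple dialog repainting a few hundred commands per frame is tiny...
	printf("simple app:   %.1f Mbit/s\n", 300 * 32 * 8.0 * 30 / 1e6);
	// ...but a complex repaint (say 50000 commands per frame) dwarfs the video
	// stream, and the viewing machine still has to rasterize all of it.
	printf("complex app:  %.1f Mbit/s\n", 50000 * 32 * 8.0 * 30 / 1e6);
	return 0;
}
```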

I think some remoting systems actually support just passing video streams straight through… it would be cool if Haiku did that: no re-encoding at all, just stream the video and its dimensions to the remote app server.

Low-latency game streaming services do not render locally… they render on a server farm, encode with hardware, and stream it. That’s probably the ideal case for desktop use also, as you can throw away all the network-transparency complexity. You can also only really guarantee the capability of the host machine, not the remote client, anyway…

Yes, RDP can do draw calls, but it isn’t state of the art by a long shot… neither in bandwidth usage nor in responsiveness.

Also, most of your complaints about Wayland are years out of date… the point of it was to be a standard to build on; you can just use a library, i.e. libweston, for the specifics. The same is the case with Vulkan: it’s very complex to use natively, but that’s what libraries are for.

Also, the idea that Wayland requires OpenGL is wrong; it’s just that Qt and GTK require OpenGL these days to be fast at all. The clients do the rendering: imagine if Haiku implemented Wayland, it would continue rendering the BeAPI in software using AGG just as it does today, and then ship it to Wayland as a buffer… to slap on the screen.
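
For what it’s worth, handing a software-rendered frame to a Wayland compositor really is just the shared-memory path, with no GL anywhere. A minimal sketch, assuming the wl_shm and wl_surface objects have already been bound during registry setup and with error handling omitted:

```cpp
// Minimal sketch: hand an already-rendered ARGB frame to a Wayland compositor
// through wl_shm. Assumes `shm` and `surface` were bound elsewhere.
#include <sys/mman.h>
#include <unistd.h>
#include <wayland-client.h>

static wl_buffer* create_buffer(wl_shm* shm, int width, int height, void** pixels)
{
	const int stride = width * 4;
	const int size = stride * height;

	// Anonymous shared memory that the compositor can map as well.
	int fd = memfd_create("frame", 0);
	ftruncate(fd, size);
	*pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	wl_shm_pool* pool = wl_shm_create_pool(shm, fd, size);
	wl_buffer* buffer = wl_shm_pool_create_buffer(pool, 0, width, height,
		stride, WL_SHM_FORMAT_ARGB8888);
	wl_shm_pool_destroy(pool);
	close(fd);
	return buffer;
}

static void present(wl_surface* surface, wl_buffer* buffer, int width, int height)
{
	// The client (a BeAPI app rendering with AGG, say) has already filled the
	// pixels; the compositor is only told where the finished frame lives.
	wl_surface_attach(surface, buffer, 0, 0);
	wl_surface_damage(surface, 0, 0, width, height);
	wl_surface_commit(surface);
}
```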

Actually there is a Wayland demo running in an emulated OpenRISC (or1k) PC in JS… and it is relatively responsive… so no, OpenGL is not required. It’s impressive what it can do in about 50-70 MIPS on my PC: https://s-macke.github.io/jor1k/demos/main.html (type help, then load the wayland/weston demo).

I think restructuring the old design of the app_server is completely valid to consider for Haiku R2. Or at least growing it into something more modern by enabling desktop compositing and potentially GPU acceleration for the BeAPI.

1 Like

The NX protocol is a hybrid of VNC-type bitmap forwarding (but with extra compression) and draw commands. We could do that in Haiku’s remote desktop protocol too; we just don’t yet.
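
A hybrid like that could start as a simple per-update heuristic on the sending side. Everything below is hypothetical: the names and the decision rule are invented for illustration, and this is not the existing remote desktop code.

```cpp
// Hypothetical sketch of an NX-style hybrid for a remote drawing protocol:
// replay cheap updates as draw commands, fall back to compressed bitmaps for
// expensive ones. Types, names and the decision rule are all invented here.
#include <cstddef>
#include <cstdint>
#include <vector>

struct DrawCommand {
	uint8_t opcode;                // which primitive (line, rect, gradient, ...)
	std::vector<uint8_t> args;     // serialized parameters for that primitive
};

enum class UpdateKind { kDrawCommands, kCompressedBitmap };

static size_t EncodedSize(const std::vector<DrawCommand>& commands)
{
	size_t total = 0;
	for (const DrawCommand& command : commands)
		total += 1 + command.args.size();
	return total;
}

// Pick the cheaper representation for one dirty-region update: if replaying
// the draw commands serializes smaller than the compressed pixels would be,
// send the commands; otherwise just ship the pixels.
static UpdateKind ChooseEncoding(const std::vector<DrawCommand>& commands,
	size_t compressedBitmapSize)
{
	return EncodedSize(commands) < compressedBitmapSize
		? UpdateKind::kDrawCommands : UpdateKind::kCompressedBitmap;
}
```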

Well, only one way to find out… :slight_smile:

Have you tried it recently? For me anyway, I can use RDP in low-res with pretty low latency over a 3G connection, which is something I certainly can’t say for VNC.

Um. Then all the pointless complexity is just stuffed into libweston. Why have the complexity at all? We mostly don’t.

Vulkan is a completely different story. It’s complex to use because GPUs are complex to use, and modern OpenGL hid that complexity. Wayland invents complexity where there’s no reason for it to exist.

Please go look up the Wayland specs again; Wayland itself has a hard requirement on EGL built into the specification. Qt still does almost all its drawing in software, even on modern Linux; it certainly doesn’t need OpenGL for most things, as is evidenced by its good performance on Haiku.

Looncraz seems to have returned with his compositing app_server experiments, so we may get this. But it will be on our own terms, not with Wayland’s ridiculous overcomplication of everything and its inherently poor performance with software compositing.

2 Likes

Well, you may end up regretting making those choices and seeing that people who have been working with graphics 20 years longer than you actually had a point… and as we can already see, software rendering is inadequate for many applications, which is why Linux doesn’t even bother… to attempt anything more than a fallback mode.

So, the circled red part (where Wayland actually does its hardware-accelerated compositing, one of its main advantages)… is the sticking point for you, since there are no requirements for GL elsewhere. I guess it will become obvious pretty fast whether software compositing is viable… which I doubt, especially as screen resolutions have started going up again as of late.

[Wayland architecture diagram, with the hardware-accelerated compositing step circled in red]

20 years longer than I, yes, but stippi and looncraz have been around even longer than that. So I presume they know what they are talking about; they have been right where Xorg has been wrong before :slight_smile:

3 Likes

So is this the next priority for Haiku development? (I think it should be.) Or are there other priorities right now? :slight_smile: I think it is the only important thing Haiku lacks right now.

Dunno if it is a priority; most users and developers want it, though.

I think waddlesplash has delayed his plans on this a bit; he has a ton on his plate with everything he’s already doing. I think there is a good chance we will have something before the next beta, though, which may come sometime next year.

Web+ has improved recently as well, and HaikuDepot is getting some usability enhancements, which you can see if you test out a nightly.

1 Like

Dude, I am a college student with other obligations besides Haiku. I probably already spend too much time working on it (as the rather one-sided nature of the last progress report showed…). Just asking “when is it coming?” or “can you work on this?” is not helpful. I want the feature as much as (or more than) you do; I’m motivated without others asking me to be.

Still, getting nicknamed “batman” is kinda cool :slightly_smiling_face:

Yes, see above remark. The changes I’ve been doing for Haiku over the past few weeks were mostly “I have an hour to relax, let’s do some small stuff,” and not time to work on something large like USB WiFi or 3D acceleration.

I will be unbelievably busy between now and the end of the semester (mid-December), but then I have all of winter break (~1 month) with nothing on my plate…

Oh, there’ll be something before then, I think.

I did get some time in to work on DRM driver porting after the beta was released – I investigated what porting FreeBSD’s Linux compatibility layer (on top of our existing FreeBSD compatibility layer) would be like, and actually got a significant chunk of it to compile, but that was with a ton of hacks and stub headers added to our compatibility layer, and there were a lot of problems that I realized I just wasn’t going to be able to solve effectively without modifying both compatibility layers substantially, and there were some issues relating to memory management that I’m not sure were solvable at all. FreeBSD’s and Linux’s and Haiku’s memory management systems are not so fundamentally different, but they are different enough that running a Linux compatibility layer on top of a FreeBSD compatibility layer on top of a Haiku compatibility layer … well, you get the idea.
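
To give a flavour of the kind of shim stacking involved (purely illustrative, not the actual code from that attempt): a Linux DRM driver calls something like kmalloc(), a linuxkpi-style header remaps that onto the FreeBSD KPI, and Haiku’s FreeBSD compatibility layer maps that onto the native allocator in turn, each layer with its own assumptions.

```cpp
// Purely illustrative sketch of the layering problem; not the real code.
#include <cstddef>
#include <cstdlib>

// Layer 1: Haiku's existing FreeBSD compatibility layer maps FreeBSD-style
// allocations onto the native Haiku allocator.
static inline void* freebsd_compat_malloc(size_t size)
{
	return malloc(size);
}

// Layer 2: a hypothetical linuxkpi-style shim then maps the Linux kernel API
// that DRM drivers expect onto that FreeBSD-flavoured layer.
#define GFP_KERNEL 0
static inline void* kmalloc(size_t size, int /* gfp flags, ignored here */)
{
	return freebsd_compat_malloc(size);
}

// Layer 3: the driver itself is unmodified Linux code, e.g.:
//     struct drm_thing* t = (struct drm_thing*)kmalloc(sizeof(*t), GFP_KERNEL);
// Each layer has its own assumptions about alignment, wiring, interrupt
// context and object lifetime, and they do not line up cleanly; that mismatch
// is where the memory-management problems mentioned above come from.
```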

My next steps will be trying to use DragonFlyBSD’s DRM drivers or OpenBSD’s. Their APIs have diverged significantly from FreeBSD’s, though, so it may be the case that modifying our compatibility layer to work with them is more work than writing a Linux compatibility layer from scratch … in which case I may wind up just doing that; and if so, it will mean much more development time (months) than these other routes would require. So I really just don’t know what will happen at this point.

But one benefit that did come from the otherwise-abandoned modifications to the FreeBSD layer was last week’s fixes to the performance and battery life issues, as well as the addition of 10Gb Ethernet drivers the week before. So it wasn’t for nothing.

7 Likes

Well, I think Hamish went down this road before… and ended up just saying “to heck with it” and started writing a Haiku-dedicated Linux compat layer… which he never finished.

I am sorry, that wasn’t my intention; I was just thinking about what could be important enough at this moment. I understand it could begin soon, but maybe I misunderstood. Sorry.

With the current system working so nicely for most of the libs (with some Linux files lacking), I think this would help a lot on the porting side. :+1:

Don’t worry, Hamish’s work is not lost and is safely backed up in Gerrit:
https://review.haiku-os.org/#/c/haiku/+/436/

It can be used as a base for whoever wants to work on this next.

Just wondering, are there any updates regarding 3D acceleration on Haiku? IIRC there was some work on the Intel driver for up to Gen8.

3 Likes

My work on the Intel driver is not related to 3D, just getting the display to show something at all and some steps toward multiple monitor support. I am not aware of someone else actively working on it.

1 Like

So, what’s the current plan on how 3D acceleration might be done, @waddlesplash? A Linux compat layer, extending the FreeBSD translation layer to get their stuff working, or porting the OpenBSD/DragonFlyBSD graphics stack?

1 Like