There is a big picture even if there isn’t a single guy responsible for it…
We already have Qt, wxWidgets, FLTK, SDL, and probably a few others I’m forgetting. GTK+ is the only one we don’t have. So, wouldn’t it make more sense to properly port GTK+ rather than Wayland?
Well, yeah, when you put it like that, absolutely!
Tsk tsk. We have at least 50% of the GTK work done (the backend). “Just” the native frontend remains to be done.
(I don’t know if the current Broadway backend includes drag-and-drop support to override, in keeping with Pulkomandy’s DnD text above.)
This does render in a web browser, does it not?
Yes, it does render in a browser, yet the applications run “natively” (as in, they run on the Haiku host, then ask the web server doing the display to update, as if drawing to a window).
But you can, technically, run GTK apps right now, provided you can satisfy the dependencies to compile them.
Yeah, we have all the APIs, but they are slow when running on top of an unaccelerated API like the BeAPI. All of this is fine today, and certainly for R1. Though ports of APIs and applications that typically require acceleration, done on top of an unaccelerated API, probably shouldn’t be called proper ports.
Looking forward, though, Wayland would provide much of the infrastructure and work already done for accelerating all APIs: rendering the BeAPI into a buffer initially, and potentially accelerating it later on, perhaps via Vulkan, etc. Worst case, it would provide performance similar to what we have now when unaccelerated.
I mean, by the time Haiku implements 3D drivers, it will either be stuck with a graphics-server design from roughly the Windows XP / X11-plus-extensions era, or we can actually go about it in a cooperative and modern way. Probably much of what sets Haiku apart can be retained as well, as Wayland is just a protocol, after all, for how things should talk to each other; it dictates a few things, but nothing that seems too different. Much of what Wayland is about is accepting that past design decisions (on Unix) and the hacks around them were wrong.
Anyway, I think it’s worth thinking about rather than brushing the suggestion off because that’s not how it’s done now. Do I want Haiku to become Linux? No, I want an accelerated modern desktop across all supported frameworks, with fallback performance on hardware that can’t support acceleration… and I don’t think the wheel needs to be reinvented to get that.
WeirdX works ok…
That’s not what I said at all… but maybe I misunderstood what you want as well.
But… it’s java… I am offended lol.
In all seriousness though, WeirdX is a bit limited relative to what a port of current X would support, IIRC, and is a dead project. It probably is 99% of what most would need, though.
The Be API is unaccelerated by implementation, not by design. (The same is true of QtWidgets; while Qt Quick is hardware-accelerated by default, QtWidgets are almost entirely not and are drawn in software. There is a QPainter backend on OpenGL, but it is not usually used here.) We could write a BView backend that used OpenGL to do drawing operations; whenever performance is an actual bottleneck, we may indeed. “Premature optimization is the root of all evil”, and all that.
Er… What do you mean by this? I don’t understand.
We control the design of app_server completely; and certainly it isn’t like XP (which did some drawing in the kernel, I think) or X11 (where there are server-side drawing functions, but they are rarely used nowadays, and window managers do a lot of heavy lifting that they shouldn’t have to were the design saner).
Either way, we could completely rearchitect the internals of app_server tomorrow, and no-one but us and whoever reads the commit log would be the wiser. It’s very much abstracted from the APIs that developers consume, and that’s the point.
Wayland is very tied to Linux’s event models, which is a problem. But that aside, it is very much a victim of “design by committee” and “hyper-interoperability is the way of the future.” Have you read the specification for the Wayland wire protocol? It is virtually impossible to “speak” it without using an XML parser to parse specification files, and even then it’s not so straightforward to pass messages. And if you want to read the clipboard, then you have to use their equivalent of an “extension protocol” to do that; but oh no, this server doesn’t implement that protocol but uses a different one, so no clipboard for you!
That is to say, Wayland is a natural extension of the Linux philosophy: any component can be exchanged for any equivalent component. That sounds cool and all, but in practice what this means is that rough edges can never be smoothed out if it would break compatibility, or because others have already “gotten used to it.” The same is true in X11, ALSA, CUPS, etc. etc. etc.; which is most of why we use none of these. We would even prefer to have a native office suite instead of LibreOffice; but unfortunately at present we don’t.
Further, I don’t really know what “past bad design decisions” you are referring to that Wayland does not have. Obviously a lot of the legacy cruft of X11 is now gone, but so is drawcall forwarding and ssh -X, something Haiku still has (and hopefully always will).
Wait, that’s why you think Wayland is worthwhile? Uhh … Wayland does not help us with that at all!
The userspace portions of hardware acceleration we pretty much already have (Mesa, BGLView, etc.) The lack of kernel driver support is what is blocking work from proceeding on that front. Porting or switching to Wayland does not help us one bit here, because it would make exactly the same demands of the kernel and driver stack that doing DRM acceleration in app_server would. So Wayland does nothing for us here at all.
Further, as I already said, a significant number of said toolkits don’t use acceleration at all. QtWidgets doesn’t; I believe GTK+3 supports it but it’s usually disabled in Cairo by default; and wxWidgets uses Qt or GTK+ on Linux. So where exactly are we lacking here?
Same difference? It was never designed to be accelerated, and probably would not benefit, or it might go back and forth depending on the use case, at least at the API level.
Mainly that if you don’t use as much as possible from Wayland etc., then you have to reimplement all of it just to reach feature parity, and you’d end up doing much the same thing. So you end up with something only halfway there, just on manpower and lack of clear direction. If you are going to rearchitect at all, you may as well align with Linux at that point, if there is no drawback; that’s all I’m saying. And unlike the Linux drivers, Wayland doesn’t seem to be that much of a moving target; the X guys are pretty different in that respect from the kernel guys.
Except that it does… if you want a secure, composited, accelerated desktop from the ground up. Do you want to go through the pain of developing multi-GPU handling for laptops (PRIME), output rotation, adaptive sync, etc.? The point is, even once you get the kernel bits of Mesa going, Haiku R2 won’t have anything using it: it will still be drawing the entire BeAPI and compositing it in software, then copying it to the memory of the graphics card in a brain-dead way, instead of taking advantage of the hardware. At the bare minimum, with Wayland and a 3D driver you’d get at least hardware-accelerated application composition.
Also, last I checked, screen tearing wasn’t too bad on smaller displays, but as they get bigger, like 4K and up, you have to rely on the hardware to get it right, and Wayland does that well.
https://dev.haiku-os.org/ticket/13271 < that, for instance. The fact is that vsync isn’t enough; if it were, nobody would have bothered with Wayland. It’s a fight that couldn’t be won without some large design changes and lessons learned that were implemented in DRI3 and Wayland. If there were an alternative that was on par, I’d suggest that too, but right now Wayland is the only open-source stack for this.
I vehemently disagree with your assertion that it would “probably not benefit” because it was not designed for that in the first place. It does all drawing in ::Draw methods with rather well-designed drawcalls, and these could be directly made into OpenGL calls.
Even if it hadn’t been designed for this, though, my point that Qt and GTK+, and thus KDE, GNOME, XFCE, etc. are largely not hardware-accelerated for standard widget drawcalls stands. So whatever standard you are trying to hold us to, even Linux is not.
There is a drawback; I already explained this. And if we cannot design a display server, something that is a core part of the OS, without getting only “halfway there” from “a lack of clear direction”, how could we reasonably be expected to design a whole OS at all?
wlroots, the modular Wayland compositor library behind swaywm (at one point the most popular Wayland compositor) among others, has its DRM backend in a single drm.c, which is not too much over 1000 lines. That’s it, 1000 lines! And other Wayland compositors will implement this their own way; that’s the “beauty” of Wayland being a specification, not an implementation. Is that really an upside?
swaywm handles all of those features, mostly because they are implemented in DRM, and not in the display server itself. Those 1200 lines (and then the 200-300 lines in each of the other files in that folder, mostly interfacing with swaywm itself) are all that’s needed to do all of the things you are mentioning here. That’s it. All the rest is up to Mesa and the DRM driver cores.
Again, premature optimization is the root of all evil; and again, Qt, etc. do all their widget drawing in software. If it works for them, and they aren’t really experimenting with hardware-accelerated drawing, why would we?
OK. See comment above about premature optimization. We can already move windows around the screen without a whole lot of CPU usage.
Further, looncraz has posted an overview of a composited app_server and how he plans to go about implementing it, and more immediately, how to handle individual composited windows. We know how to do this, we just need the time.
As noted above, there is no “Wayland” program; there is the Wayland specification and then a variety of implementations. swaywm's does all of that in ~1200 lines, or in other words, about 1-2 “man-weeks” worth of work for someone who understands all the components involved. That’s not very much.
People bothered with Wayland because X11 is broken and full of legacy cruft from the 80s, indeed. That does not mean what we have is worse than Wayland, and my argument is that we would still be better off without it.
The ticket is about implementing vsync. Currently, only the intel_extreme driver does; radeon_hd, etc. do not. We also do not utilize vsync information in app_server, mostly because at the time that part of app_server was written, no driver supported it. Once we have more drivers that handle it properly, we can revisit that ticket.
Uh, OK, so then ours would be the second.
If that’s the case then great. I am skeptical; I guess we’ll have to see…
Well, yeah, but that gets back to the point of drawcalls being slower than just doing it… What you really want is to program the GPU to do certain tasks that are expensive on the CPU, and just feed it the data, rather than attempting to program the GPU on the fly to draw anything, which is slow for the CPU and doesn’t use the GPU optimally. So, you’d want to at the very least batch up the draws from the BeAPI and submit them at once, right? Doing it properly certainly wouldn’t be just mapping to GPU API calls directly.
I tried booting up the KGPE-D16 again last night and didn’t get anywhere… not sure why; I’ll probably play with that on Saturday. Anyhow, I’ve pestered you enough about this for a good few months at least. It’s been very cool to see XHCI working well now, even on my Ryzen laptop.
Um … what do you think “just doing it” is? That’s what “drawcalls” are. You may set up shaders beforehand, but then you’re just going to submit vertices and the like to the GPU.
Uh … again, I don’t know what you mean here. There’s nothing in the BeAPI that prevents command batching in the slightest; in fact that is probably exactly what we’d do when writing a GPU backend. Again, this is all an implementation detail; there is nothing in our API that prevents this at all.
Nothing about our API forces this to be “on the fly”.
You can call the view’s Draw() only once, store the result in a BPicture, convert that to GPU operations, and let the GPU execute that several times if needed (for example to render the window with varying zoom levels, etc).
Only when the application calls Invalidate() does this process need to be repeated.
This is what allows us to get decent performance in remote app_server, and a design with asynchronous CPU/GPU is not much different from that.
So, we can do this easily, because the API isn’t directly mapped to the drawing calls. There is the whole app_server in the middle, and we can do anything we want there.