Will Haiku Beta 1 be here this year?

what reports and what systems?

It isn't just reports; one can calculate the latency too. More abstraction means more latency, in my view.
The reports are coming from Windows users, btw.

Nope
It was designed after IRIS GL to be an open graphics library API that abstracts hardware acceleration, from day one.
In fact, the first versions didn't even have full software rendering support, IIRC; only the features not available on all of SGI's hardware accelerators had software fallbacks. With time, every part of the API got a software fallback, allowing software-only rendering.

But it was never the objective of that API to be used without hardware acceleration, and it shows in its design.

The same is even more true for Vulkan.

OpenGL has targeted software rendering since before Quake, regardless of its origin.

Page 1, chapter 1.1.

OpenGL (for “Open Graphics Library”) is a software interface to graphics hardware.

You either have a twisted definition of software rendering or you’re just trolling.

If you think I'm a fool, what are you doing arguing with me?

OpenGL still targets software rendering; we have llvmpipe in Haiku now because of it, and it works well. That building a decent Haiku system means putting more resources into processing power and pairing that with low-end display hardware is a fine problem to have.
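For anyone curious which path a given system actually ends up on, here's a minimal C sketch that prints the renderer string (illustrative only: it assumes GLFW is available for context creation, which isn't necessarily how you'd set up a context on Haiku). A llvmpipe-backed stack typically reports something like "llvmpipe (LLVM x.y, 256 bits)", while an accelerated driver reports the GPU's name:

```c
/* Sketch: create a GL context and report which renderer backs it.
 * Assumes GLFW for context creation; on Haiku the native setup
 * (e.g. a BGLView) would look different. */
#include <stdio.h>
#include <GLFW/glfw3.h>

int main(void) {
    if (!glfwInit())
        return 1;

    GLFWwindow *win = glfwCreateWindow(640, 480, "renderer check", NULL, NULL);
    if (!win) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);

    /* "llvmpipe ..." means the software rasterizer is in use;
       otherwise you'll usually see the GPU/driver name. */
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```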

They keep them in a different repository, and they are all custom code for Vulkan backends (not OpenGL, but Vulkan is lower-level than OpenGL, so I think there are already projects to implement OpenGL on top of Vulkan).

The kernel drivers they’ve developed so far are here:
https://github.com/fuchsia-mirror/garnet/tree/master/drivers/gpu

I think the userspace Vulkan core is also in that repo (search the files for "vulkan" and there are some libraries). I think geist knows more or less how it all fits together, so ask him.

I’m not. I figured “why not check with some credible sources”, because maybe you’re right? Turns out you’re not, but even confronted with the original OGL spec you still want to spread misinformation ¯\_(ツ)_/¯
I can't stop you, but I can make it a bit harder 🙂

Good luck and have fun!

You haven't gotten to the part where I'm misinforming anyone: there's been a software target for over twenty years. You're flaming, and for no reason.

Let me remind you of your original statement:

The phrasing of this suggests that OpenGL was designed with software rendering in mind, with hardware rendering added as an afterthought. This is wrong, and it was pointed out to you, to which you responded with:

These sentences are again phrased in a way that conveys the wrong message: that OpenGL was designed for software rendering (not true). Then you just kept writing about OpenGL targeting software rendering, which is imprecise: the API does not target software rendering, but it has software rendering targets. See the difference? I only saw what you meant in your last (fifth) post.

You're not being precise, which leads to misunderstandings, and when that's pointed out to you, you just double down on your wrongly phrased statement. I'm aware some of this might be due to a language barrier and that it might have been right in your head. Still, when three people tell you you're drunk, you probably are.

Then it targets software rendering. This isn't three people telling me I'm drunk; it's you, alone, repeatedly attacking me for, apparently, placing my words in an order that displeases you. And the stakes for this are…?

I think the point is that the API is made for hardware renderers.

In the ancient past I designed and built a software 3D renderer for Commodore computers. I put a lot of thought into not just the design of the API but also possible future extensions to its uses.

I ended up with a design that looks nothing like OpenGL.

When you consider what format and type of data you want to pass to describe a 3D object, the format that gets the best performance out of hardware doing the drawing is totally different from the format that gets the best performance out of software.

OpenGL's format is for hardware.
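To make that concrete, here's a rough compatibility-profile C sketch (illustrative only; it assumes headers or a loader that expose the GL 1.5 buffer-object entry points) of the data path OpenGL is built around: the application packs vertices into one block of memory and hands the whole block over in a single call, which maps naturally onto uploading to GPU memory, whereas a renderer written for software from the start might prefer a very different layout.

```c
/* Sketch of OpenGL's buffer-oriented data path: pack vertices into one
 * block and hand it to the driver in bulk. Assumes GL 1.5+ entry points
 * are available (e.g. via a loader such as GLEW). */
#include <stddef.h>
#include <GL/gl.h>

typedef struct {
    float position[3];
    float color[3];
} Vertex;

static const Vertex triangle[3] = {
    { { -0.5f, -0.5f, 0.0f }, { 1.0f, 0.0f, 0.0f } },
    { {  0.5f, -0.5f, 0.0f }, { 0.0f, 1.0f, 0.0f } },
    { {  0.0f,  0.5f, 0.0f }, { 0.0f, 0.0f, 1.0f } },
};

void upload_and_draw(void)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* One bulk copy; the implementation decides where the data lives
       (GPU memory on hardware drivers, plain RAM under llvmpipe). */
    glBufferData(GL_ARRAY_BUFFER, sizeof triangle, triangle, GL_STATIC_DRAW);

    /* With a buffer bound, the "pointer" arguments are offsets into it. */
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex),
                    (const void *)offsetof(Vertex, position));
    glColorPointer(3, GL_FLOAT, sizeof(Vertex),
                   (const void *)offsetof(Vertex, color));
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    /* A single call submits the whole batch, instead of per-vertex calls. */
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```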

It's also been rendering in software, and well, for twenty years. That it also targets hardware is beside the point (and kind of a given; it's extremely obvious, whereas the other widely used 3D graphics library does not support software rendering and never has).

And you seem to spend a lot of time avoiding the truth that the OpenGL API was designed for hardware first; the fact that there are software implementations of the API is beside the point: the API was designed for HARDWARE.

That makes it sub-optimal for software if rendering speed matters. What the software implementation does is let you test your ideas on cheaper hardware, or, if the demands are not too high, let you use the software version on cheaper hardware.

And yes, there is still software that only renders in software and does not target any hardware. SO WHAT?

It has nothing to do with the discussion of OpenGL design choices.

I'm not avoiding that hardware rendering exists. OpenGL is literally the only graphics library outside of OS X with a software rendering target. It exists in Haiku; we've got that chain, and it's good enough for most of what anyone can use Haiku for right now, anyway. Application and driver developers separate from the Haiku team can take a crack at hardware acceleration, and nothing written targeting OpenGL will even notice the difference: it'll still work (and be faster and look better), because OpenGL targets both software and hardware.

This whole thing has no point whatsoever and has been an exercise in jumping down my throat just to move a mile off topic.

My point is, was, and has been: there's no reason for the core team to touch hardware acceleration. The rest is semantics.

Wasn't this thread supposed to be about the Beta? Now it's an OpenGL thread.
Back to the original question: I think the Beta will be ready by this time next year (2019). The buildbots are still being worked on, and until that's done we can't even start Beta branching. An extended period of bug fixing and polishing will need to follow. I'd rather see it done right than quick and crappy just to satisfy the optics of releasing a Beta.

Yes, good point. Can the OpenGL posts be moved to their own thread?

Can that thread be moved into the sun?

While there is talk of moving to a beta phase, there are still ideas being put forward, discussed, and often rejected:
Rust, Swift, Rebol, Wine, virtual machines, multi-user, security, and so on. This seems to be a self-defeating exercise. So wouldn't it be better to make a firm statement somewhere that no more ideas are required for R1/beta, so that the devs can fully concentrate on getting a beta out, and that new ideas are only for R2? In other words, if you can nail down exactly what R1/beta will encompass, it would make expectations and discussions more realistic.

I thought this was already done. R1 is a BeOS equivalent, with the features that were voted on years ago. Other smaller features are added if a developer cares to add them, or if they are necessary to support current hardware (like USB3).

This is a pretty clear picture, but not a hard line in the sand. It's unlikely that a useful patch would be rejected just because it's not required for R1; for example, if someone decided to add NVMe support or USB Wi-Fi.

At the end of the day, the developers are doing this for fun, so it's understandable that some non-required features get added from time to time.

As for major changes, like the ones you specifically mentioned, it seems well known that these won't happen until after R1. This has been discussed a lot in these threads lately, which is why many of us are looking forward to the exciting post-R1 world.
