There is one thing that you should temporarily block though: the wait-for-engine-idle function is still called by Haiku, which makes no sense at all without using the engine. Since the GPU on newer Intel cards is still down, even that would stall app_server big time there…
The GPU engine is usually used more cleverly than waiting for idle before running the next commands. On Radeon there are fences: a fence is written at the end of a command sequence and signaled on completion by the GPU writing the sequence number and optionally triggering an interrupt.
The accelerant engine API seems to have a sync token for that.
It is just an observation I made. BTW, everything you mention is just the acc engine; none of that is used by Haiku currently, so it should not crash Haiku if something stalls on that engine.
On Nvidia what you mention also applies, though I did not use interrupts there. For 3D I never waited for engine idle, of course; that would not be clever. However, if you want to set a 2D mode from Haiku, it's probably best to do that only when the engine is totally idle, as memory and the front buffer are about to be re-arranged.
Update: hmm, come to think of it, waiting might be precisely the reason why acceleration did not speed up Haiku drawing compared to software drawing. I saw app_server call that hook after the use of every 2D primitive (pure observation of behaviour). I think I will re-enable 2D acceleration for some personal tests here at some point.
Maybe. This is not how BeOS did it though, AFAIK: app_server stops engine use and then calls the modeset.
How and when all hooks are called on BeOS would be very interesting to monitor closely. It was quite clever in my opinion, with of course the limited knowledge I have.
I implemented the interrupt ring and made it handle fence updates. Interrupts themselves are not yet handled; the interrupt ring is polled by a timer.
The GPU uses a ring buffer for interrupts too, writing interrupt event packets (VM page fault, ring fence completed, display connected/disconnected (display hot-plugging), etc.). The interrupt ring works in the opposite direction compared to the GFX/DMA rings: it is written by the GPU and read by the driver.
I finally fixed the GFX ring problem by adding additional tracing to the Linux amdgpu driver and comparing results. The problem was different units in the RPTR and WPTR registers: 32-bit dwords, not bytes. The DMA and GFX rings use different ring position units. Now the GFX ring works with a non-zero RPTR and wrapping.