VideoStreams: Media Kit-like 3D hardware acceleration kit for Haiku

Yes, I suspected that this was already the case. My question was more than anything whether there could be any real advantage to using OpenGL via the CPU in this context.

I uploaded a patch that allows creating a BBitmap with a custom area_id. It would help share bitmaps between processes in WebKit2.

https://review.haiku-os.org/c/haiku/+/4369

12 Likes

I tried to set up double buffering with intel_extreme and something went wrong. It is supposed to show only the upper part of the framebuffer, not shrink it to half height. Fixing the intel_extreme driver may also be needed.

#17261

[screenshot: CIMG4739_1]

14 Likes

ScreenConsumer, based on BWindowScreen, is working with the ati driver on QEMU. Double buffering and buffer swapping are used. It displays an animation of a black rectangle flying along an “∞”-shaped trajectory. Minimal functional dirty-region management is implemented: only changed parts are repainted. #17261 needs to be fixed to run on real hardware.

[screenshot: screenshot19]

24 Likes

I know it is very early, but how will it handle the iGPU and dGPU on laptops? I am curious about this; I suppose it will use the iGPU by default.

But will this media kit consider more than one graphics processor? Thanks for all your work.

Yeah! B_MOVE_DISPLAY is working with radeon_hd and my patch. Tested on a RISC-V board.

ScreenConsumer should start working after properly implementing the clone accelerant.

[screenshot: CIMG4740_1]

25 Likes

Nice one 🙂 Enjoying following along with this one!

2 Likes

I managed to make AccelerantConsumer work with the intel_extreme driver. Buffer swapping is working (it can be confirmed by the flashing cursor).

29 Likes

Also works with radeon_hd and RISC-V.

18 Likes

So, what exactly was gained? I’m not sure I understand what you’ve done here.

The beginning of an accelerated desktop… I think?

5 Likes

An initial version of the compositor is working:

[screenshot: screenshot104]

29 Likes

When implementing the protocol, I ran into a deadlock when two connected nodes are handled in the same thread: synchronous message sending to a BHandler in the same thread blocks forever. I made a workaround:

static status_t SendMessageSync(
	BHandler* src, const BMessenger &dst, BMessage* message, BMessage* reply,
	bigtime_t deliveryTimeout = B_INFINITE_TIMEOUT, bigtime_t replyTimeout = B_INFINITE_TIMEOUT
)
{
	if (dst.IsTargetLocal()) {
		BLooper* dstLooper;
		BHandler* dstHandler = dst.Target(&dstLooper);
		if (src->Looper() == dstLooper) {
			// Target lives in the sender's own looper: a synchronous
			// SendMessage() would wait on ourselves, so dispatch directly.
			// !!! limitation: no reply can be returned on this path
			dstHandler->MessageReceived(message);
			return B_OK;
		}
	}
	return dst.SendMessage(message, reply, deliveryTimeout, replyTimeout);
}
7 Likes

The compositor with multiple VideoProducer clients is working.

33 Likes

I implemented the compositor protocol, and now clients from separate processes can be connected and disconnected.

28 Likes

Very well done! Would you consider a bounty or contract to work on 3D drivers?

3 Likes

Donations are accepted here; they can speed up development.

About bounties: that should be asked of Haiku, Inc., not me; I am not an official member of the Haiku development team and have no decision power. I also have a full-time job and am not sure whether it could be combined.

Next plans (approximate):

  • Implementing copy swap chain present mode. It is needed for app_server.

  • Integration with app_server by implementing CompositorHWInterface. test_app_server can be used first.

  • Compositor surface window based on BDirectWindow (for getting clipping information from app_server).

  • [optional] Semitransparent windows based on BDirectWindow with modified region calculation.

  • Mesa3D OpenGL integration.

  • Software Vulkan rendering integration.

  • VideoStreams graphics card management API (VideoPorts) and multiple monitor support.

14 Likes

I will note here for users’ sake that while some of the API research in here is interesting, this is mostly orthogonal (i.e. related but not contributing to) 3D acceleration. That is, some of these APIs might be useful in a world where Haiku has 3D acceleration, but none of them are actually prerequisites for or actually contribute to 3D drivers.

Compositing is generally done because hardware accelerated graphics makes it relatively cheap and it makes other things (e.g. desktop shadows) significantly easier. It does not need to be done to get hardware accelerated 3D graphics, to be clear; I would expect that a first iteration of hardware acceleration would not touch app_server’s core drawing code at all.

I further note (as I have discussed with X512 elsewhere), that I’m not actually sure if the VideoStreams API discussed here has relevance anymore with the introduction of Vulkan WSI. These days, Vulkan and drivers want to do nearly all buffer and copy management themselves based on direction by the Vulkan API consumer. This is all specified in the Vulkan APIs, and while some of it requires interfaces with the windowing system, most of the internal buffer management that X512’s “VideoStreams” seems to supply is, in my understanding, entirely done by and with Vulkan and then the GPU drivers.

(Indeed, OpenGL leaves a lot more to the individual window system here, but if the future is entirely Vulkan, designing, implementing, and then supporting an API that largely has relevance only for OpenGL may not make a lot of sense.)

1 Like

Haiku, Inc. also has no decision power over development. If someone else decides to set up a bounty, that’s fine.

Ultimately the development team (not Haiku, Inc.) decides what is merged or not. It has already happened that some work paid for by Haiku, Inc. was rejected by the development team and eventually rewritten. But of course Haiku, Inc. now asks the developers before setting up a paid contract, so that it doesn’t happen again, because it’s a bit embarrassing for everyone.

13 Likes