Wow, after years people begin to acknowledge that. However, the BMediaClient was meant exactly as a temporary countermeasure for this problem. The media_kit has quite a lot of flaws. Seriously, the difficulty in developing for it isn't just a question of pedantic interfaces; it is a direct consequence of some design decisions. To name a few:
- Distinguish producers and consumers
- BTimeSource is a node (horrible)
- Every node has start/stop/threading etc. implicitly defined, that’s a major flaw
- Absolutely no concept of remote and local objects
- BMediaRoster is a BLooper (WTF)
- Mixes encoding stuff with raw data stuff
- BFileInterface is a complete dead end.
Here is how those flaws are solved in media2:
- BMediaInput and BMediaOutput
- BTimeSource becomes just a provider of RealTime(); BMediaClock and BMediaSync are introduced.
- BMediaSource
- BMediaNode implements IMediaNode, with reference counting system.
- The roster becomes just an accessor for a few static methods, e.g. getting the SystemGraph, all other features will be available using meta-interfaces a-la OpenBinder.
- The Codec Kit is here for this reason.
- BMediaExtractor and BMediaWriter will inherit some of the above mentioned classes to become interfaceable with nodes.
For a few days now I've been going back and forth on whether to write an article about this. I think it would drain too much of my time for little interest, but there are still a few things that I'd like to discuss. Note also that this is the first time I expose what the new Media2 Kit will be, which means I have mostly completed my draft design.
However, let's continue with the whys. Let's take a BMediaNode (as in the old kit). The main point of having such a class is being able to route audio/video/midi in/out between processes, right?
To do something like that you need:
- A MediaNode object
- A way to communicate between processes
- A way for the media_server to observe such objects
- A way for the media_server to reference count those resources and free them
- A way for a remote process to control a node, instantiate it and so on.
How does the Media Kit handle that?
- BMediaNode, BBufferProducer, BBufferConsumer, BControllable, BFileInterface
- Ports
- Fragile code that uses port messages and expects the nodes to notify it
- Again, it tries to keep track of resources using port communication
- The BMediaRoster and those opaque structs, media_source/destination, media_input/output, buffer_clone_info and so on.
What’s the problem?
Suppose an application crashes. Boom. You are in for a burden trying to recover from that. Suppose the media_server crashes. Boom. Recovering the status is very hard and implies writing tons of error-prone code.
Suppose you are controlling a remote node and one of the above happens. Guess what? Houston, we have a problem.
Does anyone remember some strange media_kit bugs like that?
Let's continue with the BMediaRoster example. Its API is far too complex for the few things it is required to do. You need to start the whole chain of nodes, attach time sources, and when you need to handle connection statuses you have to use all those funny and fancy methods with plain old C structures. Even though I have fairly decent experience with the Media Kit, I find myself in doubt when I open the BeBook and analyze some methods. What's that? inOutput? outInput? inSource? ourNode, theirNode? theirInput? And so on. I'm pretty sure anyone who has developed something using the media_kit knows that feeling. And if that's not enough, look at the media_connection implementation in the BMediaClient, and see how complex it is to track a connection between two nodes.
Now we have identified at least two problems:
- Programming is way too hard and error prone; the learning curve is too steep.
- Keeping track of remote objects and remote resources status is simply impossible.
And now we can go directly to the point: what's the solution OpenBinder gives us, and why am I falling in love with this idea?
We implement an interface, IMediaNode, which is remotely reference-countable; that means we can know who uses it and why. BMediaNode implements IMediaNode and is its local version.
- The media_server can subscribe to notifications from this node: how many buffers it owns, which buffer group it uses. The status of the object is kept by the kernel rather than the server, so that whatever happens we don't lose information.
- The local process just implements BMediaNode; the remote processes use IMediaNode.
- A third process that wants to, for example, Stop() our node just calls its own IMediaNode::Stop() without caring about anything as horrible as the BMediaRoster.
- A fourth process wants to connect with us? Nice, it uses IMediaNode::Connect().
- We are done processing something and all the BBuffers are released; the media_server simply receives a notification that the buffers' reference count has gone to zero and releases the memory, in just a few lines of code.
- At some point the process crashes; the Binder connection becomes unavailable and the node status goes to an invalid state. All remote processes instantly know that, and know that their inputs/outputs are now invalid and free to be used by another remote node. The media_server knows that too; it can release the resources by fairly decreasing the reference count.
The whole thing is simplified here. But believe me or not, the media2 kit needs a way to model remote and local objects and reference count them. That's what the Binder interface does. That's why it is so useful.
It is entirely possible to implement something like that using ports, shared memory and signals, plus some support in the kernel to maintain the status of shared objects. I'm still considering how to implement it: whether to make it generic enough to be used by other kits, to grab some code from Android, or to have a local implementation for media2 kit use only.
I'm pretty sure other parts of the system, like the app_server, would greatly benefit from something like that.
Feel free to comment.