The whole point is that, per the current design, we don’t want the codecs to depend on the media_kit. We shouldn’t generally assume the programmer wants to use the media_kit at all. The general idea is that decoded data is made easily available through BMediaTrack, and then at the upper level someone can easily implement a node, eventually. One thing to consider is that we can hardly provide a node that really handles everything every possible app might want. Instead, most apps will just want access to the bitmap to draw on top of the frames, and that’s why simple rendering should happen on the codec side.
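For context, this is roughly what the codec-side path already gives you today: a minimal sketch (untested, Haiku-only, error handling trimmed) of decoding video frames through BMediaFile/BMediaTrack into a plain BBitmap the app can draw subtitles or overlays onto directly, with no node involved. The entry_ref is assumed to come from the caller.

```cpp
// Sketch only: decode video frames on the codec side, no media_kit node.
#include <MediaFile.h>
#include <MediaTrack.h>
#include <MediaDefs.h>
#include <Bitmap.h>

void DecodeFrames(const entry_ref& ref)
{
	BMediaFile file(&ref);
	if (file.InitCheck() != B_OK)
		return;

	for (int32 i = 0; i < file.CountTracks(); i++) {
		BMediaTrack* track = file.TrackAt(i);

		// Ask for raw (decoded) video out of this track.
		media_format format;
		format.type = B_MEDIA_RAW_VIDEO;
		if (track->DecodedFormat(&format) != B_OK
			|| format.type != B_MEDIA_RAW_VIDEO) {
			file.ReleaseTrack(track);
			continue;
		}

		// The decoded frame lands in an ordinary bitmap: the app can
		// paint subtitles on top of it before handing it to a view.
		BBitmap bitmap(BRect(0, 0,
			format.u.raw_video.display.line_width - 1,
			format.u.raw_video.display.line_count - 1), B_RGB32);

		int64 frameCount = 0;
		media_header header;
		while (track->ReadFrames(bitmap.Bits(), &frameCount, &header) == B_OK
			&& frameCount > 0) {
			// ... draw overlays onto `bitmap`, then display it ...
		}
		file.ReleaseTrack(track);
	}
}
```

This is the access pattern the argument relies on: the decoded bitmap is in the app’s hands, so overlay rendering needs no extra node in the graph.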
Having a rendering node would be problematic right now. For example, how would you manage the position and other settings of the subtitles? There’s currently no way to build a reliable API on top of that hypothetical rendering node. And even if that were resolved, for example using some kind of port protocol, there’s no assurance it could be managed in a way that counts as a stable API.
Now, let’s suppose there’s a node for each component of the codec. We’d have a chain similar to this:
[FileReader] -> [Demultiplexer] -> [Decoder] -> [Filter] -> [Consumer]
This assumes we’re dealing with a single media format. Now suppose we want to handle audio, video, and subtitles together.
At the Demultiplexer node we’d have three “arrows” going out. The audio arrow would go to the system mixer. The video and subtitle paths would first pass through their respective decoders, and then we’d have to overlap the bitmaps somewhere, right? So we’d also need some kind of video mixer node to do that.
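To make the cost concrete, here is a hedged sketch of what wiring just one hop of that hypothetical graph would look like through BMediaRoster. The `videoMixer` node is exactly the piece that does not exist; node discovery and error handling are omitted, and none of this is tested.

```cpp
// Hypothetical wiring of the chain above via BMediaRoster (sketch only).
#include <MediaRoster.h>
#include <MediaDefs.h>

status_t ConnectVideoHop(const media_node& demuxer,
	const media_node& videoDecoder)
{
	BMediaRoster* roster = BMediaRoster::Roster();

	// Find a free encoded-video output on the demuxer and a matching
	// free input on the decoder.
	media_output output;
	media_input input;
	int32 count = 0;
	roster->GetFreeOutputsFor(demuxer, &output, 1, &count,
		B_MEDIA_ENCODED_VIDEO);
	roster->GetFreeInputsFor(videoDecoder, &input, 1, &count,
		B_MEDIA_ENCODED_VIDEO);

	media_format format;
	format.type = B_MEDIA_ENCODED_VIDEO;
	status_t err = roster->Connect(output.source, input.destination,
		&format, &output, &input);

	// ...and the same dance again for subtitles -> subtitle decoder,
	// both decoders -> the (nonexistent) video mixer, and the mixer ->
	// consumer: four more connections, each backed by a node with its
	// own control thread and its own latency.
	return err;
}
```

Every one of those connections adds a negotiation step and a thread, which is the point made below about resource waste.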
Let me add here that BMediaEventLooper is unsuitable for something like this, because you can’t really manage non-linear paths with it due to its oversimplified latency system.
Now imagine using something like that in a complex app like MediaPlayer.
- How would you manage synchronization between the frames, given that in theory every node has an independent latency? Any jitter could cause the frames to drift, and right now it’s very hard to recover “externally” from such a situation.
- It’d work perfectly as a showcase, but as soon as the app needs to do something slightly beyond the “standard”, the problems arise: the logic is buried somewhere inside the nodes, with no way to influence how they do their job.
And last but not least, each node would have its own thread (per the current media_kit design). Do you think that would be computationally optimal? It’d waste a lot of resources. That’s why this idea of “do everything using a node” is completely bad. It can’t work. It may make sense to have system nodes for easily playing WAV files, yes, but don’t expect that to be a solution for complex apps.