Codec Kit Introduced

The Codec Kit has been introduced. It consists of two parts:

  • The codec, reader/writer, and streamer API
  • The Adapter Kit API

The first is the usual API, which has been private since the beginning. It still lacks padding and some other love; that is coming soon. The second is the Adapter API, which I implemented to support streaming. The ABI will be subject to change until R2, though I think it is mostly ready.
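
To give an idea of how the Adapter API is meant to be used for streaming, here is a rough sketch of a network-backed source. Since the ABI is still settling, treat the class names, flags, and signatures as indicative rather than final:

```cpp
// Sketch of a streaming source built on the Adapter API.
// NOTE: the ABI may change until R2; the header is private and the
// flags/signatures shown here are indicative, not a reference.
#include <AdapterIO.h>

class NetworkMediaIO : public BAdapterIO {
public:
	NetworkMediaIO()
		:
		// A streaming source whose total size is not known up front;
		// readers block indefinitely until data arrives.
		BAdapterIO(B_MEDIA_STREAMING | B_MEDIA_MUTABLE_SIZE,
			B_INFINITE_TIMEOUT),
		fInputAdapter(BuildInputAdapter())
	{
	}

	// Called from the network thread as packets arrive: the adapter
	// buffers the data and unblocks any reader waiting on it.
	void DataReceived(const void* buffer, size_t size)
	{
		fInputAdapter->Write(buffer, size);
	}

private:
	BInputAdapter* fInputAdapter;
};
```

The extractor side can then read from such an object as data becomes available, instead of requiring a seekable file on disk.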

This was a needed step mainly for two reasons:

  • This functionality doesn’t belong in the media_kit, which is already bloated enough.
  • It will be the foundation stone on which to build the Media2 Kit.

In the future I may expand on this, but not now. Beware that apps using the private API will break and will have to link to libcodec.so. The API is not present in R1/Beta and I am not planning to introduce it there.

EDIT: Ok guys, I removed that part of the post.

Why? Why do you have to explicitly state not to donate to the Inc.? Dumb question or not, maybe this should be explained somehow, since your statement might be interpreted as discouraging people from financially supporting Haiku.

It is simple: Haiku Inc. did not support my intention to improve the media framework, without any serious explanation other than their own inactivity. So it follows that you don’t have to support them if you like my work.

This seems like an interesting story. So they didn’t outright say no to it, but just didn’t really give a response. Is that right?

Exactly. After a while, I had a talk with the only operative person in there (kallisti5), and he vaguely explained to me that my proposal was controversial.

Perhaps discussions are still happening internally? If that is true, then making statements that discourage donations to them might swing their final decision to no. We humans are quite the emotional beings oftentimes, y’know.

As donations to Haiku Inc. are not and cannot be conditioned on funding any specific work on the Haiku project, calling on people not to donate if they like your work is calling on people not to donate at all.

Donations and bounties are two different things.

No. I am not saying not to donate. I am saying: if you want to donate because you like my work, don’t do it.

On the other hand, I am a Haiku developer, and Haiku Inc. is a neutral organization, right? I am not obliged to support them in any way. If I don’t agree with them, I am free not to.

Sincerely, I can’t be less emotional than I am now. It is a thought-out decision that I don’t want my work to be something that supports them. And no, it is too late for any collaboration between me and them.

I get what you are saying and why, but you might want to amend it to say something like:

“Please support my work directly if you like it, as this is not funded by Haiku Inc.” That’d be a lot less controversial.

I also get why Haiku Inc. might not concern themselves with this yet, as it isn’t part of the R1 focus.

Also, it’s good to see interest being put into developing improved APIs for R2. Thanks! Is this API planned to support hardware acceleration? It’s also worth noting that AMD’s latest cards have JPEG acceleration too, in the VCN engine. So codecs might not be just for audio or video but really for any media, right? Or is this purely for streaming media?

Ok, then. Regardless, I still don’t think that it is a good idea to discourage people who like your work from donating to Haiku Inc. It is not an impossibility that someone could like what you’re doing and also want to donate to the Inc. since it supports Haiku. I’d suggest following what @cb88 said:

Ok, going to accept that.

Well… my proposal wasn’t about an R2 kit. It was about general improvements like:

  • BMediaClient API on top of the media kit
  • MediaPlayer Add-On host
  • Subtitles support
  • UltraDV improvements

Yeah. In the future I’d like to look into porting parts of libstagefright and its acceleration layer for mobile devices, but also other things like support for audio-over-IP. At least in my intentions :sweat_smile:.

What about https://github.com/intel/libyami / libva / VA-API?

libstagefright is the de facto implementation of OpenMAX, or at least the de facto subset thereof. But… that’s mostly on mobile GPUs with binary drivers. Dunno how you even use it on desktop hardware? Mesa seems to have some sort of support for OpenMAX via Tizonia’s work on it, recently at least, but… I’ve never heard of anyone actually using it otherwise?

Certainly VA-API is more of a known quantity on desktop hardware?

Hi Barrett. Thanks for the work on improving the Media Kit. I am developing a Haiku-native video editor (the video part works, working on audio now), and since I interface with the kit I’d love to know what the update adds that I didn’t have access to before, and how I can use it. Unfortunately, this thread is more about the non-technical aspects of your work, while I’d love to know what’s happening on the technical level. Can you write some more about the actual changes and how they impact developers using the media kits? Thanks.

Zenja, how fast would your editor go through 256GB of RAM :smiley:?

Also, according to someone else I know who has the setup I’m building, the RAM performs best if only the RAM attached to the local NUMA node is accessed, so 128GB per socket (otherwise latency goes up and bandwidth is limited by HyperTransport)… Since your application is so RAM-hungry, does it take all that into account anywhere? Hoping to boot it up tomorrow night for some initial testing.

For HD video, a 1920x1080 frame consumes about 8MB of RAM (ARGB32 colour space), so at 30 frames per second that’s roughly 240MB per second cached. Video players will typically cache frames for seamless scrubbing. A 4K video source needs 4x more memory, so about 960MB per second. So with 256GB of memory, I guess it’s safe to assume you will have around 4 minutes’ worth of frames cached for scrubbing. I guess the experience should be good. The biggest bottleneck for video editing is disk bandwidth, not RAM bandwidth.
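
Spelled out as a quick back-of-the-envelope check (assuming uncompressed ARGB32 frames at 4 bytes per pixel, and the 256GB machine mentioned above):

```cpp
#include <cstdio>

int main()
{
	// One uncompressed ARGB32 frame is 4 bytes per pixel.
	const double hdFrame = 1920.0 * 1080.0 * 4;         // ~7.9 MB
	const double uhdFrame = 3840.0 * 2160.0 * 4;        // 4x the HD size
	const double fps = 30.0;
	const double ramBytes = 256.0 * 1024 * 1024 * 1024; // 256 GB

	printf("HD: %.1f MB/frame, %.0f MB/s at %g fps\n",
		hdFrame / (1024 * 1024), hdFrame * fps / (1024 * 1024), fps);
	printf("4K: %.1f MB/frame, %.0f MB/s at %g fps\n",
		uhdFrame / (1024 * 1024), uhdFrame * fps / (1024 * 1024), fps);
	// 256 GB divided by the 4K cache rate: about 4.6 minutes of frames.
	printf("4K frames cached in 256 GB: ~%.1f minutes\n",
		ramBytes / (uhdFrame * fps) / 60);
	return 0;
}
```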

Sadly, I cannot give you a version for testing yet. I finished the video portion of the editor a couple of weeks ago (and I’m quite happy with how that works). Without the ability to edit audio, it’s kind of useless. As of this week, I’ve managed to get audio resampling working (i.e. conversion from source to target frame rate), and the next step is actual mixing of the sound clips.

At 2 hours development per week, it should be ready for release just before R1 time. :slight_smile:

With that much RAM to spare, he could just use a ramfs for the input/output files, so the disk bottleneck goes away.

At this moment there’s probably little that can interest you, mostly because it is a WIP. But I think at some point you will want lower-level access to the codecs than BMediaTrack provides. For example, you may want to implement a different seeking strategy. For the moment you can continue with the classes you use.
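
To illustrate what I mean: with the current public API you can only ask BMediaTrack to snap to the nearest keyframe; the strategy itself is fixed inside the extractor. A minimal sketch (the helper name here is just for illustration):

```cpp
// Seeking through the current public Media Kit API: BMediaTrack picks
// the nearest seekable frame itself, you cannot change the strategy.
#include <MediaTrack.h>

status_t
SeekNear(BMediaTrack* track, bigtime_t target)
{
	bigtime_t time = target;
	// The kit snaps to the closest earlier keyframe and writes the
	// actually reached time back into `time`.
	status_t ret = track->SeekToTime(&time, B_MEDIA_SEEK_CLOSEST_BACKWARD);
	if (ret != B_OK)
		return ret;
	// From here an editor would decode forward until reaching `target`.
	return B_OK;
}
```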

Are you planning to release the program closed source or open source?