Cortex

Hello! One of the things that made me fall in love with Haiku is that “Cortex”. I have never actually used it for anything, but I have always suspected it should rock. But how? Can anybody explain it to me? I imagine that a piece of software lacking one trick could obtain it from another piece of software whose ports are open for use in Cortex. Is it like I imagine, or even better?
Why is there so little information about this tool? I want to write an article; can you help me with information?

wouldn’t it be up to the nodes’ developers to handle such cases and document them?

cortex could use some more functionality, though, i agree – such as making an attempt to open nodes that aren’t already open when nodegraphs are reloaded, so it could be like loading patches in puredata; or something more like an interface with haiku’s scripting, so that loading a graph wouldn’t require actually opening cortex. modular collections of software could be distributed and loaded, and connections between each segment could be altered by users – so, if someone enjoys an editing software but hates the sequencer in it, they could patch in a different one.

near as i can tell, the one reason it never quite caught on is that proprietary software isn’t conducive to modification, but with as much as has been going on in the realms of creative coding and open media, it’s probably time.

i think of cortex as akin to puredata and nodebox, which are graphical representations of relationships between objects which are themselves written in c and python, respectively. here, we can write them in c and c++. by “we” i mean third party developers – there’s no operating system on earth where it’d be considered fair to expect the os devs themselves to build media software beyond basic media players.

Cortex is basically a front end to the BMediaRoster: it allows you to connect BMediaNodes, so what you can do is up to you. In theory, every ported program could be interfaced with the media_kit.
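To give a rough idea of what drawing a wire in Cortex amounts to underneath, here is a minimal sketch using the BMediaRoster. The function name and producerNode are hypothetical (you would have obtained the node elsewhere), and while these are the usual roster calls, treat the exact signatures as approximate rather than authoritative.

```cpp
#include <MediaRoster.h>

// Minimal sketch: wire a free output of some producer node into the system
// audio mixer, roughly what Cortex does when you drag a connection.
// "producerNode" is a hypothetical media_node obtained elsewhere.
status_t ConnectToMixer(const media_node& producerNode)
{
	BMediaRoster* roster = BMediaRoster::Roster();

	media_node mixerNode;
	status_t err = roster->GetAudioMixer(&mixerNode);
	if (err != B_OK)
		return err;

	// Ask both nodes which connection points are still free.
	media_output output;
	media_input input;
	int32 count = 0;
	err = roster->GetFreeOutputsFor(producerNode, &output, 1, &count,
		B_MEDIA_RAW_AUDIO);
	if (err != B_OK || count < 1)
		return B_ERROR;
	err = roster->GetFreeInputsFor(mixerNode, &input, 1, &count,
		B_MEDIA_RAW_AUDIO);
	if (err != B_OK || count < 1)
		return B_ERROR;

	// Let the two nodes negotiate a format and establish the 1:1 connection.
	media_format format = output.format;
	return roster->Connect(output.source, input.destination, &format,
		&output, &input);
}
```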

http://betips.net/1997/09/09/fun-with-cortex/
http://birdhouse.org/beos/byte/22-media_kit/

There is so little information simply because no one is writing about it. You can find various media add-ons on Haikuware… so go ahead and document it!

Cortex tries to give the BeOS/Haiku Media Roster a boxes-and-lines model like a modular synthesiser. You should be able to Google “modular synthesiser” to find pictures of real-world ones, and information about software that does this on more capable systems. Be’s Media “nodes” are the modules of the synth in the Cortex approach.

There are a few problems with that:

  • The Media Kit assumes 1:1 connections. By using simple analogue splitters a modular synthesiser easily handles N:M connections. Of course this can be worked around by introducing a great many dedicated 1:N or N:1 nodes, but Haiku does not supply such modules, and from a user experience perspective they would make the system unnecessarily clumsy anyway.
  • There are two philosophies about controls in a modular synthesiser. One possibility is that audio signal voltages should match control voltages, the output of an oscillator can be just as readily plugged into the control input for a phaser as to its audio input. The other possibility is to label them separately and deny such connections. Haiku makes no effort to enforce either philosophy, and the supplied components lack control inputs altogether.
  • More generally the Media Kit specifies very little about audio: an output which is, for example, stereo 44kHz 16-bit PCM may be rejected by an input which accepts only mono (a concrete media_format for such an output is sketched after this list). Again the Media Kit, or Cortex, could provide a collection of adaptors, but even if this was done (and it hasn't been) the result would be clumsy.
  • Very few nodes have ever been created for the Media Kit. A typical modular synthesiser would include a variety of modules, different types of oscillators, filters, and more esoteric features like boolean functions. The more variety the more possibilities for exploration.
  • Perhaps most importantly, modular approaches have always been a specialist niche. Not everybody wants to plug a bunch of modules together just to make noises. This has proved true for other (more capable and better supported) digital audio systems that offer this visualisation.
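To make the format point concrete, this is roughly how such a stereo 16-bit PCM output describes itself in Media Kit terms. The field names are those of media_raw_audio_format, recalled from memory rather than checked against the headers, so take it as a sketch:

```cpp
#include <MediaDefs.h>

// Sketch: a format description for a stereo, 44.1 kHz, 16-bit PCM output.
// An input advertising channel_count = 1 (mono only) cannot accept this
// directly; some adaptor node would have to sit in between.
media_format MakeStereoPcmFormat()
{
	media_format format;
	format.type = B_MEDIA_RAW_AUDIO;
	format.u.raw_audio = media_raw_audio_format::wildcard;
	format.u.raw_audio.frame_rate = 44100.0f;
	format.u.raw_audio.channel_count = 2;
	format.u.raw_audio.format = media_raw_audio_format::B_AUDIO_SHORT;
	format.u.raw_audio.byte_order = B_MEDIA_LITTLE_ENDIAN;
	return format;
}
```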

In practice today Cortex is probably most useful to Haiku users and developers as a diagnostic tool, to visualise the connections already made and settings already used by their software.

Reading your posts from time to time, I get the feeling you are talking about something you have never used. The mixer node is exactly the module you are talking about.

The AudioAdapter node does exactly that.

A specialist niche? So the AudioUnits and JACK client approaches are niches too?

Cortex is not a modular synthesizer but something a lot more generic: as said, an interface to the media_kit. The main problem with it is that it is unfinished work. The main media_kit defect is being too generic; I’m sure the BeOS devs were designing more specific objects to handle specific cases, but we know how that story ended. If you look closely at BMediaNodes you can see that they are intended more for inter-application communication.

Not true. The Media Kit offers no help for handling 1:N, N:1, or any other N:M connections, but it doesn’t forbid them either. The default mixer node is the best proof that it’s perfectly possible. What is missing in the kit is a set of support classes to make handling multiple (even better, dynamic) inputs and outputs easier to implement.
A Media Kit “connection” defines a link between an output source and an input destination, i.e. actual working ports, not a pair of “logical ports”.

[quote]Haiku does not supply such modules, and from a user experience perspective they would make the system unnecessarily clumsy anyway.[/quote]
Which doesn’t matter anyhow, because this is clearly a developer-level issue, not a user one. See above for why I agree that, from a developer’s point of view, the Media Kit is kind of too raw on this topic.

But the Media Kit is not a digital audio framework. It’s a multimedia framework.
That makes a lot of difference, in particular in which shortcuts your data-flow system can take and whether synchronicity is forced or not.

True. It was never intended to be more than that.
Actual multimedia applications do the media kit node graph assembly all by themselves, far better and in a far more error-proof way. Check the MediaPlayer source code.

with a bit of love, it wouldn’t be far different from graphical programming languages whose objects are written in c or python – think nodebox or quartz composer – and the community around those is kind of huge and amazing.

[quote=phoudin]
Not true. The Media Kit offers no help for handling 1:N, N:1, or any other N:M connections, but it doesn’t forbid them either. The default mixer node is the best proof that it’s perfectly possible.[/quote]

It’s disingenuous to assert that the Media Kit doesn’t “forbid” M:N connections. It does: the connections defined by the Media Kit are specifically 1:1. The Mixer “gets around” that by spontaneously creating new inputs for each connection and redirecting the connection to the new input, a crude hack that’s no replacement for actually being able to connect things together arbitrarily.

The other side of the mixer doesn’t have the same trick, so you can’t connect the output more than once. If you need the output of (anything) in more than one place, too bad, write yet another node to duplicate the signal.

The audio adapter smashes everything into the least common denominator. I’m talking about e.g. taking a stereo input and turning it into a left output and a right output, or a centre output and a difference output. Or vice versa. For which today you would need to write another node.
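The per-buffer arithmetic for such a node is trivial, which is rather the point; all the effort is in the Media Kit plumbing around it. A sketch of the inner loop, assuming interleaved float stereo frames purely for illustration:

```cpp
#include <cstddef>

// Split one interleaved stereo buffer into separate left/right outputs,
// and also into centre (mid) and difference (side) outputs.
// Assumes interleaved float frames: L0 R0 L1 R1 ...
void SplitStereo(const float* in, size_t frameCount,
	float* left, float* right, float* centre, float* difference)
{
	for (size_t i = 0; i < frameCount; i++) {
		float l = in[2 * i];
		float r = in[2 * i + 1];
		left[i] = l;
		right[i] = r;
		centre[i] = 0.5f * (l + r);        // mid
		difference[i] = 0.5f * (l - r);    // side
	}
}
```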

Well yes, but what I was really getting at is that graph tools (like Cortex) to manage them are a small niche. Most Jack users don’t spend much time fiddling with its graph tools, they may only run one Jack program most of the time so the graph would be rather empty.

Well, if all nodes reserved a new open input/output, it would be the same as supporting N:N connections.

There is nothing technical preventing that; it’s just not the standard right now, but nothing prevents it from becoming so in the future, especially after R1.

This is just a question of improving the AudioAdapter, but anyway I don’t understand why you are telling me about the limits I pointed out in a recent article; what’s the point? I was talking about how the problems I described have a good chance of being resolved without changing too much in the media_kit. I would add that there are also good points to keeping multi-channel audio interleaved: for example, it’s easier to keep the channels synchronized, and there’s a single stream of data instead of two.

I don’t see how the Media Kit could do more than what it already does with regard to multiple inputs. What would you expect it to do?

Maybe you want a variable number of inputs, all with the same purpose, like the mixer node.
Maybe you can handle just two, both with the same purpose (for example a ring modulator, which multiplies two sound waves), or maybe two very different ones (say, something that overlays subtitles on a video would get one text input and one video input).

The same thing happens on the output side: you may want all the consumers to get the same data, or you may want them to receive different parts of it. The system and the media kit don’t know, so they leave it up to you to manage this the way you need, in your own node.

From the point of view of a tool like Cortex? Make easy things easy and hard things possible. When you plug two sources into the same input, sum them; when you plug two destinations into the same output, copy the data. In the present Media Kit framework none of this is really practical, which is what I was getting at originally.

[quote=PulkoMandy]I don’t see how the Media Kit could do more than what it already does with regard to multiple inputs. What would you expect it to do?

Maybe you want a variable number of inputs, all with the same purpose, like the mixer node.
Maybe you can handle just two, both with the same purpose (for example a ring modulator, which multiplies two sound waves), or maybe two very different ones (say, something that overlays subtitles on a video would get one text input and one video input).

The same thing happens on the output side: you may want all the consumers to get the same data, or you may want them to receive different parts of it. The system and the media kit don’t know, so they leave it up to you to manage this the way you need, in your own node.[/quote]

I may want a mono output connected to two different nodes with mono inputs, for example. Or I may want two outputs mixed into one input automatically; I don’t think the system should force you to use mixer nodes for that. Additionally, one may want to split a stereo input, process the left channel with one node and the right with another, and then have the result returned to the audio card as a stereo connection. Those are facts, limits of the media_kit. Actually, it’s not clear to me which solution should be taken to overcome those limitations, but I can imagine a more specialized set of nodes making it easier for the programmer to handle basic situations. Alternatively, the concepts of media_input and media_output might be extended a bit to handle such situations.

The main problem is that, as said, the media_kit is very generic, but video and audio have different needs. To take the previous example, connecting two video outputs to one input should not behave the same way as in the audio case. That’s why I was thinking of more specialized nodes; it would still leave the possibility of inheriting a node from BBufferProducer/BBufferConsumer, but it would make it easier for plugin developers, for example, to get more complex but common behaviours…

If they all reserved one such input or output for each distinct “real” input or output of the node, and then they all had a bunch of logic to pre- or post-process the data for those extra inputs and outputs, and there was an API to determine which “real” input or output was associated with the ports things actually seem to be plugged into, and so on - and if that was all documented? Sure. But they don’t.

You have to play the hand you’re dealt, not the hand you wish you had. Cortex is a tool for BeOS and for the still far from complete Haiku R1, which has this Media Kit and not the improved Media Kit you might wish for in some hypothetical future version of Haiku.

[quote=NoHaikuForMe]
You have to play the hand you’re dealt, not the hand you wish you had. Cortex is a tool for BeOS and for the still far from complete Haiku R1, which has this Media Kit and not the improved Media Kit you might wish for in some hypothetical future version of Haiku.[/quote]

What makes you think I’m not playing?

does any of that have anything to do with cortex, though? to a user, if nodes are there that behave in a certain way, that’s the functionality available to them regardless of how that functionality was delivered. programming nodes and using cortex are not the same discussion – anything that’s done to the media kit and anything that’s done with nodes will just have its behavior reflected in cortex without any change to cortex itself; it doesn’t seem to care in the least.

as i’ve written elsewhere a couple of times, it could do with the ability to execute graphs as one would a script (including initializing the applications included in the graph’s nodes) and some ide integration allowing the source of each object to be displayed and edited, plus help files detailing the use of each object. it could probably also do with some graphic differentiation between the types of connections.

But what do you mean by “sum them”? That’s a bit simplistic! Do you mean buffer1 + buffer2? That doesn’t make sense.

You can’t mix sounds the same way you mix videos or MIDI events. You have to tell the media kit how to mix the two buffers, and that’s the role of the node.

Maybe JACK doesn’t show you this node behind the UI, but something has to do the job.

[quote]But what do you mean by “sum them”? That’s a bit simplistic! Do you mean buffer1 + buffer2? That doesn’t make sense.

You can’t mix sounds the same way you mix videos or MIDI events. You have to tell the media kit how to mix the two buffers, and that’s the role of the node.

Maybe JACK doesn’t show you this node behind the UI, but something has to do the job.[/quote]

Correct. The job is done by a much better “routing” which automatically adds nodes and filters, as far as I know (my knowledge is a little bit outdated), e.g. in GStreamer, in DirectShow, and so on.

So there are definitely big possibilities to improve the media kit, and Cortex of course.

But from a general point of view, the media kit is well prepared. A node can accept not only multiple inputs but also different timers. Connections accept different types of media and so on; e.g. it is possible to write nodes which process text data in a stream, or convert audio to video, and so on.
Just check, for example, [ f3c ] FMedia: there is a good toolbox and some good nodes to play with.

Also, the media kit API is the worst I ever worked with on BeOS… The downsides which were explained are not design flaws, as our personal trained troll NoHaikuForMe made them look. :smiley: But as he would point out… “the plugins are missing”, and as I said, an intelligent routing routine is missing. That’s true… but the foundation is there… I know of some finished nodes which never made it into the public, and that’s sad, because they could have helped a lot.

But now I am looking forward to seeing a MediaNode developed by NoHaikuForMe. Maybe then we’ll get some of the more technically correct trolling which we normally expect from him :-). Maybe even with some links to the API :-D. Looking forward to this :slight_smile:

In PCM audio, for each frame you simply add together the relevant sample from all the inputs. If you’re using a signed integer representation you should use saturating arithmetic, like a DSP, not the wrapping arithmetic provided by default in most CPUs. For floating point it’s unimportant.
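As a sketch, for signed 16-bit samples (my own illustration, not lifted from any particular mixer; for float you would just drop the clamping):

```cpp
#include <cstddef>
#include <cstdint>

// Mix N input buffers of signed 16-bit PCM into one output buffer,
// clamping (saturating) instead of letting the sum wrap around.
void MixPcm16(const int16_t* const* inputs, size_t inputCount,
	int16_t* out, size_t sampleCount)
{
	for (size_t i = 0; i < sampleCount; i++) {
		int32_t sum = 0;                       // wide accumulator, cannot overflow here
		for (size_t n = 0; n < inputCount; n++)
			sum += inputs[n][i];
		if (sum > INT16_MAX)                   // saturate instead of wrapping
			sum = INT16_MAX;
		else if (sum < INT16_MIN)
			sum = INT16_MIN;
		out[i] = (int16_t)sum;
	}
}
```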

Over the history of personal computing there have been many non-audio people who thought like you, they came up with some very amusing algorithms in this area, all wrong. I particularly like the solution chosen for the Amiga in the 1990s where if you play some music and then also play silence, the music gets quieter. Wrong, but they were trying so very hard.

You can read the JACK source code for yourself, it’s a simple addition and it’s performed automatically by JACK itself. JACK made a bunch of design choices which make this (and other important things) easier to get right, but the fundamental concept isn’t hard.

I don’t think there’s a real node behind it; it’s JACK doing the mixing during graph execution.

All that should be done at some point; there will also be a need to merge the midi_kit functionality into the media_kit. Unfortunately, the same problems with multiple connections will appear when adding MIDI support, so I think the question of how the media_kit will handle multiple connections should be resolved first.

[quote=Paradoxon]
But now I am looking forward to seeing a MediaNode developed by NoHaikuForMe. Maybe then we’ll get some of the more technically correct trolling which we normally expect from him :-). Maybe even with some links to the API :-D. Looking forward to this :-)[/quote]

++1

Our mixer is not just blindly adding streams together. What if they have different sample rates? The mixer node takes care of this; it will also resample (there are two resampling algorithms implemented, with different levels of quality).
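For illustration only, here is a minimal linear-interpolation resampler of the kind a lower-quality path might use; this is a generic sketch, not the code actually in our mixer:

```cpp
#include <cstddef>
#include <vector>

// Generic sketch of linear-interpolation resampling for one mono float
// channel: converts inFrames frames at inRate into a buffer at outRate.
std::vector<float> ResampleLinear(const float* in, size_t inFrames,
	float inRate, float outRate)
{
	size_t outFrames = (size_t)(inFrames * (double)outRate / inRate);
	std::vector<float> out(outFrames);
	for (size_t i = 0; i < outFrames; i++) {
		double pos = i * (double)inRate / outRate;   // fractional read position
		size_t j = (size_t)pos;
		double frac = pos - j;
		float a = in[j];
		float b = (j + 1 < inFrames) ? in[j + 1] : a;
		out[i] = (float)(a + (b - a) * frac);        // interpolate between neighbours
	}
	return out;
}
```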

You are ignoring what I said: what if you want to implement a ring modulator? Getting the sum of two signals, when what you wanted is to multiply them, is utterly useless. So your suggestion would restrict what the media kit can do. And I’m still talking about sound streams here.
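The point is easy to see in code: the per-sample operation of a ring modulator is a multiplication, not an addition, so an implicit “sum the inputs” policy cannot express it. A minimal sketch with float samples:

```cpp
#include <cstddef>

// Ring modulation: multiply the two input signals sample by sample.
// An implicit "sum all inputs" rule could not express this.
void RingModulate(const float* a, const float* b, float* out, size_t count)
{
	for (size_t i = 0; i < count; i++)
		out[i] = a[i] * b[i];
}
```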

The same applies on the output side: what if the nodes you plugged in as outputs don’t expect the same format? There’s negotiation between nodes to agree on a common format. As long as this happens between pairs of nodes, it’s easy to find a solution. When it happens with multiple nodes on each end, you really need some kind of mixer or splitter node in the middle to handle the format conversions and make sure every node gets input in a format it can process.
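For reference, the pairwise negotiation goes through hooks such as BBufferConsumer::AcceptFormat(), where the consumer fills in wildcards with what it can handle or rejects the proposal. A rough sketch of such a policy, written as a standalone function for a hypothetical stereo-float-only consumer (the function name and the policy are my own illustration):

```cpp
#include <MediaDefs.h>

// Sketch of the policy a consumer's AcceptFormat() hook might apply during
// negotiation: fill wildcards in with what the node can handle, reject the rest.
// (Hypothetical stereo, float-only consumer, for illustration.)
status_t AcceptStereoFloatOnly(media_format* format)
{
	if (format->type == B_MEDIA_UNKNOWN_TYPE)
		format->type = B_MEDIA_RAW_AUDIO;
	if (format->type != B_MEDIA_RAW_AUDIO)
		return B_MEDIA_BAD_FORMAT;

	media_raw_audio_format& raw = format->u.raw_audio;
	if (raw.format == media_raw_audio_format::wildcard.format)
		raw.format = media_raw_audio_format::B_AUDIO_FLOAT;   // pick a concrete sample format
	else if (raw.format != media_raw_audio_format::B_AUDIO_FLOAT)
		return B_MEDIA_BAD_FORMAT;

	if (raw.channel_count == media_raw_audio_format::wildcard.channel_count)
		raw.channel_count = 2;                                // this node only does stereo
	else if (raw.channel_count != 2)
		return B_MEDIA_BAD_FORMAT;                            // ...so anything else is rejected

	return B_OK;
}
```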

Note this doesn’t increase latency more than what you suggest: in your scheme “something” would still have to do the mixing and copying. Why not use nodes for that?