Total Haiku NOOB here, but as I look at Haiku’s Media Kit https://www.haiku-os.org/legacy-docs/bebook/TheMediaKit_Overview_Introduction.html
it seems to me that many of the primitives needed for streaming media are already included in the OS Media Kit, and that chasing compatibility with Linux and ALSA is pointless and actually disadvantageous (the Media Kit won’t be on Linux anytime soon, LOL).
Trying to support any platform other than Haiku OS basically throws away the advantages of the Media Kit, and it doesn’t drive platform adoption either.
I am NOT a NOOB about using DAWs, however: I toured for over a decade living 100% off producing electronic dance music on various DAWs, and I have over 30 years of experience with traditional studio work (recording instruments, editing, mixing, mastering).
I currently make a living programming web server applications on Linux, and I’m aware of the advantages the platform provides for that application. My principal language these days produces executable binaries and has an HTTP server in the standard library, so I “get it”.
ALSA, however, is an impediment: device trees are one strategy for setting up the channel mapping that users need for multi-channel audio interfaces, and ALSA cares not a whit about any of it.
Linux’s diversity, layering, and modular-software ideas are also not useful here, given the interfaces and state machines its subsystems provide. (I get that modular “glue code scripting” of OS primitives would be cool, but that’s not what’s available on our favorite SERVER (mainframe) OS.)
Most audio developers who have worked on all the platforms will agree that CoreAudio is basically the most pleasurable to work with. In practice, you use RtAudio or PortAudio or whatever, and you can just open a sound card driver, register your callback function, and call it a day; but you certainly can’t use any OS-provided datatypes, nor know anything about the data, until you have defined your own and piped everything through them (pre-buffering, recording, any memory-mapped spaces where you do outboard processing the way UAD does, etc.).
Reaper, Ardour, Cubase, Ableton, and other cross-platform apps basically let you select an audio driver and open it for duplex communication; they work by having all the timing, buffering, etc. defined inside the application, and they all have their own libraries for dealing with this.
Logic and other Mac-only software often take advantage of CoreAudio data types inside the program, and if you look at the Media Kit, you can basically use glue logic to make some primitive multimedia applications directly with the OS…
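For comparison, here is what that OS-level glue looks like with the Media Kit’s `BSoundPlayer`, going by the legacy BeBook docs linked above. This is an untested sketch; the header path and exact signatures should be checked against current Haiku headers. Note that the OS hands the callback its own `media_raw_audio_format` descriptor, so the app doesn’t have to define its own plumbing first:

```cpp
// Haiku-only sketch (BeBook-style API, unverified against current headers).
#include <OS.h>           // snooze()
#include <SoundPlayer.h>  // BSoundPlayer, media_raw_audio_format
#include <cstring>

// The OS passes its own format descriptor straight into the callback.
static void FillBuffer(void* cookie, void* buffer, size_t size,
                       const media_raw_audio_format& format)
{
    // Write `size` bytes of audio in the format the OS negotiated.
    memset(buffer, 0, size);  // silence, for the sketch
}

int main()
{
    BSoundPlayer player("sketch", FillBuffer);
    if (player.InitCheck() != B_OK)
        return 1;
    player.Start();
    player.SetHasData(true);
    snooze(1000000);  // let one second of (silent) audio play
    player.Stop();
    return 0;
}
```

That is the kind of OS-provided datatype support you simply don’t get from the cross-platform wrappers.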
I won’t have much time to argue with folks here, etc., but I will report back later with code. My hope is to do a from-scratch DAW, and not by being arduous or torturing myself trying to replicate Pro Tools or Cubase functionality that users don’t like anyway, functionality that is just the historical result of trying to be everything to everyone and continually bolting features onto software.
Ableton threw away most of those obsolete rules and simplified things, and things “just work” the way you expect them to the first time you touch it, no owner’s manual needed.
I intend to consider the nature of the Media Kit and write a Haiku-OS-ONLY DAW that will never work on Linux, Mac, or PC, because any serious music person cares about the DAW as a holistic tool they can achieve songs with, and will be OK installing a partition or building a dedicated tower for this task.
A DAW is a single-purpose computer, in a way, and the less unrelated gack in the way of this, the better.
Oh, don’t expect anything from me soon, LOL… I’ll talk to you in a couple of years… LOL…