Tight-as-tight audio engine for a music-making app

Hey

I’m writing a little prog called ‘Musical Audio Editor’ (for now). It’ll basically be a waveform editor that plays all open and activated files in sync, with grid-based editing and other things to make it more ‘musical’. I’m trying to construct it so it can grow into something much larger, but it’ll probably never amount to anything; I’m not really a programmer anyway. My main concern is tightness of synchronisation: all loops should be retriggered smack on the one (or wherever), in time with one another, and I want to be able to play the same loop on top of itself with no phasing issues.

The question is this: instead of looping, I want to do retriggering. For each open file you’d set ‘retrigger me every x bars’, and then the SampleController object would interact with a timing source to play itself at the right time whenever it’s activated. That’s my idea of the best implementation, since it would eliminate the need for all loops to be exactly x bars long; but once again I’m an utterly useless newbie, so it’s probably a stupid idea.
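In case it makes the idea clearer, here’s a rough sketch of what I mean. Every name in it (SampleController, Mix, framesPerBar) is just made up by me, and I’m assuming the timing source is simply a running frame counter that the mixer passes in:

#include <cstdint>
#include <vector>

// One open file. Retriggers every 'retriggerBars' bars while activated.
struct SampleController {
    std::vector<float> frames;   // decoded mono sample data
    int     retriggerBars = 1;   // "retrigger me every x bars"
    bool    activated     = false;
    int64_t playhead      = -1;  // index into 'frames'; -1 = silent

    // Mix this sample into 'out'. 'blockStart' is the global frame
    // count at the start of the buffer; 'framesPerBar' comes from the
    // tempo (sampleRate * 60 / bpm * beatsPerBar).
    void Mix(float* out, int32_t blockFrames, int64_t blockStart,
             int64_t framesPerBar)
    {
        const int64_t period = framesPerBar * retriggerBars;
        for (int32_t i = 0; i < blockFrames; i++) {
            // Restart smack on the one: whenever the global frame
            // counter crosses a multiple of the retrigger period.
            if (activated && (blockStart + i) % period == 0)
                playhead = 0;
            if (playhead >= 0 && playhead < (int64_t)frames.size())
                out[i] += frames[playhead++];
        }
    }
};

The nice part (if it works at all) is that a sample wouldn’t have to be exactly x bars long: if it’s shorter it just goes silent until the next trigger, and if it’s longer the retrigger cuts it off.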

Anyway: should I use the master system time source, or look into implementing some sort of sound class? Or maybe just drop the whole thing and stick to my coursework?
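My half-formed guess is that the audio stream itself is the tightest clock available: count frames inside whatever fill-buffer callback the sound class gives you (BSoundPlayer in the Media Kit, if I’m reading the BeBook right) rather than polling the system time between buffers. Something like this, reusing the sketch above (again, the callback signature here is made up; the real one depends on the sound class):

#include <cstdint>
#include <vector>

static int64_t gFrameCount = 0;  // frames played since start

// Hypothetical fill-buffer callback: mixes every sample against
// one shared frame counter, so nothing can drift.
void FillBuffer(float* out, int32_t blockFrames, int64_t framesPerBar,
                std::vector<SampleController>& samples)
{
    for (int32_t i = 0; i < blockFrames; i++)
        out[i] = 0.0f;                 // start from silence
    for (auto& s : samples)
        s.Mix(out, blockFrames, gFrameCount, framesPerBar);
    gFrameCount += blockFrames;        // advance exactly one buffer
}

Since every SampleController checks the same frame counter, two copies of the same loop would restart on exactly the same frame, which should kill the phasing problem.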

I’m reading the BeBook, but it’s all a little difficult. I tend to post these questions just before I come up with a solution myself, but if anyone’s got anything intelligent to say about this, please come forward.

-Paws