On 24 Jul 2009, at 21:21, Kassen wrote:
So in this vein, I posted a POSIX-threaded sample illustrating how to reuse threads when searching files in a small 'grep'-like program. One can use any number of threads for any number of files. Each thread opens a file and searches it. When a thread is finished, and there are more files to search, it continues with the next one.
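A minimal sketch of that thread-reuse pattern (hypothetical names and file list; the actual posted sample may differ): a fixed pool of POSIX threads pulls file indices from a shared counter guarded by a mutex, so each thread simply moves on to the next unclaimed file when it finishes.

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* Shared work list: each worker claims the index of the next unsearched file. */
static const char *files[] = { "a.txt", "b.txt", "c.txt" };  /* example names */
static const int n_files = 3;
static int next_file = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void search_file(const char *path, const char *pattern) {
    FILE *f = fopen(path, "r");
    if (!f) return;                       /* skip unreadable files */
    char line[4096];
    while (fgets(line, sizeof line, f))
        if (strstr(line, pattern))
            printf("%s: %s", path, line);
    fclose(f);
}

/* Worker: reused for as many files as remain, then exits. */
static void *worker(void *arg) {
    const char *pattern = arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int i = (next_file < n_files) ? next_file++ : -1;
        pthread_mutex_unlock(&lock);
        if (i < 0) break;                 /* no files left */
        search_file(files[i], pattern);
    }
    return NULL;
}

/* Search all files with n_threads reusable threads; returns files claimed. */
static int run_search(int n_threads, const char *pattern) {
    pthread_t tid[16];
    if (n_threads > 16) n_threads = 16;
    next_file = 0;
    for (int t = 0; t < n_threads; t++)
        pthread_create(&tid[t], NULL, worker, (void *)pattern);
    for (int t = 0; t < n_threads; t++)
        pthread_join(tid[t], NULL);
    return next_file;
}
```

Any number of threads works against any number of files, since a thread that runs out of work simply exits.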
That example is set in an entirely different context. For one thing, such searches aren't a realtime application; for another, the results of any one search don't depend on the outcome of any other and won't be affected by them. Such things can be parallelised very well.
This is very different from a ChucK VM where any of the parts may interact with and depend on any number of other things.
What I am saying is that I think this independence is the requirement for parallelization. So these parts must be identified in the ChucK code, which is now mostly written sequentially. This is a hard thing to do, of course. But then such independent parts can be computed in parallel or sequentially as needed, and scheduling them on different CPUs is not hard.
So if this can be done with sample times as snapshots, all that is needed is sufficient CPU power to complete all computations before the next sample is presented.
You seem to be describing a situation where values take one sample to travel through each UGen in the graph.
The picture I have in my mind is a bit more complicated. Each sample time is a deadline at which a value must be reported. Between those deadlines, one may have a complicated directed graph of events and computations. Arrows going in parallel can be parallelized and computed by reusing ordinary threads. Arrows that meet or branch off must be synchronized, though not in real-time, except for meeting the next sample-time deadline. And this is the best one can do.
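As a toy sketch of that picture (hypothetical, not ChucK's actual engine): two independent branches of the graph are computed on separate threads, and their results only meet at the merge node, which is the one point that must be ready by the sample deadline.

```c
#include <pthread.h>

/* Two independent branches of a toy signal graph, for one sample.
 * The bodies are stand-ins for real UGen computations. */
static double branch_osc(double phase)  { return phase * 0.5; }
static double branch_noise(double seed) { return seed * 0.25; }

struct job { double (*fn)(double); double in, out; };

static void *run_job(void *arg) {
    struct job *j = arg;
    j->out = j->fn(j->in);
    return NULL;
}

/* Evaluate both branches in parallel, then merge: only the merge
 * (the mix feeding the output) has to meet the per-sample deadline. */
static double compute_sample(double phase, double seed) {
    struct job a = { branch_osc,   phase, 0.0 };
    struct job b = { branch_noise, seed,  0.0 };
    pthread_t ta, tb;
    pthread_create(&ta, NULL, run_job, &a);   /* arrows in parallel */
    pthread_create(&tb, NULL, run_job, &b);
    pthread_join(ta, NULL);                   /* arrows meet here */
    pthread_join(tb, NULL);
    return a.out + b.out;                     /* merge node / mix */
}
```

A real implementation would of course reuse persistent worker threads rather than create two per sample; the point here is just the branch/merge shape of the graph between deadlines.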
That is indeed a scenario that can be parallelised well, but it's quite different from how it currently works, where the output of a single UGen will start affecting the value at the DAC in the very same sample, thanks to our "pull through" model. One of the big advantages of the model we are using now over, say, block processing is that we can have predictable tuned feedback loops.
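A rough sketch of that difference (toy code, not ChucK's actual UGen API): in a per-sample pull model, the DAC's tick pulls its input's tick within the same sample, so a feedback path can be exactly one sample long; a block-processing scheme would instead delay the fed-back signal by a whole block.

```c
/* Toy pull-through model: dac_tick() pulls the UGen in the same sample. */
static double feedback = 0.0;           /* last output, fed back */

static double ugen_tick(double input)   /* one UGen: adds damped feedback */
{
    return input + 0.5 * feedback;
}

static double dac_tick(double input)    /* the DAC "pulls" the UGen */
{
    double out = ugen_tick(input);      /* reaches the DAC this sample */
    feedback = out;                     /* loop length: exactly 1 sample */
    return out;
}
```

Feeding in a single impulse shows the one-sample loop: the first tick returns the impulse itself, and each subsequent tick returns the previous output halved, which is the kind of precisely tuned feedback a block-based scheme cannot express at this granularity.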
Perhaps the current ChucK model isn't well adapted for such a thing.

Hans