[chuck-users] GC vs real-time (was Getting Started with ChucK)

Kassen signal.automatique at gmail.com
Fri Jul 24 15:21:17 EDT 2009


Hans;


So in this vein, I posted a POSIX-threaded sample illustrating how to reuse
> threads when searching files in small 'grep'-like program. One can use any
> number of threads for any number of files. Each thread opens up a file and
> searches it. When one thread is finished, if there are more files to search,
> it continues to the next.
>
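For what it's worth, the worker-pool search Hans describes might look
roughly like this (a hypothetical Python sketch, not his actual POSIX code):
each thread pulls the next file off a shared queue, searches it, and moves
on. Because the files are independent, this parallelises cleanly:

```python
# Hypothetical sketch of a thread-pool "grep": any number of threads
# for any number of files; a finished thread takes the next file.
import queue
import threading

def search_files(paths, pattern, num_threads=4):
    """Return {path: [matching lines]} using a pool of worker threads."""
    work = queue.Queue()
    for p in paths:
        work.put(p)
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                path = work.get_nowait()
            except queue.Empty:
                return  # no more files to search; this thread is done
            with open(path) as f:
                hits = [line.rstrip("\n") for line in f if pattern in line]
            with lock:
                results[path] = hits

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Note that no worker ever has to wait on another worker's result; that
independence is exactly what a ChucK UGen graph lacks.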

That example is set in an entirely different context. For one thing, such
searches aren't a real-time application; for another, the result of any one
search doesn't depend on the outcome of any other and won't be affected by
them. Such things can be parallelised very well.

This is very different from a ChucK VM where any of the parts may interact
with and depend on any number of other things.


>
> So if this can be done with sample times as snapshots, all that is needed
> is sufficient CPU power to complete all computations before the next sample
> is presented.
>

You seem to be describing a situation where values travel through the UGen
graph at a rate of one UGen per sample. That is indeed a scenario that can
be parallelised well, but it's quite different from how things currently
work: thanks to our "pull-through" model, the output of a single UGen can
affect the value at the DAC within the very same sample. One of the big
advantages of the model we are using now over, say, block processing is
that we can have predictable, tuned feedback loops.
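To make that concrete, here is a toy Python sketch (my illustration, not
ChucK's internals) of per-sample pull evaluation: the "dac" pulls one
sample at a time from the end of the chain, so an input's output reaches
it within the same sample, and a one-sample feedback loop (a comb filter
here) stays sample-accurate:

```python
# Toy per-sample "pull" model: the dac ticks the end of the chain,
# which recursively ticks its input for the current sample.

class Impulse:
    """Emits 1.0 on the first tick, then silence."""
    def __init__(self):
        self._fired = False
    def tick(self):
        if not self._fired:
            self._fired = True
            return 1.0
        return 0.0

class Comb:
    """y[n] = x[n] + fb * y[n-1]: feedback over exactly one sample."""
    def __init__(self, src, fb=0.5):
        self.src, self.fb, self.last = src, fb, 0.0
    def tick(self):
        # pull the current input sample, add last tick's own output
        self.last = self.src.tick() + self.fb * self.last
        return self.last

def run_dac(chain, nsamples):
    """The "dac": pull one sample per tick from the chain."""
    return [chain.tick() for _ in range(nsamples)]

out = run_dac(Comb(Impulse(), fb=0.5), 4)
# impulse response of the one-sample comb: 1.0, 0.5, 0.25, 0.125
```

With block processing the feedback path would be a whole block long
instead of a single sample, which is why tuned feedback is harder there.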

Is this some kind of resource starvation setup? Like in:
>  http://en.wikipedia.org/wiki/Dining_philosophers_problem
>  http://en.wikipedia.org/wiki/Resource_starvation
> Then those things cannot be prevented as per design of the computer
> language, just as one cannot prevent non-termination (infinite loops) being
> programmed.
>

Indeed. I meant it to illustrate that multi-core is no real extension of
Moore's law: not everything can be parallelised.

Another example would be calculating the first million numbers of the
Fibonacci series; that won't finish any sooner just by having more cores,
because each number depends on the ones before it.
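A minimal sketch of that serial dependency (hypothetical Python, purely
for illustration):

```python
# Each value depends on the two before it, so this loop is inherently
# serial: extra cores cannot overlap the steps.
def fib_sequence(n):
    """Return the first n Fibonacci numbers."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

fib_sequence(8)  # -> [0, 1, 1, 2, 3, 5, 8, 13]
```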

Yours,
Kas.

