[chuck-users] now v.s. actual time

Tom Lieber lieber at princeton.edu
Thu Oct 9 20:55:53 EDT 2008


On Thu, Oct 9, 2008 at 5:04 PM, Kassen <signal.automatique at gmail.com> wrote:
> In this forum topic http://electro-music.com/forum/viewtopic.php?t=29358
> there is some talk about the time that an operation takes as compared to
> how it affects "now". As I said there, we may want to calculate a lot of
> numbers before starting synthesis (say, in the case of an algorithmic
> score) and only start synthesis after we have those numbers. Right now
> you can't. You can tell ChucK to calculate those numbers, advance time by
> some amount estimated to be about as long as the CPU takes to do that,
> and only start synthesis (by connecting to the dac) after that, but this
> depends on a good estimate, which will of course differ with the CPU we
> happen to be on.
>
> Aside from this question there is the matter of how long an operation
> takes in a "strongly timed" language. In the past I have (unsuccessfully)
> tried to benchmark different versions of an operation using "now". We
> have a lot of ways of dealing with time, and there are probably few
> languages that deal with time in as much detail as ChucK does, but we
> have no way at all of dealing with time as registered by the clock next
> to the programmer.
>
> There is a paradox here: I'd like to be able to advance time by exactly
> as much time as the CPU took since the last advance of time; the problem
> is that determining this might itself take quite a few CPU cycles, while
> this is exactly the sort of operation we would use when trying to do
> things as quickly as possible. Another issue is that we can't yield to
> the UGen graph if need be. A loop of just repeated yielding will stop
> the UGen calculations.
>
> I have no solutions to this but thought this forum topic at least gave
> (yielded?) a new look at the question.

With code like:

  while( notdone ) {
    dowork();          // do one chunk of the heavy computation
    pause_dur => now;  // yield to the UGen graph before the next chunk
  }

tweaking pause_dur until the computationally heavy code runs without the
audio skipping amounts to benchmarking it to figure out how long it takes
to execute, and it's safe.
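
For what it's worth, here's a self-contained sketch of that pattern; the
chunk count, the 5::ms pause, and the dummy doWork() body are placeholders
you would tune for your own computation:

  // placeholder for one chunk of the heavy computation
  fun void doWork( int chunk )
  {
      float x;
      for( 0 => int i; i < 10000; i++ )
          Math.sin( chunk + i * 0.001 ) +=> x;
  }

  5::ms => dur pause_dur;  // tweak until the audio stops skipping

  for( 0 => int chunk; chunk < 100; chunk++ )
  {
      doWork( chunk );
      pause_dur => now;    // hand control back to the UGen graph
  }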

I don't think system calls, even ones like reading files, should take
a shred out of the shreduler until they finish, though I admit that's
personal preference. Deterministic parallel code like ChucK's is hard
to come by.

But code like this:

  me =< now; // step outside time
  doallwork();
  me => now; // step back

just can't happen.

Allowing shreds to step outside time would solve all the problems
brought up in the thread, but it would weaken some of the data
consistency guarantees ChucK's cooperative multishredding gives you.
Even if ChucK could interleave execution of your Neo shred with that
of those still enslaved by the virtual machines, Neo would have to be
frequently interrupted in the middle of whatever he was doing to
prevent audio underflows. And since he doesn't know when he'll be
interrupted, his data won't necessarily be consistent.

If it were a local variable in a for-loop, fine; but what if it were a
variable global to the file, or a static variable in a class used by
other shreds? ChucK code is written with the assumption that if it
doesn't let time pass, no data can change beneath its feet.
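
To make that concrete, here's a small sketch of the guarantee cooperative
shredding gives you today (the names and the 100::ms period are just for
illustration): between two advances of time, a shred can read and write
shared state with no other shred getting in between.

  // file-scope state shared by every shred sporked below
  0 => int steps;

  fun void worker()
  {
      while( true )
      {
          // read-modify-write without advancing time: no other shred
          // can run in between, so this is effectively atomic
          steps + 1 => steps;
          100::ms => now;  // only here can another shred touch 'steps'
      }
  }

  spork ~ worker();
  spork ~ worker();
  1::second => now;
  <<< "steps after one second:", steps >>>;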

My recommendation: use multiple ChucKs with message-passing (OSC,
whatever). I hear Prof. Cook does it, and he seems like an all right
guy.
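
As a rough sketch of that split (the port 6449, the /score/note address,
and the note/gain payload are all arbitrary choices; the two files are
pasted into one block here), one chuck process computes and sends while
the other sits on the dac and plays:

  // sender.ck -- run in the "compute" ChucK
  OscSend xmit;
  xmit.setHost( "localhost", 6449 );

  for( 0 => int i; i < 16; i++ )
  {
      // ...heavy computation of the next note would go here...
      xmit.startMsg( "/score/note, i f" );
      60 + i => xmit.addInt;
      0.5 => xmit.addFloat;
      250::ms => now;
  }

  // player.ck -- run in the "synthesis" ChucK
  OscRecv recv;
  6449 => recv.port;
  recv.listen();
  recv.event( "/score/note, i f" ) @=> OscEvent oe;

  SinOsc s => dac;
  while( true )
  {
      oe => now;              // wait for the next bundle of messages
      while( oe.nextMsg() != 0 )
      {
          oe.getInt() => int note;
          oe.getFloat() => float gain;
          Std.mtof( note ) => s.freq;
          gain => s.gain;
      }
  }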

---

By the way, I'm sorry, but I still can't figure out what this would be
expected to do:

  realnow => now;

-- 
Tom Lieber
http://AllTom.com/

