[chuck-users] now v.s. actual time

Tom Lieber lieber at princeton.edu
Thu Oct 9 22:09:02 EDT 2008


On Thu, Oct 9, 2008 at 9:31 PM, Stephen Sinclair <radarsat1 at gmail.com> wrote:
> On Thu, Oct 9, 2008 at 8:55 PM, Tom Lieber <lieber at princeton.edu> wrote:
>> With code like:
>>
>>  while(notdone) {
>>    dowork();
>>    pause_dur => now;
>>  }
>>
>> tweaking pause_dur until the computationally heavy code runs without
>> skipping is the same as benchmarking it to figure out how long it
>> takes to execute, and it's safe.
>>
>> I don't think system calls, even ones like reading files, should take
>> a shred out of the shreduler until they finish, though I admit that's
>> personal preference. Deterministic parallel code like ChucK's is hard
>> to come by.
>>
>> But code like this:
>>
>>  me =< now; // step outside time
>>  doallwork();
>>  me => now; // step back
>>
>> just can't happen.
>>
>> Allowing shreds to step outside time would solve all the problems
>> brought up in the thread, but it would weaken some of the data
>> consistency guarantees ChucK's cooperative multishredding gives you.
>> Even if ChucK could interleave execution of your Neo shred with that
>> of those still enslaved by the virtual machines, Neo would have to be
>> frequently interrupted in the middle of whatever he was doing to
>> prevent audio underflows. And since he doesn't know when he'll be
>> interrupted, his data won't necessarily be consistent.
>
> Hm, are you talking about introducing "real" parallelism? (i.e., pthreads)
> I didn't really mean to imply that in anything I wrote, at least.
> I agree that doing it properly would be difficult and error prone.
> Probably the "out of time" shred would need to be forced not to access
> any common objects, at the very least, and communicate by other means
> than global variables.  (this would be equivalent to OSC, as you
> mention at the bottom.)

No, I am describing what it means for a single shred to execute "in
the background, for as long as it takes to finish." This is moving
from cooperative scheduling (shreds yielding time when they are ready)
to preemptive scheduling (shreds forced to yield time at the
shreduler's convenience), which is where concurrent modification
happens.

Not allowing such shreds to access objects outside their own scope
would solve the concurrency problems, though that seems extremely
limiting.
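To make the consistency guarantee concrete, here's a small illustrative sketch (mine, not from the thread): under cooperative shreduling, a shred runs uninterrupted between its explicit advances of "now", so a read-modify-write on shared state needs no locks. The counter and durations are made up for illustration.

```
// sketch: two sporked shreds incrementing a shared counter.
// because a shred only yields at "=> now", the read-modify-write
// in bump() cannot be interleaved with another shred's.

0 => int count;

fun void bump( dur pause )
{
    while( true )
    {
        // safe: no other shred runs until we yield below
        count + 1 => count;
        // explicit yield point: hand control back to the shreduler
        pause => now;
    }
}

spork ~ bump( 10::ms );
spork ~ bump( 25::ms );

// let the children run for a while, then inspect the counter
100::ms => now;
<<< "count:", count >>>;
```

Under preemptive scheduling, the increment could be interrupted between the read and the write, and the same code would race.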

>> By the way, I'm sorry, but I still can't figure out what this would be
>> expected to do:
>>
>>  realnow => now;
...
> In other words, if a function takes 3 ms to complete, you could make
> sure not to interrupt other shreds by,
>
> while (...) {
>  longfunction();
>  realnow => now;
> }
>
> This would ensure that logical time advances during the computation,
> while not forcing you to guess the actual time that longfunction()
> takes to execute.
>
> Note that none of this means changing anything in the shreduler.

That wouldn't help at all, though. ChucK has to execute longfunction()
to completion (holding up every other shred during that time) before
it gets to "realnow => now;".
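The workable alternative is the one from the top of the thread: split the heavy computation into slices and advance logical time between them, so the shreduler can run other shreds (and keep audio fed) during the work. A hedged sketch, where chunkOfWork() and the 1::ms pause are placeholders you'd tune, exactly the "benchmarking" trade-off described above:

```
// sketch: chunked version of longfunction(), yielding between slices

fun void chunkOfWork()
{
    // stand-in for one slice of the expensive computation
    for( 0 => int i; i < 1000; i++ )
        Math.sin( i ) => float x;
}

// run 100 slices, yielding 1::ms after each so other shreds proceed
for( 0 => int n; n < 100; n++ )
{
    chunkOfWork();
    1::ms => now;  // tune so audio doesn't underflow
}
```

This keeps the determinism of cooperative shreduling: each slice still runs atomically with respect to other shreds.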

-- 
Tom Lieber
http://AllTom.com/