[chuck-users] benchmarking UGens

Kassen signal.automatique at gmail.com
Wed Mar 11 10:32:32 EDT 2009


Atte;


> I still don't get it. You're saying that it's a problem that I run several
> shreds per test, right?


Yes. I think a shred in and of itself takes some cpu (based on tests I ran
a long time ago using empty shreds). So if we have a single UGen per shred
and we wish to measure UGens, we'd be measuring the cost of a UGen + the
cost of a shred. All UGens would end up looking slightly more expensive than
they are, and according to the numbers we'd get, a network of 5 UGens would
be at a disadvantage compared to one of 3 UGens if we want to compare what
they cost. In the first case we'd have 5 times our "cost of measuring", in
the second only 3 times.
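
To make that concrete, the kind of per-shred test I'm thinking of would look
something like this (just a sketch, the function name is made up):

fun void oneOsc()
{
    // one UGen per shred: the shred's own cpu cost gets counted along with the SinOsc's
    SinOsc s => dac;
    week => now;
}

// a hundred of those, so a hundred shreds as well as a hundred UGens
repeat( 100 ) spork ~ oneOsc();

week => now;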

Let's say a bag of candy costs 2$, a pizza is 4$ and driving to the store
costs me 1$. This means buying a bag of candy costs 3$ (in practice), a pizza
would cost me 5$, but driving to the store to buy both would only be
2+4+1 = 7$, as I'd only have to drive once. Here driving equates to "having a shred".

With a test like this:
repeat( 100 ) SinOsc s => dac;
week => now;

We'd have 100 UGens and only a single shred, instead of 100 UGens and 100
shreds.
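
(A small aside: if you'd rather not actually hear a hundred sine waves while
testing, chucking them to blackhole instead of dac should still force them to
be computed, since blackhole pulls samples from whatever is connected to it
without sending anything to the soundcard:

repeat( 100 ) SinOsc s => blackhole;
week => now;
)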



> Obviously I'm doing that to get numbers (cpu usage) in a range where they
> make sense, admittedly entirely based on my gut feeling. Comparing cpu loads
> of 2.1 and 2.2 is not as good as 84.0 and 88.0 + I'd expect small numbers to
> be relatively more polluted with "stuff from the system", including the vm.
> Also, close to 100% things are useless; the system would start to blow up,
> stutter etc.
>

Makes sense.


> Wouldn't you say that it's safe to say that SinOsc is *about* twice as
> expensive as the others? I mean the numbers in the same column should be
> directly comparable, or?
>

Yes. I also think that the cost of a shred should be small compared to the
cost of a UGen. Here is a test to benchmark the cost of 100 shreds that
all do nothing:

fun void wait()
{
    week => now;
}

repeat( 100 ) spork ~ wait();

week => now;

As you'll see, the cost of those is non-zero.
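
So, to get at the cost of the UGens themselves, one thing we could do is run
the shreds-only version above, then run the same hundred shreds with a SinOsc
in each, and subtract. With made-up numbers, just to show the idea:

// hypothetical readings, only to illustrate the subtraction
2.5 => float emptyShreds;   // cpu % for 100 shreds doing nothing
45.0 => float oscShreds;    // cpu % for 100 shreds each carrying a SinOsc
( oscShreds - emptyShreds ) / 100.0 => float perOsc;
<<< "approximate cost per SinOsc, in cpu %:", perOsc >>>;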


>
> Naturally that is provided that the measurements are sane. Thinking about
> the statistics mentioned by Tom (
> http://www.zedshaw.com/essays/programmer_stats.html) this is actually a
> real challenge. A simple way would be to have chuck run for a number of
> seconds, and take measurements of the cpu load at certain intervals. Then
> (without knowing much about statistics) something like throwing away
> measurements that are way off and averaging the remaining could make
> sense. Or one might be interested in the maximum load generated by the code,
> although many things outside of chuck (the system) could account for the
> "jitter".
>

Yes, true, though if some UGen were correlated with high jitter, that would
be an interesting metric as well. I'm not sure we have such UGens. I
recognise that this is a hard thing to measure.
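
If it helps, the "throw away the measurements that are way off, then average"
idea from above could start as simply as this. The readings are made up and
typed in by hand (they'd presumably come from top or whatever the system
offers), and here I only drop the single highest and lowest one:

// hypothetical cpu readings taken at intervals during a run
[ 2.1, 2.2, 2.0, 3.5, 2.1 ] @=> float m[];

0.0 => float sum;
m[0] => float lo;
m[0] => float hi;
for( 0 => int i; i < m.cap(); i++ )
{
    m[i] +=> sum;
    if( m[i] < lo ) m[i] => lo;
    if( m[i] > hi ) m[i] => hi;
}
// drop the single highest and lowest reading, average the rest
( sum - lo - hi ) / ( m.cap() - 2 ) => float trimmed;
<<< "trimmed average cpu:", trimmed >>>;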

>
> For instance the x50 of SinOsc are 28.0 compared to 21.5 in the two
> different runs. I have no idea where this jitter comes from (the specific
> UGen or my systems or something else), but clearly that's something that
> should be improved upon.
>

Yes. I saw that too. When I benchmark my own programs to see what they cost
(to see whether I think a certain change is worthwhile) I've often seen
considerable jitter. I'm inclined to blame the OS in most cases. Typically I
wait for a bit and see what the worst it ever does is, as it's only the
worst-case scenario that affects me. Taking only 5% is still no good to me
if it occasionally spikes and glitches.

Still, even if it's hard, this is very interesting and very worthwhile, I
feel.

Yours,
Kas.

