[chuck-users] benchmarking UGens

Atte André Jensen atte.jensen at gmail.com
Wed Mar 11 07:00:05 EDT 2009


Kassen wrote:

> I meant this as a reason to be curious about the exact (as opposed to 
> relative) cost of UGens, not as a specific scenario to test.
> 
> I hope that clarifies.

Kind of.

I still don't get it. You're saying that it's a problem that I run 
several shreds per test, right? Obviously I'm doing that to get the 
numbers (cpu usage) into a range where they make sense, admittedly 
based entirely on my gut feeling. Comparing cpu loads of 2.1 and 2.2 
is not as meaningful as comparing 84.0 and 88.0, and I'd expect small 
numbers to be relatively more polluted with "stuff from the system", 
including the vm. Also, close to 100% the numbers are useless, since 
the system starts to blow up, stutter etc.

But suppose we compare these lines (a result of the current version of 
the test):

file                                   x10    x50   x100
--------------------------------------------------------
01_PulseOsc.ck                         2.7    8.5   18.5
01_SawOsc.ck                           3.0   10.2   22.5
01_SinOsc.ck                           5.0   28.0   44.5
01_SqrOsc.ck                           2.7    8.7   19.2
01_TriOsc.ck                           3.0   10.2   22.5

Another run:

file                                   x10    x50   x100
--------------------------------------------------------
01_PulseOsc.ck                         3.0    9.5   19.5
01_SawOsc.ck                           3.0   11.0   22.0
01_SinOsc.ck                           5.5   21.5   45.0
01_SqrOsc.ck                           3.0    9.5   19.5
01_TriOsc.ck                           3.5   10.5   23.5

Wouldn't you say it's safe to conclude that SinOsc is *about* twice as 
expensive as the others? I mean, the numbers in the same column should 
be directly comparable, no?
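Reading the ratios straight out of the first table (a quick Python check on the numbers above, not part of the chuck test itself):

```python
# Sanity check of the "about twice as expensive" reading, using the
# numbers from the first run above: SinOsc divided by the average of
# the other four oscillators, column by column.
loads = {
    "PulseOsc": [2.7,  8.5, 18.5],
    "SawOsc":   [3.0, 10.2, 22.5],
    "SinOsc":   [5.0, 28.0, 44.5],
    "SqrOsc":   [2.7,  8.7, 19.2],
    "TriOsc":   [3.0, 10.2, 22.5],
}

others = [name for name in loads if name != "SinOsc"]
ratios = []
for col, label in enumerate(["x10", "x50", "x100"]):
    baseline = sum(loads[name][col] for name in others) / len(others)
    ratios.append(loads["SinOsc"][col] / baseline)
    print(f"{label}: SinOsc / average of others = {ratios[-1]:.2f}")
```

The ratios come out between roughly 1.8 and 3.0 depending on the column, so "about twice" holds loosely, though the x50 column is closer to three times.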

Naturally that is provided that the measurements are sane. Thinking 
about the statistics mentioned by Tom 
(http://www.zedshaw.com/essays/programmer_stats.html), this is actually 
a real challenge. A simple approach would be to have chuck run for a 
number of seconds and sample the cpu load at regular intervals. 
Then (without knowing much about statistics) something like throwing 
away measurements that are way off and averaging the remaining ones 
could make sense. Or one might be interested in the maximum load 
generated by the code, although many things outside of chuck (the 
system) could account for the "jitter".

For instance the x50 of SinOsc is 28.0 in one run compared to 21.5 in 
the other. I have no idea where this jitter comes from (the specific 
UGen, my system, or something else), but clearly that's something that 
should be improved upon.
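The throw-away-the-outliers-and-average idea amounts to a trimmed mean. A minimal Python sketch (the readings below are invented for illustration, not actual measurements):

```python
# A minimal sketch of the idea above: sample the cpu load a number of
# times, drop the extremes, and average what is left (a trimmed mean).
def trimmed_mean(samples, trim=1):
    """Average after dropping the `trim` lowest and highest values."""
    if len(samples) <= 2 * trim:
        raise ValueError("not enough samples to trim")
    kept = sorted(samples)[trim:len(samples) - trim]
    return sum(kept) / len(kept)

# ten hypothetical cpu-load readings with one obvious spike
readings = [21.5, 22.0, 21.8, 28.0, 21.6, 22.1, 21.9, 21.7, 22.2, 21.4]
print(round(trimmed_mean(readings), 2))
```

With one outlying spike in ten readings, dropping the single lowest and highest values already pulls the average back to where the bulk of the samples sit.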

-- 
Atte

http://atte.dk    http://modlys.dk

