Atte, this is a wonderful project.

Some notes on your method: you are using a file (and hence a shred) per UGen. I believe there is some small overhead to running a shred, so your numbers may end up slightly high. I once tested the CPU cost of sporking a thousand (maybe ten thousand) shreds that were all just waiting for an event, and even that took a small but significant amount of CPU.
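For reference, a minimal sketch of the kind of overhead test I mean might look like this (the shred count and wait time here are arbitrary choices, not the exact values I used):

```chuck
// spork many shreds that each just block on an event,
// then watch the VM's idle CPU usage
Event e;

fun void waiter()
{
    e => now;  // sleep until the event is signaled
}

for (0 => int i; i < 1000; i++)
{
    spork ~ waiter();
}

// keep the parent shred (and thus the children) alive
1::day => now;
```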

Maybe the most remarkable result to me here is the cost of setting a SndBuf to loop; that's a rather big hit for what I imagine comes down to a single if-then instruction per sample. I also wonder why Pan2 is more expensive than 2 Gains; we could use 2 Gain UGens (and a function) to emulate it. Maybe this instead measures the higher cost of forcing the DAC to operate in stereo? Finally, like you, I wonder about the different normalised results for the two PRCRev tests that you performed. Some difference would be understandable (for example, the VM itself takes some CPU, which would be scaled along with the whole thing), but this looks like a rather significant difference to me. I wonder how we could explain that.
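To make the two-Gain emulation concrete, something along these lines is what I have in mind (a simple linear pan law; the name setPan and the constants are just illustrative, and an equal-power curve would use sine/cosine instead):

```chuck
// emulate Pan2 with two Gains and a helper function
SinOsc s => Gain left => dac.left;
s => Gain right => dac.right;

// pan in [-1, 1]: -1 = hard left, 1 = hard right (linear law)
fun void setPan(float pan)
{
    (1.0 - pan) / 2.0 => left.gain;
    (1.0 + pan) / 2.0 => right.gain;
}

setPan(0.0);
1::second => now;
```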

As for further tests that might be useful/interesting: you could compare the Blit oscillators to the regular ones (as well as try to determine what, if any, difference the number of harmonics makes). Another interesting comparison might be LiSa with multiple voices vs. as many copies of SndBuf.
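Such a Blit test file could be as simple as this (the frequency and harmonic count are arbitrary picks; the harmonics setting is the thing to vary between runs):

```chuck
// one candidate test file: a band-limited impulse train oscillator
Blit b => dac;
220.0 => b.freq;
8 => b.harmonics;  // vary this to test the cost of more harmonics
1::day => now;
```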

If at all possible, it would also be interesting to fully automate a series of tests like this, so we could compare versions of ChucK later. We might be interested in the exact difference that a future implementation of block processing could make, for example.

The list goes on; we might want to know the cost of member functions like the .freq() parameter of filters, and we may even want to know the cost of certain operations. Take the looping SndBuf above: we could make it loop using an "if the pointer goes out of range, put it back at the start" construct, or we could perform a modulo on the same pointer (which would carry the remainder, which is probably desirable there). Which would be cheaper in ChucK?
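As a sketch, the two looping constructs I mean would look roughly like this in script code (advancing per sample from a shred is itself expensive, so this only illustrates the two styles; the file name is hypothetical):

```chuck
SndBuf buf => dac;
"mysample.wav" => buf.read;  // hypothetical sample file

while (true)
{
    // style 1: reset the playhead when it runs off the end
    if (buf.pos() >= buf.samples()) 0 => buf.pos;

    // style 2 (alternative): wrap via modulo, keeping the remainder
    // buf.pos() % buf.samples() => buf.pos;

    1::samp => now;
}
```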

Those last tests are probably going too far and would be too detailed, but we know very little about the price of such operations in ChucK.


Thanks again for sharing your results so far!
Yours,
Kas.


2009/3/10 Atte André Jensen <atte.jensen@gmail.com>
Hi

We all know that chuck is not the fastest audio software out there. But I guess, like me, you've all found ways to work around that. I often find myself wondering "how much can I save by disconnecting this UGen?" or "exactly how expensive is another NRev?"

For this purpose I started to do some benchmarking in the form of a bunch of .ck files and a bash script. My initial results are here:

http://atte.dk/chuck/results.txt

The first line is "chuck --loop", so the VM alone. "nb" is the number of files, "cpu" is the CPU usage as reported by htop on my laptop (2 GHz Intel dual-core), and "cpu normalized" is the CPU usage extrapolated (by simple multiplication) to nb=100.

Of course this doesn't make sense without seeing the .ck files, so I've put them here:

http://atte.dk/chuck/performance_tests.tgz

I'm quite aware that this approach is very unscientific, so any input on how to improve it is more than welcome. Especially, I'm wondering why 10 * PRCRev = 7% while 50 * PRCRev = 46% (it should have been 35%).

I'm gonna continue my tests, but you're all welcome to supply files for testing. Maybe this should all end up on the wiki?

NB: This is in no way a critique of the developers. Sure I would love to see a faster chuck, but we still love it and use it.

--
Atte

http://atte.dk    http://modlys.dk
_______________________________________________
chuck-users mailing list
chuck-users@lists.cs.princeton.edu
https://lists.cs.princeton.edu/mailman/listinfo/chuck-users