[chuck-users] optimization
Spencer Salazar
ssalazar at CS.Princeton.EDU
Sun Sep 17 18:53:52 EDT 2006
Hey Jochen,
On Sep 17, 2006, at 4:54 PM, jochen.hartmann wrote:
> hi, i am running mini-audicle to write my chuck code on a mac ... i
> love this stuff
> i began writing more complex code where a lot of geometry is figured
> out to generate dynamically morphing sounds and now things are getting
> pretty slow. mini-audicle tends to eat up 90% of my cpu so once i have
> about 10 shreds moving at once, i can't do anything but force-quit to
> stop things...
>
> now most of this probably has to do with my code not being very
> optimized. so i was wondering:
> a. are there ways to give more memory to the chuck vm ?
There isn't really any way to give it more memory--each program on
your computer is given the illusion of 4 GB of address space to work
with, but it can't use any more than that (unless you are using a
64-bit system). From your description, though, it sounds like you
have plenty of memory, and that CPU usage is the real problem.
> b. is it faster to run shreds from the command line (i would imagine
> so...) ?
Yes--command-line chuck is definitely the most CPU-efficient way of
ChucKing. The miniAudicle is intended to be a performance
environment, though, so we try to minimize its CPU overhead as much
as possible. But if the CPU is already overloaded, things like typing
and printing information to the console will slow things down.
Additionally, the virtual machine monitor introduces a constant,
small but appreciable processing hit. This is something that does
improve with each release, though--so one suggestion would be to use
the most recent release available, if you're not already.
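For reference, here is a minimal sketch of running ChucK from the
command line, including the on-the-fly commands for adding and
removing shreds against a running VM (the filename foo.ck is just a
placeholder):

```shell
# run a program directly with command-line chuck
chuck foo.ck

# or start the VM looping in the background and manage shreds on the fly
chuck --loop &
chuck + foo.ck      # add a shred
chuck ^             # query VM status
chuck - 1           # remove shred with id 1
```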
> c. are there any other tips/tricks to help me optimize my code (maybe
> something to figure out where the bottlenecks are...) ?
Not knowing your programming experience/background or your code, I
can still say that there are a lot of general code optimization
strategies that apply well to ChucK. Please excuse me if you're
already familiar with these techniques and I'm covering old
territory.
- try to do as little processing in loops as possible, and avoid
nested loops. Look at your loops and see whether any calculations
can be hoisted outside the loop while still producing the same
result. A trivial example:
for( 0 => float r; r < 20; r + 1 => r )
{
    2.0 * pi * r => float C;
    <<< C >>>;
}
can be optimized to
2.0 * pi => float twopi;
for( 0 => float r; r < 20; r + 1 => r )
{
    twopi * r => float C;
    <<< C >>>;
}
- avoid duplicate work by storing the results of recurring
calculations in temporary variables--memory is a lot cheaper and more
abundant than CPU time.
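As a small made-up illustration of that, here the same sin()
result is computed once per pass and reused, instead of being
recomputed in each expression that needs it:

```
220.0 => float freq;
for( 0 => int i; i < 8; i++ )
{
    // store the recurring calculation once...
    Math.sin( 2.0 * pi * i / 8.0 ) => float s;
    // ...then reuse it, instead of calling Math.sin() twice
    freq + 10.0 * s => float f1;
    freq - 10.0 * s => float f2;
    <<< f1, f2 >>>;
}
```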
- avoid recursive functions that go deeper than a few levels of
recursion, unless absolutely necessary--every recursive computation
can be rewritten as a functionally equivalent loop, and vice versa,
but loops typically have less overhead than function calls.
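For example, a factorial (chosen just for illustration) can be
written either way; the loop version gives the same answer without
paying function-call overhead at every level:

```
// recursive version: one stack frame per level
fun int factRecursive( int n )
{
    if( n <= 1 ) return 1;
    return n * factRecursive( n - 1 );
}

// equivalent loop: same result, no per-step call overhead
fun int factLoop( int n )
{
    1 => int result;
    for( 2 => int i; i <= n; i++ )
        result * i => result;
    return result;
}

<<< factRecursive( 10 ), factLoop( 10 ) >>>;
```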
- if you are doing processing for each and every sample, it's easy to
eat up the CPU quickly--try to structure your programs to do as
little every-sample work as possible.
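One common way to do that is to update control parameters at a
coarser rate while letting the UGen graph render the audio itself.
A rough sketch (the frequencies and rates are arbitrary; the loop
runs until the shred is removed):

```
SinOsc s => dac;

// an every-sample loop would wake the shred 44100+ times per second:
//     while( true ) { /* update s.freq */ 1::samp => now; }

// a control-rate loop updates parameters only every 10 ms, while
// the UGen graph still computes audio at the full sample rate
while( true )
{
    440.0 + Math.random2f( -5.0, 5.0 ) => s.freq;
    10::ms => now;
}
```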
(others on this list who have some insight as to ChucK-specific
optimization strategies or additional general optimization
techniques, please feel free to chime in.)
hope this helps!
spencer
>
> thanks!
> best
> - jochen
>
>
> HSDOM SOUND
> http://hsdom.com
> RECORDS
> http://phaserprone.com
> Brooklyn, NY
>
> _______________________________________________
> chuck-users mailing list
> chuck-users at lists.cs.princeton.edu
> https://lists.cs.princeton.edu/mailman/listinfo/chuck-users