[chuck-users] GC vs real-time (was Getting Started with ChucK)

Kassen signal.automatique at gmail.com
Thu Jul 23 19:50:50 EDT 2009


Hans;

> What might this mean?


By this I mean that I once tried to design my own reverb for moving sound
sources, using a particularly inefficient design. It had to render to a pair
of wave files in non-real time, and at some point it started swapping. It
also caused other issues and I had to abandon the plan for the time being;
later this turned out to be related to an array bug that has since been
fixed (the design was still bad, though it was an interesting experiment).
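
To give an idea of what I mean by rendering to a file in non-real time, here
is a minimal sketch; the NRev is only a stand-in for my actual design, and
the filename and parameters are made up. Run it with "chuck --silent
render.ck" and the VM advances time as fast as the machine allows instead of
syncing to the soundcard.

    // some patch; NRev here is only a stand-in for the real design
    Noise n => LPF f => NRev r => Gain g => dac;
    0.1 => g.gain;
    500.0 => f.freq;
    0.2 => r.mix;

    // capture whatever reaches the dac to a wave file on disk
    dac => WvOut w => blackhole;
    "render.wav" => w.wavFilename;

    // let ten seconds of (virtual) time pass, then close the file
    10::second => now;
    w.closeFile();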

Though the two are of course not strictly related, I do think that very high
memory usage (and by the time you exhaust a modern computer you are using
quite a lot) is quite likely correlated with high CPU usage. If you need a
gigabyte or so for your objects, your samples and your buffers in ChucK, you
are probably also running into CPU issues at the same time.

> On UNIX computers, each process gets a couple of address spaces of its own,
> like the function stack, the heap and program static data; on a 32-bit
> machine they are typically 2 GB each and handled by virtual memory, built
> into the OS. One can check usage with commands like 'systat vmstat' or
> 'vm_stat'. If there is too much paging, there is too little available RAM
> relative to the programs in use.
>

This particular experiment was done on my stripped-down version of Windows
XP, but I don't think that affects matters much. A modern computer will most
likely have at least half a gigabyte of RAM available for the user to spend
at will on things like ChucK before any noticeable swapping happens. I'm
simply not very worried about that right now, because I don't see how we'd
use all of that up in a way that *might* cause issues with some forms of GC
that we may or may not use in the future.

If you have a longer composition that uses a lot of previously recorded
instruments that you'd like to mix to a single file using ChucK, you may need
an amount of memory that starts to approach the order of magnitude that
causes swapping. That's the only case I can think of right now, but a) that's
likely not the sort of case that would trigger lots of GC on that data, and
b) I think we are already reclaiming the space of samples that are no longer
used, and so far nobody has complained that this caused huge breakdowns in
the real-time paradigm. In fact, I can't remember a single complaint about
this at all. Loading huge samples from the hard disk while using a small
latency is likely a much larger issue.
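
To be concrete about that mixing case, a minimal sketch (with made-up file
names) might look like the one below. Note that SndBuf reads each entire
file into memory when .read is called, which is exactly where the memory
footprint of such a project comes from.

    // mix a few pre-recorded tracks down to a single file
    SndBuf drums => Gain mix => dac;
    SndBuf strings => mix;
    "drums.wav" => drums.read;
    "strings.wav" => strings.read;
    0.5 => mix.gain;

    // capture the mix to disk
    dac => WvOut w => blackhole;
    "mixdown.wav" => w.wavFilename;

    // run until the longest track has finished, then close the file
    Math.max(drums.samples(), strings.samples())::samp => now;
    w.closeFile();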

Yours,
Kas.