[chuck-users] zipper noise

Kassen signal.automatique at gmail.com
Sun Nov 4 12:24:50 EST 2007


On 11/3/07, Stephen Sinclair <radarsat1 at gmail.com> wrote:


>
> Yeah, I guess like I said it's a difficult problem to generalize.
> Since the "control" loops in Chuck can go right down to the sample
> level, I find the semantic difference between connecting two UGens and
> explicitly programming a connection in a while{} loop is actually
> somewhat of a gray area, the difference being more implementation.


It does make a huge difference in CPU load, though, and because UGen
connections are far more predictable than programmers, they're easier to
optimize.


> For instance, the decision is usually that if you chuck, say, a SinOsc
> to an LPF, what you are chucking is the audio signal.  However, you
> could just as easily consider that you might want to chuck something
> to freq().  With the fact that you can then use a while{} to modulate
> freq() at the sample rate, you could argue that the only real
> difference between these two operations is syntactic.


Yeah, I suppose... You can indeed also set the rate at which UGens run,
since that's the sample rate... But I'd say that another major difference
is that you can't modulate the VM's sample rate while the VM is running.
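To make that concrete, here's roughly the sort of thing I mean; the LFO rate
and cutoff numbers are just placeholders:

// (a) ordinary UGen connection: the SinOsc's *audio* goes through the filter
SinOsc s => LPF f => dac;
440 => s.freq;
1000 => f.freq;

// (b) the same kind of "connection" written as a sample-rate control loop:
// an LFO modulates the cutoff by hand, one sample at a time
SinOsc lfo => blackhole;   // tick the LFO but keep it out of the dac
0.5 => lfo.freq;
while( true )
{
    // map the LFO's -1..1 output onto a cutoff range (made-up values)
    1000 + 800 * lfo.last() => f.freq;
    1::samp => now;
}

Semantically (b) is just another connection, but it's the VM doing it per
sample, which is where the CPU cost comes from.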


> But of course, that's not to say that these choices are arbitrary
> either... Obviously they are made to maximize efficiency.  Chucking
> something at freq() would imply a great deal of extra calculations per
> sample.
>
> However, maybe the answer is simply to program some extra UGens that
> allow these kinds of connections, when they are wanted.  For instance,
> that is why we would not want to reprogram the STK synths using Chuck,
> and would rather keep them as UGens.


Yes, that would be an option that would make sense. What we could also do is
look for UGens that have parameters that could be modulated at sample rate
without taking a big CPU hit; I think that's basically what Dan is doing
with his new sync input option for LiSa. So far, though, we have no real
syntax for chucking signals to inputs that are different from each other. I
suppose Gain set to subtract would be an example, and so is the dac, in a way.
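Something like this is what I have in mind (the op value for subtraction is
from memory, so worth checking against the docs):

// the dac simply sums whatever is chucked to it; a Gain with op set to
// subtract treats its inputs differently: it takes their difference
SinOsc a => Gain g => dac;
SinOsc b => g;
440 => a.freq;
443 => b.freq;
2 => g.op;   // assuming 2 is the value that selects subtraction

2::second => now;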


> I think a good solution for the zipper noise problem however, aside
> from the idea of chucking to object parameters, would be to have UGens
> automatically at least ramp their values, when they have been
> modified, for some determined number of samples.  (Or with a certain
> "inertia".)  This would greatly increase sound quality, imho.


Hmmmm. It's not a bad idea at all, but there are no free rides. LPF.freq()
will still need to recalculate its coefficients every sample; as I mentioned
earlier, I don't think you can just ramp the individual coefficients and
expect the filter to stay stable, but perhaps there are filter designs for
which that's possible?



> well basically what I'm saying is that it would be nice to have this
> as an option to preserve sound quality without requiring the user to
> run his loop at 1::samp.


I have no real issues with running loops at 1::samp, except that it's so
expensive on the CPU. Perhaps a big round of ChucK optimization would cure
this but I'm also very much behind the choice to first focus on getting it
all to work and only then optimize.

Functionally speaking, we can already do everything that has been mentioned:
you can use Envelope to do the ramping for you, which means you still need a
loop at 1::samp, but it can be a loop that only contains extremely simple
commands. I also think it's important that choices made here fit with the
rest of the syntax.
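Roughly like this, I mean; the cutoff range and ramp time are just made-up
numbers:

SinOsc s => LPF f => dac;
440 => s.freq;

// Envelope computes the interpolation; the copy loop stays dead simple
Envelope e => blackhole;
1000 => e.value;         // starting cutoff
100::ms => e.duration;   // ramp time (placeholder)

// a sample-rate loop that does nothing but copy the ramp to the parameter
fun void copyLoop()
{
    while( true )
    {
        e.last() => f.freq;
        1::samp => now;
    }
}
spork ~ copyLoop();

// elsewhere, set new targets whenever you like; the Envelope does the glide
while( true )
{
    Std.rand2f( 500, 5000 ) => e.target;
    250::ms => now;
}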

On the other hand, some modules, like the oscillators, already have
modulation inputs, but for those a choice has to be made about the way the
modulation affects the oscillator. I'm currently leaning towards extending
the syntax there to enable us to use multiple modulation targets per UGen.
If we did that, where it makes sense, it could easily be used for
slides/interpolation as well by chucking an Envelope to the target. Doing it
that way would let the user determine how the interpolation works and what
shape it has; I like that much better than a ready-made and static solution.
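For example, with the sync input the oscillators already have (the exact
sync values are from memory, so check the docs):

// one oscillator modulating another through the existing sync input
SinOsc mod => SinOsc car => dac;
220 => car.freq;
110 => mod.freq;
300 => mod.gain;   // modulation depth, roughly in Hz

// sync selects how the input affects the carrier
// (0: input sets frequency, 2: input added to frequency, i.e. FM)
2 => car.sync;

5::second => now;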

I think I'm also in favor of allowing negative targets for Envelope to ramp
to. There are explicit checks in the code to prevent those, but I don't quite
understand why they are there.


> Maybe one day I'll have time to look at the Chuck code and see how
> difficult it would be to implement.


I think this side of ChucK borrows a lot from the STK. If you want to go
there, you could do far worse than starting with Perry Cook's "Real Sound
Synthesis for Interactive Applications", which is both serious in purpose and
friendly in tone, good properties for a book on that sort of topic, IMHO.

I think there are some very hard choices to make here. I agree there is room
for improvement, but I also think there is a very real danger of mucking up
consistency. I hope Ge has some thoughts on this; so far his solutions have
been quite good, the very core of ChucK syntax *is* directed at this very
issue, and Gasten does have a good point in his last mail to this discussion.

Still; thinking out loud is fun!

Yours,
Kas.