2010/11/27 Kassen
Me too! Though I just realized that 35 and 36 points correspond to coefs array sizes of 104 and 107 (each point is a time/value/curvature triple, minus the final point's curvature, so n points need 3n - 1 coefficients). So I searched the ChucK source for "100" and found:
#define genX_MAX_COEFFS 100
D'oh.
Ok... does that make any sense at all, as a limit? Wouldn't ChucK memory be divided into some sort of blocks?
Hm, not sure what you mean by blocks, but the problem seems to be that while every Gen* checks that you don't provide too many points by comparing with genX_MAX_COEFFS, CurveTable compares with a separate variable, MAX_CURVE_PTS, which is set to 256, even though that function creates a coeffs array only genX_MAX_COEFFS elements long. The patch to fix it would probably look like this:

========
diff --git a/src/ugen_osc.cpp b/src/ugen_osc.cpp
index 6a31ddc..deb3469 100644
--- a/src/ugen_osc.cpp
+++ b/src/ugen_osc.cpp
@@ -1461,7 +1461,7 @@ CK_DLL_CTRL( curve_coeffs )
     t_CKINT i, points, nargs, seglen = 0, len = genX_tableSize;
     t_CKDOUBLE factor, *ptr, xmax=0.0;
     t_CKDOUBLE time[MAX_CURVE_PTS], value[MAX_CURVE_PTS], alpha[MAX_CURVE_PTS];
-    t_CKFLOAT coeffs[genX_MAX_COEFFS];
+    t_CKFLOAT coeffs[MAX_CURVE_PTS * 3];
     t_CKUINT ii = 0;
     t_CKFLOAT v = 0.0;
========

In fact, maybe I'll publish that branch to my ChucK mirror...
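(For reference, a minimal sketch that should trip the overflow on a stock build; untested against the patched branch, and the point count and the 3n - 1 coefficient layout are just taken from the crashes above:)

// minimal repro: a CurveTable with 35 points
// (3*35 - 1 = 104 coefs, past the 100-element coeffs buffer)
CurveTable curve => blackhole;

float coefs[0];
for( 0 => int i; i < 35; i++ )
{
    coefs << ( i $ float ) / 34;  // time (0 to 1)
    coefs << ( i % 2 ) $ float;   // value (alternating 0/1)
    if( i < 34 ) coefs << 0.0;    // curvature (last point has none)
}
coefs => curve.coefs;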
Ohh, I'm definitely going to play with that. Was it logarithmic, or something less regular?
Well, say that we are using n bits to describe a range from 0V to 1V. First we'll consider bit 0, and assume this one is the most significant. If it's high it should contribute .5V to the total. However, the resistors used would be cheap and might have as much as a 10% error (medical- and military-grade ones with smaller margins would be a lot more expensive, if available at all). Because of this, and depending on the exact properties of the device in our hands, we'd get something like .47V instead. Now consider bit 1 and say it's high too. This should contribute .25V; in practice it might instead add .26V. At this point the total should be .75V, but instead it will be (.47 + .26 =) .73V. Repeat for all bits.

I think you can assume the error per resistor stays constant over the use of the "dac", for pieces of a realistic length. I also think that a 10% margin of error is about realistic; maybe we have members who used to solder back in the mid 80's who will know more.

Oh, and of course these used plain analogue LPFs, not some sort of phase-linear FIR filter over an over-sampled version of the signal like modern soundcards. For the ultimate in realism when emulating old digital stuff, note that compander (compressor / expander) chips were often used to suppress noise. Those might well be a bigger factor in the "punch" instruments like the MPC brought to genres like HipHop than the low bit-depth and rate on their own. There is a whole world of fascinating phenomena there.
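(In ChucK terms, that arithmetic looks something like the sketch below; the names are made up for illustration. Each bit's ideal weight of 1 << bit gets skewed once by up to +/-10%, then stays fixed for the life of the "device":)

// per-bit weights for an 8-bit "dac" with cheap resistors
8 => int bits;
.1 => float error;  // 10% resistor tolerance

float weight[bits];
for( 0 => int bit; bit < bits; bit++ )
    Std.rand2f( ( 1 << bit ) * ( 1. - error ),
                ( 1 << bit ) * ( 1. + error ) ) => weight[bit];

// convert a digital code to its (slightly wrong) analogue level
fun float toLevel( int code )
{
    0.0 => float sum;
    for( 0 => int bit; bit < bits; bit++ )
        if( code & ( 1 << bit ) )
            weight[bit] +=> sum;
    return sum / ( 1 << bits );
}

// ideally .75 (.5 + .25); in practice something like .73
<<< toLevel( 192 ), "instead of", 192.0 / 256 >>>;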
Yes there is! So much more than I ever knew I wanted to know. I wonder if I will one day know it.

It's interesting to see this on the Wikipedia page about the original Gameboy's audio: "2 square waves, 1 programmable 32-sample 4-bit PCM wave, 1 white noise, and one audio input from the cartridge". That's... a little more constrained than I thought it was. And it still sounds like this: http://www.youtube.com/watch?v=NmCCQxVBfyM Or maybe the music was pre-rendered; I guess I don't know.

Anyway... either I'm implementing this wrong (naively?), or they'd have needed a much smaller error than 10% to get any two devices to sound alike... unless the envelope is moved to after the quantization, in which case it sounds great no matter what you do. :D

// this code no longer needs scale.ck, yay

// sub-class and override valueFor() to define
// whatever type of wavetable you want
class SuperTable
{
    // connect these
    Gain input;
    LiSa output;

    // private
    Gain mix;
    Step dc;

    // public

    // recalculates the table
    fun void recalculate()
    {
        ( output.duration() / samp ) $ int => int NUM_SAMPS;
        1.0 / NUM_SAMPS => float STEP_SIZE;

        // fill LiSa buffer with quantization map
        for( int x; x < NUM_SAMPS; x++ )
        {
            output.valueAt( 2.0 * valueFor( x * STEP_SIZE ) - 1.0,
                            x::samp ); // save
        }
    }

    // private

    // override this function to provide the value that
    // should be output for the given input
    // ('in' ranges from 0 to 1)
    fun float valueFor( float in )
    {
        return in;
    }

    fun void initialize()
    {
        input => output;
        dc => output;

        // configure LiSa
        ( 10000 )::samp => output.duration;
        1 => output.sync;
        1 => output.play;

        // map input from [-1, 1] to (0, 1)
        .49 => input.gain;
        .5 => dc.next;
    }

    initialize();
}

class Cruncher extends SuperTable
{
    // private
    int bits;
    float levels;
    float bitContribution[1];
    float error;
    float compressMix;

    // public

    fun void setBits( int num )
    {
        if( num > 30 )
        {
            <<< "I can't let you use", num, "bits, Dave." >>>;
            return;
        }

        num => bits;
        bits => bitContribution.size;
        Math.pow( 2, bits ) => levels;

        calculateErrors();
        recalculate();
    }

    fun void setError( float percent )
    {
        percent => error;
        calculateErrors();
        recalculate();
    }

    fun void setCompressMix( float mix )
    {
        mix => compressMix;
        calculateErrors();
        recalculate();
    }

    // private

    fun void calculateErrors()
    {
        for( 0 => int bit; bit < bits; bit++ )
        {
            Std.rand2f( ( 1 << bit ) * ( 1. - error ),
                        ( 1 << bit ) * ( 1. + error ) ) => bitContribution[bit];
        }
    }

    fun float valueFor( float in )
    {
        Math.round( in * levels ) $ int => int quantized;

        float out;
        for( 0 => int bit; bit < bits; bit++ )
        {
            ( quantized & ( 1 << bit ) ) > 0 => int on;
            out + ( on ? bitContribution[bit] : 0. ) => out;
        }
        out / ( 1 << bits ) => out;

        //return out;
        return ( out * ( 1. - compressMix ) )
             + ( ( ulaw( out * 2. - 1. ) / 2. + .5 ) * compressMix );
    }

    fun float sgn( float f )
    {
        return f >= 0 ? 1. : -1.;
    }

    fun float ulaw( float f )
    {
        return sgn( f ) * Math.log( 1 + levels * Std.fabs( f ) )
                        / Math.log( 1 + levels );
    }

    fun void cruncherInitialize()
    {
        // set default bit contribution error
        .1 => error;
        // set default compressor mix
        .3 => compressMix;
        // set default quantization level
        setBits( 8 );
    }

    cruncherInitialize();
}

// ugens
Cruncher cruncher;
TriOsc osc;
ADSR env;
LPF lpf;

// patch
osc => env => cruncher.input;
//cruncher.output => lpf => dac;
cruncher.output => dac;

// configure
env.set( 1::ms, 40::ms, .1, 300::ms );
cruncher.setBits( 8 );
cruncher.setError( .1 );
cruncher.setCompressMix( .3 );
11025 => lpf.freq;

// GO!
[ 0, 1, 3, 8, 6, 4 ] @=> int notes[];
[ 0, 2, 4, 5, 7, 9, 11, 12, 14 ] @=> int scale[];

while( true )
{
    for( 0 => int i; i < notes.size(); i++ )
    {
        scale[ notes[ i ] ] + 75 => Std.mtof => osc.freq;
        env.keyOn( 1 );
        second / 6 => now;
        env.keyOff( 1 );
        second / 7 => now;
    }
}
Also, while turning the LiSa code into a UGen-like class, I realized that the ADSR in my original e-mail was not being quantized, and once I moved it inside, the sound became much less pleasant. The laser whizzing noises (aliasing?) became much more apparent. Interesting again with only 3 bits, though.
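(Concretely, the difference is just where env sits relative to the cruncher; both patches use the class from above:)

// envelope before the quantizer: the decay tail gets
// re-quantized every sample, so the error map is audible
osc => env => cruncher.input;
cruncher.output => dac;

// envelope after the quantizer: the tail stays smooth,
// as in the original e-mail
// osc => cruncher.input;
// cruncher.output => env => dac;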
Yes, that makes a difference. I do think that real historical gear would sometimes put the envelope last, to suppress noise, where this is viable (it would be in the S612, not so in the Gameboy). This is why all non-modular analogue synths have the ADSR after the filter, even if the filter wouldn't ever self-oscillate. In anything hybrid I'd predict the envelope comes last. In purely digital stuff the envelope would be before the converter, with its trigger quantised to the bitrate. That last bit is a bit obvious when you think about it, but it matters in how static the final result is perceived to be if we repeat the same drum a few times.

To conclude: it's not entirely unlikely that we'll have grey beards (where appropriate) before we'll be able to perfectly emulate the sounds of our youths¹. ;¬)

Kas.

¹ Some might simply have greyer beards, but they may have to deal with tape and tube-amp emulation, so it evens out.
Hm, that reminds me, I think I cribbed some tube amp patches off the forum a while back. I wonder how those'd sound in this... -- Tom Lieber http://AllTom.com/ http://favmusic.net/