What about bypassing some of the computation by including a human game in the process?  E.g., make a spectral analysis of a participant's voice, then make a game of matching up visual patterns in that analysis with visual patterns in the animal analyses.  After selecting patterns that seem to match, the action could return to something computational, like vocoding.  Maybe morphing from their spectrum to the animal's spectrum...
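To make the morphing step concrete, here is a rough, untested sketch in Python/NumPy (the same steps translate to ChucK's FFT/IFFT or Pd's fft~ objects).  The frame size, the linear blend of magnitudes, and the choice of keeping the voice's phases are all just assumptions for illustration:

import numpy as np

def morph_frame(voice_frame, animal_frame, amount, window):
    # Blend the magnitude spectra of two equal-length frames.
    # amount = 0.0 -> pure voice, 1.0 -> pure animal.
    # The voice's phases are kept -- one crude choice among many.
    V = np.fft.rfft(voice_frame * window)
    A = np.fft.rfft(animal_frame * window)
    mag = (1.0 - amount) * np.abs(V) + amount * np.abs(A)
    return np.fft.irfft(mag * np.exp(1j * np.angle(V)))

def morph_signal(voice_sig, animal_sig, frame_size=1024):
    # Frame-by-frame morph, sweeping from voice to animal over the duration.
    window = np.hanning(frame_size)
    hop = frame_size // 2
    n = min(len(voice_sig), len(animal_sig))
    out = np.zeros(n)
    for start in range(0, n - frame_size, hop):
        amount = start / float(n)          # 0 at the start, ~1 at the end
        frame = morph_frame(voice_sig[start:start + frame_size],
                            animal_sig[start:start + frame_size],
                            amount, window)
        out[start:start + frame_size] += frame * window   # crude overlap-add
    return out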

This may be irrelevant to your intentions!  But Nuno's idea caught my imagination, and this is what came to mind.

bf


On May 23, 2008, at 4:47 PM, Peter Todd wrote:

Hello,

There's an external for Pd / Max/MSP called soundspotter, designed for doing the kind of thing you want: http://www.soundspotter.org/

I'm not intimately familiar with its workings, but it is open source, and Michael Casey has written several papers on the subject, which you'll find under 'Research' on that page.  So if you definitely want to use ChucK (though I'd suggest Pd with that external may be the easiest route in the short term), there'll be lots of useful information there.

If you did what you described with FFT, you would probably be vaguely on the right track, but I'm afraid it could turn out to be quite a long track...
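For what it's worth, here is a rough sketch (in Python/NumPy, untested, and definitely not soundspotter's actual algorithm) of what that longer track might look like: reduce each sound to a small feature vector -- here just the average energy in a handful of log-spaced frequency bands -- and pick the animal whose vector is closest to the voice's.  The sample rate, frame size, and band count are placeholder values:

import numpy as np

def band_profile(signal, sr=44100, frame_size=2048, n_bands=16):
    # Average magnitude in log-spaced frequency bands over all frames,
    # normalised to unit length so overall loudness differences drop out.
    window = np.hanning(frame_size)
    edges = np.geomspace(50.0, sr / 2.0, n_bands + 1)   # band edges in Hz
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sr)
    profile = np.zeros(n_bands)
    n_frames = 0
    for start in range(0, len(signal) - frame_size, frame_size // 2):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame_size] * window))
        for b in range(n_bands):
            in_band = (freqs >= edges[b]) & (freqs < edges[b + 1])
            profile[b] += spectrum[in_band].sum()
        n_frames += 1
    profile /= max(n_frames, 1)
    return profile / (np.linalg.norm(profile) + 1e-12)

def closest_animal(voice_sig, animal_sigs):
    # animal_sigs: dict mapping an animal name to a mono signal; the highest
    # cosine similarity (dot product of unit vectors) wins.
    v = band_profile(voice_sig)
    return max(animal_sigs, key=lambda name: float(v @ band_profile(animal_sigs[name])))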

Good luck; it sounds like a fun idea, and soundspotter should be a perfect fit.

Cheers,
Peter

On Fri, May 23, 2008 at 5:17 PM, Nuno Godinho <eu@nunogodinho.com> wrote:
Hi,

Here's what I'd like to achieve:

There's a pool of animal sound samples. There's a microphone. Someone talks into
the microphone, the voice is analyzed, and, based on a given similarity
criterion, an animal is chosen.

To be honest, I don't know how this can be done, or whether it is easy or hard.
Should I try using an FFT to determine, say, the main frequency and decide from
there? What other criteria could I compare? Any links or examples to get me
started?
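[A bare-bones version of that main-frequency idea might look like the Python/NumPy sketch below (file loading omitted; the 44.1 kHz sample rate is an assumption).  A single peak frequency probably won't separate animals well on its own, but it shows the shape of the comparison:

import numpy as np

def main_frequency(signal, sr=44100):
    # Frequency (Hz) of the strongest bin in the signal's magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return freqs[int(np.argmax(spectrum))]

def pick_animal(voice_sig, animal_sigs, sr=44100):
    # animal_sigs: dict mapping an animal name to a mono signal.
    # The animal whose main frequency is closest to the voice's wins.
    target = main_frequency(voice_sig, sr)
    return min(animal_sigs,
               key=lambda name: abs(main_frequency(animal_sigs[name], sr) - target))
]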

Thanks,
Nuno