Hi Nuno,

This would be a great application for our new SMIRK toolkit (a small music information retrieval toolkit for MIR in ChucK), soon to be up at http://smirk.cs.princeton.edu. This is a great example of a problem that could be easily solved with a machine learning algorithm, wherein you:

1) Extract features from a training set of animal sounds
2) Use them to train a classifier (now available in ChucK: kNN, AdaBoost)
3) Extract features from the mic input
4) Use the trained classifier to classify the new inputs

You could start by playing with FFT, centroid, RMS, rolloff, and other standard features, then use whichever features end up capturing your idea of similarity best (a rough sketch of the feature-extraction side is included below).

I'll send a notice to this list once everything is totally up.

Cheers,
Rebecca
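For steps 1 and 3, here is a minimal sketch of per-frame feature extraction using ChucK's built-in unit analyzers. The FFT size and the particular features are just illustrative choices, and the SMIRK classifier calls for steps 2 and 4 aren't shown:

// per-frame feature extraction from the mic (step 3);
// for step 1, replace adc with a SndBuf playing a training sample
adc => FFT fft =^ Centroid centroid => blackhole;
fft =^ RMS rms => blackhole;
fft =^ RollOff rolloff => blackhole;

// analysis parameters (example values)
1024 => fft.size;
Windowing.hann(1024) => fft.window;

while( true )
{
    // compute each feature for the current frame
    centroid.upchuck() @=> UAnaBlob c;
    rms.upchuck() @=> UAnaBlob r;
    rolloff.upchuck() @=> UAnaBlob ro;

    // print the per-frame feature vector
    <<< "centroid:", c.fval(0), "rms:", r.fval(0), "rolloff:", ro.fval(0) >>>;

    // hop by one FFT frame
    fft.size()::samp => now;
}

Each pass through the loop gives one feature vector; collecting these over a labeled set of animal sounds is what you would feed to the classifier in step 2.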
On 24-May-08, at 12:00 PM, chuck-users-request@lists.cs.Princeton.EDU wrote:

Hi,
Here's what I'd like to achieve:
There's a pool of animal sound samples and a microphone. Someone talks into the microphone, the voice is analyzed, and, based on some similarity criterion, an animal is chosen.
To be honest, I don't know how this can be done, or whether it is easy or hard. Should I try using an FFT to determine, say, the main frequency and decide from there? What other criteria could I compare? Any links or samples to get me started?
Thanks, Nuno
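On the idea of using the FFT to pick out a main frequency: below is a rough sketch of finding the strongest FFT bin in each frame of mic input in ChucK. The frame size is an arbitrary example, and a real voice rarely reduces to a single peak, which is why a set of features like those sketched above usually captures similarity better.

// report the strongest FFT bin for each frame of mic input
adc => FFT fft => blackhole;
1024 => fft.size;
Windowing.hann(1024) => fft.window;

// width of one FFT bin in Hz (sample rate / FFT size)
(second / samp) / fft.size() => float binWidth;

while( true )
{
    // compute the spectrum for the current frame
    fft.upchuck() @=> UAnaBlob spec;

    // scan for the bin with the largest magnitude
    0 => int peakBin;
    0.0 => float peakMag;
    for( 0 => int i; i < fft.size() / 2; i++ )
    {
        // magnitude of bin i
        spec.cval(i) $ polar => polar p;
        if( p.mag > peakMag ) { p.mag => peakMag; i => peakBin; }
    }

    // dominant-frequency estimate for this frame
    <<< "peak freq (Hz):", peakBin * binWidth, "mag:", peakMag >>>;

    // hop by one frame
    fft.size()::samp => now;
}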