Hi folks,

We call your attention to the Small Music Information Retrieval toolKit, SMIRK, available and waiting for your download and abuse at http://smirk.cs.princeton.edu .

Like smelt (http://smelt.cs.princeton.edu), SmirK is a set of ready-to-use building blocks and examples. It's all written in ChucK, so there's nothing to "install." (It does, however, come with its own ChucK class hierarchy, which you'll need to download, so set your path carefully, unlike with smelt. See the comments in each file for instructions.) Use the code as-is, or modify it to suit your needs.

sMiRk includes two key components: feature extraction (using the UAna framework at http://chuck.cs.princeton.edu/uana/) and classification (e.g., k-nearest-neighbor, AdaBoost). It also comes with some simple keyboard-based and MAUI-based interfaces for training and running classifiers. For example, you can:

* Train a classifier to recognize vowels versus consonants, and then apply it to pan the incoming audio appropriately
* Do the same based on instrument, speaker, or even genre of songs in your iTunes collection
* Recognize different trackpad or accelerometer gestures
* Do all this training on-the-fly, in real time!
* Use MAUI to do simple visualizations of features and classification results (Mac only; not required)

We'd love you to download smirk, read about it, ask questions about it, abuse it, request new features, and contribute to it yourselves. We've set up a wiki at http://wiki.cs.princeton.edu/index.php/Chuck/SmirK where you can do all these things.

Meanwhile, we're continuing to hack away...

Cheers,
Rebecca, Ge, and Perry
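
P.S. For the curious, here's a rough flavor of how the pieces fit together in ChucK. This is only a toy sketch, not the actual SmirK class hierarchy (the class and file names in the toolkit are different; see the real examples on the site): it pulls two standard UAna features (spectral centroid and spectral RMS) off the mic and labels each frame with a tiny hand-rolled 1-nearest-neighbor rule over a few made-up training points.

    // analysis chain: mic -> FFT, feeding two feature extractors
    adc => FFT fft => blackhole;
    fft =^ Centroid cent => blackhole;
    fft =^ RMS rms => blackhole;

    // FFT parameters
    1024 => fft.size;
    Windowing.hann(1024) => fft.window;

    // toy training data: one (centroid, rms) point per example, plus a class label
    // (with smirk you'd record these from real training input instead of typing them in)
    [0.10, 0.12, 0.45, 0.50] @=> float trainCent[];
    [0.05, 0.04, 0.02, 0.03] @=> float trainRms[];
    [0, 0, 1, 1] @=> int labels[];

    // 1-nearest-neighbor: return the label of the closest training point
    fun int classify( float c, float r )
    {
        -1 => int best;
        999999999.0 => float bestDist;
        for( 0 => int i; i < trainCent.cap(); i++ )
        {
            (c - trainCent[i]) * (c - trainCent[i])
                + (r - trainRms[i]) * (r - trainRms[i]) => float d;
            if( d < bestDist ) { d => bestDist; labels[i] => best; }
        }
        return best;
    }

    // analysis loop: one (non-overlapping) frame per pass
    while( true )
    {
        // trigger analysis up each chain
        cent.upchuck();
        rms.upchuck();
        // grab the feature values and classify this frame
        classify( cent.fval(0), rms.fval(0) ) => int label;
        <<< "centroid:", cent.fval(0), "rms:", rms.fval(0), "class:", label >>>;
        // advance time by one FFT frame
        fft.size()::samp => now;
    }

In smirk proper, the hard-coded points would instead come from the keyboard- or MAUI-based training interfaces, and the toolkit's classifier code (k-nearest-neighbor, AdaBoost, etc.) does the bookkeeping for you.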