On 3/7/07, Veli-Pekka Tätilä <vtatila@mail.student.oulu.fi> wrote:
Hi Kas and Spencer,
As I wanted to comment on both of your replies at once, I'm switching quoting
style in mid-thread. If this new one is problematic, let me know in a reply
and I'll change back. At least it makes it clear with screen readers who
is saying what, without having to parse "greater than x 3" in my head,
<smile>. Feel free to snip heavily.
Very good!
V: Ah, I see, sounds good to me. A classic accessibility question: does that
panel UI have the concept of keyboard focus, tab order and the ability to
interact with the controls using the same hotkeys for knobs that Mac and
Windows use for sliders? These are the most frequently overlooked points in
synth UI design, in terms of accessibility, and they make life hard if your
primary input medium is the keyboard and output comes via synthetic speech.
I agree. A while ago I started covering my desk and USB ports with gaming devices, so I tend to use the keyboard for those things a lot more; there is no more space for a separate mouse :-)
V: That's right, and also because sighted folks might like precise keyboard
control from time to time. Precise adjustment of values, and the ability to
type in values using numbers or modal dialogs, would rock. I've seen a
zillion synth UIs in which I'd like to type in an exact cutoff value and
cannot, though I have 102 keys at my disposal.
Indeed. Envelopes in Ableton Live come to mind.
V: It's funny you mention this. I didn't think of live performance when I
looked into ChucK, although I imagine it could be of great value in that,
too. As I input virtually everything via MIDI in my music, and record it
live in a sequencer, the live aspect of ChucK wasn't my cup of tea, though.
Well, livecoding aside, ChucK can work as a live instrument exactly the way you want it to. It takes MIDI, HID and OSC, so the sky is the limit.
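Just to give a taste, reading MIDI in ChucK takes only a few lines. A minimal sketch (device number 0 is an assumption; adjust to your setup):

MidiIn min;
MidiMsg msg;
// open the first MIDI device; bail out if it isn't there
if( !min.open( 0 ) ) me.exit();
while( true )
{
    // sleep until MIDI arrives
    min => now;
    // drain everything that queued up
    while( min.recv( msg ) )
        <<< msg.data1, msg.data2, msg.data3 >>>;
}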
V: What
I'm looking for is, eventually, a more accessible equivalent to Reaktor, whose
UI has been getting worse and worse for years in terms of accessibility. Namely
the ability to patch modular synths together and process MIDI events
either in real time or off-line in MIDI files, like you would in the Cakewalk
Application Language. Another interest is a simple testbench for audio
and MIDI mangling ideas, comparable to VST but much easier and faster to work
with.
I think it will suit this role very well. I found development in ChucK to be very fast. MIDI file import and export aren't here yet, but that seems like a useful and obvious future inclusion.
V: I've noticed that graphical programming, as in Reaktor, has a limit after
which it would be faster and more natural to express those ideas in code,
particularly if you have to use magnification and speech to begin with.
Laying out math formulae as a series of modules is one obvious example. And
another is counting stuff. To count a 16-bit integer quantity in Reaktor, I
would have to worry about the range of events and chain two binary counters
together to get the precision. Whereas in ChucK I can say:
0 => int myCounter; // And that's that.
myCounter++; // to increment...
myCounter % 65536 => myCounter; // ...and modulo to wrap at 16 bits.
Yes, indeed. I have no experience with Reaktor, but I've been with the Nord Modular since the beginning. I personally found that being able to be very precise about the order in which things are computed is a huge benefit. That's not so easy to express in graphical systems. In programs like Max it's implied in the layout, but I'd rather use the layout to express something about the structure of the program, which might not intuitively be the same thing all the time.
In Tassman I have at times run out of space on the screen in complicated patches.
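To make the ordering point concrete: in ChucK statements run strictly in the order written, and time only advances when you pass it yourself. A toy sketch (the numbers are arbitrary):

SinOsc a => dac;
SinOsc b => dac;
220 => a.freq;
while( true )
{
    // read a first, then set b from it; the order is exactly as written
    a.freq() => float f;
    f * 1.5 => b.freq;
    // nothing moves until we advance time ourselves
    100::ms => now;
}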
[interface sonification]
V: I wonder if that's been studied. As part of my uni graduation work,
investigating issues of current screen readers from the user's point of view,
which is just in its infancy, I've read plenty of stuff on accessibility. I
do know auditory icons have been researched.
Yes, the most obvious example is Windows making a click sound when you click your mouse, though all serious DAW users turn that off for obvious reasons. Beyond the metronome I have never heard of interface sonification for musical effect in music programs. Still, I find the idea intriguing.

Right now I have a prototype that works like this. In my sequencer you can select a step that you might want to add a beat or note to (or remove one from, etc.). I made it so that if the selected step comes up in the loop, a click is heard in the main mix. Since that already depends on the sequencer itself it's inherently quantised, and I picked a click that suits the other sounds. This is a big help in programming beats without any visual feedback, and it blends with the rest quite naturally; in fact I found myself "playing" those clicks already. I'm now thinking about ways to extend this and elaborate on it.
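In rough outline the click part looks something like this; a simplified sketch rather than my actual code, with the step count and tempo invented:

Impulse click => dac; // the click goes straight into the main mix
0 => int editStep; // the step currently selected for editing
(60::second / 120) / 4 => dur tick; // 16ths at 120 bpm

0 => int step;
while( true )
{
    // sound the click whenever the edited step passes in the loop
    if( step == editStep ) 1.0 => click.next;
    // ...trigger the real notes for this step here...
    (step + 1) % 16 => step;
    tick => now;
}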
I'm open to ideas here, or links to research. Beyond organ players insisting on key clicks I'm drawing blanks with regard to interface sonification for music programs. For video games it's a different matter; games like Rez and Lumines use this sort of thing all over the place to great effect, but neither of those would be much fun if you can't see well. Space Channel 5 (by the same designer as the other two) should be playable with bad vision. The Csound book has an interesting article on sonifying error messages in a way that indicates the sort of error or event that occurred; that's interesting too.
V: Quite nice, actually. But where do you store the sequences, and in what
kind of format? The Japanese Music Macro Language (MML) would be
especially nice for expressing modulation and monophonic parts in
ASCIIbetical notes. I mean, I've used it to compose mini tunes for the 8-bit
Nintendo on the PC. I still lack an official spec, though I've written a
simple MML parser for myself in Perl.
Oh, they are just arrays. When loading the program these are empty; you have to play it to get something, and it's a matter of recording it or enjoying it while it's there, because after you shut the program down the music is gone forever. This is on purpose, actually; it's a conceptual thing. I've been called crazy already <grin>. I can store and recall a few patterns while it runs, so it's feasible to write little songs in it.
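The storage itself is nothing fancy; schematically something like this, with the names and sizes invented for the sketch:

int pattern[16]; // the pattern being played and edited
int slots[4][16]; // a few slots to store and recall while it runs

fun void store( int n )
{
    for( 0 => int i; i < 16; i++ )
        pattern[i] => slots[n][i];
}

fun void recall( int n )
{
    for( 0 => int i; i < 16; i++ )
        slots[n][i] => pattern[i];
}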
V: But what kind of params would you adjust via timing? Tap tempo comes very
naturally, but I don't know about the rest. Of course, switching patterns
needs to be done in time, too.
Nearly everything, actually. For example, I have an "accent" function. Say I'm editing my lead line and I press the "accent" button on the Beatmania controller (a PlayStation one). The program will then wait for the next note of the lead line to come up and assign an accent to it. From then on there'll be an accent there.
Of course more than that is involved, but it all works in real time and it's quite fast (which it obviously needs to be for live song writing).
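The waiting part is very natural to express in ChucK. Schematically, with invented names and assuming the lead line broadcasts an Event on every note:

Event nextNote; // the lead line broadcasts this on every note
int accent[16]; // one flag per step
0 => int curStep; // kept up to date by the sequencer elsewhere

fun void accentButton()
{
    // block until the next lead note comes up...
    nextNote => now;
    // ...then flag an accent on that step from now on
    1 => accent[curStep];
}

// sporked when the button on the controller is pressed:
spork ~ accentButton();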
[MIDI]
V: Lastly, some feedback about MIDI:
I mentioned MIDI processing as one potential use and find the current API a
bit raw.
It is. There have been more comments on this, and that too is on our wish list. ChucK is a lot like Christmas for small kids: huge wish lists. On the other hand, there are free presents all year round <grin>. Unless you are in Princeton, where it must be more like an experiment in stress management and sleep deprivation.
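To illustrate the rawness, by the way: you currently get three data bytes per message and the decoding is up to you. Roughly like this (device 0 again an assumption):

MidiIn min;
MidiMsg msg;
if( !min.open( 0 ) ) me.exit();
min => now; // wait for one message
min.recv( msg );
// three raw bytes; interpreting them is your job
(msg.data1 & 0xF0) => int status; // 0x90 means note on
(msg.data1 & 0x0F) => int channel;
msg.data2 => int note;
msg.data3 => int velocity;
<<< status, channel, note, velocity >>>;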
Yours,
Kas.