Controlling ChucK - building an accessible GUI
There was a thread on this list back in January about controlling ChucK via OSC and a Python package. I have a few questions:

First, I'm blind and use a screen reader. I want to build something that allows me to control ChucK shreds via a GUI. I don't like Python much for programming, and am not sure whether an OSC tool would even work with my screen reader (does anyone have a quick demo I could try, just to see if the screen reader will deal with the UI toolkit at all?).

What I was thinking of doing is building something in Mozilla's XUL language. XUL can send network packets and can also execute shell commands, so I was thinking about writing code in ChucK and then passing arguments to each .ck file to change parameters. Of course, this might not work so smoothly if we wanted to change things on the fly; for that I guess I'd need to implement something like OSC's message-passing scheme in XUL.

Any advice or suggestions? Anyone interested in reworking the Audicle so it works with a screen reader? There are so many great software synthesizers on the market now that work both stand-alone and as plugins to popular hosts like Cakewalk's Sonar, but none that I've ever tried will allow the screen reader enough control to do real sound design. With effort, one can usually figure out how to change presets, but in many cases even this is not possible. I was hoping that ChucK could be used to build a fully accessible sound designer's toolbox that would let one do things quickly and easily without having to write too much code. Think of something like PD ("Pure Data") - I've obviously never used it, because it is very graphically oriented, but you can apparently build all sorts of neat stuff by just plugging things together, without writing a line of code.

Thanx for any thoughts/suggestions... -- Rich
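The "pass arguments to each .ck file" idea maps onto ChucK's colon syntax for per-file command-line arguments (readable inside the script via `me.arg()`). A front end in any language then only has to assemble and launch the command. Here is a minimal sketch in Python; the file name `synth.ck` and the argument values are made up for illustration, and the actual launch is commented out since it assumes `chuck` is on your PATH:

```python
import subprocess

def chuck_command(ck_file: str, *args: str) -> list:
    """Build a command line that runs a .ck file with colon-separated
    arguments (ChucK's `file.ck:arg1:arg2` convention)."""
    return ["chuck", ":".join([ck_file, *args])]

cmd = chuck_command("synth.ck", "440", "0.5")
print(cmd)  # ['chuck', 'synth.ck:440:0.5']

# A GUI front end would launch it like this (assumes chuck is installed):
# subprocess.Popen(cmd)
```

The limitation Rich notes still applies: arguments are read once at shred start, so this approach cannot change parameters on the fly; that is where OSC comes in.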
Hi,
I don't know if I can answer your whole question, but ... well, first
of all for Mozilla, take a look at something I posted a while ago that
lets you send and receive OSC messages using the XUL framework:
http://www.music.mcgill.ca/~sinclair/content/blog:communication_between_xul_...
It's only been tested on Linux so far, but you might find it useful.
I'm just waiting for someone to do something more impressive with it.
Second, I don't know much about screen readers, but I'm surprised that
as a blind person you want to use Chuck in a GUI environment. I
always thought that text-based things were much friendlier to use by
the blind, isn't that the case? I figured something like Chuck would
be much easier to use by teletype than a GUI environment like
PureData. I'm curious how well this screen reader thing works for
music. I would think that control through a physical interface like a
MIDI controller, or a text-based braille reader would be much easier.
Lastly, I'm not sure Chuck would be necessarily "better" if converted
to a graphical environment. If something like PureData is what you
want instead of writing code, why not just use PureData? Chuck is
pretty much designed to be code-oriented.
Just curious,
Steve
On Nov 19, 2007 4:05 PM, Rich Caloggero wrote:
> [original message snipped]
_______________________________________________ chuck-users mailing list chuck-users@lists.cs.princeton.edu https://lists.cs.princeton.edu/mailman/listinfo/chuck-users
Hey, just saw your blog and your XUL project, but didn't see any contact for
you. So, glad you responded; thanx ...
Anyhow, text-only interfaces are inherently more accessible, but not always
optimal for the job. What I ultimately want is something like PD, or at the
very least a good synth like Dimension Pro which I can actually control in
real time. I want to play with sound, not play with code! <smile>
Last time I tried PureData, it was completely inaccessible to my screen
reader. Making something accessible takes some planning (use the right
toolkit and use standard widgets/components which already have accessibility
hooks built in), and testing with real users! For instance, if you're going to
use Java, then you either need to use Swing and its standard components, or
write your own code to properly use the Java Accessibility API on your own
AWT widgets (a fair amount of extra work, so I've been told). Since most
sighted software designers insist on creating their own widgets, accessible
software tends to be hard to find, especially in music land.
It would be very cool to have some sort of physical control over a virtual
software synth, but again this does not come in a vacuum. You need to be
able to set up your controller to control what you want, in the way you want.
This generally requires the ability to interact with both your synthesis
software and whatever software runs the controller, which again depends on
the accessibility of both.
So, perhaps your little XUL experiment will be at least a start. The great
thing about XUL/JS is that:
1. XUL produces accessible UI widgets with little effort
2. There seems to be a good number of standard and useful widgets/controls
implemented
3. JavaScript is powerful and fairly easy to program
4. You don't need complex IDEs (which are usually not accessible via screen
reader) to write fairly complex XUL/JS code
All that said, I have some basic questions about your software:
1. My screen reader only runs on Windows (boo), so what (in general terms)
do I need to get liblo running?
2. Do I need anything else to get ChucK to work with liblo?
Thanx much in advance.
-- Rich
----- Original Message -----
From: "Stephen Sinclair"
> [quoted message snipped]
Rich Caloggero wrote:
> [quoted message snipped]
You might give jMax [1] a whirl. It's in the Pd family and the UI is implemented in Swing. That said, doing sound design with jMax, Pd and friends is still programming for all practical purposes. I ended up in the ChucK world precisely because I find the interfaces of those programs inelegant for programming (and I'm particularly fond of ChucK's timing model).

Most of the experiments that I've seen (including one that I started) in binding ChucK to interfaces are mostly concerned with providing a UI for interacting with ChucK programs, not supplanting ChucK's logical building blocks. Were one to have that goal, I think starting with the STK [2] (which ChucK also uses for a lot of the heavy lifting) would probably be easier.

-Scott

[1] http://freesoftware.ircam.fr/rubrique.php3?id_rubrique=14
[2] http://ccrma.stanford.edu/software/stk/
hi rich! Rich Caloggero wrote:
> [quoted message snipped]
I am not sure I understand what you mean, so I will just make some remarks. OSC is just a network protocol that has been implemented in many different languages. I always suggest Python because I think the Python OSC implementation is quite easy to use, but you could use OSC from many other languages such as Java, C, Perl, Ruby and so on. Check the link below for a list of languages that have OSC implementations (not all are languages; some are applications or frameworks):

http://www.cnmat.berkeley.edu/OpenSoundControl/

For example, Reaktor does have OSC implemented, but it is graphical, like PD and MAX. So OSC does not have a UI at all. Many different UIs implemented in different languages can be used to create interfaces that then send OSC: for example Python and Tk, C and wxWidgets, Java and Swing, and so on. There are many combinations, each with advantages and disadvantages.

I guess in your case you should check which widget toolkits work with your screen reader and then choose a language that can be used with that toolkit. Also check that the language fulfils your needs. This is a list of toolkits at Wikipedia:

http://en.wikipedia.org/wiki/List_of_widget_toolkits

The ones that work for C and C++ are usually ported to scripting languages such as Python, Perl, Ruby and so on. For example, Tk and FLTK are very simple to use.

hope this helps somehow

enrike
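To make concrete the point that OSC is just a network protocol, here is a minimal sketch of encoding and sending one OSC message over UDP using only the Python standard library. The address `/volume` and port 6449 are made up for illustration; a real setup would match whatever address pattern and port your receiving program listens on:

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, as OSC strings require."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a single OSC message carrying one float32 argument."""
    return (osc_pad(address.encode("ascii"))   # address pattern
            + osc_pad(b",f")                   # typetag string: one float
            + struct.pack(">f", value))        # big-endian float32 argument

# Send /volume 0.5 to a (hypothetical) listener on localhost:6449.
packet = osc_message("/volume", 0.5)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 6449))
```

Any language that can build these few bytes and write a UDP datagram can speak OSC, which is why so many toolkit/language combinations work.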
Rich,

You only mention the Audicle and not the miniAudicle. The Mini is considerably simpler. I've never used a screen reader, but as far as I can see the Mini uses very straightforward text labels, with the exception of a few icons that are also reachable by hotkeys.

What I wasn't able to do was end arbitrary shreds in the Mini's console monitor without using the mouse (and thus depending on visual feedback). You can navigate the table of shreds using only the keyboard, and all fields are labelled in plain text, but on my Linux install I couldn't get the keyboard control to go to the table itself, and hitting enter on the field that should remove the shred didn't do a thing.

How far would those two additions get you? I think they would be nice to have for anyone, might especially enable people on a screen reader to navigate the whole program, and wouldn't require anything near the effort of a whole rewrite of the Audicle.

Much like Stephen I find the whole question quite mysterious. I've never used a screen reader, but my first guess would be that a plain text editor with a command line should be a perfect match for one; perhaps I don't completely understand what you are after?

To answer your other question: you won't get a thing out of ChucK without writing some code, but the good news is that per line of code you can get a lot of sound. On the forum we've been playing a game of getting the most interesting sounds out of one or two 80-character lines, with very interesting results. In a few paragraphs you could write a small synth controlled by MIDI or a joypad, and from there on it's good riddance to the whole screen with regard to playing music. I can't imagine the exact nature of your challenge, but I would guess that ChucK might well suit your needs.

Yours,
Kas.
Hmm, I actually meant the miniAudicle... Because I was never really able to use it, I'm not sure exactly what it allows one to do, but I assume that it allows one to write code in some sort of editor and then send that off to ChucK to be run. Is this true? If so, then the text editor also needs to be accessible: it needs to use a standard multi-line textbox (the standard multiline text widget in whatever toolkit you're using), and one has to hope that the screen reader knows what it is.
> What I wasn't able to do was end arbitrary shreds in the Mini's console monitor without using the mouse (and thus depend on visual feedback), you can navigate the table of shreds using only the keyboard and all fields are labelled in plain text but on my Linux install I couldn't get the keyboard control to go to the table itself and hitting enter on the field that should remove the shred didn't do a thing.

Yes, if there were a keyboard command to bring focus to the table of shreds, and keyboard commands for removing and whatever else you can do with each shred, this would help tremendously.
> How far would those two additions get you? I think those would be nice to have for anyone and might especially enable people on a screen reader to navigate the whole program and those wouldn't require anything near the effort of a whole re-write of the Audicle.

Agreed, I think they would be very helpful additions.
From altern's message on this thread, it seems that Java Swing will interface with OSC, and Sun has put a lot of work into making this toolkit accessible. Thus, when you use a standard Swing control in your program, Java knows how to:
1. keyboard-enable it by default
2. communicate the state of the control to the access technology (screen reader)

I'll have to check this out more thoroughly. Perhaps it would be more effective to write the UI in Java, but XUL just seems to be a natural for writing UIs. In any case, thanx for all the responses to my message. I'll post more when I've tried a few things...

-- Rich

----- Original Message -----
From: Kassen
To: ChucK Users Mailing List
Sent: Monday, November 19, 2007 6:01 PM
Subject: Re: [chuck-users] Controlling ChucK - building accessible GUI
> [quoted message snipped]
On 20/11/2007, Rich Caloggero wrote:
> Hmm, I actually meant the miniAudicle...

Ah, I see.

> Because I was never really able to use it, I'm not sure exactly what it allows one to do, but I assume that it allows one to write code in some sort of editor and then send that off to chuck to be run. Is this true?

Yes, that's it. It's a text editor with a built-in virtual machine and some management for the shreds that are running. It has syntax highlighting, which would be useless to you (I imagine), and it has hotkeys for starting, stopping and replacing shreds, which would be useful.

> If so, then the text editor also needs to be accessible: need to use a standard multi-line textbox (standard multiline text widget in whatever toolkit you're using) and hope that the screen reader knows what it is.

Oh. I had no idea it was all so delicate. I do think the Mini uses a fairly standard toolkit, but I'm not sure the textbox is standard, as it does syntax highlighting and so on. Well, evidently it's not standard enough for you <sad>. I fear that would mean that right now, if you'd like to work with ChucK, the command-line version is the version for you. As such that's no great limitation for using ChucK; at least I know that when I use the Mini it's mainly for the syntax highlighting, and the rest of the time I use the command line as well.

Personally I am very interested in writing instruments where I don't need to use the screen, so I can tell you those can be written. But unlike you I do enjoy toying with code as part of my design process, so the big question might be to what degree you are willing to go through a stage of writing code in order to get an instrument you can play without the need for visual feedback. It's sad that what might be a perfect solution - a hardware modular with knob labels in Braille - is such an expensive thing.

Ok, back to practical reality: what challenges would we face in using plain command-line ChucK? How usable is a command line at all, with its unusual tendency to add text at the bottom and scroll the history upward?

Yours,
Kas.
> From altern's message on this thread, it seems that Java Swing will interface with OSC, and Sun has put a lot of work into making this toolkit accessible. Thus, when you use a standard Swing control in your program, Java knows how to: 1. keyboard-enable it by default 2. communicate the state of the control to the access technology (screen reader)
There is the SwingOSC server, developed in Java Swing. It is focused on SuperCollider, but it should work with ChucK as well. It already works with PD, so it should not be so difficult to make it work with ChucK, if that is not already done.

http://opensoundcontrol.org/swing
http://www.sciss.de/swingOSC/

It works this way: there is a Java application that listens for certain OSC messages, so the application can be controlled from another language via OSC. This lets you create widgets and get back values when the user interacts with those widgets. Say I send a message from ChucK to create a window with a button; Java pops it up. If I click the button, an OSC message is received back at ChucK. So it just requires a set of classes in ChucK that mirror the server. This is not that complex, and I am sure the person who takes care of SwingOSC would be happy to help with this if you ask.

BTW, you mention that PD is not accessible - have you asked about this on the PD list? I think it is very good that people like you with special needs raise their voices in these communities; otherwise the developers tend to forget about it.

enrike
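The round trip described above (control messages out, widget events back) is easy to prototype, because anything that can parse an OSC packet can stand in for the receiving side. Here is a minimal decoder for a single int-argument message, again in pure-stdlib Python; the `/button` address is only an illustration, not part of SwingOSC's actual message set:

```python
import struct

def read_padded_string(data: bytes, offset: int):
    """Read a null-terminated, 4-byte-aligned OSC string.
    Returns (string, offset of the next field)."""
    end = data.index(b"\x00", offset)
    s = data[offset:end].decode("ascii")
    # Strings occupy a multiple of 4 bytes, including at least one null.
    next_offset = offset + ((end - offset) // 4 + 1) * 4
    return s, next_offset

def decode_osc_int_message(packet: bytes):
    """Decode an OSC message whose typetag is ',i' (one int32 argument)."""
    address, off = read_padded_string(packet, 0)
    typetags, off = read_padded_string(packet, off)
    assert typetags == ",i", "this sketch only handles one int argument"
    (value,) = struct.unpack_from(">i", packet, off)
    return address, value

# A hand-built example packet: address "/button" with int argument 1.
packet = b"/button\x00" + b",i\x00\x00" + struct.pack(">i", 1)
print(decode_osc_int_message(packet))  # ('/button', 1)
```

Bound to a UDP socket, a loop around this decoder would receive the button-click messages a GUI server sends back.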
Hi Rich,

I'm a fellow legally blind screen reader and magnifier user interested in ChucK, running Win XP and the Dolphin Supernova reader & magnifier. I think our goals are actually quite similar, although I must confess I haven't done much with ChucK lately. In addition to the cool, domain-specific programming language it has, one of the big ideas intriguing me is the ability to use a modular synth independently. Needless to say I've never really used hardware modulars, but I did use Generator and Reaktor before they went with portable and inaccessible custom controls and forgot both the keyboard interface and the user's color scheme.

In the end, despite being a programmer too, my goal is to do music rather than pure sound design or DSP, so I would like to keep things practical. The Nord Micro Modular is something I'll have to look into; however, I suppose the editor just is not accessible - at least the G2 demo release is not. So my thought was, since I cannot use Reaktor, I would like to achieve the equivalent thing somehow, at a practical level: putting together modules, binding hardware knobs to synth parameters, saving presets and playing the whole thing via MIDI in real time. In that context ChucK was not Linux-only, was more real-time than Csound apparently, and had a nice, easy programming language for the job.

So far I am very far from realizing this Reaktor killer for the blind, for several reasons. First of all, there's no GUI really, or no very easy environment to simply load a particular synth and tweak away. If there were an accessible GUI library for ChucK natively, and OO features like reflection and access to object symbol tables (read: hash or dict), one might be able to implement a facility that shows the user a list of editable synth parameters and lets you MIDI-learn any of them, saving the whole thing with the synth. This is very hard to do currently; you have to hardcode controller values and write some of the MIDI handling logic yourself, unless I'm much mistaken.

The other major point is that ChucK's orientation is still at the Reaktor core level or lower, and the focus seems to be on experimental sound design and DSP rather than on the modular end-user level of Reaktor, Vaz Modular or the Nord, for that matter. With that said, there are very few analogs of the basic Reaktor building blocks at the module level; the idea is that you program them on an ad hoc basis, which is fine if you just want to program. But if I'd like to use ChucK as a conventional modular environment, emulating existing modular patches and doing as little programming as you do in AHDL when you plug logic components together, then ChucK isn't currently very well suited to that purpose. Of course, I'd like to handle patching textually; that's much faster than trying to achieve accessible direct manipulation. And since ChucK is fully programmable, nothing would stop you from recreating, say, the Reaktor 3 module library from scratch, especially given how much content there is for ChucK already and how well the individual Reaktor modules have been documented. Still, this is a very big job and something I'm not up to yet. However, it might work well as a collaborative project if the scope were narrowed to very early releases and an essential subset of the modules. The only thing I've emulated so far is the audio switch, with a parameterizable number of inputs, if I recall correctly.

Your idea of using OSC and XUL is an interesting one, though I know very little about either; I know much more about MIDI. Of the languages mentioned I only know Perl really well. One thing that worries me, though, is the accessibility of the Mozilla components. The thing is, with Dolphin's mediocre support for Firefox, it is not quite as accessible as IE is just yet. So I'm afraid controls derived from it might be unusable at worst; Thunderbird sure is not usable with Dolphin Supernova, much as I'd like it to be. Testing is the only way to tell how well this XUL thingy works, I suppose. I suspect you're using the JAWS screen reader.

Finally, given that ChucK doesn't currently support the kind of work I'd like to do with it very well, and if you don't mind me asking: are there other environments that might serve me better? Reaktor is out of the question, and somehow I've never been truly thrilled by SynthEdit. One app in addition to Csound I always see mentioned in connection with Linux and modulars is something called SuperCollider. Is that closer to what I'm looking for, and what are the major differences from ChucK? I seem to recall Emacs being mentioned in that context; I've never gotten Emacspeak to run, although I have a Ubuntu VM here.

Last but not least, Rich, you might want to join the synth_programming Yahoo group. In addition to me there's at least one blind synth nut there, plus some mathematicians and programmers. We often talk about exotic forms of synthesis, and in fact I learned about ChucK on that list, since my speech-friendly pseudo-code representations of modular patches started to bear a resemblance to ChucK code, which someone kindly pointed out.

my $self = shift;
$self->Sleep(); # It is way beyond midnight localtime.

--
With kind regards
Veli-Pekka Tätilä (vtatila@mail.student.oulu.fi)
Accessibility, game music, synthesizers and programming:
http://www.student.oulu.fi/~vtatila

Rich Caloggero wrote:
> [quoted message snipped]
On 20/11/2007, Veli-Pekka Tätilä
Hi list, This is getting OT, so let's keep it on the list a minimal amount of time. If you prefer to reply and only comment on my bits, I recommend we go off-list or snip maximally. Changing quoting styles: V, K, A = Veli-Pekka, Kassen, altern.

Nord Modular:
K: did you hear about the Nomad project to build an open-source NM editor in Java?
V: Nope, that's news to me. I took a look at both the original editor and the Java version. Neither is really accessible enough for me to use the Nord as an accessible Reaktor substitute, though the Java version is open source, so that's doable. The original editor was better than recent versions of Reaktor accessibility-wise. Most of the important, likely custom, controls in the Java release didn't implement Swing accessibility support, or at least my screen reader couldn't make much sense out of them, oh well.

K: have no idea how a screen-reader would work with visual cables
V: Depends. I use heavy full-screen magnification myself, and the real mouse for slow direct manipulation when I have to use it. Screen readers normally follow control state changes and the keyboard focus, reading the text, state and type of the focused control in this order, and heuristically guessing labelness. There's a mode that logically iterates all on-screen controls, say the MSAA control tree, regardless of their WS_TABSTOP status, however, and in that mode, called virtual focus, you can drag and drop. So in Reaktor you go to virtual focus, arrow through the screen logically until you find the right text output label, perform a drag, arrow to the destination label (i.e. an input), and perform a drop. The reader then programmatically simulates the mouse dragging and dropping. Of course, domain languages like AT&T's Graphviz, dialogs with combo or edit boxes for connecting nodes, and tree-like editors such as Treepad are far nicer and more efficient to work with compared to a graphical tree, as far as screen readers go.
A: first you need to create in the SuperCollider language a synthdef, just a compiled cross-platform description of connections of UGens in the SuperCollider scsynth. Once this is compiled you can forget about SuperCollider, and scsynth can load and control that synthdef.
V: Thanks for the summary; this looks like a good environment to build on. However, it would still probably take a great deal of work, so if there's an easier route, in terms of laziness, I'd rather take that.

This talk about the various synth environments got me reading up on Pure Data (PD), however. I enjoy the philosophy and real-time nature of it, it is well documented, and there's experimental VST support, too. And I know C for the low-level stuff. The only major gripe I have is the graphical environment. The good things are that it is easy to get started in, it uses standard menus, and I can set up MIDI and audio graphically OK. However, most of the panel widgets it uses and the graphical signal path aren't really keyboard accessible or programmatically screen reader accessible.

Given that PD is described as a dataflow language, is there a textual, human-readable and -writable format in which I could do PD patches without the graphics? Put another way, what did the author use for debugging before the GUI? At least there's some kind of exchange format for Max, so I could probably write Perl scripts around that as a desperate measure. Can I run PD on the command line as well? Ideally I'd definitely hope there's a dedicated language: to which format do the graphical widgets reduce internally? Is there PD bytecode like there's a virtual machine in ChucK?

I've only done one course in it, being mostly a usability guy and a bit of a computer scientist, but I did circuit design using Altera's hardware design language, AHDL. That is a dedicated language, the gist of which is to patch existing logic components together, which can also be performed graphically in an editor.
So something Altera-like, with Perlish slices, Ruby-ish anonymous functions and other goodies would be ideal, i.e. too good to be true. Not to mention ChucKian time handling; that's something I like very much, along with the assignment operator. -- With kind regards Veli-Pekka Tätilä (vtatila@mail.student.oulu.fi) Accessibility, game music, synthesizers and programming: http://www.student.oulu.fi/~vtatila
A: first you need to create in the SuperCollider language a synthdef, just a compiled cross-platform description of connections of UGens in the SuperCollider scsynth. Once this is compiled you can forget about SuperCollider, and scsynth can load and control that synthdef.
V: Thanks for the summary; this looks like a good environment to build on. However, it would still probably take a great deal of work, so if there's an easier route, in terms of laziness, I'd rather take that. This talk about the various synth environments got me reading up on Pure Data (PD), however. I enjoy the philosophy and real-time nature of it, it is well documented, and there's experimental VST support, too. And I know C for the low-level stuff.
The only major gripe I have is the graphical environment. The good things are that it is easy to get started in, it uses standard menus, and I can set up MIDI and audio graphically OK. However, most of the panel widgets it uses and the graphical signal path aren't really keyboard accessible or programmatically screen reader accessible.
Given that PD is described as a dataflow language, is there a textual, human-readable and -writable format in which I could do PD patches without the graphics? Put another way, what did the author use for debugging before the GUI? At least there's some kind of exchange format for Max, so I could probably write Perl scripts around that as a desperate measure. Can I run PD on the command line as well?
I don't really know the answer to these questions. I just use PD, so I am not sure how it works behind the scenes. I know it is possible to code extensions in C, but that's all.
On Nov 21, 2007 12:18 PM, Veli-Pekka Tätilä
Given that PD is described as a dataflow language, is there a textual, human-readable and -writable format in which I could do PD patches without the graphics? Put another way, what did the author use for debugging before the GUI? At least there's some kind of exchange format for Max, so I could probably write Perl scripts around that as a desperate measure. Can I run PD on the command line as well?
The file format is described here: http://puredata.info/docs/developer/fileformat. I don't know if it actually is "human readable", but you could probably figure out how to write patches as text (or develop your own frontend). The thing is that PD's server-client architecture is not all that well designed, so I am not sure how far you could get with it.

You may wish to take a look at DesireData, https://devel.goto10.org/desiredata, which is a fork of Pure Data where the server-client separation is hopefully better. This project isn't necessarily moving along very quickly, but perhaps with a higher user base (and maybe some coding help) it could develop into something worthwhile. The main developer is pretty busy with various other projects.

What might interest you even more is nova, https://tim.klingt.org/nova/, another PD fork which also separates the server-client architecture, and AFAIK you can describe the dataflow with a textual syntax. I have not used it yet, but although it's a fairly young project, Tim is apparently already using it for live performances.

I did not follow this discussion up till now, so I am sorry if I miss some important point, but perhaps Csound would be an option for you. The drawback of using Csound is that it is not really suitable for making sounds "live" (you must define your instrument and compile it before you can use it), but there are interfaces for Java, C#, C++, Tcl, Python, Lisp and whatnot which allow you to use Csound from within those programming languages. Perhaps it could lead to some highly personalized and specific use.
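For what it's worth, .pd patch files really are plain text, so a script can generate them without ever touching the GUI. A minimal sketch, assuming the record syntax from the file format page linked above ("#N canvas" opens a window, "#X obj" places an object, "#X connect" wires two objects, numbered from 0):

```python
# Sketch: generate a minimal Pure Data patch file (a 440 Hz sine to the
# audio output) as plain text, following the documented record syntax.
lines = [
    "#N canvas 0 0 450 300 10;",
    "#X obj 50 50 osc~ 440;",   # object 0: 440 Hz sine oscillator
    "#X obj 50 100 *~ 0.1;",    # object 1: scale the amplitude down
    "#X obj 50 150 dac~;",      # object 2: audio output
    "#X connect 0 0 1 0;",      # osc~ outlet 0 -> *~ inlet 0
    "#X connect 1 0 2 0;",      # *~ -> dac~ left channel
    "#X connect 1 0 2 1;",      # *~ -> dac~ right channel
]
patch = "\n".join(lines) + "\n"

with open("sine.pd", "w") as f:
    f.write(patch)
```

Opening sine.pd in PD should show the three boxes wired up. As for the command-line question: pd can be started without its GUI via `pd -nogui sine.pd`, which at least gets the audio engine running headless.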
Ideally I'd definitely hope there's a dedicated language, to which format do the graphical widgets reduce internally, is there PD bytecode like there's a virtual machine in ChucK?
No, PD is not a compiled language. Unlike SuperCollider, Csound or ChucK, the file simply describes the order in which the various objects are connected. The beauty of it is that the audio processing runs all the time while you patch, so you don't have to reload the file every time you make a change.
So something Altera-like, with Perlish slices, Ruby-ish anonymous functions and other goodies would be ideal, i.e. too good to be true. Not to mention ChucKian time handling; that's something I like very much, along with the assignment operator.
Nova might be your cup of tea, but I am afraid it follows the PD model of audio signal treatment, which is block-based. By default those are 64-sample blocks. You could run it at 1-sample blocks, but it becomes very inefficient for real-time work at that point. Hope that helps, ./MiS
Finally, given that ChucK doesn't currently support the kind of work I'd like to do with it very well, and if you don't mind me asking, are there other environments that might serve me better? Reaktor is out of the question, and somehow I've never been truly thrilled by SynthEdit. One app, in addition to Csound, that I always see mentioned about Linux and modulars is something called SuperCollider. Is that closer to what I'm looking for, and what are the major differences from ChucK? I seem to recall Emacs being mentioned in that context; I've never gotten Emacspeak to run, although I have an Ubuntu VM here.
Hi Rich and others, some more ideas after reading Veli-Pekka's mail; not sure if this is useful for you, but here it goes...

One of the most interesting features of SuperCollider is the separation between the sound engine (scsynth) and the language (sclang). Again, they talk to each other via OSC. This effectively means that you can control scsynth from any system that can send OSC, because that is actually what sclang does: send OSC messages to scsynth.

The system works as follows. First you need to create, in the SuperCollider language, a synthdef; this cannot be done from anywhere but the SuperCollider language as far as I know. A synthdef is just a compiled cross-platform description of connections of UGens (Unit Generators) in scsynth. Once this is compiled you can forget about SuperCollider, and scsynth can load and control that synthdef. The scsynth control commands are quite well documented, and they allow for pretty much anything you might want to do: load synthdefs, change the connections between synthdefs, load sound files...

This means that if you are not especially interested in sound design, you could get a set of compiled basic synthdefs from somebody else and build on top of that whatever you want, without needing to code a line in the SuperCollider language. The synthdefs are your synth building blocks. You could patch them as you need via OSC and control their values. If you wanted to go into detailed sound design, then you would need to go into designing your own synthdefs. But there are many already done; you might get some by asking people on the SuperCollider list. You just need the compiled file, plus documentation about which parameters it takes and which info it sends back, if any. That's all.

I use this from Python and it works fine. It seems a bit weird but actually works really well. The scsynth sound quality is very good and it is really efficient CPU-wise. It is also totally cross-platform.

enrike
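To make the "control scsynth from anywhere via OSC" point concrete, here is a minimal sketch in plain Python with only the standard library. The /s_new command (create a synth node from a loaded synthdef) and scsynth's default UDP port 57110 come from the SuperCollider server command documentation; the synthdef name "mysynth" and the node/group numbers are made-up values for illustration:

```python
import struct

def osc_message(address, *args):
    """Encode a simple OSC message (supports str, int and float arguments)."""
    def pad(b):
        # OSC strings are NUL-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        else:
            tags += "s"
            payload += pad(str(a).encode())
    return pad(address.encode()) + pad(tags.encode()) + payload

# /s_new args: synthdef name, node ID, add action, target group ID
packet = osc_message("/s_new", "mysynth", 1000, 0, 0)

# To actually drive a running scsynth, send the packet over UDP, e.g.:
#   import socket
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#       packet, ("127.0.0.1", 57110))
```

The same encoder covers parameter changes (/n_set with a control name and a float), which is exactly the kind of on-the-fly tweaking Rich described; a XUL/JavaScript front end would just need to build the same byte layout.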
Hi Veli-Pekka. Wow, this is a very informative and interesting post. I don't have nearly the experience you do with synths, especially software ones. I've used a couple of hardware modules, but have no front-panel access so can't really tweak much. I do think our goals are similar. I don't know how Reaktor or the Nord or whatever work. I do have a vague sort of abstract sense of what I want the software to feel like... I basically am a tweaker, not necessarily a programmer, although I have done a bit of programming in my time.
at a practical level, putting together modules, binding hardware knobs to synth parameters, saving presets and playing the whole thing via MIDI in real time.
Yes, this is what I'm aiming for myself. I envision a system with some standard set of virtual instruments coded in ChucK. Each instrument would live in its own file, which could get displayed in a listbox in the UI. When a file was loaded in the UI, it would just call ChucK on it to load it into its VM and start it running in parallel with all the other currently running instruments. The UI would maintain a list of currently running instruments, and navigating this list would allow you to set parameters for that instrument, including ins and outs.

The instruments could be implemented as stand-alone ChucK programs. When run, each program would open an OSC connection and then wait for control messages from the UI. These messages would either be requests for state and available parameter data, or commands to modify the state of a parameter. You could have some sort of mechanism that allowed each ChucK module to tell the UI what parameters it has available for tweaking. The UI would be responsible for keeping track of what parameters were available for each module, and would allow you to set/change any of them on the fly. ChucK has many types of unit generators, and has filters and basic effects like reverb already in its standard object library.
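The instrument side of that architecture can be sketched with ChucK's built-in OscRecv/OscEvent classes. This is only a rough illustration; the port number (9000) and the /instr/freq address are arbitrary choices, and a real module would register one event per tweakable parameter:

```chuck
// sketch: a stand-alone ChucK instrument waiting for OSC control messages
SinOsc s => dac;
0.2 => s.gain;
220.0 => s.freq;

// listen for OSC on an arbitrary port
OscRecv recv;
9000 => recv.port;
recv.listen();

// register interest in a float message that sets the frequency
recv.event( "/instr/freq, f" ) @=> OscEvent freqEvent;

while( true )
{
    freqEvent => now;                  // block until a message arrives
    while( freqEvent.nextMsg() != 0 )
        freqEvent.getFloat() => s.freq;
}
```

The UI would then only need to send `/instr/freq` with a float to retune the running shred, with no reload required.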
there are very few analogs of the basic Reaktor building blocks at the module level, the idea is that you program them in on an ad hoc basis,
Can you describe a bit more about what a Reaktor module might look like? Again, it seems that ChucK has a lot of the basic building blocks available. We'd need to come up with an architecture to tie the GUI to ChucK, and then come up with a standard module or instrument class, which would make the writing of each real instrument more straightforward since it would take care of the parameter handling etc. We could start off with one basic single-sample instrument, and one acoustically modeled instrument. I can envision eventually having some sort of SFZ parser which could read .sfz instrument files and then spit out equivalent ChucK code based on the same sample data. Since .sfz files are simple text files, and the samples come separately, this might not be so hard.
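Since .sfz files are plain text, the parsing half of that idea is indeed small. A minimal sketch, assuming only the basic SFZ layout of <region> headers followed by opcode=value pairs (real files also have <group> headers, comments, and values containing spaces, which this toy tokenizer does not handle):

```python
# Minimal SFZ region parser: collect opcode=value pairs under each <region>.
# Limitation: splitting on whitespace breaks sample paths containing spaces.
def parse_sfz(text):
    regions = []
    current = None
    for raw in text.splitlines():
        line = raw.split("//")[0]          # strip line comments
        for token in line.split():
            if token.lower() == "<region>":
                current = {}               # start a new region
                regions.append(current)
            elif "=" in token and current is not None:
                key, _, value = token.partition("=")
                current[key] = value
    return regions

# hypothetical two-region drum kit (file names are made up)
example = """
<region> sample=kick.wav key=36
<region> sample=snare.wav key=38
"""
regions = parse_sfz(example)
```

From the resulting list of dicts, a generator could emit one ChucK SndBuf-based shred per region, mapping each `key` opcode to a MIDI note number.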
very big job and something I'm not up to yet. However, this might work well as a collaborative project, if the scope was narrowed to very early releases and an essential subset of the modules. The only thing I've emulated so far is the audio switch, with a parameterizable number of inputs, if I recall correctly.
Can you say a bit more about this audio switch? Is this in ChucK? Did it
have a GUI or was it MIDI controllable, etc?
Hope this makes sense... Maybe I'm getting too low-level here...
-- Rich
----- Original Message -----
From: "Veli-Pekka Tätilä"
Hi there again. Hope you're having a good holiday.
I've not forgotten about the XUL interface to ChucK idea. I've been working
on a little test page which summarizes the basic XUL elements and documents
some screen reader hints. I've only tested with Jaws, so would be
interested to see how Dolphin handles it.
Try the following:
http://www.mit.edu/~rjc/xulTests/tests.zip
If you unzip that in Windows and then click on "run simple as chrome", this
should launch firefox with the page loaded, and without any of the browser
window furniture. This is probably the mode in which we'll want to use it,
and it is the mode which seems to work best with Jaws. The XUL document
explains more. Note you can also just open the simple.xul file with firefox
and then turn off all of your screen reader's virtual buffer handling stuff
(does Dolphin even have this concept? - never used HAL).
I want to include more element types - the tree element is definitely
missing...
I'd recommend installing the latest firefox nightly build:
http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/firefox-...
If that doesn't work, try:
http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/
and choose the windows installer file ...
Even if this ChucK thing never gets off the ground, I still like XUL and
what the Mozilla folks are doing. I think it is a powerful way to write
GUI-based apps which does not require fancy visually-based gui builders.
The suite of technologies (i.e. XUL/XBL/HTML + CSS + JS) keeps semantics
separate from presentation and can produce some very powerful results fairly
quickly. Add the OS access through XPCOM and you have a great way to write
GUIs for almost anything.
-- Cheers, Rich
----- Original Message -----
From: "Veli-Pekka Tätilä"
OOPS! That wasn't supposed to go to the list.
-- Rich
----- Original Message -----
From: "Rich Caloggero"
participants (7)
- altern
- Kassen
- Michal Seta
- Rich Caloggero
- Scott Wheeler
- Stephen Sinclair
- Veli-Pekka Tätilä