On 7/30/2012 10:54 PM, Tim Chase wrote:
On 07/30/12 21:11, Eric S. Johansson wrote:
The ability for multiple people to work on the same document at
the same time is really important. Can't do that with Word or
LibreOffice. Revision tracking in traditional word processors
is unpleasant to work with, especially if your hands are broken.
If you're developing, I might recommend using text-based storage and
actual revision-control software. Hosting HTML (or reStructuredText,
or plain-text, or LaTeX) documents on a shared repository such
as GitHub or Bitbucket provides nicely for accessible documentation
as well as much more powerful revision control.
But then you hit a second layer of "doesn't really work nicely with speech
recognition". Using a markup language can actually be more difficult than using
a WYSIWYG editor. For example, with Microsoft Word, I can do most of the basics
using speech commands and I have what's called "Select-and-Say" editing
capability in the buffer. You can't do that with any other editor; they are not
integrated with NaturallySpeaking.
A few years ago I created a small-scale framework for speech recognition users
(akasha). You use a domain-specific markup notation to construct Web
applications. Instead of using classic HTML models, it used constructs that
were more appropriate to a speech-driven environment. Unlike HTML, which you
cannot write using speech recognition without a boatload of effort, akasha was
95% speakable using out-of-the-box speech recognition.
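To give a flavor of the idea (a made-up sketch, not akasha's actual notation;
every name in it is hypothetical): the point is a flat, dictation-friendly
vocabulary that a small translator expands into HTML, so nothing you speak
requires shift characters or angle brackets.

# Made-up sketch of the idea, not akasha's real notation.
# Every token is a plain, dictation-friendly word; a small translator
# expands it into the HTML you could not comfortably dictate directly.

SPEAKABLE = {
    "begin page": "<html><body>",
    "end page": "</body></html>",
    "heading": "<h1>{text}</h1>",
    "paragraph": "<p>{text}</p>",
}

def render(lines):
    """Translate lines like 'heading My Title' into HTML."""
    out = []
    for line in lines:
        for word, template in SPEAKABLE.items():
            if line == word:
                out.append(template)
                break
            if line.startswith(word + " "):
                out.append(template.format(text=line[len(word) + 1:]))
                break
    return "\n".join(out)

print(render(["begin page",
              "heading Speakable Markup",
              "paragraph Every word here can be dictated.",
              "end page"]))

The real thing did far more than this, but the 95%-speakable property falls
out of exactly this kind of vocabulary choice.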
This also brings me to the concept of how to design for speech recognition use.
You can modify an existing user interface or create a new one, either through a
speakable data format and minimal application changes, or by coupling the
application and a grammar to a recognition engine and manipulating something
that isn't speakable. From experience, the first model works well if you are
looking for relatively easy and very good accessibility; the second is required
if you are operating within a team and need to integrate with everybody else.
Unfortunately, the second technique points out just how badly designed most
software is, and that led me to the concept of no-UI (not unlike NoSQL), which
is more controversial than I want to get into right now.
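To make the contrast concrete, here is a toy sketch of the second model (plain
Python; this is not the actual NaturallySpeaking or Natlink API, and all the
names are invented): a grammar of spoken phrases gets glued onto an application
whose interface was never meant to be spoken.

# Toy sketch of the second model: grammar glued to an unspeakable UI.
# Not a real recognition-engine API; assume recognized phrases arrive
# as plain strings from the engine.

class Editor:
    """Stand-in for an application with an unspeakable interface."""
    def __init__(self):
        self.lines = []

    def insert(self, text):
        self.lines.append(text)

    def scratch(self):
        if self.lines:
            self.lines.pop()

def dispatch(editor, phrase):
    """One grammar rule per command."""
    if phrase.startswith("insert "):
        editor.insert(phrase[len("insert "):])
    elif phrase == "scratch that":
        editor.scratch()

editor = Editor()
for phrase in ["insert hello world", "insert goodbye", "scratch that"]:
    dispatch(editor, phrase)
print(editor.lines)   # ['hello world']

Every command you add means another grammar rule and more glue code, which is
exactly where the bad design of the underlying software starts to show.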
It would please me greatly if you would be willing to try an
experiment: live my life for a while. Sit in a chair and tell
somebody what to type and where to move the mouse without moving
your hands. Keep your hands gripping the arms or the sides of
the chair. The rule is you can't touch the keyboard, you can't
touch the mouse, you can't point at the screen. I suspect you
would have a hard time surviving half a day with these
limitations. No embarrassment in that; most people wouldn't make
it as far as half a day.
I've tried a similar experiment and am curious about your input device.
Eye-tracking/dwell-clicking? A sip/puff joystick? Of the various
input methods I tried, I found that Dasher[1] was the most
intuitive and had a fairly high input rate and accuracy (both
initially, and in terms of correcting mistakes I'd made). It also
had the ability to generate dictionaries/vocabularies that made more
appropriate/weighted suggestions, which might help in certain
contexts (e.g. pre-loading a Python grammar to allow choosing full
atoms in a given context).
Just ordinary speech recognition: NaturallySpeaking. Part of my hand problem is
that I no longer have good fine motor control, which sucks because I used to
enjoy drawing with pencils and juggling. I've tried Dasher and I don't have
fine enough motor control to make it work very well. Sometimes I play games
with my girlfriend at FlyOrDie, and the user interfaces for the various game
controllers are simple enough that my hands only get in the way some of the
time. Or at least that's what I say when she is beating me soundly at
billiards. :-)
Some of the ideas you've mentioned have been thought about in other contexts.
The problem is that when it comes to working with code, you have two tasks:
creating code and editing code. Which do you do more? If you're like most of
us, it is editing. That's why I made the toggle-words feature something that
toggles from a string name to a code name and vice versa. The future version
of this, and this is where I'm going to need a lot of help from the Python
community, would translate a statement in the buffer into a particular
two-dimensional form and place it in a special window that NaturallySpeaking
can operate on. The reason for converting the one-dimensional string of a
statement into a two-dimensional form is to make it easier to disambiguate
different features using speech. The idea isn't fully fleshed out because I
want to get through this one first and actually be able to start writing code
again.
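To make the toggle-words part concrete, here is a rough sketch (hypothetical
code, not my actual implementation): a dictated phrase like "get current
buffer" flips to the identifier get_current_buffer and back again.

# Hypothetical sketch of toggle words, not the actual implementation.
# A dictated string of words flips to a code name and back again.

def toggle(words):
    """Toggle 'get current buffer' <-> 'get_current_buffer'."""
    if "_" in words:
        return words.replace("_", " ")   # code name -> speakable words
    return words.replace(" ", "_")       # speakable words -> code name

print(toggle("get current buffer"))   # -> get_current_buffer
print(toggle("get_current_buffer"))   # -> get current buffer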