On 7/8/2012 4:24 AM, Rado Q wrote:
> Moin moin,

> =- Eric S. Johansson wrote on Sun  8.Jul'12 at  3:23:11 -0400 -=

> > It would be super superb if I could use some form of client-side
> > editing for sieve filtering, but it's nonessential.
> Not supported yet, and probably never, since there are other/better
> ways?!

I can understand that. It is complicated enough that you need to use it every day in order to write the rules even vaguely correctly. I have some ideas on creating a simplifying/guiding user interface (which is not one of those "wizards", dammit), but that'll wait until the tools I need to write code are operational.
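
Even a simple filing rule takes a fair bit of syntax to get right; a
minimal Sieve rule of the sort I mean (the folder and address here are
just illustrations):

    require ["fileinto"];

    # File list traffic into its own folder.
    if address :is "from" "mutt-users@mutt.org" {
        fileinto "lists/mutt-users";
    }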

> > I need to turn off every single bit of command-by-keystroke. The
> > reason is simple. If you're using speech recognition and you get
> > a misrecognition instead of a command, think what happens to your
> > mail buffer if three or four "random" words were dumped into mutt.
> Not sure how the interaction works in detail, but can't you insert
> some "buffer" between voice capture and passing to mutt for
> interpretation, like have it re-read for you what you said before it
> sends, so you can approve it?

That's like dealing with typos by inserting a buffer between the keyboard and mutt, having it use text-to-speech to play back the input, and then waiting for your approval every time you hit the return key. Okay, that was a little snarky, but as with typing, you want speech interfaces to be as efficient as possible, because you don't want to break your voice, especially if you have already broken your hands.

> Maybe you need another tool-layer between, which provides
> input-control. Could then be applied to other tools, too.

This might be a useful illustration of how I see the interaction between NaturallySpeaking and an editor working. The same basic flow and grammar activation would apply here:

https://docs.google.com/drawings/d/186fxIwqdEFl37m3yI1n594xQ-gcaNCVXd8WKUrxu1g8/edit?pli=1

> > So the answer is to turn off single-character commands either
> > permanently or between the detection of the start of an utterance
> > and its end.
> No, this would possibly even be the job of the terminal?!

I would need to rewrite the terminal emulator to shut off keyboard input? There's a chance we misunderstand each other.
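
For the "permanently" case, mutt itself can neutralize individual
single-character commands in muttrc, e.g. (the keys shown are just
examples):

    # Disable the single-key delete command in the index and pager.
    bind index d noop
    bind pager d noop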

There are two modes with speech recognition: commands and dictation. Dictation is the injection of ordinary words, like the message I'm generating here: full of errors, with lots of correction operations needed. Sadly, this does mean that the Select-and-Say features of NaturallySpeaking need to be available.

The second mode, command mode, is developed by defining a grammar and action routines associated with different parts of the grammar graph. In the example I gave you, the grammar contains the phrase "kill sentence", and the action routine associated with that phrase tells the editor to kill the sentence and return the killed sentence.
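
Roughly, something like this with the Dragonfly Python library as the
scripting layer on top of NaturallySpeaking (just a sketch; the action
routine body is a stand-in for whatever call the editor really exposes):

    from dragonfly import Grammar, MappingRule, Function

    def kill_sentence():
        # Stand-in action routine: a real one would tell the editor
        # to delete the current sentence and hand back the killed text.
        print("editor: kill current sentence, return killed text")

    class EditCommands(MappingRule):
        # Spoken phrases mapped to their action routines.
        mapping = {
            "kill sentence": Function(kill_sentence),
        }

    grammar = Grammar("editor commands")
    grammar.add_rule(EditCommands())
    grammar.load()  # stays active until grammar.unload()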

So maybe one way to look at a speech-accessible mutt is to have focus on a text window which has all of the necessary plaintext, Select-and-Say-enabled features, and a side window displaying mutt data. The text window is the focus for dictation, so that all misrecognition events are caught in that text window. The side window would then be the destination for all commands, thereby separating the two and making the system more robust against misrecognition events.
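
A minimal sketch of that split in Python, assuming the recognizer tags
each result as dictation or command (all the names here are invented):

    class TextWindow:
        # Dictation target: misrecognitions land here as harmless text.
        def __init__(self):
            self.buffer = []
        def insert(self, words):
            self.buffer.append(words)

    class MuttWindow:
        # Command target: only recognized commands ever reach mutt.
        def execute(self, action):
            print("mutt:", action)

    def route(kind, payload, text_window, mutt_window):
        # Keeping the two destinations separate means a misrecognized
        # command becomes stray text instead of random mutt keystrokes.
        if kind == "dictation":
            text_window.insert(payload)
        else:
            mutt_window.execute(payload)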

I have to think about that a little bit more. It might be an interesting alternative path for dealing with dictation and command misrecognition events.

> > The second thing I would need is the ability to query the internals
> > of mutt so I can construct context-dependent grammars that can be
> > spoken.
> What kind of "internals"?

The list of known folders, the active folder, setting the active folder, the list of known messages, etc. I may also want to be able to get/set configuration data.
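
Purely as a sketch of the shape of that interface (mutt has no such
control channel today; the socket path and command names are invented):

    import socket

    def mutt_query(command, sock_path="/tmp/mutt-ctl.sock"):
        # Hypothetical control socket: send one command line,
        # read back one line of response.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(command.encode() + b"\n")
        reply = s.recv(4096).decode().rstrip()
        s.close()
        return reply

    # The kinds of queries a grammar builder would need:
    # mutt_query("list-folders")      -> "inbox sent lists/mutt-users"
    # mutt_query("get active-folder") -> "inbox"
    # mutt_query("set active-folder lists/mutt-users")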
