Hey, this is cool! My GF uses Dragon NaturallySpeaking, and has been
vocal with me that the lack of an equivalent under Linux is the only
reason she still uses Windows.

I recognize it's premature, but would you mind sharing a few more
details? Which Windows voice dictation software does it bridge from?
Also, to what extent does it bridge? From listening to my GF's
NaturallySpeaking use, she mainly sticks to dictation plus assorted
editing commands ("scratch that," "correct X to Y," "new paragraph,"
etc.). Are those types of editing commands supported?

Thanks, I will definitely keep an eye on this project. If it is at all
something she might use, I might look into offering development help if
there's any way to do so exclusively under Windows (i.e., bridging the
dictation component to a Linux-side test adapter to enhance command
mappings).
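
Incidentally, on the focus-change and active-window detection you ask
about below: I haven't tried this myself, but my understanding is that
the GNOME accessibility stack exposes those events through AT-SPI, and
the pyatspi bindings let you register listeners for them. A rough,
untested sketch of what I mean (event names are from the pyatspi
documentation, not from your project):

    # Rough sketch (untested): listen for focus and window-activation
    # events via AT-SPI using the pyatspi bindings.
    import pyatspi

    def on_focus(event):
        # Fires when an accessible gains or loses focus; detail1 == 1 means gained.
        if event.detail1:
            app = event.source.getApplication()
            print("focused:", event.source.name, "in", app.name if app else "?")

    def on_window_activate(event):
        # Fires when a top-level window becomes the active one.
        print("active window:", event.source.name)

    pyatspi.Registry.registerEventListener(on_focus, "object:state-changed:focused")
    pyatspi.Registry.registerEventListener(on_window_activate, "window:activate")

    try:
        pyatspi.Registry.start()   # blocks and dispatches AT-SPI events
    except KeyboardInterrupt:
        pyatspi.Registry.stop()

Someone on this list can probably confirm whether that's also the right
starting point for operating on text areas (the AT-SPI Text and
EditableText interfaces look like the place to look).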


On 6/22/2015 8:35 AM, Eric S. Johansson wrote:
> As I mentioned earlier, I was working on a tool to bridge speech
> recognition from a Windows VM to drive a Linux environment. I now have
> something good enough for plaintext dictation.  You can find it on
> GitHub: https://github.com/alsoeric/speechbridge
>
> What I need to know is how to detect events such as focus changes and
> the currently active window and application.  Later on, it would be
> useful to know how to operate on text areas (searching, selecting a region, etc.).
>

_______________________________________________
gnome-accessibility-list mailing list
gnome-accessibility-list@gnome.org
https://mail.gnome.org/mailman/listinfo/gnome-accessibility-list
