On 31/10/15 16:28, Eric S. Johansson wrote:
> I'm trying to enhance my speech recognition bridge (Windows speech
> recognition bridged to Linux) by adding some contextual awareness. I
> have two needs: first, to know what application is running; second, to
> know if focus is on a text area (someplace where you dictate plain text
> in addition to commands). [1]
>
> Are there any examples or samples of code I can use to accomplish
> my goals?
Some time ago I gathered the tests I had on my machine into this (informal) repository:

https://github.com/infapi00/at-spi2-examples

So, an example of how to list the running applications registered with at-spi2 using pyatspi2:

https://github.com/infapi00/at-spi2-examples/blob/master/python-pyatspi2/list_applications.py

There isn't a specific example for your second request. You could take a look at the other examples in that repository, and at pyatspi2, to see if you can infer it. If you need more elaborate code, you could take a look at the Accerciser and Orca code.

BR

> --- eric
>
> [1] The reason for the second need is obvious only if you've lived with
> speech recognition for a while. If you are not focused on a text
> box/region, you only want command grammars active; otherwise
> inadvertent plain-text dictation can trigger all sorts of
> single-character commands. I cannot tell you the number of email
> messages I have lost as a result of inadvertent plain-text dictation
> in Thunderbird.

-- 
Alejandro Piñeiro (apinhe...@igalia.com)

_______________________________________________
gnome-accessibility-list mailing list
gnome-accessibility-list@gnome.org
https://mail.gnome.org/mailman/listinfo/gnome-accessibility-list
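P.S. Since there is no ready-made example for the second need, here is a minimal sketch of both pieces with pyatspi2: listing the applications registered with at-spi2, and listening for focus changes to decide when dictation should be enabled. This is untested outside a live AT-SPI2 session, and the role/state name sets used to classify "text areas" are my assumptions; they will likely need tuning per application.

```python
# Roles that typically accept plain-text dictation.
# ASSUMPTION: this set is illustrative and may need per-application tuning.
TEXT_ROLES = {"text", "entry", "paragraph", "terminal", "document frame"}


def is_dictation_target(role_name, state_names):
    """Decide from an accessible's role name and state names whether it
    is an editable text area where plain-text dictation makes sense."""
    return role_name in TEXT_ROLES and "editable" in state_names


def list_applications():
    """Print the applications currently registered with at-spi2."""
    import pyatspi
    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        if app is not None:
            print(app.name)


def watch_focus():
    """Switch between dictation and command-only mode as focus moves.
    Blocks in the AT-SPI2 main loop until Registry.stop() / Ctrl-C."""
    import pyatspi

    def on_focus(event):
        acc = event.source
        # Collect the accessible's state names, e.g. {"editable", "focused"}.
        states = {pyatspi.stateToString(s) for s in acc.getState().getStates()}
        if is_dictation_target(acc.getRoleName(), states):
            print("dictation ON for:", acc.name)
        else:
            print("commands only for:", acc.name)

    pyatspi.Registry.registerEventListener(
        on_focus, "object:state-changed:focused")
    pyatspi.Registry.start()
```

The classification is kept in the pure helper `is_dictation_target` so the grammar-switching policy can be tested and tuned without a running accessibility bus; only `list_applications()` and `watch_focus()` touch pyatspi.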