I agree with you; it would be a useful fallback. It would never be a
primary solution, though: it is essentially screen-scraping, and would
have the same disadvantages as the screen-scraping approaches that were
used before accessibility APIs:
* Accessibility APIs make it the app developer's responsibility
Hi Shadyar,
Not an immediate solution at all, but I would say that AI (Machine
Learning) which snapshots the screen or window, extracts the text from
the snapshot image, and then reads it aloud might be superior to legacy
accessibility API paradigms, which rely on the application developer