Hi there,
Answering one point below.
On Sun, 15 Mar 2020, Mateusz Viste wrote:
> Well, Eric never claimed to be an expert in the subject, but nonetheless
> it is always interesting to hear different hypotheses from peers. Even
> failed hypotheses are valuable steps of the learning process. :)
Speaking personally, and far from objectively, these tools are how
millions of individuals interact with the world every day. Misinformation
from an unqualified individual is part of how some have given up before
ever finding the tools that already exist.
> > One can resolve some of these issues by using the actual drivers
> My current understanding is that these drivers send commands to the synth
> about "what" to say and "how" to say it. I have dumped such communication
> by abusing the JAWS software and I see human phrases ("what") paired with
> lots of obscure control prefixes ("how").
JAWS, for many reasons, is not the best example of screen reader
technology, although its marketing made it very popular.
As I said in a follow-up post, there are some tools that indeed allow the
synthesizer to speak different languages, and even sing in the case of
DECtalk.
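To make that concrete: DECtalk-family synths take inline commands embedded
right in the text stream, which is presumably the sort of "obscure control
prefix" you dumped out of JAWS. A rough sketch in C, assuming a DECtalk
wired to COM1 (the [:...] command names follow DECtalk convention, but
exact names and value ranges vary by firmware):

  #include <stdio.h>

  int main(void)
  {
      FILE *com = fopen("COM1", "w");  /* DOS device name for serial port 1 */
      if (!com) return 1;
      /* [:np] selects the Paul voice, [:rate n] sets words per minute;
         the bracketed commands are the "how", the words are the "what" */
      fputs("[:np][:rate 180] Hello from FreeDOS. ", com);
      fputs("[:rate 300] The same voice, much faster.\n", com);
      fclose(com);
      return 0;
  }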
> Now, there are free text-to-speech solutions out there, so I wonder how
> hard it could be to intercept instructions meant for a hardware synth and
> translate them into something that eSpeak could process.
...and you have discovered how many Linux distributions manage their
screen reader and speech synthesis.
It has not, as of yet, been done in DOS to my knowledge.
One major reason is the poor sound quality. eSpeak can talk, yes, but
compared to, say, a DECtalk, being understood is something else entirely.
If you want an example of what at least one DECtalk voice sounds like,
check out just about anything from Dr. Stephen Hawking.
That degree of intelligibility is what a screen reader or speech
synthesizer can, and speaking personally should, sound like.
> Such a hack would allow one to use a screen reader inside a virtualized
> FreeDOS install and actually hear stuff without the need to own a hardware
> gimmick. Maybe I'm naive, but this doesn't look impossible.
Well, no, it's not impossible; it has already been done using other tools,
as mentioned.
Still, though, the goal from an end-user standpoint is not "does it
speak" but "can I understand it, choose a degree of inflection, control
the rate", and so on.
Which is also very possible.
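For what it's worth, the plumbing for such an intercept-and-translate
layer need not be large. Here is a minimal host-side sketch in C, assuming
the virtualizer hands us the captured synth stream on stdin, and assuming
an invented framing where an ESC byte opens a control sequence and ';'
closes it (any real synth's protocol will differ):

  #include <stdio.h>

  int main(void)
  {
      FILE *tts = popen("espeak", "w");  /* espeak speaks text read on stdin */
      int c, in_cmd = 0;
      if (!tts) return 1;
      while ((c = getchar()) != EOF) {
          if (c == 0x1B)                 /* assumed: ESC opens a "how" command */
              in_cmd = 1;
          else if (in_cmd)
              in_cmd = (c != ';');       /* assumed: ';' terminates it */
          else
              fputc(c, tts);             /* plain "what" text passes through */
      }
      pclose(tts);
      return 0;
  }

A real translator would of course map the "how" commands onto eSpeak's own
rate and pitch controls rather than simply dropping them.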
> I cannot find any information about the (vendor-specific) protocols used
> by these old-school synth devices, though. This needs some research.
Some of the people who wrote those tools are still around. I know the
person behind ASAP and ASAW is still with us. That screen reader package
supports the synthesizers you reference, and others, including one
designed to talk through the sound chips built into laptops.
He would be the sort of person to contact.
I can dig up a name and e-mail address, as both are escaping me as I write.
Karen
My understanding from Joseph is that he has coded the B&S, which stands
for Braille and Speak, to function using the tinytype and ASAP screen
readers as an out-of-the-box install for FreeDOS. In fact, he got
permission on-list.
Karen, who is using a DECtalk right now.
On Sun, 15 Mar 2020, Eric Auer wrote:
>
> Hi Mateusz,
>
> > Hello Karen, indeed the screen-reading protocols seem to be not as
> > easy as I imagined they would be. Eric hinted off-list that they may
> > work on a phoneme-by-phoneme basis rather than being able to process
> > "normal" written phrases. Also it seems each screen reader uses its
> > own protocol.
>
> > PROVOX claims to support things called ACCENT, AUDAPTER, BNS, BRLMATE,
> > DECTALK, DTLT, DTPC, LITETALK, PORTTALK, PSS. Of course none of these
> > names mean anything to me.
>
> A quick look at the rather exotic Assembly dialect sources of PROVOX
> tells me that there is no obvious text to phoneme translation algorithm
> but just tables on how to pronounce special chars or to spell out things
> char by char when the user requests that. There are tables for a large
> number of special chars which seem to vary across hardware speech synth
> brands but PROVOX seems to expect that the speech synth indeed has local
> CPU power and firmware to convert English text to speech itself, so the
> PROVOX code does not do that. This also means you can expect trouble
> with non-English text unless the synth firmware is multilingual.
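That matches the mechanism one would expect: a lookup from special
characters to spoken words, consulted in punctuation or spell-out mode. A
tiny sketch of the idea in C (the entries are illustrative, not PROVOX's
actual tables):

  #include <stdio.h>

  /* Illustrative per-character pronunciation table -- not PROVOX's data. */
  static const char *say_char(int c)
  {
      switch (c) {
      case '#':  return "number sign";
      case '$':  return "dollar";
      case '%':  return "percent";
      case '@':  return "at";
      case '\\': return "backslash";
      default:   return NULL;   /* ordinary chars go to the synth unchanged */
      }
  }

  int main(void)
  {
      const char *p = "user@host 100%";
      for (; *p; p++) {
          const char *word = say_char(*p);
          if (word) printf(" %s ", word);
          else      putchar(*p);
      }
      putchar('\n');
      return 0;
  }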
>
> I predict the data protocol to the external speech synths is
> reduced-charset plain English, with plenty of escape or setup sequences
> and, in some cases, one or two bits of each transmitted character used
> as flags.
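If that guess holds, decoding such a stream is mostly bit masking. A small
sketch under that assumption (the flag-bit assignment here is invented for
illustration, not any documented protocol):

  #include <stdio.h>

  int main(void)
  {
      /* hypothetical: high bit flags a character, low 7 bits carry text */
      unsigned char stream[] = { 'H' | 0x80, 'i', ' ', 't', 'h', 'e', 'r', 'e' };
      size_t i;
      for (i = 0; i < sizeof stream; i++) {
          int flagged = stream[i] & 0x80;   /* vendor-specific flag bit */
          int ch      = stream[i] & 0x7F;   /* reduced-charset payload */
          printf("%c%s", ch, flagged ? "(flagged)" : "");
      }
      putchar('\n');
      return 0;
  }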
>
> DECtalk is a real classic, the wikipedia page about it has some links:
>
> https://en.wikipedia.org/wiki/DECtalk
>
> My off-list description, by the way, was based on experiences with a
> phoneme chip for embedded computing. I was indeed unaware that speech
> synth hardware for PC has built-in computing power to speak plain text.
>
> There is also a quite small DOS TSR which can speak text on the internal
> PC speaker: The TSR contains phoneme recordings and has to be used with
> a separate command-line tool that converts English text into
> phoneme-speaking calls to the TSR. As PWM sound output was heavy work
> for ancient PCs, the TSR is very bad at adjusting to modern CPUs, which
> are a lot faster. This is only interesting for the nostalgically
> inclined audience, I would say.
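For anyone tempted to revive that TSR: the usual cure is to pace the
output from the 8254 timer rather than from delay loops calibrated to the
CPU. A minimal sketch, assuming Turbo C style port I/O (this plays a plain
tone; real phoneme playback would instead toggle the speaker data bit at
the sample rate):

  #include <dos.h>   /* outportb/inportb, Turbo C / Borland style */

  void tone(unsigned hz)   /* pitch comes from the PIT, not the CPU speed */
  {
      unsigned divisor = (unsigned)(1193182L / hz);  /* 1.193182 MHz clock */
      outportb(0x43, 0xB6);                  /* channel 2, lo/hi byte, mode 3 */
      outportb(0x42, divisor & 0xFF);
      outportb(0x42, divisor >> 8);
      outportb(0x61, inportb(0x61) | 0x03);  /* timer gate + speaker enable */
  }

  void quiet(void)
  {
      outportb(0x61, inportb(0x61) & 0xFC);  /* speaker off */
  }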
>
> Regards, Eric
_______________________________________________
Freedos-user mailing list
Freedos-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/freedos-user