Well, Eric never claimed to be an expert in the subject, but nonetheless it is always interesting to hear different hypotheses from peers. Even failed hypotheses are valuable steps in the learning process. :)

One can resolve some of these issues by using the drivers provided by the screen-reader programs themselves.

My current understanding is that these drivers send commands to the synth about "what" to say and "how" to say it. I have dumped such communication by abusing the JAWS software and I see human phrases ("what") paired with lots of obscure control prefixes ("how").

Now, there are free text-to-speech solutions out there, so I wonder how hard it would be to intercept instructions meant for a hardware synth and translate them into something that eSpeak could process.

Such a hack would allow one to use a screen reader inside a virtualized FreeDOS install and actually hear stuff without needing to own a hardware gimmick. Maybe I'm naive, but this doesn't look impossible. I cannot find any information about the (vendor-specific) protocols used by these old-school synth devices, though. This needs some research.
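To make the idea concrete, here is a minimal sketch of such a translator. Everything about the wire format here is assumed, not documented: I pretend the control prefixes are ESC-introduced sequences (one command byte plus an optional numeric parameter) and that the "what" is plain printable ASCII. The dump bytes are invented for illustration.

```python
import re
import shutil
import subprocess

def extract_text(captured: bytes) -> str:
    """Drop the assumed ESC-prefixed control sequences ("how") and keep
    printable ASCII (the "what") for a software synth to speak."""
    # Assumption: commands are ESC (0x1B) + one command byte + optional digits.
    cleaned = re.sub(rb"\x1b.[0-9]*", b"", captured)
    return "".join(chr(b) for b in cleaned if 32 <= b < 127)

def speak(text: str) -> None:
    """Hand the recovered text to eSpeak, if it happens to be installed."""
    if shutil.which("espeak"):
        subprocess.run(["espeak", text], check=True)

# Invented example dump: two control sequences, then text, then one more.
dump = b"\x1bP5\x1bV9Hello, world!\x1bS2"
print(extract_text(dump))  # -> Hello, world!
```

The real protocols would of course need reverse-engineering first; this only shows the shape of the filtering step.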

Mateusz




My understanding from Joseph is that he has coded the B&S, which stands for Braille and Speak, to function using the Tinytype and ASAP screen readers as an out-of-the-box install for FreeDOS. In fact, he got permission on-list.

Karen, who is using a DECtalk right now.



On Sun, 15 Mar 2020, Eric Auer wrote:


Hi Mateusz,

Hello Karen, indeed the screen-reading protocols seem not to be as easy
as I imagined they would be. Eric hinted off-list that they may work on
a phoneme-by-phoneme basis rather than being able to process "normal"
written phrases. Also, it seems each screen reader uses its own protocol.

PROVOX claims to support things called ACCENT, AUDAPTER, BNS, BRLMATE,
DECTALK, DTLT, DTPC, LITETALK, PORTTALK, PSS. Of course none of these
names mean anything to me.

A quick look at the rather exotic assembly-dialect sources of PROVOX
tells me that there is no obvious text-to-phoneme translation algorithm,
just tables on how to pronounce special chars or how to spell things out
char by char when the user requests that. There are tables for a large
number of special chars, which seem to vary across hardware speech synth
brands, but PROVOX seems to expect that the speech synth indeed has local
CPU power and firmware to convert English text to speech itself, so the
PROVOX code does not do that. This also means you can expect trouble
with non-English text unless the synth firmware is multilingual.
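The table-driven approach described above might look something like the following sketch. The table entries are illustrative guesses, not taken from the PROVOX sources, and the function names are mine.

```python
# Illustrative stand-in for a PROVOX-style special-char pronunciation
# table; the actual tables differ per speech synth brand.
SPECIAL_CHARS = {
    "!": "bang",
    "#": "pound",
    "$": "dollar",
    "*": "star",
    "_": "underscore",
}

def spell_out(text: str) -> str:
    """Spell a string char by char, as when the user requests spelling.
    Plain letters pass through; the synth firmware pronounces them."""
    words = []
    for ch in text:
        if ch in SPECIAL_CHARS:
            words.append(SPECIAL_CHARS[ch])
        elif ch == " ":
            words.append("space")
        else:
            words.append(ch)
    return " ".join(words)

print(spell_out("a_b!"))  # -> a underscore b bang
```

Note that no text-to-phoneme work happens here at all, matching the observation that PROVOX leaves that job to the synth's firmware.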

I predict the data protocol to the external speech synths to be reduced-charset
plain English, with plenty of escape or setup sequences, and in
some cases one or two bits used as flags in each transmitted character.
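As a purely speculative sketch of the "flag bits per character" guess: if the low 7 bits carried ASCII and the high bit were a per-character flag (say, emphasis), decoding would be a one-liner. Nothing here is taken from any real synth protocol.

```python
def decode_stream(data: bytes):
    """Decode a hypothetical protocol: low 7 bits carry ASCII,
    the high bit is a per-character flag (e.g. emphasis)."""
    for b in data:
        yield chr(b & 0x7F), bool(b & 0x80)

# Invented sample: 'H' (0x48) with the flag bit set (0xC8), rest plain.
sample = bytes([0xC8]) + b"ello"
print(list(decode_stream(sample)))
# -> [('H', True), ('e', False), ('l', False), ('l', False), ('o', False)]
```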

DECtalk is a real classic; the Wikipedia page about it has some links:

https://en.wikipedia.org/wiki/DECtalk

My off-list description, by the way, was based on experiences with a
phoneme chip for embedded computing. I was indeed unaware that speech
synth hardware for PC has built-in computing power to speak plain text.

There is also a quite small DOS TSR which can speak text on the internal
PC speaker: the TSR contains phoneme recordings and has to be used with
a separate command-line tool that converts English text into phoneme-speaking
calls to the TSR. As PWM sound output was heavy work for an ancient PC, the
TSR is very bad at adjusting to modern CPUs, which are a lot faster. This
is only interesting for the nostalgically inclined audience, I would say.

Regards, Eric



_______________________________________________
Freedos-user mailing list
Freedos-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/freedos-user



