[Apologies for cross-posting]
Dear all,
Please consider submitting your recent work to this special issue.
IEEE Transactions on Affective Computing (Impact Factor 10.506)
/Special Issue on Affective Speech and Language Synthesis, Generation,
and Conversion/
*Submission Deadline:* 31st March 2022
*Paper Submission Website:* https://mc.manuscriptcentral.com/taffc-cs
*Call for Papers:* https://megastore.rz.uni-augsburg.de/get/ZIZqy6pKBT/
*Guest Editors:*
Shahin Amiriparian, University of Augsburg, Germany
Björn Schuller, Imperial College London, UK
Nabiha Asghar, Microsoft, USA
Heiga Zen, Google Research, Japan
Felix Burkhardt, audEERING / Technical University of Berlin, Germany
*Background & Motivation*
As an inseparable and crucial part of spoken language, emotions play a
substantial role in human-human conversation. They convey information
about a person’s needs, how one feels about the objectives of a
conversation, the trustworthiness of one’s verbal communication, and
more. Accordingly, substantial efforts have been made to generate
affective text and speech for conversational AI, artificial
storytelling, machine translation, and more. Similarly, there is a push
toward converting the affect expressed in text and speech, ideally in
real time and while fully preserving intelligibility, e.g., to hide
one’s emotions, for creative applications and entertainment, or even to
augment training data for affect-analyzing AI.
The rapid development of deep neural networks has increased the ability
of computers to produce natural speech and language in many languages.
Novel methodologies, including attention-based and sequence-to-sequence
Text-to-Speech (TTS) models, have shown promise in synthesizing
high-quality speech directly from text inputs. However, most TTS
systems do not convey the emotional context that is omnipresent in
human-human interaction. The lack of emotion in the generated speech
can be assumed to be a major reason for the low perceived likeability
of such systems. Conversely, generative models such as WaveNet, which
operate on raw audio waveforms rather than text inputs, can help to
condition the emotions of the produced speech. Further, variants of
generative adversarial networks (GANs), such as StarGANs or StyleGANs,
have been successfully applied to speech-based emotion conversion and
generation. Similarly, in affective natural language generation and
conversion, deep-learning approaches have considerably changed the
landscape and opened up new capabilities based on massive language
corpora and models. Yet, applications featuring human-like real-time
generation and conversion of affect in spoken and written language are
largely still to come. The research in this field is still in its
infancy and calls for a new perspective when designing neural speech
and language synthesis, generation, and conversion models that consider
human affect, enabling more natural human-AI interaction and a plethora
of further applications.
This special issue invites contributions on affective speech and
language synthesis, generation, and conversion, expanding research on
current methodologies in this field as well as on novel applications
integrating such technology. We invite contributions focusing on
theoretical and practical perspectives as well as applications.
*Topics of interest* for this special issue include, but are not limited to:
* Affective speech synthesis methods
* Affective natural language generation methods
* Methods for affect conversion in spoken and written language
* Integration of affective speech and language in conversational AI
* Evaluation methods and user studies for the above
* Databases for affective speech and language synthesis, generation,
and conversion
* Applications of affective speech and language synthesis, generation,
and conversion
*Important Dates*
* *Submission Deadline:* 31st March 2022
* *Peer Review Due:* 1st May 2022
* *Revision Deadline:* 15th July 2022
* *Final Decision:* 1st September 2022
* *Publication:* September 2022
--
Dr. Shahin Amiriparian
Chair for Embedded Intelligence for Health Care and Wellbeing
University of Augsburg
Eichleitnerstr. 30
86159 Augsburg
Germany
E-mail: shahin.amiripar...@uni-a.de
Phone: +49 (0) 821 598 - 4367
Google Scholar: https://scholar.google.com/citations?user=tM4-FpwAAAAJ