Call for Papers: *IEEE Transactions on Neural Networks and Learning Systems*

*Special Issue on Deep Representation and Transfer Learning for Smart and
Connected Health*


*Important Dates*

31 March 2019 – Deadline for manuscript submission

30 June 2019 – Reviewers’ comments to authors

31 August 2019 – Submission of revised papers

31 October 2019 – Final decision of acceptance

30 November 2019 – Camera-ready papers

December 2019-February 2020 – Tentative publication date



*Aims and Scope:*

Deep neural networks have proven to be efficient learning systems for
supervised and unsupervised tasks in a wide range of challenging
applications. However, learning complex data representations using deep
neural networks can be difficult due to problems such as lack of data,
exploding or vanishing gradients, high computational cost, or incorrect
parameter initialization, among others. Transfer Learning (TL) can
facilitate the learning of data representations by taking advantage of
transferable features learned by a model in a source domain, and adapting
the model to a new domain. This approach has been shown to produce better
generalization performance than random weight initialization, and has
produced state-of-the-art results in signal and visual processing tasks.
Accordingly, emerging and challenging domains, such as smart and connected
health (SCH), can benefit from new theoretical advancements in
representation and transfer learning (RTL) methods.



One of the main advantages of TL is its potential to be applied in a wide
range of domains and for different learning tasks. For instance, in facial
affect recognition, the representations learned by a deep model trained to
recognize faces in an unsupervised fashion can be employed and improved by
a second model to perform emotion recognition in a supervised manner.
Nonetheless, learning data representations that provide a good degree of
generalization performance remains a challenge. This is due to issues such
as the inherent trade-off between retaining too much information from the
input and learning universal features. Similarly, despite the obvious
advantages of TL, effective use of parameters learned by a given model in a
different domain is a challenge, particularly when there is limited data in
the target domain. This challenge increases when the joint distribution of
the input features and output labels is different in the target domain. In
addition, determining how to reject unrelated information or remove dataset
bias during TL is yet to be solved. Other limitations are caused by lack of
existing theoretical approaches in RTL capable of explaining or
interpreting the learning process of deep models, or determining how to
best learn a set of data representations that are ideal for a given task,
whether in a regression or classification problem. Therefore, new
theoretical mechanisms and algorithms are required to improve the
performance and learning process of deep neural networks.
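The pretrain-and-adapt workflow described above can be sketched in a few lines. The toy example below is purely illustrative (it is not tied to any system mentioned in this call; the random-projection "feature extractor" and all names are assumptions): a frozen stand-in for features learned on a source domain is reused, and only a new linear head is trained on limited target-domain data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for transferable lower layers learned on a large source
# corpus: a frozen nonlinear projection (illustrative, not a real
# pretrained network).
W_feat = 0.1 * rng.normal(size=(20, 8))

def features(X):
    """Frozen 'source-domain' representation shared across tasks."""
    return np.tanh(X @ W_feat)

# Limited labelled data in the target domain (synthetic).
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Adapt to the target task by training only a new logistic-regression
# head on top of the frozen features.
w, b, lr = np.zeros(8), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w + b)))  # sigmoid
    grad = p - y                                      # dLoss/dlogit
    w -= lr * features(X).T @ grad / len(y)
    b -= lr * grad.mean()

# Training accuracy of the adapted head.
acc = ((features(X) @ w + b > 0) == (y > 0.5)).mean()
```

Freezing the shared layers and fitting only the head mirrors the common fine-tuning recipe in deep learning frameworks; with more target-domain data one would typically unfreeze and fine-tune the feature layers as well.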



Despite these constraints, RTL will play an essential role in building the
next generation of intelligent systems designed to assist humans with
their daily needs. Consequently, domains of great interest to human
society, such as SCH, will benefit from new advancements in RTL. For
instance, one of the main challenges in designing effective SCH
applications is overcoming the lack of labelled data. RTL can overcome this
limitation by training a model to learn universal data representations on
larger corpora in a different domain, and then adapting the model for use
in an SCH context. Similarly, RTL can be used in conjunction with generative
adversarial networks to overcome class imbalance problems by generating new
healthcare-related data, which can also be used to improve the
generalization performance of deep models in SCH applications. Furthermore,
RTL can be used to initialize and improve the learning of deep
reinforcement learning models designed for continuous learning in
patient-centered cognitive support systems, among others. Nonetheless, the
use of RTL in designing SCH applications requires overcoming problems such
as dataset bias or neural network co-adaptation.



This special issue on Deep Representation and Transfer Learning for Smart
and Connected Health invites researchers and practitioners to present
novel contributions
addressing theoretical aspects of representation and transfer learning. The
special issue will provide a collection of high quality research articles
addressing theoretical work aimed to improve the generalization performance
of deep models, as well as new theory attempting to explain and interpret
both concepts. State-of-the-art work on applying representation and
transfer learning to develop smart and connected health intelligent
systems is also welcome. Topics of interest for this special issue
include but are not limited to:


*Theoretical Methods:*

·      Distributed representation learning;

·      Transfer learning;

·      Invariant feature learning;

·      Domain adaptation;

·      Neural network interpretability theory;

·      Deep neural networks;

·      Deep reinforcement learning;

·      Imitation learning;

·      Continuous domain adaptation learning;

·      Optimization and learning algorithms for DNNs;

·      Zero and one-shot learning;

·      Domain invariant learning;

·      RTL in generative and adversarial learning;

·      Multi-task learning and ensemble learning;

·      New learning criteria and evaluation metrics in RTL;

*Application Areas:*

·      Health monitoring;

·      Health diagnosis and interpretation;

·      Early health detection and prediction;

·      Virtual patient monitoring;

·      RTL in medicine;

·      Biomedical information processing;

·      Affect recognition and mining;

·      Health and affective data synthesis;

·      RTL for virtual reality in healthcare;

·      Physiological information processing;

·      Affective human-machine interaction.


*Guest Editors*

*Vasile Palade*, Coventry University, UK

*Stefan Wermter*, University of Hamburg, Germany

*Ariel Ruiz-Garcia*, Coventry University, UK

*Antonio de Padua Braga*, Federal University of Minas Gerais, Brazil

*Clive Cheong Took*, Royal Holloway (University of London), UK



*Submission Instructions*

   1. Read the Information for Authors at
   https://cis.ieee.org/publications/t-neural-networks-and-learning-systems.
   2. Submit your manuscript at the TNNLS webpage (
   http://mc.manuscriptcentral.com/tnnls) and follow the submission
   procedure. Please clearly indicate on the first page of the manuscript and
   in the cover letter that the manuscript is submitted to this special issue.
   Send an email to the guest editors Vasile Palade (
   vasile.pal...@coventry.ac.uk) and Ariel Ruiz-Garcia (
   ariel.ruiz-gar...@coventry.ac.uk) with subject “TNNLS special issue
   submission” to notify about your submission.
   3. Early submissions are welcome. We will start the review process as
   soon as we receive your contributions.



For any other questions, please contact Ariel Ruiz-Garcia (
ariel.ruiz-gar...@coventry.ac.uk).