Brief summary:

We are accepting papers for the Continual Lifelong Learning Workshop at the 
Asian Conference on Machine Learning (ACML), this December in *hybrid* format! 
The physical conference will be held December 12-14 in Hyderabad, India: 
https://www.acml-conf.org/2022/

The deadline is October 31st (AoE); submit via OpenReview: 
https://continual-lifelong-learners.github.io/cfp/


Abstract:

Deep-learning methods require extremely large amounts of data and computational 
resources, and lack the human-like ability to quickly adapt to their 
surroundings and learn continually. Continual lifelong learning methods aim to 
bridge this gap between humans and machines. In this workshop, our goal is to 
bring together Asian researchers working on this topic and connect them to 
communities in the rest of the world. Similar workshops have taken place at 
other machine learning conferences, but they have largely focused on 
researchers from North America and Europe. This workshop will provide 
networking opportunities for researchers, allowing them to collaborate and work 
towards solving the open problems of continual lifelong learning.

The workshop will focus on a broad range of topics covering many aspects of 
continual lifelong learning, including (but not limited to): fast adaptation, 
forward/backward transfer, continual reinforcement learning, skill learning, 
abstraction and representations for lifelong learning, and relationship to 
similar ideas such as multi-task learning, meta learning, curriculum learning, 
and active learning.

Invited speakers:

  *   Balaraman Ravindran <http://www.cse.iitm.ac.in/~ravi/> (Professor of 
Computer Science, IIT Madras, India)
  *   Jonghyun Choi <https://ppolon.github.io/> (Associate Professor, Yonsei 
University, South Korea)
  *   Thang D Bui <https://thangbui.github.io/> (Assistant Professor, Australian 
National University, Australia)
  *   Joseph K J <https://josephkj.in/> (PhD Student, IIT Hyderabad, India)

Call for Papers:

Machine learning and deep learning often assume that all the data is available 
at once and accessible whenever needed during training, which is restrictive. 
Ideally, we want machines to learn as flexibly as humans do: humans can adapt 
quickly to new environments and continue to learn throughout their lives. This 
is currently not possible in machine learning. Over recent years, there has 
been growing interest in developing systems that can adapt quickly. In 
continual lifelong learning, methods should handle a stream of incoming data 
from an ever-changing source, where revisiting past data is challenging or even 
impossible. Such a system should ideally be able to

  *   quickly adapt to changes,
  *   remember and faithfully transfer old knowledge to new situations,
  *   acquire new skills without forgetting old ones,
  *   adjust to drifts in data and/or tasks,
  *   adapt the model/architecture accordingly, and so on.

Despite recent advances, many challenges remain. Different studies often 
formalise the problem differently and use different benchmarks. Even where 
there are empirical successes, there is little theoretical understanding. 
Continual lifelong learning remains an important yet challenging problem that 
we hope to discuss in this workshop.

The workshop welcomes submissions on a wide variety of topics aiming to 
address such challenges. We invite submissions (up to 5 pages, excluding 
references and appendix) in the ACML 2022 format. The submission deadline is 
October 31st (AoE). All submissions will be managed through OpenReview: 
https://openreview.net/group?id=ACML.org/2022/Workshop/CLL. The review process 
is double-blind, so submissions should be anonymised. Please edit the ACML 
template so that the Editors section is left blank. Accepted work will be 
presented as posters during the workshop, and selected contributions will be 
invited to give spotlight talks.

We encourage submissions on the following topics, including but not limited to:

  *   Fast adaptation,
  *   Forward/backward transfer,
  *   Continual reinforcement learning,
  *   Bayesian continual learning,
  *   Memory-based methods for continual learning,
  *   Theory for continual lifelong learning,
  *   Applications of continual lifelong learning,
  *   Skill learning and temporal abstractions for continual RL,
  *   Unsupervised, semi-supervised, and self-supervised continual learning.

Workshop organisers: Siddharth Swaroop <https://siddharthswaroop.github.io/>, 
Martin Mundt <http://owll-lab.com/>, Khimya 
Khetarpal <https://kkhetarpal.github.io/>, Peter 
Nickl <https://team-approx-bayes.github.io/>, Lu Xu <https://x-lu.github.io/>, 
Emtiyaz Khan <https://emtiyaz.github.io/>.

Hope to see you there (virtually or in person),
Siddharth Swaroop, postdoc at Harvard University