*ECML/PKDD Workshop on neuro-symbolic metalearning and AutoML*

This workshop explores different types of meta-knowledge, such as
performance summary statistics or pre-trained model weights. One way of
acquiring meta-knowledge is to observe learning processes and represent
what was observed in a form that can later be used to improve future
learning processes. AutoML systems typically exploit meta-knowledge
acquired from a single task, e.g., by modelling the relationship between
hyperparameters and model performance. Metalearning systems, on the other
hand, normally exploit meta-knowledge acquired on a collection of machine
learning tasks. Such meta-knowledge can be used not only to select the
best workflow(s) for the current task, but also to adapt or fine-tune a
prior model to the new task. Many current AutoML and metalearning systems
exploit both types of meta-knowledge. Neuro-symbolic systems explore the
interplay between neural network-based learning and symbol-based learning
to combine the strengths of both. In doing so, they use existing knowledge
either as a concrete symbolic representation or as a transformed version
of that representation suited to the learning algorithm. The goal of this
workshop is to explore ways in which ideas can be cross-pollinated between
the AutoML/metalearning and neuro-symbolic learning research communities.
This could lead, for example, to systems with interpretable meta-knowledge
and to tighter integration between machine learning workflows and
automated reasoning systems.
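
To make the distinction above concrete, here is a minimal illustrative
sketch (not part of the call; the data, names, and numbers are
hypothetical) of the two kinds of meta-knowledge, written in Python with
scikit-learn:

```python
# Illustrative sketch only: hypothetical data and configuration names.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Single-task meta-knowledge (AutoML-style): a surrogate model mapping
# hyperparameter configurations to observed validation performance.
rng = np.random.default_rng(0)
configs = rng.uniform(0.0, 1.0, size=(20, 2))      # e.g. (learning rate, regularisation)
scores = 1.0 - ((configs - 0.5) ** 2).sum(axis=1)  # synthetic performance values
surrogate = GaussianProcessRegressor().fit(configs, scores)

# The surrogate can then rank unseen configurations for the current task.
candidates = rng.uniform(0.0, 1.0, size=(5, 2))
best = candidates[np.argmax(surrogate.predict(candidates))]
print("most promising candidate:", best)

# Cross-task meta-knowledge (metalearning-style): performance summaries
# collected over a collection of prior tasks, keyed by task meta-features,
# which can warm-start the search or model choice on a new task.
meta_knowledge = {
    # (n_instances, n_features) -> best configuration found on that task
    (1_000, 10): np.array([0.45, 0.52]),
    (50_000, 300): np.array([0.10, 0.80]),
}
```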

Main research areas:

   - Controlling the learning processes
   - Definitions of configuration spaces
   - Few-shot learning
   - Elaboration of feature hierarchies
   - Exploiting hierarchy of features in learning
   - Meta-learning
   - Conditional meta-learning
   - Meta-knowledge transfer
   - Transfer learning
   - Transfer of prior models
   - Transfer of meta-knowledge between systems
   - Symbolic vs subsymbolic meta-knowledge
   - Neuro-symbolic learning
   - Explainable and interpretable meta-learning
   - Explainable artificial intelligence

Confirmed invited speakers include:

   - Artur d’Avila Garcez
     (https://www.city.ac.uk/about/people/academics/artur-davila-garcez),
     City University of London, UK
   - Bernhard Pfahringer
     (https://profiles.waikato.ac.nz/bernhard.pfahringer),
     University of Waikato, New Zealand

Deadline: 26 June
Website: https://janvanrijn.github.io/metalearning/2023ECMLPKDDworkshop

Best,
Workshop Chairs