Workshop's Scope

While there is no uniformly agreed-upon definition of what constitutes safe or 
trustworthy AI, it is clear that such systems should exhibit certain 
properties. For example, systems should be robust to minor perturbations to 
their inputs and there should be some transparency about how a system arrives 
at a prediction or decision. More importantly, it is becoming increasingly 
common for deployed AI models to have to conform to requirements (e.g., legal) 
and/or exhibit specific properties (e.g., fairness). That is, it is necessary 
to verify that a model complies with these requirements. In the software 
engineering community, verification has been long studied with the goal of 
assuring that software fully satisfies the expected requirements. Therefore, a 
key open question in the quest for safe AI is how verification and machine 
learning can be combined to provide strong guarantees about software that 
learns and adapts itself on the basis of past experience. Finally, what 
are the boundaries of what can be verified, and how can and should system 
design be enhanced by other mechanisms (e.g., statistics on benchmarks, 
procedural safeguards, accountability) to produce the desired properties?

The goal of the Verifying Learning AI Systems (VeriLearn) workshop is to bring 
together researchers interested in these questions. The workshop will be held 
in conjunction with the 26th European Conference on Artificial Intelligence, 
which will take place in Kraków, Poland.

https://dtai.cs.kuleuven.be/events/VeriLearn2023

Topics of Interest

This workshop solicits papers on the following non-exhaustive list of topics:

  *   Representations and languages that facilitate reasoning and verification.

  *   Applications and extensions of software verification techniques in the 
context of machine learning.

  *   Verifying safety in dynamic systems or models.

  *   Reasoning about learned models to assess, e.g., their adherence to 
requirements.

  *   Learning models that are safe by design.

  *   Assessing the robustness of AI systems.

  *   Ways to evaluate aspects of AI systems that are relevant from a trust and 
safety perspective.

  *   Out-of-distribution detection and learning with abstention.

  *   Certification methodologies for AI systems.

  *   Concepts, approaches, and methods for identifying and dealing with the 
limits of verifiability.

  *   Approaches and case studies where verification is important for 
addressing ethical, privacy and societal concerns about AI.

  *   Case studies showing illustrative applications where verification is used 
to tackle issues related to safety and trustworthiness.

Submission Instructions and Dates

We solicit two types of papers:

- Long papers can be a maximum of 6 pages of content, plus an unlimited number 
of references, in the ECAI 2023 formatting style 
(https://ecai2023.eu/ECAI2023) and should report on novel, unpublished work 
that might not yet be mature enough for a conference or journal submission.

- Extended abstracts can be a maximum of 2 pages in the ECAI formatting style 
and should summarize recent publications that fit the workshop's scope.

Submissions should be anonymous. Papers are to be submitted in PDF format at 
https://cmt3.research.microsoft.com/VeriLearn2023

Paper submission deadline: 20/06/2023, 23:59 CET

*Contact*

Jesse Davis (firstname<dot>lastn...@kuleuven.be)

*Organizers*

  *   Jesse Davis, KU Leuven
  *   Bettina Berendt, TU Berlin, director of the Weizenbaum Institute for the 
Networked Society, KU Leuven
  *   Hendrik Blockeel, KU Leuven
  *   Luc De Raedt, KU Leuven
  *   Benoit Frenay, University of Namur
  *   Fredrik Heintz, Linköping University (https://liu.se/en/employee/frehe08)
  *   Jean-Francois Raskin, Université Libre de Bruxelles