PAIR2Struct: Privacy, Accountability, Interpretability, Robustness,
Reasoning on Structured Data @ICLR 2022

*https://pair2struct-workshop.github.io/*


IMPORTANT DATES:

*Submission deadline:* February 26, 2022 at 12:00 AM UTC

*Author notifications:* March 26, 2022

*Workshop:* April 29, 2022, virtually

******************************************************************************************

*OVERVIEW*

In recent years, principles and guidance for the accountable and ethical use
of artificial intelligence (AI) have sprung up around the globe. In
particular, Data *P*rivacy, *A*ccountability, *I*nterpretability,
*R*obustness, and *R*easoning have been broadly recognized as fundamental
principles for applying machine learning (ML) technologies to
decision-critical and/or privacy-sensitive applications. At the same time,
in many real-world applications, data is naturally represented in structured
formalisms, such as graph-structured data (e.g., networks), grid-structured
data (e.g., images), and sequential data (e.g., text). By exploiting this
inherently structured knowledge, one can design principled approaches that
identify and use the most relevant variables to make reliable decisions,
thereby facilitating real-world deployment.

In this workshop, we will examine research progress towards the accountable
and ethical use of AI across diverse research communities, including the ML
and security & privacy communities. Specifically, we will focus on the
limitations of existing notions of *P*rivacy, *A*ccountability,
*I*nterpretability, *R*obustness, and *R*easoning. We aim to bring together
researchers from various areas (e.g., ML, security & privacy, computer
vision, and healthcare) to facilitate discussions of the related challenges,
definitions, formalisms, and evaluation protocols for the accountable and
ethical use of ML technologies in high-stakes applications with structured
data. In particular, we will discuss the *interplay* among these fundamental
principles, from theory to applications, and identify new areas that call
for additional research effort. Additionally, we will seek possible
solutions and associated interpretations through the notion of causation, an
inherent property of systems. We hope the workshop will be fruitful in
advancing the accountable and ethical use of AI systems in practice.

*CALL FOR PAPERS*

All submissions are due by February 26, 2022 at 12:00 AM UTC.

*Topics include but are not limited to:*

• Privacy-preserving machine learning methods on structured data (e.g.,
graphs, manifolds, images, and text).
• Theoretical foundations for privacy-preserving and/or explainability of
deep learning on structured data (e.g., graphs, manifolds, images, and
text).
• Interpretability and accountability in different application domains
including healthcare, bioinformatics, finance, physics, etc.
• Improving interpretability and accountability of black-box deep learning
with graphical abstraction (e.g., causal graphs, graphical models,
computational graphs).
• Robust machine learning methods via graphical abstraction (e.g., causal
graphs, graphical models, computational graphs).
• Relational/graph learning under robustness constraints (robustness in the
face of adversarial attacks, distribution shift, environment changes, etc.).

*Paper submissions:* To format your paper for submission please use the main
conference LaTeX style files
<https://github.com/ICLR/Master-Template/raw/master/archive/iclr2022.zip>.
The workshop has a strict page limit of 5 pages for the main text and 3
pages for supplementary material. References may use unlimited additional
pages.

*Submission page:* ICLR 2022 PAIR2Struct Workshop
<https://openreview.net/group?id=ICLR.cc/2022/Workshop/PAIR2Struct>.

Please note that ICLR policy states: "Workshops are not a venue
for work that has been previously published in other conferences on machine
learning. Work that is presented at the main ICLR conference should not
appear in a workshop."

*Submission deadline:* February 26, 2022 at 12:00 AM UTC

*Author notifications:* March 26, 2022

*Workshop:* April 29, 2022