[UAI] Postdoc Positions in Privacy/Machine Learning at KAUST

2021-02-04 Thread Di Wang
Di Wang is currently an Assistant Professor in the Division of Computer, 
Electrical and Mathematical Sciences and Engineering (CEMSE) at the King 
Abdullah University of Science and Technology (KAUST). He is looking to fill 
postdoc positions with a flexible start date. The initial appointment will be 
for one year, renewable for one or two additional years depending on 
performance. The positions will remain open until filled.

Di Wang obtained his PhD degree in Computer Science and Engineering at the 
State University of New York (SUNY) at Buffalo. During his PhD studies, he was 
invited as a visiting student to the University of California, Berkeley, 
Harvard University, and Boston University. Over the last three years, he has 
been the first author of more than twenty-four papers, including publications 
in JMLR, MLJ, ICML, NeurIPS, AAAI, IJCAI, and ALT. He has collaborated with 
researchers from several top universities, including UVA, the University of 
Toronto, Cornell, Boston University, SUNY at Buffalo, and MIT.

His research interests include Machine Learning, Statistical Estimation, 
Fairness, Privacy, and Security; see his personal 
website<http://www.acsu.buffalo.edu/~dwang45/> for details. If you are 
interested in working with him or share common research interests, feel free 
to send him (di.w...@kaust.edu.sa) your CV. Successful candidates will be 
contacted to provide further documents (e.g., two recommendation letters or a 
short statement of research plans).

Candidates must hold an earned Ph.D. (or be near completion) in computer 
science or a related field, with a strong background in mathematics or 
statistics and good programming skills in C/C++/Java/Matlab/Python. Successful 
candidates are expected to be self-motivated, to have a good publication 
record (at least one paper accepted at a premier conference or journal such as 
COLT, NeurIPS, ICML, AISTATS, ALT, ICLR, CCS, S&P, KDD, AAAI, TIT, TPAMI, 
JMLR, MLJ, TKDE, TIFS, etc.), and to have a good command of English. The 
position holders will perform interdisciplinary research, shape research 
directions, produce and disseminate results through publications, coordinate 
research projects, and help supervise Ph.D. and M.S. students.

KAUST offers:

  1.  Very competitive tax-exempt salary and benefits package (e.g., free, 
fully furnished high-standard housing, health insurance, education at KAUST 
international schools for children, 20 days of paid vacation per year, 
relocation allowance, repatriation allowance).

  2.  Sufficient funding support (e.g., for conference attendance and 
international travel).

  3.  State-of-the-art research facilities, including one of the fastest 
supercomputers in the world (IBM Blue Gene, 64,000 cores, 64 TB RAM) and a 
world-class visualization facility.

  4.  Solid collaboration (including exchange visits) with our partners.

  5.  Vibrant campus life and impressive recreational facilities that include a 
private beach, marina and golf course.

Best,
Di
---
Di Wang, PhD
Assistant Professor
Division of CEMSE
King Abdullah University of Science and Technology
URL: http://www.acsu.buffalo.edu/~dwang45/
Email: di.w...@kaust.edu.sa

___
uai mailing list
uai@engr.orst.edu
https://it.engineering.oregonstate.edu/mailman/listinfo/uai


[UAI] CFP: PAIR2Struct: Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data @ ICLR 2022

2022-01-10 Thread Di Wang

PAIR2Struct: Privacy, Accountability, Interpretability, Robustness,
Reasoning on Structured Data @ICLR 2022

*https://pair2struct-workshop.github.io/*



IMPORTANT DATES:

*Submission deadline:* February 26, 2022 at 12:00 AM UTC

*Author notifications:* March 26, 2022

*Workshop:* April 29, 2022, virtually


*OVERVIEW*

In recent years, we have seen principles and guidelines for the accountable
and ethical use of artificial intelligence (AI) spring up around the globe.
Specifically, Data *P*rivacy, *A*ccountability, *I*nterpretability,
*R*obustness, and *R*easoning have been broadly recognized as fundamental
principles for applying machine learning (ML) technologies to
decision-critical and/or privacy-sensitive applications. At the same time, in
many real-world applications the data itself is naturally represented in
structured formalisms, such as graph-structured data (e.g., networks),
grid-structured data (e.g., images), and sequential data (e.g., text). By
exploiting this inherently structured knowledge, one can design principled
approaches that identify and use the most relevant variables to make reliable
decisions, thereby facilitating real-world deployment.

In this workshop, we will examine research progress towards the accountable
and ethical use of AI across diverse research communities, such as the ML
community, the security & privacy community, and beyond. Specifically, we will
focus on the limitations of existing notions of *P*rivacy, *A*ccountability,
*I*nterpretability, *R*obustness, and *R*easoning. We aim to bring together
researchers from various areas (e.g., ML, security & privacy, computer vision,
and healthcare) to facilitate discussion of the challenges, definitions,
formalisms, and evaluation protocols surrounding the accountable and ethical
use of ML technologies in high-stakes applications with structured data. In
particular, we will discuss the *interplay* among these fundamental
principles, from theory to applications, and aim to identify new areas that
call for additional research effort. Additionally, we will seek possible
solutions, and their interpretations, grounded in the notion of causation,
which is an inherent property of systems. We hope the workshop will prove
fruitful in advancing the accountable and ethical use of AI systems in
practice.

*CALL FOR PAPERS*

All submissions are due by February 26, 2022 at 12:00 AM UTC.

*Topics include but are not limited to:*

• Privacy-preserving machine learning methods on structured data (e.g.,
graphs, manifolds, images, and text).
• Theoretical foundations for privacy-preserving and/or explainability of
deep learning on structured data (e.g., graphs, manifolds, images, and
text).
• Interpretability and accountability in different application domains
including healthcare, bioinformatics, finance, physics, etc.
• Improving interpretability and accountability of black-box deep learning
with graphical abstraction (e.g., causal graphs, graphical models,
computational graphs).
• Robust machine learning methods via graphical abstraction (e.g., causal
graphs, graphical models, computational graphs).
• Relational/graph learning under robustness constraints (robustness in the
face of adversarial attacks, distribution shift, environment changes, etc.).

*Paper submissions:* To format your paper for submission, please use the main
conference LaTeX style files. The workshop has a strict page limit of 5 pages
for the main text and 3 pages for the supplemental text. References may use
unlimited additional pages.
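
For reference, a minimal skeleton that compiles against those style files
might look as follows. This is an illustrative sketch only, assuming the
standard iclr2022_conference.sty and iclr2022_conference.bst files distributed
with the main ICLR 2022 author kit; consult the workshop page for the
authoritative template.

    \documentclass{article}
    % Assumes iclr2022_conference.sty (and .bst) sit next to this file,
    % as distributed with the main ICLR 2022 author kit.
    \usepackage{iclr2022_conference}
    \usepackage{amsmath,amssymb}  % common math packages

    \title{Your PAIR2Struct Submission Title}
    \author{Anonymous Authors}    % the style anonymizes submissions by default

    \begin{document}
    \maketitle

    \begin{abstract}
    Abstract text here.
    \end{abstract}

    \section{Introduction}
    Main text: at most 5 pages. Supplemental text: at most 3 pages.

    \bibliography{references}     % references do not count toward the limits
    \bibliographystyle{iclr2022_conference}
    \end{document}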

*Submission page:* ICLR 2022 PAIR2Struct Workshop.

Please note that ICLR policy states: "Workshops are not a venue for work
that has been previously published in other conferences on machine
learning. Work that is presented at the main ICLR conference should not
appear in a workshop."

*Submission deadline:* February 26, 2022 at 12:00 AM UTC

*Author notifications:* March 26, 2022

*Workshop:* April 29, 2022
___
uai mailing list
uai@engr.orst.edu
https://it.engineering.oregonstate.edu/mailman/listinfo/uai

