Hi all,

We’d like to invite you to submit to the NeurIPS 2021 Workshop on
Distribution Shifts. The workshop covers distribution shifts broadly, with
a focus on bringing together applications and methods to facilitate
discussion of real-world distribution shifts. The deadline to submit
papers is October 8, with an option to sign up for the mentorship program
by late September. Please see the workshop website
https://sites.google.com/view/distshift2021/ for full details, and feel
free to reach out to distshift-workshop-2...@googlegroups.com if you have
any questions. Thank you!

---

*Workshop objectives & call for papers*

Distribution shifts—where a model is deployed on a data distribution
different from what it was trained on—pose significant robustness
challenges in real-world ML applications. Such shifts are often unavoidable
in the wild and have been shown to substantially degrade model performance
in applications such as biomedicine, wildlife conservation, sustainable
development, robotics, education, and criminal justice. For example, models
can systematically fail when tested on patients from different hospitals or
people from different demographics.

Through the workshop, we hope to support and accelerate research on
real-world distribution shifts. To this end, we will convene a diverse set
of domain experts and methods-oriented researchers working on distribution
shifts. We are broadly interested in methods, evaluations and benchmarks,
and theory for distribution shifts, and we are especially interested in
work on distribution shifts that arise naturally in real-world application
contexts. Examples of relevant topics include, but are not limited to:

   - *Examples of real-world distribution shifts in various application
   areas.* We especially welcome applications that are not widely discussed
   in the ML research community, e.g., education, sustainable development, and
   conservation.
   - *Methods for improving robustness to distribution shifts.* Relevant
   settings include domain generalization, domain adaptation, and
   subpopulation shifts, and we are interested in a wide range of approaches,
   from uncertainty estimation to causal inference to active data collection.
   We welcome both general-purpose methods and methods that incorporate
   prior knowledge about the types of distribution shifts we wish to be
   robust to. We encourage evaluating these methods on real-world
   distribution shifts.
   - *Empirical and theoretical characterization of distribution shifts.*
   Distribution shifts can vary widely in how the data distribution changes,
   as well as in the empirical trends they exhibit. What empirical trends do
   we observe? What empirical or theoretical frameworks can we use to
   characterize these different types of shifts and their effects? What kinds
   of theoretical settings capture useful components of real-world
   distribution shifts?
   - *Benchmarks and evaluations.* We especially welcome contributions for
   subpopulation shifts, as they are underrepresented in current ML
   benchmarks. We are also interested in evaluation protocols that move beyond
   the standard assumption of fixed training and test splits: for which
   applications would we need to consider other forms of shifts, such as
   streams of continually changing data or feedback loops between models and
   data?


*Speakers*

   - Aleksander Madry (MIT)
   - Chelsea Finn (Stanford University)
   - Elizabeth Tipton (Northwestern University)
   - Ernest Mwebaze (Makerere University & Sunbird AI)
   - Jonas Peters (University of Copenhagen)
   - Masashi Sugiyama (University of Tokyo)
   - Suchi Saria (Johns Hopkins University & Bayesian Health)


*Panelists*

   - Andrew Beck (PathAI)
   - Jamie Morgenstern (University of Washington)
   - Judy Hoffman (Georgia Tech)
   - Tatsunori Hashimoto (Stanford University)


*Organizers*

   - Shiori Sagawa (Stanford University)
   - Pang Wei Koh (Stanford University)
   - Fanny Yang (ETH Zurich)
   - Hongseok Namkoong (Columbia University)
   - Jiashi Feng (National University of Singapore)
   - Kate Saenko (Boston University)
   - Percy Liang (Stanford University)
   - Sarah Bird (Microsoft)
   - Sergey Levine (UC Berkeley)