Hi everyone,

We’d like to invite you to submit to the NeurIPS 2023 Workshop on
Distribution Shifts: New Frontiers with Foundation Models.

Website: https://sites.google.com/view/distshift2023
Paper submission deadline: October 2, 2023 (Anywhere on Earth)
Author notification: October 27, 2023
Workshop: December 15, 2023, in-person in New Orleans, USA.

Authors who will not be able to attend in person are still encouraged to
submit. Accepted papers will be accompanied by a short pre-recorded video
to allow authors to present their work remotely.

Please reach out to distshift-workshop-2...@googlegroups.com if you have
any questions.

*Call for papers*
Distribution shifts—where a model is deployed on a data distribution
different from what it was trained on—pose significant robustness
challenges in real-world ML applications. Such shifts are often unavoidable
in the wild and have been shown to substantially degrade model performance.
For example, models can systematically fail when tested on patients from
different hospitals or people from different demographics. Training models
that are robust to such distribution shifts is a rapidly growing area of
interest in the ML community.

In recent years, foundation models—large pretrained models that can be
adapted for a wide range of tasks—have achieved unprecedented performance
on a broad variety of discriminative and generative tasks, including in
distribution shift scenarios. Foundation models open up an exciting new
frontier in the study of distribution shifts. The goal of our workshop is
to foster discussions and further research on distribution shifts,
especially in the context of foundation models.

Examples of relevant topics include, but are not limited to:
- Effects of foundation models (e.g., pre-training, scale) on robustness
- Robust adaptation of foundation models to downstream tasks
- Distribution shifts from pretraining to downstream distributions,
  including in the context of generative foundation models
Beyond the above topics, we are broadly interested in methods, empirical
studies, and theory of distribution shifts, including those that do not
involve foundation models.

*Invited speakers*
Aditi Raghunathan, Carnegie Mellon University
Balaji Lakshminarayanan, Google DeepMind
Hoifung Poon, Microsoft Research
Kate Saenko, Boston University
Ludwig Schmidt, University of Washington
Peng Cui, Tsinghua University

*Organizers*
Becca Roelofs, Google
Fanny Yang, ETH Zurich
Hongseok Namkoong, Columbia University
Jacob Eisenstein, Google
Masashi Sugiyama, RIKEN & University of Tokyo
Pang Wei Koh, University of Washington
Shiori Sagawa, Stanford University
Tatsunori Hashimoto, Stanford University
Yoonho Lee, Stanford University
_______________________________________________
uai mailing list
uai@engr.orst.edu
https://it.engineering.oregonstate.edu/mailman/listinfo/uai