We are happy to announce the I Can’t Believe It’s Not Better workshop at
NeurIPS 2023. This year the workshop is titled Failure Modes in the Age of
Foundation Models. We invite submissions that focus on surprising or negative
results when using foundation models as well as submissions with more general
negative results from machine learning. The full call for papers is below.
Key Information
Paper Submission Deadline - October 1, 2023 (Anywhere on Earth)
Workshop Website:
https://sites.google.com/view/icbinb-2023/home
Call For Papers
The goal of the I Can’t Believe It’s Not Better workshop series is to promote
“slow science” that pushes back against “leaderboard-ism”, and provides a forum
to share surprising or negative results. In 2023 we propose to apply this same
approach to the timely topic of foundation models.
The hype around ChatGPT, Stable Diffusion and Segment Anything might suggest
that all the interesting problems have been solved and artificial general
intelligence is just around the corner. In this workshop we coolly reflect on
this optimism, inviting submissions on failure modes of foundation models, i.e.,
unexpected negative results. In addition, we invite contributions that will help
us understand when we should expect foundation models to disrupt existing
sub-fields of ML, and when these powerful methods will remain complementary to
other sub-fields of machine learning.
We invite submissions on the following topics:
• Failure modes of current foundation models (safety, explainability,
methodological limitations, etc.)
• Failure modes of applying foundation models, embeddings, or other
massive-scale deep learning models.
• Development of machine learning methodologies that benefit from
foundation models but still necessitate other techniques.
• Meta machine learning research and reflections on the impact of
foundation models on the broader field of machine learning.
• Negative scientific findings more generally. In keeping with previous
workshops, we will accept findings on methodologies or tools that gave
surprising negative results even without foundation models. Such submissions
are especially encouraged when they discuss the relevance of their findings in
the present climate, where foundation models are changing the field.
Technical submissions may center on machine learning, deep learning, or
deep-learning-adjacent fields (causal DL, meta-learning, generative modelling,
adversarial examples, probabilistic reasoning, etc.), as well as domain-specific
applications.
Papers will be assessed on:
• Clarity of writing
• Rigor and transparency in the scientific methodologies employed
• Novelty and significance of insights
• Quality of discussion of limitations
• Reproducibility of results
Selected papers may optionally be included in a special issue of PMLR.
Alternatively, authors may opt for the non-archival track, which is intended
for sharing preliminary findings that will later undergo full review at
another venue.
_______________________________________________
uai mailing list
uai@engr.orst.edu
https://it.engineering.oregonstate.edu/mailman/listinfo/uai