[Apologies for cross-posting and reposting]

Vision-and-Language Algorithmic Reasoning (VLAR 2023)
Workshop and Challenge
October 3, 2023, Paris, France
Held in conjunction with ICCV 2023
https://wvlar.github.io/iccv23


CALL FOR CONTRIBUTIONS

The focus of this workshop is to bring together researchers in multimodal
reasoning and cognitive models of intelligence to position current research
progress in AI within the overarching goal of achieving machine
intelligence. An important aspect is to bring to the forefront problems in
perception, language modeling, and cognition that are often overlooked in
state-of-the-art research yet are important for making true progress in
artificial intelligence. One specific problem that motivated our workshop
is the question of how well current deep models learn broad yet simple
skills, and how well they generalize what they have learned to problems
outside their training set; even children learn and use such skills
effortlessly (e.g., see the paper “Are Deep Neural Networks SMARTer than
Second Graders? <https://arxiv.org/abs/2212.09993>”). In this workshop, we
plan to bring together outstanding researchers to showcase their
cutting-edge research on the above topics, which we hope will inspire the
audience to identify the missing pieces in our quest to solve the puzzle of
artificial intelligence.

___________________________________________________________________________
IMPORTANT DATES

* Paper Track

Submission deadline: July 24, 2023 (11:59PM EDT) (extended from
July 20, 2023)
Paper decisions to authors: August 7, 2023
Camera-ready deadline: August 18, 2023 (11:59PM EDT)

* SMART-101 Challenge Track

Challenge open: June 15, 2023.
Submission deadline: September 1, 2023 (11:59PM EDT).
Arxiv paper deadline to be considered for awards: September 1, 2023
(11:59PM EDT).
Public winner announcement: October 3, 2023 (11:59PM EDT).

___________________________________________________________________________
TOPICS FOR PAPER TRACK

We invite submissions of original and high-quality research papers on
topics related to vision-and-language algorithmic reasoning. The topics for
VLAR 2023 include, but are not limited to:

* Large language models, vision, and cognition including children’s
cognition
* Foundation models of intelligence, including vision, language, and other
modalities
* Artificial general intelligence / general-purpose problem solving
architectures
* Neural architectures for solving vision & language or language-based IQ
puzzles
* Embodiment and AI
* Large language models, neuroscience, and vision
* Functional and algorithmic / procedural learning in vision
* Abstract visual-language reasoning, e.g., using sketches, diagrams, etc.
* Perceptual reasoning and decision making
* Multimodal cognition and learning
* New vision-and-language abstract reasoning tasks and datasets
* Vision-and-language applications
___________________________________________________________________________
SUBMISSION INSTRUCTIONS FOR PAPER TRACK

* We are inviting only original and previously unpublished work. Dual
submissions are not allowed.
* All submissions are handled via the workshop’s CMT Website:
https://cmt3.research.microsoft.com/VLAR2023.
* Submissions should not exceed *four (4) pages* in length (excluding
references).
* Submissions should be made in PDF format and should follow the official
ICCV template and guidelines.
* All submissions should maintain author anonymity and should abide by the
ICCV conference guidelines for double-blind review.
* Accepted papers will be presented as either an oral, spotlight, or poster
presentation. At least one author of each accepted submission must present
the paper at the workshop.
* Presentation of accepted papers at our workshop will follow the same
policy as that for accepted papers at the ICCV main conference.
* Accepted papers will also be part of the ICCV 2023 workshop proceedings.
* Authors may optionally upload supplementary materials, which should be
submitted separately by the same deadline as the main paper.

___________________________________________________________________________
INSTRUCTIONS FOR PARTICIPATING IN THE SMART-101 CHALLENGE TRACK

As part of VLAR 2023, we are hosting a challenge based on the Simple
Multimodal Algorithmic Reasoning Task (SMART-101) dataset, which is
available for download here: https://smartdataset.github.io/smart101/. The
accompanying CVPR 2023 paper “Are Deep Neural Networks SMARTer than Second
Graders?” is available here: https://arxiv.org/abs/2212.09993.

* The challenge is hosted on EvalAI and is open to submissions; see
https://eval.ai/web/challenges/challenge-page/2088/overview
* Challenge participants are required to make arXiv submissions detailing
their approach. These will only be used to judge the competition; they will
*not* be reviewed and will not be part of the workshop proceedings.
* Winners of the challenge are determined both by performance on the
leaderboard over a private test set and by the novelty of the proposed
method (as detailed in the arXiv submission). Details are available on the
challenge website.
* Prizes will be awarded on the day of the workshop.

___________________________________________________________________________
KEYNOTE SPEAKERS

Prof. Anima Anandkumar <https://www.eas.caltech.edu/people/anima>,
NVIDIA & Caltech
Dr. François Chollet <https://fchollet.com/>, Google
Prof. Jitendra Malik <http://people.eecs.berkeley.edu/~malik/>,
Meta & UC Berkeley
Prof. Elizabeth Spelke
<https://www.harvardlds.org/our-labs/spelke-labspelke-lab-members/elizabeth-spelke/>,
Harvard University
Prof. Jiajun Wu <https://jiajunwu.com/>, Stanford University
___________________________________________________________________________
WORKSHOP ORGANIZERS

Anoop Cherian <http://users.cecs.anu.edu.au/~cherian/>,
Mitsubishi Electric Research Laboratories
Kuan-Chuan Peng <https://www.merl.com/people/kpeng>,
Mitsubishi Electric Research Laboratories
Suhas Lohit <https://www.merl.com/people/slohit>,
Mitsubishi Electric Research Laboratories
Kevin A. Smith <http://www.mit.edu/~k2smith/>,
Massachusetts Institute of Technology
Ram Ramrakhya <https://ram81.github.io/>,
Georgia Institute of Technology
Honglu Zhou <https://sites.google.com/view/hongluzhou/>,
NEC Laboratories America, Inc.
Tim K. Marks <https://www.merl.com/people/tmarks>,
Mitsubishi Electric Research Laboratories
Joanna Matthiesen <https://www.linkedin.com/in/joanna-matthiesen-61a52a35/>,
Math Kangaroo USA
Joshua B. Tenenbaum <http://web.mit.edu/cocosci/josh.html>,
Massachusetts Institute of Technology

___________________________________________________________________________
CONTACT
Email: vlaricc...@googlegroups.com
SMART-101 project: https://smartdataset.github.io/smart101/
Workshop Website: https://wvlar.github.io/iccv23