If you would like to compete in the NIPS Competition Track at NIPS 2017,
please see the webpage of the individual competition for instructions on
how to join the competition.

https://nips.cc/Conferences/2017/CompetitionTrack

NIPS 2017 Competition Track

This is the first edition of the NIPS Competition Track. We received 23
proposals for data-driven and live competitions spanning a broad range of
machine learning topics. Proposals were reviewed by several highly
qualified researchers with expertise in organizing challenges, and the
five top-scoring competitions were accepted to run and to present their
results during the NIPS 2017 Competition Track day. Evaluation was based
on the quality of the data, the interest and impact of the problem, the
potential to promote the design of new models, and a sound schedule and
management procedure.

Below you can find the five accepted competitions. Please visit each
competition's webpage to read more about the competition, its schedule,
and how to participate; each competition's schedule is defined by its
organizers. The results of the competitions, including talks by organizers
and top-ranked participants, will be presented during the Competition
Track day at NIPS 2017, which will take place on December 8th. Organizers
and participants will be invited to submit their contributions as book
chapters to the upcoming NIPS 2017 Competition book in the Springer Series
on Challenges in Machine Learning.
------------------------------
The Conversational Intelligence Challenge

Competition summary
Dialogue systems and conversational agents – including chatbots, personal
assistants and voice control interfaces – are becoming increasingly
widespread in our daily lives. Beyond these growing real-world
applications, the ability to converse is closely tied to the overall goal
of AI, and recent advances in machine learning have sparked renewed
interest in dialogue systems within the research community. This NIPS
Live Competition aims to unify the community around a challenging task:
building systems capable of intelligent conversation. Teams are expected
to submit dialogue systems able to carry out intelligent and natural
conversations about news articles with humans. At the final stage of the
competition, participants as well as volunteers will be randomly matched
with a bot or a human to chat with, and will evaluate their peer's
responses. We expect the competition to have two major outcomes: (1) a
measure of the quality of state-of-the-art dialogue systems, and (2) an
open-source dataset collected from the evaluated dialogues.

Organizers

Mikhail Burtsev, Valentin Malykh, MIPT, Moscow
Ryan Lowe, McGill University, Montreal
Iulian Serban, Yoshua Bengio, University of Montreal, Montreal
Alexander Rudnicky, Alan W. Black, Carnegie Mellon University, Pittsburgh
Contact email: i...@convai.io

Webpage: http://convai.io
------------------------------
Classifying Clinically Actionable Genetic Mutations

Competition summary
While genetic testing holds much promise for advancing our understanding
of cancer and for designing more precise and effective treatments,
progress has been slow due to the significant amount of manual work still
required to interpret genomic data. For the past several years,
world-class researchers at Memorial Sloan Kettering Cancer Center have
worked to create an expert-annotated precision-oncology knowledge base.
It contains several thousand annotations, grounded in the clinical
literature, of which genes are clinically actionable and which are not.
This dataset can be used to train machine learning models that help
experts significantly speed up their research.
This competition challenges participants to develop classification models
that analyze abstracts of medical articles and, based on their content,
accurately determine the oncogenicity (4 classes) and mutation effect (9
classes) of the genes discussed in them. Participants will not only work
with real-world data on one of the key open questions in cancer genetics
and precision medicine; the winning model will also be tested and
deployed at Memorial Sloan Kettering, with the potential to touch the
more than 120,000 patients the Center sees every year, and many more
around the world.
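
To make the task concrete, here is a deliberately minimal sketch of the kind of text classifier the competition asks for: a bag-of-words nearest-centroid model built only from the Python standard library. The corpus, the labels, and the `tokenize` helper are all hypothetical illustrations, not the competition's data or baseline.

```python
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase whitespace tokenization -- a deliberately crude stand-in."""
    return text.lower().split()

def train_centroids(abstracts, labels):
    """Average the bag-of-words counts per class label."""
    sums, counts = defaultdict(Counter), Counter()
    for text, label in zip(abstracts, labels):
        sums[label].update(tokenize(text))
        counts[label] += 1
    return {label: {w: c / counts[label] for w, c in bag.items()}
            for label, bag in sums.items()}

def classify(text, centroids):
    """Pick the class whose centroid best overlaps the input's vocabulary."""
    words = Counter(tokenize(text))
    def score(centroid):
        return sum(centroid.get(w, 0.0) * c for w, c in words.items())
    return max(centroids, key=lambda label: score(centroids[label]))

# Toy corpus with hypothetical oncogenicity labels.
train_texts = ["mutation activates kinase signalling",
               "variant shows no functional effect",
               "activating mutation drives tumour growth"]
train_labels = ["oncogenic", "neutral", "oncogenic"]
model = train_centroids(train_texts, train_labels)
print(classify("activating kinase mutation", model))  # prints "oncogenic"
```

A competitive entry would of course replace this with a far richer model, but the interface (abstract text in, one of a fixed set of class labels out) is the same.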

Organizers

Iker Huerga, huerg...@mskcc.org
Alexander Grigorenko, grigo...@mskcc.org
Anasuya Das, d...@mskcc.org
Leifur Thorbergsson, thorb...@mskcc.org
Competition Coordinators
Kyla Nemitx, nemi...@mskcc.org
Randi Kaplan, kapl...@mskcc.org
Jenna Sandker, muc...@mskcc.org

Webpage: https://www.mskdatascience.org/
------------------------------
Learning to Run

Competition summary
In this competition, you are tasked with developing a controller to enable
a physiologically-based human model to navigate a complex obstacle course
as quickly as possible. You are provided with a human musculoskeletal model
and a physics-based simulation environment where you can synthesize
physically and physiologically accurate motion. Potential obstacles
include external ones, such as steps or a slippery floor, and internal
ones, such as muscle weakness or motor noise. You are scored based on the
distance you travel through the obstacle course in a set amount of time.
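
The interaction pattern is a standard reinforcement-learning control loop. The sketch below uses a hypothetical stub in place of the competition's physics simulator (the real environment, provided through the competition platform, exposes a similar gym-style reset/step API); the stub's dynamics, the 18-dimensional action, and `StubObstacleCourse` itself are all assumptions for illustration.

```python
class StubObstacleCourse:
    """Hypothetical stand-in for the competition's musculoskeletal simulator."""
    def __init__(self, length=10.0, time_limit=20):
        self.position, self.steps = 0.0, 0
        self.length, self.time_limit = length, time_limit

    def reset(self):
        self.position, self.steps = 0.0, 0
        return [self.position]

    def step(self, action):
        # Reward is distance gained; stronger mean excitation moves farther.
        gain = 0.1 * sum(action) / len(action)
        self.position += gain
        self.steps += 1
        done = self.steps >= self.time_limit or self.position >= self.length
        return [self.position], gain, done, {}

def run_episode(env, policy):
    """Score = distance travelled before time runs out, as in the challenge."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

constant_policy = lambda obs: [1.0] * 18   # 18 muscle excitations, all maximal
distance = run_episode(StubObstacleCourse(), constant_policy)
```

A real controller would map the observed state to muscle excitations with a learned policy rather than a constant; the scoring loop stays the same.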

Organizers

Lead organizer: Łukasz Kidziński <lukasz.kidzin...@stanford.edu>
Coordinators: Carmichael Ong, Mohanty Sharada, Jason Fries, Jennifer Hicks
Promotion: Joy Ku
Platform administrator: Sean Carroll
Advisors: Sergey Levine, Marcel Salathé, Scott Delp

Webpage: https://www.crowdai.org/challenges/nips-2017-learning-to-run
------------------------------
Human-Computer Question Answering Competition

Competition summary
Question answering is a core problem in natural language processing: given
a question, provide the entity that it is asking about. When top humans
compete at this task, they answer questions incrementally; i.e., players
can interrupt a question to show they know the subject better than their
slower competitors. This formalism is called “quiz bowl” and was the
subject of the NIPS 2015 best demonstration.
This year, competitors can submit their own systems to compete in a quiz
bowl match between computers and humans. Entrants create systems that
receive questions one word at a time and decide when to answer. This
framework then allows a system to compete against a top human team of
quiz bowl players in a final game that will be part of NIPS 2017.
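
The word-at-a-time protocol can be sketched as a simple buzzing loop: the system sees one more word of the question each turn and commits to an answer as soon as its confidence clears a threshold. Everything here (the `toy_guesser`, the clue table, the threshold value) is a hypothetical illustration, not the competition's actual interface.

```python
def play_question(words, guesser, threshold=0.8):
    """Feed the question one word at a time; buzz as soon as the guesser's
    confidence clears the threshold, mirroring incremental quiz bowl play."""
    seen = []
    for word in words:
        seen.append(word)
        guess, confidence = guesser(seen)
        if confidence >= threshold:
            return guess, len(seen)        # buzzed in early
    return guesser(seen)[0], len(seen)     # forced to answer at the end

# Hypothetical guesser: keyword lookup against a tiny clue table.
CLUES = {"relativity": "Albert Einstein", "gravitation": "Isaac Newton"}
def toy_guesser(seen_words):
    for word in seen_words:
        if word in CLUES:
            return CLUES[word], 0.9
    return "unknown", 0.0

question = "this physicist developed the theory of relativity".split()
answer, words_used = play_question(question, toy_guesser)
# Buzzes on the 7th word, "relativity", rather than waiting for the end.
```

The interesting tension for entrants is exactly this trade-off: buzzing earlier beats slower opponents, but buzzing on too little evidence risks a wrong answer.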

Organizers

Jordan Boyd-Graber (University of Colorado), jordan.boyd.gra...@colorado.edu
Hal Daume III (University of Maryland)
He He (Stanford)
Mohit Iyyer (University of Maryland)
Pedro Rodriguez (University of Colorado)

Webpage: http://sites.google.com/view/hcqa/
------------------------------
Adversarial Attacks and Defences

Competition summary
Most existing machine learning classifiers are highly vulnerable to
adversarial examples. An adversarial example is a sample of input data
which has been modified very slightly in a way that is intended to cause a
machine learning classifier to misclassify it. In many cases, these
modifications can be so subtle that a human observer does not even notice
the modification at all, yet the classifier still makes a mistake.
Adversarial examples pose security concerns because they could be used to
perform an attack on machine learning systems, even if the adversary has no
access to the underlying model.

To accelerate research on adversarial examples and the robustness of
machine learning classifiers, we are organizing a challenge that
encourages researchers to develop new methods for generating adversarial
examples as well as new ways to defend against them. As part of the
challenge, participants are invited to craft adversarial examples and to
build models that are robust to them.
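
As a minimal illustration of the attack side, here is the fast gradient sign method (FGSM) applied to a toy logistic-regression classifier: the input is perturbed by a small step in the sign of the loss gradient. The classifier weights, the input, and the (deliberately large, for a 2-D toy) epsilon are all assumptions for illustration; real attacks on image classifiers use perturbations far too small for a human to notice.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method on a logistic-regression classifier:
    perturb x by eps in the direction that increases the loss."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted P(y=1)
    grad_x = (p - y) * w                      # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Toy classifier and a correctly classified input.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0             # w @ x + b = 1.5 > 0 -> class 1
x_adv = fgsm(x, y, w, b, eps=1.0)
# The per-coordinate change is bounded by eps, yet the decision flips:
print(w @ x_adv + b)                          # prints -1.5 -> class 0
```

The defense side of the challenge is the mirror image: train or design models for which no such small perturbation changes the prediction.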

Organizers

Alexey Kurakin, kura...@google.com
Ian Goodfellow, goodfel...@google.com
Samy Bengio, ben...@google.com

Primary contact e-mail which will be provided to participants:
adversarial-examples-competit...@google.com

Webpage: coming soon
_______________________________________________
uai mailing list
uai@ENGR.ORST.EDU
https://secure.engr.oregonstate.edu/mailman/listinfo/uai