CALL FOR ABSTRACTS
New Frontiers in Model Order Selection
NIPS-2011 Workshop, Sierra Nevada, Spain,
December 16, 2011
http://people.kyb.tuebingen.mpg.de/seldin/fimos.html
*UPDATE*
*New deadline: October 24!*
DESCRIPTION
Model order selection, the trade-off between the complexity of a model and
its empirical fit to the data, is one of the fundamental questions in
machine learning. It has been studied in detail in the context of
supervised learning with i.i.d. samples, but has received relatively
little attention beyond this domain. The goal of our workshop is to draw
attention to the question of model order selection in other domains, to
share ideas and approaches across domains, and to identify promising
directions for future research. Our interest covers ways of defining model
complexity in different domains, examples of practical problems where
intelligent model order selection yields an advantage over simplistic
approaches, and new theoretical tools for the analysis of model order
selection. The domains of interest span all problems that cannot be
directly mapped to supervised learning with i.i.d. samples, including, but
not limited to, reinforcement learning, active learning, learning with
delayed, partial, or indirect feedback, and learning with submodular
functions.
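To make the complexity-fit trade-off concrete, here is a minimal
illustrative sketch (not part of the workshop program) of classical model
order selection in the i.i.d. supervised setting: choosing a polynomial
degree by penalizing goodness of fit with a BIC-style complexity term. The
synthetic data, model class, and criterion are our own assumptions.

```python
# Minimal sketch: model order selection for polynomial regression via BIC.
# Purely illustrative; data and criterion are assumptions, not part of the call.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-1.0, 1.0, size=n)
y = 1.0 - 2.0 * x + 0.5 * x**3 + rng.normal(scale=0.1, size=n)  # true order 3

def bic(degree):
    """Fit a polynomial of the given degree and return its BIC score."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = np.mean(residuals**2)
    log_likelihood = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    k = degree + 1  # number of free parameters
    return -2.0 * log_likelihood + k * np.log(n)

scores = {d: bic(d) for d in range(11)}
best = min(scores, key=scores.get)
print("selected polynomial degree:", best)  # complexity penalty vs. data fit
```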
An example of first steps in defining the complexity of models in
reinforcement learning, trading off model complexity against empirical
performance, and analyzing the resulting trade-off can be found in [1-4].
An intriguing research direction emerging from these works is a joint
analysis of the exploration-exploitation and model order selection
trade-offs. Such an analysis makes it possible to design and analyze
models that adapt their complexity as they continue to explore and observe
new data. Potential practical applications of such models include
contextual bandits (for example, in personalization of recommendations on
the web [5]) and Markov decision processes.
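As a loose illustration of this idea, the following hypothetical sketch
couples a UCB-style exploration rule with a discretized model whose
resolution (the model order) grows as more data are observed. The reward
function, refinement schedule, and constants are our own assumptions and
are not taken from [1-5].

```python
# Minimal sketch: coupling exploration with adaptive model complexity.
# A continuum-armed bandit over [0, 1] is discretized into bins; the number
# of bins (the "model order") doubles on a fixed schedule as data accumulate,
# while a UCB rule handles exploration within the current model.
import numpy as np

rng = np.random.default_rng(1)

def reward(a):
    """Unknown mean reward with noise; peak near a = 0.7 (assumed)."""
    return np.exp(-20.0 * (a - 0.7) ** 2) + rng.normal(scale=0.1)

T = 4000
bins = 2
counts = np.zeros(bins)
sums = np.zeros(bins)

for t in range(1, T + 1):
    # Refine the discretization (increase model order) on a doubling schedule,
    # splitting each bin's statistics between its two children.
    if t in (500, 1000, 2000):
        counts = np.repeat(counts, 2) / 2.0
        sums = np.repeat(sums, 2) / 2.0
        bins *= 2
    # UCB over the bins of the current model.
    means = sums / np.maximum(counts, 1)
    ucb = means + np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
    ucb[counts == 0] = np.inf  # play each unexplored bin at least once
    i = int(np.argmax(ucb))
    a = (i + 0.5) / bins  # play the bin's midpoint
    counts[i] += 1
    sums[i] += reward(a)

best_bin = int(np.argmax(sums / np.maximum(counts, 1)))
print("final number of bins:", bins, "| best bin midpoint:", (best_bin + 0.5) / bins)
```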
References:
[1] N. Tishby, D. Polani. "Information Theory of Decisions and Actions",
Perception-Reason-Action Cycle: Models, Algorithms and Systems, 2010.
[2] J. Asmuth, L. Li, M. L. Littman, A. Nouri, D. Wingate, "A Bayesian
Sampling Approach to Exploration in Reinforcement Learning", UAI, 2009.
[3] N. Srinivas, A. Krause, S. M. Kakade, M. Seeger, "Gaussian Process
Optimization in the Bandit Setting: No Regret and Experimental Design",
ICML, 2010.
[4] Y. Seldin, N. Cesa-Bianchi, F. Laviolette, P. Auer, J. Shawe-Taylor,
J. Peters, "PAC-Bayesian Analysis of the Exploration-Exploitation
Trade-off", ICML-2011 workshop on online trading of exploration and
exploitation.
[5] A. Beygelzimer, J. Langford, L. Li, L. Reyzin, R. Schapire,
"Contextual Bandit Algorithms with Supervised Learning Guarantees",
AISTATS, 2011.
CALL FOR ABSTRACTS
We invite submission of abstracts to the workshop. Abstracts should be
at most 4 pages long in the NIPS format
<https://nips.cc/PaperInformation/StyleFiles> (appendices are allowed,
but the organizers reserve the right to evaluate submissions based on
the first 4 pages only). Selected abstracts will be presented as posters
during the workshop. Submissions should be sent by email to seldin at
tuebingen dot mpg dot de.
Update: The two best abstracts will be presented as contributed talks!
IMPORTANT DATES
Submission Deadline: October 24.
Notification of Acceptance: November 4.
INVITED SPEAKERS
Shie Mannor <http://webee.technion.ac.il/people/shie>, Technion
John Langford <http://hunch.net/%7Ejl>, Yahoo!
Naftali Tishby <http://www.cs.huji.ac.il/%7Etishby>, The Hebrew University
Peter Auer <http://personal.unileoben.ac.at/auer>, Montanuniversität Leoben
ORGANIZERS
Yevgeny Seldin <http://www.kyb.mpg.de/%7Eseldin>, Max Planck Institute
for Intelligent Systems
Koby Crammer <http://webee.technion.ac.il/people/koby>, Technion
Nicolò Cesa-Bianchi <http://homes.dsi.unimi.it/%7Ecesabian>, University
of Milan
François Laviolette <http://www.ift.ulaval.ca/%7Elaviolette>, Université
Laval
John Shawe-Taylor <http://www.cs.ucl.ac.uk/staff/j.shawe-taylor>,
University College London