MODEL CHECKING CONTEST 2024 - (2/2) - CALL FOR TOOLS

GOALS

   The Model Checking Contest (MCC) is a yearly event that assesses existing
   verification tools for concurrent systems on a set of models (i.e., 
   benchmarks) proposed by the scientific community. All tools are compared on 
   the same benchmarks and using the same computing platform, so that a fair 
   comparison can be made, contrary to most scientific publications, in which
   different benchmarks are executed on different platforms.

   Another goal of the Model Checking Contest is to draw conclusions about
   the respective efficiency of verification techniques for Petri nets
   (decision diagrams, partial orders, symmetries, etc.) depending on the
   particular characteristics of the models under analysis. Through this
   feedback on tool efficiency, we aim to identify which techniques best
   tackle a given class of models.

   Finally, the Model Checking Contest seeks to be a friendly place where
   developers meet, exchange, and collaborate to enhance their verification
   tools.

   The Model Checking Contest is organized in three steps: 
   - the Call for Models (http://mcc.lip6.fr/cfm.html),
   - the present Call for Tools (http://mcc.lip6.fr/cft.html),
   - and the Contest itself.

CALL FOR TOOLS

   For the 2024 edition, we kindly ask the developers of verification tools for
   concurrent systems to participate in the MCC competition. Each tool will be
   assessed on both the accumulated collection of MCC models (these are the
   "known" models, see http://mcc.lip6.fr/models.html) and on the new models
   selected during the 2024 edition (these are the "surprise" models, see
   http://mcc.lip6.fr/cfm.html).

   The benchmarks on which tools will be assessed are colored Petri nets
   and/or P/T nets. Some P/T nets are provided with additional information
   giving a hierarchical decomposition into sequential machines (these models
   are called Nested-Units Petri nets - see http://mcc.lip6.fr/nupn.php for
   more information): tools may wish to exploit this information to increase
   performance and scalability.
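   As a quick illustration, the short Python sketch below tests whether a
   PNML file embeds such NUPN information. It assumes the decomposition is
   stored in a <toolspecific tool="nupn"> element under the standard PNML
   namespace; please check http://mcc.lip6.fr/nupn.php for the authoritative
   description of the format.

      # Sketch: detect NUPN decomposition data embedded in a PNML model.
      import sys
      import xml.etree.ElementTree as ET

      PNML_NS = "{http://www.pnml.org/version-2009/grammar/pnml}"

      def has_nupn_info(pnml_path):
          """True if the PNML file embeds a NUPN toolspecific section."""
          tree = ET.parse(pnml_path)
          for elem in tree.iter(PNML_NS + "toolspecific"):
              if elem.get("tool") == "nupn":  # assumed tool identifier
                  return True
          return False

      if __name__ == "__main__":
          path = sys.argv[1]  # e.g. a model.pnml file
          print("NUPN info present" if has_nupn_info(path) else "plain P/T net")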

   Each tool may compete in one or more categories of verification tasks, such
   as reachability analysis or the evaluation of CTL or LTL formulas.

   Tools have to be submitted in binary-code form. Each submitted tool will be
   run by the MCC organizers in a virtual machine (typically configured with up
   to 4 cores, 16 Gbytes of RAM, and a time confinement of 30 or 60 minutes per
   run, i.e., per instance of a model). Last year, more than 1500 days of CPU
   time were invested in the MCC competition. The MCC relies on BenchKit
   (https://github.com/fkordon/BenchKit), a dedicated execution environment
   for monitoring the execution of processes and gathering data.
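   As an illustration of how a tool plugs into this environment, here is a
   minimal Python sketch of an entry point driven by BenchKit. It assumes the
   convention, documented in the Submission Kit, that BenchKit invokes the
   tool inside the directory of the model instance and passes the requested
   verification task through environment variables; the variable and file
   names below are assumptions to be checked against the Submission Kit.

      # Sketch of a BenchKit-driven entry point (names are assumptions to be
      # verified against the Submission Kit documentation).
      import os
      import sys

      examination = os.environ.get("BK_EXAMINATION", "")  # e.g. "ReachabilityCardinality"
      instance = os.environ.get("BK_INPUT", "")            # model instance name

      model = "model.pnml"              # net of the current instance
      formulas = examination + ".xml"   # formulas of the requested examination

      if not os.path.isfile(model):
          sys.exit("no model.pnml in the current directory")

      print(f"Running {examination} on instance {instance}")
      # ... call the actual verification engine on (model, formulas) here ...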

   By submitting a tool, you explicitly allow the organizers of the Model
   Checking Contest to publish on the MCC web site the binary executable of
   this tool, so that experiments can be reproduced by others after the
   contest. Detailed information is available from
   http://mcc.lip6.fr/rules.html.

   Note: to submit a tool, it is not required to have submitted any model to
   the MCC Call for Models. However, it is strongly recommended to pre-register
   your tool using the dedicated form before April 15, 2024:
   http://mcc.lip6.fr/2024/registration.php. You will then be kept informed of
   the progress of the contest. The sooner, the better.

   IMPORTANT: based on discussions between the organizers and the tool
   developers, some changes were recently introduced to increase the accuracy
   of the contest. Please have a close look at the submission manual, which
   describes these changes. Those that may have an impact on you are listed
   below:

   - Models now embed information about the properties listed in the model
     forms (when available). The way this information is described is
     presented here. You may, for example, check whether a model is known
     to be "safe" or not (a hypothetical usage sketch follows this list).

   - The default virtual machine is now divided into two disk images.
     mcc2024.vmdk is the bootable one, which you update with your tools; it
     mounts mcc2024-input.vmdk, which contains the models and formulas for
     the contest, in read-only mode.
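   The sketch below illustrates, in Python, how a tool might exploit the
   embedded property information. Note that the file name and the key/value
   encoding used here are purely hypothetical; the actual description is the
   one referenced above.

      # Hypothetical sketch: read known model properties. The file name and
      # the 'name: value' encoding are illustrative assumptions, not the
      # actual MCC format.
      def known_properties(path="model-properties.txt"):
          props = {}
          try:
              with open(path) as f:
                  for line in f:
                      if ":" in line:
                          key, value = line.split(":", 1)
                          props[key.strip()] = value.strip()
          except FileNotFoundError:
              pass  # the information is only available for some models
          return props

      if known_properties().get("safe") == "true":
          # a safe net is 1-bounded: one bit per place suffices
          print("model is known to be safe")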

IMPORTANT DATES

   - January 22, 2024: publication of the present Call for Tools.
   - January 29, 2024: publication of the updated contest rules for 2024 at
     http://mcc.lip6.fr/2024/pdf/rules.pdf
   - February 12, 2024: publication of the Tool Submission Kit, which is
     available from https://mcc.lip6.fr/2024/archives/SubmissionKit.tar.gz
   - April 15, 2024: deadline for tool pre-registration. If you plan to submit
     a tool to the contest, please fill in the pre-registration form available
     from http://mcc.lip6.fr/registration.php
   - May 1st, 2024: deadline for tool submission
   - May 15, 2024: early feedback to tool submitters, following the preliminary
     qualification runs, which are performed using a few small instances of the
     "known" models.
   - June 10, 2024: more feedback to tool submitters, following the competition
     runs.
   - June 25, 2024: announcement of the MCC'2024 results during the Petri Nets
     conference (Geneva, Switzerland).

COMMITTEES

   General Chair
      Fabrice Kordon - Sorbonne Université/LIP6, France
   
   Model Board
      Pierre Bouvier - Kalray S.A., France
      Hubert Garavel - Inria/LIG, France
      Fabrice Kordon - Sorbonne Université, France     

   Execution Board
      Francis Hulin-Hubard - CNRS, France
      Fabrice Kordon - Sorbonne Université, France     
   
   Formula Board
      Loïg Jezequel - Univ. Nantes, France
      Emmanuel Paviot-Adet - Univ. Paris 5, France

--------------------------------------------------------------------------------------
Fabrice Kordon Sorbonne Université
Campus Pierre & Marie Curie
LIP6/MoVe, Office 26-00/202 or 26-25/216
4 place Jussieu, 75252 Paris Cedex 05
http://lip6.fr/Fabrice.Kordon/
