Dear all,

Please note that the Tool Submission Kit is online.

Fabrice

===

MODEL CHECKING CONTEST 2023 - (2/2) - CALL FOR TOOLS

GOALS

   The Model Checking Contest (MCC) is a yearly event that assesses existing
   verification tools for concurrent systems on a set of models (i.e., 
   benchmarks) proposed by the scientific community. All tools are compared on
   the same benchmarks, using the same computing platform, so that a fair
   comparison can be made, unlike in most scientific publications, where
   different benchmarks are executed on different platforms.

   Another goal of the Model Checking Contest is to infer conclusions about
   the respective efficiency of verification techniques for Petri nets
   (decision diagrams, partial orders, symmetries, etc.) depending on the
   particular characteristics of the models under analysis. Through this
   feedback on tool efficiency, we aim to identify which techniques can best
   tackle a given class of models.

   Finally, the Model Checking Contest seeks to be a friendly place where
   developers meet, exchange, and collaborate to enhance their verification
   tools.

   The Model Checking Contest is organized in three steps: 
   - the Call for Models (http://mcc.lip6.fr/cfm.html),
   - the present Call for Tools (http://mcc.lip6.fr/cft.html),
   - and the Contest itself.

IMPORTANT — TOOLYMPICS II in 2023

   The MCC will take place jointly with TOOLympics 2023, which will gather a
   dozen established competitions to celebrate their achievements.

   The MCC will be one of the competitions involved in TOOLympics, in Paris,
   April 22-27, 2023. Consequently, MCC participants should pay close
   attention to the deadlines in the calls, which occur very early for the
   2023 edition.

CALL FOR TOOLS

   For the 2023 edition, we kindly ask the developers of verification tools for
   concurrent systems to participate in the MCC competition. Each submitted
   tool will be assessed both on the accumulated collection of MCC models
   (these are the "known" models, see http://mcc.lip6.fr/models.php) and on the
   new models selected during the 2023 edition (these are the "surprise"
   models, see http://mcc.lip6.fr/cfm.php).

   The benchmarks on which tools will be assessed are colored Petri nets 
   and/or P/T nets. Some P/T nets are provided with additional information 
   giving a hierarchical decomposition into sequential machines (these models
   are called Nested-Units Petri nets - see http://mcc.lip6.fr/nupn.php for
   more information): tools may wish to exploit this information to increase
   performance and scalability.
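
   As an illustration, MCC models are distributed in PNML, the standard XML
   interchange format for Petri nets. The following minimal Python sketch
   extracts place and transition identifiers from a P/T net file; the file
   name is hypothetical, and the PNML namespace should be checked against the
   actual model files:

      import xml.etree.ElementTree as ET

      # Assumed PNML namespace for P/T nets; verify against the model files.
      PNML = "{http://www.pnml.org/version-2009/grammar/pnml}"

      def load_pt_net(path):
          # Return the place and transition identifiers of a PNML P/T net.
          root = ET.parse(path).getroot()
          places = [p.get("id") for p in root.iter(PNML + "place")]
          transitions = [t.get("id") for t in root.iter(PNML + "transition")]
          return places, transitions

      # "model.pnml" stands in for any instance from the MCC collection.
      places, transitions = load_pt_net("model.pnml")
      print(len(places), "places and", len(transitions), "transitions")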

   Each tool may compete in one or more categories of verification tasks, such
   as reachability analysis, evaluation of CTL formulas, evaluation of LTL
   formulas, etc.

   Tools must be submitted as binary executables. Each submitted tool will be
   run by the MCC organizers in a virtual machine (typically configured with up
   to 4 cores, 16 Gbytes of RAM per core, and a time confinement of 30 or 60
   minutes per run, i.e., per instance of a model). Last year, more than 1800
   days of CPU time were invested in the MCC competition. The MCC relies
   on BenchKit (https://github.com/fkordon/BenchKit), a dedicated execution
   environment for monitoring and collecting data related to the execution of
   processes.
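
   BenchKit's actual invocation protocol is documented in the Tool Submission
   Kit; purely as an illustration of the confinement described above, the
   following Python sketch runs a hypothetical tool binary under a wall-clock
   timeout and a memory cap. The tool name, arguments, and limits are
   assumptions, not the contest's real interface:

      import resource
      import subprocess

      TIME_LIMIT_S = 60 * 60        # assumed 60-minute time confinement
      MEM_LIMIT_B = 16 * 1024 ** 3  # assumed 16-Gbyte memory cap

      def limit_memory():
          # Cap the address space of the child process (POSIX only).
          resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT_B, MEM_LIMIT_B))

      try:
          # "./mytool" and its arguments are hypothetical placeholders.
          result = subprocess.run(
              ["./mytool", "model.pnml"],
              preexec_fn=limit_memory,
              timeout=TIME_LIMIT_S,
              capture_output=True,
          )
          print(result.stdout.decode())
      except subprocess.TimeoutExpired:
          print("run exceeded the time confinement")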

   By submitting a tool, you explicitly allow the organizers of the Model
   Checking Contest to publish on the MCC web site the binary
   executable of this tool, so that experiments can be reproduced by others
   after the contest. Detailed information is available from
   http://mcc.lip6.fr/rules.html.

   Note: to submit a tool, it is not required to have submitted any model
   to the MCC Call for Models. However, it is strongly recommended to
   pre-register your tool using the dedicated form before February 1, 2023:
   http://mcc.lip6.fr/2023/registration.php. You will then be kept informed
   of the contest's progress. The sooner, the better.

IMPORTANT DATES

   - October 15, 2022: publication of the present Call for Tools.

   - November 1, 2022: publication of the updated 2023 contest rules at 
     http://mcc.lip6.fr/rules.html.

   - December 1, 2022: publication of the Tool Submission Kit, which is
     available from http://mcc.lip6.fr/2023/archives/ToolSubmissionKit.tar.gz.

   - February 1, 2023: deadline for tool pre-registration. If you plan to
     submit a tool to the contest, please fill in the pre-registration form
     available from http://mcc.lip6.fr/2023/registration.php. Pre-registration
     is not mandatory, but it ensures you receive dedicated information when
     needed.

   - February 15, 2023: deadline for tool submission.

   - March 5, 2023: early feedback to tool submitters, following the
     preliminary qualification runs, which are performed using a few small
     instances of the "known" models.

   - April 20, 2023: more feedback to tool submitters, following the
     competition runs.

   - April 22-27, 2023: official announcement of MCC'2023 within the context of
     ETAPS 2023 (Paris, France).

COMMITTEES

   General Chairs
      Fabrice Kordon - Sorbonne Université/LIP6, France
   
   Model Board
      Pierre Bouvier - Inria/LIG, France
      Hubert Garavel - Inria/LIG, France
      Fabrice Kordon - Sorbonne Université, France     
   
   Formula Board
      Loïg Jezequel - Univ. Nantes, France
      Emmanuel Paviot-Adet - Univ. Paris 5, France

--------------------------------------------------------------------------------------
Fabrice Kordon          
Sorbonne Université
Campus Pierre & Marie Curie
LIP6/MoVe, Office 26-00/202 or 26-25/216
4 place Jussieu, 75252 Paris Cedex 05
http://lip6.fr/Fabrice.Kordon/
