
FoMLAS 2023
July 17 - 18, 2023, Paris, France


6th Workshop on Formal Methods for ML-Enabled Autonomous Systems

Affiliated with CAV 2023


Instructions for the Vehicle Tutorial

The tutorial's webpage is here

To join the Slack channel, click here

The tutorial will be carried out on participants' personal laptops. To prepare your machine in advance, please run the following commands:

pip install vehicle-lang
pip install maraboupy
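
To check that the installation succeeded, you can try importing both packages. This is a minimal sketch and assumes the released packages expose Python modules named vehicle_lang and maraboupy; adjust the names if your version differs:

# Minimal installation check: both imports should succeed without errors.
# (Assumes module names "vehicle_lang" and "maraboupy".)
from importlib.metadata import version

import vehicle_lang              # Python bindings for the Vehicle tool
from maraboupy import Marabou    # Python interface to the Marabou verifier

print("vehicle-lang version:", version("vehicle-lang"))
print("maraboupy module loaded:", Marabou.__name__)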


Scope and Topics of Interest


In recent years, deep learning has emerged as a highly effective way of creating real-world software, and it is revolutionizing the way complex systems are designed across the board. In particular, this new approach is being applied to autonomous systems (e.g., autonomous cars and aircraft), achieving exciting results that are beyond the reach of manually created software. However, these significant changes have created new challenges when it comes to explainability, predictability and correctness: Can I explain why my drone turned right at that angle? Can I predict what it will do next? Can I know for sure that my autonomous car will never accelerate towards a pedestrian? These are questions with far-reaching consequences for the safety, accountability and public adoption of ML-enabled autonomous systems. One promising avenue for tackling these difficulties is to develop formal methods capable of analyzing and verifying these new kinds of systems.

The goal of this workshop is to facilitate discussion regarding how formal methods can be used to increase predictability, explainability, and accountability of ML-enabled autonomous systems. The workshop welcomes results ranging from concept formulation (by connecting these concepts with existing research topics in verification, logic and game theory), through algorithms, methods and tools for analyzing ML-enabled systems, to concrete case studies and examples.

 

This year, the workshop will host the 4th International Verification of Neural Networks Competition (VNN-COMP'23).

For additional info, see https://sites.google.com/view/vnn2023.


The topics covered by the workshop include, but are not limited to, the following:
 

  • Formal specifications for systems with ML components

  • SAT-based and SMT-based methods for analyzing systems with deep neural network components

  • Mixed-integer linear programming and optimization-based methods for the verification of systems with deep neural network components

  • Testing approaches for ML components

  • Statistical approaches to the verification of systems with ML components

  • Approaches for enhancing the explainability of ML-based systems

  • Techniques for analyzing hybrid systems with ML components

  • Verification of quantized and low-precision neural networks

  • Abstract interpretation techniques for neural networks

Important Dates


  • Abstract submission: April 28

  • Paper submission: May 5

  • Author notification: June 4

  • Workshop: July 17 - 18

Invited Talk

 
Speaker: Corina Pasareanu, CMU, NASA, KBR

Title: Compositional Verification for Learning-Enabled Autonomous Systems

Abstract: Providing safety guarantees for autonomous systems is difficult as these systems operate in complex environments that require the use of learning-enabled components, such as deep neural networks (DNNs) for visual perception. DNNs are hard to analyze due to their size, lack of formal specifications, and sensitivity to adversarial or natural perturbations in the environment.
We present compositional techniques for the formal verification of safety properties of such autonomous systems. The main idea is to abstract the hard-to-analyze components, such as DNN-based perception and environmental dynamics, using probabilistic or worst-case abstractions. This makes the system amenable to formal analysis using off-the-shelf model checking tools. We illustrate the idea in a case study from the autonomous airplane domain.
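
As a rough illustration of this compositional style of analysis (a sketch under invented assumptions, not the speaker's actual models, tools or case study), the Python example below abstracts a perception DNN by a confusion matrix over discretised observations and then computes the probability of reaching an unsafe state in a toy closed loop by propagating the state distribution of the resulting Markov chain:

# Illustrative sketch only: the perception DNN is abstracted by a confusion
# matrix P(observed | true), and the closed loop (abstraction + controller +
# dynamics) becomes a small discrete Markov chain that is analysed exactly.

STATES = [0, 1, 2, 3]   # discretised obstacle distance; 0 = unsafe (collision)
UNSAFE = 0

# Probabilistic abstraction of the perception DNN: CONFUSION[true][observed].
CONFUSION = {
    1: {1: 0.90, 2: 0.10},
    2: {1: 0.05, 2: 0.90, 3: 0.05},
    3: {2: 0.10, 3: 0.90},
}

def controller(observed):
    # Simple discrete controller acting on the *observed* distance.
    return "brake" if observed <= 1 else "cruise"

def dynamics(true_state, action):
    # Deterministic plant: braking holds the distance, cruising closes it by one.
    if true_state == UNSAFE:
        return UNSAFE
    return true_state if action == "brake" else true_state - 1

def step(dist):
    # Push a distribution over true states through perception, control, dynamics.
    new_dist = {s: 0.0 for s in STATES}
    for s, p in dist.items():
        if s == UNSAFE:
            new_dist[UNSAFE] += p
            continue
        for obs, q in CONFUSION[s].items():
            new_dist[dynamics(s, controller(obs))] += p * q
    return new_dist

# Start far away with certainty and compute P(collision within k steps).
dist = {s: (1.0 if s == 3 else 0.0) for s in STATES}
for k in range(1, 6):
    dist = step(dist)
    print(f"P(collision within {k} steps) = {dist[UNSAFE]:.4f}")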

Vehicle Tutorial


Speakers: Ekaterina Komendantskaya and Matthew Daggitt, Heriot-Watt University

Abstract: The community has been making rapid progress in methods for incorporating logical specifications into neural networks, both in training and verification. However, to truly unlock the ability to verify real-world neural-network-enhanced systems, we believe the following is necessary:
  1. The specification should be written once and should automatically work with training and verification tools.

  2. The specification should be written in a manner independent of any particular neural network training/inference platform.

  3. The specification should be able to be written as a high-level property over the problem space, rather than a property over the embedding space.

  4. After verification, the specification should be exportable to general interactive theorem provers so that its proof can be incorporated into proofs about the larger system around the neural network.

In this tutorial we will present Vehicle, a tool that allows users to do all of this. We will provide an introduction to the Vehicle specification language, and then walk attendees through using it to express a variety of famous and not-so-famous specifications. Subsequently, we will demonstrate how a specification can be compiled down to i) TensorFlow graphs to be used as loss functions during training, ii) queries for network verifiers, and iii) code for Agda, a mainstream interactive theorem prover. Finally, we will discuss some of the technical challenges in the implementation, as well as some of the outstanding problems.
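
To give a flavour of the verifier-query step, the sketch below poses a simple input-output query directly through Marabou's Python API (maraboupy, installed above). It is a hand-written illustration of the kind of low-level query a high-level specification might compile down to, not output produced by Vehicle itself; the network file name, the input box and the output threshold are placeholders, and minor API details vary between maraboupy versions:

# Illustrative, hand-written verifier query via maraboupy. "network.onnx",
# the input box and the output threshold are placeholders; Vehicle generates
# such queries automatically from a high-level specification.
from maraboupy import Marabou

network = Marabou.read_onnx("network.onnx")       # load a trained network
input_vars = network.inputVars[0].flatten()
output_vars = network.outputVars[0].flatten()

# Constrain every input to a small box around a nominal point.
for v in input_vars:
    network.setLowerBound(v, 0.45)
    network.setUpperBound(v, 0.55)

# Ask whether the first output can drop to 0 or below anywhere in that box.
network.setUpperBound(output_vars[0], 0.0)

# Recent maraboupy versions return (exitCode, assignment, stats).
exit_code, assignment, stats = network.solve()
print("query result:", exit_code)   # "unsat" means no counterexample in the box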

Paper Submission and Proceedings


Three categories of submissions are invited:
 

  • Original papers: contain original research and sufficient detail to assess the merits and relevance of the submission. For papers reporting experimental results, authors are strongly encouraged to make their data available.
     

  • Presentation-only papers: describe work recently published or submitted. We see this as a way to provide additional access to important developments that the workshop attendees may be unaware of.
     

  • Extended abstracts: given the informal style of the workshop, we strongly encourage the submission of preliminary reports of work in progress. These reports may range in length from very short to full papers, and will be judged based on the expected level of interest for the community.


All accepted papers will be posted online as part of informal proceedings on the day of the conference. If there is sufficient interest from the authors of accepted papers, we will consider publishing official proceedings as well.


Papers in all categories will be peer-reviewed. We recommend that papers be submitted as a single-column, standard-conforming PDF using the LNCS style, with a suggested page limit of 10 pages (not counting references), although other formats will be accepted. The review process will be single-blind.


To submit a paper, use EasyChair.

