Dates

submissions: September 24
registration: October 4
meeting: October 12-14

Contact

stephan.merz@loria.fr

Journées Nationales GEOCAL-LAC-LTP 2015

Abstracts

Sylvain Conchon
Cubicle – Design and Implementation of an SMT-based Model Checker for Parameterized Systems
In this talk, I will present Cubicle, a new model checker for verifying safety properties of parameterized systems.

Cubicle is an implementation of the MCMT (Model Checking Modulo Theories) framework for array-based systems, a syntactically restricted class of parameterized transition systems with states represented as arrays indexed by an arbitrary number of processes.

While rudimentary at the moment, Cubicle's input language is sufficiently expressive for describing typical parameterized systems such as cache coherence protocols, fault-tolerant protocols and mutual exclusion algorithms.

Cubicle's implementation is based on a parallel symbolic backward reachability procedure using Satisfiability Modulo Theories. Its SMT solver is a tightly integrated, lightweight and enhanced version of Alt-Ergo, and its parallel implementation relies on the Functory library.
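
As a rough illustration of the procedure just described, here is a minimal OCaml sketch of a symbolic backward reachability loop. The module structure and the SMT-backed operations (intersects_init, subsumed, pre) are hypothetical stand-ins, not Cubicle's actual code.

    module type SYMBOLIC = sig
      type cube                                  (* a symbolic set of states *)
      val intersects_init : cube -> bool         (* SMT check: cube meets Init? *)
      val subsumed : cube -> cube list -> bool   (* SMT check: fixpoint test *)
      val pre : cube -> cube list                (* pre-image under transitions *)
    end

    module Backward (S : SYMBOLIC) = struct
      exception Unsafe

      (* Explore pre-images of the unsafe cubes breadth-first; raise Unsafe
         if an initial state is backward-reachable, otherwise return the
         visited cubes, which constitute a proof of safety. *)
      let check (unsafe : S.cube list) : S.cube list =
        let visited = ref [] in
        let queue = Queue.create () in
        List.iter (fun c -> Queue.add c queue) unsafe;
        while not (Queue.is_empty queue) do
          let c = Queue.pop queue in
          if S.intersects_init c then raise Unsafe;
          if not (S.subsumed c !visited) then begin
            visited := c :: !visited;
            List.iter (fun c' -> Queue.add c' queue) (S.pre c)
          end
        done;
        !visited
    end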

Ali Assaf, Raphael Cauderlier
Mixing HOL and Coq in Dedukti
We use Dedukti as a logical framework for interoperability. Using automated tools, we translate developments made in HOL and in Coq to Dedukti, and we combine them to prove new results. We illustrate our approach with a concrete example where we instantiate a sorting algorithm written in Coq with the natural numbers of HOL.
Mathias Fleury
Formalization of Ground Inference Systems in Isabelle
Various methods have been developed for solving SAT problems, notably resolution, the Davis-Putnam-Logemann-Loveland procedure (DPLL) and its extension, conflict-driven clause learning (CDCL). We have formalized these three algorithms in the proof assistant Isabelle/HOL, based on a chapter of Christoph Weidenbach's upcoming book "Automated Reasoning – The Art of Generic Problem Solving". The three calculi are presented uniformly as transition systems. We have formally proved that each calculus is a decision procedure for satisfiability of propositional logic. One outcome of the formalization is a verified SAT solver based on DPLL and implemented in a functional programming language.
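
For readers unfamiliar with the calculi, the following OCaml sketch shows DPLL in miniature, with the Propagate, Decide and Backtrack transitions collapsed into a recursive search. It illustrates the general technique only; it is not the Isabelle/HOL development.

    type lit = int                      (* nonzero; -l is the negation of l *)
    type clause = lit list
    type cnf = clause list

    (* Simplify a formula under the assumption that literal l is true:
       satisfied clauses disappear, falsified literals are removed. *)
    let assign (l : lit) (f : cnf) : cnf =
      List.filter_map
        (fun c ->
          if List.mem l c then None
          else Some (List.filter (fun l' -> l' <> -l) c))
        f

    (* Returns a satisfying list of literals, if one exists. *)
    let rec dpll (f : cnf) (trail : lit list) : lit list option =
      if List.mem [] f then None                        (* conflict *)
      else
        match List.find_opt (fun c -> List.length c = 1) f with
        | Some [l] -> dpll (assign l f) (l :: trail)    (* Propagate *)
        | Some _ -> assert false
        | None ->
          match f with
          | [] -> Some trail                            (* model found *)
          | (l :: _) :: _ ->                            (* Decide l, else -l *)
            (match dpll (assign l f) (l :: trail) with
             | Some m -> Some m
             | None -> dpll (assign (-l) f) (-l :: trail))
          | [] :: _ -> None                             (* unreachable *)

For instance, dpll [[1; 2]; [-1]] [] returns Some [2; -1].
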
Jasmin Blanchette, Johannes Hölzl, Andreas Lochbihler, Lorenz Panny, Andrei Popescu, Dmitriy Traytel
Inductive and Coinductive Types for Isabelle/HOL
The Isabelle/HOL proof assistant has recently been enriched with coinductive types. We took the opportunity to rework inductive types as well, so as to allow arbitrary combinations of nested inductive and coinductive types.
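
As a small illustration of what such nesting buys, here is an OCaml analogue (using laziness in place of a genuine codatatype, so not Isabelle/HOL): an inductive list of children nested inside a coinductive node type yields finitely branching trees of possibly infinite depth.

    type 'a tree = Node of 'a * 'a tree Lazy.t list

    (* An infinitely deep, finitely branching tree: node n has one child n+1. *)
    let rec path (n : int) : int tree Lazy.t = lazy (Node (n, [ path (n + 1) ]))

    (* Recursion over the inductive part (the children list) is ordinary list
       recursion; only the lazy (coinductive) part must be forced finitely. *)
    let arity (lazy (Node (_, kids))) = List.length kids
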
Martin Bodin, Thomas Jensen, Alan Schmitt
Analyzing extensible records with separation logic and abstract interpretation
In our search for an abstract domain for JavaScript, we turned to separation logic. This ongoing work aims at designing a separation logic domain tuned to abstract JavaScript-like heaps. We mix separation logic with shape analyses based on abstract interpretation. I will present the main issues we face, namely how the frame rule and some shape analysis techniques interfere.
Pierre Wilke, Sandrine Blazy, Frédéric Besson
A concrete memory model for CompCert
Semantics-preserving compilation of low-level C programs is challenging because their semantics is implementation-defined according to the C standard. This paper presents the proof of an enhanced and more concrete memory model for the CompCert C compiler which assigns a definite meaning to more C programs. In our new formally verified memory model, pointers are still abstract but are nonetheless mapped to concrete 32-bit integers. Hence, the memory is finite and it is possible to reason about the binary encoding of pointers. We prove that the existing memory model is an abstraction of our more concrete model, thus formally validating the soundness of CompCert's abstract semantics of pointers. We also show how to adapt the front-end of CompCert, thus demonstrating that it should be feasible to port the whole compiler to our novel memory model.
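
A minimal sketch of the idea, with hypothetical names rather than CompCert's Coq definitions: pointers remain abstract (block, offset) pairs, but a concretization maps every block to a concrete 32-bit base address, so pointers admit a binary encoding.

    type block = int
    type ptr = { blk : block; ofs : int32 }    (* pointers stay abstract *)

    (* A concretization assigns each allocated block a concrete 32-bit base
       address; since memory is finite, only finitely many blocks fit. *)
    type concretization = block -> int32

    (* Once a concretization is fixed, every pointer has a binary encoding,
       so one can reason about casts between pointers and integers. *)
    let to_bits (cm : concretization) (p : ptr) : int32 =
      Int32.add (cm p.blk) p.ofs
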
Léon Gondelman
A Pragmatic Type System for Deductive Verification
A large proportion of the programs one wishes to verify falls within the scope of Hoare logic, where the correctness of the code is reduced to verification conditions that require no explicit modelling of memory, which makes them more natural and simpler to discharge with automated provers. This is explained by the fact that, most of the time, mutable data structures do not manipulate arbitrary pointers, but only pointers that can be known statically. It is therefore worthwhile to design a language that offers a good compromise between expressiveness and simplicity of the generated verification conditions. In this talk, we present such a language, its type system with regions, and its implementation in the deductive verification tool Why3.
Sorin Stratulat
Affordable Cyclic Noetherian Induction Reasoning
Explicit induction is currently implemented by most modern proof systems that generate tree-shaped proofs, but it is incompatible with useful features such as lazy and mutual induction. On the other hand, implicit and cyclic induction techniques support these features well but require inference systems able to generate proofs with more complex structures, such as graph-shaped proofs. We propose a new proof method that combines their features, by transforming well-founded cyclic induction proofs into a set of explicit induction proofs generated by sequent-based systems.

Our main contributions are both theoretical and practical. From a theoretical point of view, we identify sufficient and reasonable conditions for successfully executing the transformation procedure. In practice, the new proof method is highly automatable and efficient, the sum of the generated explicit induction proofs being comparable in size to (some normalized form of) the cyclic proof. Moreover, it can be fully implemented in any prover supporting explicit induction and able to define formula-based induction orderings and induction schemas from new terminating and total definitions of functions/predicates.

Timothy Bourke
Some recent improvements in modelling Loosely Time-Triggered Architectures
Loosely Time-Triggered Architectures (LTTAs) were introduced for the design of distributed control systems. They add a control layer on top of quasi-periodic architectures, in which processors execute almost periodically. This control layer then enables the implementation of synchronous applications.

We show how the implementation of a synchronous application on a quasi-periodic architecture can be formalized in a synchronous framework. To this end we use the Zélus language, which lets us model both the discrete behaviors of the application and the controllers, and the real-time aspects of the architecture. We detail two protocols: Back-Pressure LTTA, inspired by elastic circuits, and Time-Based LTTA, which relies on waiting mechanisms.

Clément Poncelet
An Interpretable Model for Interactive Music Systems
The role of an Interactive Music System (IMS) is to accompany musicians during live performances, acting like a real musician. It must react in real time to audio signals from the musicians, according to a timed specification called a mixed score, written in a domain-specific language. Such goals imply strong requirements of temporal reliability and robustness to unforeseen input errors, requirements that have so far received little attention from the computer music community.

This presentation details the model used in our application of Model-Based Testing techniques and tools to a state-of-the-art IMS, covering in particular offline and on-the-fly approaches for the generation of relevant test input data (including timing values).

Our method is based on formal models (specifications) in an ad hoc intermediate representation, compiled directly from mixed scores, and either passed, after conversion into Timed Automata, to the model-checker Uppaal in the offline approach, or executed by a virtual machine in the online approach.

Andrew Polonsky
Coinductive foundations of infinitary rewriting
This talk will give an overview of the coinductive reformulation of infinitary rewriting. In the classical approach, infinite reductions are defined as ordinal-indexed sequences that are continuous with respect to the tree topology on infinite terms and satisfy an additional condition on the depth of redex activity. An alternative approach is to treat infinite reductions as a coinductive relation between infinite terms. Remarkably, a simple corecursive definition captures the classical infinite reduction in a way that allows for easy proofs of standard results by coinduction. We illustrate this with a proof of the infinitary standardization theorem. This proof has been fully formalized in the Coq proof assistant using coinductive types.
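
A small OCaml analogue of the coinductive representation (the talk's development is in Coq): laziness stands in for the coinductive type, and infinite terms become ordinary corecursive values.

    type term =
      | Var of int
      | Lam of term Lazy.t
      | App of term Lazy.t * term Lazy.t

    (* The infinite left spine ((... x) x) x as a corecursive value; OCaml
       accepts the recursive definition because it is guarded by lazy. *)
    let rec spine : term Lazy.t = lazy (App (spine, lazy (Var 0)))
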
Benedikt Ahrens, Ralph Matthes
Substitution systems revisited
Matthes and Uustalu (TCS 327(1-2):155-174, 2004) presented a categorical description of substitution systems capable of capturing syntax involving binding which is independent of whether the syntax is made up from least or greatest fixed points. We extend this work in two directions. First, we continue the analysis by creating more categorical structure, in particular by organizing substitution systems into a category and studying its properties, including ones relevant for univalent foundations. Second, we develop the proofs of the results of the cited paper, and of our new ones, in UniMath, a recent library of univalent mathematics formalized in the Coq theorem prover.
Pierre Lescanne
Counting Lambda Terms and Putting Them in Bijection with Combinatorial Structures
The combinatorial approach to the lambda calculus lies at the crossroads of analytic combinatorics and logic. It has led to several interesting results on the computational properties of the lambda calculus, among which some hard problems, for which combinatorialists lack tools, remain unsolved. It has also made it possible to develop efficient tools for the random generation of terms, typed or not, based for instance on Boltzmann samplers. In this talk I will consider a natural measure of the size of lambda terms in de Bruijn notation, in which every component of a term has size 1. Through complex analysis of the generating functions we can compute asymptotic values. We can also exhibit links with the enumeration of other combinatorial structures, such as binary trees. I will present the bijections that exist between these structures and lambda terms, which may shed new light on the lambda calculus.

This research was carried out in collaboration with Maciej Bendkowski, Katarzyna Grygiel, and Marek Zaionc.
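
A minimal sketch of the resulting counting, assuming the convention that a de Bruijn index k has size k + 1 while abstraction and application each add 1 (the helper names are illustrative, not the authors' code). Writing T(n) for the number of terms of size n, we get T(0) = 0 and, for n >= 1, T(n) = 1 + T(n-1) + sum over i + j = n - 1 of T(i) * T(j): the unique index of size n, an abstraction, or an application.

    let count (nmax : int) : int array =
      let t = Array.make (nmax + 1) 0 in
      for n = 1 to nmax do
        let apps = ref 0 in
        for i = 0 to n - 1 do
          apps := !apps + t.(i) * t.(n - 1 - i)
        done;
        t.(n) <- 1 + t.(n - 1) + !apps
      done;
      t

    (* count 8 = [| 0; 1; 2; 4; 9; 22; 57; 154; 429 |] under this measure *)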

Stéphane Graham-Lengrand
Realisability semantics of abstract focussing, formalised
In this talk we present a sequent calculus for abstract focussing, equipped with proof-terms: in the tradition of Zeilberger's work, logical connectives and their introduction rules are left as a parameter of the system, which collapses the synchronous and asynchronous phases of focussing as macro rules. We go further by leaving as a parameter the operation that extends a context of hypotheses with new ones, which allows us to capture both classical and intuitionistic focussed sequent calculi.

We then define the realisability semantics of (the proofs of) the system, on the basis of Munch's orthogonality models for the classical focussed sequent calculus, but now operating at the higher level of abstraction mentioned above. We prove, at that level, the Adequacy Lemma, namely that if a term is of type A, then in the model its denotation is in the (set-theoretic) interpretation of A. This exhibits the fact that the universal quantification involved in taking the orthogonal of a set reflects, in the semantics, Zeilberger's universal quantification in the macro rule for the asynchronous phase.

Everything is formalised in Coq.

Kaustuv Chaudhuri, Sonia Marin, Lutz Straßburger
Focusing for Nested Sequents
The proof theory of modal logic has recently seen a renewal of interest. The formalism of nested sequents in particular has proven successful in giving analytic proof systems for various modal logics in a uniform and purely syntactic way, not only in the classical case but also for different variants of intuitionistic modal logics.

However, these systems are usually not well suited for proof search, due to the amount of non-determinism induced by the possibility of applying inference rules not only at the root but to any node in a sequent tree.

On the other hand, the focusing discipline, first introduced by Andreoli for linear logic, proposes a normal form for sequent calculus proofs, based on permutation properties of the logical connectives, that is more suitable for both proof search and proof representation.

We extend this idea to modal logics and show how a focusing result can be obtained for nested sequent systems that are complete with respect to the systems given by Brünnler for the most common (classical) modal logics.

Delia Kesner
Completeness of Call-by-Need – A Logical View
We develop a characterization of call-by-need normalization by means of typability, i.e. we show that a term is normalizing in call-by-need if and only if it is typable in a suitable system with non-idempotent intersection types. This first result is used to derive a new completeness proof of call-by-need w.r.t. call-by-name. Concretely, we show that call-by-need and call-by-name are observationally equivalent, so that in particular, the former turns out to be a correct implementation of the latter.
Fabio Zanasi
Interacting Hopf algebras: the theory of linear systems
We present by generators and equations the theory IH whose free model is the category of linear subspaces over a field k. Terms of IH are string diagrams which, for different choices of k, express different kinds of networks and graphical formalisms used by scientists in various fields, such as quantum circuits, electrical circuits and signal flow graphs. The equations of IH arise from distributive laws between PROPs of Hopf algebras – whence the name interacting Hopf algebras. The characterisation in terms of subspaces allows one to think of IH as a string diagrammatic syntax for linear algebra: linear maps, spaces and their transformations are all faithfully represented in the graphical language, resulting in an alternative, often insightful perspective on the subject matter.

As our main application, we use IH to axiomatise a formal semantics of signal processing circuits, for which we study full abstraction and realisability. Our analysis suggests a reflection on the role of causality in the semantics of computing devices.

This is joint work with Filippo Bonchi and Pawel Sobocinski.

Stefano Guerrini
Questing for a Theory of Quantum Computability
We propose a definition of quantum computable functions as mappings from superpositions of natural numbers to probability distributions over natural numbers. Each function is obtained as a limit of an infinite computation of a quantum Turing machine. The class of quantum computable functions is recursively enumerable, thus opening the door to a quantum computability theory which may follow some of the classical developments.
Simon Perdrix, Quanlong Wang
Quantum Diagrammatic Reasoning: Completeness and Incompleteness
The ZX-calculus is a graphical language for quantum mechanics and quantum information processing with built-in rewrite rules, where complementarity of quantum observables is axiomatized within a general framework of compact closed symmetric monoidal categories. It is known that the ZX-calculus can be used to express any operation in pure state qubit quantum mechanics, i.e. it is universal. Furthermore, any equality derived in the ZX-calculus can also be derived in the standard Dirac or matrix mechanics, i.e. it is sound. The ZX-calculus has also been proven complete for quantum mechanics with phase angles that are multiples of π/2, and for single-qubit quantum mechanics with phase angles that are multiples of π/4, meaning that any equality (up to a global scalar) that can be derived using matrices in these fragments of quantum mechanics can also be derived pictorially.

Although the ZX-calculus is known to be incomplete for pure state qubit quantum mechanics as a whole, the completeness of the ZX-calculus for the π/4 fragment is a crucial question in the development of the calculus, because of its (approximate) universality for quantum mechanics (i.e. any unitary evolution can be approximated using gates whose phase angles are multiples of π/4 only). Indeed, the completeness of this fragment has been stated as one of the main current open problems in categorical quantum mechanics.

Here, we construct a quite unusual interpretation of the ZX-calculus under which all the rewrite rules still hold. We also exhibit a graphical equation in the π/4 fragment of the ZX-calculus that does not hold under this unusual interpretation, but does hold (up to a global scalar) under the standard matrix interpretation. We thereby prove the incompleteness of the ZX-calculus for the π/4 fragment of quantum mechanics, solving an open problem in categorical quantum mechanics. Finally, we propose to extend the language with a new axiom.

Christophe Raffalli
The power of subtyping
Subtyping is a well-known concept in statically typed programming languages. However, it is not fully exploited. We will show that, using subtyping, inductive and coinductive data types can be encoded and recursive programs can be typed, while remaining in a standard typing system with no termination test or specific rule.
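
As a small mainstream analogue (not the system of the talk), OCaml's polymorphic variants already offer structural subtyping on recursive datatypes: below, the even and odd naturals coerce into the naturals by subtyping alone, with no dedicated rule for the recursive types.

    type nat  = [ `Z | `S of nat ]
    type even = [ `Z | `S of odd ]
    and  odd  = [ `S of even ]

    (* Justified by subtyping alone: unfolding even/odd only ever produces
       constructors allowed by nat. *)
    let to_nat (e : even) : nat = (e : even :> nat)

    let four : even = `S (`S (`S (`S `Z)))
    let _ : nat = to_nat four
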
Paul-André Melliès
Tensorial Logic and Algebraic Effects
Tensorial logic is a primitive logic of tensor and negation which refines linear logic by relaxing the hypothesis that tensorial negation is involutive. In this talk, I will explain how tensorial logic makes it possible to synthesize linear logic and algebraic side effects. In particular, I will show how this purely logical and categorical view of programming languages allows us to define categories of dialogue games in a universal way.
Noam Zeilberger
Linear lambda calculus and the combinatorics of embedded graphs
Take a sphere and draw a graph inside it without crossing edges, so as to carve up the sphere into contiguous regions. Now pick an edge of the graph and assign it a direction. Voilà, you have what combinatorialists call a rooted planar map. Planar maps are a very old subject, but the idea of rooting a map (here, by picking an edge together with a direction) was introduced by W. T. Tutte in order to simplify the problem of counting isomorphism classes of maps. In his 1963 paper "A Census of Planar Maps", Tutte gave a beautiful closed formula for the number of rooted planar maps with n edges. Since then, people have also counted rooted maps on surfaces other than the sphere – it's an active subfield of combinatorics that overlaps with many other areas of mathematics as well.
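
For reference, Tutte's census formula for the number M_n of rooted planar maps with n edges:

    M_n = \frac{2 \cdot 3^n}{(n+1)(n+2)} \binom{2n}{n},
    \qquad M_0 = 1,\ M_1 = 2,\ M_2 = 9,\ M_3 = 54, \ldots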

Surprisingly, recent studies of the combinatorics of linear lambda calculus have revealed connections to the theory of rooted maps. In joint work with Alain Giorgetti, we gave a size-preserving bijection between rooted planar maps and a fragment of linear lambda calculus consisting of the beta-normal planar terms. Earlier work by Bodini, Gardy, and Jacquot has also related linear lambda terms to rooted trivalent maps on oriented surfaces of arbitrary genus. While the meaning of these connections is still unclear, I believe they raise many interesting questions, and the aim of my talk will be to introduce some of the elements of this wide-open domain.

Michele Pagani, Christine Tasson, Lionel Vaux
A characterization of strong normalizability by a finiteness structure via the Taylor expansion of lambda-terms
In the folklore of differential linear logic and its denotational models, a common intuition is that the structure of finiteness spaces, introduced by Ehrhard, semantically reflects the strong normalization property of cut-elimination.

We make this intuition formal in the context of lambda-calculus by introducing a finiteness structure on resource lambda-terms, which is such that a lambda-term is strongly normalizing iff the support of its Taylor expansion is finitary.
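
To fix intuitions, the support of the Taylor expansion can be described, along the lines of Ehrhard and Regnier, as the set of resource terms obtained by replacing every argument with a finite multiset of approximants:

    \mathcal{T}(x) = \{x\}, \qquad
    \mathcal{T}(\lambda x.\,t) = \{\lambda x.\,s \mid s \in \mathcal{T}(t)\}, \qquad
    \mathcal{T}(t\,u) = \{\, s\,[u_1,\dots,u_n] \mid s \in \mathcal{T}(t),\
      n \in \mathbb{N},\ u_1,\dots,u_n \in \mathcal{T}(u) \,\}.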

This structure is a refinement of the one introduced by Ehrhard in his LICS 2010 paper: for the latter, we also prove that any strongly normalizing lambda-term has a finitary Taylor expansion (this was already known for the terms of system F), but the converse does not hold. As was already the case in Ehrhard's work, the proofs do not rely on any uniformity property, hence this approach is valid for non-deterministic extensions of the lambda-calculus as well.

One application of this work is the existence of a normal form for the Taylor expansion of any strongly normalizable term. This is a necessary first step towards extending previous results of Ehrhard and Regnier (TCS 2008) that were limited to a uniform setting.

Horace Blanc
Formalizing Functions as Processes
A classic result about the pi-calculus, due to Robin Milner, is its ability to mimic the lambda-calculus. In its original form, however, the correspondence between the two calculi maps one step to many and holds only up to bisimulation in the pi-calculus. Recently, Accattoli simplified it by using a refinement of the lambda-calculus called the linear substitution calculus (LSC), together with a new approach to the definition of the rewriting rules of the pi-calculus. The improved relationship is a simulation that is one-to-one on steps in both directions, i.e. it is a bisimulation. Moreover, the relationship holds up to structural equivalence only.

In this talk, I will present my formalization of Accattoli's result in the Abella proof assistant. To my knowledge, this is the first formalization of a translation from lambda-terms to processes. After a general overview, I will focus on the formalization of binders and of reduction rules for terms and processes that are defined using the notion of contexts.

Alice Pavaux
A Ludics Interpretation of Functional Types
Although close to game semantics, ludics differs from it by the addition of a special action, the "daimon", which signals an error or a failure. A proof can thus be denoted by a daimon-free object. The types of ludics are defined through the common computational behavior of the objects that compose them, hence in an internal and interactive way rather than by an external restriction. The primitive nature of ludics allows types to be studied in various settings, notably linear logic and dependent types, but also the structure of functions. It is thus a well-suited environment for representing data types and the functions acting on such data.

In this talk, I will present work in progress in which data (integers, lists, ...) are represented by so-called elementary ludics types, that is, types whose "interesting" objects (in a precise sense) are daimon-free. I will show in particular that a functional type over elementary types is not necessarily elementary. I will analyze the cases in which this happens, and then give some directions for interpreting these results in terms of process execution.

Guilhem Jaber
Reasoning about Program Equivalence via Operational Game Semantics
Contextual equivalence of programs written in a functional language with references (i.e. mutable memory cells) is a notoriously difficult problem, especially in the presence of higher-order references (i.e. memory cells that can store functions). Over the last twenty years, various techniques have been introduced for this purpose: Kripke logical relations, bisimulations, and algorithmic game semantics.

In this talk, we introduce operational game semantics, which builds on the trace semantics introduced by Laird for a language with references. We then use it to define a new proof technique for program equivalence: Kripke open bisimulations. This method is an attempt to unify the previously introduced techniques. Time permitting, we will see how to extend it to handle polymorphism.

Charles Grellois, Paul-André Melliès
A semantic study of higher-order model-checking
Model-checking, a common approach in verification, consists in deciding whether a given formula holds on an abstract model of interest. Decidability is usually obtained by standard reasoning on finite graphs overapproximating the behavior of programs.

The model-checking of functional programs, however, requires a careful treatment of higher-order computation, which is a major hurdle for the traditional abstraction of programs as finite graphs and leads to the need for semantic methods.

In this talk, we explain how linear logic provides models of computation which can be very naturally extended to account for the model-checking problem of properties of monadic second-order logic, mixing inductive and coinductive specification, over infinite trees generated by the coinductive rewriting of a term with recursion.

This is joint work with my PhD advisor Paul-André Melliès.

Guillaume Bury
Integrating Simplex into the Tableaux Method
Program verification often requires reasoning about arithmetic constraints. In order to prove formulas involving such constraints, I present an extension of the tableaux method to linear arithmetic (also called Presburger arithmetic), together with a proof-search algorithm that relies on the simplex algorithm for rational and real arithmetic, and on a branch & bound strategy for integer arithmetic. I will then discuss the results of implementing this extension in the first-order automated deduction tool Zenon.
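
A minimal sketch of such a branch & bound loop over a rational simplex oracle, in OCaml (the types and the oracle are hypothetical stand-ins using zarith's rationals, not Zenon's code):

    type constr = Le of Q.t list * Q.t    (* a1*x1 + ... + an*xn <= b *)

    (* Assumed oracle: a rational model of the constraints, if one exists. *)
    let simplex : constr list -> Q.t list option = fun _ -> assert false

    let is_int q = Z.equal (Q.den q) Z.one
    let floor_q q = Q.of_bigint (Z.fdiv (Q.num q) (Q.den q))
    let unit_row n i c = List.init n (fun j -> if j = i then c else Q.zero)

    (* Solve the rational relaxation; if variable xi takes a non-integral
       value v, branch on xi <= floor v and on xi >= floor v + 1. *)
    let rec branch_and_bound (cs : constr list) : Q.t list option =
      match simplex cs with
      | None -> None
      | Some model ->
        let n = List.length model in
        let rec frac i = function
          | [] -> None
          | v :: vs -> if is_int v then frac (i + 1) vs else Some (i, v)
        in
        (match frac 0 model with
         | None -> Some model                          (* integral model *)
         | Some (i, v) ->
           let f = floor_q v in
           (match branch_and_bound (Le (unit_row n i Q.one, f) :: cs) with
            | Some m -> Some m
            | None ->
              branch_and_bound
                (Le (unit_row n i Q.minus_one, Q.neg (Q.add f Q.one)) :: cs)))
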
Gabriel Scherer
A Simple Algorithm for the Equivalence of Lambda Terms with Sums and the Empty Type
The decidability of the equivalence of simply-typed programs with sum types has been known since the work of Neil Ghani in 1995, but the question of adding the empty type remained open. (In categorical terms, we want to decide the equality of morphisms in a bicartesian closed category with an initial object.)

A consequence of my thesis work on using focusing to decide whether a type has a unique inhabitant is a decision procedure for the equivalence of simply-typed lambda terms in the presence of sums. Unexpectedly, this procedure seems to extend naturally and simply to the presence of the empty type. In this talk I would like to explain the algorithm and how it derives from maximal multi-focusing, and to give some intuition of why previous methods extended poorly to the case of the empty type.

This talk corresponds to work in progress. The algorithm is simple and fairly intuitive, but its termination proof is delicate and not yet finished; the dice have been cast, but they have not yet landed on the right side.

Damiano Mazza
Parsimonious Logic
Parsimonious logic is a linear logical system which arose recently from a complexity analysis of reduction in the infinitary affine lambda-calculus. Its associated term calculus (via Curry-Howard), called the parsimonious lambda-calculus, has nice properties in terms of computational complexity, leading to novel implicit characterizations of complexity classes, some of which had never been given functional characterizations before (namely, the non-uniform complexity classes L/poly and P/poly). We will give an overview of these results and the open questions they raise.
Hubert Godfroy, Jean-Yves Marion
A taste of soft-linear logic for staged computation
The goal of this work is to study, at a high level, the relations between executable code (programs) and readable/writable code (data). In low-level languages like assembly, programs are no different from data, and what is called the program is the set of codes executed during the run. In higher-level languages such as C/C++, programs (C code) are mostly disjoint from data (strings, for instance), but self-modification is still possible through function pointers. Interpreted languages like Python make it easy to execute data with the exec instruction, which takes a string and executes it.

Logical frameworks have been developed to study this phenomenon. Temporal logic uses the next operator to specify that a statement is true for the next stage. Programmatically, the stage of each term is clearly given. In this framework, it is easy to determine when a piece of code will be evaluated.

On the other hand, modal logic can be used to describe closed values which remain to be evaluated. In this framework, data codes are syntactically identified: each term of the form <t> : □A is in normal form and is therefore seen as data.
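
A tiny interpreter makes the point concrete (a hypothetical OCaml sketch, not the calculus under discussion): a quoted term is a normal form, i.e. code frozen into data, and only an explicit run moves it back to the program level.

    type term =
      | Int of int
      | Add of term * term
      | Quote of term              (* <t> : box A -- frozen code, a value *)
      | Run of term                (* execute data as code *)

    let rec eval : term -> term = function
      | Int n -> Int n
      | Add (a, b) ->
        (match eval a, eval b with
         | Int x, Int y -> Int (x + y)
         | _ -> failwith "ill-typed")
      | Quote t -> Quote t         (* no evaluation under a quote *)
      | Run t ->
        (match eval t with
         | Quote u -> eval u       (* unfreeze and execute *)
         | v -> v)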

In this paper we want to understand more precisely what is responsible, in terms of logic, for self-modification phenomena such as reading/writing mechanisms, data execution, and program freezing. Our tools come from the linear logic world, especially from its soft part. We will see interpretations of common operations such as dereliction and digging.