Scientific Talks

Session 1: Auditorium
Reinhard Gotzhein: Engineering of Adaptive Communication Systems
Networked Systems Group, Technische Universität Kaiserslautern

For almost a decade, ad hoc networking has been receiving increasing attention in research and industry. Ad hoc networks form dynamically and typically consist of a set of small, low-power nodes communicating over a wireless medium. To satisfy specific communication requirements on the one hand, and to cope with properties of ad hoc networks such as node mobility, varying channel quality, and scarce resources on the other, adaptive communication systems are needed. In this talk, we elaborate on engineering methods for adaptive communication systems and present case studies from the ad hoc network domain. In particular, we outline our holistic, model-driven development approach, with SDL as the modeling language. All development steps are supported by a semantically integrated tool suite. Production and simulation code is entirely generated from SDL models and automatically instrumented to interface with different operating systems and communication technologies.

slides (PDF)
Bernd Finkbeiner: Synthesis of Distributed Systems
Reactive Systems Group, Fachrichtung Informatik, Universität des Saarlandes

We provide a uniform solution to the problem of synthesizing a finite-state distributed system from a temporal specification. Input to our algorithm is a formula specifying the correct computation trees and a directed graph representing the system architecture (where the nodes represent processes, which communicate synchronously through shared variables attached to the edges). Our algorithm works uniformly for all omega-regular specification languages and all system architectures with decidable synthesis problems. The talk is based on joint work with Sven Schewe.

Paul Brauner and Clément Houtmann: Superdeduction
Équipe PROTHEO, INRIA Lorraine & LORIA

We present superdeduction, a systematic way to incorporate a theory into a deduction system. We then establish its main properties: soundness and completeness. Afterwards, we present a proof-term language for superdeduction in the classical sequent calculus, along with a cut-elimination theorem for a class of theories. Finally, we present lemuridae, an interactive theorem prover based on superdeduction.

Viorica Sofronie-Stokkermans: Hierarchical and Modular Reasoning in Complex Theories
Programming Logics Group, Max-Planck-Institut für Informatik

A long-term goal of research in computer science is the analysis and verification of complex systems (these can be theories of mathematics, programs, reactive or hybrid systems, or large databases). Correctness proofs for such systems can often be reduced to reasoning in extensions or combinations of standard theories. As the resulting proof tasks are usually very large, it is extremely important to have efficient decision procedures for such complex theories.

In this talk we discuss situations in which efficient reasoning in complex theories is possible. We consider a special type of theory extensions, which we call local, where hierarchic reasoning is possible. We give several examples of theories important for computer science or mathematics which are local extensions of a base theory. We show that the decidability (and complexity) of the universal fragment of a local theory extension can be expressed in terms of the decidability (resp. complexity) of a suitable fragment of the base theory. We also mention possibilities of modular reasoning in combinations of local theory extensions, as well as possibilities of obtaining interpolants in a hierarchical manner, and discuss applications in verification and knowledge representation.
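As a small illustration of hierarchic reasoning in a local extension (our own example, not taken from the talk): extend linear arithmetic with a free function symbol f satisfying a monotonicity axiom, and consider a ground goal G.

```latex
\mathcal{K} \;=\; \{\, \forall x, y.\; x \le y \rightarrow f(x) \le f(y) \,\}
\qquad
G \;=\; a \le b \;\wedge\; f(a) > f(b)
```

Locality means it suffices to instantiate the axiom at the ground terms of G and to abstract f(a), f(b) by fresh constants c1, c2; the base-theory problem a ≤ b ∧ c1 > c2 ∧ (a ≤ b → c1 ≤ c2) is unsatisfiable, hence so is G in the extension. The reduction is complete precisely because the extension is local.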

Calogero Zarba: Interpolation for Datastructures
Reactive Systems Group, Fachrichtung Informatik, Universität des Saarlandes

Interpolation-based automatic abstraction is a powerful and robust technique for the automated analysis of hardware and software systems. In this talk we present a general algorithm to construct interpolants for any recursively enumerable theory. In particular, efficient procedures to construct interpolants for the theories of arrays, sets, and multisets are discussed using the reduction approach for obtaining decision procedures for complex data structures. The approach taken is that of reducing the theories of such data structures to the theories of equality and linear arithmetic, for which efficient interpolating decision procedures exist. This enables interpolation-based techniques to be applied to programs that manipulate these data structures.
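As a small illustration (our own example, not from the talk), recall that a Craig interpolant for an unsatisfiable pair (A, B) follows from A, contradicts B, and mentions only shared symbols. In the theory of arrays:

```latex
A \;=\; \bigl(b = \mathit{write}(a, i, v)\bigr)
\qquad
B \;=\; \bigl(\mathit{read}(b, i) \neq v\bigr)
```

The shared symbols are b, i, v, and an interpolant is I = (read(b, i) = v): A entails I by the read-over-write axiom, I is inconsistent with B, and I uses only shared symbols.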

Silvio Ranise: Decision Procedures for Software Analysis
Équipe CASSIS, INRIA Lorraine & LORIA

We start by giving the context of our research: solving satisfiability problems modulo theories (SMT problems) obtained by applying declarative approaches to software analysis. We also discuss the main challenges that an SMT solver should face in order to provide the necessary degree of automation to solve such problems. Then, we sketch how we tackled some of these problems by exploiting techniques based on equational reasoning (to build in a uniform way decision procedures for the satisfiability problems in theories modelling data structures such as lists and arrays), combination methods (to reason in heterogeneous theories encompassing decidable fragments of arithmetic and data structures), and proof reconstruction (to certify the results of the SMT solvers). We conclude the talk by identifying some lines of future research. While developing the main theoretical ideas, we discuss their implementation in the SMT solver haRVey.
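To give a flavour of the equational-reasoning layer of such solvers, the following toy sketch (our own illustration, not haRVey's implementation) decides conjunctions of ground equalities and disequalities between constants with union-find, the core of the congruence-closure family of procedures:

```python
def decide_equality(eqs, neqs):
    """Decide satisfiability of a conjunction of equalities (eqs) and
    disequalities (neqs) between constants, i.e. the pure theory of
    equality without function symbols, using union-find."""
    parent = {}

    def find(x):
        # find the representative of x's equivalence class, with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # merge the classes of each asserted equality
    for a, b in eqs:
        parent[find(a)] = find(b)
    # satisfiable iff no disequality relates two constants in the same class
    return all(find(a) != find(b) for a, b in neqs)
```

For instance, a = b, b = c together with a ≠ c is unsatisfiable, while a = b with a ≠ c is satisfiable. A full procedure for lists or arrays additionally propagates equalities through function symbols (congruence) and instantiates the theory axioms.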

Session 2: Room A008
Rhaleb Zayer: A Framework for Natural Animation of Digitized Models
Department Computer Graphics, Max-Planck-Institut für Informatik

I will present a novel, versatile, fast, and simple framework to generate high-quality animations of scanned human characters from input motion data. The method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations and can easily transfer motions between human subjects of completely different shape and proportions. Our approach handles many different types of input data, e.g. other animated meshes and motion capture data, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.

Bruno Lévy: Dynamic Function Bases
Équipe ALICE, INRIA Lorraine & LORIA

The advent of 3D scanning technology makes it easy to convert a real object into a computer representation. However, the raw nature of the resulting geometry hinders the efficient use of these 3D objects. The geometry representation constructed by a 3D scanner consists of a regular sampling of the geometry, with a large number of points. Similarly, a bitmap image is composed of a large number of pixels.

In this presentation, we consider the problem of finding an equation of an object (or an image) from a sampling. In other words, from a concrete, sampled representation, our goal is to automatically construct an abstract, high-level representation. We will demonstrate how the method can be applied to both 2D images (bitmap to vector conversion) and 3D objects (mesh to Spline conversion).
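In the very simplest setting, recovering "an equation from a sampling" is a least-squares fitting problem. The sketch below (our own illustration; the talk concerns spline and mesh bases, not straight lines) fits y = a + b·x to 2D samples via the closed-form normal equations:

```python
def fit_line(points):
    """Least-squares fit of a line y = a + b*x to (x, y) samples,
    using the closed-form solution of the normal equations."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b
```

For example, samples taken exactly on y = 2x + 1 are recovered as a = 1, b = 2. The methods discussed in the talk solve the analogous problem for richer function bases, where the basis itself is also subject to optimization.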

Finally, the problem mentioned above (converting scanned data into equations) is an instance of a more general problem, which concerns the numerical solution of PDEs (Partial Differential Equations). Current methods (FEM: the Finite Element Method) first define a function basis attached to a mesh, then solve for the degrees of freedom yielded by this function basis. Our goal is to introduce more flexibility into the method by optimizing both the degrees of freedom and the function basis simultaneously. We explain the formulation and show early results.

Michael Sagraloff: Fast and Exact Geometric Analysis of Real Algebraic Plane Curves
Department Algorithms and Complexity, Max-Planck-Institut für Informatik


slides (PDF)
Sylvain Petitjean: Line transversals to disjoint balls
Équipe VEGAS, INRIA Lorraine & LORIA

We present recent results on sets of lines piercing balls. In particular, we outline a key convexity property of the directions of lines meeting a set of balls in a prescribed order. We then show how this property allows us to improve upon several old and new results, such as bounds on the number of connected components of line transversals and Helly-type theorems.

slides (PPT)
Thierry Declerck: Standards and Infrastructures for Language Resources
Language Technology Lab, Deutsches Forschungszentrum für Künstliche Intelligenz

In this talk I present the eContent project LIRICS, which is about the development of a Linguistic Infrastructure for Interoperable Resources and Systems. More specifically, the project is concerned with the application and development of ISO procedures for standards in the domain of language resources, including NLP lexica, morpho-syntactic and syntactic annotations, as well as semantic content. This work is done with the purpose of enabling and ensuring the interoperability and reuse of existing and new language resources.

The project is coordinated by LORIA, and DFKI is one of the LIRICS partners.

slides (PPT)
Carlos Areces and Claire Gardent: Textual Entailment Recognition
Équipe Langue et Dialogue, INRIA Lorraine & LORIA

Textual entailment recognition consists in deciding whether one text fragment (called the Hypothesis) is a consequence of another (called the Text). For instance, given the following Text/Hypothesis pair:

T: The debate lasted many hours, but after much bitter discussion the Labour Party decided to back Tony Blair's policy on Iraq.

H: The Labour Party backed Tony Blair's Iraq policy.

an automated textual entailment system should be capable of recognising that T textually entails H.

Roughly, an automated textual entailment system consists of two main components: a component for semantic construction (which, given a sentence, builds a meaning representation for that sentence) and a component for reasoning.

In this talk, we will start by briefly presenting the work done in Nancy on relating natural language expressions and meaning representations, either going from text to meaning (analysis) or, conversely, from meaning to text (generation). We will then go on to sketch the architecture of a textual entailment system we plan to develop. Finally, we will identify possible areas of collaboration within the French-German Computer Science Collaboration, namely grammar development and evaluation; parsing and realisation optimisation; ontologies and lexical semantics; parsing and inference; and reasoners for natural language processing. We will also say a few words about the recently created European Masters Program in Language and Communication Technologies and how it could be linked to the French-German collaboration.
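As a point of contrast with the semantics-based architecture sketched above, the crudest entailment recognizers use no meaning representation at all, only lexical overlap. The toy baseline below (our own illustration, with a hypothetical 0.5 threshold; not the system discussed in the talk) accepts a pair when enough hypothesis words occur in the text:

```python
import string

def tokens(text):
    """Lowercase the text and strip surrounding punctuation from each word."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def overlap(text, hypothesis):
    """Fraction of distinct hypothesis words that also occur in the text."""
    t, h = tokens(text), tokens(hypothesis)
    return len(h & t) / len(h)

def entails(text, hypothesis, threshold=0.5):
    # hypothetical threshold; a real system would learn or tune it
    return overlap(text, hypothesis) >= threshold
```

On the T/H pair above, 7 of the 8 hypothesis words occur in T ("backed" does not, for want of stemming), so the baseline accepts. Such baselines also accept many non-entailments, which is precisely why semantic construction and reasoning components are needed.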

slides (PDF)
Stephan Merz