Seminar of Department 1 at LORIA

Seminar of the Department Algorithms, Computation, Image and Geometry at LORIA

This page presents seminars of the department Algorithms, Computation, Image and Geometry, together with more specialized team seminars from the teams ABC, Adagio, Caramba, Gamble, Magrit, MFX, and Pixel.


A speaker invited by the associated team CURATIVE will give a seminar next Monday (July 1st) at 14:00 in Room C103.

Magrit Seminar.

July 1st, 14:00 Room C103
Pete Hammer, Boston
Computational fluid dynamics to guide care of patients with congenital heart disease
Computational fluid dynamics (CFD) refers to the use of computers to numerically approximate the Navier-Stokes equations, which are the basic conservation laws that govern fluid flow. This methodology is well established in the design of mechanical systems and has more recently been used to study biological flows in the human body. In this talk, I will describe recent work by my group using CFD to better understand blood flow in patients with various forms of congenital heart disease, including single-ventricle defects, anomalous coronary arteries, and pulmonary vein stenosis. I will give examples of both disease-specific studies, where the goal is generalized treatment guidelines, and patient-specific CFD studies, where results can directly inform patient care.
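For reference, the equations the abstract alludes to are the incompressible Navier-Stokes equations (momentum balance plus mass conservation), which CFD solvers discretize numerically:

```latex
\rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u} \right)
  = -\nabla p + \mu \nabla^2 \mathbf{u} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0,
```

where u is the velocity field, p the pressure, ρ the density, μ the dynamic viscosity, and f any body force.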

Pixel Seminar.

July 9th, Room A006
Baogang Hu, LIAMA/NLPR, Beijing, China
Information Theoretic Learning (ITL) in Pattern Classification
While ITL criteria (a term coined by Principe et al., 2000) have come to play a growing role in machine learning, we are still puzzled by their successes and limitations in applications, and by their connections to empirical learning criteria. This talk focuses on ITL in the study of pattern classification. The connections between the two sets of learning criteria are presented for binary classification. I will introduce a novel theory of abstaining learning for both Bayesian classifiers and mutual-information classifiers, as well as cost-free learning on real-world data sets in comparison with cost-sensitive learning. The talk will show that ITL provides a foundation for understanding learning mechanisms.
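The quantity at the heart of such information-theoretic criteria can be illustrated concretely: the mutual information I(T;Y) between true labels T and predictions Y, computed from a confusion matrix. This is a minimal stdlib-only sketch; the function name and example counts are illustrative, not from the talk.

```python
import math

def mutual_information(confusion):
    """I(T;Y) in bits, from a confusion matrix of counts (rows: true T, cols: predicted Y)."""
    total = sum(sum(row) for row in confusion)
    pt = [sum(row) / total for row in confusion]          # marginal P(T=i)
    py = [sum(col) / total for col in zip(*confusion)]    # marginal P(Y=j)
    mi = 0.0
    for i, row in enumerate(confusion):
        for j, count in enumerate(row):
            if count:
                pij = count / total
                mi += pij * math.log2(pij / (pt[i] * py[j]))
    return mi

# A perfect classifier extracts all label information: I(T;Y) = H(T) = 1 bit here.
perfect = [[50, 0], [0, 50]]
# A classifier independent of the labels extracts none: I(T;Y) = 0.
random_clf = [[25, 25], [25, 25]]
```

Maximizing I(T;Y) is what distinguishes a mutual-information classifier from one trained on empirical error alone.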
Speaker: Baogang Hu received his Ph.D. degree in 1993 from the Department of Mechanical Engineering, McMaster University, Canada. He is currently a professor at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China. From 2000 to 2005, he served as the Chinese Director of the Chinese-French Laboratory of Information, Automation and Applied Mathematics (LIAMA). His current research interests include machine learning and plant growth modeling.

Day of Department.

25 May 2019, 9:00-14:30, Room A008
  • 9:30 Jimmy Etienne, MFX: Curved slicing for additive manufacturing
  • 9:50 Gabrielle De Micheli, Caramba: Recovering ECDSA cryptographic keys from partial information
  • 10:10 Charles Duménil, Gamble: Expected size of the Delaunay triangulation of random points on a surface
  • 10:30 Break
  • 11:00 Aude Le Gluher, Caramba: How much time does it take to factor an integer?
  • 11:20 George Krait, Gamble: Numerical Algorithm for the Topology of Singular Plane Curves
  • 11:40 Semyon Efremov & Thibaut Tricard, MFX: Procedural phasor noise
  • 13:30 Remi Decelle, Adagio: Measuring wood quality of logs from low-cost sensors at the sawmill or at the road side
  • 13:50 Paul Huynh, Caramba: NIST's lightweight cryptography initiative
Jimmy Etienne, MFX: Curved slicing for additive manufacturing
3D printing by local material deposition is becoming increasingly important in companies and homes, but many defects mean that these parts are rarely used outside the prototyping phases. The most common defects are:
- delamination (structural weakness created by the deposition of successive flat layers)
- the staircase effect (visual defect created by the deposition of successive layers on the surface)

The approach envisaged is to use equipment that already exists in the field (3D printers, robotic arms) to print objects with better structural and/or visual quality. To this end, we plan to develop "curved" slicing techniques, moving beyond the current planar slicing techniques (which nonetheless still leave room for improvement).

In the literature, existing approaches never directly address the problems associated with the automatic computation of curved slices. There are proofs of concept of out-of-plane 3D printing, scheduling algorithms for wire-frame objects, objects cut into different printable regions (still bounded by planes), and methods that print only the outer surface or a thin layer in a curved manner. The scientific objective of this thesis is to revisit slicing algorithms by allowing deposition along arbitrary curves, in order to improve the quality of the parts (surface finish and/or mechanical strength). The usual restriction to planar slices actually simplifies the slicing problem; when approaching curved slicing, the addition of a large number of degrees of freedom makes the problem difficult to address. We therefore take a progressive approach, gradually relaxing the flatness constraints so as to reach a generic method by the end of the thesis.

Gabrielle De Micheli, Caramba: Recovering ECDSA cryptographic keys from partial information
A signature algorithm such as ECDSA allows a person to sign a message so that the receiver is guaranteed the message indeed comes from the right issuer. The Elliptic Curve Digital Signature Algorithm (ECDSA) is such a signing algorithm, where each signature on a message requires the use of a nonce k. ECDSA, like other cryptographic protocols, is known to leak information which can then be used to recover the secret key of the signer. In this work, we look at different ways to recover the secret key used in ECDSA when partial information about the nonce k is known, in particular some consecutive bits. We explain how key recovery can be reduced to a hard lattice problem.
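The algebra behind nonce leakage can be seen with toy numbers. The sketch below keeps only the modular-arithmetic skeleton of ECDSA (the group order, r value, and hash are made-up small numbers, not a real curve): if even a single nonce is fully known, the secret key follows by one line of algebra. The talk's harder setting, where only some bits of k are known per signature, replaces this one-liner with a lattice problem.

```python
# Toy ECDSA signing equation over Z/nZ; all values are hypothetical and tiny.
n = 101          # group order (toy-sized prime)
d = 42           # signer's secret key
h = 9            # message hash, reduced mod n
k = 7            # per-signature nonce
r = 33           # in real ECDSA, r is derived from k*G; fixed here for illustration

# Signing: s = k^{-1} (h + r*d) mod n
s = pow(k, -1, n) * (h + r * d) % n

# If the nonce k leaks, the secret key follows immediately:
# d = r^{-1} (s*k - h) mod n
recovered = pow(r, -1, n) * (s * k - h) % n
assert recovered == d
```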

Charles Duménil, Gamble: Expected size of the Delaunay triangulation of random points on a surface
It is well known that a planar triangulation has linear size; this follows from the Euler characteristic. In dimension 3 or higher, there exist triangulations of size Θ(n²).
We compute the size of the Delaunay triangulation when the points are distributed on a surface. For instance, on a cylinder, it is known that the triangulation can have size as large as Θ(n√n), even for a good deterministic sample. But if the points are distributed at random, a tight bound of Θ(n ln n) is reached in expectation. We are trying to adapt these results to generic surfaces, where good samples lead to a triangulation of size O(n ln n), and where we expect a linear expected size.

Aude Le Gluher, Caramba: How much time does it take to factor an integer?
The security of some asymmetric cryptography relies on the fact that it takes a prohibitive amount of time to factor large enough integers. But how large is "large enough"? To answer this question, I study a factoring algorithm: the number field sieve (NFS). The goal of my thesis is to obtain the best possible estimates of its computing time, from both a theoretical and a practical viewpoint. In this talk, I will present a method I designed to assess the asymptotic complexity of NFS.
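For context, a standard fact about NFS (not a result of the talk): its heuristic asymptotic complexity for factoring an integer N is subexponential,

```latex
L_N\!\left[\tfrac{1}{3},\, c\right]
  = \exp\!\Big( (c + o(1))\, (\ln N)^{1/3} (\ln \ln N)^{2/3} \Big),
\qquad
c = \left(\tfrac{64}{9}\right)^{1/3} \approx 1.923 .
```

Turning this asymptotic formula into concrete running-time predictions, with the o(1) term accounted for, is precisely the kind of estimate the thesis aims at.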

George Krait, Gamble: Numerical Algorithm for the Topology of Singular Plane Curves
We are interested in computing the topology of plane singular curves. For this, the singular points must be isolated. Numerical methods for isolating singular points are efficient but not certified in general. We are interested in developing certified numerical algorithms for isolating the singularities. In order to do so, we restrict our attention to the special case of plane curves that are projections of smooth curves in higher dimensions. In this setting, we show that the singularities can be encoded by a regular square system whose isolation can be certified by numerical methods. This type of curves appears naturally in robotics applications and scientific visualization.

Semyon Efremov & Thibaut Tricard, MFX: Procedural phasor noise
Procedural pattern synthesis is a fundamental tool of computer graphics, ubiquitous in games and special effects. By calling a single procedure at every pixel -- or voxel -- large quantities of detail are generated at low cost, enhancing textures and producing complex structures within and along surfaces. Such procedures are typically implemented as pixel shaders.

We propose a novel procedural pattern synthesis technique that exhibits desirable properties for modeling highly contrasted patterns, especially well suited to producing surface and microstructure details. In particular, our synthesizer affords precise control over the profile, orientation and distribution of the produced stochastic patterns, while allowing all these parameters to be graded spatially. Our technique defines a stochastic smooth phase field (a phasor noise) that is then fed into a periodic function (e.g. a sine wave), producing an oscillating field with prescribed main frequencies and preserved contrast of the oscillations. In addition, the profile of each oscillation is directly controllable (e.g. sine wave, sawtooth, rectangular, or any 1D profile). Our technique builds upon a reformulation of Gabor noise in terms of a phasor field that affords a clear separation between local intensity and phase.
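The core idea can be sketched in 1D (a minimal stdlib-only illustration with assumed kernel parameters, not the authors' implementation): sum Gabor-like kernels into a complex field, keep only its phase, and feed that phase to a periodic profile such as a sine.

```python
import cmath, math, random

def phasor_field(x, kernels):
    """Sum of Gaussian-windowed complex exponentials (Gabor-like kernels)."""
    g = 0 + 0j
    for center, freq, bandwidth in kernels:
        envelope = math.exp(-bandwidth * (x - center) ** 2)
        g += envelope * cmath.exp(2j * math.pi * freq * (x - center))
    return g

random.seed(0)
# 20 kernels at random positions, all tuned to the same main frequency.
kernels = [(random.uniform(0.0, 1.0), 8.0, 30.0) for _ in range(20)]

# Phasor noise: discard the field's magnitude, keep its phase, and apply a
# profile function -- the sine here could be swapped for sawtooth, square, etc.
signal = [math.sin(cmath.phase(phasor_field(i / 200, kernels)))
          for i in range(200)]
```

Because only the phase is used, the output oscillates at the prescribed frequency with full contrast everywhere, which is the separation between intensity and phase the abstract refers to.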

Remi Decelle, Adagio: Measuring wood quality of logs from low-cost sensors at the sawmill or at the road side
The thesis focuses on the estimation of wood quality. The analysis of X-ray computed tomography (CT) images has been extensively studied over the past 30 years. Here, we are interested instead in the analysis of RGB images taken by digital cameras (cameras or smartphones), and we only have images of log ends. In the literature, few studies have addressed the processing of such images.
The quality of the wood determines its use and price. This quality also influences the physical properties of the log. Several features reflect this quality, such as the distance between the position of the pith and the geometric centre of the log, or the average thickness of the rings.

Among the features mentioned, an important one to detect is the pith. Few studies have addressed this problem on such images, and those that have make strong assumptions: a manual segmentation of the log must be done beforehand, or the log must be centered in the image. Our first contribution was an algorithm to locate the pith without these assumptions. Our current work studies the gap between the pith and the geometric centre of the log (a quality indicator); to do this, we are looking at segmentation tools.

Paul Huynh, Caramba: NIST's lightweight cryptography initiative
Driven by the need for cryptographic primitives that can run on devices with very low computing power, small memory and limited power supply, a process to evaluate and standardize these so-called lightweight cryptographic algorithms was recently initiated by the National Institute of Standards and Technology (NIST). This presentation gives an overview of this initiative and briefly introduces our submission, "Lilliput-AE".

CARAMBA Seminar.

23 May 2019, 10:30, Room Jean Legras
Robin Larrieu, LIX
Fast polynomial reduction for generic bivariate ideals
Let A, B in K[X,Y] be two bivariate polynomials over an effective field K, and let G be the reduced Gröbner basis of the ideal I := <A,B> generated by A and B, with respect to some weighted-degree lexicographic order. Under some genericity assumptions on A, B, we will see how to reduce a polynomial with respect to G with quasi-optimal complexity, despite the fact that G is much larger than the intrinsic complexity of the problem. For instance, if A, B have total degree n, that is O(n²) coefficients, then G has O(n³) coefficients, but reduction can be done in time O(n²). We will consider two situations where these new ideas apply, leading to different algorithms:
- First, there is a class called "vanilla Gröbner bases" for which there is a so-called terse representation that, once precomputed, allows any polynomial P to be reduced in time O(n²). In this setting, assuming suitable precomputation, multiplication and change of basis can therefore be done in time O(n²) in the quotient algebra K[X,Y] / <A,B>.
- Then, we assume that A and B are given in total degree and we consider the usual degree lexicographic order. Although the bases are not vanilla in this case, they admit a so-called concise representation with similar properties. Actually, the precomputation can also be done efficiently in this particular setting: from the input A, B, one can compute a Gröbner basis in concise representation in time O(n²). As a consequence, multiplication in K[X,Y] / <A,B> can be done in time O(n²), including the cost of precomputation.



29 November 2018, 10:00, Amphi Gilles Kahn (C)
Jean-Baptiste Labrune, Bell Labs
Radical Design: augmented environments for building dynamic objects with active matter
Programmable matter, also called active matter, is a class of physical and biological materials that allow new kinds of affordances and interactions (Tibbits, Active Matter, MIT Press, 2017). Static entities become dynamic and can shapeshift, turning elastic or solid on demand. Active matter is still a laboratory prototype, hence neither easily available nor simple to use for non-experts.
However, just as VLSI opened the field of advanced software engineering, technologies such as 3D/4D printing and bio/nano-assemblers will probably popularize it soon. In this new context, how will we program, but also design, with these new materials? What will be the appropriate tools and interaction paradigms to build this world that Ivan Sutherland defined as "ultimate"?
Drawing on my research as a PhD student at INRIA and a postdoc at the MIT Media Lab, I will show some initial answers to these questions. I will detail in particular how the vision of Radical Atoms (Ishii, Labrune et al. 2012) proposes a new kind of paradigm called Radical Design, connecting biology, materials science, computational modelling of microstructures, HCI and design.
Jean-Baptiste Labrune is a designer and researcher specialized in the study of creative places and processes. His research focuses on the notion of "exaptation": the way users of a technology reconfigure and hack it, producing original and unexpected functions and uses. He completed his PhD at INRIA and a postdoc at MIT, then became a researcher at Bell Labs and a professor of interaction design at ENSAD (École des Arts Décoratifs). He has organized and led many "hybrid" workshops in art & science venues in France (Arts Décos, Beaux-Arts, Palais de Tokyo, Mains d'Oeuvres) and internationally (Mediamatic, Interaction Design Institute Ivrea, IMAL, Hangar, Hyperwerk, Akademie Schloss Solitude, MIT Media Lab).

LORIA COLLOQUIUM, proposed by the department

27 November 2018, 13:30, Amphi Gilles Kahn
Claire Mathieu, École Normale Supérieure
Stable Matching in Practice
colloquium page


8 November 2018, 10:00, Room Jacques-Louis Lions (A008)
Etienne Corman, Carnegie Mellon University
Numerical Representation of Geometry for Shape Deformation
Geometric data are now widely available and accessible thanks to the democratization of 3D modelling software and low-cost 3D scanning techniques. Thus, defining a compelling representation with which to compare shapes, measure similarity, or find similar patterns between non-Euclidean data has become a major challenge in geometry processing. We propose a novel way to capture and characterize distortion between pairs of shapes by extending the recently proposed framework of shape differences built on functional maps. We demonstrate that a set of four operators is complete, capturing intrinsic and extrinsic structure and fully encoding a shape up to rigid motion in both discrete and continuous settings.


8 November 2018, 14:00, Room Jean Legras (A006)
Chee Yap, New York University
Subdivision Path Planning in Robotics: Theory and Practice
ABSTRACT: Motion planning is a fundamental problem in robotics. We propose to design path planners based on three foundations:
(1) The notion of "resolution-exact" planners. Conceptually, this avoids the zero problem of exact computation.
(2) The use of "soft predicates" for achieving such algorithms in the subdivision approach.
(3) The "feature-based technique" for constructing such soft predicates.
We formulate an algorithmic framework called "Soft Subdivision Search" (SSS) that incorporates these ideas. There are many parallels between our framework and the well-known Sampling or Probabilistic Roadmap framework. Both frameworks lead to algorithms that are practical, easy to implement, flexible and extensible, with adaptive and local complexity. In contrast to sampling and previous resolution approaches, SSS confers strong theoretical guarantees, including halting.
In a series of papers we demonstrated the power of these ideas by producing planners for planar robots with 2, 3 and 4 degrees of freedom (DOF) that outperform or match state-of-the-art sampling-based planners. Most recently, we produced a planner for two spatial robots (rod and ring) with 5 DOFs. Non-heuristic planners for such robots have been considered a challenge for the subdivision approach. We outline a general axiomatic theory underlying these results, including subdivision in non-Euclidean configuration spaces.
Joint work with Y.J.Chiang, C.H.Hsu, C.Wang, Z.Luo, B.Zhou, J.P.Ryan.

GAMBLE Seminar.

7 November 2018, 10:30, Room Jean Legras (A008)
Chee Yap, New York University
Continuous Amortization and the Complexity of Root Isolation
The problem of isolating all roots (real or complex) of a polynomial is a highly classical one. For integer polynomials, it has been known for 30 years, from the work of Schönhage and Pan, that the bit-complexity of this problem is O(n²L) when the polynomial has degree n with L-bit coefficients. Pan informally calls such bounds "near optimal". But their algorithm has never been implemented. Until recently, it was assumed that such near-optimal complexity requires the divide-and-conquer method of Schönhage.
Meanwhile, subdivision algorithms for root isolation have proved efficient and are widely implemented. But the worst-case complexity of subdivision algorithms always lagged behind that of the divide-and-conquer algorithms. In the last few years, this gap was finally closed for complex roots [ISSAC 2016]. Moreover, our near-optimal root isolation algorithm was implemented this year with good preliminary results.
In this talk, we focus on some techniques that proved useful in the complexity analysis of subdivision algorithms -- in particular the idea of "Continuous Amortization".
Based on several papers with M.Sagraloff, V.Sharma, M.Ruben, J.Xu and M.Burr.
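To give a flavor of the subdivision algorithms analyzed in the talk, here is a minimal Descartes-based subdivision isolator for the real roots of a squarefree polynomial in (0,1). This is a textbook sketch in the spirit of the Vincent-Collins-Akritas scheme, not the near-optimal algorithm of the talk; coefficients are listed by increasing degree.

```python
def sign_variations(coeffs):
    """Number of sign changes in a coefficient sequence (Descartes' rule)."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def taylor_shift_1(coeffs):
    """Coefficients of p(x+1), via repeated Horner-style accumulation."""
    b = list(coeffs)
    for i in range(len(b) - 1):
        for j in range(len(b) - 2, i - 1, -1):
            b[j] += b[j + 1]
    return b

def descartes_01(coeffs):
    """Descartes bound on the number of roots of p in the open interval (0,1):
    sign variations of (1+x)^n p(1/(1+x)), i.e. reverse then shift by 1."""
    return sign_variations(taylor_shift_1(list(reversed(coeffs))))

def isolate(coeffs, a=0.0, b=1.0):
    """Isolating intervals for the roots of a squarefree p on (a,b),
    by bisection: 0 variations excludes, 1 variation certifies."""
    v = descartes_01(coeffs)
    if v == 0:
        return []
    if v == 1:
        return [(a, b)]
    m = (a + b) / 2
    left = [c / 2 ** i for i, c in enumerate(coeffs)]   # p(x/2): maps to (a,m)
    right = taylor_shift_1(left)                        # p((x+1)/2): maps to (m,b)
    found = [(m, m)] if right[0] == 0 else []           # exact root at the midpoint
    return isolate(left, a, m) + found + isolate(right, m, b)

# (x - 1/4)(x - 3/4) = x^2 - x + 3/16: two roots in (0,1).
print(isolate([3/16, -1.0, 1.0]))   # → [(0.0, 0.5), (0.5, 1.0)]
```

The "continuous amortization" of the talk is precisely the tool for bounding the size of the recursion tree such subdivision generates.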


11 July 2018, 14:30, Room Philippe Flajolet (C005)
Nicolas Delanoue, Université d'Angers
Interval computation and optimal transport
The optimal transport problem was first formalized by the French mathematician Gaspard Monge in 1781. Since Kantorovich, this (generalized) problem has been formulated in terms of measure theory, and it was brought back into the spotlight by Cédric Villani (Fields Medal 2010).
Recently, for the case where the cost function is polynomial, Jean-Bernard Lasserre and Didier Henrion proposed a solution method based on the moments of measures to reformulate the problem. Their approach yields a linear programming problem with convex constraints.
Building on interval computation, we propose a guaranteed discretization that yields a linear program with linear constraints whose optimum is a lower bound on the optimal value of the Kantorovich problem. During the talk, I will recall the Kantorovich problem and discuss our approach. Time permitting, I will also discuss a guaranteed discretization of the dual, which provides an upper bound on the optimum.
[1] Topics in Optimal Transportation, Cédric Villani, AMS (2003).
[2] Optimal Transportation and Economic Applications, Guillaume Carlier, link

MAGRIT Seminar.

Monday 18 June 2018, 9:00, Room C103
Robert D. Howe, Harvard Biorobotics Lab
Ultrasound guided heart surgery: Ultrasound image processing, fast motion compensation, catheter robot control.
To treat defects within the heart, surgeons currently use stopped-heart techniques. These procedures are highly invasive and incur a significant risk of neurological impairment. We are developing methods for performing surgery within the heart while it is beating. New real-time 3-D ultrasound imaging allows visualization through the opaque blood pool, but this imaging modality poses difficult image processing challenges due to poor resolution, acoustic artifacts, and data rates of 30 to 40 million voxels per second. To track instruments within the heart we have developed a Radon transform-based algorithm. Implementation on a graphics processing unit (GPU) enables real-time processing of the ultrasound data stream. For manipulation of rapidly moving cardiac tissue we have created a fast robotic device that can track the tissue based on ultrasound image features. This allows the surgeon to interact with the heart as if it were stationary. Our in vitro studies show that this approach enhances dexterity and lowers applied forces. To complete the integration of ultrasound imaging with the robotic device we have developed a predictive controller that compensates for the imaging and image processing delays to ensure good tracking performance. We will present applications of this technology in atrial septal defect closure and mitral valve annuloplasty procedures, demonstrating the potential for improved patient outcomes.

MAGRIT Seminar.

Wednesday 6 June 2018, 15:00, Room A006
Peter E. Hammer, Harvard Biorobotics Lab
Using engineering and computational methods to inform surgery for congenital heart disease.
Surgeons have traditionally used an approach based on clinical experience and intuition. However, in congenital heart disease, the variety of anatomical abnormality is almost endless, defying an experience-based approach. Analytical approaches based on quantitative analysis and engineering principles have been proposed and show promise to help improve surgical management in highly complex cases. In this talk, examples will be presented to show the use of quantitative approaches to inform surgery, including: (1) characterization of the mechanical properties of cardiovascular tissues and use of this data to guide surgical techniques, (2) simulation to explore novel methods for valve reconstruction in the growing child, (3) lumped element modeling of the circulation to aid in decision-making for complex circulations.


25 May 2018, Room Philippe Flajolet (C005)


11 May 2018, 13:15, Room Jean Legras (A008)
Brian Wyvill, University of Victoria (Canada)
Implicit Modelling for Animation
Modelling with triangle meshes has been well supported in both hardware and software for computer animation, but problems arise with meshes when defining closed volumes and arbitrary topologies. Representing volumes as procedural implicit models makes for simple collision tests and methods for deformations. An example in modelling is presented for constructing smooth isolated or interconnected 3-D inscribed volumes. This approach can be used to build porous structures, including volcanic rocks, liquid or dry foam, and tiny creatures known as radiolarians. In animation, a series of research projects between French and Canadian researchers has led to an implicit approach to high-quality skinning for character animation that enhances existing mesh approaches. This talk gives a high-level view of these projects and shows that implicit modelling is a practical methodology for use in the animation and games industry.
Bio: Brian Wyvill is a professor at the University of Victoria in Canada. He has been conducting research in computer graphics since the early 1970s and has spent several decades working on implicit modelling and on the intersection between computing and art. Brian worked on the original Alien movie and has presented several animated shorts at the SIGGRAPH electronic theatre. He is currently on the board of Eurographics and is vice-president of ACM SIGGRAPH.

GAMBLE Seminar.

3 April 2018, 10:00, Room Bob
Olivier Devillers, LORIA
Compact encoding of triangulations
While classical encodings (winged edge) use 19n or 13n references to store a genus-0 triangulation of n points, I will present a structure that makes it possible
- with 4n references, to navigate from one triangle to another in constant time
- with 5n references, to additionally access the vertices in constant time
as well as several variants that reconcile this compact structure with a 4n-bit compression format, and that store information in the faces of the triangulation or in the corners of the triangles. Joint work with Luca Castelli Aleardi (LIX) (the paper)
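For contrast, the classical baseline that such compact structures improve upon can be sketched simply: store, for each triangle, references to its three edge-neighbors, which already gives constant-time navigation but costs about 6n references for the roughly 2n triangles of n points. The two-triangle data below is hypothetical, for illustration only; the talk's contribution is to keep constant-time navigation while cutting the count to 4n.

```python
# Two triangles glued along the diagonal of a square with vertices 0,1,2,3.
vertices = [(0, 1, 2), (2, 1, 3)]         # vertex indices of each triangle
neighbors = [(-1, -1, 1), (-1, -1, 0)]    # neighbors[t][i]: triangle across edge i of t,
                                          # -1 when edge i lies on the boundary

def adjacent(t, i):
    """Triangle adjacent to t across its edge i (constant-time array lookup)."""
    return neighbors[t][i]

assert adjacent(0, 2) == 1   # crossing the shared edge of triangle 0 reaches triangle 1
assert adjacent(1, 2) == 0   # and vice versa
```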

ABC Seminar.

16 March 2018, 11:00, Room Flajolet
Jean-Baptiste Courbot, INRIA, Paris
Detection, segmentation and tracking of objects in images
This talk will cover three contributions in the field of image processing, corresponding to my PhD and postdoctoral work. The first problem is the detection of very faint objects in astronomical hyperspectral images, in which each pixel contains resolved information over a portion of the light spectrum. In this context, I will present Generalized Likelihood Ratio tests that make it possible to specify the spatial, spectral and similarity information of the objects sought. I will then present a formulation of the detection problem as a special case of image segmentation with pairwise Markov fields. This framework yields an unsupervised Bayesian segmentation that is robust to the presence of extremely strong noise in the images. In a third part, I will discuss my current work, which addresses the analysis of cloud dynamics in image sequences. This problem is tackled through multi-object particle filtering on the one hand, and through the formulation of sparse optimization problems on the other.

ADAGIO Seminar.

14 March 2018, 14:00, Room Bob
Odyssée Merveille, ICube, Strasbourg
RORPO: a morphological method for the analysis of curvilinear structures. Applications to the filtering and segmentation of blood vessels.
The analysis of 3D curvilinear structures is a difficult problem in image analysis. Indeed, these structures are thin, easily corrupted by noise, and exhibit complex geometry. Over the years, many methods specifically dedicated to processing images containing curvilinear structures have emerged, covering applications in materials science, remote sensing and medical imaging. Despite this, the analysis of curvilinear structures remains a complex task. In this talk we will discuss the characterization of curvilinear structures for image analysis. We will first present a new method, called RORPO, from which two features can be computed. The first is an intensity feature, which preserves the intensity of curvilinear structures while reducing that of other structures. The second is a directional feature, which provides, at every point of an image, the direction of a potential curvilinear structure. Unlike most methods in the literature, RORPO is non-local and non-linear, and better suited to the intrinsic anisotropy of curvilinear structures. The method relies on a recent notion from mathematical morphology: path operators. RORPO can directly be used to filter images containing curvilinear structures, so as to specifically preserve them while reducing noise; but its two features can also serve as prior information on curvilinear structures, to be integrated into a more complex image analysis method. In a second part, we will therefore present a regularization term for variational segmentation that uses both RORPO features. The information provided by these features makes it possible to regularize curvilinear structures only along the direction of their main axis.
In this way, these structures are better preserved, and some curvilinear structures disconnected by noise can also be reconnected. Finally, 2D and 3D results of these methods will be presented on blood-vessel images from various modalities.

LORIA COLLOQUIUM, proposed by the department

16 January 2018, 13:30, Amphi Gilles Kahn
Grégoire Allaire, CMAP, École Polytechnique
Taking into account additive manufacturing constraints in topology optimization of structures
colloquium page


GAMBLE Seminar.

8 November 2017, 11:00, Room Jacques-Louis Lions
Vincent Despré, Gamble
Computing the Geometric Intersection Number of Curves.
The geometric intersection number of a curve C on a surface is the minimal number of self-intersections of any homotopic curve, i.e. of any curve obtained from C by continuous deformation. We give a quadratic algorithm for this problem. The algorithm is purely combinatorial, but the proof involves some basic hyperbolic geometry. The whole point is to describe a discrete structure that behaves similarly to the continuous hyperbolic case. I will first explain what a hyperbolic surface is and why this geometric structure is useful, and then show how we discretized it. (Joint work with Francis Lazarus, GIPSA-Lab)

MAGRIT Seminar.

2 October 2017, 9:30, Room C103
Fabien Pierre, Magrit
Video colorization: from the state of the art to industrial applications.
Image colorization is an extremely ill-posed problem, yet one of great interest to the entertainment industry. This double point of view makes it a very attractive topic. In this talk, I will present the state of the art and the methods developed by the speaker during his PhD. These rely on non-local and variational approaches. The functionals involved are non-smooth and non-convex, and required original minimization techniques. This led to an experimental software tool that couples user interaction with an exemplar-based approach, yielding an efficient, flexible and fast method. An extension to video is proposed, whose GPU implementation makes the variational approach interactive for the user. Nevertheless, it is not yet operational in the eyes of experts from the colorization industry; a few directions for meeting these needs will be proposed.

Workshop organized by GAMBLE.

September 25-26, 2017, Room Philippe Flajolet
10 talks, Workshop of the associated team "Astonishing"
More info

CARAMBA Seminar.

21 September 2017, 10:30, Room Jacques-Louis Lions
Laurent Imbert, LIRMM
Randomized Mixed-Radix Scalar Multiplication
Since Kocher's pioneering work on the subject in 1996, implementers of cryptographic algorithms have known that they must take great care to protect their implementations against side-channel attacks. These attacks target hardware as well as software implementations, and apply to both symmetric and asymmetric algorithms. After a quick overview of these attacks, I will present a new elliptic-curve scalar multiplication algorithm designed to resist most observation attacks, at an overhead lower than (or close to) that of classical countermeasures.
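The general idea of a randomized mixed-radix recoding can be sketched generically (an illustrative toy, not Imbert's algorithm; integers modulo p stand in for elliptic-curve points, so group addition is just modular addition): the scalar is rewritten with randomly chosen bases, so the sequence of group operations differs on every execution while the result is unchanged, which is what frustrates observation attacks.

```python
import random

def random_mixed_radix(k, bases=(2, 3)):
    """Recode scalar k with randomly chosen bases; digits least-significant first."""
    digits = []
    while k > 0:
        b = random.choice(bases)
        digits.append((b, k % b))
        k //= b
    return digits

def scalar_mult(k, P, add, zero):
    """Compute k*P using only the group's addition, Horner-style over a
    fresh random recoding: the operation trace changes from run to run."""
    R = zero
    for b, d in reversed(random_mixed_radix(k)):
        step = R
        for _ in range(b - 1):      # R <- b*R  (b-1 additions of the old R)
            R = add(R, step)
        for _ in range(d):          # R <- R + d*P
            R = add(R, P)
    return R

# Toy group: integers modulo 1009 under addition, so k*P should equal k*P mod p.
p = 1009
add = lambda x, y: (x + y) % p
assert scalar_mult(123, 5, add, 0) == 123 * 5 % p
```

A hardened implementation would of course also need constant-time group operations; the recoding only randomizes the high-level operation sequence.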

MAGRIT Seminar.

6 June 2017, 14:00, Room C103
Thomas Waite, Harvard Biorobotics Lab
Cosserat Rods for Modeling Robotic Catheter Systems
Tendon-driven robotic catheters are capable of precise execution of minimally invasive cardiac procedures including ablations and imaging. However, these procedures require accurate modeling of not only the catheter and tendons but also their interactions with surrounding tissue and vasculature. For this reason, mechanically-derived models provide a clear advantage over geometric models, given the ease with which mechanical models handle contact forces and friction. As a solution, we present a fully-mechanical model of a tendon-driven robotic catheter system based on Cosserat rods and integrated with a stable, implicit scheme. We first validate the physical accuracy of the Cosserat rod as a model for a simple catheter centerline against an analytical model for large deformations and experimental data. We then expand the system by adding a second Cosserat rod to model a single tendon in addition to the catheter, and define the constraints of the tendon-catheter system with penalty forces. We then validate the resulting tendon-catheter system against experimental data to prove its physical accuracy. This model represents a new contribution to the field of robotic catheter modeling in which both the tendons and catheter are modeled by fully-mechanical Cosserat rods and fully validated against experimental data.
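As a much-simplified planar analogue of the rod models above (assumed parameters, not the authors' Cosserat implementation), a cantilever centerline with a transverse tip load obeys the elastica equation EI·θ'' = −F·cos θ with θ(0) = 0 and θ'(L) = 0, which can be solved by shooting on the base curvature:

```python
import math

def tip_angle(F, EI=1.0, L=1.0, n=2000):
    """Cantilever elastica EI*theta'' = -F*cos(theta), theta(0) = 0,
    theta'(L) = 0 (transverse tip load), solved by shooting on theta'(0)."""
    def integrate(kappa0):
        h = L / n
        th, dth = 0.0, kappa0
        for _ in range(n):                 # midpoint (RK2) along arc length
            th_mid = th + 0.5 * h * dth
            dth_mid = dth - 0.5 * h * (F / EI) * math.cos(th)
            th += h * dth_mid
            dth -= h * (F / EI) * math.cos(th_mid)
        return th, dth                     # tip angle, tip curvature
    lo, hi = 0.0, 2.0 * F * L / EI + 1.0   # residual dth(L): <0 at lo, >0 at hi
    for _ in range(60):                    # bisection on the base curvature
        mid = 0.5 * (lo + hi)
        if integrate(mid)[1] < 0.0:
            lo = mid
        else:
            hi = mid
    return integrate(0.5 * (lo + hi))[0]
```

For small loads the tip angle approaches the linear-beam value F·L²/(2·EI), which serves as a quick check of the nonlinear solver.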


May 31st 2017, Room Philippe Flajolet (C005)

MAGRIT Seminar.

May 30th 2017, 14:00 Room C103
Douglas Perrin, Harvard Biorobotics Lab
Machine Learning and Image Processing: from the Lab to Clinical Use
In this presentation I will talk about our work in the Cardiac Surgery Department at Boston Children's Hospital on translating recent results in machine learning and image processing from the lab into clinical use. Our approach has been to build collaborative teams consisting of medical fellows, surgical and cardiology attending physicians, industrial collaborators, and faculty from the Harvard School of Engineering and Applied Sciences along with their Ph.D. students and post-docs. I will be talking about some of the benefits and pitfalls of this approach, specifically the challenges surrounding getting data within a hospital environment as well as lessons learned on project integration across clinical and engineering expertise. I will go into detail on one project where we have been using machine learning to inspect echo-cardiograms for congenital heart defects. Ultrasound images are quite different from natural images, to which artificial neural networks have already been successfully applied for classification tasks. This imaging modality has a more challenging noise profile than photographic images and may not have features directly linked to the specific class (i.e. the pathology). That said, we have had success extracting intrinsic characteristics (view) from these types of images, and our classification results are more accurate than simple random chance for several of the pathologies of interest. We believe this work represents an important step in developing automated screening tools for these diseases. For birthing centers without a pediatric cardiologist on site, these methods are an effective way to transfer expertise from larger centers, with a direct impact on the number of newborns with these pathologies that can be detected and treated.

ALICE Seminar.

May 19th 2017, 14:00 Room B013 (Bob)
Jeanne Pellerin, Université de Louvain
How Many Ways are there to Subdivide a Hexahedron into Tetrahedra?
Indirect hex-dominant meshing methods rely on combining the elements of a tetrahedral mesh into hexahedra. Before combining tetrahedra into hexahedra, we should be able to enumerate the different subdivision patterns of a hexahedron into tetrahedra. We will show that when distorted, yet still valid, finite-element hexahedra are taken into account, the number of possible subdivision patterns is larger than previously thought. We present an algorithm that does not rely on any predefined pattern and builds the largest set of potential hexahedra from an input tetrahedral mesh.
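One classical pattern counted by such enumerations can be checked numerically: the standard Kuhn subdivision splits the unit cube into 6 tetrahedra sharing the main diagonal from (0,0,0) to (1,1,1), one per ordering of the three coordinates (an illustration of a single pattern, unrelated to the talk's enumeration algorithm).

```python
from itertools import permutations

def tet_volume(a, b, c, d):
    """Volume of a tetrahedron: |det of edge vectors| / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

def kuhn_tets():
    """The 6 tetrahedra of the Kuhn subdivision of the unit cube:
    each walks from (0,0,0) to (1,1,1), raising one coordinate at a time."""
    tets = []
    for p in permutations(range(3)):
        a = [0, 0, 0]
        b = a[:]; b[p[0]] = 1
        c = b[:]; c[p[1]] = 1
        tets.append((tuple(a), tuple(b), tuple(c), (1, 1, 1)))
    return tets
```

Each of the 6 tetrahedra has volume 1/6, and together they fill the unit cube exactly.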


May 11th 2017, 14:00 Room Philippe Flajolet (C005)
Marie-Françoise Roy, Université de Rennes 1
Symmetric multivariate Hermite interpolation, Sylvester double sums and subresultants.
Sylvester double sums are expressions in the roots of two polynomials that provide an alternative approach to the theory of subresultants, usually defined via the Sylvester matrix.
In the case of multiple roots, Sylvester's definition yields an indeterminate value 0/0, and the question is how to resolve this indeterminacy. The key is an extension of Hermite interpolation to symmetric multivariate polynomials, together with generalized Vandermonde determinants.
Joint work with Aviva Szpirglas.

MAGRIT Seminar.

April 7th 2017, 11:00 Room C103
Fabien Pierre, Technische Universität Kaiserslautern
Color-related problems
This talk deals with color-related problems. In particular, we are interested in issues common to image colorization, video colorization, and contrast enhancement. If we consider an image as composed of two complementary pieces of information, one achromatic (without color) and the other chromatic (with color), the applications studied consist in processing one of them while preserving the other. In colorization, the difficulty is to compute a color image while imposing its gray level. Contrast enhancement aims at modifying the intensity of an image while preserving its hue. These shared issues led me to formally study the geometry of the RGB space. We proved that the classical color spaces used in the literature to solve these kinds of problems lead to errors. An algorithm, called luminance-hue specification, which computes a color with a given hue and luminance, is described in this talk. An extension of this method to a variational framework has been proposed. This model has been successfully used to enhance color images, using known assumptions about the human visual system. State-of-the-art image colorization methods fall into two categories. The first gathers those that diffuse color points placed by the user to obtain a colorized image (manual colorization). The second consists of those that use a reference color image, or a database of color images, and transfer the colors of the reference onto the grayscale image (exemplar-based colorization). Both types of methods have their advantages and drawbacks. In this talk, we present a variational model for exemplar-based colorization, which is extended into a method unifying manual and exemplar-based colorization. Finally, we describe variational models that colorize videos while allowing interaction with the user.
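A luminance-hue specification step can be sketched as follows (a minimal illustration, not the speaker's algorithm): since the Rec. 601 luminance is linear in RGB, a color with a prescribed hue and luminance can be obtained by moving the fully saturated hue toward black or toward white, both of which preserve hue.

```python
import colorsys

def luminance(rgb):
    """Rec. 601 luma: a linear combination of R, G, B with weights summing to 1."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def specify_luminance_hue(hue, y_target):
    """Return an RGB color with the requested hue (in [0,1)) and luminance,
    obtained by scaling the pure hue toward black or blending it toward white."""
    base = colorsys.hsv_to_rgb(hue, 1.0, 1.0)   # fully saturated hue
    y0 = luminance(base)
    if y_target <= y0:                          # darken: scale toward black
        t = y_target / y0
        return tuple(t * c for c in base)
    t = (y_target - y0) / (1.0 - y0)            # lighten: blend toward white
    return tuple(c + t * (1.0 - c) for c in base)
```

Because luminance is linear and both operations are affine maps of RGB, the target luminance is matched exactly and the hue is unchanged.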

MAGRIT Seminar.

April 6th 2017, 14:00 Room C103
Ronan Sicre, INRIA-Rennes
Part-based methods
Ronan will present his work in the fields of computer vision and pattern recognition, with a focus on part-based methods. Parts are mid-level features that are learned from data. They are generative and distinctive, meaning that they occur often in some part of the training data and rarely in the rest. Once parts are learned, they are aligned to produce global representations and perform classification. These representations outperform global CNN descriptions. The most recent work makes it possible to learn parts in an unsupervised fashion. In this line of work, parts are applied to image classification and retrieval, but they could be applied to other types of data.

MAGRIT Seminar.

March 31st 2017, 14:00 Room C103
Bruno Sciolla, INSA-Lyon
Improving shape learning for segmentation with deep neural networks
Convolutional neural networks are very effective at extracting local and large-scale emergent features from spatio-temporal data. We are particularly interested in neural networks dedicated to segmentation, i.e. producing label maps. Current networks produce segmentations that lack global coherence: this is because the convolutional architecture gives access to the features in the neighborhood of a given point, but not to the predicted labels. We propose a network architecture, "Refine", in which the network has access to its own inferred labels, which enables better shape learning. This method can be seen as an improvement of deep supervised learning dedicated to segmentation. We illustrate it on prostate segmentation in 3D MRI data.
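The label-feedback idea can be sketched in a few lines (illustrative numpy with random, untrained weights; not the authors' network): each step receives the input features concatenated with its own previous soft labels, so the prediction can depend on the labels as well as the features.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, kernels):
    """'Same' 3x3 convolution of a (C, H, W) tensor with (K, C, 3, 3) kernels."""
    c, h, w = x.shape
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((kernels.shape[0], h, w))
    for k in range(kernels.shape[0]):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(pad[:, i:i + 3, j:j + 3] * kernels[k])
    return out

def refine_segment(features, n_classes=2, n_steps=3):
    """Iterated segmentation: previous soft labels are concatenated to the
    input features, so each step can react to its own predictions."""
    c = features.shape[0]
    kernels = rng.normal(size=(n_classes, c + n_classes, 3, 3)) * 0.1
    labels = np.full((n_classes,) + features.shape[1:], 1.0 / n_classes)
    for _ in range(n_steps):
        logits = conv3x3(np.concatenate([features, labels]), kernels)
        e = np.exp(logits - logits.max(axis=0))
        labels = e / e.sum(axis=0)          # soft-max over the class axis
    return labels
```

The output is a valid soft label map (one probability distribution per pixel); in a real system the kernels would of course be trained.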

ALICE Seminar.

March 24th 2017, Room B011 (Alice)
Aldo Gonzalez Lorenzo, Université de Marseille
Algorithmic homology applied to discrete objects
Homology theory formalizes the notion of hole in a space. For a subset of Euclidean space, one defines a sequence of homology groups, whose ranks are interpreted as the numbers of holes of each dimension. Thus β0, the rank of the zero-dimensional homology group, is the number of connected components, β1 is the number of tunnels or handles, and β2 is the number of cavities. These groups are computable when the space is described combinatorially, as is the case for simplicial or cubical complexes. From a discrete object (a set of pixels, voxels, or their higher-dimensional analogue) we can build a cubical complex and thus compute its homology groups.
I will present three approaches related to homology computation on discrete objects. First, I will introduce the homological discrete vector field, a combinatorial structure generalizing discrete gradient vector fields, which makes it possible to compute homology groups. This notion reveals the relation between several existing methods for computing homology and also brings out some subtle associated notions. I will then present a linear-time algorithm for computing the Betti numbers of a 3D cubical complex, which can be used for binary volumes. Finally, I will present two measures (thickness and breadth) associated with the holes of a discrete object, which yield a topological and geometric signature more informative than the plain Betti numbers. This approach also provides some heuristics to locate holes, to obtain minimal homology or cohomology generators, and to open or close holes.
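For a 2D binary image, the Betti numbers of its cubical complex can be computed directly from the Euler characteristic χ = V − E + F = β0 − β1, with β0 found on the 1-skeleton (a small sketch of the general idea, not the speaker's algorithms):

```python
def betti_2d(img):
    """Betti numbers (b0, b1) of the cubical complex of a binary image:
    each pixel contributes a unit square plus its edges and vertices."""
    h, w = len(img), len(img[0])
    faces = {(i, j) for i in range(h) for j in range(w) if img[i][j]}
    verts, edges = set(), set()
    for (i, j) in faces:
        verts.update({(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)})
        edges.update({
            frozenset({(i, j), (i, j + 1)}),
            frozenset({(i + 1, j), (i + 1, j + 1)}),
            frozenset({(i, j), (i + 1, j)}),
            frozenset({(i, j + 1), (i + 1, j + 1)}),
        })
    chi = len(verts) - len(edges) + len(faces)   # Euler characteristic
    # b0 = connected components of the 1-skeleton, found by DFS.
    adj = {v: [] for v in verts}
    for e in edges:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    seen, b0 = set(), 0
    for v in verts:
        if v in seen:
            continue
        b0 += 1
        stack = [v]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u])
    return b0, b0 - chi                          # in 2D, chi = b0 - b1
```

A 3×3 block with its center removed is a ring: one component and one tunnel.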

ALICE Seminar.

March 8th 2017, 14:30 Room B013
Mélina Skouras, MIT
Design and Fabrication of Deformable Objects

Deformable objects have a plethora of applications: they can be used for entertainment, advertisement, engineering, robotics, medical applications and many more. However, the design of custom deformable objects is a difficult task. The designer must foresee and invert the effects of external forces on the behavior of the object in order to make the right design decisions. In this talk, I will present different approaches based on physics-based simulation and inverse modeling techniques that alleviate these difficulties and allow the designer to easily create custom deformable objects by automating some of the most tedious aspects of the design process. The framework that we introduce is tailored to the design of objects such as inflatable membranes and actuated characters, as well as to the topology optimization of high-resolution functional objects.

GAMBLE Seminar.

February 22nd 2017, Room A006
Adrien Poteaux, LIFL
Singularities of a plane algebraic curve: improving the Newton-Puiseux algorithm.

Puiseux series (generalizations of Taylor series above critical points) are a fundamental object in the theory of plane algebraic curves [4]. Nevertheless, their computation is not easy; existing algorithms (usually the Newton-Puiseux algorithm and its variants [1]) are purely symbolic algorithms with high complexity. Running the computations numerically provides Taylor series with a small convergence radius; in particular, this gives no information on the structure of the Puiseux series (which is one of the main interests of their computation).

More precisely, considering a bivariate polynomial F ∈ K[X, Y] with K a number field, and C = {(x₀, y₀) ∈ ℂ² | F(x₀, y₀) = 0} the associated plane algebraic curve, computing the Puiseux series with the Newton-Puiseux algorithm generates field extensions of high degree (up to D^3 if D is the total degree of F); more importantly, there is significant coefficient growth. The binary complexity estimate of P.G. Walsh in O(D^(36+ε)) illustrates this problem [5], and practical experiments confirm this bad behavior. To overcome these difficulties, the following strategy was developed in [2]:

  1. Find the structure of the Puiseux series modulo a well chosen prime number p
  2. Use this structure to compute a numerical approximation of the coefficients of the Puiseux series.

It remains to deal with the arithmetic complexity of the algorithm. Duval proved an O(D^8) complexity result [1]. This was improved to O(D^5) in [2] by truncating powers of X during the computation and introducing fast multiplication. More recently, we showed in [3] how to reduce the total number of recursive calls of the algorithm from O(D^2) to O(D), leading to a complexity in O(D^4).

This talk will first describe the context and the rational Newton-Puiseux algorithm of [1]. We will also briefly describe the symbolic-numeric strategy of [2]. Finally, we will focus on the arithmetic complexity, presenting the results of [2, 3] and a new divide-and-conquer algorithm that reduces the complexity to O(D^3).
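The first step of the Newton-Puiseux algorithm is reading candidate leading exponents of the Puiseux series off the lower Newton polygon of F. A small self-contained sketch of that step alone (the symbolic remainder of the algorithm is omitted):

```python
from fractions import Fraction

def puiseux_exponents(support):
    """Candidate leading exponents mu of y(x) = c*x^mu + ..., read off the
    lower Newton polygon of the support {(i, j) : c_ij X^i Y^j != 0}."""
    # Keep, for each Y-exponent j, only the minimal X-exponent i.
    best = {}
    for i, j in support:
        if j not in best or i < best[j]:
            best[j] = i
    pts = sorted(best.items())                 # (j, i) pairs, sorted by j
    # Lower convex hull in the (j, i) plane (monotone-chain sweep).
    hull = []
    for j, i in pts:
        while len(hull) >= 2:
            (j1, i1), (j2, i2) = hull[-2], hull[-1]
            if (j2 - j1) * (i - i1) - (i2 - i1) * (j - j1) <= 0:
                hull.pop()                     # drop non-left turns
            else:
                break
        hull.append((j, i))
    # Each hull edge of slope -mu gives a candidate leading exponent mu.
    return [Fraction(i1 - i2, j2 - j1)
            for (j1, i1), (j2, i2) in zip(hull, hull[1:])]
```

For the cusp F = Y² − X³, the support {(3, 0), (0, 2)} yields the single exponent 3/2, matching the branch y = x^(3/2).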

This work began during my PhD, under the supervision of Marc Rybowicz, who also contributed to [3]. The new divide-and-conquer algorithm is being written up in collaboration with Martin Weimann.

References :

[1] D. Duval. Rational Puiseux Expansions. Compositio Mathematica, 70:119–154, 1989.

[2] Adrien Poteaux. Calcul de développements de Puiseux et application au calcul du groupe de monodromie d'une courbe algébrique plane. PhD thesis, Université de Limoges, 2008.

[3] A. Poteaux and M. Rybowicz. Improving Complexity Bounds for the Computation of Puiseux Series over Finite Fields. ISSAC 2015, to appear.

[4] R. J. Walker. Algebraic Curves. Springer-Verlag, 1950.

[5] P. G. Walsh. A Polynomial-time Complexity Bound for the Computation of the Singular Part of an Algebraic Function. Mathematics of Computation, 69:1167–1182, 2000.

Workshop organized by ALICE.

February 2nd 2017, Amphi
9 talks, Workshop on Shape Synthesis for Modeling and Fabrication
More info


VEGAS Seminar.

November 15th 2016, Room A006
Mickaël Buchet, Tohoku University
Interpreting and using topological data analysis
Topological data analysis and the use of persistence have produced interesting discriminative results, notably for analysis in materials science. Current techniques can provide signatures and some statistics; from an applied point of view, however, additional information is desired. A better understanding of the structure is required when one seeks to optimize the properties of materials and to explain the physical origin of the topological structures.
In this talk, I will present some current results obtained in materials science and a new approach to help understand the physical phenomena. I will also discuss related problems directly tied to the structure and the combinatorics, random or not, of geometric structures.
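The persistence signatures mentioned above can be made concrete in the simplest setting: the 0-dimensional persistence of the sublevel sets of a sampled 1D function, computed with union-find and the elder rule (the younger of two merging components dies). This is an illustrative sketch, not the speaker's material.

```python
def persistence_0d(values):
    """0-dimensional persistence of the sublevel sets of a sampled 1D
    function: (birth, death) pairs of connected components; the component
    of the global minimum never dies. Assumes distinct values."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    pairs, active = [], [False] * n
    for i in sorted(range(n), key=lambda k: values[k]):  # filtration order
        active[i] = True
        for j in (i - 1, i + 1):
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if values[ri] < values[rj]:
                    ri, rj = rj, ri          # rj: elder root (lower minimum)
                if values[ri] < values[i]:   # elder rule: younger one dies
                    pairs.append((values[ri], values[i]))
                parent[ri] = rj              # the root always holds the minimum
    roots = {find(i) for i in range(n)}
    pairs.extend((values[r], float("inf")) for r in roots)
    return sorted(pairs)
```

Each finite pair records a local minimum (birth) merged away at a saddle value (death); the lengths death − birth are the persistences that such signatures summarize.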

This page is maintained by Olivier.Devillers(at)