Title: "Asynchronism: Continuous and Discrete Contexts"
- In the domain of scientific computing, the main objectives are to perform large amounts of computation quickly and to treat large-size problems. To reach these goals, parallel computing was developed, first with dedicated machines such as vector computers and shared-memory machines. Then, as production costs decreased and machine power increased, parallelism shifted to local clusters. Finally, the growth of worldwide network throughput has made it possible, in recent years, to envisage the use of clusters of clusters, that is, clusters of geographically distant clusters interconnected via the Internet.
- This extension of the concept of a parallel machine theoretically makes it possible to gather all the machines in the world that are connected to the Internet. It has the double advantage of increasing both the computational power and the memory capacity, in RAM and in mass storage, which allows the treatment of large-size problems.
- However, such meta-clusters cannot be programmed in exactly the same way as classical machines or even local clusters. Indeed, in this new architecture, the heterogeneity of the machines and of the communication links imposes strong constraints on achieving good performance. In particular, synchronizing the communications between processors generally induces a major loss of performance.
- Thus, it is essential to propose a new class of algorithms that allows us to use these meta-clusters as efficiently as possible. For all these reasons, after having worked on synchronous parallel algorithms during my PhD thesis, I decided to focus my work on asynchronism: in the continuous context on the one hand, with parallel iterative algorithms, and in the discrete context on the other hand, with finite-state discrete-time systems.
- Asynchronism makes it possible to solve a large number of scientific problems efficiently on meta-clusters by providing the flexibility needed to adapt to the different speeds of the processors and of the communication links between them. However, to ensure stable behavior and optimal efficiency, several aspects must be studied and/or adapted to the context of use. In the continuous case, the focus is on the convergence conditions of the iterative process, the detection of that convergence, the halting procedure, and load balancing. In the discrete case, the focus is also on the convergence of the system, by trying to precisely identify the initial states leading to a given fixed point, as well as on the theoretical study of asynchronous behavior and on mixing synchronism and asynchronism to design new systems with a prescribed behavior.
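The continuous case described above concerns iterative processes that converge to a fixed point even when components are updated without synchronization (so-called chaotic or asynchronous relaxation). The sketch below, which is an illustrative assumption and not taken from the work itself, simulates this on a small contracting linear fixed-point problem x = Mx + c: components are updated one at a time in an arbitrary order, modelling processors that fire independently, and the iteration still converges to the unique fixed point. The matrix, vector, and step count are hypothetical values chosen for the demonstration.

```python
import random

# Illustrative fixed-point problem x = M x + c, with max row sum of |M|
# below 1, so the map is a contraction and asynchronous (chaotic)
# component-wise updates converge to the same fixed point as the
# synchronous iteration.  Values are hypothetical, for demonstration only.
M = [[0.0, 0.2, 0.1],
     [0.1, 0.0, 0.3],
     [0.2, 0.1, 0.0]]
c = [1.0, 2.0, 3.0]

def update(x, i):
    """Recompute component i from the current (possibly stale) vector."""
    return c[i] + sum(M[i][j] * x[j] for j in range(len(x)))

def async_iterate(steps=5000, seed=0):
    """Chaotic relaxation: at each step an arbitrary component is
    updated, modelling processors that do not synchronize.  As long as
    every component keeps being updated, the contraction property
    guarantees convergence."""
    rng = random.Random(seed)
    x = [0.0] * len(c)
    for _ in range(steps):
        i = rng.randrange(len(x))   # an arbitrary "processor" fires
        x[i] = update(x, i)
    return x

x = async_iterate()
# Residual of the fixed-point equation: small residual means x is
# (numerically) a fixed point of the map.
residual = max(abs(x[i] - update(x, i)) for i in range(len(x)))
print(x, residual)
```

A simple residual check of this kind also hints at why convergence detection is a genuine issue in the distributed setting: on a real meta-cluster no processor sees the whole vector at once, so deciding that the global residual is small requires a dedicated detection and halting procedure.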
Keywords: Dynamical systems, asynchronism, parallelism, scientific computing.
Link to HAL (with PDF file)