Research
I work on improving the performance of HPC
runtime systems, especially distributed runtime systems.
Postdoc (2022-2023)
I did a postdoc in the
Research Group Parallel Computing at the Technical University of Vienna,
where I worked with Jesper Larsson Träff,
Sascha Hunold and
Ioannis Vardas on MPI
process mapping and on estimating the potential performance improvement
of MPI applications.
PhD thesis (2019-2022)
I did my PhD thesis in the TADaaM team in
Inria Bordeaux Sud-Ouest,
under the supervision of Alexandre Denis
and Emmanuel Jeannot.
I worked on interactions between task-based runtime systems and communication libraries for
High Performance Computing, especially StarPU
and NewMadeleine.
I defended my thesis On
the Interactions between HPC Task-based Runtime Systems and Communication Libraries
on November 29, 2022.
I developed two main directions in my thesis:
- Dynamic broadcasts
To optimize broadcasts appearing in the task graphs of
StarPU applications, we developed what we called dynamic
broadcasts. Functions such as
MPI_Bcast
cannot be used within StarPU, mainly because recipient processes do
not know whether data comes from a broadcast or from a regular
point-to-point request, nor do they know which
other nodes are involved in the broadcast. Thus, only the original
sender node has all the information needed to call
MPI_Bcast
. Accurately detecting broadcasts in the task
graph is not straightforward either.
Our dynamic broadcasts overcome these constraints. The
broadcast communication pattern required by the task-based
algorithm is detected automatically; the broadcasting
algorithm then relies on active messages and source routing, so that
participating nodes do not need to know each other and do not
need to synchronize. A receiver gets the data the same way as a
point-to-point communication, without having to know
it arrived through a broadcast.
- Memory contention between computations and communications
To amortize the cost of MPI communications, distributed
parallel runtime systems usually overlap network
communications with computations, in the hope of improving
global application performance. When using this technique,
computations and communications run at the same time.
We studied the possible interferences between computations and
communications when they are executed in parallel. The main
interference that can occur is memory contention between data
used by computations and data used by network communications.
In some cases, this contention can cause severe slowdowns of
both computations and communications.
To predict the memory bandwidth available to computations and to
communications when they are executed side by side, we proposed
a model taking data locality and contention into account.
Building the model helped us better understand where the
bottlenecks of the memory system are located and which strategies
the memory system applies in case of contention.
Publications
A complete list of my publications is available on
HAL.
The following list contains only my major publications;
see also my DBLP page, my
Google Scholar page
or my ORCiD page.
Authors are listed in alphabetical order.
-
Using Mixed-Radix Decomposition to Enumerate Computational Resources of Deeply Hierarchical Architectures
Sascha Hunold, Philippe Swartvagher, Jesper Larsson Träff, Ioannis Vardas
Exa-MPI, Workshops of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC), 2023.
-
Tracing task-based runtime systems: feedbacks from the StarPU case
Alexandre Denis, Emmanuel Jeannot, Philippe Swartvagher, Samuel Thibault
Concurrency and Computation: Practice and Experience, 2023.
-
Predicting Performance of Communications and Computations under Memory Contention in Distributed HPC Systems
Alexandre Denis, Emmanuel Jeannot, Philippe Swartvagher
International Journal of Networking and Computing, 2023,
Special Issue on Workshop on Advances in Parallel and
Distributed Computational Models 2022.
-
Modeling Memory Contention between Communications and Computations in Distributed HPC Systems
Alexandre Denis, Emmanuel Jeannot, Philippe Swartvagher
IPDPS 2022 - IEEE International Parallel and Distributed
Processing Symposium Workshops (24th Workshop on
Advances in Parallel and Distributed Computational Models),
May 2022, Lyon / Virtual, France.
-
Interferences between Communications and Computations in Distributed HPC Systems
Alexandre Denis, Emmanuel Jeannot, Philippe Swartvagher
ICPP 2021 - 50th International Conference on
Parallel Processing, Aug 2021, Chicago / Virtual, United
States.
-
Using Dynamic Broadcasts to improve Task-Based Runtime Performances
Alexandre Denis, Emmanuel Jeannot, Philippe Swartvagher, Samuel Thibault
Euro-Par 2020 - 26th International European
Conference on Parallel and Distributed Computing,
Aug 2020, Warsaw / Virtual, Poland.
Other activities