
Philippe SWARTVAGHER

Post-doctoral fellow in the Research Group Parallel Computing at TU Wien



Research

I work on improving the performance of HPC runtime systems, especially distributed runtime systems.

PhD thesis

I did my PhD thesis in the TADaaM team at Inria Bordeaux Sud-Ouest, under the supervision of Alexandre Denis and Emmanuel Jeannot.
I worked on interactions between task-based runtime systems and communication libraries for High Performance Computing, especially StarPU and NewMadeleine.
I explored two main directions in my thesis:

Dynamic broadcasts

To be able to optimize broadcasts appearing in task graphs of StarPU applications, we developed what we called dynamic broadcasts. Functions such as MPI_Bcast cannot be used within StarPU, mainly because recipient processes do not know whether data comes from a broadcast or a regular point-to-point request, and likewise they do not know which other nodes are involved in the broadcast. Thus, only the original sender node has all the information needed to call MPI_Bcast. Accurately detecting broadcasts in the task graph is not straightforward either.
Our dynamic broadcasts overcome these constraints. The broadcast communication pattern required by the task-based algorithm is detected automatically; the broadcasting algorithm then relies on active messages and source routing, so that participating nodes do not need to know each other and do not need to synchronize. The receiver obtains the data the same way as it receives a point-to-point communication, without having to know that it arrived through a broadcast.
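To illustrate the constraint that motivated this work, here is a minimal MPI sketch (not StarPU or NewMadeleine code; buffer names and sizes are arbitrary assumptions): MPI_Bcast requires every participating rank to call it with the same root and communicator, whereas a task-based runtime only posts ordinary point-to-point receives on the consumer side.

```c
/* Hedged sketch: why MPI_Bcast does not fit the task-graph setting.
 * Every rank must call MPI_Bcast with the same root and communicator,
 * i.e. each receiver must already know it takes part in a broadcast. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double tile[1024] = { 0 };
    if (rank == 0)
        tile[0] = 42.0; /* data produced by a task on rank 0 */

    /* Collective broadcast: works only if *all* ranks know that this
     * piece of data is broadcast from rank 0. */
    MPI_Bcast(tile, 1024, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* In a task-based runtime, a receiver instead posts an ordinary
     * point-to-point receive, without knowing whether the data comes
     * through a broadcast tree or directly from the producer, e.g.:
     *   MPI_Recv(tile, 1024, MPI_DOUBLE, MPI_ANY_SOURCE, tag,
     *            MPI_COMM_WORLD, &status);
     */

    MPI_Finalize();
    return 0;
}
```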

Memory contention between computations and communications

To amortize the cost of MPI communications, distributed parallel runtime systems usually overlap network communications with computations, in the hope of improving overall application performance. With this technique, computations and communications run at the same time.
We studied the possible interferences between computations and communications when they are executed in parallel. The main interference that can occur is memory contention between data used by computations and data used by network communications. In some cases, this contention can cause severe slowdown of both computations and communications.
To predict the memory bandwidth available to computations and to communications when they are executed side by side, we proposed a model taking data locality and contention into account. Building the model helped us better understand where the bottlenecks in the memory system are located and how the memory system arbitrates accesses in case of contention.
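To give a concrete idea of the phenomenon, here is a minimal micro-benchmark sketch (not the actual benchmark or model from the thesis; buffer sizes, message size and iteration count are arbitrary assumptions) that streams through memory on the CPU while an MPI transfer is in flight. Comparing the reported bandwidth with and without the communication part exposes the contention.

```c
/* Hedged sketch: measure the bandwidth of a memory-bound copy kernel while
 * an MPI message is in flight. Run with exactly two MPI ranks. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N        (64UL * 1024 * 1024)   /* 64 Mi doubles, assumed larger than the caches */
#define MSG_SIZE (8 * 1024 * 1024)      /* 8 MiB per message */
#define ITERS    10

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    double *a   = malloc(N * sizeof(double));
    double *b   = malloc(N * sizeof(double));
    char   *msg = malloc(MSG_SIZE);
    for (size_t i = 0; i < N; i++) a[i] = 1.0;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int it = 0; it < ITERS; it++) {
        /* Keep the network (and hence the memory system) busy... */
        MPI_Request req;
        if (rank == 0)
            MPI_Isend(msg, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
        else
            MPI_Irecv(msg, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req);

        /* ...while the CPU performs a memory-bound copy kernel. */
        for (size_t i = 0; i < N; i++)
            b[i] = a[i];

        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("copy bandwidth with concurrent communication: %.2f GB/s\n",
               (double)ITERS * 2 * N * sizeof(double) / (t1 - t0) / 1e9);

    free(a); free(b); free(msg);
    MPI_Finalize();
    return 0;
}
```

Running the same copy loop alone, then alongside the communication as above, gives a first-order measurement of how much bandwidth each side loses to contention.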


I defended my thesis, On the Interactions between HPC Task-based Runtime Systems and Communication Libraries, on November 29, 2022. The manuscript is available, as well as the slides.


Publications

ORCiD: 0000-0003-3786-7364

Authors are listed in alphabetical order.

A complete list of my publications is available on HAL.

Presentations

About my work
Miscellaneous

Teaching

Other activities



Personal



Contact

swartvagher [at] par [dot] tuwien [dot] ac [dot] at
I have a PGP key: 0x6EC3C10693C090C3 (942A 2C17 4547 99B5 62E3 4127 6EC3 C106 93C0 90C3).