WELCOME
Current position: Associate Professor at ISAE-Supaero (DAEP)
E-mail: michael.bauerheim@isae-supaero.fr
Phone: +33 5 61 33 80 85
Location: Building 38, office 223
Previous positions:
2015-2016 - Post-doctoral position at LMFA (France) and ETHZ (Switzerland):
- Aeroacoustic broad-band and tonal noise of rotor-stator interactions. LES and LNSE on the non-linear saturation of aeroacoustic sources.
2014-2015 - Post-doctoral position at IMFT (France):
- DNS and experiments of flame stabilization on a rotating cylinder.
2014 - Visiting scholar at Stanford University (USA):
- Uncertainty Quantification applied to symmetry breaking in thermoacoustics.
2012-2014 - PhD in thermo-acoustics at CERFACS and SNECMA (France):
- Theory and LES of thermoacoustic instabilities in annular combustion chambers.
2011 - Visiting scholar at Georgia Tech (USA):
- LES for combustion instabilities in methane-oxygen rocket engines.
RESEARCH ACTIVITIES
High Performance Learning for Aeronautical Flow Physics
Aeroacoustics
OPPORTUNITIES 2018-2019
PostDoc: Development of innovative AI approaches for fluid-structure interactions
Artificial Intelligence (AI) has recently emerged in many engineering fields as a new approach for handling complex systems and elaborating physical models. Deep Learning, based on the training of large neural networks, is one such method and has shown outstanding results. In fluid mechanics, breakthroughs in numerical methods can be expected from using such techniques to develop complex physical models or to accelerate current numerical solvers. Yet the small number of studies dedicated to AI for fluid mechanics suggests that progress is still required to make these methods mature and reliable. The Department of Aerodynamics, Energetics and Propulsion (DAEP) at ISAE-Supaero is currently applying deep learning techniques to several problems encountered in fluid mechanics, involving data from experiments or numerical simulations. This postdoc position will complement the current team to apply AI to fluid-structure interaction (FSI) problems. Such phenomena emerge when flow oscillations couple with the vibration of a solid surface, for example the airfoil itself. Numerical simulations of such FSI are usually very expensive. In this context, after a learning phase, deep learning is expected to provide flow predictions without additional computational effort. This innovative approach will be developed during the postdoc, especially for transonic flows, which are still challenging to predict. The selected candidate will also contribute to the team's other deep learning projects, through collaborations with AI experts from Jolibrain.
Internship: Pressure reconstruction from PIV measurements using Artificial Intelligence: application to nano drones' flapping wings
The Department of Aerodynamics, Energetics and Propulsion (DAEP) at ISAE-Supaero is currently studying nano drones with flapping wings (Fig. 1, left). A crucial step in their design is the assessment of the aerodynamic forces (lift and drag) generated by this tiny drone when flapping its wings. However, while computational fluid dynamics (CFD) gives access to all local data (pressure, velocity, density, etc.), experiments can easily provide only the velocity field, especially for such small devices. Reconstructing the pressure field from PIV velocity measurements is therefore required before reconstructing the aerodynamic forces. Classical methods exist, but have been found very sensitive to noise. Following a previous project (Fig. 1, right), an innovative approach based on Artificial Intelligence (AI) is proposed: using numerical simulations, which provide both the velocity and pressure fields, a deep neural network is trained to reproduce the latter from the former. After the learning phase, the network is able to reconstruct an unknown pressure field from noisy PIV velocity measurements with high robustness to noise. This project will extend previous work on this topic (cylinder configurations, Fig. 1, right) to more complex configurations and flows. The target application is the reconstruction of the unsteady lift and drag forces from an actual experimental campaign.
Internship: Accelerating Lattice Boltzmann Methods using a deep learning approach
Artificial Intelligence (AI) has recently emerged in many engineering fields as a new approach for handling complex systems and elaborating physical models. Deep learning, based on a training/validation technique, is one such method and has shown outstanding results. For instance, a virtual Go player (one of the most difficult problems in AI) was recently trained using a deep learning strategy and, in 2016, defeated a world-class professional in a five-game match for the first time.
In fluid mechanics, breakthroughs in numerical methods can be expected from using such techniques to develop complex physical models or to enhance current numerical solvers [1]. This project focuses on the Lattice Boltzmann Method (LBM), which has proven to be an effective solver for low-Mach-number flows thanks to its high-accuracy, low-cost advection scheme. Compared with Navier-Stokes solvers, the equations solved in LBM are discretized in time, space, and velocity, the latter requiring a specific model known as a lattice, in which a few discrete velocities are chosen from the continuous velocity space. Such a method yields efficient computations with outstanding accuracy at low Mach numbers. However, the accuracy and numerical cost of the LBM at higher Mach numbers (M > 0.3) are still a challenge, which requires new developments.
Therefore, this project intends to improve current LBM methods using a deep learning strategy. The internship will focus on the classical weakly compressible 2D formulation available in the code Palabos, where the velocity space is discretized with a standard D2Q9 lattice, i.e. with 9 discrete velocities. Note that the more velocities are computed, the more expensive, and the more accurate, the simulation is. The main question addressed in this internship is: can we compute fewer velocities while keeping the same level of accuracy? One key idea is to use deep learning to learn how to compensate for the reduced number of velocities, for example through learnt source terms or learnt extra discrete velocities.
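For reference, a D2Q9 BGK time step (collision toward local equilibrium, then streaming of each population along its discrete velocity) can be sketched as follows. This is an illustrative NumPy toy on a periodic grid, not the Palabos implementation, and the learnt correction terms discussed above are not included:

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann sketch on a periodic grid.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)     # lattice weights
nx = ny = 32
tau = 0.8                                    # BGK relaxation time

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distributions for the 9 discrete velocities."""
    cu = np.einsum('qd,dxy->qxy', c, np.stack([ux, uy]))
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# Initialize with a small sinusoidal shear perturbation.
x = np.arange(nx)
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2*np.pi*x/nx)[:, None] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for _ in range(100):
    # Moments, then collision: relax toward local equilibrium (BGK).
    rho = f.sum(axis=0)
    ux = np.einsum('q,qxy->xy', c[:, 0], f) / rho
    uy = np.einsum('q,qxy->xy', c[:, 1], f) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau
    # Streaming: shift each population along its discrete velocity.
    for q in range(9):
        f[q] = np.roll(f[q], shift=(c[q, 0], c[q, 1]), axis=(0, 1))

mass_err = abs(f.sum() - nx * ny)
print(f"mass conservation error after 100 steps: {mass_err:.2e}")
```

The collide-and-stream structure above is also where a learnt source term would naturally be inserted, as an extra contribution to the collision step.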
This internship on LBM for weakly compressible flows will be a first step towards improving LBM methods for high-Mach-number flows, where a reduced number of velocities at constant accuracy might lead to significant breakthroughs.
Internship: Intelligent data collection for efficient model surrogating with deep learning
Recent advances in Artificial Intelligence, such as machine learning based on deep neural networks, allow the learning of any function of interest, given enough input/output samples. While the offline learning time can be long, the online computation of the output given the inputs takes constant time, generally very short. This paves the way for using such techniques to build surrogate models of functions whose long computations are required at high frequency. This is particularly true in elaborate physics models where iterative computation is necessary in every cell of a discretized space, for instance to solve local partial differential equations such as those encountered in fluid motion or heat exchange, among others.
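The offline/online split can be illustrated in miniature: below, a made-up "expensive" fixed-point iteration plays the role of the original function, and a fitted polynomial plays the role of the surrogate, whose online evaluation cost is fixed regardless of the accuracy of the original solve:

```python
import numpy as np

# Miniature offline/online surrogate split. The "expensive" function is an
# iterative fixed-point solve (a stand-in for a PDE solver); the surrogate
# is a fixed-size polynomial whose evaluation cost is constant.
def expensive(x, iters=2000):
    y = 0.0
    for _ in range(iters):                   # fixed-point iteration y = cos(y + x)
        y = np.cos(y + x)
    return y

# Offline phase: sample input/output pairs once and fit the surrogate.
xs = np.linspace(-1.0, 1.0, 50)
ys = np.array([expensive(x) for x in xs])
coef = np.polyfit(xs, ys, 7)

# Online phase: constant-time evaluation, accurate on an unseen input.
x_test = 0.33
err = abs(np.polyval(coef, x_test) - expensive(x_test))
print(f"surrogate error at x={x_test}: {err:.2e}")
```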
Using surrogate models based on deep learning has already shown interesting results, but questions remain open before such methods can spread widely. While the learning phase of the neural network is well understood, the problem of collecting the data with the original function before learning (or the loop combining both) is much less investigated. In practice, this may have dramatic consequences for several reasons:
• Deep learning generally needs a lot of data, as it does not take into account any modelling hypothesis. If the original function is expensive or slow to compute (for instance a very precise simulator in fluid dynamics, or a real environment), the global amount of data should be minimized.
• There is a trade-off between the quantity of data used for learning and the quality of the surrogate model obtained, but it is difficult to assess since the properties of the various deep architectures are not completely modelled.
• The impact of the quality of the data, particularly in terms of variety, is still not clearly understood: the data should cover all the cases, and certainly needs to be denser around sensitive regions. Yet, methods to identify which data should be generated to complete the dataset are still missing.
Such problems have been studied as hyperparameter optimization (parameters for data collection can be seen as hyperparameters), for instance with the pioneering ParamILS algorithm. Techniques have since evolved, for instance by adding a model of the error of the surrogate model, as in the AutoML framework, and/or by exploiting the sequential nature of the hyperparameter search, as in the SMAC framework. All these approaches consider the sequential problem of selecting good (hyper-)parameters, observing the resulting error, then selecting another set of good (hyper-)parameters, and so on. Another approach is the reinforcement learning framework, which has shown very impressive results in the last few years. Such techniques can use deep learning to build a model of the expected future errors after choosing some hyperparameters.
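The sequential selection loop common to these frameworks can be sketched on a toy problem. Here a simple quadratic regression stands in for the error models used by SMAC-like methods, and the true error curve is of course hypothetical; in practice each evaluation would train a surrogate and measure its error:

```python
import numpy as np

# Toy sequential model-based search over one "data collection" parameter
# (e.g. a sampling density). The true error curve is unknown to the loop.
def true_error(x):
    return (x - 0.7) ** 2 + 0.01

X = [0.1, 0.5, 0.9]                          # initial design
Y = [true_error(x) for x in X]

for _ in range(10):
    # Fit a quadratic model of error vs. parameter: a cheap stand-in for
    # the regression models used by SMAC-like frameworks.
    coef = np.polyfit(X, Y, 2)
    grid = np.linspace(0.0, 1.0, 101)
    x_next = grid[np.argmin(np.polyval(coef, grid))]
    X.append(x_next)                         # evaluate the model's minimizer
    Y.append(true_error(x_next))

best = X[int(np.argmin(Y))]
print(f"best parameter found: {best:.2f}")
```

Real frameworks add an exploration term to avoid repeatedly querying the model's current minimum, a refinement omitted here for brevity.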
This internship will investigate the first steps towards algorithms and methodologies for intelligent data collection taking into account the criteria above, using techniques ranging from statistical modelling to reinforcement learning. These will be applied to building surrogate models of the Poisson equation resolution used in fluid mechanics.
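As a concrete starting point, the "original function" could be a classical iterative Poisson solve. The sketch below (1D, Jacobi iteration, homogeneous Dirichlet boundaries, all choices illustrative) shows how (source, solution) training pairs could be generated and verified:

```python
import numpy as np

# Generating (source, solution) pairs for the 1D Poisson problem
# -u'' = f on (0, 1), u(0) = u(1) = 0, as surrogate training data.
# Jacobi iteration plays the deliberately slow "original function".
n = 64
x = np.linspace(0.0, 1.0, n + 2)[1:-1]       # interior grid points
h = 1.0 / (n + 1)

def solve_poisson(f, iters=20000):
    u = np.zeros(n)
    for _ in range(iters):
        up = np.pad(u, 1)                    # homogeneous Dirichlet BCs
        # Jacobi update: u_i <- (u_{i-1} + u_{i+1} + h^2 f_i) / 2
        u = 0.5 * (up[:-2] + up[2:] + h**2 * f)
    return u

# Sanity check against the analytic solution u = sin(pi x)
# of the source f = pi^2 sin(pi x).
f = np.pi**2 * np.sin(np.pi * x)
u = solve_poisson(f)
err = np.abs(u - np.sin(np.pi * x)).max()
print(f"max error vs analytic solution: {err:.2e}")
```

Each call to `solve_poisson` is slow by construction, which makes the data-collection question above concrete: choosing which sources f to solve for, and how many, is exactly the intelligent-collection problem.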
Internship: Feasibility study of deep learning methods applied to spacecraft thermal analysis
Deep Learning is a Machine Learning strategy enjoying great success in many industries, including the space sector, where it has demonstrated capabilities far superior to previous approaches in key areas such as image analysis. In scientific computing, recent examples tend to demonstrate the potential of these methods.
The thermal analysis of space systems (satellites, instruments, etc.) currently relies on conventional scientific computing methods such as (i) the heat equation solved in time and space and (ii) ray tracing using Monte Carlo methods for the radiative aspects. The objective of these analyses is ultimately to determine the temperature of all the subsystems making up the system, in order to guarantee its proper behaviour during its missions.
However, these approaches can prove computationally expensive depending on the mission profiles (pointing, external fluxes, interplanetary probes, etc.). To face the competitive challenges currently encountered in the space sector, AIRBUS Defence&Space is deploying an ambitious digitalization plan that aims, in particular, at improving its analysis processes.
The internship proposes to explore the possibilities of deep learning by training large neural networks to "imitate" a training set, for example to produce a temperature field on a satellite subsystem as a function of its mission parameters. To this end, a dataset from high-fidelity simulations produced by Airbus Defence and Space will be used. Neural networks will be trained on these data, then validated on new configurations or missions, and finally compared with predictions made by state-of-the-art methods.
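To fix ideas, a conventional thermal computation of the kind a surrogate would imitate can be sketched with an explicit 1D heat-equation solver; all parameters below are made up and far simpler than an actual spacecraft thermal model:

```python
import numpy as np

# Illustrative "conventional" thermal computation: explicit time stepping
# of the 1D heat equation dT/dt = alpha * d2T/dx2 on a rod with fixed hot
# ends. Pairs (initial field, final field) of this kind could serve as a
# training set for a neural-network surrogate.
n, alpha, dt, dx = 50, 1e-3, 0.1, 0.02
r = alpha * dt / dx**2                       # explicit stability needs r <= 0.5

T = np.full(n, 20.0)                         # rod initially at 20 degrees C
T[0], T[-1] = 100.0, 100.0                   # both ends held at 100 degrees C

for _ in range(10000):
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

dev = np.abs(T - 100.0).max()
print(f"max deviation from the 100-degree steady state: {dev:.4f}")
```

A trained network would replace the time-stepping loop, mapping boundary and mission parameters directly to the final temperature field.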
The intern will be supervised by ISAE (M. Bauerheim, N. Gourdain) and CERFACS (A. Misdariis, C. Lapeyre) for the academic aspects, and by Airbus Defence and Space (R. Mari, J. Ponsy) for the industrial aspects.