Two postdoc/research engineer positions on robust and explainable AI are available from March 2022.
These two positions will benefit from the group's ongoing effort in this domain, involving two PhD students (Jayneel Parekh and Mélanie Gornet), the supervisors Florence d'Alché and Pavlo Mozharovskyi, and collaborations with Winston Maxwell (Télécom Paris, Operational AI Ethics), Stéphan Clémençon (DSAIDIS), and IDEMIA through the ANR project LIMPID.
The application should be formatted as a single PDF file and should include:
The PDF file should be sent to the supervisors: email@example.com, firstname.lastname@example.org, with the email subject [DSAIDIS Postdoc].
(first round of selection – 15 March 2022)
Postdoc/research engineer (position 1)
We are currently working on a novel framework named FLINT for Learning with Interpretability (see our latest paper) that tackles interpretable-by-design learning and can be specialized to post-hoc explanation. It consists in jointly learning a pair of models, one devoted to the interpretation of the other. An axiomatic view of the loss and penalties is adopted, which yields a learned dictionary of high-level features.
This framework was successfully tested on image recognition, and we now consider the following challenges:
– still on image recognition: ensuring faithfulness, imposing knowledge-based properties (for instance, invariance under transformations) on the high-level features, and improving the understandability of the high-level feature functions by adding auxiliary tasks to solve,
– on other tasks, such as multilabel classification and structured prediction: extension of the FLINT framework.
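To give a flavor of the joint learning scheme described above, here is a minimal, purely illustrative sketch in numpy. All dimensions, weight names, and the exact penalty terms are assumptions for illustration, not the actual FLINT architecture or loss: a predictor and an interpreter share a hidden representation, the interpreter reads a small dictionary of high-level attribute functions, and the loss combines prediction, output fidelity, and sparsity terms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
d_in, d_hidden, d_attr, n_classes = 8, 16, 5, 3

# Predictor f: input -> shared hidden layer -> class logits.
W1 = rng.normal(size=(d_in, d_hidden))
W2 = rng.normal(size=(d_hidden, n_classes))

# Interpreter g: maps the shared hidden layer to a small dictionary
# of high-level attribute activations, then to logits.
W_attr = rng.normal(size=(d_hidden, d_attr))   # attribute dictionary
W_int = rng.normal(size=(d_attr, n_classes))   # interpretable read-out

def relu(z):
    return np.maximum(z, 0.0)

def forward(x):
    h = relu(x @ W1)       # shared hidden representation
    logits_f = h @ W2      # predictor output
    a = relu(h @ W_attr)   # high-level attribute activations
    logits_g = a @ W_int   # interpreter output
    return logits_f, logits_g, a

def joint_loss(x, y_onehot, lam_fid=1.0, lam_sparse=0.1):
    """Prediction loss + fidelity penalty (the interpreter must mimic
    the predictor) + sparsity penalty (few attributes fire per input).
    Squared loss is used here purely to keep the sketch short."""
    logits_f, logits_g, a = forward(x)
    pred_loss = np.mean((logits_f - y_onehot) ** 2)
    fidelity = np.mean((logits_g - logits_f) ** 2)
    sparsity = np.mean(np.abs(a))
    return pred_loss + lam_fid * fidelity + lam_sparse * sparsity

x = rng.normal(size=(4, d_in))
y = np.eye(n_classes)[rng.integers(0, n_classes, size=4)]
loss = joint_loss(x, y)
```

In a real implementation both models would be trained jointly by gradient descent on this combined objective; the sketch only shows how the three terms interact.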
Postdoc/research engineer (position 2)
– definition of evaluation metrics for assessing robustness, uncertainty and interpretability,
– study of theoretical guarantees,
– development of a Python platform dedicated to measuring and testing several approaches through the lens of robustness and explainability for image data, including our tools (FLINT).
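As a concrete example of the kind of evaluation metric such a platform could expose, here is a hedged sketch of a simple robustness measure: accuracy under additive Gaussian input noise of increasing magnitude. The function names and the toy model are hypothetical illustrations, not part of any existing tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy(model, X, y):
    """Fraction of inputs on which the model's prediction matches the label."""
    return float(np.mean(model(X) == y))

def noise_robustness_curve(model, X, y, sigmas):
    """Accuracy of `model` on inputs corrupted by additive Gaussian
    noise, one value per noise standard deviation in `sigmas`."""
    return [accuracy(model, X + rng.normal(scale=s, size=X.shape), y)
            for s in sigmas]

# Hypothetical toy model: classify a point by the sign of its first coordinate.
toy_model = lambda X: (X[:, 0] > 0).astype(int)

X = rng.normal(size=(200, 2))
y = toy_model(X)  # labels agree with the model, so clean accuracy is 1.0

curve = noise_robustness_curve(toy_model, X, y, sigmas=[0.0, 0.5, 1.0])
```

Plotting such a curve over a grid of noise levels gives a simple visual summary of how gracefully a classifier degrades; richer metrics (adversarial perturbations, calibration under shift) would follow the same interface.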