Two positions (postdoc/research engineer) on robust and explainable AI are available from March 2022.

These two postdoc/research engineer positions will benefit from the group’s ongoing effort in this domain, working with two PhD students (Jayneel Parekh and Mélanie Gornet), the supervisors, Florence d’Alché and Pavlo Mozharovskyi, and collaborators Winston Maxwell (Télécom Paris, Operational AI Ethics), Stéphan Clémençon (DSAIDIS) and IDEMIA, through the ANR project LIMPID.

Practical information for application

  • Place of work: campus of Institut Polytechnique de Paris (25 km from Paris, accessible by public transport): Télécom Paris [TP], 19 place Marguerite Perey, F-91120 Palaiseau.
  • Starting date: from 20 February 2022

The application should be formatted as a single PDF file and should include:

  • A complete and detailed curriculum vitae
  • A personal statement
  • A selection of two international publications and PhD thesis reports
  • Contact details of two referees

The PDF file should be sent to the supervisors: florence.dalche@telecom-paris.fr and pavlo.mozharovskyi@telecom-paris.fr, with the email subject [DSAIDIS Postdoc].

(first round of selection: 15 March 2022)

Postdoc/research engineer (position 1)

We are currently working on a novel framework named FLINT for learning with interpretation (see our latest paper), which enables learning with interpretability by design and can also be specialized to post-hoc explanation. It consists in jointly learning a pair of models, one devoted to the interpretation of the other. An axiomatic view of the loss and penalties is adopted, which leads to the learning of a dictionary of high-level features.
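To make the joint-learning idea concrete, here is a minimal, purely illustrative PyTorch sketch; it is not the FLINT implementation, and all module names, architectures, and penalty weights below are assumptions. A predictor exposes a hidden representation, an interpreter computes a small dictionary of attribute functions from it and must reproduce the predictor's output, and the training loss combines prediction quality, fidelity between the two models, and a sparsity penalty on the attributes.

```python
# Minimal illustrative sketch (not the official FLINT code): a predictor f and an
# interpreter g are trained jointly; g maps hidden activations of f to a small
# dictionary of high-level attribute functions and must reproduce f's output.
# All names and hyperparameters below are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Predictor(nn.Module):
    def __init__(self, in_dim=784, hidden=128, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h = self.body(x)          # hidden representation shared with the interpreter
        return self.head(h), h

class Interpreter(nn.Module):
    def __init__(self, hidden=128, n_attributes=24, n_classes=10):
        super().__init__()
        self.attributes = nn.Linear(hidden, n_attributes)   # dictionary of high-level features
        self.readout = nn.Linear(n_attributes, n_classes)   # interpretable linear readout

    def forward(self, h):
        phi = torch.relu(self.attributes(h))                # attribute activations
        return self.readout(phi), phi

def joint_loss(y, logits_f, logits_g, phi, beta=1.0, gamma=1e-3):
    pred = F.cross_entropy(logits_f, y)                     # predictive performance of f
    fidelity = F.kl_div(F.log_softmax(logits_g, dim=1),     # g must agree with f
                        F.softmax(logits_f.detach(), dim=1),
                        reduction="batchmean")
    sparsity = phi.abs().mean()                             # encourage a concise dictionary
    return pred + beta * fidelity + gamma * sparsity

# One toy training step:
f_net, g_net = Predictor(), Interpreter()
opt = torch.optim.Adam(list(f_net.parameters()) + list(g_net.parameters()), lr=1e-3)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
logits_f, h = f_net(x)
logits_g, phi = g_net(h)
loss = joint_loss(y, logits_f, logits_g, phi)
opt.zero_grad(); loss.backward(); opt.step()
```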

This framework was successfully tested on image recognition, and we now consider the following challenges:

– On image recognition: ensuring faithfulness, imposing knowledge-based properties (for instance, invariance under transformations) on the high-level features, and improving the understandability of the high-level feature functions by adding auxiliary tasks to solve;

– On other tasks, such as multilabel classification and structured prediction: further development of the FLINT framework.

Postdoc/research engineer (position 2)

The second position will focus on:

– Definition of evaluation metrics for the assessment of robustness, uncertainty, and interpretability;

– Study of theoretical guarantees;

– Development of a Python platform dedicated to measuring and testing several approaches through the lens of robustness and explainability for image data, including our own tools (FLINT); a sketch of the kind of metric such a platform could expose is given below.
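As one concrete illustration, the sketch below shows a simple robustness metric such a platform could expose: the fraction of inputs whose predicted class is unchanged under small Gaussian perturbations. This is an assumption about a possible API, not a specification of the planned library; the function name and the toy model are hypothetical.

```python
# Illustrative sketch of one possible robustness metric (an assumption, not a
# specification of the planned platform): the rate at which a model's predictions
# stay unchanged under small Gaussian perturbations of the input.
import torch

@torch.no_grad()
def prediction_stability(model, x, sigma=0.05, n_samples=10):
    """Fraction of inputs whose predicted class is unchanged under noise of scale sigma."""
    base = model(x).argmax(dim=1)
    stable = torch.ones_like(base, dtype=torch.bool)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        stable &= model(noisy).argmax(dim=1) == base
    return stable.float().mean().item()

# Example with a toy image classifier:
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
images = torch.randn(64, 1, 28, 28)
print(f"stability @ sigma=0.05: {prediction_stability(toy_model, images):.2f}")
```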