Industries and services are currently undergoing a new revolution driven by the convergence of Big Data and Artificial Intelligence. Access to large volumes of data, combined with growing computing power, opens the door to the development of AI systems, and more specifically to Machine Learning tools.

Boosted by the Internet of Things, Machine Learning is poised to spread across almost all industrial processes, from the early design of a product to its use by the customer: supply and demand forecasting, predictive maintenance, and predictive modeling of users, to name but a few, are all applications currently targeted by ML. In parallel, the data generated within a company has value and creates opportunities for new services enabled by ML algorithms. Eventually, the product itself, empowered by AI, can become smart and autonomous, as illustrated by self-managed networks and self-driving cars.

Compared to the first age of Big Data Analytics, new and higher expectations come into play. To support decision-making in critical areas (defense, health, transportation, …), or simply to earn the trust that any technology requires in order to be adopted, AI systems must offer guarantees on their correctness, their robustness, the traceability of learning, and the interpretability of their decisions. Moreover, embedded in a non-stationary environment, they are expected to interact with that environment, be aware of their potential weaknesses, and continue to improve themselves through relevant interactions. For research groups, this represents a set of stimulating challenges whose solutions will be the key to the long-lasting use of Data Science and AI tools.