AISTATS is the International Conference on Artificial Intelligence and Statistics. Its 22nd edition took place in Naha, Okinawa, Japan from 16 April to 18 April 2019. Since its inception in 1985, AISTATS has been an interdisciplinary gathering of researchers at the intersection of artificial intelligence, machine learning, statistics and related areas.
The 2019 edition of AISTATS featured 360 papers. The conference is thus smaller than NeurIPS or ICML, which makes it easier to interact with authors while maintaining a highly selective review process. Moreover, the focus is mainly on the statistical analysis of the proposed methods, which is at the core of the Image-Data-Signal department's activity.
Pierre Laforgue and Alex Lambert, PhD students at Télécom Paris, presented two papers:
Infinite Task Learning in RKHSs
Romain Brault (Télécom Paris), Alex Lambert (Télécom Paris), Zoltán Szabó (École Polytechnique), Florence d'Alché-Buc (Télécom Paris), Maxime Sangnier (Sorbonne University)
Machine learning has witnessed tremendous success in solving tasks depending on a single hyperparameter. When considering simultaneously a finite number of tasks, multi-task learning enables one to account for the similarities of the tasks via appropriate regularizers. A step further consists of learning a continuum of tasks for various loss functions. A promising approach, called Parametric Task Learning, has paved the way in the continuum setting for affine models and piecewise-linear loss functions. In this work, we introduce a novel approach called Infinite Task Learning, whose goal is to learn a function mapping each input to a function over the hyperparameter space. We leverage tools from operator-valued kernels and the associated vector-valued RKHSs, which provide explicit control over the role of the hyperparameters and also allow us to consider new types of constraints. We provide generalization guarantees for the suggested scheme and illustrate its efficiency in cost-sensitive classification, quantile regression and density level set estimation.
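To make the continuum idea concrete, here is a minimal numpy sketch of one of the use cases mentioned above, quantile regression, where the hyperparameter is the quantile level θ. It uses a decomposable kernel (a product of a scalar kernel on inputs and one on quantile levels) and plain subgradient descent on the pinball loss; the toy data, bandwidths, step size and ridge term are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with heteroscedastic noise: conditional quantiles fan out with x.
n = 80
X = rng.uniform(0.0, 1.0, n)
y = X + 0.2 * (1.0 + X) * rng.normal(size=n)

# The "continuum of tasks": quantile levels theta, sampled on a grid.
thetas = np.linspace(0.1, 0.9, 9)
m = thetas.size

def gauss(a, b, s):
    """Gaussian kernel matrix between two 1-D point sets."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * s ** 2))

K_x = gauss(X, X, 0.2)            # scalar kernel on inputs
K_t = gauss(thetas, thetas, 0.2)  # scalar kernel on quantile levels

def pinball(r, theta):
    """Pinball (quantile) loss of residual r at level theta."""
    return np.maximum(theta * r, (theta - 1.0) * r)

# Double expansion f(x)(theta) = sum_ij A[i, j] k_x(x, x_i) k_t(theta, theta_j),
# the form suggested by representer arguments for decomposable kernels.
A = np.zeros((n, m))
lr, lam = 0.005, 1e-3

F = K_x @ A @ K_t
loss0 = pinball(y[:, None] - F, thetas[None, :]).mean()

for _ in range(3000):
    F = K_x @ A @ K_t             # F[i, j] = f(x_i)(theta_j)
    R = y[:, None] - F
    # Subgradient of the pinball loss with respect to the predictions F.
    G = np.where(R >= 0.0, -thetas[None, :], 1.0 - thetas[None, :])
    # Chain rule through F = K_x A K_t, plus a simple ridge term on A.
    A -= lr * (K_x @ G @ K_t / (n * m) + lam * A)

F = K_x @ A @ K_t
loss_final = pinball(y[:, None] - F, thetas[None, :]).mean()
print(f"mean pinball loss: {loss0:.3f} -> {loss_final:.3f}")
```

Each learned curve θ ↦ f(x)(θ) estimates all conditional quantiles of y given x at once; in the paper's framework, the function-valued output is also what makes it possible to impose constraints such as non-crossing of quantiles, which here only emerge approximately from the data.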
Autoencoding any Data through Kernel Autoencoders
Pierre Laforgue (Télécom Paris), Stephan Clémençon (Télécom Paris), Florence d'Alché-Buc (Télécom Paris)
This paper investigates a novel algorithmic approach to data representation based on kernel methods. Assuming that the observations lie in a Hilbert space X, the introduced Kernel Autoencoder (KAE) is the composition of mappings from vector-valued Reproducing Kernel Hilbert Spaces (vv-RKHSs) that minimizes the expected reconstruction error. Beyond a first extension of the autoencoding scheme to possibly infinite-dimensional Hilbert spaces, KAE further allows one to autoencode any kind of data by choosing X to be itself an RKHS. A theoretical analysis of the model is carried out, providing a generalization bound and shedding light on its connection with Kernel Principal Component Analysis. The proposed algorithms are then detailed at length: they crucially rely on the form taken by the minimizers, revealed by a dedicated Representer Theorem. Finally, numerical experiments on both simulated data and real labeled graphs (molecules) provide empirical evidence of KAE's performance.
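The connection with Kernel PCA can be made concrete: in feature space, KPCA with p components acts as a linear autoencoder minimizing reconstruction error, and that error equals the sum of the discarded eigenvalues of the centered Gram matrix divided by n. A short numpy sketch on toy data (the dataset and bandwidth are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points on a noisy circle in R^2.
n = 60
angles = rng.uniform(0.0, 2.0 * np.pi, n)
X = np.c_[np.cos(angles), np.sin(angles)] + 0.05 * rng.normal(size=(n, 2))

# Gaussian Gram matrix, then double centering (= centering in feature space).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / (2.0 * 0.5 ** 2))
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H

# Eigenvalues of the centered Gram matrix, in decreasing order.
evals = np.linalg.eigvalsh(Kc)[::-1]

def recon_error(p):
    """Mean squared feature-space reconstruction error when the first p
    kernel principal components are kept: sum of discarded eigenvalues / n."""
    return evals[p:].sum() / n

errors = [recon_error(p) for p in range(n + 1)]
print([round(e, 4) for e in errors[:5]])
```

The error is positive with no components, decreases as components are added, and vanishes once all are kept; KAE goes beyond this linear feature-space picture by composing learned vv-RKHS mappings instead.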