Abstract: Users expect robots to act in a human-like manner when working together to achieve a mutual goal. In such activity, trust is an essential socio-psychological construct that mediates collaboration performance between participants in a Human-Robot Interaction (HRI). While research on trust has gained increasing interest in HRI, studies building computational models of trust are scarce, especially in multiparty scenarios. In this paper, we present multimodal computational models of trust in a multiparty human-robot interaction scenario. More specifically, we address trust modeling both as a binary and as a multi-class classification problem. We also investigate how early and late fusion of modalities impact trust modeling. Our results indicate that early fusion performs better in both the binary and the multi-class formulation, suggesting that the interplay of modalities is important when studying trust. We also run a SHapley Additive exPlanations (SHAP) value analysis of our models and present the results for a Random Forest on the binary classification problem. Finally, we discuss which multimodal features are the most relevant for detecting trust or mistrust.
Keywords: Trust, Human-Robot Interaction, Affective Computing
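The SHAP analysis the abstract mentions attributes a model's prediction to individual input features via Shapley values. The following is a minimal pure-Python sketch of what is being computed, using exact Shapley values on a toy linear "trust score" (the feature names and weights are purely illustrative, not the paper's actual multimodal features or model):

```python
from itertools import combinations
from math import factorial

# Toy "trust score": a linear model over three hypothetical multimodal
# features (illustrative only; the paper's model is a trained classifier).
WEIGHTS = {"gaze_at_robot": 2.0, "speech_sentiment": 1.5, "response_delay": -1.0}

def model(x):
    """Scalar trust score for a full feature dict."""
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x, baseline):
    """Exact Shapley attributions: 'missing' features are replaced by a
    baseline value, which is what SHAP approximates for large models."""
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Coalition value with and without feature f present.
                with_f = {g: (x[g] if g in S or g == f else baseline[g]) for g in features}
                without_f = {g: (x[g] if g in S else baseline[g]) for g in features}
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

x = {"gaze_at_robot": 1.0, "speech_sentiment": 0.5, "response_delay": 2.0}
baseline = {f: 0.0 for f in x}
phi = shapley_values(x, baseline)
# For a linear model with a zero baseline, each attribution equals
# weight * feature value, and the attributions sum to model(x) - model(baseline).
```

Exact computation enumerates all 2^n coalitions, so it is only feasible for a handful of features; the SHAP library uses model-specific approximations (e.g. TreeSHAP for Random Forests) to scale this up.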
Abstract: Our discourses are full of potential lexical ambiguities, due in part to the pervasive use of words with multiple senses. Sometimes, one word may even be used in more than one sense throughout a text. But to what extent is this true for different kinds of texts? Does the use of polysemous words change when a discourse involves two people, or when speakers have time to plan what to say? We investigate these questions by comparing the polysemy level of texts of different kinds, with a focus on spontaneous spoken dialogs, unlike previous work, which examines solely scripted, written, monolog-like data. We compare multiple metrics that presuppose different conceptualizations of text polysemy, i.e., they consider the observed or the potential number of senses of words, or their sense distribution in a discourse. We show that the polysemy level of texts varies greatly depending on the kind of text considered, with dialogs and spoken discourses generally having a higher polysemy level than written monologs. Additionally, our results emphasize the need to relax the popular “one sense per discourse” hypothesis.
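The distinction the abstract draws between potential and observed polysemy can be made concrete with a small sketch. The sense inventory below is a made-up toy (real studies would draw on a lexicon such as WordNet), but the two metrics mirror the conceptualizations described: how many senses the words *could* have versus how many distinct senses are *actually* used in one discourse:

```python
from collections import defaultdict

# Toy sense inventory (hypothetical; a real study would use a lexicon
# such as WordNet to look up each lemma's senses).
SENSE_INVENTORY = {
    "bank": {"bank.finance", "bank.river"},
    "run": {"run.move", "run.operate", "run.flow"},
    "river": {"river.stream"},
}

def potential_polysemy(tokens):
    """Mean number of inventory senses per token: how ambiguous the
    vocabulary could be, regardless of how it is actually used."""
    known = [t for t in tokens if t in SENSE_INVENTORY]
    return sum(len(SENSE_INVENTORY[t]) for t in known) / len(known)

def observed_polysemy(tagged_tokens):
    """Mean number of distinct senses actually used per lemma in one
    discourse; a value above 1 violates 'one sense per discourse'."""
    used = defaultdict(set)
    for lemma, sense in tagged_tokens:
        used[lemma].add(sense)
    return sum(len(s) for s in used.values()) / len(used)

# A tiny sense-tagged "dialog" in which 'bank' is used in two senses.
dialog = [("bank", "bank.finance"), ("run", "run.move"),
          ("bank", "bank.river"), ("river", "river.stream")]
# potential: (2 + 3 + 2 + 1) / 4 tokens = 2.0
# observed: bank uses 2 senses, run 1, river 1 -> 4 / 3 lemmas
```

Here the observed polysemy exceeds 1 because "bank" appears in both senses within the same discourse, which is exactly the kind of case that motivates relaxing the one-sense-per-discourse hypothesis.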
Abstract: Many companies run customer service chats to help customers and resolve their difficulties. However, customer service data are confidential and thus cannot easily be shared in the research community. This also implies that these data are not labeled, which complicates training machine learning models on them. Given a very small subset of labeled data, we propose a semi-supervised framework dedicated to customer service data. Our framework serves multiple purposes, from predicting customer satisfaction to identifying the status of the customer’s problem. We apply it to textual data from SNCF customer service chats.
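One common semi-supervised pattern for this setting, when only a small labeled subset is available, is self-training: fit a classifier on the labeled data, pseudo-label the unlabeled points it is confident about, and refit. The sketch below is a minimal pure-Python illustration of that loop using a nearest-centroid classifier (the labels, features, and confidence threshold are illustrative assumptions, not the paper's actual framework):

```python
import math

def centroid_fit(X, y):
    """Per-class mean feature vector."""
    centroids = {}
    for label in set(y):
        rows = [x for x, l in zip(X, y) if l == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict_with_score(centroids, x):
    """Nearest-centroid label plus a confidence margin (distance gap
    between the best and second-best class)."""
    dists = {l: math.dist(c, x) for l, c in centroids.items()}
    ranked = sorted(dists, key=dists.get)
    best = ranked[0]
    margin = dists[ranked[1]] - dists[best] if len(ranked) > 1 else 1.0
    return best, margin

def self_train(X_lab, y_lab, X_unlab, threshold=0.5, rounds=5):
    """Repeatedly pseudo-label confident unlabeled points and refit."""
    X_lab, y_lab, X_unlab = list(X_lab), list(y_lab), list(X_unlab)
    for _ in range(rounds):
        centroids = centroid_fit(X_lab, y_lab)
        still_unlabeled = []
        for x in X_unlab:
            label, margin = predict_with_score(centroids, x)
            if margin >= threshold:
                X_lab.append(x)
                y_lab.append(label)  # adopt the pseudo-label
            else:
                still_unlabeled.append(x)
        if len(still_unlabeled) == len(X_unlab):
            break  # no confident point left to absorb
        X_unlab = still_unlabeled
    return centroid_fit(X_lab, y_lab)

# Two labeled chats (as toy 2-d feature vectors) and three unlabeled ones.
X_lab = [[0.0, 0.0], [1.0, 1.0]]
y_lab = ["unsatisfied", "satisfied"]
X_unlab = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]
model = self_train(X_lab, y_lab, X_unlab)
label, _ = predict_with_score(model, [0.95, 0.9])
```

In practice one would use richer text features and a stronger base classifier (e.g. scikit-learn's `SelfTrainingClassifier` wraps the same loop around any probabilistic estimator), but the structure of the loop is the same.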