Seminar: Gabriele Ciravegna - Intesa Sanpaolo Innovation Center (ISPIC)
by BUTEL Nathalie
On May 27 at 10am, we have the great opportunity to hear from Dr. Gabriele Ciravegna on Explainable AI.
The talk will be held in the I3S conference room.
Title: Accurate, Interpretable, Verifiable Concept Bottleneck Models
Abstract:
I present a concise overview of a research trajectory aimed at making Concept Bottleneck Models (CBMs) accurate, interpretable, and verifiable. CBMs are transparent neural networks, as they predict a set of human-understandable concepts before producing the final output. However, early CBMs faced a trade-off between accuracy and transparency. Concept Embedding Models (CEMs) addressed this by learning rich, high-dimensional concept representations that enable test-time interventions without sacrificing performance. Yet both CBMs and CEMs mainly rely on black-box task predictors, limiting their interpretability. The Deep Concept Reasoner (DCR) introduced a neural-symbolic layer that constructs logic rules over concepts for prediction, achieving full prediction interpretability but lacking formal verifiability of the learnt rules. The Concept-based Memory Reasoner (CMR) resolves this by incorporating a memory of trainable rules, enabling formal verification, rule-level interventions, and pre-deployment checks.
Finally, Mixture of Concept Bottleneck Experts (M-CBEs) generalize CBMs by freeing the functional form of expert predictors. Through linear and symbolic instantiations, M-CBEs reveal a broader design space that flexibly navigates the accuracy-interpretability trade-off and adapts to diverse user needs.
M-CBEs fully integrate all three properties, representing an accurate, interpretable, and verifiable CBM.
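For readers unfamiliar with the CBM family, the core idea above (predict human-inspectable concept scores first, then compute the task output only from those concepts, so an expert can intervene on concepts at test time) can be sketched as a toy model. This is an illustrative sketch with random, untrained weights, not the speaker's code; all class and parameter names here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConceptBottleneckModel:
    """Toy two-stage CBM: input -> concept scores -> task logits.

    Weights are random placeholders; a real CBM trains both stages
    with concept-level and task-level supervision.
    """

    def __init__(self, n_features, n_concepts, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W_c = rng.normal(size=(n_features, n_concepts))  # concept predictor
        self.W_y = rng.normal(size=(n_concepts, n_classes))   # task predictor

    def predict_concepts(self, x):
        # Each concept score lies in [0, 1] and is human-inspectable.
        return sigmoid(x @ self.W_c)

    def predict(self, x, concept_override=None):
        c = self.predict_concepts(x)
        # Test-time intervention: an expert overwrites chosen concept values.
        if concept_override is not None:
            for idx, val in concept_override.items():
                c[:, idx] = val
        # The bottleneck: task logits depend on the input only through c.
        return c @ self.W_y

model = ConceptBottleneckModel(n_features=8, n_concepts=4, n_classes=3)
x = np.random.default_rng(1).normal(size=(2, 8))
logits = model.predict(x)
# Intervening on concept 0 changes the downstream prediction.
fixed = model.predict(x, concept_override={0: 1.0})
```

Because the task head sees only the concept vector, correcting a mispredicted concept directly propagates to the final output, which is the mechanism CEMs, DCR, CMR, and M-CBEs all build on.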
Bio:
Gabriele Ciravegna is a Researcher at the Intesa Sanpaolo Innovation Center’s Artificial Intelligence Lab. His work focuses on advancing the explainability, robustness, and efficiency of Deep Neural Networks. He is particularly recognized for his contributions to Concept-based Explainable AI (XAI), with applications across Computer Vision, Natural Language Processing, and Healthcare.
An active member of the research community since 2019, Gabriele frequently publishes in and reviews for top-tier venues, including NeurIPS, ICML, AAAI, IJCAI, IEEE TNNLS and IEEE TPAMI. He earned his Ph.D. under the guidance of Prof. Marco Gori, receiving the IEEE Caianiello Award and the Città di Firenze Award for the Best Ph.D. Thesis in 2021. In addition to his research, he co-lectures Machine Learning courses at Université Côte d’Azur and Politecnico di Torino, where he also completed two postdoctoral fellowships.