Program at a Glance
| Time | Wed 12 Aug. | Thu 13 Aug. | Fri 14 Aug. | Sat 15 Aug. | Sun 16 Aug. |
|---|---|---|---|---|---|
| 09:00–10:00 | Keynote | Keynote | Workshop | Doctoral Consortium | |
| 10:00–10:30 | Coffee Break | Coffee Break | Coffee Break | Coffee Break | |
| 10:30–12:00 | Technical Talks | Technical Talks | Workshop | Doctoral Consortium | |
| 12:00–13:00 | Lunch | Lunch & PC Meeting | Lunch | Lunch | |
| 13:00–14:30 | Technical Talks | Technical Talks | Workshop | Doctoral Consortium | |
| 14:30–15:00 | Coffee Break | Coffee Break | Coffee Break | Coffee Break | |
| 15:00–16:00 | DC Intro | Technical Talks | Technical Talks | Workshop | |
| 16:00–17:00 | DC Intro | Technical Talks & Poster-Pitches | Community Meeting | Commute: Uni → City | |
| 17:00–18:00 | Reception & Posters | | | Social Program | |
| 18:00–19:00 | Reception & Posters | Commute: Uni → City | | Social Program | |
| 19:00–20:00 | Commute: Uni → City | Conference Dinner | | Social Program | |
| 20:00–21:00 | City Tour | Conference Dinner | | | |
| 21:00–22:00 | | Conference Dinner | | | |
Keynotes
Rosina O. Weber
Professor of Information Science and Computer Science, Drexel University, USA
XAI is in Trouble, but is CBR?
In recent work, I examined why the subfield of XAI is in trouble by analyzing issues related to its scope, key definitions, motivations, and evaluation practices. Although popular, the field has not progressed enough to offer substantial solutions for AI explainability. Based on that analysis, I proposed a few directions. Although no one claims that CBR is in trouble, it remains a valuable yet non-mainstream AI subfield. Does CBR have an identity problem? Is it confused, not knowing whether it is symbolic or subsymbolic? In the past two years alone, we have demonstrated how CBR can support AI alignment, benchmark additive feature attributions, and enhance LLMs. Nevertheless, CBR may also benefit from additional strategic directions.
Bio:
Rosina O. Weber is a Professor of Information Science and Computer Science at Drexel University, where she advises students with interdisciplinary interests. With degrees in both Economics (B.A.) and Engineering (M.S., Ph.D.), she is a leader in explainable artificial intelligence (XAI) and case-based reasoning, fields where multiple disciplines converge. Weber has spent more than two decades combining symbolic and neural methods to build use-inspired AI systems across biomedical, legal, military, and science-and-technology domains. Her research has been funded by NIH, DARPA, DHS, and international agencies, including projects such as DARPA POCUS-AI (improving model accuracy through XAI), the NIH NCATS Biomedical Data Translator (designing and explaining a reasoning agent), and Sweden’s Vinnova-funded initiative to embed explanatory capabilities in deployed AI applications. Professor Weber has co-chaired multiple XAI workshops, delivered XAI tutorials, and taught AI to both computer-science majors and students from non-computational disciplines. Her scholarship appears in venues such as AI Magazine, Applied AI Letters, Expert Systems with Applications, Knowledge-Based Systems, AAAI, and ICCBR; her papers have earned best-paper honors, and she has received a Research Excellence Award. Beyond academia, she has been featured on Good Day Philadelphia and NBC Nightly News with Lester Holt, as well as podcasts. She is an elected member of the AAAI Executive Council and also a member of AAAS, AWIS, and ACL.
Eyke Hüllermeier
Professor of Artificial Intelligence and Machine Learning, Ludwig-Maximilians-Universität München, Germany
In any case, or maybe not? Towards uncertainty-aware CBR
The representation and handling of uncertainty have recently received increasing attention in machine learning. One important branch of research focuses on distinguishing, representing, and quantifying two different types of uncertainty: aleatoric and epistemic. Another line of research that has gained traction is conformal prediction, a methodology for generating set-valued predictions with theoretical coverage guarantees. This talk will establish connections between these two uncertainty frameworks and case-based reasoning, elaborate on their potential as a formal foundation for uncertainty-aware CBR, and highlight the specific challenges that arise in this context.
Bio:
Eyke Hüllermeier is a full professor at the Institute of Informatics at LMU Munich, Germany, where he holds the Chair of Artificial Intelligence and Machine Learning. He studied mathematics and business computing, received his PhD in Computer Science from Paderborn University in 1997, and a Habilitation degree in 2002. Before joining LMU, he held professorships at several other German universities (Dortmund, Magdeburg, Marburg, Paderborn) and spent two years as a Marie Curie fellow at the IRIT in Toulouse (France). His research interests are centered around methods and theoretical foundations of artificial intelligence, with a particular focus on machine learning, preference modeling, and reasoning under uncertainty. He has published more than 400 articles on related topics in top-tier journals and major international conferences, and several of his contributions have been recognized with scientific awards. Professor Hüllermeier is Editor-in-Chief of Data Mining and Knowledge Discovery, Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), and serves on the editorial boards of several other AI and machine learning journals. He is currently President of EuADS, the European Association for Data Science, a member of the Strategy Board of the Munich Center for Machine Learning (MCML), and a member of the Steering Committee of the Konrad Zuse School of Excellence in Reliable AI (relAI).