The ExplAINN project, funded by the German Federal Ministry of Education and Research (BMBF) and carried out from 1 October 2019 to 30 September 2022, focused on explainable artificial intelligence (XAI). Its primary aim was to make machine‑learning models transparent, interpretable, and trustworthy for human users. The team investigated XAI through three complementary approaches: analysing existing black‑box models, developing inherently interpretable methods, and studying how model robustness influences performance. A key outcome was a general framework for comparing XAI contributions, which was presented at several conferences and taken up by the community. This framework helped clarify the distinction between interpretation and explanation, a topic that had previously lacked consensus.
The project produced a range of technical results. Visualisation techniques for computer‑vision tasks were developed to provide intuitive explanations of model predictions and to improve users' understanding of complex decision processes. Robustness analyses revealed that while many models could be interpreted, achieving strong resilience against adversarial attacks remained challenging; repeated experiments in work packages 3 and 4 did not reach the robustness levels initially targeted. Nevertheless, the research highlighted that robust models tend to converge on similar feature representations regardless of architecture, reinforcing the link between robustness and interpretability. The team also contributed to DIN SPEC 92001‑3, the part of the national standard series on AI life‑cycle processes and quality requirements that addresses explainability. This milestone underscored the practical impact of the project's findings on industry and regulatory practice.
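As a rough illustration of the kind of visual explanation technique referred to above, the sketch below computes a simple gradient saliency map for an image classifier's top prediction. It is not the project's actual method; the pretrained ResNet‑18, the preprocessing choices, and the function name saliency_map are assumptions made purely for this example.

```python
# Minimal sketch of a gradient-based saliency map, assuming PyTorch/torchvision
# and a pretrained ResNet-18 as a stand-in classifier. It illustrates the
# general idea of visual explanations for CV models, not the project's method.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference mode for BatchNorm/Dropout; gradients still flow

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def saliency_map(image: Image.Image) -> torch.Tensor:
    """Per-pixel saliency for the model's top-1 prediction (224x224 map)."""
    x = preprocess(image).unsqueeze(0).requires_grad_(True)
    logits = model(x)
    top_class = logits.argmax(dim=1).item()
    # Gradient of the winning logit with respect to the input pixels
    logits[0, top_class].backward()
    # Collapse the colour channels into a single heat map
    return x.grad.abs().max(dim=1).values.squeeze(0)

# Hypothetical usage:
# sal = saliency_map(Image.open("example.jpg").convert("RGB"))
```

Brighter regions in the resulting map mark pixels whose change would most affect the predicted class score, which is the basic intuition behind many such visual explanation techniques.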
In addition to these scientific outputs, ExplAINN produced 26 peer‑reviewed publications at venues such as NeurIPS, ICCV, ICPR, TPAMI, and IJCNN, and delivered two invited talks. The XAI Handbook, co‑authored by the team, was published and widely cited. Three doctoral dissertations and eight master's theses were supervised at the Technical University of Kaiserslautern, securing a strong academic legacy. The project's results were showcased at events such as the ML2R conference and the AI Academy, strengthening collaboration among German AI centres.
Collaboration was central to the project's success. The core team, led by Sebastian Palacio, included researchers from DFKI GmbH and the Technical University of Kaiserslautern, and its interdisciplinary composition brought together expertise in machine learning, computer vision, and human‑computer interaction. BMBF funding enabled the recruitment of additional researchers and the organisation of workshops that fostered knowledge exchange. Following the project's conclusion, the team continued to build on its achievements through subsequent DFKI initiatives such as XAINES, SustainML, and SenpAI, further advancing the field of explainable AI and enhancing the national and international reputation of the institute.
