The CogXAI project, funded by the German Federal Ministry of Education and Research (BMBF) under grant 01IS19076, ran from 1 October 2019 to 30 June 2023. It was carried out in cooperation with the Artificial Intelligence Lab at the University of Magdeburg, the Fraunhofer Institute for Integrated Circuits (IIS) and the Berlin start‑up Motor Ai GmbH. The Fraunhofer IIS represented the speech‑assistant domain, while Motor Ai provided the autonomous‑driving use case. The project was organised into seven work packages, with the first two focusing on the development of new analysis techniques and novel neural‑network architectures, and the third on linking these advances to the partner applications. Additional packages addressed usability, transfer to industry, teaching integration and science communication.
Scientifically, CogXAI pursued two complementary goals: the creation of post‑hoc explanation methods inspired by cognitive neuroscience, and the design of inherently interpretable neural‑network architectures. The first goal led to the introduction of Neuron Activation Profiles (NAPs). A NAP records how a network responds to distinct groups of inputs, allowing researchers to compare activation patterns across classes or conditions. Because a NAP aggregates responses over many examples, it provides a global explanation that does not rely on visualisation of individual inputs. The project also developed a visualisation technique for these profiles, inspired by brain‑activity maps. By re‑ordering neurons according to similarity of activation, the method produces topographic activation maps that display the internal state of a network in a spatially organised manner, facilitating intuitive interpretation of hidden layers.
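To make the construction concrete, the following minimal sketch computes a simple NAP as the per-class mean activation of a hidden layer and derives a topographic ordering by clustering neurons on their profiles. The function names, the mean-aggregation choice and the cosine-distance clustering are illustrative assumptions for this sketch, not the project's published implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def neuron_activation_profiles(activations, labels):
    """Aggregate hidden-layer activations into one profile per class.

    activations: (n_examples, n_neurons) array of hidden-layer outputs
    labels:      (n_examples,) array of class indices
    Returns a (n_classes, n_neurons) array holding the mean activation of
    each neuron over all examples of a class -- a simple NAP.
    """
    classes = np.unique(labels)
    return np.stack([activations[labels == c].mean(axis=0) for c in classes])

def topographic_order(naps):
    """Re-order neurons so that similarly activated neurons sit side by side.

    Neurons are clustered hierarchically on their per-class profiles; the
    dendrogram leaf order yields a 1-D 'topographic' arrangement that can be
    plotted like a brain-activity map.
    """
    return leaves_list(linkage(naps.T, method="average", metric="cosine"))

# Toy usage with random stand-in data: 1000 examples, 64 neurons, 10 classes.
rng = np.random.default_rng(0)
acts = rng.random((1000, 64))
labels = rng.integers(0, 10, size=1000)

naps = neuron_activation_profiles(acts, labels)    # shape (10, 64)
ordered_naps = naps[:, topographic_order(naps)]    # classes x re-ordered neurons
```

Plotting `ordered_naps` as a heat map then gives one row per class, with neurons arranged so that co-activating groups form contiguous patches rather than being scattered across the layer.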
The second goal involved translating principles from predictive coding and active inference into neural‑network design. The team produced predictive‑coding neural networks that implement exact error backpropagation without requiring a global error signal. In deeper variants, the architecture enforces strictly local information flow, so that each layer can be inspected independently. These designs enable a new form of interpretability: local error signals and layer‑wise activations can be examined without reference to the rest of the network. In addition, the project explored active‑learning and planning models that adaptively adjust their internal representations during inference, further aligning network behaviour with cognitive processes.
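The sketch below illustrates what strictly local computation means in a predictive‑coding network. It follows the generic Rao–Ballard / Whittington–Bogacz scheme, with invented layer sizes, learning rates and iteration counts, and should be read as a stand‑in for, not a reproduction of, the architectures developed in the project.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.tanh

def df(x):
    """Derivative of the tanh activation."""
    return 1.0 - np.tanh(x) ** 2

# Illustrative layer sizes: input -> hidden -> output.
sizes = [4, 8, 3]
W = [rng.normal(0, 0.1, (sizes[l + 1], sizes[l])) for l in range(2)]

def infer(x_in, target, n_steps=50, lr_x=0.1):
    """Relax value nodes to minimise the sum of squared prediction errors.

    Every quantity a layer uses -- its own prediction error and the error
    of the layer it predicts -- is locally available; no global error
    signal is propagated.
    """
    x = [x_in, rng.normal(0, 0.1, sizes[1]), target]  # x[0], x[2] clamped
    for _ in range(n_steps):
        eps = [None] * 3
        for l in (1, 2):                              # local prediction errors
            eps[l] = x[l] - W[l - 1] @ f(x[l - 1])
        # Only the free hidden node moves, driven by purely local terms:
        x[1] += lr_x * (-eps[1] + df(x[1]) * (W[1].T @ eps[2]))
    return x, eps

def learn(x_in, target, lr_w=0.05):
    """Hebbian-style weight update from local errors and local activities."""
    x, eps = infer(x_in, target)
    for l in (0, 1):
        W[l] += lr_w * np.outer(eps[l + 1], f(x[l]))

# Toy usage: associate a random input with a fixed target.
x_in, target = rng.random(4), np.array([1.0, 0.0, 0.0])
for _ in range(200):
    learn(x_in, target)
```

Because each layer's error nodes are explicit variables, an analyst can read them off directly at any point during inference, which is the layer‑wise inspectability the report attributes to these designs.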
While the report does not include benchmark accuracy figures, it emphasises that the primary contribution lies in methodological innovation rather than raw performance gains. The developed techniques were integrated into the partner applications: Fraunhofer IIS used the NAPs and topographic maps to analyse speech‑assistant models, and Motor Ai incorporated the predictive‑coding architectures into its perception pipeline. Usability studies confirmed that the new visualisations improved model‑debugging efficiency for engineers. The project also produced teaching modules and public‑outreach materials, ensuring that the advances reach both academia and industry.
In summary, CogXAI advanced the explainability of deep neural networks by combining cognitive‑neuroscience‑inspired analysis tools with novel, locally interpretable architectures. Through collaboration with Fraunhofer IIS and Motor Ai, the project demonstrated the practical relevance of these methods in speech assistance and autonomous driving, while also contributing to education and public communication of AI research.
