The MDLMA (Multimodal Data Analysis with Multi‑Task Deep Learning) part C project, funded by the German Federal Ministry of Education and Research under grant number 031L0202C, ran from 1 January 2020 to 30 June 2023. It was a sub‑project of the CompLS Round 2 consortium, with Deutsches Elektronen‑Synchrotron (DESY) as the lead institution. The consortium brought together several research groups that supplied synchrotron‑based tomographic data, histological samples, and high‑performance computing resources, including modern GPU nodes equipped with four NVIDIA A100 accelerators each. The project's primary scientific aim was to develop deep‑learning methods that analyse large, multimodal biomedical imaging datasets with substantially greater speed and precision than previous workflows, specifically targeting the monitoring of bone‑implant degradation and regeneration.
Central to the effort was the creation of a comprehensive data catalogue with full metadata, integrated into the DAPHNE4NFDI ecosystem as a standard data‑lifecycle platform. This catalogue enabled sustainable data management, visualisation, and seamless integration into AI workflows. Building on this foundation, the team engineered several AI‑based segmentation and reconstruction pipelines. Key innovations were an active‑learning framework for interactive, guided segmentation (Bashir Kazimi et al.) and a scaled‑U‑Net (SUN) architecture (Ivo M. Baltruschat et al.). The SUN was extended to perform nine‑class segmentation (bone, implant, corroded implant, background, and intermediate classes) rather than the conventional three‑class approach. A soft‑voting scheme then combined the nine probability maps into a single, highly accurate segmentation. Comparative visualisations showed that the SUN segmentation outperformed both semi‑automatic workflows and expert manual segmentations, reducing over‑estimation of implant volume and yielding more faithful boundary delineation. The improved accuracy translated directly into more reliable quantitative degradation parameters, such as implant‑resorption rates and bone‑regeneration speeds. Although the report does not provide explicit numerical accuracy metrics, it states that the new methods "significantly surpassed the precision of previously used techniques" and produced results in a markedly shorter time.
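The report does not specify how the soft‑voting step collapses the nine probability maps into a final label map. A minimal NumPy sketch, assuming the intermediate classes are summed into their nearest conventional class before an argmax (the grouping below is hypothetical, not the project's actual mapping):

```python
import numpy as np

# Hypothetical grouping of the nine classes into the three conventional
# ones; the actual class-to-group mapping is not given in the report.
GROUPS = {
    0: [0, 3, 6],  # background + intermediate classes (assumed)
    1: [1, 4, 7],  # bone + intermediate classes (assumed)
    2: [2, 5, 8],  # implant, corroded implant + intermediates (assumed)
}

def soft_vote(probs: np.ndarray) -> np.ndarray:
    """Collapse a (9, H, W) probability volume into a 3-class label map
    by summing grouped class probabilities and taking the argmax."""
    h, w = probs.shape[1:]
    grouped = np.zeros((len(GROUPS), h, w), dtype=probs.dtype)
    for target, members in GROUPS.items():
        grouped[target] = probs[members].sum(axis=0)
    return grouped.argmax(axis=0)

# Toy example: a 2x2 image with nine softmax probabilities per pixel.
rng = np.random.default_rng(0)
logits = rng.normal(size=(9, 2, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
labels = soft_vote(probs)
print(labels.shape)  # (2, 2)
```

Summing probabilities before the argmax (soft voting) retains the network's confidence in intermediate states, rather than discarding it by hard-labelling each class first.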
The project also delivered a user‑friendly dashboard framework that wrapped the full suite of tools (segmentation, registration, multimodal analysis, denoising, and interactive annotation) into a single interface. Built on Jupyter and ML‑exchange, the framework let researchers launch complex workflows without deep technical knowledge. The compute infrastructure achieved 99.97 % availability and 99.6 % utilisation over the project period, and the cooperative resource‑sharing model gave partners access to more compute, on newer GPU hardware, than any single node could have provided.
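The report does not describe the dashboard's internals. As an illustration of the wrapper idea only, a minimal Python sketch of a tool registry that a dashboard control could invoke; every name and entry point here is hypothetical:

```python
from typing import Callable, Dict

# Registry mapping a tool name (as shown in the dashboard) to a callable.
TOOLS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds a workflow to the dashboard registry."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return wrap

@register("segmentation")
def segment(path: str) -> str:
    # Placeholder: a real tool would run the SUN pipeline on the file.
    return f"segmented {path}"

@register("denoising")
def denoise(path: str) -> str:
    # Placeholder: a real tool would run the denoising pipeline.
    return f"denoised {path}"

def launch(name: str, path: str) -> str:
    """What a dashboard button click would ultimately invoke."""
    return TOOLS[name](path)

print(launch("segmentation", "scan_001.tiff"))  # segmented scan_001.tiff
```

The point of the pattern is that adding a new pipeline only requires decorating one function; the interface layer never changes.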
In terms of collaboration, the MDLMA consortium coordinated the development of the data catalogue, the AI pipelines, and the dashboard, while DESY provided the primary synchrotron beamtime and HPC resources. The project’s outputs are openly available, with code and data catalogues accessible through the SciCat instance at https://scicat-mdlma.desy.de/. The consortium’s work has been disseminated in seven peer‑reviewed publications, including contributions to SPIE and Scientific Reports, and has laid the groundwork for future multimodal imaging studies. The total budget of €275,534.00 was largely expended on personnel (€168,063.39), GPU hardware (€48,877.92), and administration (€58,356.92), underscoring the project’s focus on both scientific innovation and sustainable infrastructure development.
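The three listed items account for most, but not all, of the total, consistent with the wording "largely expended". A quick arithmetic check of the reported figures:

```python
# Budget figures as stated in the report (EUR).
personnel = 168_063.39
gpu_hardware = 48_877.92
administration = 58_356.92
total = 275_534.00

listed = personnel + gpu_hardware + administration
print(f"listed items: {listed:,.2f}")        # 275,298.23
print(f"remainder:    {total - listed:,.2f}")  # 235.77
```

The small remainder confirms the three categories cover essentially the entire budget.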
