The project produced a fully integrated demonstrator that simulates a complete transcatheter aortic valve implantation (TAVI) procedure and embeds all software components developed by the five consortium partners. Central to the demonstrator is an algorithm that optimises the C‑arm projection direction before the intervention. By analysing 4‑D shape models of the aorta and the positions of the aortic valve leaflets, the algorithm computes the fluoroscopic angulation at which the valve plane is viewed edge‑on, so that the three leaflet attachment points project onto a single line. A ray‑tracing routine then generates a digitally reconstructed radiograph (DRR) from the chosen pose, allowing the operator to preview the expected fluoroscopic view. This pre‑planning step reduces the number of live fluoroscopy acquisitions, thereby lowering both procedure time and patient radiation dose.
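To illustrate the geometry involved, the following minimal sketch computes such an edge‑on projection from three leaflet hinge points. The coordinates, the C‑arm angle convention, and the search ranges are hypothetical placeholders; the project's actual algorithm and parameterisation are not detailed here.

```python
import numpy as np

# Hypothetical leaflet hinge points (one per aortic cusp), in patient
# coordinates (mm); in the demonstrator these would come from the 4-D
# shape model of the aortic root.
hinge_points = np.array([
    [12.0, -3.0, 105.0],
    [ 4.0,  9.0, 103.0],
    [-5.0, -6.0, 104.0],
])

# Normal of the valve (annulus) plane through the three hinge points.
normal = np.cross(hinge_points[1] - hinge_points[0],
                  hinge_points[2] - hinge_points[0])
normal /= np.linalg.norm(normal)

def beam_direction(lao_deg: float, cran_deg: float) -> np.ndarray:
    """X-ray beam direction for a C-arm pose.

    Assumed convention: patient axes x = left, y = posterior, z = head;
    the neutral pose (0, 0) sends the beam along +y.
    """
    a, b = np.radians(lao_deg), np.radians(cran_deg)
    return np.array([np.sin(a) * np.cos(b),
                     np.cos(a) * np.cos(b),
                     np.sin(b)])

# The valve plane is seen edge-on when the beam lies in the plane, i.e.
# when it is perpendicular to the plane normal. Scan a clinically
# reachable range and keep the pose that best satisfies this.
best = min(((a, b) for a in range(-40, 41) for b in range(-25, 26)),
           key=lambda ab: abs(np.dot(beam_direction(*ab), normal)))
print(f"suggested projection: LAO/RAO {best[0]} deg, CRAN/CAUD {best[1]} deg")
```

The set of edge‑on poses forms a curve in the angulation space (the optimal‑projection curve familiar from TAVI planning); the grid search above simply picks one point near it.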
To compensate for respiratory motion, a breathing‑phase classification module was added. Fluoroscopic images are segmented to determine the patient's current breathing phase, and the system applies a corresponding motion‑correction offset to the navigation display, improving the localisation accuracy of the catheter tip and the valve prosthesis during the procedure. In parallel, a deep‑learning instrument‑detection module was trained on fluoroscopic images annotated with the positions of catheters and guidewires. A U‑Net architecture with residual connections outperformed the other networks tested and provides real‑time segmentation of instruments in the live feed. Combining motion compensation with instrument localisation gives the operator a three‑dimensional view of the catheter relative to the patient‑specific anatomy, rendered by the 3‑D navigation system developed by the University of Ulm.
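For readers unfamiliar with the architecture, the sketch below shows a deliberately small U‑Net with residual blocks for two‑class (background vs. instrument) segmentation, written in PyTorch. Channel counts, depth, and normalisation choices are illustrative assumptions; the network actually trained in the project is not specified here.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions with a residual (skip) connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the shortcut matches the channel count.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + self.skip(x))

class ResUNet(nn.Module):
    """Small encoder-decoder with residual blocks and U-Net skip connections."""
    def __init__(self, in_ch=1, n_classes=2, base=32):
        super().__init__()
        self.enc1 = ResBlock(in_ch, base)
        self.enc2 = ResBlock(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = ResBlock(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ResBlock(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ResBlock(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # per-pixel logits: background vs. instrument

# One grayscale 256x256 fluoroscopy frame -> per-pixel class logits.
logits = ResUNet()(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

The residual shortcuts ease the optimisation of the deeper encoder layers, which is consistent with the reported finding that the residual variant outperformed the plain networks tested.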
The data‑handling backbone of the project is the mTRIAL platform, created by mediri GmbH. It anonymises or pseudonymises patient data and supports the secure upload of additional files such as landmark annotations and shape models. The platform handles the exchange of data among partners and feeds the demonstrator with the necessary inputs. All computationally intensive tasks, including the training of the deep‑learning models and the generation of DRRs, run on a dedicated GPU server that the project team can access remotely.
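As a rough illustration of pseudonymisation at the data level (not mTRIAL's actual interface, which is not documented here), the sketch below replaces identifying DICOM attributes with a keyed hash using pydicom. The secret key, the attribute list, and the function name are assumptions made for the example.

```python
import hashlib
import pydicom

# Hypothetical site-specific secret: a keyed hash makes the patient-ID
# mapping reproducible for the data owner but opaque to the other partners.
PROJECT_SECRET = b"replace-with-site-specific-secret"

def pseudonymise(path_in: str, path_out: str) -> str:
    """Replace identifying attributes in one DICOM file and save a copy."""
    ds = pydicom.dcmread(path_in)
    pseudo_id = hashlib.sha256(
        PROJECT_SECRET + str(ds.PatientID).encode()
    ).hexdigest()[:16]
    ds.PatientID = pseudo_id
    ds.PatientName = pseudo_id
    # Remove directly identifying attributes if present.
    for keyword in ("PatientBirthDate", "PatientAddress", "OtherPatientIDs"):
        if keyword in ds:
            delattr(ds, keyword)
    ds.save_as(path_out)
    return pseudo_id

# Example: pseudonymise("study.dcm", "study_pseudo.dcm")
```

A production pipeline would additionally follow the DICOM PS3.15 attribute‑confidentiality profiles and keep the key mapping under the data owner's control; the sketch only shows the principle.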
The consortium consisted of five partners: mediri GmbH (project lead; responsible for the optimisation algorithm, breathing‑phase module, instrument detection, and the mTRIAL platform), 1000shapes GmbH (creation of patient‑specific 3‑D and 4‑D vascular and organ models), Universitätsklinikum Ulm (clinical data provision, image‑processing tasks, and documentation of the intervention), Fraunhofer MEVIS (integration of multimodal data, segmentation, and registration), and the University of Ulm (development of the 3‑D navigation visualisation). The project ran from 1 January 2019 to 31 March 2023 and was funded by the German Federal Ministry of Education and Research (grant 13GW0372A). A kick‑off meeting took place in Ulm on 12–13 November 2019, and despite the travel restrictions imposed by the COVID‑19 pandemic, the team maintained close communication through monthly video conferences. The demonstrator was presented at several internal and external events, showcasing the reduced fluoroscopy usage, improved navigation accuracy, and enhanced documentation capabilities that together promise lower complication rates, shorter hospital stays, and broader access to complex interventional procedures.
