This funding call addresses the challenge of integrating next-generation Artificial Intelligence (AI) technologies into a seamless, secure, and efficient cognitive computing continuum spanning Cloud, Edge, and IoT environments. This continuum is intended to enable distributed AI training and inference, overcoming the resource and cost limitations of current high-performance AI solutions.
Proposals should focus on developing novel tools and technologies for orchestrating AI workloads across heterogeneous infrastructures, optimizing key metrics like energy efficiency, latency, and model accuracy. Particular attention is given to enabling decentralized and federated computing through advanced scheduling, orchestration, and data compression mechanisms while maintaining robust data security and privacy.
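To make the orchestration goal concrete, the following is a minimal sketch of latency- and energy-aware workload placement across the continuum. It is purely illustrative: the `Node` type, the `place` function, and the cost weights are assumptions for this example, not part of the call text or any specific orchestration framework.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float   # round-trip latency to the data source
    energy_mj: float    # estimated energy per inference, in millijoules

def place(nodes, w_latency=0.5, w_energy=0.5):
    """Pick the node minimizing a weighted latency/energy cost."""
    def cost(n):
        return w_latency * n.latency_ms + w_energy * n.energy_mj
    return min(nodes, key=cost)

# Hypothetical tiers of the cloud-edge-IoT continuum:
nodes = [
    Node("cloud", latency_ms=80.0, energy_mj=5.0),
    Node("edge", latency_ms=10.0, energy_mj=20.0),
    Node("iot", latency_ms=2.0, energy_mj=60.0),
]
print(place(nodes).name)  # edge wins with equal weights
```

A real scheduler would optimize many more metrics (model accuracy, bandwidth, privacy constraints), but even this toy cost function shows how the latency/energy trade-off steers workloads between tiers.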
Projects should also improve energy and hardware efficiency by using innovative accelerators and heterogeneous processor architectures, reducing the environmental impact of AI workloads. Emphasis is placed on openness and strategic autonomy to strengthen Europe's AI and data ecosystem and enhance international collaboration.
Targeted outcomes include innovative cloud-edge integration solutions, enhanced interoperability, and strategic industrial cooperation to support hyper-distributed AI applications. Proposals must demonstrate their solutions’ applicability across diverse domains and share results with the European R&D community.
Opening: 10 Jun 2025
Deadline(s): 13 Nov 2025
Expected Outcome
- Novel AI-enabled Cloud and Edge management solutions tailored to AI workload processing.
- Strategic industrial cooperation across the cognitive cloud-edge-IoT continuum.
- Seamless and trustworthy integration across computing and data environments.
- Enhanced openness and strategic autonomy in the AI and data economy.
- Improved international collaboration with guaranteed interoperability.
Scope
- Develop seamless and secure integration of Cloud, Edge, and IoT environments.
- Address computing infrastructure requirements for generative AI and large language models.
- Enable distributed AI training and inference processes across the continuum.
- Optimize energy efficiency and resource utilization.
- Incorporate cutting-edge hardware and virtualization techniques.
- Ensure security, privacy, and trust in AI applications.
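The distributed-training item in the scope above is often realized via federated learning, where devices train locally and only share model updates. A minimal sketch of federated averaging on a one-parameter least-squares model follows; the names `local_update` and `fed_avg`, the learning rate, and the synthetic data are all assumptions made for illustration.

```python
import random

def local_update(w, data, lr=0.01):
    """One step of local gradient descent on a 1-D least-squares model y = w*x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(global_w, clients, rounds=50):
    """Federated averaging: each round, clients train locally, the server averages."""
    for _ in range(rounds):
        updates = [local_update(global_w, data) for data in clients]
        global_w = sum(updates) / len(updates)
    return global_w

# Each device holds noisy samples of y = 3x; raw data never leaves the device.
random.seed(0)
clients = [[(x, 3 * x + random.gauss(0, 0.1)) for x in range(1, 6)]
           for _ in range(4)]
w = fed_avg(0.0, clients)
print(w)  # converges close to 3.0
```

Only the scalar updates cross the network, which is the privacy-preserving property the call's "decentralized and federated computing" objective targets; production systems add secure aggregation and compression on top of this basic loop.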