Expected Outcome:
Proposals are expected to contribute to one or more of the following:
- Robust AI models and systems capable of resisting different classes of adversarial manipulation;
- Innovative defence mechanisms for AI models and systems against new attack families;
- Methodologies for detecting and mitigating attacks, such as data poisoning, backdoor exploitation and misclassification;
- AI systems leveraging privacy-enhancing technologies that maintain data confidentiality and regulatory compliance, enabling trusted in-house AI deployments (e.g., for governments and enterprises).
Scope:
The increasing reliance on AI in cybersecurity, critical infrastructure, and decision-making processes raises concerns about the security and robustness of AI systems. As AI systems become more prevalent, they are increasingly targeted by adversarial attacks that manipulate inputs, compromise training data, or introduce hidden vulnerabilities. This topic aims to strengthen the resilience of AI systems and algorithms against threats such as adversarial attacks, backdoor injections, and data poisoning. Proposals should develop real-time anomaly detection, mitigation techniques against adversarial attacks, and robust federated learning techniques, in synergy with leading efforts on AI transparency and in compliance with the AI Act. The topic is expected to:
- Develop robust AI models resistant to adversarial attacks, exploring techniques to harden AI models and systems against adversarial perturbations, such as adversarial training, robust optimisation, and defence mechanisms that enhance the trustworthiness of AI.
- Improve detection of manipulated or poisoned training data, advancing methodologies to identify and mitigate compromised datasets through techniques such as anomaly detection, provenance tracking, and automated data validation mechanisms.
- Address the concept of Private AI by developing mechanisms that enable AI models to be trained, deployed, and operated in privacy-preserving environments, particularly for sensitive use cases such as government and enterprise settings. This includes ensuring that AI computations and data remain within trusted execution boundaries (e.g., on-premise or regulated cloud environments), and leveraging existing and emerging privacy-enhancing techniques such as federated learning, secure aggregation, computing on encrypted data, quantum-safe homomorphic encryption, and secure inference in deep learning to protect personal and other sensitive data throughout the AI lifecycle.
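As a toy illustration of the adversarial-perturbation threat the first bullet addresses, the sketch below crafts FGSM-style perturbations against a hand-rolled logistic-regression classifier. It is a minimal sketch under stated assumptions, not anything prescribed by the call: the data, the model, the step size, and the perturbation budget `eps` are all hypothetical choices made for this example.

```python
# Hypothetical sketch: FGSM-style adversarial perturbation of a
# NumPy-only logistic-regression classifier. All data and parameters
# are illustrative, not taken from the call text.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, linearly separable data with labels y in {-1, +1}.
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = np.sign(X @ w_true)

def loss(w, X, y):
    """Mean logistic loss for labels in {-1, +1}."""
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

# Train the classifier by plain gradient descent.
w = np.zeros(5)
for _ in range(500):
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))       # sigmoid(-y * z)
    grad_w = -(X * (y * s)[:, None]).mean(axis=0)
    w -= 0.5 * grad_w

# FGSM: move each input in the sign direction of the input gradient,
# bounded by an L-infinity budget eps.
eps = 0.5
s = 1.0 / (1.0 + np.exp(y * (X @ w)))
grad_X = -(y * s)[:, None] * w                   # dL/dX, row-wise
X_adv = X + eps * np.sign(grad_X)

clean_loss = loss(w, X, y)
adv_loss = loss(w, X_adv, y)
print(f"clean loss: {clean_loss:.3f}, adversarial loss: {adv_loss:.3f}")
```

Adversarial training in the sense of the bullet above would then mix such perturbed inputs back into the training set, so the model learns to classify them correctly rather than merely the clean data.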
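For the second bullet, a robust distance-based outlier score is one of the simplest anomaly-detection ingredients for spotting poisoned training points. The following sketch is again purely illustrative: the clean/poisoned split, the cluster positions, and the score threshold are hypothetical, and real poisoning attacks are typically far subtler than this toy separation.

```python
# Hypothetical sketch: flagging suspicious training points with a
# median/MAD-based robust z-score, a toy stand-in for the
# poisoned-data detection methodologies the call asks for.
import numpy as np

rng = np.random.default_rng(1)

# 100 clean points around the origin, plus 5 "poisoned" points far away
# (indices 100-104 after stacking).
clean = rng.normal(0.0, 1.0, size=(100, 3))
poison = rng.normal(8.0, 0.5, size=(5, 3))
X = np.vstack([clean, poison])

# Median and MAD resist the influence of the poisoned points themselves,
# unlike the mean and standard deviation.
center = np.median(X, axis=0)
dist = np.linalg.norm(X - center, axis=1)
mad = np.median(np.abs(dist - np.median(dist)))
score = (dist - np.median(dist)) / (1.4826 * mad)  # ~N(0,1) on clean data

flagged = np.where(score > 6.0)[0]                 # conservative threshold
print("flagged indices:", flagged)
```

A provenance-tracking or automated-validation pipeline, as the bullet suggests, would combine such statistical scores with metadata about where each training record came from before deciding what to quarantine.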

