This funding call invites proposals to organise a technological challenge focused on privacy-preserving human-AI dialogue systems. The aim is to create a comprehensive testing environment where advanced dialogue systems, especially those designed for defence use cases, can be rigorously evaluated. With an indicative budget of EUR 7 000 000, the call supports a single funded proposal, ensuring that the selected project is well-resourced to address the complex and multifaceted challenges inherent in human-AI interaction in security-sensitive domains.
The primary objective of this call is to establish a robust framework for evaluating the performance of dialogue systems in defence settings. These systems, despite their impressive capabilities, are known to exhibit various errors and inconsistencies. To improve their reliability, the challenge will create shared data sets, clearly defined metrics, and evaluation protocols. This approach will not only drive progress by pinpointing areas for improvement but also build trust among system developers, users, and funders by providing objective, data-driven insights into system performance.
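For illustration, a "clearly defined metric" under such a protocol can be as simple as an exact-match rate over an annotated test set. The sketch below is a minimal, hypothetical example: the record fields and the metric itself are assumptions, since the call does not prescribe specific metrics.

```python
# Illustrative only: a minimal scoring step for a shared evaluation
# protocol. The record fields (dialogue_id, system_response, reference)
# and the exact-match metric are assumptions, not specified by the call.
from dataclasses import dataclass

@dataclass
class EvalRecord:
    dialogue_id: str
    system_response: str
    reference: str  # gold answer from the annotated dataset

def exact_match_rate(records: list[EvalRecord]) -> float:
    """Fraction of responses matching the reference after normalisation."""
    if not records:
        return 0.0
    hits = sum(
        r.system_response.strip().lower() == r.reference.strip().lower()
        for r in records
    )
    return hits / len(records)

if __name__ == "__main__":
    sample = [
        EvalRecord("d1", "Unit Alpha is at grid 42B.", "unit alpha is at grid 42b."),
        EvalRecord("d2", "Unknown.", "Three vehicles were reported."),
    ]
    print(f"Exact-match rate: {exact_match_rate(sample):.2f}")  # 0.50
```

In practice a challenge of this kind would combine several such metrics with human judgements; the value of fixing them in a protocol is that every participating team is scored the same way.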
Key activities within the scope of this call include setting up the necessary infrastructure for testing and evaluation. This involves the collection, annotation, and curation of data that is representative of actual defence use cases, ensuring that the evaluation environment reflects real-world conditions. Detailed evaluation plans will be developed and refined in close consultation with both the participating teams and representative defence users. These users will play an active role in testing the demonstrators, providing critical feedback that will help shape the final evaluation metrics and protocols.
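As a purely illustrative sketch of what a curated, annotated record might look like, the following shows one possible layout. All field names, use-case tags, and annotation labels are assumptions; the actual annotation guidelines will be defined within the challenge.

```python
# Hypothetical record layout for a curated, annotated dialogue corpus.
# Every field name and label below is an illustrative assumption.
import json

record = {
    "dialogue_id": "def-usecase-0042",
    "use_case": "logistics_support",  # defence use-case tag (assumed)
    "turns": [
        {"speaker": "operator", "text": "Status of convoy Bravo?"},
        {"speaker": "system", "text": "Convoy Bravo reached checkpoint 3 at 06:40."},
    ],
    "annotations": {
        "task_success": True,     # did the system resolve the request?
        "factual_error": False,   # does the response contradict the scenario?
        "annotator_id": "ann-07",
    },
    "provenance": {"source": "synthetic", "curation_pass": 2},
}

print(json.dumps(record, indent=2))
```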
The technological challenge is designed to integrate knowledge from various stakeholders. Proposals are expected to include a thorough plan that covers the establishment of data annotation guidelines, quality assessment processes, and the distribution of curated databases. Additionally, the challenge will involve organising experimental test campaigns to objectively measure system performance according to predefined protocols. The challenge further mandates the organisation of debriefing workshops, where insights and results from the test campaigns will be discussed, and potential improvements will be identified.
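One standard technique that a quality-assessment process of this kind might employ is inter-annotator agreement, for example Cohen's kappa over duplicate annotations. The sketch below assumes binary labels from two annotators; its use here is an assumption about the challenge design, not a requirement of the call.

```python
# Cohen's kappa: agreement between two annotators corrected for chance.
# A standard quality-assessment measure; its use here is an assumption.
from collections import Counter

def cohens_kappa(labels_a: list[int], labels_b: list[int]) -> float:
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label alike.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (freq_a[k] / n) * (freq_b[k] / n) for k in set(freq_a) | set(freq_b)
    )
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Example: two annotators label ten dialogue turns for task success.
a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.58
```

Values well below 1.0 would flag ambiguous guidelines, prompting revision before the curated databases are distributed.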
Beyond its technical focus, the call seeks to promote community building at the European defence level by fostering collaboration between research teams, defence users, and industry players. The expected outcomes include the standardisation of testing procedures for dialogue systems, enhanced clarity on system performance, and the development of trustworthy AI systems capable of supporting operational decision-making. Moreover, by creating a centralised testing environment and reliable metrics, the challenge aims to stimulate further innovation and continuous improvement in the field of human-AI dialogue systems.
Overall, this call represents a significant opportunity to advance the state of the art in dialogue systems for defence applications. It is geared towards creating solutions that not only meet the stringent requirements of managing classified information but also justify their responses through transparent and robust evaluation methods.
Opening: 18.02.2025
Deadline(s): 16.10.2025
Expected Outcome
- Standardisation of testing for dialogue systems.
- Enhanced clarity on system performance.
- Trusted human-AI dialogue systems for defence.
- Creation of annotated and curated defence datasets.
- Improved interoperability and resilience in defence technologies.
- Community building at the European defence level.
- Reliable evaluation metrics and protocols.
- Evidence-based feedback from representative users.
- Benchmarking data for further system development.
- Strengthened trust in AI decision-making.
Scope
- Organise a technological challenge on human-AI dialogue systems.
- Set up testing infrastructure for evaluation.
- Collection, annotation, and distribution of data.
- Elaboration of evaluation plans and metrics.
- Coordination with participating teams and stakeholders.
- Organisation of debriefing workshops.
- Integration of defence use case data.
- Engagement with representative defence users.
- Measurement of system performance.
- Evaluation of ability to manage classified information.