This funding call focuses on advancing methodologies for assessing the capabilities and risks of General Purpose AI (GPAI) models, including multimodal AI systems. With the growing sophistication of GPAI models such as large language models, the need for robust evaluation frameworks is pressing. Current assessment methods fall short of capturing the comprehensive capabilities and ethical implications of these technologies. This call seeks innovative research to develop tools, techniques, and benchmarks that rigorously evaluate GPAI models. The aim is to ensure their reliability, fairness, ethical behavior, and alignment with societal values.
Proposals should address one or more areas, including forecasting emergent capabilities, assessing impactful or risky capabilities, and developing explainable and interpretable assessment techniques. Outputs will include benchmark tests and methodologies for policymakers, AI providers, and other stakeholders. Collaborative, interdisciplinary teams are encouraged to ensure real-world applicability.
The call emphasizes societal impact through the integration of Social Sciences and Humanities (SSH) expertise. Open Science practices are required, with research outcomes shared via the AI-on-demand platform and other resources. Projects should demonstrate progress against clear KPIs and validate their methods through participation in international contests.
Aligned with the co-programmed European Partnership on AI, Data, and Robotics (ADRA), the call encourages synergies with related initiatives and projects, fostering a cohesive European AI, Data, and Robotics ecosystem. By addressing these challenges, the project aims to support the implementation of the AI Act and bolster Europe’s position in AI innovation and governance.
Opening: 10 Jun 2025
Deadline(s): 13 Nov 2025
Data provided by Sciencebusiness.net
This funding opportunity represents a pre-agreed draft that has not yet been officially approved by the European Commission. The final, approved version is expected to be published in the first quarter of 2025. This draft is provided for informational purposes and may be used for preliminary consortium building and the development of project ideas, but it is offered without any guarantees or warranties.
Expected Outcome
- New methodologies to evaluate GPAI models and systems.
- Benchmarks to assess GPAI capabilities and risks.
- Support for the AI Act through evaluations by the AI Office.
- Tools and techniques to address real-world AI capabilities and risks.
- Societal impact enhanced through SSH involvement.
Scope
- Develop new assessment frameworks, methodologies, and tools for GPAI.
- Address capabilities and risks of multimodal and large language models.
- Forecast and evaluate emergent capabilities, including beneficial and harmful uses.
- Focus on high-impact or risky domains such as cybersecurity and CBRN (chemical, biological, radiological, and nuclear) hazards.
- Emphasize interpretability and explainability in AI systems.
- Create benchmark tests for GPAI evaluation.
- Ensure interdisciplinary collaboration, including SSH contributions.
- Facilitate Open Science practices and share results on relevant platforms.