The objective evaluation of artificial intelligence (AI) technologies, such as those used for satellite image analysis, requires a specific organisation in which systems are tested, under common protocols, on datasets that are new to them (blind testing) but representative of the tasks under study. This scheme is commonly referred to as a “technological challenge”. One objective of the call is to organise a technological challenge driving research toward enhanced satellite image analysis for defence applications, in particular the combined analysis of optical and radar images. While a few challenges on satellite image analysis are organised in other contexts, there is a need for evaluations focusing on defence use cases, and for large annotated datasets enabling accurate performance measurements.
Scope:
The proposals must address the organisation of a technological challenge on multi-source satellite image analysis based on the preliminary evaluation plan provided as part of the call document (see Annex 4b). This includes the collection, annotation and distribution of data, and the writing of the evaluation plans. The proposals must also address the possibility of involving representative defence users to test the demonstrators produced by the participating teams and to provide feedback.
The following use cases should be considered when elaborating the evaluation plans:
- Target analysis: vehicle functional status recognition, target identification and early target detection, classification and recognition.
- Monitoring and tracking: change detection, camps, oil tank volume estimation, non-linear tracking of targets, routes.
- Searching: disturbed terrain detection, vehicle trails or moving targets detection, camouflage and material detection, minefield detection.
- Mapping: terrain modelling (ground, water, roads, constructions), trafficability analysis and obstacle detection, coastal bathymetry for shallow waters.
- Damage assessment: battle damage assessment, detection of climate/environment disaster-affected areas impacting defence operations.
Both optical and radar images must be considered. They may include the following:
Optical
- Standard (i.e., in the visible part of the EM spectrum) panchromatic and/or multispectral images with various spatial resolutions below or above 1 metre depending on the selected use cases, based on commercial, dual-use or defence systems.
- Hyperspectral images with various spatial resolutions below 50 m, based on scientific, commercial, dual-use or defence systems.
- Infrared images.
This may include multi-view stereo or video modes where available.
Radar
- X-band SAR images (amplitude and phase, different polarisations) with various resolutions below or above 1 metre depending on the selected use cases, based on commercial, dual-use or defence systems.
- C- or S-band SAR images.
- P- or L-band SAR images.
The use of optical and radar aerial imagery may also be considered, in particular to test systems on high-resolution imagery for certain image types (e.g. hyperspectral or infrared) or to simulate higher-resolution satellite images.
Metadata that would normally be used in operational scenarios should be provided.
The actual types of images and metadata to be used for the challenge should be described in the proposals.
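Purely as an illustration of the kind of per-acquisition metadata that could accompany the images, a hypothetical record is sketched below; every field name and value is an assumption, and the actual metadata set is precisely what the proposals are expected to specify.

```python
# Hypothetical per-acquisition metadata record; all field names and values
# are illustrative assumptions, not a prescribed schema.
acquisition_metadata = {
    "sensor_type": "SAR",                 # e.g. "optical", "SAR"
    "band": "X",                          # e.g. "panchromatic", "multispectral", "X", "L"
    "polarisations": ["VV", "VH"],        # SAR acquisitions only
    "ground_sample_distance_m": 0.5,      # nominal spatial resolution in metres
    "acquisition_time_utc": "2024-06-01T10:32:00Z",
    "incidence_angle_deg": 34.2,
    "orbit_direction": "ascending",
    "geolocation_crs": "EPSG:4326",
    "cloud_cover_percent": None,          # optical acquisitions only
}
```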
Types of activities
The following table lists the types of activities which are eligible for this topic, and whether they are mandatory or optional (see Article 10(3) EDF Regulation):
| Types of activities (Article 10(3) EDF Regulation) | Eligible? |
| --- | --- |
| (a) Activities that aim to create, underpin and improve knowledge, products and technologies, including disruptive technologies, which can achieve significant effects in the area of defence (generating knowledge) | Yes (optional) |
| (b) Activities that aim to increase interoperability and resilience, including secured production and exchange of data, to master critical defence technologies, to strengthen the security of supply or to enable the effective exploitation of results for defence products and technologies (integrating knowledge) | Yes (mandatory) |
| (c) Studies, such as feasibility studies to explore the feasibility of new or upgraded products, technologies, processes, services and solutions | Yes (optional) |
| (d) Design of a defence product, tangible or intangible component or technology as well as the definition of the technical specifications on which such a design has been developed, including any partial test for risk reduction in an industrial or representative environment | Yes (optional) |
| (e) System prototyping of a defence product, tangible or intangible component or technology | No |
| (f) Testing of a defence product, tangible or intangible component or technology | No |
| (g) Qualification of a defence product, tangible or intangible component or technology | No |
| (h) Certification of a defence product, tangible or intangible component or technology | No |
| (i) Development of technologies or assets increasing efficiency across the life cycle of defence products and technologies | No |
Accordingly, the proposals must cover at least the following tasks as part of mandatory activities:
- Integrating knowledge:
  - Setting up of the infrastructures for testing satellite image analysis systems in the framework of the technological challenge.
  - Production of data annotation guidelines, collection and annotation of data, quality assessment, distribution and curation of databases.
  - Organisation of the evaluation campaigns, and in particular:
    - Coordination of the exchanges with other stakeholders on the data annotation guidelines and evaluation plans, and elaboration of these documents;
    - Management of the experimental test campaigns, including the objective measurement of the performance of the technological modules submitted to the tests by the participating teams, according to the protocols and metrics described in the evaluation plans (an illustrative scoring sketch follows this list);
    - Organisation of the debriefing workshops.
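For illustration only, the following minimal sketch shows how such objective measurements could be scripted for an object-detection style use case, assuming axis-aligned bounding-box outputs and an IoU-matched precision/recall/F1 metric; the actual metrics, matching rules and thresholds are those to be fixed in the detailed evaluation plans derived from Annex 4b.

```python
# Illustrative scoring sketch only: IoU-matched precision/recall/F1 for an
# object-detection style use case. Box format, class handling and the 0.5
# IoU threshold are assumptions; the authoritative metrics and protocols are
# those defined in the detailed evaluation plans (see Annex 4b).

from dataclasses import dataclass


@dataclass
class Box:
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    label: str


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a.x_max, b.x_max) - max(a.x_min, b.x_min))
    iy = max(0.0, min(a.y_max, b.y_max) - max(a.y_min, b.y_min))
    inter = ix * iy
    union = ((a.x_max - a.x_min) * (a.y_max - a.y_min)
             + (b.x_max - b.x_min) * (b.y_max - b.y_min) - inter)
    return inter / union if union > 0 else 0.0


def score(predictions: list[Box], references: list[Box],
          iou_threshold: float = 0.5) -> dict[str, float]:
    """Greedily match predictions to reference annotations of the same class."""
    matched: set[int] = set()
    true_positives = 0
    for pred in predictions:
        best_idx, best_iou = None, iou_threshold
        for idx, ref in enumerate(references):
            if idx in matched or ref.label != pred.label:
                continue
            overlap = iou(pred, ref)
            if overlap >= best_iou:
                best_idx, best_iou = idx, overlap
        if best_idx is not None:
            matched.add(best_idx)
            true_positives += 1
    precision = true_positives / len(predictions) if predictions else 0.0
    recall = true_positives / len(references) if references else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}
```

Distributing such a reference scorer together with the evaluation plan would allow participating teams to reproduce the official measurements on development data before the test campaigns.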
The proposals should include clear descriptions of the proposed criteria to assess work package completion. Criteria should include the production of detailed evaluation plans agreed upon by all stakeholders, the production of the annotated databases needed for the evaluations, the production of measurements for all systems submitted to the tests by the participating teams following these plans, and the organisation of the needed events.
Functional requirements
The proposed solutions should enable the measurement of the performances of satellite image analysis systems according to detailed evaluation plans based on the preliminary evaluation plan provided as part of this call document (see Annex 4b). Key aspects of the foreseen detailed evaluation plans and associated data management should be described in the proposals.
Proposals should in particular describe:
- the detailed use cases to be addressed and the nature and size of image data to collect;
- the nature and volume of data annotation to be produced, the order of magnitude of the number of different semantic classes, object types and characteristics considered for annotations, and the granularity of these classes with examples;
- a framework for trusted sharing of data during the challenge and beyond;
- a detailed plan of the test campaigns and an overall timeline/Gantt chart of the challenge;
- the evaluation procedures (rules and tools to implement the metrics) and significance tests to be performed on measurements.
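As an illustration of the kind of significance test that could be specified, the sketch below applies a paired bootstrap over per-image scores to estimate how often an observed difference between two systems persists under resampling of the test set; the choice of test, the score granularity and the number of resamples are assumptions to be settled in the evaluation plans.

```python
# Illustrative paired bootstrap over per-image scores. The actual significance
# tests are to be defined in the evaluation plans; per-image scores for two
# systems (same images, same order) are assumed inputs.

import random


def paired_bootstrap(scores_a: list[float], scores_b: list[float],
                     n_resamples: int = 10_000, seed: int = 0) -> float:
    """Fraction of resampled test sets on which system B outscores system A.

    Values close to 0.0 or 1.0 suggest the observed difference is unlikely to
    be an artefact of the particular choice of test images.
    """
    assert len(scores_a) == len(scores_b) and scores_a, "paired, non-empty scores"
    rng = random.Random(seed)
    n = len(scores_a)
    b_wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        mean_a = sum(scores_a[i] for i in idx) / n
        mean_b = sum(scores_b[i] for i in idx) / n
        if mean_b > mean_a:
            b_wins += 1
    return b_wins / n_resamples
```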
A user board consisting of representative defence users should be set up and involved in the preparation of the evaluation plans and of the data. Data should be representative of use cases of interest for defence. Proposals should describe the foreseen efforts from users to test demonstrators and provide feedback.
Data may be annotated in a semi-automatic way. Agreements may be sought with participants to use automatic tools developed by them. All annotations should be manually checked. To assess the relevance and accuracy of the data annotations, at least part of the data should be annotated by two independent annotators. The two sets of annotations should be compared to each other using the same metrics as for the evaluation of system outputs. An analysis of this inter-annotator agreement should be presented during the evaluation campaign workshops.
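As a sketch of how the inter-annotator comparison could reuse the system-scoring machinery, the function below treats one annotator's labels as the reference and the other's as a system under test; the `score_fn` argument stands for whatever official metric implementation the evaluation plans define (for instance the hypothetical scorer sketched earlier).

```python
# Illustrative inter-annotator agreement check that reuses the official metric
# implementation (passed in as `score_fn`, e.g. the hypothetical scorer
# sketched earlier). One annotator is treated as the reference and the other
# as if it were a system under test.

def inter_annotator_agreement(annotations_1, annotations_2, score_fn):
    """Compare two independent annotation sets with the system-evaluation metric."""
    forward = score_fn(annotations_1, annotations_2)
    backward = score_fn(annotations_2, annotations_1)
    # Greedy matching is not guaranteed to be symmetric, so both directions are
    # reported; the summary value could be presented at the campaign workshops.
    return {
        "f1_1_vs_2": forward["f1"],
        "f1_2_vs_1": backward["f1"],
        "f1_mean": (forward["f1"] + backward["f1"]) / 2,
    }
```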
During the challenge, a detailed evaluation plan should be prepared for each evaluation campaign. Drafts of these detailed evaluation plans should be submitted to the participating teams for discussion, early enough to take feedback into account and to leave time for system development before the actual test campaigns. Any evolution of the evaluation plans should take into account several factors: technical feasibility and cost, scientific relevance of the measurements, and representativeness of the metrics and protocols with respect to military needs. The justification of any change that does not reach consensus should be documented.
The user board and the participating teams should be involved in the steering of the challenge. Proposals should include a clear description of the foreseen governance and decision-making processes.
Expected Impact:
The outcome should contribute to:
- Collaboration, knowledge sharing, and new partnerships that drive collective progress in AI solution development for defence imagery analysis at the EU level;
- The development of policies and potential standards for AI in defence imagery analysis, enhancing interoperability across EU Member States;
- An enhanced cost-effectiveness of systems, optimising resource utilisation and reducing operational expenses.