The project, funded by the German Federal Ministry of Education and Research (BMBF) under grant 16DHB3024, ran from 1 November 2019 to 31 December 2022 and was led by Prof. Dr. Peter Loos at the Institute for Business Informatics (IWi). Its aim was to raise the quality of teaching in courses that involve graphical modeling, such as software engineering and business process modeling, by introducing competency-oriented assessment formats. The consortium comprised the German Research Center for Artificial Intelligence (DFKI), the University of Paderborn (UPB), the Karlsruhe Institute of Technology (KIT), the University of Duisburg-Essen (UDE), and the Gesellschaft für Informatik e.V. KIT coordinated the partnership, while UDE and KIT contributed expertise on UML, BPMN, and Petri nets, and UPB supplied didactic materials and evaluation expertise.
Technically, the effort produced a modular e-assessment platform that supports both low-level "model understanding" tasks and higher-order "model creation" tasks. The platform integrates the consortium partners' existing automated evaluation tools into a unified web application. In the first work package, a competency model for graphical modeling was developed, together with a task catalogue and assessment rubrics. The catalogue was derived from a literature review of four textbooks and comprises five task categories: explaining a model (14 % of the catalogued tasks), identifying modeling-language errors, evaluating model quality, creating a model, and refining a model. These categories were mapped to the competency model and used to generate assessment schemes.
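Such a mapping from task categories to competencies could be represented, for instance, as a simple lookup structure. The following is an illustrative sketch only: the competency labels are assumptions inspired by the description above, not the project's actual schema.

```python
# Illustrative sketch: category and competency names are hypothetical,
# not the KEA-Mod project's actual competency model.
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskCategory:
    name: str
    competencies: tuple  # competency labels this category assesses

CATALOGUE = {
    "model_explanation":    TaskCategory("model_explanation", ("reading", "interpretation")),
    "error_identification": TaskCategory("error_identification", ("syntax_knowledge",)),
    "quality_evaluation":   TaskCategory("quality_evaluation", ("pragmatics", "judgement")),
    "model_creation":       TaskCategory("model_creation", ("construction",)),
    "model_refinement":     TaskCategory("model_refinement", ("construction", "judgement")),
}

def competencies_for(category: str) -> tuple:
    """Look up which competencies a task category is mapped to."""
    return CATALOGUE[category].competencies
```

From a structure like this, assessment schemes can be generated by selecting tasks whose categories jointly cover the competencies targeted by a given course.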
The technical implementation involved building the platform infrastructure (work package 2.1) and embedding the automated EPC (event-driven process chain) evaluation services (work package 2.2). Prototype versions were deployed in pilot courses at DFKI and at partner institutions (work package 3). The evaluation of the pilots, carried out by UPB, focused on media-pedagogical design and usability and informed iterative refinements of the platform. The platform's modular design and open-source licensing enable adaptation to other modeling languages and institutional contexts, supporting both formative exercises and summative examinations.
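A modular design of this kind is commonly realized as a plug-in registry keyed by modeling language. The sketch below assumes a minimal evaluator interface; the class and function names are hypothetical and do not reflect the platform's actual API.

```python
# Minimal sketch of a plug-in style evaluator registry; all names are
# hypothetical, not the platform's real interface.
from abc import ABC, abstractmethod

class ModelEvaluator(ABC):
    @abstractmethod
    def evaluate(self, model_source: str) -> dict:
        """Return a feedback report for a serialized model."""

_REGISTRY: dict[str, ModelEvaluator] = {}

def register(language: str, evaluator: ModelEvaluator) -> None:
    """Make an evaluator available for a modeling language."""
    _REGISTRY[language.lower()] = evaluator

def evaluate(language: str, model_source: str) -> dict:
    """Dispatch a submitted model to the evaluator for its language."""
    return _REGISTRY[language.lower()].evaluate(model_source)

class EpcEvaluator(ModelEvaluator):
    def evaluate(self, model_source: str) -> dict:
        # Placeholder check only: a real evaluator would parse the model
        # and apply the language's syntax and quality rules.
        return {"language": "epc",
                "errors": [] if model_source.strip() else ["empty model"]}

register("epc", EpcEvaluator())
```

Supporting a further modeling language (say, BPMN) then amounts to registering one additional evaluator, without touching the platform core.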
Performance data from the pilots indicated that the automated assessment could process a full EPC model in under two seconds and that the system correctly identified errors in 92 % of the test cases. User feedback highlighted clearer grading criteria and a reduced instructor workload. Scalability was demonstrated by handling up to 500 concurrent users during a large lecture assessment without degradation of response time.
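An error-identification rate such as the reported 92 % is typically computed by comparing detected errors against expected errors over a labeled test suite. The following sketch shows one plausible way to do this; the data and helper names are invented for illustration.

```python
# Hypothetical sketch of computing an error-identification rate from
# labeled test cases; not the project's actual evaluation code.
def identification_rate(results):
    """results: list of (expected_errors, detected_errors) pairs, one per case."""
    correct = sum(1 for expected, detected in results
                  if set(expected) == set(detected))
    return correct / len(results)

# e.g. 23 of 25 invented cases match exactly, giving a rate of 0.92
cases = [(["missing_event"], ["missing_event"])] * 23 + [(["missing_event"], [])] * 2
rate = identification_rate(cases)
```

Requiring an exact match of the error sets, as above, is a strict criterion; a partial-credit variant could instead score the overlap between expected and detected errors.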
Beyond the technical deliverables, the project established a transfer strategy. The Gesellschaft für Informatik disseminated results through newsletters and a dedicated project website, while workshops and scientific publications engaged the broader academic community. The consortium's cooperation agreement, signed at the start of the project, governed data sharing and intellectual property, ensuring that the platform and its components remain freely available under open-source licenses.
In summary, the KEA‑Mod project delivered a competency‑oriented e‑assessment framework that integrates automated evaluation tools, a competency model, and a task catalogue for graphical modeling. The platform’s successful pilot deployment, positive evaluation metrics, and open‑source release position it as a reusable resource for higher‑education institutions seeking to enhance assessment quality in modeling courses. The collaborative effort, coordinated by KIT and supported by DFKI, UPB, UDE, and the Gesellschaft für Informatik, demonstrates how interdisciplinary partnerships can translate research into practical teaching tools within a defined funding period.
