Abstract:
The creation of a process model is primarily a formalization task that faces the challenge of constructing a syntactically correct entity that accurately reflects the semantics of reality and is understandable to the model reader. This article proposes a framework called Model Judge, aimed at the two main actors in learning process model creation: novice modelers and instructors. For modelers, the platform enables the automatic validation of process models created from a textual description, providing explanations about quality issues in the model. Model Judge can provide diagnostics regarding model structure, writing style, and semantics by aligning annotated textual descriptions to models. For instructors, the platform facilitates the creation of modeling exercises by providing an editor, empowered with natural language processing capabilities, to annotate the main parts of a textual description so that the annotation effort is minimized. So far, around 300 students in process modeling courses at five different universities around the world have used the platform. The feedback gathered from some of these courses shows good potential for helping students improve their learning experience, which might, in turn, impact process model quality and understandability. Moreover, our results show that instructors can benefit from insights into the evolution of modeling processes, including quality issues arising for individual students as well as tendencies across groups of students. Although the framework has been applied to process model creation, it could be extrapolated to other contexts where the creation of models based on a textual description plays an important role.