Automata Learning versus Process Mining: The Case for User Journeys

Paul Kobialka, Andrea Pferscher, Bernhard K. Aichernig, Einar Broch Johnsen, Silvia Lizeth Tapia Tarifa

Abstract

With the servitization of business, understanding how users experience services becomes a crucial success factor for companies. Therefore, there is a need to include feedback from user experiences in the software engineering process. Behavioral models of user journeys, describing how users experience their interaction with a service, can provide insights and potentially improve services. In this paper, we investigate techniques that allow the automatic generation of behavioral models from user interactions with a service, recorded in an event log. We first compare two established techniques that generate behavioral models from a given event log: automata learning and process mining. Afterward, we present a novel, hybrid method that combines both automata learning and process mining methods to overcome their limitations. For the existing techniques, we present methods to learn models of user journeys and evaluate the accuracy of the resulting models. We then compare these techniques with our novel method for the automatic extraction of user journey models from the event logs of digital services. We assess the practical applicability of all techniques by evaluating real-world applications. Our results show that process mining techniques rely on expert knowledge, while automata learning techniques depend on the distribution of events in the given event log. We further show that the proposed hybrid technique combines the strengths of both process mining and automata learning, automatically selecting the best method and parameter settings for a given event log to learn very accurate models.
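The core object the abstract refers to, a behavioral model learned from an event log, can be illustrated with a small sketch. The following is not the paper's implementation but a minimal, assumption-laden example: it builds a frequency-annotated transition system from traces of user events, where each state is identified by a bounded suffix of the event history (a common state representation in process mining). The event labels and the `horizon` abstraction parameter are chosen here for illustration only.

```python
# Illustrative sketch (not the paper's method): derive a simple transition
# system from an event log of user-journey traces. States are identified by
# the last `horizon` events of the running prefix; transitions count how
# often each event follows a given state.
from collections import defaultdict

def build_transition_system(event_log, horizon=None):
    """Map each state (event-suffix tuple) to its outgoing event counts.

    event_log: list of traces, each a list of event labels.
    horizon:   if set, only the last `horizon` events identify a state,
               a typical abstraction to keep the model small.
    """
    transitions = defaultdict(lambda: defaultdict(int))
    for trace in event_log:
        prefix = []
        for event in trace:
            state = tuple(prefix if horizon is None else prefix[-horizon:])
            transitions[state][event] += 1
            prefix.append(event)
    return transitions

# Hypothetical event log of three user journeys through a service.
log = [
    ["login", "upload", "submit"],
    ["login", "upload", "cancel"],
    ["login", "submit"],
]
ts = build_transition_system(log, horizon=1)
```

With `horizon=1`, the state reached after `"upload"` has two outgoing transitions (`submit` and `cancel`, once each), while the state after `"login"` shows that two of three users proceed to `upload`. Varying the state representation in this way is exactly the kind of parameter choice that, per the abstract, the hybrid technique selects automatically.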

Paper Structure

This paper contains 22 sections, 4 equations, 6 figures, and 4 tables.

Figures (6)

  • Figure 1: Three step procedure for the creation and analysis of user journey models. This work focuses on Step 1, indicated by the red square, i.e., the automatic creation of behavioral models from event logs.
  • Figure 2: Example of a transition system.
  • Figure 3: Markov chain, learned PM and AL models of an assessment system. Different state representations (for PM) and confidence values (for AL) are used to learn models from a log with 80 traces. For simplicity, we omit transition labels.
  • Figure 4: Results for the six evaluated learning setups on the synthesized benchmark set.
  • Figure 5: Proportion of failed test traces for the case studies.
  • ...and 1 more figure

Theorems & Definitions (8)

  • Example 1: Transition System
  • Example 2: Assessment System
  • Example 3: Event Log for the Assessment System
  • Example 4: State representations
  • Definition 1: DFS from state representations
  • Example 5: PM for the Assessment System
  • Example 6: AL for the Assessment System
  • Example 7: $\alpha$-Approximation for AL