Hi! I am Claudio Di Ciccio

About me

In short: I am an academic researcher. I am fond of computer science, music, and languages. I love Sonia, whom I had the joy to marry, and Annika, our daughter.

You can find me on the scholar profiles below. I also publish posts on LinkedIn and push code on GitHub. Keep on reading to know more.

Short biography and CV

I am an associate professor at the Department of Information and Computing Sciences of Utrecht University (Netherlands). Previously, I was an associate professor at the Department of Computer Science of Sapienza University of Rome (Italy), and, earlier on, an assistant professor at the Institute for Information Business of the Vienna University of Economics and Business (WU Vienna), Austria.

My research interests include process mining, automated reasoning in AI, and blockchain technologies. I have published more than 100 research papers, among others in Information Systems, Decision Support Systems, ACM Transactions on Management Information Systems (TMIS), ACM Transactions on Software Engineering and Methodology (TOSEM), and IEEE Transactions on Visualization and Computer Graphics.

I am a member of the Steering Committee of the IEEE Task Force on Process Mining, co-edit the process mining newsletter, and am part of the XES/OCED working group. I serve as a reviewer for international journals (including Information Systems, Information Sciences, and Decision Support Systems) and as a PC member of numerous international conferences, including IJCAI, BPM, CAiSE, and ICPM. I was the general co-chair of the fifth International Conference on Process Mining in 2023 and a co-organiser of the Human in the (Process) Mines Dagstuhl Seminar, as well as PC co-chair of the first BPM Blockchain Forum, of the third International Conference on Process Mining in 2021, and of the twentieth International Conference on Business Process Management in 2022.

In August 2018, I was nominated Researcher of the Month of WU Vienna. In September 2015, I received the Best Paper Award of the 13th International Conference on Business Process Management. I obtained my Ph.D. in Computer Science and Engineering with honourable mention in 2013 at Sapienza University of Rome, with a thesis on the automated discovery of flexible workflows from semi-structured text data sources.

For further details on my career, please feel free to check my Curriculum Vitae: the link to download it is right below.

Honours and awards

Peer-reviewed publications

Scholar profiles

Please find the list of my publications on:

Full list

All the pre-prints below are generated with my paper-copyright-watermark tool. You can clone and fork it here.

  1. Alessio Cecconi, Luca Barbaro, Claudio Di Ciccio, Arik Senderovich (2024) Measuring rule-based LTLf process specifications: A probabilistic data-driven approach. In: Information Systems, 120, 102312. Elsevier. DOI: 10.1016/j.is.2023.102312.
    Read the pre-print. Download the BibTeX entry.

    Declarative process specifications define the behavior of processes by means of rules based on Linear Temporal Logic on Finite Traces (LTLf). In a mining context, these specifications are inferred from, and checked on, multi-sets of runs recorded by information systems (namely, event logs). To this end, being able to gauge the degree to which process data comply with a specification is key. However, existing mining and verification techniques analyze the rules in isolation, thereby disregarding their interplay. In this paper, we introduce a framework to devise probabilistic measures for declarative process specifications. Thereupon, we propose a technique that measures the degree of satisfaction of specifications over event logs. To assess our approach, we conduct an evaluation with real-world data, evidencing its applicability for diverse process mining tasks, including discovery, checking, and drift detection.

  2. Edoardo Marangone, Claudio Di Ciccio, Daniele Friolo, Eugenio Nerio Nemmi, Daniele Venturi, Ingo Weber (2023) MARTSIA: Enabling Data Confidentiality for Blockchain-based Process Execution. In: EDOC 2023, 58-76, Springer. DOI: 10.1007/978-3-031-46587-1_4.
    Read the pre-print. Download the BibTeX entry.

    Multi-party business processes rely on the collaboration of various players in a decentralized setting. Blockchain technology can facilitate the automation of these processes, even in cases where trust among participants is limited. Transactions are stored in a ledger, a replica of which is retained by every node of the blockchain network. The operations saved thereby are thus publicly accessible. While this enhances transparency, reliability, and persistence, it hinders the utilization of public blockchains for process automation as it violates typical confidentiality requirements in corporate settings. In this paper, we propose MARTSIA: A Multi-Authority Approach to Transaction Systems for Interoperating Applications. MARTSIA enables precise control over process data at the level of message parts. Based on Multi-Authority Attribute-Based Encryption (MA-ABE), MARTSIA realizes a number of desirable properties, including confidentiality, transparency, and auditability. We implemented our approach in proof-of-concept prototypes, with which we conduct a case study in the area of supply chain management. Also, we show the integration of MARTSIA with a state-of-the-art blockchain-based process execution engine to secure the data flow.

  3. Anti Alman, Alessio Arleo, Iris Beerepoot, Andrea Burattin, Claudio Di Ciccio, Manuel Resinas (2023) Tiramisu: A Recipe for Visual Sensemaking of Multi-Faceted Process Information. In: ICPM Workshops 2023, 1-12, Springer. (To appear). Best Workshop Paper Award at the 5th International Workshop on Event Data and Behavioral Analytics (EdbA'23).
    Read the pre-print. Download the BibTeX entry.

    Knowledge-intensive processes represent a particularly challenging scenario for process mining. The flexibility that such processes allow constitutes a hurdle as it is hard to capture in a single model. To tackle this problem, multiple visual representations of the same processes could be beneficial, each addressing different information dimensions according to the specific needs and background knowledge of the concrete process workers and stakeholders. In this idea paper, we propose a novel framework leveraging visual analytics for the interactive visualization of multi-faceted process information, aimed at easing the investigation tasks of users in their process analysis tasks. This is primarily achieved by an interconnection of multiple visual layers, which allow our framework to display process information under different perspectives and to project these perspectives onto a domain-friendly representation of the context in which the process unfolds. We demonstrate the feasibility of the framework through its application in two use-case scenarios in the context of healthcare and personal information management.

  4. Davide Basile, Claudio Di Ciccio, Valerio Goretti, Sabrina Kirrane (2023) Blockchain based resource governance for decentralized web environments. In: Frontiers in Blockchain, 6, 1141909. Frontiers. DOI: 10.3389/fbloc.2023.1141909. (Open access)
    Read the pre-print. Download the BibTeX entry.

    Decentralization initiatives such as Solid, Digi.me, and ActivityPub aim to give data owners more control over their data and to level the playing field by enabling small companies and individuals to gain access to data, thus stimulating innovation. However, these initiatives typically use access control mechanisms that cannot verify compliance with usage conditions after access has been granted to others. In this paper, we extend the state of the art by proposing a resource governance conceptual framework, entitled ReGov, that facilitates usage control in decentralized web environments. We subsequently demonstrate how our framework can be instantiated by combining blockchain and trusted execution environments. Through blockchain technologies, we record policies expressing the usage conditions associated with resources and monitor their compliance. Our instantiation employs trusted execution environments to enforce said policies, inside data consumers’ devices. We evaluate the framework instantiation through a detailed analysis of requirements derived from a data market motivating scenario, as well as an assessment of the security, privacy, and affordability aspects of our proposal.

  5. Iris Beerepoot, Claudio Di Ciccio, Hajo A. Reijers, Stefanie Rinderle-Ma, Wasana Bandara, Andrea Burattin, Diego Calvanese, Tianwa Chen, Izack Cohen, Benoît Depaire, Gemma Di Federico, Marlon Dumas, Christopher van Dun, Tobias Fehrer, Dominik A. Fischer, Avigdor Gal, Marta Indulska, Vatche Isahagian, Christopher Klinkmüller, Wolfgang Kratsch, Henrik Leopold, Amy Van Looy, Hugo Lopez, Sanja Lukumbuzya, Jan Mendling, Lara Meyers, Linda Moder, Marco Montali, Vinod Muthusamy, Manfred Reichert, Yara Rizk, Michael Rosemann, Maximilian Röglinger, Shazia Sadiq, Ronny Seiger, Tijs Slaats, Mantas Simkus, Ida Asadi Someh, Barbara Weber, Ingo Weber, Mathias Weske, Francesca Zerbato (2023) The biggest business process management problems to solve before we die. In: Computers in Industry, 146, 103837. Elsevier. DOI: https://doi.org/10.1016/j.compind.2022.103837. (Open access)
    Download the BibTeX entry.

    It may be tempting for researchers to stick to incremental extensions of their current work to plan future research activities. Yet there is also merit in realizing the grand challenges in one’s field. This paper presents an overview of the nine major research problems for the Business Process Management discipline. These challenges have been collected by an open call to the community, discussed and refined in a workshop setting, and described here in detail, including a motivation why these problems are worth investigating. This overview may serve the purpose of inspiring both novice and advanced scholars who are interested in the radical new ideas for the analysis, design, and management of work processes using information technology.

  6. Jörg Becker, Friedrich Chasin, Michael Rosemann, Daniel Beverungen, Jennifer Priefer, Jan vom Brocke, Martin Matzner, Adela del Rio Ortega, Manuel Resinas, Flavia Santoro, Minseok Song, Kangah Park, Claudio Di Ciccio (2023) City 5.0: Citizen involvement in the design of future cities. In: Electronic Markets, 33 (10). Springer. DOI: 10.1007/s12525-023-00621-y. (Open access)
    Download the BibTeX entry.

    A citizen-centric view is key to channeling technological affordances into the development of future cities in which improvements are made with the quality of citizens’ life in mind. This paper proposes City 5.0 as a new citizen-centric design paradigm for future cities, in which cities can be seen as markets connecting service providers with citizens as consumers. City 5.0 is dedicated to eliminating restrictions that citizens face when utilizing city services. Our design paradigm focuses on smart consumption and extends the technology-centric concept of smart city with a stronger view on citizens’ roadblocks to service usage. Through a series of design workshops, we conceptualized the City 5.0 paradigm and formalized it in a semi-formal model. The applicability of the model is demonstrated using the case of a telemedical service offered by a Spanish public healthcare service provider. The usefulness of the model is validated by qualitative interviews with public organizations involved in the development of technology-based city solutions. Our contribution lies in the advancement of citizen-centric analysis and the development of city solutions for both academic and professional communities.

  7. Dina Bayomie, Claudio Di Ciccio, Jan Mendling (2023) Event-case correlation for process mining using probabilistic optimization. In: Information Systems, 114, 102167. Elsevier. DOI: 10.1016/j.is.2023.102167.
    Read the pre-print. Download the BibTeX entry.

    Process mining supports the analysis of the actual behavior and performance of business processes using event logs. An essential requirement is that every event in the log must be associated with a unique case identifier (e.g., the order ID of an order-to-cash process). In reality, however, this case identifier may not always be present, especially when logs are acquired from different systems or extracted from non-process-aware information systems. In such settings, the event log needs to be pre-processed by grouping events into cases — an operation known as event correlation. Existing techniques for correlating events have worked with assumptions to make the problem tractable: some assume the generative processes to be acyclic, while others require heuristic information or user input. Moreover, they abstract the log to activities and timestamps, and miss the opportunity to use data attributes. In this paper, we lift these assumptions and propose a new technique called EC-SA-Data based on probabilistic optimization. The technique takes as inputs a sequence of timestamped events (the log without case IDs), a process model describing the underlying business process, and constraints over the event attributes. Our approach returns an event log in which every event is associated with a case identifier. The technique allows users to flexibly incorporate rules on process knowledge and data constraints. The approach minimizes the misalignment between the generated log and the input process model, maximizes the support of the given data constraints over the correlated log, and minimizes the variance between activity durations across cases. Our experiments with various real-life datasets show the advantages of our approach over the state of the art.

  8. Davide Basile, Claudio Di Ciccio, Valerio Goretti, Sabrina Kirrane (2023) A Blockchain-driven Architecture for Usage Control in Solid. In: ICDCS Workshops 2023, 19-24, IEEE. DOI: 10.1109/ICDCSW60045.2023.00009.
    Read the pre-print. Download the BibTeX entry.

    Decentralization initiatives like Solid enable data owners to control who has access to their data and to stimulate innovation by creating both application and data markets. Once data owners share their data with others, though, it is no longer possible for them to control how their data are used. To address this issue, we propose a usage control architecture to monitor compliance with usage control policies. To this end, our solution relies on blockchain and trusted execution environments. We demonstrate the potential of the architecture by describing the various workflows needed to realize a motivating use case scenario for data markets. Additionally, we discuss the merits of the approach from privacy, security, integrateability, and affordability perspectives.

  9. Claudio Di Ciccio (2023) Blockchain and Distributed Ledger Technologies. In: The Role of Distributed Ledger Technology in Banking: From Theory to Practice 2023, 11-34, Cambridge University Press. DOI: 10.1017/9781009411783.003.
    Read the pre-print. Download the BibTeX entry.

    Distributed ledger technologies (DLTs) have attracted significant attention in the last few years. They gained a noticeable momentum, particularly after the introduction of blockchains as a basic building block for the development of new cryptocurrencies and tokens. This opportunity opened up new research directions to support the modern economy with numerous possibilities to redesign and innovate the market in accordance with the digital revolution we are witnessing. However, these technologies are yet to prove in practice their capability to match all the dependability and security requirements imposed in the economic and banking sector. In this chapter, we will provide an overview of the technical features of DLTs (and of blockchains in particular), outlining their potential impact in the economic field. We will first introduce the reader to their definition from a technical point of view, illustrate their core mechanisms and the guarantees they provide, and describe how these features are realised in a decentralised way. Finally, we will outline the opportunities and challenges stemming from the adoption of this technology.

  10. Paolo Bottoni, Claudio Di Ciccio, Remo Pareschi, Domenico Tortola, Nicola Gessa, Gilda Massa (2023) Blockchain-as-a-service and blockchain-as-a-partner: Implementation options for supply chain optimization. In: Blockchain: Research and Applications, 100119. Elsevier. DOI: 10.1016/j.bcra.2022.100119. (Open access)
    Read the pre-print. Download the BibTeX entry.

    Smart contracts show a high potential to make Supply Chain Management strategies take an epochal leap towards higher levels of productivity, not only in the functioning of production processes but also in terms of product innovation and overall economic returns. This article illustrates the principle of Income Sharing as a highly performing economic strategy for supply chains with a natural implementation in blockchain smart contracts. It proposes a blockchain-based architecture that uses smart contracts to implement various algorithmic versions of the Income Sharing principle among companies participating in a supply chain. The formation of the total income and its consequent redistribution is calculated taking into account the role of the technological platform automating these procedures, which therefore becomes a party to the inter-company business project of a supply chain in the alternative roles, as feasible in business practice, of Blockchain-as-a-Service and Blockchain-as-a-Partner. The approach is implemented on Hyperledger Fabric, the most widespread platform for private and consortium blockchains. We compare and justify this design choice with the alternative given by public blockchains, with specific attention to Ethereum.

  11. Alessio Cecconi, Giuseppe De Giacomo, Claudio Di Ciccio, Fabrizio Maria Maggi, Jan Mendling (2022) Measuring the interestingness of temporal logic behavioral specifications in process mining. In: Information Systems, 107, 101920. Elsevier. DOI: 10.1016/j.is.2021.101920.
    Read the pre-print. Download the BibTeX entry.

    The assessment of behavioral rules with respect to a given dataset is key in several research areas, including declarative process mining, association rule mining, and specification mining. An assessment is required to check how well a set of discovered rules describes the input data, and to determine to what extent data complies with predefined rules. Particularly in declarative process mining, Support and Confidence are used most often, yet they are reportedly unable to provide a sufficiently rich feedback to users and cause rules representing coincidental behavior to be deemed as representative for the event logs. In addition, these measures are designed to work on a predefined set of rules, thus lacking generality and extensibility. In this paper, we address this research gap by developing a measurement framework for temporal rules based on Linear-time Temporal Logic with Past on Finite Traces (LTLpf). The framework is suitable for any temporal rules expressed in a reactive form and for custom measures based on the probabilistic interpretation of such rules. We show that our framework can seamlessly adapt well-known measures of the association rule mining field to declarative process mining. Also, we test our software prototype implementing the framework on synthetic and real-world data, and investigate the properties characterizing those measures in the context of process analysis.

  12. Edoardo Marangone, Claudio Di Ciccio, Ingo Weber (2022) Fine-Grained Data Access Control for Collaborative Process Execution on Blockchain. In: BPM Blockchain, RPA and CEE Forum 2022, 51-67, Springer. DOI: 10.1007/978-3-031-16168-1_4.
    Read the pre-print. Download the BibTeX entry.

    Multi-party business processes are based on the cooperation of different actors in a distributed setting. Blockchains can provide support for the automation of such processes, even in conditions of partial trust among the participants. On-chain data are stored in all replicas of the ledger and therefore accessible to all nodes that are in the network. Although this fosters traceability, integrity, and persistence, it undermines the adoption of public blockchains for process automation since it conflicts with typical confidentiality requirements in enterprise settings. In this paper, we propose a novel approach and software architecture that allow for fine-grained access control over process data on the level of parts of messages. In our approach, encrypted data are stored in a distributed space linked to the blockchain system backing the process execution; data owners specify access policies to control which users can read which parts of the information. To achieve the desired properties, we utilise Attribute-Based Encryption for the storage of data, and smart contracts for access control, integrity, and linking to process data. We implemented the approach in a proof-of-concept and conduct a case study in supply-chain management. From the experiments, we find our architecture to be robust while still keeping execution costs reasonably low.

  13. Anton Yeshchenko, Claudio Di Ciccio, Jan Mendling, Artem Polyvyanyy (2022) Visual Drift Detection for Event Sequence Data of Business Processes. In: IEEE Transactions on Visualization and Computer Graphics, 28 (8), 3050-3068. IEEE. DOI: 10.1109/TVCG.2021.3050071.
    Read the pre-print. Download the BibTeX entry.

    Event sequence data is increasingly available in various application domains, such as business process management, software engineering, or medical pathways. Processes in these domains are typically represented as process diagrams or flow charts. So far, various techniques have been developed for automatically generating such diagrams from event sequence data. An open challenge is the visual analysis of drift phenomena when processes change over time. In this paper, we address this research gap. Our contribution is a system for fine-granular process drift detection and corresponding visualizations for event logs of executed business processes. We evaluated our system both on synthetic and real-world data. On synthetic logs, we achieved an average F-score of 0.96 and outperformed all the state-of-the-art methods. On real-world logs, we identified all types of process drifts in a comprehensive manner. Finally, we conducted a user study highlighting that our visualizations are easy to use and useful as perceived by process mining experts. In this way, our work contributes to research on process mining, event sequence analysis, and visualization of temporal data.

  14. Marco Roveri, Claudio Di Ciccio, Chiara Di Francescomarino, Chiara Ghidini (2022) Computing unsatisfiable cores for LTLf specifications. In: PMAI@IJCAI 2022, 81-84, CEUR-WS.org. (Open access)
    Download the BibTeX entry.

    We tackle the challenge of extracting unsatisfiable cores from LTLf (linear-time temporal logic on finite traces) specifications.

  15. Claudio Di Ciccio, Marco Montali (2022) Declarative Process Specifications: Reasoning, Discovery, Monitoring. In: Process Mining Handbook 2022, 108-152, Springer. DOI: 10.1007/978-3-031-08848-3_4. (Open access)
    Read the pre-print. Download the BibTeX entry.

    The declarative specification of business processes is based upon the elicitation of behavioural rules that constrain the legal executions of the process. The carry-out of the process is up to the actors, who can vary the execution dynamics as long as they do not violate the constraints imposed by the declarative model. The constraints specify the conditions that require, permit or forbid the execution of activities, possibly depending on the occurrence (or absence) of other ones. In this chapter, we review the main techniques for process mining using declarative process specifications, which we call declarative process mining. In particular, we focus on the three fundamental tasks of (1) reasoning on declarative process specifications, which is in turn instrumental to their (2) discovery from event logs and their (3) monitoring against running process executions to promptly detect violations. We ground our review on Declare, one of the most widely studied declarative process specification languages. Thanks to the fact that Declare can be formalized using temporal logics over finite traces, we exploit the automata-theoretic characterization of such logics as the core, unified algorithmic basis to tackle reasoning, discovery, and monitoring. We conclude the chapter with a discussion on recent advancements in declarative process mining, considering in particular multi-perspective extensions of the original approach.

  16. Moe Thandar Wynn, Julian Lebherz, Wil M.P. van der Aalst, Rafael Accorsi, Claudio Di Ciccio, Lakmali Jayarathna, H.M.W. Verbeek (2022) Rethinking the Input for Process Mining: Insights from the XES Survey and Workshop. In: Process Mining Workshops 2022, 3-16, Springer. DOI: 10.1007/978-3-030-98581-3_1. (Open access)
    Read the pre-print. Download the BibTeX entry.

    Although the popularity and adoption of process mining techniques grew rapidly in recent years, a large portion of effort invested in process mining initiatives is still consumed by event data extraction and transformation rather than process analysis. The IEEE Task Force on Process Mining conducted a study focused on the challenges faced during event data preparation (from source data to event log). This paper presents findings from the online survey with 289 participants spanning the roles of practitioners, researchers, software vendors, and end-users. These findings were presented at the XES 2.0 workshop co-located with the 3rd International Conference on Process Mining. The workshop also hosted presentations from various stakeholder groups and a discussion panel on the future of XES and the input needed for process mining. This paper summarises the main findings of both the survey and the workshop. These outcomes help us to accelerate and improve the standardisation process, hopefully leading to a new standard widely adopted by both academia and industry.

  17. Jorge Munoz-Gama, Niels Martin, Carlos Fernandez-Llatas, Owen A. Johnson, Marcos Sepúlveda, Emmanuel Helm, Victor Galvez-Yanjari, Eric Rojas, Antonio Martinez-Millana, Davide Aloini, Ilaria Angela Amantea, Robert Andrews, Michael Arias, Iris Beerepoot, Elisabetta Benevento, Andrea Burattin, Daniel Capurro, Josep Carmona, Marco Comuzzi, Benjamin Dalmas, Rene de la Fuente, Chiara Di Francescomarino, Claudio Di Ciccio, Roberto Gatta, Chiara Ghidini, Fernanda Gonzalez-Lopez, Gema Ibanez-Sanchez, Hilda B. Klasky, Angelina Prima Kurniati, Xixi Lu, Felix Mannhardt, Ronny Mans, Mar Marcos, Renata Medeiros de Carvalho, Marco Pegoraro, Simon K. Poon, Luise Pufahl, Hajo A. Reijers, Simon Remy, Stefanie Rinderle-Ma, Lucia Sacchi, Fernando Seoane, Minseok Song, Alessandro Stefanini, Emilio Sulis, Arthur H.M. ter Hofstede, Pieter J. Toussaint, Vicente Traver, Zoe Valero-Ramon, Inge van de Weerd, Wil M.P. van der Aalst, Rob Vanwersch, Mathias Weske, Moe Thandar Wynn, Francesca Zerbato (2022) Process mining for healthcare: Characteristics and challenges. In: Journal of Biomedical Informatics, 127, 103994. DOI: 10.1016/j.jbi.2022.103994. (Open access)
    Download the BibTeX entry.

    Process mining techniques can be used to analyse business processes using the data logged during their execution. These techniques are leveraged in a wide range of domains, including healthcare, where they focus mainly on the analysis of diagnostic, treatment, and organisational processes. Despite the huge amount of data generated in hospitals by staff and machinery involved in healthcare processes, there is no evidence of a systematic uptake of process mining beyond targeted case studies in a research context. When developing and using process mining in healthcare, distinguishing characteristics of healthcare processes such as their variability and patient-centred focus require targeted attention. Against this background, the Process-Oriented Data Science in Healthcare Alliance has been established to propagate the research and application of techniques targeting the data-driven improvement of healthcare processes. This paper, an initiative of the alliance, presents the distinguishing characteristics of the healthcare domain that need to be considered to successfully use process mining, as well as open challenges that need to be addressed by the community in the future.

  18. Claudio Di Ciccio, Giovanni Meroni, Pierluigi Plebani (2022) On the adoption of blockchain for business process monitoring. In: Software and Systems Modeling, 21 (3), 915-937. Springer. DOI: 10.1007/s10270-021-00959-x. (Open access)
    Read the pre-print. Download the BibTeX entry.

    Since blockchain and distributed ledger technologies are particularly suitable for creating trusted environments where participants do not trust each other, business process management represents a proper setting in which these technologies can be adopted. In this direction, current research work primarily focuses on blockchain-oriented business process design, or on execution engines able to enact processes through smart contracts. Conversely, less attention has been paid to whether and how blockchains can be beneficial to business process monitoring. This work aims to fill this gap by (1) providing a reference architecture for enabling the adoption of blockchain technologies in business process monitoring solutions, (2) defining a set of relevant research challenges derived from this adoption, and (3) discussing the current approaches to address the aforementioned challenges.

  19. Alessio Cecconi, Claudio Di Ciccio, Arik Senderovich (2022) Measurement of Rule-based LTLf Declarative Process Specifications. In: ICPM 2022, 96-103, IEEE. DOI: 10.1109/ICPM57379.2022.9980690.
    Read the pre-print. Download the BibTeX entry.

    The classical checking of declarative Linear Temporal Logic on Finite Traces (LTLf) specifications verifies whether conjunctions of sets of formulae are satisfied by collections of finite traces. The data on which the verification is conducted may be corrupted by a number of logging errors or execution deviations at the level of single elements within a trace. The ability to quantitatively assess the extent to which traces satisfy a process specification (and not only if they do so or not at all) is thus key, especially in process mining scenarios. Previous techniques proposed for this aim either require formulae to be extended with quantitative operators or cater to the coarse granularity of whole traces. In this paper, we propose a framework to devise probabilistic measures for declarative process specifications on traces at the level of events, inspired by association rule mining. Thereupon, we describe a technique that measures the degree of satisfaction of these specifications over bags of traces. To assess our approach, we conduct an evaluation with real-world data.

  20. Dina Bayomie, Kate Revoredo, Claudio Di Ciccio, Jan Mendling (2022) Improving Accuracy and Explainability in Event-Case Correlation via Rule Mining. In: ICPM 2022, 24-31, IEEE. DOI: 10.1109/ICPM57379.2022.9980684.
    Read the pre-print. Download the BibTeX entry.

    Process mining analyzes business processes’ behavior and performance using event logs. An essential requirement is that events are grouped in cases representing the execution of process instances. However, logs extracted from different systems or non-process-aware information systems do not map events with unique case identifiers (case IDs). In such settings, the event log needs to be pre-processed to group events into cases – an operation known as event correlation. Existing techniques for correlating events work with different assumptions: some assume the generating processes are acyclic, others require extra domain knowledge such as the relation between the events and event attributes, or heuristic information about the activities’ execution time behavior. However, the domain knowledge is not always available or easy to acquire, compromising the quality of the correlated event log. In this paper, we propose a new technique called EC-SA-RM, which correlates the events using a simulated annealing technique and iteratively learns the domain knowledge as a set of association rules. The technique requires a sequence of timestamped events (i.e., the log without case IDs) and a process model describing the underlying business process. At each iteration of the simulated annealing, a possible correlated log is generated. Then, EC-SA-RM uses this correlated log to learn a set of association rules that represent the relationship between the events and the changing behavior over the events’ attributes in an understandable way. These rules enrich the input and improve the event correlation process for the next iteration. EC-SA-RM returns an event log in which events are grouped in cases and a set of association rules that explain the correlation over the events. We evaluate our approach using four real-life datasets.

  21. Boudewijn F. van Dongen, Johannes De Smedt, Claudio Di Ciccio, Jan Mendling (2021) Conformance checking of mixed-paradigm process models. In: Information Systems. Elsevier. DOI: 10.1016/j.is.2020.101685.
    Read the pre-print. Download the BibTeX entry.

    Mixed-paradigm process models integrate strengths of procedural and declarative representations like Petri nets and Declare. They are specifically interesting for process mining because they allow capturing complex behavior in a compact way. A key research challenge for the proliferation of mixed-paradigm models for process mining is the lack of corresponding conformance checking techniques. In this paper, we address this problem by devising the first approach that works with intertwined state spaces of mixed-paradigm models. More specifically, our approach uses an alignment-based replay to explore the state space and compute trace fitness in a procedural way. In every state, the declarative constraints are separately updated, such that violations disable the corresponding activities. Our technique provides for an efficient replay towards an optimal alignment by respecting all orthogonal Declare constraints. We have implemented our technique in ProM and demonstrate its performance in an evaluation with real-world event logs.

  22. Alessio Cecconi, Adriano Augusto, Claudio Di Ciccio (2021) Detection of Statistically Significant Differences Between Process Variants Through Declarative Rules. In: BPM Forum 2021, 73-91, Springer. DOI: 10.1007/978-3-030-85440-9_5.
    Read the pre-print. Download the BibTeX entry.

    Services and products are often offered via the execution of processes that vary according to the context, requirements, or customisation needs. The analysis of such process variants can highlight differences in the service outcome or quality, leading to process adjustments and improvement. Research in the area of process mining has provided several methods for process variants analysis. However, very few of those account for a statistical significance analysis of their output. Moreover, those techniques detect differences at the level of process traces, single activities, or performance. In this paper, we aim at describing the distinctive behavioural characteristics between variants expressed in the form of declarative process rules. The contribution to the research area is two-pronged: the use of declarative rules for the explanation of the process variants and the statistical significance analysis of the outcome. We assess the proposed method by comparing its results to the most recent process variants analysis methods. Our results demonstrate not only that declarative rules reveal differences at an unprecedented level of expressiveness, but also that our method outperforms the state of the art in terms of execution time.

  23. Davide Basile, Valerio Goretti, Claudio Di Ciccio, Sabrina Kirrane (2021) Enhancing Blockchain-Based Processes with Decentralized Oracles. In: BPM Blockchain and RPA Forum 2021, 102-118, Springer. DOI: 10.1007/978-3-030-85867-4_8.
    Read the pre-print. Download the BibTeX entry.

    The automation of business processes via blockchain-based systems allows for trust, reliability and accountability of execution. The link that connects modules that operate within the on-chain sphere and the off-chain world is key as processes often involve the handling of physical entities and external services. The components that create that link are named oracles. Numerous studies on oracles and their implementations are arising in the literature. Nevertheless, their availability, integrity and trust could be undermined if centralized architectures are adopted, as taking over an oracle could produce the effect of a supply-chain attack on the whole system. Solutions are emerging that overcome this issue by turning the architecture underneath the oracles into a distributed one. In this paper, we investigate the design and application of oracles, distinguishing their adoption for the in-flow or out-flow of information and according to the initiator of the exchange (hence, pull- or push-based).

  24. Anti Alman, Claudio Di Ciccio, Fabrizio Maria Maggi, Marco Montali, Han van der Aa (2021) RuM: Declarative Process Mining, Distilled. In: BPM 2021, 23-29, Springer. DOI: 10.1007/978-3-030-85469-0_3.
    Read the pre-print. Download the BibTeX entry.

    Flexibility is a key characteristic of numerous business process management domains. In these domains, the paths to fulfil process goals may not be fully predetermined, but can strongly depend on dynamic decisions made based on the current circumstances of a case. A common example is the adaptation of a standard treatment process to the needs of a specific patient. However, high flexibility does not mean chaos: certain key process rules still delimit the execution space, such as rules that prohibit the joint administration of certain drugs in a treatment, due to dangerous interactions. A renowned means to handle flexibility by design is the declarative approach, which aims to define processes through their core behavioural rules, thus leaving room for dynamic adaptation. This declarative approach to both process modelling and mining involves a paradigm shift in process thinking and, therefore, the support of novel concepts and tools. Complementing our tutorial with the same title, this paper provides a high-level introduction to declarative process mining, including its operationalisation through the RuM toolkit, key conceptual considerations, and an outlook for the future.

  25. Iris Beerepoot, Claudio Di Ciccio, Hajo A. Reijers, Stefanie Rinderle-Ma (2021) The Biggest Business Process Management Problems of Our Time. In: PROBLEMS@BPM 2021, 1-5, CEUR-WS.org.
    Read the pre-print. Download the BibTeX entry.

    In their contributions to the first edition of the Workshop on BPM Problems to Solve before We Die, the authors identified nine problems. We categorise them along three levels: the event level, the process level, and the enterprise level. The event level is where detailed information on process activities resides. The second level is that of the individual processes. Multiple processes subsequently make up the top level of the enterprise. We introduce each of the corresponding problems within this categorization.

  26. Anti Alman, Claudio Di Ciccio, Fabrizio Maria Maggi (2021) Rule Mining with RuM (Extended Abstract). In: ITBPM 2021, 38-43, CEUR-WS.org.
    Read the pre-print. Download the BibTeX entry.

    Declarative process modeling languages are especially suitable to model loosely-structured, flexible business processes. One of the most prominent of these languages is Declare. The Declare language can be used for all process mining branches and a plethora of techniques have been implemented to support process mining with Declare. The process mining application RuM integrates multiple Declare-based process mining methods into a single application and is developed to be the starting point for the use of Declare both in industry and academia. RuM has been evaluated by conducting a qualitative user evaluation, the results of which have been used as input for further development. In this paper, we give a short overview of the current functionalities of RuM, including the main improvements made thus far.

  27. Daniel Beverungen, Joos C. A. M. Buijs, Jörg Becker, Claudio Di Ciccio, Wil M.P. van der Aalst, Christian Bartelheimer, Jan vom Brocke, Marco Comuzzi, Karsten Kraume, Henrik Leopold, Martin Matzner, Jan Mendling, Nadine Ogonek, Till Post, Manuel Resinas, Kate Revoredo, Adela del-Río-Ortega, Marcello La Rosa, Flávia Maria Santoro, Andreas Solti, Minseok Song, Armin Stein, Matthias Stierle, Verena Wolf (2021) Seven Paradoxes of Business Process Management in a Hyper-Connected World. In: Business & Information Systems Engineering, 63 (2), 145-156. Springer. DOI: 10.1007/s12599-020-00646-z. (Open access)
    Download the BibTeX entry.

    Business Process Management is a boundary-spanning discipline that aligns operational capabilities and technology to design and manage business processes. The Digital Transformation has enabled human actors, information systems, and smart products to interact with each other via multiple digital channels. The emergence of this hyper-connected world greatly leverages the prospects of business processes – but also boosts their complexity to a new level. We need to discuss how the BPM discipline can find new ways for identifying, analyzing, designing, implementing, executing, and monitoring business processes. In this research note, selected transformative trends are explored and their impact on current theories and IT artifacts in the BPM discipline is discussed to stimulate transformative thinking and prospective research in this field.

  28. Kathrin Figl, Claudio Di Ciccio, Hajo A. Reijers (2020) Do declarative process models help to reduce cognitive biases related to business rules? In: ER 2020, 119-133, Springer. DOI: 10.1007/978-3-030-62522-1_9.
    Read the pre-print. Download the BibTeX entry.

    Declarative process modeling languages, such as Declare, represent processes by means of temporal rules, namely constraints. Those languages typically come endowed with a graphical notation to draw such models diagrammatically. In this paper, we explore the effects of diagrammatic representation on humans' deductive reasoning involved in the analysis and compliance checking of declarative process models. In an experiment, we compared textual descriptions of business rules against textual descriptions that were supplemented with declarative models. Results based on a sample of 75 subjects indicate that the declarative process models did not improve but rather lowered reasoning performance. Thus, for novice users, using the graphical notation of Declare may not help readers properly understand business rules: they may confuse them in comparison to textual descriptions. A likely explanation of the negative effect of graphical declarative models on human reasoning is that readers interpret edges wrongly. This has implications for the practical use of business rules on the one hand and the design of declarative process modeling languages on the other.

  29. Sabrina Kirrane, Claudio Di Ciccio (2020) BlockConfess: Towards an Architecture for Blockchain Constraints and Forensics. In: AIChain@Blockchain 2020, 539-544, IEEE. DOI: 10.1109/Blockchain50366.2020.00078.
    Read the pre-print. Download the BibTeX entry.

    Although Blockchain is still an emerging technology, it has the potential to serve as a general purpose Information and Communication Technology platform. Already, smart contract / chaincode platforms, such as Ethereum and Hyperledger Fabric, provide support for the execution of arbitrary computations. However, the suitability of these platforms for specifying and enforcing data and service usage constraints (e.g., usage policies, regulatory obligations, societal norms) and providing guarantees with respect to conformance has yet to be determined. In order to address this gap, in this position paper we argue that symbolic artificial intelligence techniques, in the form of semantic technology based policy languages and business process conformance tools and techniques, can together be used to provide guarantees with respect to the behaviour of autonomous smart contract / chaincode applications.

  30. Anti Alman, Claudio Di Ciccio, Dominik Haas, Fabrizio Maria Maggi, Alexander Nolte (2020) Rule Mining with RuM. In: ICPM 2020, IEEE. DOI: 10.1109/ICPM49681.2020.00027.
    Read the pre-print. Download the BibTeX entry.

    Declarative process modeling languages are especially suitable to model loosely-structured, unpredictable business processes. One of the most prominent of these languages is Declare. The Declare language can be used for all process mining branches and a plethora of techniques have been implemented to support process mining with Declare. However, using these techniques can become cumbersome in practical situations where different techniques need to be combined for analysis. In addition, the use of Declare constraints in practice is often hampered by the difficulty of modeling them: the formal expression of Declare is difficult to understand for users without a background in temporal logic, whereas its graphical notation has been shown to be unintuitive. In this paper, we present RuM, a novel application for rule mining that addresses the above-mentioned issues by integrating multiple Declare-based process mining methods into a single unified application. The process mining techniques provided in RuM strongly rely on the use of Declare models expressed in natural language, which has the potential of mitigating the barriers of the language bias. The application has been evaluated by conducting a qualitative user evaluation with eight process analysts.

  31. Alessio Cecconi, Giuseppe De Giacomo, Claudio Di Ciccio, Jan Mendling (2020) A Temporal Logic-Based Measurement Framework for Process Mining. In: ICPM 2020, IEEE. DOI: 10.1109/ICPM49681.2020.00026.
    Read the pre-print. Download the BibTeX entry.

    The assessment of behavioral rules with respect to a given dataset is key in several research areas, including declarative process mining, association rule mining, and specification mining. The assessment is required to check how well a set of discovered rules describes the input data, as well as to determine to what extent data complies with predefined rules. In declarative process mining, in particular, some measures have been taken from association rule mining and adapted to support the assessment of temporal rules on event logs. Among them, support and confidence are used more often, yet they are reportedly unable to provide a sufficiently rich feedback to users and often cause spurious rules to be discovered from logs. In addition, these measures are designed to work on a predefined set of rules, thus lacking generality and extensibility. In this paper, we address this research gap by developing a general measurement framework for temporal rules based on Linear-time Temporal Logic with Past on Finite Traces (LTLpf). The framework is independent from the rule-specification language of choice and allows users to define new measures. We show that our framework can seamlessly adapt well-known measures of the association rule mining field to declarative process mining. Also, we test our software prototype implementing the framework on synthetic and real-world data, and investigate the properties characterizing those measures in the context of process analysis.

  32. Anti Alman, Claudio Di Ciccio, Dominik Haas, Fabrizio Maria Maggi, Jan Mendling (2020) Rule Mining in Action: The RuM Toolkit. In: ICPM Doctoral Consortium / Tools 2020, 51-54, CEUR-WS.org.
    Read the pre-print. Download the BibTeX entry.

    Procedural process modeling languages can be difficult to use for process mining in cases where the process recorded in the event log is unpredictable and has a high number of different branches and exceptions. In these cases, declarative process modeling languages such as DECLARE are more suitable. Declarative languages do not aim at modeling the end-to-end process step by step, but constrain the behavior of the process using rules, thus allowing for more variability in the process model yet keeping it compact. Although there are several commercial and academic process mining tools available based on procedural models, there are currently no comparable tools for working with declarative models. In this paper, we present RuM, an accessible and easy-to-use rule mining toolkit integrating multiple DECLARE-based process mining methods into a single unified application. RuM implements process mining techniques based on Multi-Perspective DECLARE, namely the extension of DECLARE supporting data constraints together with control-flow constraints. In particular, RuM includes support for process discovery, conformance checking, log generation and monitoring as well as a model editor. The application has been evaluated by conducting a qualitative user evaluation with eight process analysts.

  33. Artem Polyvyanyy, Hanan Alkhammash, Claudio Di Ciccio, Luciano García-Bañuelos, Anna A. Kalenkova, Sander J.J. Leemans, Jan Mendling, Alistair Moffat, Matthias Weidlich (2020) Entropia: A Family of Entropy-Based Conformance Checking Measures for Process Mining. In: ICPM Doctoral Consortium / Tools 2020, 39-42, CEUR-WS.org.
    Read the pre-print. Download the BibTeX entry.

    This paper presents a command-line tool, called Entropia, that implements a family of conformance checking measures for process mining founded on the notion of entropy from information theory. The measures allow quantifying classical non-deterministic and stochastic precision and recall quality criteria for process models automatically discovered from traces executed by IT-systems and recorded in their event logs. A process model has “good” precision with respect to the log it was discovered from if it does not encode many traces that are not part of the log, and has “good” recall if it encodes most of the traces from the log. By definition, the measures possess useful properties and can often be computed quickly.

  34. Anton Yeshchenko, Jan Mendling, Claudio Di Ciccio, Artem Polyvyanyy (2020) VDD: A Visual Drift Detection System for Process Mining. In: ICPM Doctoral Consortium / Tools 2020, 31-34, CEUR-WS.org.
    Read the pre-print. Download the BibTeX entry.

    Research on concept drift detection has inspired recent advancements of process mining, expanding the growing arsenal of process analysis tools. What has so far been missing in this new research stream are techniques that support comprehensive process drift analysis in terms of localizing, drilling down, quantifying, and visualizing process drifts. In our research, we build on ideas from concept drift, process mining, and visualization research and present a novel web-based software tool to analyze process drifts, called Visual Drift Detection (VDD). Addressing the comprehensive analysis requirements, our tool is of benefit to researchers and practitioners in the business intelligence and process analytics area. It constitutes a valuable aid to those who are involved in business process redesign projects.

  35. Roman Mühlberger, Stefan Bachhofner, Eduardo Castelló Ferrer, Claudio Di Ciccio, Ingo Weber, Maximilian Wöhrer, Uwe Zdun (2020) Foundational Oracle Patterns: Connecting Blockchain to the Off-chain World. In: BPM Blockchain and RPA Forum 2020, Springer. DOI: 10.1007/978-3-030-58779-6_3.
    Read the pre-print. Download the BibTeX entry.

    Blockchain has evolved into a platform for decentralized applications, with beneficial properties like high integrity, transparency, and resilience against censorship and tampering. However, blockchains are closed-world systems which do not have access to external state. To overcome this limitation, oracles have been introduced in various forms and for different purposes. However, so far, common oracle best practices have not been dissected, classified, and studied in their fundamental aspects. In this paper, we address this gap by studying foundational blockchain oracle patterns in two dimensions characterising the oracles: (i) the data flow direction, i.e., inbound and outbound data flow, from the viewpoint of the blockchain; and (ii) the initiator of the data flow, i.e., whether it is push- or pull-based communication. We provide a structured description of the four patterns in detail, and discuss an implementation of these patterns based on use cases. On this basis we conduct a quantitative analysis, which results in the insight that the four different patterns are characterized by distinct performance and costs profiles.

  36. Claudio Di Ciccio, Giovanni Meroni, Pierluigi Plebani (2020) Business Process Monitoring on Blockchains: Potentials and Challenges. In: BPMDS 2020, 36-51, Springer. DOI: 10.1007/978-3-030-49418-6_3.
    Read the pre-print. Download the BibTeX entry.

    The ability to enable a tamper-proof distribution of immutable data has boosted the studies around the adoption of blockchains also in Business Process Management. In this direction, current research work primarily focuses on blockchain-based business process design, or on execution engines able to enact processes through smart contracts. Although very relevant, fewer studies have been devoted so far to how the adoption of blockchains can be beneficial to business process monitoring. This work goes in this direction by providing an insightful analysis to understand the benefits as well as the hurdles of blockchain-enabled business process monitoring. In particular, this work considers the adoption of programmable blockchain platforms to manage the generation, distribution, and analysis of business process monitoring data.

  37. Artem Polyvyanyy, Andreas Solti, Matthias Weidlich, Claudio Di Ciccio, Jan Mendling (2020) Monotone Precision and Recall Measures for Comparing Executions and Specifications of Dynamic Systems. In: ACM Trans. Softw. Eng. Methodol., 29 (3). ACM. DOI: 10.1145/3387909.
    Read the pre-print. Download the BibTeX entry.

    The behavioural comparison of systems is an important concern of software engineering research. For example, the areas of specification discovery and specification mining are concerned with measuring the consistency between a collection of execution traces and a program specification. This problem is also tackled in process mining with the help of measures that describe the quality of a process specification automatically discovered from execution logs. Though various measures have been proposed, it was recently demonstrated that they neither fulfil essential properties, such as monotonicity, nor can they handle infinite behaviour. In this article, we address this research problem by introducing a new framework for the definition of behavioural quotients. We prove that corresponding quotients guarantee desired properties that existing measures have failed to support. We demonstrate the application of the quotients for capturing precision and recall measures between a collection of recorded executions and a system specification. We use a prototypical implementation of these measures to contrast their monotonic assessment with measures that have been defined in prior research.

  38. Claudio Di Ciccio (2020) Towards a Process-oriented Analysis of Blockchain Data. In: MOD-DLT 2020, 42-44, CEUR-WS.org.
    Read the pre-print. Download the BibTeX entry.

    Blockchains sequentially store the history of transactional information, in a virtually immutable and distributed way. Moreover, second-generation blockchains such as Ethereum are programmable environments, and every operation invocation towards the smart contracts corresponds to a transaction sequentially collated in the ledgers. They thus allow for the controlled enactment of multi-party processes as well as the immutable recording of their distributed execution. Although the verification, tracking, and monitoring of such blockchain-enabled processes appear paramount, a formal and implemented framework encompassing those aspects is still a mostly unexplored research avenue. The talk revolves around the current state of the art, as well as the opportunities and challenges that arise when it comes to conducting a process-oriented analysis on data stemming from blockchains, from a representation and modelling perspective.

  39. Christian Janiesch, Agnes Koschmider, Massimo Mecella, Barbara Weber, Andrea Burattin, Claudio Di Ciccio, Giancarlo Fortino, Avigdor Gal, Udo Kannengiesser, Felix Mannhardt, Andrea Marrella, Jan Mendling, Andreas Oberweis, Manfred Reichert, Stefanie Rinderle-Ma, Estefanía Serral, WenZhan Song, Jianwen Su, Victoria Torres, Matthias Weidlich, Mathias Weske, Liang Zhang (2020) The Internet-of-Things Meets Business Process Management: Mutual Benefits and Challenges. In: IEEE Systems, Man, and Cybernetics Magazine, 6 (4), 34-44. IEEE. DOI: 10.1109/MSMC.2020.3003135.
    Download the BiBTeX entry.

    The Internet of Things (IoT) refers to a network of connected devices collecting and exchanging data over the Internet. These things can be artificial or natural, and interact as autonomous agents forming a complex system. In turn, Business Process Management (BPM) was established to analyze, discover, design, implement, execute, monitor and evolve collaborative business processes within and across organizations. While the IoT and BPM have been regarded as separate topics in research and practice, we strongly believe that the management of IoT applications will strongly benefit from BPM concepts, methods and technologies on the one hand; on the other hand, the IoT poses challenges that will require enhancements and extensions of the current state-of-the-art in the BPM field. In this paper, we question to what extent these two paradigms can be combined and we discuss the emerging challenges.

  40. Anton Yeshchenko, Claudio Di Ciccio, Jan Mendling, Artem Polyvyanyy (2019) Comprehensive Process Drift Detection with Visual Analytics. In: ER 2019, 119-135, Springer. DOI: 10.1007/978-3-030-33223-5_11.
    Read the pre-print. Download the BiBTeX entry.

    Recent research has introduced ideas from concept drift into process mining to enable the analysis of changes in business processes over time. This stream of research, however, has not yet addressed the challenges of drift categorization, drilling-down, and quantification. In this paper, we propose a novel technique for managing process drifts, called Visual Drift Detection (VDD), which fulfills these requirements. The technique starts by clustering declarative process constraints discovered from recorded logs of executed business processes based on their similarity and then applies change point detection on the identified clusters to detect drifts. VDD complements these features with detailed visualizations and explanations of drifts. Our evaluation, both on synthetic and real-world logs, demonstrates all the aforementioned capabilities of the technique.
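
    Purely as an illustration of the change-point-detection step (not the VDD implementation itself), the sketch below feeds a hypothetical time series of constraint-confidence values to the PELT algorithm from the ruptures library; the data, penalty value, and library choice are assumptions.

      import numpy as np
      import ruptures as rpt  # assumption: the 'ruptures' change-point-detection library is installed

      # Hypothetical input: per-window confidence of one cluster of declarative constraints.
      rng = np.random.default_rng(0)
      confidence = np.concatenate([rng.normal(0.9, 0.02, 50),   # stable behaviour
                                   rng.normal(0.4, 0.02, 50)])  # behaviour after a drift

      # PELT searches for points where the statistical properties of the series change.
      algo = rpt.Pelt(model="rbf").fit(confidence.reshape(-1, 1))
      drift_points = algo.predict(pen=5)  # window indices flagged as candidate drifts
      print("candidate drift points:", drift_points)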

  41. Dina Bayomie, Claudio Di Ciccio, Marcello La Rosa, Jan Mendling (2019) A Probabilistic Approach to Event-Case Correlation for Process Mining. In: ER 2019, 136-152, Springer. DOI: 10.1007/978-3-030-33223-5_12.
    Read the pre-print. Download the BiBTeX entry.

    Process mining aims to understand the actual behavior and performance of business processes from event logs recorded by IT systems. A key requirement is that every event in the log must be associated with a unique case identifier (e.g., the order ID in an order-to-cash process). In reality, however, this case ID may not always be present, especially when logs are acquired from different systems or when such systems have not been explicitly designed to offer process-tracking capabilities. Existing techniques for correlating events have worked with assumptions to make the problem tractable: some assume the generative processes to be acyclic while others require heuristic information or user input. In this paper, we lift these assumptions by presenting a novel technique called EC-SA based on probabilistic optimization. Given as input a sequence of timestamped events (the log without case IDs) and a process model describing the underlying business process, our approach returns an event log in which every event is mapped to a case identifier. The approach minimises the misalignment between the generated log and the input process model, and the variance between activity durations across cases. The experiments conducted on a variety of real-life datasets show the advantages of our approach over the state of the art.

  42. Roman Mühlberger, Stefan Bachhofner, Claudio Di Ciccio, Luciano García-Bañuelos, Orlenys López-Pintado (2019) Extracting Event Logs for Process Mining from Data Stored on the Blockchain. In: BPM Workshops 2019, 690-703, Springer. DOI: 10.1007/978-3-030-37453-2_55.
    Read the pre-print. Download the BiBTeX entry.

    The integration of business process management with blockchains across organisational borders provides a means to establish transparency of execution and auditing capabilities. To enable process analytics, though, non-trivial extraction and transformation tasks are necessary on the raw data stored in the ledger. In this paper, we describe our approach to retrieve process data from an Ethereum blockchain ledger and subsequently convert those data into an event log formatted according to the IEEE Extensible Event Stream (XES) standard. We show a proof-of-concept software artefact and its application on a data set produced by the smart contracts of a process execution engine stored on the public Ethereum blockchain network.
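
    As a rough sketch of the extraction idea (not the software artefact described in the paper), the snippet below scans a hypothetical block range for transactions sent to a given smart contract using the web3.py client and prints a simplified, XES-flavoured listing; the node URL, contract address, block range, attribute mapping, and the flat (trace-less) output are all assumptions for illustration.

      from web3 import Web3  # assumption: web3.py is installed and an Ethereum node is reachable

      NODE_URL = "http://localhost:8545"  # hypothetical JSON-RPC endpoint
      CONTRACT = "0x0000000000000000000000000000000000000000"  # hypothetical contract address

      w3 = Web3(Web3.HTTPProvider(NODE_URL))

      events = []
      for block_number in range(1_000_000, 1_000_100):  # hypothetical block range
          block = w3.eth.get_block(block_number, full_transactions=True)
          for tx in block["transactions"]:
              if tx["to"] and tx["to"].lower() == CONTRACT.lower():
                  raw_input = tx["input"]
                  # First bytes of the call data as a crude activity label
                  # (exact representation depends on the web3.py version).
                  selector = raw_input.hex()[:10] if hasattr(raw_input, "hex") else str(raw_input)[:10]
                  events.append({
                      "activity": selector,
                      "timestamp": block["timestamp"],
                      "case": tx["from"],  # crude case notion: the sending account
                  })

      # Emit a simplified, XES-flavoured listing (real XES groups events into traces
      # and uses typed attributes, e.g. <date> elements for timestamps).
      print('<log xmlns="http://www.xes-standard.org/">')
      for e in events:
          print('  <event>')
          print(f'    <string key="concept:name" value="{e["activity"]}"/>')
          print(f'    <string key="org:resource" value="{e["case"]}"/>')
          print(f'    <int key="time:timestamp" value="{e["timestamp"]}"/>')
          print('  </event>')
      print('</log>')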

  43. Suhrid Satyal, Ingo Weber, Hye-young Paik, Claudio Di Ciccio, Jan Mendling (2019) Business process improvement with the AB-BPM methodology. In: Information Systems, 84, 283-298. Elsevier. DOI: 10.1016/j.is.2018.06.007.
    Read the pre-print. Download the BiBTeX entry.

    A fundamental assumption of Business Process Management (BPM) is that redesign delivers refined and improved versions of business processes. This assumption, however, does not necessarily hold, and any required compensatory action may be delayed until a new round in the BPM life-cycle completes. Current approaches to process redesign face this problem in one way or another, which makes rapid process improvement a central research problem of BPM today. In this paper, we address this problem by integrating concepts from process execution with ideas from DevOps. More specifically, we develop a methodology called AB-BPM that offers process improvement validation in two phases: simulation and AB tests. Our simulation technique extracts decision probabilities and metrics from the event log of an existing process version and generates traces for the new process version based on this knowledge. The results of simulation guide us towards AB testing where two versions (A and B) are operational in parallel and any new process instance is routed to one of them. The routing decision is made at runtime on the basis of the achieved results for the registered performance metrics of each version. Our routing algorithm provides for ultimate convergence towards the best performing version, no matter if it is the old or the new version. We demonstrate the efficacy of our methodology and techniques by conducting an extensive evaluation based on both synthetic and real-life data.

  44. Claudio Di Ciccio, Fajar J. Ekaputra, Alessio Cecconi, Andreas Ekelhart, Elmar Kiesling (2019) Finding Non-compliances with Declarative Process Constraints through Semantic Technologies. In: CAiSE Forum 2019, 60-74, Springer. DOI: 10.1007/978-3-030-21297-1_6.
    Read the pre-print. Download the BiBTeX entry.

    Business process compliance checking enables organisations to assess whether their processes fulfil a given set of constraints, such as regulations, laws, or guidelines. Whilst many process analysts still rely on ad-hoc, often handcrafted per-case checks, a variety of constraint languages and approaches have been developed in recent years to provide automated compliance checking. A salient example is DECLARE, a well-established declarative process specification language based on temporal logics. DECLARE specifies the behaviour of processes through temporal rules that constrain the execution of tasks. So far, however, automated compliance checking approaches typically report compliance only at the aggregate level, using binary evaluations of constraints on execution traces. Consequently, their results lack granular information on violations and their context, which hampers auditability of process data for analytic and forensic purposes. To address this challenge, we propose a novel approach that leverages semantic technologies for compliance checking. Our approach proceeds in two stages. First, we translate DECLARE templates into statements in SHACL, a graph-based constraint language. Then, we evaluate the resulting constraints on the graph-based, semantic representation of process execution logs. We demonstrate the feasibility of our approach by testing its implementation on real-world event logs. Finally, we discuss its implications and future research directions.

  45. Han van der Aa, Claudio Di Ciccio, Henrik Leopold, Hajo A. Reijers (2019) Extracting Declarative Process Models from Natural Language. In: CAiSE 2019, 365-382, Springer. DOI: 10.1007/978-3-030-21290-2_23.
    Read the pre-print. Download the BiBTeX entry.

    Process models are an important means to capture information on organizational operations and often represent the starting point for process analysis and improvement. Since the manual elicitation and creation of process models is a time-intensive endeavor, a variety of techniques have been developed that automatically derive process models from textual process descriptions. However, these techniques, so far, only focus on the extraction of traditional, imperative process models. The extraction of declarative process models, which allow complex process behavior to be captured effectively in a compact fashion, has not been addressed. In this paper we close this gap by presenting the first automated approach for the extraction of declarative process models from natural language. To achieve this, we developed tailored Natural Language Processing techniques that identify activities and their inter-relations from textual constraint descriptions. A quantitative evaluation shows that our approach is able to generate constraints that closely resemble those established by humans. Therefore, our approach provides automated support for an otherwise tedious and complex manual endeavor.

  46. Claudio Di Ciccio, Alessio Cecconi, Marlon Dumas, Luciano García-Bañuelos, Orlenys López-Pintado, Qinghua Lu, Jan Mendling, Alexander Ponomarev, An Binh Tran, Ingo Weber (2019) Blockchain Support for Collaborative Business Processes. In: Informatik Spektrum, 42, 182–190. Springer. DOI: 10.1007/s00287-019-01178-x. (Open access)
    Read the pre-print. Download the BiBTeX entry.

    Blockchain technology provides basic building blocks to support the execution of collaborative business processes involving mutually untrusted parties in a decentralized environment. Several research proposals have demonstrated the feasibility of designing blockchain-based collaborative business processes using a high-level notation, such as the Business Process Model and Notation (BPMN), and thereon automatically generating the code artifacts required to execute these processes on a blockchain platform. In this paper, we present the conceptual foundations of model-driven approaches for blockchain-based collaborative process execution and we compare two concrete approaches, namely Caterpillar and Lorikeet.

  47. Svitlana Vakulenko, Kate Revoredo, Claudio Di Ciccio, Maarten de Rijke (2019) QRFA: A Data-Driven Model of Information-Seeking Dialogues. In: ECIR 2019, 541-557, Springer. DOI: 10.1007/978-3-030-15712-8_35. Best User Paper Award of the 41st European Conference on Information Retrieval (ECIR 2019).
    Read the pre-print. Download the BiBTeX entry.

    Understanding the structure of interaction processes helps us to improve information-seeking dialogue systems. Analyzing an interaction process boils down to discovering patterns in sequences of alternating utterances exchanged between a user and an agent. Process mining techniques have been successfully applied to analyze structured event logs, discovering the underlying process models or evaluating whether the observed behavior is in conformance with the known process. In this paper, we apply process mining techniques to discover patterns in conversational transcripts and extract a new model of information-seeking dialogues, QRFA, for Query, Request, Feedback, Answer. Our results are grounded in an empirical evaluation across multiple conversational datasets from different domains, which was never attempted before. We show that the QRFA model better reflects conversation flows observed in real information-seeking conversations than models proposed previously. Moreover, QRFA allows us to identify malfunctioning in dialogue system transcripts as deviations from the expected conversation flow described by the model via conformance analysis.

  48. Stefan Schönig, Claudio Di Ciccio, Jan Mendling (2019) Configuring SQL-based process mining for performance and storage optimisation. In: SAC 2019, 94-97, ACM. DOI: 10.1145/3297280.3297532.
    Read the pre-print. Download the BiBTeX entry.

    Process mining is the area of research that embraces the automated discovery, conformance checking and enhancement of process models. Declarative process mining approaches offer capabilities to automatically discover models of flexible processes from event logs. However, they often suffer from performance issues with real-life event logs, especially when constraints to be discovered go beyond a standard repertoire of templates. By leveraging relational database performance technology, a new approach based on SQL querying has been recently introduced, to improve performance while still keeping the nature of the discovered constraints customisable. In this paper, we provide an in-depth analysis of configuration parameters that allow for a speed-up of the answering time and a decrease of storage space needed for query processing. Thereupon, we provide configuration recommendations for process mining with SQL on relational databases.
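
    To give a flavour of what querying an event log stored in a relational database looks like (the schema, data, and query below are invented for illustration and are not the SQL-based miner evaluated in the paper), this sketch counts activations and fulfilments of the Declare constraint Response(a, b) with plain SQL:

      import sqlite3

      # Hypothetical flat event-log table: one row per recorded event.
      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE event_log (case_id TEXT, activity TEXT, ts INTEGER);
          INSERT INTO event_log VALUES
            ('c1', 'a', 1), ('c1', 'b', 2),
            ('c2', 'a', 1), ('c2', 'c', 2);
      """)

      # Activations of Response(a, b): occurrences of 'a'.
      # Fulfilments: activations followed by a later 'b' in the same case.
      query = """
          SELECT COUNT(*) AS activations,
                 SUM(EXISTS (SELECT 1 FROM event_log f
                             WHERE f.case_id = e.case_id
                               AND f.activity = 'b'
                               AND f.ts > e.ts)) AS fulfilments
          FROM event_log e
          WHERE e.activity = 'a';
      """
      activations, fulfilments = conn.execute(query).fetchone()
      print(f"Response(a, b): {fulfilments}/{activations} activations fulfilled")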

  49. Pnina Soffer, Annika Hinze, Agnes Koschmider, Holger Ziekow, Claudio Di Ciccio, Boris Koldehofe, Oliver Kopp, Arno Jacobsen, Jan Sürmeli, Wei Song (2019) From event streams to process models and back: Challenges and opportunities. In: Information Systems, 81, 181-200. Elsevier. DOI: 10.1016/j.is.2017.11.002.
    Read the pre-print. Download the BiBTeX entry.

    The domains of complex event processing (CEP) and business process management (BPM) have different origins but for many aspects draw on similar concepts. While specific combinations of BPM and CEP have attracted research attention, resulting in solutions to specific problems, we attempt to take a broad view at the opportunities and challenges involved. We first illustrate these by a detailed example from the logistics domain. We then propose a mapping of this area into four quadrants – two quadrants drawing from CEP to create or extend process models and two quadrants starting from a process model to address how it can guide CEP. Existing literature is reviewed and specific challenges and opportunities are indicated for each of these quadrants. Based on this mapping, we identify challenges and opportunities that recur across quadrants and can be considered as the core issues of this combination. We suggest that addressing these issues in a generic manner would form a sound basis for future applications and advance this area significantly.

  50. Anton Yeshchenko, Claudio Di Ciccio, Jan Mendling, Artem Polyvyanyy (2019) Comprehensive Process Drift Analysis with the Visual Drift Detection Tool. In: ER Forum and Poster & Demos Session 2019, 108-112, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    Recent research has introduced ideas from concept drift into process mining to enable the analysis of changes in business processes over time. This stream of research, however, has not yet addressed the challenges of drift categorization, drilling-down, and quantification. In this tool demonstration paper, we present a novel software tool to analyze process drifts, called Visual Drift Detection (VDD), which fulfills these requirements. The tool is of benefit to the researchers and practitioners in the business intelligence and process analytics area, and can constitute a valuable aid to those who are involved in business process redesign endeavors.

  51. Amr Azzam, Peb Ruswono Aryan, Alessio Cecconi, Claudio Di Ciccio, Fajar J. Ekaputra, Javier D. Fernández, Sotiris Karampatakis, Elmar Kiesling, Angelika Musil, Marta Sabou, Pujan Shadlau, Thomas Thurner (2019) The CitySPIN Platform: A CPSS Environment for City-Wide Infrastructures. In: CPSS@IoT 2019, 57-64, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    Cyber-physical Social Systems (CPSS) are complex systems that span the boundaries of the cyber, physical and social spheres. They play an important role in a variety of domains ranging from industry to smart city applications. As such, these systems necessarily need to take into account, combine and make sense of heterogeneous data sources from legacy systems, from the physical layer and also the social groups that are part of/use the system. The collection, cleansing and integration of these data sources represents a major effort not only during the operation of the system, but also during its engineering and design. Indeed, while ongoing efforts are concerned primarily with the operation of such systems, limited focus has been put on supporting the engineering phase of CPSS. To address this shortcoming, within the CitySPIN project we aim to create a platform that supports stakeholders involved in the design of these systems especially in terms of support for data management. To that end, we develop methods and techniques based on Semantic Web and Linked Data technologies for the acquisition and integration of heterogeneous data from disparate structured, semi-structured and unstructured sources, including open data and social data. In this paper we present the overall system architecture with a core focus on data acquisition and integration. We demonstrate our approach through a prototypical implementation of an adaptive planning use case for public transportation scheduling.

  52. Claudio Di Ciccio, Fabrizio Maria Maggi, Marco Montali, Jan Mendling (2018) On the Relevance of a Business Constraint to an Event Log. In: Information Systems, 78, 144-161. Elsevier. DOI: 10.1016/j.is.2018.01.011.
    Read the pre-print. Download the BiBTeX entry.

    Declarative process modeling languages such as Declare describe the behavior of processes by means of constraints. Such constraints exert rules on the execution of tasks upon the execution of other tasks called activations. The constraint is thus fulfilled either if it is activated and the consequent rule is respected, or if it is not activated at all. The latter case, named vacuous satisfaction, is clearly less interesting than the former. Such a distinction becomes of utmost importance in the context of declarative process mining techniques, where processes are analyzed based on the identification of the most relevant constraints valid in an event log. Unfortunately, this notion of relevance has never been formally defined, and all the proposals existing in the literature use ad-hoc definitions that are only applicable to a pre-defined set of constraint patterns. This makes existing declarative process mining techniques inapplicable when the target constraint language is extensible, and may contain formulae that go beyond the pre-defined patterns. In this paper, we tackle this open challenge, and show how the notion of constraint activation and vacuous satisfaction can be captured semantically, in the case of constraints expressed in arbitrary temporal logics over finite traces. Our solution relies on the annotation of finite state automata to incorporate relevance-related information. We discuss the formal grounding of our approach and describe the implementation thereof. We finally report on experimental results gathered from the application of our approach to real-life data, which show the advantages and feasibility of our solution.
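
    A toy example may help picture the notions of activation and vacuous satisfaction (this is a naive trace scan, not the annotated-automata technique proposed in the paper): for a Response-like rule "whenever a occurs, b occurs later", a trace with no a satisfies the rule only vacuously.

      def activations_and_fulfilments(trace, activation="a", target="b"):
          """Naively count activations of a Response-like rule and how many
          of them are fulfilled by a later occurrence of the target."""
          acts = fulfils = 0
          for i, event in enumerate(trace):
              if event == activation:
                  acts += 1
                  if target in trace[i + 1:]:
                      fulfils += 1
          return acts, fulfils

      log = [["a", "c", "b"], ["c", "c"], ["a", "c"]]
      for trace in log:
          acts, fulfils = activations_and_fulfilments(trace)
          status = "vacuously satisfied" if acts == 0 else (
              "satisfied" if fulfils == acts else "violated")
          print(trace, "->", status)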

  53. Suhrid Satyal, Ingo Weber, Hye-young Paik, Claudio Di Ciccio, Jan Mendling (2018) Shadow Testing for Business Process Improvement. In: CoopIS 2018, 153-171, Springer. DOI: 10.1007/978-3-030-02610-3_9.
    Read the pre-print. Download the BiBTeX entry.

    A fundamental assumption of improvement in Business Process Management (BPM) is that redesigns deliver refined and improved versions of business processes. These improvements can be validated online through sequential experiment techniques like AB Testing, as we have shown in earlier work. Such approaches have the inherent risk of exposing customers to an inferior process version during the early stages of the test. This risk can be managed by offline techniques like simulation. However, offline techniques do not validate the improvements because there is no user interaction with the new versions. In this paper, we propose a middle ground through shadow testing, which avoids the downsides of simulation and direct execution. In this approach, a new version is deployed and executed alongside the current version, but in such a way that the new version is hidden from the customers and process workers. Copies of user requests are partially simulated and partially executed by the new version as if it were running in production. We present an architecture, algorithm, and implementation of the approach, which isolates new versions from production, facilitates fair comparison, and manages the overhead of running shadow tests. We demonstrate the efficacy of our technique by evaluating the executions of synthetic and realistic process redesigns.

  54. Alessio Cecconi, Claudio Di Ciccio, Giuseppe De Giacomo, Jan Mendling (2018) Interestingness of traces in declarative process mining: The Janus LTLpf approach. In: BPM 2018, 121-138, Springer. DOI: 10.1007/978-3-319-98648-7_8.
    Read the pre-print. Download the BiBTeX entry.

    Declarative process mining is the set of techniques aimed at extracting behavioural constraints from event logs. These constraints are inherently of a reactive nature, in that their activation restricts the occurrence of other activities. In this way, they are prone to the principle of ex falso quod libet: they can be satisfied even when not activated. As a consequence, constraints can be mined that are hardly interesting to users or even potentially misleading. In this paper, we build on the observation that users typically read and write temporal constraints as if-statements with an explicit indication of the activation condition. Our approach is called Janus, because it permits the specification and verification of reactive constraints that, upon activation, look forward into the future and backwards into the past of a trace. Reactive constraints are expressed using Linear-time Temporal Logic with Past on Finite Traces (LTLpf). To mine them out of event logs, we devise a time bidirectional valuation technique based on triplets of automata operating in an on-line fashion. Our solution proves efficient, being at most quadratic w.r.t. trace length, and effective in recognising interestingness of discovered constraints.

  55. Claudio Di Ciccio, Alessio Cecconi, Jan Mendling, Dominik Felix, Dominik Haas, Daniel Lilek, Florian Riel, Andreas Rumpl, Philipp Uhlig (2018) Blockchain-Based Traceability of Inter-organisational Business Processes. In: BMSD 2018, 56-68, Springer. DOI: 10.1007/978-3-319-94214-8_4.
    Read the pre-print. Download the BiBTeX entry.

    Blockchain technology opens up new opportunities for Business Process Management. This is mainly due to its unprecedented capability to let transactions be automatically executed and recorded by Smart Contracts in multi-peer environments, in a decentralised fashion and without central authoritative players to govern the workflow. In this way, blockchains also provide traceability. Traceability of information plays a pivotal role particularly in those supply chains where multiple parties are involved and rigorous criteria must be fulfilled to lead to a successful outcome. In this paper, we investigate how to run a business process in the context of a supply chain on a blockchain infrastructure so as to provide full traceability of its run-time enactment. Our approach retrieves information to trace process instances execution solely from the transactions written on-chain. To do so, hash-codes are reverse-engineered based on the Solidity Smart Contract encoding of the generating process. We show the results of our investigation by means of an implemented software prototype, with a case study on the reportedly challenging context of the pharmaceutical supply chain.

  56. Suhrid Satyal, Ingo Weber, Hye-young Paik, Claudio Di Ciccio, Jan Mendling (2018) AB Testing for Process Versions with Contextual Multi-armed Bandit Algorithms. In: CAiSE 2018, 19-34, Springer. DOI: 10.1007/978-3-319-91563-0_2.
    Read the pre-print. Download the BiBTeX entry.

    Business process improvement ideas can be validated through sequential experiment techniques like AB Testing. Such approaches have the inherent risk of exposing customers to an inferior process version, which is why the inferior version should be discarded as quickly as possible. In this paper, we propose a contextual multi-armed bandit algorithm that can observe the performance of process versions and dynamically adjust the routing policy so that the customers are directed to the version that can best serve them. Our algorithm learns the best routing policy in the presence of complications such as multiple process performance indicators, delays in indicator observation, incomplete or partial observations, and contextual factors. We also propose a pluggable architecture that supports such routing algorithms. We evaluate our approach with a case study. Furthermore, we demonstrate that our approach identifies the best routing policy given the process performance and that it scales horizontally.
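
    The routing idea can be pictured with a deliberately simplified sketch (an epsilon-greedy policy rather than the contextual multi-armed bandit algorithm of the paper; the version names, reward signal, and epsilon value are assumptions):

      import random

      class EpsilonGreedyRouter:
          """Toy instance router: mostly sends new process instances to the
          version with the best average observed reward, but keeps exploring."""

          def __init__(self, versions=("A", "B"), epsilon=0.1):
              self.epsilon = epsilon
              self.counts = {v: 0 for v in versions}
              self.rewards = {v: 0.0 for v in versions}

          def route(self):
              if random.random() < self.epsilon:
                  return random.choice(list(self.counts))  # explore
              # Prefer unexplored versions, then the best average reward so far.
              return max(self.counts,
                         key=lambda v: float("inf") if self.counts[v] == 0
                         else self.rewards[v] / self.counts[v])

          def observe(self, version, reward):
              # In AB testing of processes, rewards may arrive late or only partially;
              # here the feedback is assumed to be immediate and complete.
              self.counts[version] += 1
              self.rewards[version] += reward

      router = EpsilonGreedyRouter()
      chosen = router.route()      # route a new process instance to version A or B
      router.observe(chosen, 0.8)  # feed back its (normalised) performance indicator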

  57. Stefan Schönig, Cristina Cabanillas, Claudio Di Ciccio, Stefan Jablonski, Jan Mendling (2018) Mining team compositions for collaborative work in business processes. In: Software & Systems Modeling, 17 (2), 675-693. Springer. DOI: 10.1007/s10270-016-0567-4.
    Read the pre-print. Download the BiBTeX entry.

    Process mining aims at discovering processes by extracting knowledge about their different perspectives from event logs. The resource perspective (or organisational perspective) deals, among others, with the assignment of resources to process activities. Mining in relation to this perspective aims to extract rules on resource assignments for the process activities. Prior research in this area is limited by the assumption that only one resource is responsible for each process activity, and hence, collaborative activities are disregarded. In this paper, we lift this assumption by developing a process mining approach that is able to discover team compositions for collaborative process activities from event logs. We evaluate our novel mining approach in terms of computational performance and practical applicability.

  58. Thomas Baier, Claudio Di Ciccio, Jan Mendling, Mathias Weske (2018) Matching events and activities by integrating behavioral aspects and label analysis. In: Software & Systems Modeling, 17 (2), 573-598. Springer. DOI: 10.1007/s10270-017-0603-z. (Open access)
    Read the pre-print. Download the BiBTeX entry.

    Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. These event data can be used to analyze the process using process mining techniques to discover the real process, measure conformance to a given process model, or to enhance existing models with performance information. Mapping the produced events to activities of a given process model is essential for conformance checking, annotation and understanding of process mining results. In order to accomplish this mapping with low manual effort, we developed a semi-automatic approach that maps events to activities using insights from behavioral analysis and label analysis. The approach extracts Declare constraints from both the log and the model to build matching constraints to efficiently reduce the number of possible mappings. These mappings are further reduced using techniques from natural language processing, which allow for a matching based on labels and external knowledge sources. The evaluation with synthetic and real-life data demonstrates the effectiveness of the approach and its robustness toward non-conforming execution logs.

  59. Fabrizio Maria Maggi, Claudio Di Ciccio, Chiara Di Francescomarino, Taavi Kala (2018) Parallel algorithms for the automated discovery of declarative process models. In: Information Systems, 74, 136-152. Elsevier. DOI: 10.1016/j.is.2017.12.002.
    Read the pre-print. Download the BiBTeX entry.

    The aim of process discovery is to build a process model from an event log without prior information about the process. The discovery of declarative process models is useful when a process works in an unpredictable and unstable environment since several allowed paths can be represented as a compact set of rules. One of the tools available in the literature for discovering declarative models from logs is the Declare Miner, a plug-in of the process mining tool ProM. Using this plug-in, the discovered models are represented using Declare, a declarative process modeling language based on LTL for finite traces. However, the high execution times of the Declare Miner when processing large sets of data hampers the applicability of the tool to real-life settings. Therefore, in this paper, we propose a new approach for the discovery of Declare models based on the combination of an Apriori algorithm and a group of algorithms for Sequence Analysis to enhance the time performance of the plug-in. The approach has been developed in a way that makes it easy to parallelize, using two different partitioning methods: the search space partitioning, in which different groups of candidate constraints are processed in parallel, and the database partitioning, in which different chunks of the log are processed at the same time. The approach has been implemented in ProM in its sequential version and in two multi-threading implementations leveraging these two partitioning methods. All the new variants of the plug-in have been evaluated using a large set of synthetic and real-life event logs.

  60. Jan Mendling, Ingo Weber, Wil van der Aalst, Jan vom Brocke, Cristina Cabanillas, Florian Daniel, Søren Debois, Claudio Di Ciccio, Marlon Dumas, Schahram Dustdar, Avigdor Gal, Luciano García-Bañuelos, Guido Governatori, Richard Hull, Marcello La Rosa, Henrik Leopold, Frank Leymann, Jan Recker, Manfred Reichert, Hajo A. Reijers, Stefanie Rinderle-Ma, Andreas Solti, Michael Rosemann, Stefan Schulte, Munindar P. Singh, Tijs Slaats, Mark Staples, Barbara Weber, Matthias Weidlich, Mathias Weske, Xiwei Xu, Liming Zhu (2018) Blockchains for Business Process Management - Challenges and Opportunities. In: ACM Trans. Manage. Inf. Syst., 9 (1), 4:1-4:16. ACM. DOI: 10.1145/3183367.
    Read the pre-print. Download the BiBTeX entry.

    Blockchain technology offers a sizable promise to rethink the way interorganizational business processes are managed because of its potential to realize execution without a central party serving as a single point of trust (and failure). To stimulate research on this promise and the limits thereof, in this article, we outline the challenges and opportunities of blockchain for business process management (BPM). We first reflect how blockchains could be used in the context of the established BPM lifecycle and second how they might become relevant beyond. We conclude our discourse with a summary of seven research directions for investigating the application of blockchain technology in the context of BPM.

  61. Giovanni Meroni, Claudio Di Ciccio, Jan Mendling (2017) An Artifact-Driven Approach to Monitor Business Processes Through Real-World Objects. In: ICSOC 2017, 297-313, Springer. DOI: 10.1007/978-3-319-69035-3_21.
    Read the pre-print. Download the BiBTeX entry.

    Nowadays, many business processes once intra-organizational are becoming inter-organizational. Thus, being able to monitor how such processes are performed, including portions carried out by service providers, is paramount. Yet, traditional process monitoring techniques present some shortcomings when dealing with inter-organizational processes. In particular, they require human operators to notify when business activities are performed, and to stop the process when it is not executed as expected. In this paper, we address these issues by proposing an artifact-driven monitoring service, capable of autonomously and continuously monitoring inter-organizational processes. To do so, this service relies on the state of the artifacts (i.e., physical entities) participating in the process, represented using the E-GSM notation. A working prototype of this service is presented and validated using real-world processes and data from the logistics domain.

  62. Johannes De Smedt, Claudio Di Ciccio, Jan Vanthienen, Jan Mendling (2017) Model Checking of Mixed-Paradigm Process Models in a Discovery Context - Finding the Fit Between Declarative and Procedural. In: BPM workshops 2017, 74-86, Springer. DOI: 10.1007/978-3-319-58457-7_6.
    Read the pre-print. Download the BiBTeX entry.

    The act of retrieving process models from event-based data logs can offer valuable information to business owners. Many approaches have been proposed for this purpose, mining for either a procedural or declarative outcome. A blended approach that combines both process model paradigms exists and offers a great way to deal with process environments which consist of different layers of flexibility. In this paper, it will be shown how to check such models for correctness, and how this checking can contribute to retrieving the models as well. The approach is based on intersecting both parts of the model and provides an effective way to check (i) whether the behavior is aligned, and (ii) where the model can be improved according to errors that arise along the respective paradigms. To this end, we extend the functionality of Fusion Miner, a mixed-paradigm process miner, so as to inspect how much flexibility is right for the event log. The procedure is demonstrated with an implemented model checker and verified on real-life event logs.

  63. Suhrid Satyal, Ingo Weber, Hye-young Paik, Claudio Di Ciccio, Jan Mendling (2017) AB-BPM: Performance-Driven Instance Routing for Business Process Improvement. In: BPM 2017, 113-129, Springer. DOI: 10.1007/978-3-319-65000-5_7.
    Read the pre-print. Download the BiBTeX entry.

    A fundamental assumption of Business Process Management (BPM) is that redesign delivers new and improved versions of business processes. This assumption, however, does not necessarily hold, and required compensatory action may be delayed until a new round in the BPM life-cycle completes. Current approaches to process redesign face this problem in one way or another, which makes rapid process improvement a central research problem of BPM today. In this paper, we address this problem by integrating concepts from process execution with ideas from DevOps. More specifically, we develop a technique called AB-BPM that offers AB testing for process versions with immediate feedback at runtime. We implemented this technique in such a way that two versions (A and B) are operational in parallel and any new process instance is routed to one of them. The routing decision is made at runtime on the basis of the achieved results for the registered performance metrics of each version. AB-BPM provides for ultimate convergence towards the best performing version, no matter if it is the old or the new version. We demonstrate the efficacy of our technique by conducting an extensive evaluation based on both synthetic and real-life data.

  64. Luciano Baresi, Claudio Di Ciccio, Jan Mendling, Giovanni Meroni, Pierluigi Plebani (2017) mArtifact: an Artifact-driven Process Monitoring Platform. In: BPM (Demos) 2017, 1-5, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    Traditionally, human intervention is required to monitor a business process. Operators notify when manual activities are executed, and manually restart the monitoring whenever the process is not executed as expected. This paper presents mArtifact, an artifact-driven process monitoring platform. mArtifact uses the E-GSM artifact-centric language to represent the process. This way, when a violation occurs, it can flag the affected activities without halting the monitoring. By predicating on the conditions of the physical artifacts participating in a process, mArtifact autonomously detects when activities are executed and constraints are violated. The audience is expected to be familiar with business process monitoring and artifact-centric modeling languages.

  65. Christian Sturm, Stefan Schönig, Claudio Di Ciccio (2017) Distributed Multi-Perspective Declare Discovery. In: BPM (Demos) 2017, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    Declarative process models define the behaviour of processes by means of constraints exerted over their activities. Multi-perspective declarative approaches extend the expressiveness of those constraints to include resources, time, and information artefacts. In this paper, we present a fast distributed approach and software prototype to discover multi-perspective declarative models out of event logs, based upon parallel computing. The demo is targeted at process mining researchers and practitioners, and describes the tool through its application on a use case, based on a publicly available real-life benchmark.

  66. Giovanni Meroni, Claudio Di Ciccio, Jan Mendling (2017) Artifact-driven Process Monitoring: Dynamically Binding Real-world Objects to Running Processes. In: CAiSE Forum 2017, 105-112, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    Monitoring inter-organizational business processes requires explicit knowledge about when activities start and complete. This is a challenge because no single system controls the process, activities might not be directly recorded, and the overall course of execution might only be determined at runtime. In this paper, we address these problems by integrating process monitoring with sensor data from real-world objects. We formalize our approach using the E-GSM artifact-centric language. Since the association between real-world objects and process instances is often only determined at runtime, our approach also caters for dynamic binding and unbinding at runtime.

  67. Stefan Bachhofner, Isabella Kis, Claudio Di Ciccio, Jan Mendling (2017) Towards a Multi-parametric Visualisation Approach for Business Process Analytics. In: CAiSE Workshops 2017, 85-91, Springer. DOI: 10.1007/978-3-319-60048-2_8.
    Read the pre-print. Download the BiBTeX entry.

    Visualisation is an integral part of many scientific areas and is reportedly an important tool for learning and teaching. One reason for this is the picture superiority effect. Nevertheless, little research endeavour has been carried out so far to effectively apply visualisation principles to the emerging field of business process analytics. In this paper a novel multi-parametric visualisation approach is proposed in such a context. General visualisation principles are used to create, evaluate, and improve the approach in the design process. They are drawn from a wide range of fields, and are synthesised from theory and empirical evidence.

  68. Isabella Kis, Stefan Bachhofner, Claudio Di Ciccio, Jan Mendling (2017) Towards a Data-Driven Framework for Measuring Process Performance. In: BPMDS/EMMSAD 2017, 3-18, Springer. DOI: 10.1007/978-3-319-59466-8_1.
    Read the pre-print. Download the BiBTeX entry.

    Studies have shown that the focus of Business Process Management (BPM) mainly lies on process discovery and process implementation & execution. In contrast, process analysis, i.e., the measurement of process performance, has been mostly neglected in the field of process science so far. However, in order to be viable in the long run, a process' performance has to be made evaluable. To enable this kind of analysis, the suggested approach in this idea paper builds upon the well-established notion of the devil's quadrangle. The quadrangle depicts the process performance according to four dimensions (time, cost, quality and flexibility), thus allowing for a meaningful assessment of the process. In the course of this paper, a framework for the measurement of each dimension is proposed, based on the analysis of process execution data. A trailing example is provided that illustrates the presented concepts in a tangible, realistic scenario.

  69. Claudio Di Ciccio, Fabrizio Maria Maggi, Marco Montali, Jan Mendling (2017) Resolving inconsistencies and redundancies in declarative process models. In: Information Systems, 64, 425-446. Elsevier. DOI: 10.1016/j.is.2016.09.005.
    Read the pre-print. Download the BiBTeX entry.

    Declarative process models define the behaviour of business processes as a set of constraints. Declarative process discovery aims at inferring such constraints from event logs. Existing discovery techniques verify the satisfaction of candidate constraints over the log, but completely neglect their interactions. As a result, the inferred constraints can be mutually contradicting and their interplay may lead to an inconsistent process model that does not accept any trace. In such a case, the output turns out to be unusable for enactment, simulation or verification purposes. In addition, the discovered model contains, in general, redundancies that are due to complex interactions of several constraints and that cannot be cured using existing pruning approaches. We address these problems by proposing a technique that automatically resolves conflicts within the discovered models and is more powerful than existing pruning techniques to eliminate redundancies. First, we formally define the problems of constraint redundancy and conflict resolution. Second, we introduce techniques based on the notion of automata-product monoid, which guarantees the consistency of the discovered models and, at the same time, keeps the most interesting constraints in the pruned set. The level of interestingness is dictated by user-specified prioritisation criteria. We evaluate the devised techniques on a set of real-world event logs.

  70. Matej Puchovsky, Claudio Di Ciccio, Jan Mendling (2016) A Case Study on the Business Benefits of Automated Process Discovery. In: SIMPDA 2016, 35-49, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    Automated process discovery represents the knowledge extraction phase of process mining. By exploiting transactional data from information systems, it aims to extract valuable process knowledge. Through process mining, an important link between two disciplines - data mining and business process management - has been established. However, while methods of both data mining and process management are well-established in practice, the potential of process mining for evaluation of business operations has only been recently recognised outside academia. Our quantitative analysis of real-life event log data investigates both the performance and social dimensions of a selected core business process of an Austrian IT service company. It shows that organisations can substantially benefit from adopting automated process discovery methods to visualise, understand and evaluate their processes. This is of particular relevance in today's world of data-driven decision making.

  71. Stefan Schönig, Cristina Cabanillas, Claudio Di Ciccio, Stefan Jablonski, Jan Mendling (2016) Mining Resource Assignments and Teamwork Compositions from Process Logs. In: Softwaretechnik-Trends, 36 (4).
    Read the pre-print. Download the BiBTeX entry.

    Process mining aims at discovering processes by extracting knowledge from event logs. Such knowledge may refer to different business process perspectives. The organisational perspective deals, among other things, with the assignment of human resources to process activities. Information about the resources that are involved in process activities can be mined from event logs in order to discover resource assignment conditions. This is valuable for process analysis and redesign. Prior process mining approaches in this context present one of the following issues: (i) they are limited to discovering a restricted set of resource assignment conditions; (ii) they are not fully efficient; (iii) the discovered process models are difficult to read due to the high number of assignment conditions included; or (iv) they are limited by the assumption that only one resource is responsible for each process activity and hence, collaborative activities are disregarded. To overcome these issues, we present an integrated process mining framework that provides extensive support for the discovery of resource assignment and teamwork patterns.

  72. Stefan Schönig, Claudio Di Ciccio, Fabrizio Maria Maggi, Jan Mendling (2016) Discovery of Multi-perspective Declarative Process Models. In: ICSOC 2016, 87-103, Springer. DOI: 10.1007/978-3-319-46295-0_6.
    Read the pre-print. Download the BiBTeX entry.

    Process discovery is one of the main branches of process mining that allows the user to build a process model representing the process behavior as recorded in the logs. Standard process discovery techniques produce as output a procedural process model (e.g., a Petri net). Recently, several approaches have been developed to derive declarative process models from logs and have been proven to be more suitable to analyze processes working in environments that are less stable and predictable. However, a large part of these techniques are focused on the analysis of the control flow perspective of a business process. Therefore, one of the challenges still open in this field is the development of techniques for the analysis of business processes also from other perspectives, like data, time, and resources. In this paper, we present a full-fledged approach for the discovery of multi-perspective declarative process models from event logs that allows the user to discover declarative models taking into consideration all the information an event log can provide. The approach has been implemented and experimented in real-life case studies.

  73. Anne Baumgraß, Mirela Botezatu, Claudio Di Ciccio, Remco Dijkman, Paul Grefen, Marcin Hewelt, Jan Mendling, Andreas Meyer, Shaya Pourmirza, Hagen Völzer (2016) Towards a Methodology for the Engineering of Event-Driven Process Applications. In: BPM Workshops 2016, 501-514, Springer International Publishing. DOI: 10.1007/978-3-319-42887-1_40.
    Read the pre-print. Download the BiBTeX entry.

    Successful applications of the Internet of Things such as smart cities, smart logistics, and predictive maintenance, build on observing and analyzing business-related objects in the real world for business process execution and monitoring. In this context, complex event processing is increasingly used to integrate events from sensors with events stemming from business process management systems. This paper describes a methodology to combine the areas and engineer an event-driven logistics process application. Thereby, we describe the requirements, use cases and lessons learned to design and implement such an architecture.

  74. Taavi Kala, Fabrizio Maria Maggi, Claudio Di Ciccio, Chiara Di Francescomarino (2016) Apriori and Sequence Analysis for Discovering Declarative Process Models. In: EDOC 2016, 50-58, IEEE. DOI: 10.1109/EDOC.2016.7579378.
    Read the pre-print. Download the BiBTeX entry.

    The aim of process discovery is to build a process model from an event log without prior information about the process. The discovery of declarative process models is useful when a process works in an unpredictable and unstable environment since several allowed paths can be represented as a compact set of rules. One of the tools available in the literature for discovering declarative models from logs is the Declare Miner, a plug-in of the process mining tool ProM. Using this plug-in, the discovered models are represented using Declare, a declarative process modelling language based on LTL for finite traces. In this paper, we use a combination of an Apriori algorithm and a group of algorithms for Sequence Analysis to improve the performance of the Declare Miner. Using synthetic and real life event logs, we show that the new implemented core of the plug-in allows for a significant performance improvement.

  75. Fabrizio Maria Maggi, Marco Montali, Claudio Di Ciccio, Jan Mendling (2016) Semantical Vacuity Detection in Declarative Process Mining. In: BPM 2016, 158-175, Springer. DOI: 10.1007/978-3-319-45348-4_10.
    Read the pre-print. Download the BiBTeX entry.

    A large share of the literature on process mining based on declarative process modeling languages, like DECLARE, relies on the notion of constraint activation to distinguish between the case in which a process execution recorded in event data “vacuously” satisfies a constraint, or satisfies the constraint in an “interesting way”. This fine-grained indicator is then used to decide whether a candidate constraint supported by the analyzed event log is indeed relevant or not. Unfortunately, this notion of relevance has never been formally defined, and all the proposals existing in the literature use ad-hoc definitions that are only applicable to a pre-defined set of constraint patterns. This makes existing declarative process mining techniques inapplicable when the target constraint language is extensible and may contain formulae that go beyond pre-defined patterns. In this paper, we tackle this hot, open challenge and show how the notion of constraint activation and vacuous satisfaction can be captured semantically, in the case of constraints expressed in arbitrary temporal logics over finite traces. We then extend the standard automata-based approach so as to incorporate relevance-related information. We finally report on an implementation and experimentation of the approach that confirms the advantages and feasibility of our solution.

  76. Claudio Di Ciccio, Han van der Aa, Cristina Cabanillas, Jan Mendling, Johannes Prescher (2016) Detecting flight trajectory anomalies and predicting diversions in freight transportation. In: Decision Support Systems, 88, 1-17. Elsevier. DOI: 10.1016/j.dss.2016.05.004.
    Read the pre-print. Download the BiBTeX entry.

    Timely identifying flight diversions is a crucial aspect of efficient multi-modal transportation. When an airplane diverts, logistics providers must promptly adapt their transportation plans in order to ensure proper delivery despite such an unexpected event. In practice, the different parties in a logistics chain do not exchange real-time information related to flights. This calls for a means to detect diversions that just requires publicly available data, thus being independent of the communication between different parties. The dependence on public data results in a challenge to detect anomalous behavior without knowing the planned flight trajectory. Our work addresses this challenge by introducing a prediction model that just requires information on an airplane's position, velocity, and intended destination. This information is used to distinguish between regular and anomalous behavior. When an airplane displays anomalous behavior for an extended period of time, the model predicts a diversion. A quantitative evaluation shows that this approach is able to detect diverting airplanes with excellent precision and recall even without knowing planned trajectories as required by related research. By utilizing the proposed prediction model, logistics companies gain a significant amount of response time for these cases.

  77. Claudio Di Ciccio, Fabrizio Maria Maggi, Jan Mendling (2016) Efficient discovery of Target-Branched Declare constraints. In: Information Systems, 56, 258-283. Elsevier. DOI: 10.1016/j.is.2015.06.009.
    Read the pre-print. Download the BiBTeX entry.

    Process discovery is the task of generating process models from event logs. Mining processes that operate in an environment of high variability is an ongoing research challenge because various algorithms tend to produce spaghetti-like process models. This is particularly the case when procedural models are generated. A promising direction to tackle this challenge is the usage of declarative process modelling languages like Declare, which summarise complex behaviour in a compact set of behavioural constraints on activities. A Declare constraint is branched when one of its parameters is the disjunction of two or more activities. For example, branched Declare can be used to express rules like “in a bank, a mortgage application is always eventually followed by a notification to the applicant by phone or by a notification by e-mail”. However, branched Declare constraints are expensive to be discovered. In addition, it is often the case that hundreds of branched Declare constraints are valid for the same log, thus making, again, the discovery results unreadable. In this paper, we address these problems from a theoretical angle. More specifically, we define the class of Target-Branched Declare constraints and investigate the formal properties it exhibits. Furthermore, we present a technique for the efficient discovery of compact Target-Branched Declare models. We discuss the merits of our work through an evaluation based on a prototypical implementation using both artificial and real-life event logs.

  78. Michael Hanser, Claudio Di Ciccio, Jan Mendling (2016) A New Notational Framework for Declarative Process Modeling. In: Softwaretechnik-Trends, 36 (2), 53-56.
    Read the pre-print. Download the BiBTeX entry.

    In order to capture flexible scenarios, a declarative approach to business process modeling describes constraints that limit a process' behavior instead of specifying all its allowed enactments. However, current graphical notations for declarative processes are tough to understand, thus hampering a widespread usage of the approach. To overcome this issue, we present a novel notational framework for representing declarative processes, devised in compliance with well-known notation design principles.

  79. Michael Hanser, Claudio Di Ciccio, Jan Mendling (2016) A Novel Framework for Visualizing Declarative Process Models. In: ZEUS 2016, 5-12, CEUR-WS.org. Best Presentation Award at the 8th Central-European Workshop on Services and their Composition (ZEUS 2016).
    Read the pre-print. Download the BiBTeX entry.

    The declarative approach to business process modeling has been introduced to deal with the issue of managing flexible processes. Instead of explicitly representing all the allowed enactments of a process, the approach describes the constraints that limit its behavior. However, current graphical notations for declarative processes are prone to be difficult to understand, thus hampering a widespread usage of the approach. To overcome this issue, we present a novel notation framework for visualizing declarative processes, which is devised in compliance with well-known notation design principles.

  80. Anne Baumgrass, Cristina Cabanillas, Claudio Di Ciccio (2015) A Conceptual Architecture for an Event-based Information Aggregation Engine in Smart Logistics. In: EMISA 2015, 109-123, GI.
    Read the pre-print. Download the BiBTeX entry.

    The field of Smart Logistics is attracting interest in several areas of research, including Business Process Management. A wide range of research works are carried out to enhance the capability of monitoring the execution of ongoing logistics processes and predict their likely evolvement. In order to do this, it is crucial to have in place an IT infrastructure that provides the capability of automatically intercepting the digitalised transportation-related events stemming from widespread sources, along with their elaboration, interpretation and dispatching. In this context, we present here the service-oriented software architecture of such an event-based information engine. In particular, we describe the requisites that it must meet. Thereafter, we present the interfaces and subsequently the service-oriented components that are in charge of realising them. The outlined architecture is being utilised as the reference model for an ongoing European research project on Smart Logistics, namely GET Service.

  81. Anne Baumgrass, Claudio Di Ciccio, Remco M. Dijkman, Marcin Hewelt, Jan Mendling, Andreas Meyer, Shaya Pourmirza, Mathias Weske, Tsun Yin Wong (2015) GET Controller and UNICORN: Event-driven Process Execution and Monitoring in Logistics. In: BPM (Demos) 2015, 75-79, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    Especially in logistics, process instances often interact with their real-world environment during execution. This is challenging due to the fact that events from this environment are often heterogeneous, lack process instance information, and their import and visualisation in traditional process engines is not sufficiently supported. To address these challenges, we implemented GET Controller and UNICORN, two systems that together enable event-driven process execution and monitoring. Their application is shown for a logistics scenario.

  82. Claudio Di Ciccio, Fabrizio Maria Maggi, Marco Montali, Jan Mendling (2015) Ensuring Model Consistency in Declarative Process Discovery. In: BPM 2015, 144-159, Springer. DOI: 10.1007/978-3-319-23063-4_9. Best Paper Award of the 13th Int. Conference on Business Process Management (BPM 2015).
    Read the pre-print. Download the BiBTeX entry.

    Declarative process models define the behaviour of business processes as a set of constraints. Declarative process discovery aims at inferring such constraints from event logs. Existing discovery techniques verify the satisfaction of candidate constraints over the log, but completely neglect their interactions. As a result, the inferred constraints can be mutually contradicting and their interplay may lead to an inconsistent process model that does not accept any trace. In such a case, the output turns out to be unusable for enactment, simulation or verification purposes. In addition, the discovered model contains, in general, redundancies that are due to complex interactions of several constraints and that cannot be solved using existing pruning approaches. We address these problems by proposing a technique that automatically resolves conflicts within the discovered models and is more powerful than existing pruning techniques to eliminate redundancies. First, we formally define the problems of constraint redundancy and conflict resolution. Thereafter, we introduce techniques based on the notion of an automata-product monoid that guarantee the consistency of the discovered models and, at the same time, keep the most interesting constraints in the pruned set. We evaluate the devised techniques on real-world benchmarks.
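
    To give a flavour of what such a consistency check boils down to, here is a minimal Python sketch under simplifying assumptions (hand-crafted automata over a two-activity alphabet; not the automata-product-monoid machinery of the paper): each constraint is encoded as a small finite-state automaton, the automata are combined via a synchronous product, and a reachability check for a jointly accepting state tells whether the constraint set admits any trace at all.

    ```python
    from collections import deque

    ALPHABET = {"a", "b"}

    # Hypothetical constraint automata: (initial state, accepting states, transitions).
    participation_a = ("s0", {"s1"}, {("s0", "a"): "s1", ("s0", "b"): "s0",
                                      ("s1", "a"): "s1", ("s1", "b"): "s1"})
    response_a_b = ("ok", {"ok"}, {("ok", "a"): "pending", ("ok", "b"): "ok",
                                   ("pending", "a"): "pending", ("pending", "b"): "ok"})
    not_coexistence_a_b = ("none", {"none", "onlyA", "onlyB"},
                           {("none", "a"): "onlyA", ("none", "b"): "onlyB",
                            ("onlyA", "a"): "onlyA", ("onlyA", "b"): "dead",
                            ("onlyB", "b"): "onlyB", ("onlyB", "a"): "dead",
                            ("dead", "a"): "dead", ("dead", "b"): "dead"})

    def conjunction_is_satisfiable(automata):
        """BFS over the synchronous product: the conjunction of constraints is
        satisfiable iff a jointly accepting product state is reachable."""
        start = tuple(a[0] for a in automata)
        seen, frontier = {start}, deque([start])
        while frontier:
            states = frontier.popleft()
            if all(s in a[1] for s, a in zip(states, automata)):
                return True
            for symbol in ALPHABET:
                nxt = tuple(a[2][(s, symbol)] for s, a in zip(states, automata))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    # Participation(a), Response(a, b) and NotCoExistence(a, b) together accept no trace.
    print(conjunction_is_satisfiable([participation_a, response_a_b, not_coexistence_a_b]))  # False
    print(conjunction_is_satisfiable([response_a_b, not_coexistence_a_b]))                    # True
    ```

    Dropping one of the clashing constraints restores satisfiability, which hints at the kind of diagnostic a conflict-resolution step can exploit.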

  83. Claudio Di Ciccio, Mitchel H.M. Schouten, Massimiliano de Leoni, Jan Mendling (2015) Declarative Process Discovery with MINERful in ProM. In: BPM (Demos) 2015, 60-64, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    Declarative process models consist of a set of constraints exerted over the execution of process activities. DECLARE is a declarative process modelling language that specifies a set of constraint templates along with their graphical notation. The automated discovery of DECLARE models aims at finding those constraints that are verified throughout a given event log. In this paper, we present a fast scalable tool for mining DECLARE models in ProM. Its usage is described with its application on a use case, based on a publicly available real-life benchmark.

  84. Thomas Baier, Claudio Di Ciccio, Jan Mendling, Mathias Weske (2015) Matching of Events and Activities - An Approach Using Declarative Modeling Constraints. In: BPMDS/EMMSAD 2015, 119-134, Springer. DOI: 10.1007/978-3-319-19237-6_8.
    Read the pre-print. Download the BiBTeX entry.

    Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. This event data can be used to analyze the process using process mining techniques to discover the real process, measure conformance to a given process model, or to enhance existing models with performance information. Mapping the produced events to activities of a given process model is essential for conformance checking, annotation and understanding of process mining results. In order to accomplish this mapping with low manual effort, we developed a semiautomatic approach that maps events to activities using the solution of a corresponding constraint satisfaction problem. The approach extracts Declare constraints from both the log and the model to build matching constraints to efficiently reduce the number of possible mappings. The evaluation with an industry process model collection and simulated event logs demonstrates the effectiveness of the approach and its robustness towards non-conforming execution logs.

  85. Claudio Di Ciccio, Mario Luca Bernardi, Marta Cimitile, Fabrizio Maria Maggi (2015) Generating Event Logs through the Simulation of Declare Models. In: EOMAS 2015, 20-36, Springer. DOI: 10.1007/978-3-319-24626-0_2.
    Read the pre-print. Download the BiBTeX entry.

    In the process mining field, several techniques have been developed over recent years for the discovery of declarative process models from event logs. This type of model describes processes on the basis of temporal constraints. Every behavior that does not violate such constraints is allowed, and this characteristic has proven to be suitable for representing highly flexible processes. One way to test a process discovery technique is to generate an event log by simulating a process model, and then verify that the process discovered from such a log matches the original one. For this reason, a tool for generating event logs starting from declarative process models becomes vital for the evaluation of declarative process discovery techniques. In this paper, we present an approach for the automated generation of event logs, starting from process models that are based on Declare, one of the most used declarative modeling languages in the process mining literature. Our framework builds upon the translation of Declare constraints into regular expressions and on the utilization of Finite State Automata for the simulation. An evaluation of the implemented tool is presented, showing its effectiveness in both the generation of new logs and the replication of the behavior of existing ones. The presented evaluation also shows the tool's capability to generate very large logs in a reasonably small amount of time, and its integration with state-of-the-art Declare modeling and discovery tools.
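
    As a toy illustration of the regular-expression route (my own simplification, not the generator described above; activities are single characters and the alphabet is hypothetical), the sketch below encodes Response(a, b) as a regular expression and produces a small log by rejection sampling, whereas an actual generator would walk the corresponding finite state automata.

    ```python
    import random
    import re

    # One common regular-expression encoding of the Declare template Response(a, b):
    # every occurrence of "a" is eventually followed by "b". A trace is a string.
    RESPONSE_A_B = re.compile(r"[^a]*(a.*b)*[^a]*")
    ACTIVITIES = "abc"  # hypothetical process alphabet

    def random_trace(max_len=10):
        """Draw a random candidate trace."""
        return "".join(random.choice(ACTIVITIES) for _ in range(random.randint(1, max_len)))

    def generate_log(size=5, seed=42):
        """Rejection sampling: keep only candidates that fully match the constraint."""
        random.seed(seed)
        log = []
        while len(log) < size:
            candidate = random_trace()
            if RESPONSE_A_B.fullmatch(candidate):
                log.append(candidate)
        return log

    print(generate_log())  # five traces, each satisfying Response(a, b)
    ```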

  86. Claudio Di Ciccio, Javier D. Fernández, Jürgen Umbrich (2015) Improving the Usability of Open Data Portals from a Business Process Perspective. In: ODQ 2015, 6-10, Network of Excellence in Internet Science.
    Read the pre-print. Download the BiBTeX entry.

    Open Data portals are considered to be the cornerstones of the Open Data movement, as they offer an infrastructure to publish, share and consume public information. From a business perspective, such portals can be seen as a non-profit data marketplace, in which users try to satisfy their demand and offer requirements in several different processes. In this work, we argue that studying these so far unexplored interaction processes bears the potential to make the portals more effective. We first outline a research roadmap to better understand the behaviour of consumers and publishers by mining the interaction logs of Open Data portals. Then, we discuss potential services on the basis of these outcomes, which can be integrated in current portals to optimize the interaction, improve data quality and user experience.

  87. Claudio Di Ciccio, Andrea Marrella, Alessandro Russo (2015) Knowledge-intensive Processes: Characteristics, Requirements and Analysis of Contemporary Approaches. In: J. Data Semantics, 4 (1), 29-57. Springer. DOI: 10.1007/s13740-014-0038-4.
    Read the pre-print. Download the BiBTeX entry.

    Engineering of knowledge-intensive processes (KiPs) is far from being mastered, since they are genuinely knowledge- and data-centric, and require substantial flexibility, at both design- and run-time. In this work, starting from a scientific literature analysis in the area of KiPs and from three real-world domains and application scenarios, we provide a precise characterization of KiPs. Furthermore, we devise some general requirements related to KiPs management and execution. Such requirements contribute to the definition of an evaluation framework to assess current system support for KiPs. To this end, we present a critical analysis on a number of existing process-oriented approaches by discussing their efficacy against the requirements.

  88. Claudio Di Ciccio, Massimo Mecella, Jan Mendling (2015) The Effect of Noise on Mined Declarative Constraints. In: Data-Driven Process Discovery and Analysis 2015, 1-24, Springer. DOI: 10.1007/978-3-662-46436-6_1.
    Read the pre-print. Download the BiBTeX entry.

    Declarative models are increasingly utilized as a representational format in process mining. Models created from automatic process discovery are meant to summarize complex behaviors in a compact way. Therefore, declarative models do not define all permissible behavior directly, but instead define constraints that must be met by each trace of the business process. While declarative models provide compactness, it has so far been unclear how robust or sensitive different constraints are with respect to noise. In this paper, we investigate this question from two angles. First, we establish a constraint hierarchy based on formal relationships between the different types of Declare constraints. Second, we conduct a sensitivity analysis to investigate the effect of noise on different types of declarative rules. Our analysis reveals that an increasing degree of noise reduces support of many constraints. However, this effect is moderate on most of the constraint types, which supports the suitability of Declare for mining event logs with noise.

  89. Claudio Di Ciccio, Massimo Mecella (2015) On the Discovery of Declarative Control Flows for Artful Processes. In: ACM Trans. Manage. Inf. Syst., 5 (4), 24:1-24:37. ACM. DOI: 10.1145/2629447.
    Read the pre-print. Download the BiBTeX entry.

    Artful processes are those processes in which the experience, intuition, and knowledge of the actors are the key factors in determining the decision making. They are typically carried out by the “knowledge workers”, such as professors, managers, researchers. They are often scarcely formalized or completely unknown a priori. Throughout this paper, we discuss how we addressed the challenge of discovering declarative control flows, in the context of artful processes. To this end, we devised and implemented a two-phase algorithm, named MINERful. The first phase builds a knowledge base, where statistical information extracted from logs is represented. During the second phase, queries are evaluated on that knowledge base, in order to infer the constraints that constitute the discovered process. After an outline of the overall approach and an insight into the adopted process modeling language, we describe in detail our discovery technique. Thereupon, we analyze its performance from both a theoretical and an experimental perspective. A user-driven evaluation of the quality of results is also reported, on the basis of a real case study. Finally, a study on the fitness of discovered models with respect to synthetic and real logs is presented.

  90. Johannes Prescher, Claudio Di Ciccio, Jan Mendling (2014) From Declarative Processes to Imperative Models. In: SIMPDA 2014, 162-173, CEUR-WS.org. DOI: 10.13140/2.1.1577.4409.
    Read the pre-print. Download the BiBTeX entry.

    Nowadays organizations support their creation of value by explicitly defining the processes to be carried out. Processes are specifically discussed from the angle of simplicity, i.e., how compactly and understandably they can be represented. In most cases, organizations rely on imperative models which, however, become complex and cluttered when it comes to flexibility and optionality. As an alternative, declarative modeling proves to be effective under such circumstances. While both approaches are well known in themselves, there is still not a deep understanding of their semantic interoperability. With this work, we examine the latter and show how to obtain an imperative model out of a set of declarative constraints. To this aim, we devise an approach leading from a Declare model to a behaviorally equivalent Petri net. Furthermore, we demonstrate that any declarative control flow can be represented by means of a Petri net for which the property of safety always holds true.

  91. Margus Räim, Claudio Di Ciccio, Fabrizio Maria Maggi, Massimo Mecella, Jan Mendling (2014) Log-Based Understanding of Business Processes through Temporal Logic Query Checking. In: CoopIS 2014, 75-92, Springer. DOI: 10.1007/978-3-662-45563-0_5.
    Read the pre-print. Download the BiBTeX entry.

    Process mining is a discipline that aims at discovering, monitoring and improving real-life processes by extracting knowledge from event logs. Process discovery and conformance checking are the two main process mining tasks. Process discovery techniques can be used to learn a process model from example traces in an event log, whereas the goal of conformance checking is to compare the observed behavior in the event log with the modeled behavior. In this paper, we propose an approach based on temporal logic query checking, which lies in between process discovery and conformance checking. It can be used to discover those LTL-based business rules that are valid in the log, by checking against the log a (user-defined) class of rules. The proposed approach is not limited to providing a boolean answer about the validity of a business rule in the log: rather, it provides valuable diagnostics in terms of traces in which the rule is satisfied (witnesses) and traces in which the rule is violated (counterexamples). We have implemented our approach as a proof of concept and conducted extensive experiments using both synthetic and real-life logs.
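
    A deliberately naive sketch of the query-checking idea, assuming a single Response-like template with one placeholder (the actual approach relies on proper temporal-logic query-checking machinery and richer templates): each binding of the placeholder is evaluated on the log, and the traces are split into witnesses and counterexamples.

    ```python
    def satisfies_response(trace, x, y):
        """Does 'whenever x occurs, y eventually follows' hold on this trace?"""
        pending = False
        for activity in trace:
            if activity == x:
                pending = True
            elif activity == y:
                pending = False
        return not pending

    def check_query(log, y, candidates):
        """Bind the placeholder ?x of the template 'G(?x -> F y)' to each candidate
        activity and split the log into witnesses and counterexamples."""
        report = {}
        for x in candidates:
            witnesses = [t for t in log if satisfies_response(t, x, y)]
            counterexamples = [t for t in log if not satisfies_response(t, x, y)]
            report[x] = (witnesses, counterexamples)
        return report

    log = ["abcb", "acb", "ca", "bc"]  # toy event log, one character per activity
    for x, (wit, cex) in check_query(log, y="b", candidates="ac").items():
        print(f"?x = {x}: witnesses {wit}, counterexamples {cex}")
    ```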

  92. Cristina Cabanillas, Claudio Di Ciccio, Jan Mendling, Anne Baumgrass (2014) Predictive Task Monitoring for Business Processes. In: BPM 2014, 424-432, Springer. DOI: 10.1007/978-3-319-10172-9_31.
    Read the pre-print. Download the BiBTeX entry.

    Information sources providing real-time status of physical objects have drastically increased in recent times. So far, research in business process monitoring has mainly focused on checking the completion of tasks. However, the availability of real-time information allows for a more detailed tracking of individual business tasks. This paper describes a framework for controlling the safe execution of tasks and signalling possible misbehaviours at runtime. It outlines a real use case on smart logistics and the preliminary results of its application.

  93. Claudio Di Ciccio, Fabrizio Maria Maggi, Jan Mendling (2014) Discovering Target-Branched Declare Constraints. In: BPM 2014, 34-50, Springer. DOI: 10.1007/978-3-319-10172-9_3.
    Read the pre-print. Download the BiBTeX entry.

    Process discovery is the task of generating models from event logs. Mining processes that operate in an environment of high variability is an ongoing research challenge because various algorithms tend to produce spaghetti-like models. This is particularly the case when procedural models are generated. A promising direction to tackle this challenge is the usage of declarative process modelling languages like Declare, which summarise complex behaviour in a compact set of behavioural constraints. However, Declare constraints with branching are expensive to calculate. In addition, it is often the case that hundreds of branching Declare constraints are valid for the same log, thus making, again, the discovery results unreadable. In this paper, we address these problems from a theoretical angle. More specifically, we define the class of Target-Branched Declare constraints and investigate the formal properties it exhibits. Furthermore, we present a technique for the efficient discovery of compact Target-Branched Declare models. We discuss the merits of our work through an evaluation based on a prototypical implementation using both artificial and real-world event logs.
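
    To illustrate, with a hypothetical brute-force enumeration rather than the discovery technique of the paper, both what target branching means and why it is costly: candidate target sets are enumerated up to a branching factor, and only those satisfied by every trace in the log are kept.

    ```python
    from itertools import chain, combinations

    def holds_on_trace(trace, activation, targets):
        """Target-Branched Response(activation, targets): every occurrence of the
        activation is eventually followed by at least one of the target activities."""
        pending = False
        for activity in trace:
            if activity == activation:
                pending = True
            elif activity in targets:
                pending = False
        return not pending

    def valid_target_sets(log, activation, activities, max_branching=2):
        """Brute force: enumerate candidate target sets up to a branching factor and
        keep those satisfied by every trace. The combinatorial blow-up of this
        enumeration is what makes unrestricted branched Declare costly to discover."""
        candidates = chain.from_iterable(
            combinations(sorted(activities - {activation}), k)
            for k in range(1, max_branching + 1))
        return [set(c) for c in candidates
                if all(holds_on_trace(trace, activation, set(c)) for trace in log)]

    log = ["adb", "adc", "acb"]  # toy event log, one character per activity
    print(valid_target_sets(log, "a", set("abcd")))
    ```

    Even on this tiny log, several overlapping target sets survive, mirroring the readability issue mentioned in the abstract.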

  94. Cristina Cabanillas, Enver Campara, Claudio Di Ciccio, Bartholomäus Koziel, Jan Mendling, Johannes Paulitschke, Johannes Prescher (2014) Towards a Prediction Engine for Flight Delays based on Weather Delay Analysis. In: EMoV 2014, 49-51, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    In our work, we investigate to what extent weather conditions have an actual impact on the punctuality of a flight. Following up on the insights gained in this step, we determine categories of impacts to allow for more generalisation. Subsequently, we use the categories and apply them in a prediction model. We fill the model with historical data. Accordingly, the model and corresponding data are the foundation for live predictions on actual flights. Our investigation indicates that the conditions' impact increases significantly once they appear closer to the airports. It identifies four weather conditions which have a significant impact on flights. These conditions lead to different lengths of delay, which are considered within the linear equation to predict the delays for prospective flights.

  95. Cristina Cabanillas, Andreas Curik, Claudio Di Ciccio, Manuel Gutjahr, Jan Mendling, Johannes Prescher, Jan Simecka (2014) Combining Event Processing and Support Vector Machines for Automated Flight Diversion Predictions. In: EMoV 2014, 45-47, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    In this work, the research is focused on transportation processes involving aircraft. In particular, the objective is to design and realise a service-oriented software architecture allowing for the automated run-time detection of aircraft diversions. A diversion consists in the landing of the aircraft at an airport other than the planned one. Though rare, diversions can seriously prejudice the successful completion of the transportation process. In an example scenario, adverse weather conditions in the area of Schiphol force the pilot to land the aircraft in Brussels. Therefore, the LSP must reroute the truck from Schiphol to the Belgian airport to let the goods be delivered to the final destination. In order for these corrective actions to be effective, it is crucial that the LSP is aware of the aircraft diversion as soon as possible. Unfortunately, experience reveals that the communication between LSPs and cargo airlines is not as prompt as required. Specifically, LSPs do not have access to real-time information and are only notified of the diversion once the aircraft has landed at another airport. This delayed notification threatens the ability of LSPs to meet their objectives. For this reason, the approach presented here sets out to reduce the impact of diversions by detecting them in a timely manner, i.e., as soon as an anomalous behaviour is recognised, while the aircraft is still flying. This approach utilises data that are publicly available, i.e., event streams reporting subsequent flight positions, altitude and speed. Thus, it is independent of the communication with airlines.

  96. Claudio Di Ciccio, Massimo Mecella (2013) Mining Artful Processes from Knowledge Workers' Emails. In: IEEE Internet Computing, 17 (5), 10-20. IEEE. DOI: 10.1109/MIC.2013.60.
    Read the pre-print. Download the BiBTeX entry.

    In this paper we present MailOfMine, an approach and a software tool aimed at automatically building a set of workflow models, which represent the artful processes lying behind the knowledge workers' activities, on top of a collection of email messages. The advantages are numerous: the unspecified agile processes that are autonomously used become formalized. Since such models are not defined a priori by experts but rather inferred from real-life scenarios that actually took place, they are guaranteed to respect the true executions (often Business Process Management tools are used to show the discrepancy between the supposed and the concrete workflows). Moreover, such models can be shared, compared, preserved, so that best practices might be highlighted by the community of knowledge workers, to the benefit of the whole business. Finally, an analysis over such processes can be done, so that bottlenecks and delays in actual executions can be identified. In MailOfMine, workflow models are described according to a declarative approach, with a specific visual notation. After presenting the architecture of the system, this paper reports some performance tests and the results of the application of the tool on a real case study.

  97. Claudio Di Ciccio, Massimo Mecella (2013) Studies on the Discovery of Declarative Control Flows from Error-prone Data. In: SIMPDA 2013, 31-45, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    The declarative modeling of workflows has been introduced to cope with flexibility in processes. Its rationale is based on the idea of stating some basic rules (named constraints), tying the execution of some activities to the enabling, requiring or disabling of other activities. What is not explicitly prohibited by such constraints is implicitly considered legal, w.r.t. the specification of the process. Declarative models for workflows are based on a taxonomy of constraint templates. Constraints are thus instances of constraint templates, applied to specific activities. Many algorithms for the automated discovery of declarative workflows associate a support with each constraint. The support is a statistical measure assessing to what extent a constraint was respected during the enactment(s) of the process. In current state-of-the-art literature, constraints having a support below a user-defined threshold are considered not valid for the process. Thresholds are useful for filtering out guesses based on possible misleading events, reported in logs either because of errors in the execution, unlikely process deviations, or wrong recordings in logs. The latter circumstance can be considered extremely relevant when logs are not written down directly by machines reporting their work, but extracted from other sources of information. Here, we present an insight into the actual capability of filtering out constraints by modifying the threshold for support, on the basis of real data. Then, taking a cue from the results of such an analysis, we consider the trend of support when controlled errors are injected into the log, w.r.t. individual constraint templates. Through these tests, we demonstrate by experiment that each constraint template proves to be more or less robust to different kinds of error, according to its nature.

  98. Claudio Di Ciccio, Massimo Mecella (2013) A Two-Step Fast Algorithm for the Automated Discovery of Declarative Workflows. In: CIDM 2013, 135-142, IEEE. DOI: 10.1109/CIDM.2013.6597228.
    Read the pre-print. Download the BiBTeX entry.

    Declarative approaches are particularly suitable for modeling highly flexible processes. They especially apply to artful processes, i.e., rapid informal processes that are typically carried out by those people whose work is mental rather than physical (managers, professors, researchers, engineers, etc.), the so called “knowledge workers”. This paper describes MINERful++, a two-step algorithm for an efficient discovery of constraints that constitute declarative workflow models. As a first step, a knowledge base is built, with information about temporal statistics gathered from execution traces. Then, the statistical support of constraints is computed, by querying that knowledge base. MINERful++ is fast, modular, independent of the specific formalism adopted for representing constraints, based on a probabilistic approach and capable of eliminating the redundancy of subsumed constraints.
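
    As a much-simplified Python sketch of the two-step idea (the statistics and support measures in MINERful++ are richer than this, and the names below are illustrative): a first pass over the log builds a knowledge base of occurrence counts, and the second step queries it to compute the support of, e.g., a Response constraint.

    ```python
    from collections import defaultdict

    def build_knowledge_base(log):
        """Step 1 (sketch): one pass over the log gathers occurrence statistics, e.g.
        how often each activity occurs and how many of those occurrences are
        eventually followed by each other activity."""
        occurrences = defaultdict(int)
        eventually_followed = defaultdict(int)  # (x, y) -> count
        for trace in log:
            for i, x in enumerate(trace):
                occurrences[x] += 1
                for y in set(trace[i + 1:]):
                    eventually_followed[(x, y)] += 1
        return occurrences, eventually_followed

    def response_support(kb, x, y):
        """Step 2 (sketch): query the knowledge base; here, support is the fraction
        of x's occurrences that are eventually followed by y."""
        occurrences, eventually_followed = kb
        return eventually_followed[(x, y)] / occurrences[x] if occurrences[x] else 0.0

    log = ["abcb", "acb", "ca", "bc"]  # toy event log, one character per activity
    kb = build_knowledge_base(log)
    print(round(response_support(kb, "a", "b"), 2))  # 0.67: two of three 'a's are followed by a 'b'
    ```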

  99. Claudio Di Ciccio, Massimo Mecella, Monica Scannapieco, Diego Zardetto, Tiziana Catarci (2012) MailOfMine - Analyzing Mail Messages for Mining Artful Collaborative Processes. In: Data-Driven Process Discovery and Analysis 2012, 55-81, Springer. DOI: 10.1007/978-3-642-34044-4_4.
    Read the pre-print. Download the BiBTeX entry.

    Artful processes are informal processes typically carried out by those people whose work is mental rather than physical (managers, professors, researchers, engineers, etc.), the so called “knowledge workers”. In this paper we propose the MailOfMine approach, to automatically build, on top of a collection of email messages, a set of workflow models that represent the artful processes lying behind the knowledge workers' activities.

  100. Mario Caruso, Claudio Di Ciccio, Ettore Iacomussi, Eirini Kaldeli, Alexander Lazovik, Massimo Mecella (2012) Service Ecologies for Home/Building Automation. In: SyRoCo 2012, 467-472, IFAC. DOI: 10.3182/20120905-3-HR-2030.00191.
    Read the pre-print. Download the BiBTeX entry.

    Service ecologies are networks of services pervasively embedded in everyday environments, e.g., smart homes, where they are composed and orchestrated in order to provide advanced functionalities. In this paper, we show how the interplay of off-line and on-line composition of services can improve flexibility and adaptiveness.

  101. Giuseppe De Giacomo, Claudio Di Ciccio, Paolo Felli, Yuxiao Hu, Massimo Mecella (2012) Goal-based Composition of Stateful Services for Smart Homes. In: CoopIS 2012, 194-211, Springer. DOI: 10.1007/978-3-642-33606-5_13.
    Read the pre-print. Download the BiBTeX entry.

    The emerging trend in process management and in service oriented applications is to enable the composition of new distributed processes on the basis of user requests, through (parts of) available services, often embedded in the environment, which are composed and orchestrated in order to satisfy such requests. Here, we consider a user process as specified in terms of repeated goals that the user may choose to get fulfilled, organized in a kind of routine. Available services are suitably composed and orchestrated in order to realize such a process. In particular we focus on smart homes, in which available services are those offered by sensor and actuator devices deployed in the home, and the target user process is directly and continuously controlled by the inhabitants, through actual goal choices. We provide a solver that synthesizes the orchestrator for the requested process and we show its practical applicability in a real smart home use case.

  102. Claudio Di Ciccio, Andrea Marrella, Alessandro Russo (2012) Knowledge-intensive Processes: An Overview of Contemporary Approaches. In: KiBP 2012, 33-47, CEUR-WS.org.
    Read the pre-print. Download the BiBTeX entry.

    Engineering of knowledge-intensive processes is far from being mastered. Processes are defined as knowledge-intensive when people/agents carry them out under a fair degree of “uncertainty”, where the uncertainty depends on different factors, such as the high number of tasks to be represented, their unpredictable nature, or their dependency on the scenario. In the worst case, there is no predefined view of the knowledge-intensive process, and tasks are mainly discovered as the process unfolds. In this work, starting from three different real scenarios, we present a critical comparative analysis of the existing approaches used for supporting knowledge-intensive processes, and we discuss some recent research techniques that may complement or extend the existing state of the art.

  103. Claudio Di Ciccio, Massimo Mecella (2012) Mining Constraints for Artful Processes. In: BIS 2012, 11-23, Springer. DOI: 10.1007/978-3-642-30359-3_2.
    Read the pre-print. Download the BiBTeX entry.

    Artful processes are informal processes typically carried out by those people whose work is mental rather than physical (managers, professors, researchers, engineers, etc.), the so called “knowledge workers”. MailOfMine is a tool, the aim of which is to automatically build, on top of a collection of email messages, a set of workflow models that represent the artful processes lying behind the knowledge workers' activities. After an outline of the approach and the tool, this paper focuses on the mining algorithm, which is able to efficiently compute the set of constraints describing the artful process. Finally, an experimental evaluation of it is reported.

  104. Claudio Di Ciccio, Massimo Mecella, Mario Caruso, Vincenzo Forte, Ettore Iacomussi, Katharina Rasch, Leonardo Querzoni, Giuseppe Santucci, Giuseppe Tino (2011) The Homes of Tomorrow: Service Composition and Advanced User Interfaces. In: EAI Endorsed Trans. Ambient Systems, 11 (1), e2. ICST. DOI: 10.4108/trans.amsys.2011.e2.
    Read the pre-print. Download the BiBTeX entry.

    Home automation represents a growing market in the industrialized world. Today's systems are mainly based on ad hoc and proprietary solutions, with little to no interoperability and smart integration. However, in a not so distant future, our homes will be equipped with many sensors, actuators and devices, which will collectively expose services, able to smartly interact and integrate, in order to offer complex services providing even richer functionalities. In this paper we present the approach and results of SM4All - Smart hoMes for All, a project investigating automatic services composition and advanced user interfaces applied to domotics.

  105. Claudio Di Ciccio, Tiziana Catarci, Massimo Mecella (2011) Representing and Visualizing Mined Artful Processes in MailOfMine. In: HCI-KDD 2011, 83-94, Springer. DOI: 10.1007/978-3-642-25364-5_9.
    Read the pre-print. Download the BiBTeX entry.

    Artful processes are informal processes typically carried out by those people whose work is mental rather than physical (managers, professors, researchers, engineers, etc.), the so called “knowledge workers”. MailOfMine is a tool, the aim of which is to automatically build, on top of a collection of e-mail messages, a set of workflow models that represent the artful processes lying behind the knowledge workers' activities. This paper presents its proposed graphical syntax and the interface for representing and showing such mined processes to users.

  106. Claudio Di Ciccio, Massimo Mecella, Monica Scannapieco, Diego Zardetto, Tiziana Catarci (2011) MailOfMine - Analyzing Mail Messages for Mining Artful Collaborative Processes. In: SIMPDA 2011, 45-59.
    Read the pre-print. Download the BiBTeX entry.

    Artful processes are informal processes typically carried out by those people whose work is mental rather than physical (managers, professors, researchers, engineers, etc.), the so called “knowledge workers”. In this paper we propose the MailOfMine approach to automatically build, on top of a collection of e-mail messages, a set of workflow models that represent the artful processes lying behind the knowledge workers' activities.

  107. Claudio Di Ciccio, Massimo Mecella, Monica Scannapieco, Diego Zardetto (2011) Groupware Mail Messages Analysis for Mining Collaborative Processes. In: SEBD 2011, 397-404.
    Read the pre-print. Download the BiBTeX entry.

    Nowadays, most of the research related to workflows has considered the management of formal business processes. There has been some discussion of informal processes, often under names such as “artful business processes”: informal processes are typically carried out by those people whose work is mental rather than physical (managers, professors, researchers, etc.), the so called “knowledge workers”. With their skills, experience and knowledge, they are used to performing difficult tasks, which require complex, rapid decisions among multiple possible strategies, in order to fulfill specific goals. In contrast to business processes that are formal and standardized, often informal processes are not even written down, let alone defined formally, and can vary from person to person even when those involved are pursuing the same objective. Knowledge workers create informal processes “on the fly” to cope with many of the situations that arise in their daily work. While informal processes are frequently repeated, since they are not written down, they are not exactly reproducible, even by their originators, nor can they be easily shared. The release of their outcomes and their information exchanges very often take place by means of e-mail conversations, which are a fast, reliable, permanent way of keeping track of the activities that they fulfill. The objective of the research proposed in this position document is to automatically build, on top of a collection of e-mails, a set of workflow models that represent the artful processes which lie behind the knowledge workers' activities.

  108. Tiziana Catarci, Claudio Di Ciccio, Vincenzo Forte, Ettore Iacomussi, Massimo Mecella, Giuseppe Santucci, Giuseppe Tino (2011) Service Composition and Advanced User Interfaces in the Home of Tomorrow: the SM4All Approach. In: Ambi-Sys 2011, 12-19, Springer. DOI: 10.1007/978-3-642-23902-1_2.
    Read the pre-print. Download the BiBTeX entry.

    Houses of tomorrow will be equipped with many sensors, actuators and devices, which collectively will expose services. Such services, composed in an automatic way, and invokable through adaptive user interfaces, can support human inhabitants in their daily activities. In this paper we present the approach and some results of the SM4All EU project (http://www.sm4all-project.eu/), which is investigating automatic services composition and advanced user interfaces applied to domotics.

  109. Riccardo De Masellis, Claudio Di Ciccio, Massimo Mecella, Fabio Patrizi (2010) Smart Home Planning Programs. In: ICSSSM 2010, 377-382, IEEE. DOI: 10.1109/ICSSSM.2010.5530212.
    Read the pre-print. Download the BiBTeX entry.

    In pervasive (ubiquitous) computing an increasing number of devices are embedded and interconnected in the user's environment, e.g., a smart house. The system needs to adapt to the user's varying contexts and goals. The aim is to provide transparent services, reacting to input from the users and to the state of the environment. As users' requirements increase and new devices are introduced, new services need to be dynamically created. We present a technique that allows the user to express planning programs (i.e., procedures for going through different states of the environment) and to have them realized through automatic service composition techniques.

  110. Roberto Baldoni, Claudio Di Ciccio, Massimo Mecella, Fabio Patrizi, Leonardo Querzoni, Giuseppe Santucci, Schahram Dustdar, Fei Li, Hong-Linh Truong, Laura Albornos, Francisco Milagro Lardies, Pablo Antolin Rafael, Rassul Ayani, Katharina Rasch, Marianela Garcia Lozano, Marco Aiello, Alexander Lazovik, Antonio Denaro, Giorgio Lasala, Paolo Pucci, Clemens Holzner, Febo Cincotti, Fabio Aloise (2009) An Embedded Middleware Platform for Pervasive and Immersive Environments for-All. In: SECON 2009, 213-215, IEEE. DOI: 10.1109/SAHCNW.2009.5172921.
    Read the pre-print. Download the BiBTeX entry.

    Embedded systems are specialized computers used in larger systems or machines to control equipment such as automobiles, home appliances, communication, control and office machines. Such pervasiveness is particularly evident in immersive realities, i.e., scenarios in which invisible embedded systems need to continuously interact with human users, in order to provide continuous sensed information and to react to service requests from the users themselves. The SM4All project investigates an innovative middleware platform for inter-working of smart embedded services in immersive and person-centric environments, through the use of composability and semantic techniques for dynamic service reconfiguration. This is applied to the challenging scenario of private houses and home-care assistance in the presence of users with different abilities and needs (e.g., young, able-bodied, aged and disabled). This paper presents a brief overview of the SM4All system architecture.

Projects