title: Dialog-based Automation of Decision Making in Processes authors: Estrada-Torres, Bedilia; del-Río-Ortega, Adela; Resinas, Manuel date: 2021-09-02 The use of chatbots has spread, generating great interest in industry due to the possibility of automating tasks within the execution of business processes. The implementation of chatbots, even simple ones, is a complex endeavor that involves many low-level details, which makes it a time-consuming and error-prone task. In this paper we aim at facilitating the development of decision-support chatbots that guide users or help knowledge workers to make decisions based on interactions between different process participants, with the goal of decreasing the workload of human workers, for example, in healthcare to identify the first symptoms of a disease. Our work concerns a methodology to systematically and semi-automatically build decision-support chatbots from existing DMN models. The chatbots are designed to leverage natural language understanding platforms, such as Dialogflow or LUIS. We implemented Dialogflow chatbot prototypes based on our methodology and performed a pilot test that revealed insights into the usability and appeal of the chatbots developed. In recent years, the use of artificial intelligence techniques, automated tools for the execution of tasks within business processes, and the change in communication strategies between companies and their customers have encouraged a large number of organizations to implement virtual assistants that provide information, solve doubts or help in the achievement of a specific task [1]. Often these virtual assistants take the form of chatting bots, also known as chatbots, which are tools designed to interact with users through friendly conversations using natural language in a way that simulates interaction with a human [2, 1]. Chatbots have proven to be useful in a variety of areas such as healthcare [3], marketing, education, business, and e-commerce [4]. Furthermore, they are widely used as the digital assistants that are now on every mobile phone or home controller. Some authors also consider that this type of tool can go beyond its current capabilities and act as teammates of human workers, collaborating in complex processes [5]. One of the aspects that has propelled the use of chatbots is the improvement of chatbot development platforms in recent years. These platforms have abstracted away many details related to natural language processing through the automated recognition of user intents, and they provide a general framework in which the conversation flow can be defined. However, a chatbot developer is still needed to implement a specific conversation flow, to deal with many low-level details about parameters or entities, to provide a set of training phrases for each of the conversation steps, and to provide a generic set of fallback options that guides users who do not know what the chatbot is capable of, amongst others. Implementing these tasks is usually time-consuming and error-prone even for simple chatbots [6]. The hypothesis of our research is that chatbots that serve a similar purpose share many aspects of their structure and conversation flow that can be reused.
Therefore, it could be possible to use these commonalities to design methodologies that partially automate the development of some particular types of chatbots. Specifically, in this paper we exploit this idea by proposing a methodology to partially automate the development of a particular type of chatbot, namely decision-support chatbots. A decision-support chatbot is a task-oriented chatbot whose purpose is to help or guide users to make decisions. The usefulness of this kind of chatbot can be illustrated with two use cases. Let us imagine a knowledge worker who is performing a knowledge-intensive process in the context of a bank. As is common these days, she interacts with other process participants not through email, but through a workstream collaboration tool like Microsoft Teams or Slack [7]. To perform her tasks, she needs to know the risk category of a person, which is a decision that is partially automated based on a decision table. Without leaving her collaboration environment, she asks the chatbot for the risk category of an existing customer. After a couple of interactions, in which the chatbot collects the required information such as the risk score of the customer, she obtains the answer. The second example is related to end users of a business process, like customers, citizens, and patients. In many cases, the interaction or part of the interaction of an end user with a business process involves asking her for information in order to reach a decision. For instance, during the COVID-19 outbreak, many healthcare services around the globe implemented apps that ask people about their symptoms and their contacts with infected people, such as [8], providing information to end users and helping to decrease the medical workload. Based on their answers, the app may advise about the best next step. These interactions can easily be performed by a decision-support chatbot, as shown in [9], removing the need to create dedicated apps for that purpose and making them more accessible to all kinds of users. Our methodology removes or systematizes many of the low-level tasks that chatbot developers need to perform to implement a decision-support chatbot like the ones described above, letting the developer focus only on the definition of the decision using decision tables. In order to evaluate our approach, chatbot prototypes were implemented using Dialogflow, a chatbot development framework. A set of pilot users from academia and industry interacted with these prototypes, allowing us to gather insight into the usability and appeal of the bots developed in this way. The rest of this paper is structured as follows. Section 2 introduces a motivating example for this work. Sections 3 and 4 describe related work and key concepts of chatbots, respectively. The proposed methodology is described in Section 5. The implementation of chatbots, the evaluation of the proposal and its limitations are presented in Section 6. Finally, Section 7 concludes the paper and describes challenges for future work. To illustrate the methodology we propose, we use an example based on a bank scenario, in which it is necessary to decide the risk category of a person (customer). The risk category can be high, medium or low, depending on three criteria: whether the person is an existing customer of the bank, the application risk score that she provides, and the assigned credit score.
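To make the running example more concrete, the following is a minimal sketch (in Python) of how a decision table with a unique hit policy and wildcard entries could be represented and evaluated programmatically. The rule conditions and thresholds below are illustrative assumptions only; they do not reproduce the exact rules of the decision tables discussed in the next section.

```python
# Illustrative sketch only: the thresholds below are made up for the example
# and do not reproduce the exact rules of the paper's decision tables.
from typing import Any, Optional

# Each rule maps input names to a condition (None acts as the DMN wildcard "-")
# and provides a single output, following a unique ("U") hit policy.
Rule = dict

RISK_CATEGORY_RULES: list[Rule] = [
    {"existing_customer": lambda v: v is True,
     "risk_score": lambda v: v < 100,
     "credit_score": None,                      # wildcard: value is irrelevant
     "output": "LOW"},
    {"existing_customer": lambda v: v is True,
     "risk_score": lambda v: v >= 100,
     "credit_score": lambda v: v >= 580,
     "output": "MEDIUM"},
    {"existing_customer": lambda v: v is False,
     "risk_score": None,
     "credit_score": lambda v: v < 580,
     "output": "HIGH"},
    # ... further rules would cover the remaining combinations
]

def evaluate(rules: list[Rule], **inputs: Any) -> Optional[str]:
    """Return the output of the first (unique) rule whose non-wildcard
    conditions all match the given inputs, or None if no rule matches."""
    for rule in rules:
        conditions = {k: v for k, v in rule.items() if k != "output"}
        if all(cond is None or cond(inputs[name])
               for name, cond in conditions.items()):
            return rule["output"]
    return None

# Example: the credit score is irrelevant here because of the wildcard.
print(evaluate(RISK_CATEGORY_RULES,
               existing_customer=True, risk_score=50, credit_score=700))  # LOW
```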
The automation of decisions requires the organization to have previously identified and defined its decisions. In this paper, we use the Decision Model and Notation (DMN) [10] to illustrate and model the decisions. We have chosen it because it is a well-known standard that provides a notation for describing and modeling repeatable decisions like this one in a way that is readily understandable by analysts, technical developers and business users. Specifically, our contribution focuses on decision tables, which are used for the definition of expressions, calculations and if/then/else logic, amongst others. However, we must emphasize that our approach is independent of the specific notation used to represent decision tables. A decision table, as shown in Figures 1 and 2, defines the possible combinations of the three criteria required to determine the risk category. More specifically, the decision (Risk Category) is resolved using a set of decision rules (rows 1 to 12 in Figure 1, and 1 to 8 in Figure 2). Each decision rule is composed of a set of input entries (3) and output entries (1) and is identified by a rule indicator. The hit policy indicator (U, unique) defines which output values are selected in the decision-making process. Decision tables can be defined horizontally or vertically. In these examples, they are all vertical: each column with an input expression (Existing Customer, Risk Score and Credit Score) represents a type of input entry, and the last column (Risk Category) represents the decision output. All decision table entries in Figure 1 are required. The entries in Figure 2 are slightly different because some of them are not significant (wildcards, "-"): the credit score in rules 1 and 8. Since there is a large number of proposals aimed at checking the quality of DMN decision tables, such as detecting the overlap between rules [11], or at verifying the consistency, correctness and coverage of decision rules [12, 13, 14], in this article we assume that the decision tables used to build a chatbot were built correctly. The aim of this research is to generate a decision-support chatbot that interacts with a user in a specific domain defined by a decision table. Figure 4 shows an example of the interaction between a chatbot and a user to perform the decision. The chatbot asks for the input entries of the decision table, except when an input entry is defined as "-". In that case, the chatbot must determine whether or not it is necessary to ask for the attribute value. For example, if after an interaction the user has indicated Existing customer = True and Application risk score = 50, it is not necessary to ask for the Credit score, because regardless of the value provided by the user, the output value will be "LOW" (see Figure 2). In addition, our proposal also allows and guides the development of chatbots from decision hierarchies, that is, decisions where the value of one or more inputs is the result of a previous decision. For example, we could define another decision table (see Figure 3) to decide whether a person is suitable for a job in an organization based on the risk category obtained from the other table. Chatbots interact with the user by simulating and reproducing conversations turn by turn using natural language, through the exchange of written messages or voice commands [1]. Based on the classification of Hussain et al. [2], here we focus on task-oriented and text-based chatbots.
Chatbots are considered useful for improving productivity and for the speed and ease of use of their interfaces [8]. They have been widely used in contexts as varied as healthcare [3], e-commerce [15], customer service [16] and marketing [17], among others [4], as well as in modern assistants like Siri, Alexa or Google Assistant. Daniel et al. [6] highlighted that deploying chatbots requires in-depth knowledge of the underlying tools, which increases costs. They proposed a framework for defining chatbots, but one that still requires human intervention. Our proposal could complement this framework by providing all these elements for decision-support chatbots. On the other hand, Syed et al. [18] identified challenges related to RPA tools. Among these challenges are the lack of methodological support for their implementation, as well as for their systematic design, development and evolution. With our proposal we address both challenges, since we propose a systematic solution for the development of chatbots in a semi-automatic way, with the aim of reducing development time and effort. After reviewing the literature, we have not found any work that addresses the creation of chatbots based on decision models like DMN decision tables. However, there are solutions to implement task-oriented chatbots in related domains. In [19], the authors proposed a chatbot for the collaborative creation of models based on conversations. In the context of business processes, López et al. [20] described a methodology and a prototype to transform a BPMN model into a chatbot based on the AIML language (http://www.aiml.foundation/). Although its evaluation revealed shortcomings in the fluency and certainty of the chatbot responses, it also showed the high potential of the approach and multiple possibilities for extension. Chatbot design typically relies on parsing techniques, pattern matching strategies and Natural Language Understanding (NLU) to process user inputs. The latter has become the dominant technique thanks to the popularization of libraries and cloud-based services such as Dialogflow, wit.ai or LUIS, which rely on Machine Learning and Natural Language Processing techniques to understand user input [6]. According to [6], an NLU chatbot, also known as an agent [1], contains a recognition engine that matches user inputs with intentions (intents) during a conversation turn. It also contains an execution component capable of executing actions for each intent, such as sending responses to the user. Intentions, or intents, are defined through training phrases, which are input examples that allow the recognition engine to identify the different phrases that a user can use to express an intention. For instance, an intent can be defined to recognize that the user wants to perform a decision. Some training phrases for such an intent could be "I want to determine the risk category," or "What is the risk category of an existing customer with a risk score of 35." In each training phrase, concrete values can be recognized. These values are called parameters. For instance, in the second example there are two parameters, namely existing customer, which equals true, and risk score, which equals 35. The type of a parameter is defined by a specific structure called an entity, which determines how data is extracted from user input. NLU platforms provide predefined entities that match many common types of data like numbers, dates, times, colors, or e-mail addresses. In addition, it is possible to define custom entities for enumerated values. A custom entity is composed of a set of entries.
Each entry is made up of a reference value and a set of synonyms for that reference. For example, if we define a "boolean" entity, two entries are required: true and false. For the true entry, values such as yes, ok, and correct could be its synonyms. Each intent can be associated with input and output contexts to control the flow of the conversation. Contexts are identified by a string like awaiting_riskscore. When an intent is matched, any configured output context for that intent becomes active. While contexts are active, the chatbot is more likely to match intents that are configured with input contexts corresponding to the currently active contexts. In addition, contexts store the parameter values provided by a user in one intent so that they can be used in other intents. Finally, for each intent that is recognized, one or several actions can be executed. These actions include sending a response to the user, setting an additional output context, or invoking an external API, amongst others. Figure 5 represents the interactions between the key concepts of chatbots described above. In this section, we describe the methodology we propose to build chatbots from decision tables. The methodology takes as input a decision table or a hierarchical definition of decision tables. Figure 6 describes the main steps of the methodology. In the following subsections we describe how each of these steps should be performed in order to build a chatbot that meets the following three conversational requirements: • R1: The information required to make the decision can be provided in any order. In other words, the user does not have to learn a predefined structure that has to be used to provide the information to the chatbot. • R2: The user can provide several input values at the same time, even in the first interaction, to improve efficiency, especially for advanced users. This means that, for our example, the chatbot must be able to deal with phrases like "What is the risk category of an existing customer with a risk score of 35." By doing so, the chatbot does not have to ask again about the risk score or whether it applies to an existing customer, lowering the number of interactions with the user and making the conversation more human-like. • R3: The chatbot should ask the user only for information that is necessary. This means that in a chatbot developed for the decision table in Figure 3, if the user said that currently employed is true, the risk category should not be asked for, because its value is irrelevant to the decision. As a first step, we propose to identify and extract all the DMN elements of the decision model to be mapped to entities, intents, parameters, contexts, actions and training phrases. The main decision table and, in case there is more than one decision table, the hierarchy of decisions should be identified, together with the inputs required for each decision table and the type of each expected input value. These data will be used in the following phases. To build a chatbot, several entities may be needed. In this step, we propose that an entity called ent_<input> be created for each input whose type is not supported by the system entities of the NLU platform. In particular: • If the input type is boolean, the entity has two entries: true and false. In addition, several synonyms for them must be defined. These synonyms include the typical "ok" and "yes" (and their false counterparts), but the name of the input is also considered as true, and its negation, in several forms, as false.
For instance, for the input existing customer, an entity ent_existingcustomer is created with two entries {True, False}, where synonyms(True) = {yes, an existing customer, with an existing customer} and synonyms(False) = {no, non existing customer, not an existing customer, without an existing customer}. • If the input type is an enumerated data type, the entity has one entry for each of the enumerated values of the data type. Here, no synonyms are provided by default. However, the chatbot developer can provide them if it makes sense for the domain. According to their function, and with the intention of fulfilling the conversational requirements, we propose to classify intents into three types: decision intents, input intents and support intents. Each one is described below. Decision intents. A decision intent is created for each decision table (DT) in the DMN model. Its main goal is to identify the decision that the user wants to perform. This means it must recognize phrases that convey this intent, like "I want to determine the risk category," or "I want to know a risk category," or maybe directly "risk category." In addition, according to requirement R2, the decision intent must also be able to gather as much information as possible from that user input, dealing with phrases like "What is the risk category of an existing customer with a risk score of 35." As a consequence, one parameter (p_<input>) for each input of the decision table is added to the intent. The type of each parameter is either a system entity related to the type of the input or the custom entity already defined for the input. Finally, decision intents have no input context and one predefined output context that represents the chosen decision. However, other output contexts are added during the action phase, as described in the next section. Input intents. An input intent is created for each input in the DMN model. Its main goal is to gather information about the input after a question made by the chatbot (e.g., "what is the credit score?"). However, as in the decision intent, it is necessary to provide a mechanism to capture information about other inputs that the user might include together with the response. Therefore, the phrases that need to be recognized are like "it is 37", or "the credit score is 37 and the risk score is 25." As a consequence, again one parameter (p_<input>) for each input of the decision table is created. The only difference is that the parameter corresponding to the input associated with the intent is defined as required. If a required parameter is not provided, the intent cannot continue with the normal conversation flow and will request this parameter from the user. For example, in the existing customer intent, p_existingcustomer is required and p_riskscore and p_creditscore are optional. Input intents have one input context, awaiting_<input>, which is activated to signal the moment in which that intent is expected within the conversation flow. Support intents. Their goal is to make communication with the user more fluid and friendly and to provide help if necessary. Unlike decision intents and input intents, they are independent of the DMN model and, hence, they can be reused in all chatbots without any change. In our proof of concept, we have included four intents with the following purposes (a sketch of the entities and intents derived for the running example is given after this list): • To manage the case in which an expected value is not provided in the user's phrase. • To recognize a user greeting and start the conversation by asking which decision she wants to use. • To recognize a goodbye or thank you from the user to end the conversation. • To recognize when the user is asking for help on how to use the chatbot.
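To make the mapping more tangible, the following is a minimal, platform-agnostic sketch (as Python dictionaries) of the entities and intents that could be derived for the running example. The field names and the use of a Dialogflow-style system entity (@sys.number) are assumptions for illustration; they do not correspond to the exact structures of any specific NLU platform's API.

```python
# Illustrative sketch: a platform-agnostic description of the chatbot artifacts
# derived from the Risk Category decision table. Field names are assumptions.

entities = {
    # Custom entity for the boolean input "existing customer"
    "ent_existingcustomer": {
        "True": ["yes", "an existing customer", "with an existing customer"],
        "False": ["no", "non existing customer", "not an existing customer",
                  "without an existing customer"],
    },
    # Risk score and credit score map to the platform's system number entity,
    # so no custom entity is needed for them.
}

decision_intent = {
    "name": "risk_category",
    "parameters": {                      # one parameter per input, none required
        "p_existingcustomer": {"entity": "ent_existingcustomer", "required": False},
        "p_riskscore":        {"entity": "@sys.number", "required": False},
        "p_creditscore":      {"entity": "@sys.number", "required": False},
    },
    "input_contexts": [],
    "output_contexts": ["risk_category"],  # marks the chosen decision
}

input_intents = [
    {
        "name": "input_riskscore",
        "parameters": {
            "p_riskscore":        {"entity": "@sys.number", "required": True},
            "p_existingcustomer": {"entity": "ent_existingcustomer", "required": False},
            "p_creditscore":      {"entity": "@sys.number", "required": False},
        },
        "input_contexts": ["awaiting_riskscore"],
    },
    # ... analogous intents for existing customer and credit score
]
```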
Other intents can be added if necessary. For instance, one could add an intent that recognizes when a user asks for the inputs for which he or she has already provided a value, or for all of the inputs for which values must still be provided. In fact, the possibility of reusing these intents in all decision chatbots is a very appealing feature of our approach. Each intent definition contains a set of training phrases, which are input examples used to detect which intent the user refers to. These training phrases also detail how to extract the information to fill the parameters of the intent from the user's message. For instance, the sentence "I want to determine the risk category of a non existing customer with a risk score of 90" should match the main decision intent and has to fill two parameters, namely existing customer, which evaluates to false, and risk score, which evaluates to 90. The generation of training phrases is one of the most relevant, but also one of the most time-consuming, tasks when developing a chatbot based on NLU. In our approach we provide a semi-automatic way to generate these phrases based on natural language generation (NLG). Traditionally, NLG has been used in chatbots to generate their responses to the users. However, with the advent of NLU chatbots, new NLG tools have been designed specifically for building training phrases for chatbots. The reason is that this task is slightly different from other NLG tasks because the goal is not to build a set of phrases intended for humans, but instead to build a dataset for training the chatbot. Therefore, it is not strictly necessary that the resulting phrases be fully syntactically correct, because the NLU layer already deals with this aspect. Furthermore, the natural language generation must be designed to provide a wide variety of examples. There are several tools that can be used for this task. The approach they follow is to build a generation specification that defines the patterns of text used to create the dataset. In this paper, we use the open source project Chatito. Figure 7 depicts an extract of such a specification for the example chatbot. It includes an intent entity called dmnexample1 that can be generated from two alternative sentences. Each of the sentences refers to alias entities, like ~[init] or ~[decision], that provide alternatives for the phrase generation. Alias entities can be made optional by adding a question mark at the end of the alias name, as in ~[of parameters?]. Finally, slot entities, which represent parameter values, are annotated with @. In the procedure to generate training phrases, the DMN model, together with some hints about how to deal with certain input parameters, is processed and used to automatically build a specification for each of the intents of the chatbot. The user can then optionally refine the specification obtained automatically in the previous step. Finally, the NLG tool is used to generate the set of training phrases that will be used to train each intent in the NLU chatbot. Next we focus on how the first step is performed.
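Before detailing these generation patterns, the following sketch illustrates how such a generation specification could be assembled and written to a file. The specification text is a hedged approximation of Chatito's syntax based on the conventions described above (intents, ~[aliases] with "?" for optional parts, and @[slots]); the file name and the concrete phrases are assumptions, not the exact content of Figure 7.

```python
# Sketch: assembling a Chatito-style generation specification for the decision
# intent of the running example. The syntax shown is an approximation of the
# conventions described in the text; consult the Chatito documentation for the
# exact grammar.

SPEC = """\
%[dmnexample1]
    ~[init?] ~[decision] ~[of-params?] ~[with-params?]
    ~[decision] ~[with-params?]

~[init]
    I want to know the
    I want to determine the
    what is the

~[decision]
    risk category

~[of-params]
    of @[existingcustomer]

~[with-params]
    with a risk score of @[riskscore]
    with a credit score of @[creditscore]

@[existingcustomer]
    an existing customer
    a non existing customer

@[riskscore]
    35
    90

@[creditscore]
    500
    700
"""

with open("dmnexample1.chatito", "w", encoding="utf-8") as spec_file:
    spec_file.write(SPEC)
# The resulting file can then be fed to the Chatito generator to produce the
# training phrases for the NLU platform.
```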
The pattern for decision intents is composed of an optional initial phrase ([init]) like "I want to know the"; the name of the decision ([decision]), which can be either the output or the name of the decision in the DMN model; and two additional patterns to include parameters, namely of-params and with-params, although more patterns could easily be added following a similar approach. Of-params include inputs that are better expressed using the preposition "of", like existing customer in our example. In contrast, with-params include inputs that are more naturally expressed using the preposition "with", like risk score or credit score. The information of whether a parameter is an of-param or a with-param must be provided by the chatbot developer. If no information is provided, all parameters are considered to be with-params. Regardless of the type of pattern, a slot entity is created for each input. The values of the slot entity depend on the type of the input and include the domain of the input, or a subset of it if the domain is infinite, as for numbers, dates or strings. The only exception is when the type of the input is boolean. In that case, if the input is a with-param, a second type of slot entity is added for the input, which combines the name of the input with "with" and "without", i.e., "with existing customer" and "without existing customer." On the other hand, if the input is an of-param, there is only one slot entity, whose values are the name of the input and its negation: "an existing customer," "a non existing customer," and "not an existing customer." Having the slot entities defined, the pattern for of-params includes the preposition "of" and k-permutations of the slot entities created for each input, with k ∈ {1, ..., n}, where n is the number of inputs. Furthermore, if the input is of a generic type like numbers, dates or strings, the name of the input is added after the corresponding slot entity. If the number of permutations is too large, then only all 1-permutations, one n-permutation and a random subset of the other k-permutations are chosen. This can be done because we are only creating training examples, not defining the whole set of possible phrases the chatbot has to recognize. Regarding with-params, the pattern follows the same approach, but starting with the preposition "with". However, unlike in the previous case, the slot entity is not used directly; instead, an alias entity provides different alternative ways in which the value of the attribute can be given. Some examples are "a credit score of," "a value of credit score of," or "credit score as." The permutations used to generate the training phrases allow the user to provide the information about the different inputs in any order, following requirement R1. As we said before, once the phrases are generated, the user can refine them if desired. The pattern for input intents is composed of an alias entity that represents the answer to the question ([answer]) and, optionally, some additional parameters. The former includes options like providing the slot entity directly, or surrounding it with some additional text like "it is" or "the <parameter name> is." Because of the way the questions are asked, in this case there is no difference between of-params and with-params. Concerning the additional parameters, they are added following the same approach as before, although with some slight changes.
Since the context of this phrase is slightly different from that of the decision intent, a new alias entity is added that allows the definition of of-params using the pattern "it is", and with-params using the pattern "the <parameter name> is". For instance, after the question "what is the credit score?", a possible answer could be: "it is 37 and the risk score is 47." Furthermore, the same entities for specifying of-params and with-params as in decision intents are also included, although in of-params "of" is replaced by "it is." This means that another possible answer could be: "37, and it is an existing customer with a risk score of 25." Obviously, the input that appears in the question is removed from all these entities. Finally, note that some heuristics could be added to improve the automated process for some common types of inputs. For instance, if an input is Age, the generation could be adapted so that instead of creating phrases that include "an age of 23," it creates phrases that include "23 years old." Each intent has an associated action that is performed when the intent is recognized. The most straightforward action is a canned response provided by the chatbot. This is the action used with the support intents. For example, a farewell intent may answer "You're welcome," "Come back soon," etc. However, the action that needs to be implemented for decision and input intents is more elaborate. Our proposal uses the algorithm shown in Figure 8, which depicts the response action executed every time the user answers in a conversation turn. It assumes we have three functions available for the decision at hand: inputs(), which returns all the inputs of the decision; decision(parameters), which returns the decision for the given set of parameters, where parameters is a map that assigns a value to each input; and is_necessary(input, parameters), which returns whether the given input is necessary given the current values assigned to the parameters. For instance, in the example of Figure 3, is_necessary(Risk Category, Currently employed = true) would evaluate to false, because the value of Risk Category does not affect the decision, whereas is_necessary(Risk Category, Currently employed = false) would evaluate to true. The execution of the functions is carried out in an external service. Following Algorithm 1, it receives as input the parameters recognized in the intent and iterates over the inputs of the decision (line 2). Then, it is checked whether a parameter value has already been provided for each input (line 3). If the input is missing, it is checked whether, with the current parameters, the missing input is really necessary by means of the function is_necessary (line 4). Therefore, only required information is asked of the user, as imposed by requirement R3. If the missing input is necessary, an output context is created and activated with the name of the missing input, and a response like "What is the Risk Score value?" is sent requesting the missing value. This is performed by the function ask_for_param (lines 5-8). If the parameters have been provided for all the required inputs, the decision is made and a response is sent including the decision result (lines 9-11). To illustrate the result of the mapping described in the previous steps, Figure 9 shows the outcome of applying it to the running example.
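To complement the description above, here is a minimal sketch of the response action in Python. It assumes the three functions inputs(), decision() and is_necessary() are provided by the external decision service; the helpers ask_for_param() and respond() are simplified placeholders rather than the exact implementation of Algorithm 1.

```python
# Sketch of the response action (cf. Algorithm 1 / Figure 8). The functions
# inputs(), decision() and is_necessary() are assumed to be provided by an
# external decision service; ask_for_param() and respond() are simplified
# placeholders for the chatbot's reply mechanism.

def handle_turn(parameters: dict, inputs, decision, is_necessary):
    """parameters maps each input name to the value recognized so far (or None)."""
    for inp in inputs():                                   # line 2
        if parameters.get(inp) is None:                    # line 3: value missing?
            if is_necessary(inp, parameters):              # line 4: still needed?
                return ask_for_param(inp)                  # lines 5-8
            # otherwise the input is irrelevant (wildcard), so skip it
    return respond(decision(parameters))                   # lines 9-11


def ask_for_param(inp: str) -> dict:
    """Activate the awaiting_<input> context and ask the user for the value."""
    return {"output_context": f"awaiting_{inp.replace(' ', '').lower()}",
            "response": f"What is the {inp} value?"}


def respond(result: str) -> dict:
    """Send the decision result back to the user."""
    return {"response": f"The result is {result}."}
```

In a Dialogflow deployment, a function like handle_turn would typically be called from the fulfillment webhook, with the recognized parameter values taken from the matched intent and its active contexts.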
Our methodology is designed to allow the use of a single chatbot to support multiple decisions. To this end, it is necessary to consider two different situations depending on the relationship between the decisions. If decisions are independent of each other, it is enough to repeat the same methodology for each decision and add the resulting intents, entities and actions to the same chatbot. The only thing that needs to be considered is to add some prefix when generating all the names in order to avoid name clashes between them. When the conversation with the chatbot starts, the chatbot will try to match the user input with all of the decision intents, which are the only intents without an input context. Once a decision intent is matched, the context of the chosen decision will be set. This will prevent decision intents or input intents of other decisions from being matched with user inputs. If there is a hierarchical relationship between the decisions, i.e., when one of the inputs of a decision table is the result of another decision table, there is no need to apply the methodology to each decision separately. Instead, it is enough to apply it to the decision at the top of the hierarchy. Only two things need to be considered in this case. First, the set of inputs used to build the chatbot must be the union of the inputs of all the decision tables that belong to the hierarchy. And second, the function is_necessary needs to be extended to consider that inputs can be found in different decision tables. The proposed methodology is not restricted to a single tool for generating chatbots. To evaluate the proposed methodology, chatbot prototypes were built following it. Then, the opinions of 17 people from academia and industry were collected through online questionnaires. We were interested in knowing their perception of the usefulness of the result and their opinion about the user experience. In this way we evaluate, for example, whether they consider the support intents useful, or the phrases used to issue the response actions. The following is the list of questions included in the evaluation questionnaire. Q1: How was your interaction with the chatbot? Q2: How useful did you find the assistance provided by the chatbot in case you asked for it? (only answer if you used it) Q3: Considering the decision table below, did the decision-support chatbot ask you the right questions to come to a decision? Q4: Do you see potential for this kind of application in organizations? Q5: What did you like/dislike about the chatbot? Q6: Do you have any suggestions to improve the Decision-Support Chatbot? The full details of the results are available online (https://github.com/Adartse/DMNChatbots). With respect to the first four questions, which have a linear scale (from 1 to 4), we found that: • For Q1, 47% considered that they had a fluent or very fluent conversation with the chatbot. Only 13% considered the conversation not fluent at all. • For Q2, 93% used some support command during the conversation, of which 61% considered that they received useful information. • For Q3, only 10% of the respondents considered that the chatbot did not ask the right questions to make the decision, another 10% considered that it did so poorly, and the remaining 80% considered that it did so correctly or absolutely correctly. • And for Q4, 80% considered that this type of application has potential or large potential in organizational contexts. In this article we introduced a novel methodology that allows the creation of chatbots, in a semi-automatic way, through the systematic transformation of DMN decision tables.
The chatbots built following these steps are able to ask for the required inputs to evaluate the decision rules defined and to obtain a concrete decision output. This is done while reducing the number of interactions with the user, for example, when wildcards are included in the decision definition or when information about several inputs is collected at the same time. In addition, the methodology provides the opportunity to reuse domain-independent parts of chatbots, like the support intents, in all decision chatbots. The generated chatbots are prototypes and, as the evaluation shows, although in most cases satisfactory conversations are obtained that respond adequately to the provided inputs, there are still functionalities that can be improved or extended, such as providing more support to the user during the conversation by allowing questions about the already provided inputs, rectifying previously given values, raising what-if scenarios, or adding patterns to the automatic sentence generation to improve the recognition of user sentences. However, it should be remembered that the aim of this article is not to provide a production-ready chatbot, but to show that such chatbots can be generated semi-automatically from DMN models. In addition to enriching interactions with the user, we foresee the need to continue working towards the fully automatic transformation of DMN models defined in a standard way, for example as XML files. The answers to Q5 and Q6 are closely related. Although we received positive feedback highlighting the usefulness of this type of tool, especially for more complex decision-making scenarios, the main comments on what people disliked and what should be improved focused on (i) providing more information on the context of the decision to be made and (ii) expanding and customizing the help options during the conversation. To address (i), additional information on the decisions should be requested during the transformation process in order to build context-information intents as new support intents. With regard to (ii), a very similar solution is proposed: collect information on the type and range of values expected for each input and build a set of additional support intents.
Both comments will serve as a basis to further extend the methodology proposed here towards increasingly friendly conversations, and to develop tools that automate the transformation of DMN models into chatbots.
References
[1] Chatbots and Conversational Interfaces: Three Domains of Use
[2] A Survey on Conversational Agents
[3] Aquabot: a diagnostic chatbot for achluophobia and autism
[4] Chatbots: Are they really useful?
[5] Machines as teammates: A research agenda on AI in team collaboration
[6] Multi-platform Chatbot Modeling and Deployment with the Jarvis Framework
[7] Exploring affordances of Slack integrations and their actualization within enterprises - towards an understanding of how chatbots create value
[8] Why People Use Chatbots
[9] Osakidetza - Protocolo COVID-19
[10] Decision Model and Notation (DMN) specification, v1.3
[11] Disambiguation of DMN Decision Tables
[12] A new approach for measuring rule set consistency
[13] Evaluation of the decision performance of the decision rule set from an ordered decision table
[14] Semantics, Analysis and Simplification of DMN Decision Tables
[15] SuperAgent: A customer service chatbot for e-commerce websites
[16] A new chatbot for customer service on social media
[17] How chatbots influence marketing
[18] Robotic process automation: Contemporary themes and challenges
[19] Collaborative Modeling and Group Decision Making Using Chatbots in Social Networks
[20] From Process Models to Chatbots