authors: Mulcahy, Robert S.
title: Creating Effective eLearning to Help Drive Change
date: 2020-11-02
journal: J Chem Health Saf
DOI: 10.1021/acs.chas.0c00091

Covid-19 has forced many institutions into rapid adoption of eLearning, but course designers who are used to live classrooms are often unsure how to design for virtual ones. The purpose of creating a course is to help drive change; if learners could already do what they needed to do, there would not be a need for the course. However, courses themselves are rarely sufficient to create change. Therefore, leaders in charge of driving a particular change should be thinking about evaluation and existing barriers to change in addition to making formal courses available. A number of research-based instructional design principles are discussed in this paper along with special considerations for eLearning.

It is common for organizations to identify areas where behaviors are not meeting expectations. Employees may not be following appropriate safety procedures, for instance, or they might be insufficiently documenting their work. Management often decrees, "We need more training." "More training" is, unfortunately, generally insufficient by itself for changing behavior in an organization. Certainly, formal learning, whether in classrooms, at a distance, or on demand, can play an important role by equipping learners with needed skills and knowledge, but the strategy for driving change must also address the motivations, constraints, and accountability of the target population. This focus on driving change creates considerations different from learning in academic institutions.

This Article explores the role that learning, particularly eLearning, has in driving change and provides research-based guidance for creating impactful eLearning. eLearning formats include on-demand, self-paced instruction and virtual synchronous classrooms. 1 The Covid-19 pandemic has placed a spotlight on virtual classrooms; therefore, this paper will place more emphasis on virtual classrooms than on on-demand instruction. It is worth noting at the outset that a large established research base indicates that the medium a class is delivered in (e.g., live classroom versus virtual classroom) matters less than the design of the class, at least in terms of instructional outcomes. The key question for virtual delivery, then, is how to create effective designs given the strengths and limitations of the medium.

This paper will first make the case that the starting point of driving change is establishing an evaluation strategy; then, it will survey some common barriers to change in order to demonstrate how training and instruction may be an important component of driving change but is by itself insufficient to the task. The paper will then explore research-based instructional design principles that are not specific to any medium before extending those principles specifically to eLearning.

Designing instruction and driving change are both highly sensitive to context and full of nuance. While researchers continue to explore the psychologies of learning and change, there is no formula for success, and what works in one place or at one time may not work elsewhere. Thus, when designing both instruction and change strategies, it is critical to consider how success will be evaluated.
In simplest terms, if what you are doing is not working, you should do something else, and evaluation is necessary to determine if the strategies are succeeding. For decades, the gold standard for instructional evaluation in corporate settings has been the Kirkpatrick Model. 2 The Kirkpatrick Model breaks evaluation into four levels: reaction, learning, behavior, and results.

The reaction level measures learner satisfaction. Knowing how learners feel about a program or course is important; no one wants to make people spend hours doing things they feel are a waste of time. Unfortunately, though, there is little correlation between satisfaction and learning, 3,4 much less the level of postcourse real-world behavior change. If postcourse evaluation surveys are the only practical way to evaluate the effectiveness of a particular course, questions centered on self-efficacy are most likely to correlate closely with actual learning. 3 An example of a self-efficacy-focused question is, "On a scale from 0 (cannot do at all) to 100 (highly certain can do), rate how confident you are right now that you can [description of skill taught in course]." 5

The second Kirkpatrick level measures learning, typically through testing. A course that requires, for instance, a learner to pass an exam in order to receive credit is operating at this level. Well-designed assessments can indicate the level of knowledge that learners have when they leave the course (though without pretesting it is difficult to establish how much of this knowledge was gained from the course and how much was prior knowledge), but they do not indicate whether learners will transfer learning to real-world application.

The third level measures behavior change in real-world settings. Sometimes, this can be achieved by monitoring metrics that the organization is already using to evaluate performance. For instance, an increase in sales achieved by individuals who took a particular course, but not by salespeople who did not take the course, is evidence that learners are applying what they learned. Observational data may also be collected periodically to see if the desired behavior has been implemented and sustained.

The fourth level calculates overall impact. To extend the previous sales example, if the organization added up all the sales increases from all the employees who took the course, that could be a measure of overall impact. The fourth level of evaluation is sometimes extended to calculate return on investment by creating a model of the bottom-line financial benefits of the behavior and comparing them against the cost of the program, 6 which can include direct costs like tuition and indirect costs like lost productivity during the time spent learning. Of course, impact is not always straightforward to quantify. If an organization sent its managers to a course on giving performance feedback, it would take some thought to determine the best way to quantify the impact.
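Where the benefits can be expressed in dollars, though, the fourth-level arithmetic itself is straightforward. The sketch below works through a return-on-investment calculation for a hypothetical sales course; it is illustrative only, and every figure in it (the number of learners, the incremental sales attributed to each, the loaded hourly rate) is an invented assumption rather than data from the article or any study.

```python
# Minimal sketch of a Kirkpatrick Level 4 ROI calculation.
# All figures are hypothetical and for illustration only.

def program_roi(benefit, direct_costs, indirect_costs):
    """Return ROI as a fraction: net benefit divided by total cost."""
    total_cost = direct_costs + indirect_costs
    return (benefit - total_cost) / total_cost

# Hypothetical sales course: 40 learners, each credited with $5,000
# in incremental annual sales attributed to the course.
benefit = 40 * 5_000

# Direct costs (tuition, materials) and indirect costs
# (lost productivity while learners were in class).
direct_costs = 60_000
indirect_costs = 40 * 8 * 75   # 40 learners x 8 hours x $75/hour loaded rate

roi = program_roi(benefit, direct_costs, indirect_costs)
print(f"Total cost: ${direct_costs + indirect_costs:,}")
print(f"ROI: {roi:.0%}")   # net benefit returned per dollar spent, as a percentage
```

Even a toy model like this is useful because it forces the evaluator to state attribution and cost assumptions explicitly, where they can be challenged and refined.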
While the Kirkpatrick model is the most widely known evaluation model, it is not without its critics. The model has been criticized for being too basic. 7,8 The model, for instance, may indicate that a learning program failed, but it will not necessarily provide insight on why the program failed or how to fix it. The model has also been criticized for being too complex. 9,10 Levels three and four in particular are usually skipped for lack of time and/or ability to make these determinations in complex environments. 11 The model also provides little guidance on how to partner with an organization's leaders to make sure that evaluation findings have impact. 12,13 Collecting data that will not lead to real-world change in the design or execution of learning programs is a poor use of resources.

Researchers and practitioners continue to propose alternative evaluation models. Thalheimer's Learning-Transfer Evaluation Model (LTEM) places high emphasis on real-world transfer (at its highest tier extending evaluation to the impact that a program has had on the entire organization as well as on the community, society, and the environment). 8 For example, the fifth tier (of eight) in LTEM urges evaluators to measure decision-making competence. Whereas assessments of learning often focus on facts and terminology, measuring decision-making competence must be done by presenting learners with real-world scenarios and asking them to make decisions. Thalheimer notes that measures of competence must be spaced out after a learning event to ensure learners are retaining what they have learned rather than just cramming and forgetting.

In contrast, Brinkerhoff's Success Case Method approaches evaluation from the perspective that complex models are simply not practical in the real world and instead focuses on distilling evaluation down to impactful essentials. 9 An interesting twist offered by the Success Case Method is that it focuses on the learners who are most successful at transferring learning from a course to the real world. If, for instance, a company with offices all over the country sent some people from each office to a centralized course, evaluation might involve using surveys to determine which learners from the program were most successful at using the new skills once they got back to their offices. From there, interviews with those learners could be used to find likely contributing factors to their success. What prior knowledge did those learners have? What support did they have in their local office? And so on.

The critical point overall is that the leaders of change must identify what their goals are and what their strategies for evaluating success will be. When it comes to instruction, merely following best practices does not guarantee success. The only way to know what works in a specific environment is to experiment and evaluate.

To change behavior, telling people what to do and why (i.e., training) is generally insufficient. Successful change leaders understand the barriers that can keep people from using knowledge and skills they possess. Below are some examples of barriers that change leaders should consider.

Perhaps the biggest barrier to change is the power of social norms. 14 People do what they perceive the social norm to be. An interesting illustration of this phenomenon came from an experiment conducted in hotel rooms. Researchers compared the performance of two different bathroom tags in how effective they were at inspiring people to reuse their towels. 15 One tag pointed out that reusing towels was good for the environment. The other tag argued that people should reuse their towels because that is what most other people do. And, indeed, hotel patrons were significantly more likely to reuse their towels in rooms with the tags that compared them to other people; leaving their towels on the floor violated the social norm. The key principle is that people only move when they sense that the herd is moving. Understanding this tendency can significantly affect messaging.
For instance, a leader who was frustrated that 85% of employees had failed to transition to a new process might be tempted to communicate, "Only 15% of people have transitioned to the new process; we need to do better!" Messages like that can undermine the intent because they establish a norm that most people are still using the old process. If instead the leader pointed out that the number of people using the new process had jumped 50% last week, that could be equally accurate (even if adoption rates only went from 10% to 15%) while at the same time suggesting that using the new process is becoming the social norm. Medical research provides another demonstration that people change their behavior if they think they are out of step with peers: researchers found that telling doctors how they compare against their peers in terms of inappropriately prescribing antibiotics brought overprescription of antibiotics down 81%, a result much more effective than interventions centered around education and reminders. 16

A second important barrier to change is that people ascribe significant value to the tasks and processes they are used to doing and following. Sometimes change is additive (for instance, adding a safety check), and people may resist displacing any of their current work to add a new duty. Thus, it is important to tell people what to stop doing in order to create capacity for new duties. For every ask, a stop. 17 Telling people what to stop doing to create capacity also shows commitment from leadership; leaders are willing to give up something of value in order to implement the change. Even this may meet with resistance, of course. Giving people logical reasons why they should do something a different way often results in them becoming entrenched in how they are currently doing it. This element of human nature has been dubbed the backfire effect. 18

A third important barrier to change is that people resist change that feels large. The seminal demonstration of this phenomenon was Freedman and Fraser's 1966 demonstration of the foot-in-the-door technique. 19 They demonstrated that most homeowners will refuse to display a large sign for a worthy cause, unless you first ask them to display a small, innocuous window sign and then come back after time has passed. At that point, they have emotionally committed to being the type of person who displays a sign for that cause, so the large sign is a much easier sell. In the intervening decades, as described in a metastudy by Burger, 20 other researchers have demonstrated the veracity of the foot-in-the-door technique across a number of different contexts.

The point here is not to survey all possible barriers to change, but rather to suggest that consideration of how to frame and drive change is necessary before considering what kinds of learning are needed to support it. Instruction without appropriate strategies for driving change and holding people accountable risks creating knowledge that is quickly forgotten through lack of application, which is a waste of effort and money.

High-quality instruction is often a necessary component for driving change. People cannot comply with requests to change what they are doing if they do not understand what is being asked of them, how they should do it, and why it works that way. The recent replication crisis 21 in social science research was an excellent reminder of the importance of validating research findings in numerous studies across diverse contexts.
Fortunately, there are spaces, such as the Institute of Education Sciences' What Works Clearinghouse, where practitioners can access guidance on which instructional design principles are supported by a deep research base. 22 Instructional design has been defined by one professional society as "the creation of learning experiences and materials in a manner that results in the acquisition and application of knowledge and skills." 23 The following is a survey of four instructional design principles especially critical to adult learners which are often overlooked, particularly in professional settings. The principles outlined below apply to all learning design, not just eLearning, but will provide the basis for the discussion to follow about their implications for technology-driven distance learning.

Generally, the most critical element in a successful course is active learning. Listening is an inefficient way to absorb information, and absent the feedback that comes from attempts to apply knowledge, learners reliably overestimate how well they understand the content. 24,25 Learning instead is most efficient when periods of direct instruction (lecture) are short and punctuated with realistic and relevant practice that allows learners to accurately assess their level of understanding and gives them an opportunity to receive feedback geared toward correcting their knowledge gaps or misperceptions.

Interactivity should also align with instructional objectives. It is common for instructors to begin courses by reading or paraphrasing a set of instructional objectives like "Given a tax issue, learners will identify the appropriate resource to consult." Sharing instructional objectives at the beginning of a course does not necessarily correlate with increased learning, 26 but nonetheless the careful creation of instructional objectives is a critical part of course design. 27 The instructional objectives should align with the real-world problem to solve. The example objective above suggests that one of the problems that new tax professionals encounter is that they do not know which resources to consult to help them solve new problems that come up. To address this objective, let us say the instructor presented various common tax references and described the kinds of information within each, followed by practice activities. A perfect match for the instructional objective would be to give learners tax problems and ask them which resource they should consult. If, instead, the only practice provided asked learners to describe in the abstract, for a given resource, what kinds of information may be found in that resource, that would be a mismatch for the objective. That kind of practice would focus on abstract knowledge, whereas the objective is focused on performing a real-world skill. Practice should ultimately focus on applying knowledge.

If instead the real problem was that new tax professionals, even if they can find the right resource, still have trouble generating useful tax advice, then the objective above would not go far enough. It would need to be accompanied by an objective such as "Given a common tax issue, learners can research the issue and offer appropriate advice." In that case, the practice suggested above might be a stepping stone to additional instruction and practice, with the practice this time culminating in giving learners common tax issues and having them determine what advice they would give.
Instructional objectives describe the problems that course designers want to help learners solve. It is useful to think about problem solving on a continuum from well-structured to ill-structured problems. 28,29 Well-structured problems are problems where success or failure is clear and that have a procedure that can be used to solve the problem every time. There is a correct way to use an eye wash in the event of exposure to chemicals. On the other hand, ill-structured problems are complex; the criteria for success can be debatable, and there may be many equally valid solution paths. (There is even a step up: wicked problems are problems that may not even be solvable and where experts might disagree on what the problem actually is. 30)

Instructional paths should vary based on problem type. For well-structured problems, particularly ones where learners have to react without hesitation, instruction should focus on building automaticity; that is, practicing and receiving feedback until performance reaches an acceptable level (and providing periodic practice in the real world to ensure automaticity has been maintained). As problem solving becomes less structured, the number of solution permutations grows. Practicing rote procedures becomes less important, and understanding the underlying principles becomes more important. Applied, practical practice is still critical, though. While it is common to see "understand" as a verb in instructional objectives, in most situations understanding is insufficient. The real objective is to prepare participants to apply that understanding. If they do not practice using what they learn to make decisions in realistic scenarios, then knowledge is often inert: available in memory, but useless because learners do not know when to retrieve it and how to apply it.

A common impulse among subject matter experts (SMEs) is to sacrifice practice in order to cover all the material. If the length of a course is predetermined, and the SME wants to add important new material, they will often replace active learning with lecture so they can cover more. Or if an instructor is running short on time, they will skip interactive pieces in order to achieve coverage. Since active learning is more effective than listening, cutting interactivity to cover more material is likely to result in learners remembering less.

The more complex and ill-structured the skills being taught, the harder it is to create realistic practice in the classroom. For instance, a class teaching project management in complex environments might find those environments difficult to simulate realistically or practically. However, it is still imperative to practice realistic decision making. Thus, course designers should be looking for realistic case studies to explore with their classes, making opportunities to ask questions like, "Given the facts in this case to this point, what decision would you make (and why)?" The next instructional principle provides more guidance about how to approach interactive cases.

Experts understand problems fundamentally differently than novices. 31 That is unfortunate because experts are, understandably, the people usually asked to design and deliver courses. It is difficult for someone who is an expert on a topic to understand what is going on in the minds of beginners.
It is much easier for experts, for instance, to learn from abstract information, so experts are prone to presenting information in the abstract, making unwarranted assumptions about prior knowledge, and underestimating how hard problems are to solve and how much practice is needed. This is sometimes called the "curse of expertise". 32

One approach to overcoming the curse of expertise is being deliberate about the inclusion of a series of concrete, real-world examples. For example, if an SME in the national office of a CPA firm set out to teach inexperienced auditors how to properly document their auditing work, the SME would likely be tempted to go into a deep explanation of the professional standards specific to workpaper documentation. That is not bad (the standards describe the critical principles), but it is incomplete. Merely equipping auditors to parrot the standards does not necessarily equip them to put those abstract standards into action. The SME designing the course would be well-advised to explore both good and bad examples of real-world documentation in the course. Contrasting good and bad examples is useful because if learners only see good examples or only see bad examples they may not key into the attributes that make them good or bad. Explicit contrast is critical.

One approach that has moderate support in the research literature is the worked example, 33 an example that describes the entire path to the solution. To extend the audit documentation case, a worked example might explain step-by-step what an experienced auditor did and how they created proper documentation (and why they did it the way they did, perhaps even including false starts where the experienced auditor created inadequate documentation, realized their mistake, and backed up to fix it). Studying worked examples can increase learning.

For instruction related to safety practices, real-world stories can be especially instructive. 34 Not only are the stories of safety gone wrong compelling, but the storytelling provides the opportunity to explore the decision-making points: a real-world worked example. Why did the accident victim make that particular decision? What contributing factors were present (e.g., fatigue)? Could you see yourself making that same decision in that situation? Honor the messiness of the real world; otherwise, the stories will not ring true.

As noted above, it is easy for experts to underestimate the number of examples that novices will need to really understand the content. Even if an instructor ends up having more examples than they need, the extras can always be given to learners to study later as needed. One last note on the audit documentation example, in the spirit of matching practice to objectives: if the objective of the course is that auditors will be able to create workpaper documentation that adheres to professional standards, it would be important not only to study good and bad examples of documentation, but also to practice creating documentation.

The considerable expenses associated with travel, lodging, and food make it tempting in corporate settings to cram lots of classroom learning into as short a time as possible. However, the research shows that learning is more effective if spread out over time. 35 Four 1 h classes spread out over a month are more effective than one 4 h class. This is called spaced learning. Technology-mediated instruction is lowering the barriers to spaced learning since it cuts out the travel time and related costs of bringing learners back together.
The benefits of spaced learning are only present if the points of learning are carried through from one segment to the next. 36 In other words, if a 4 h class consisted of four rather different topics that do not relate closely to each other and each one is pulled into its own separate class, then the positive effects of spaced learning dissipate. Spaced learning works when an idea is explored in the first class, then sometime later is picked up, reviewed, and explored further.

Millions of years of evolution have made humans excellent processors of visual information. The language processing centers of our brains are much younger and more limited. However, instructors often cram slides with words. Because the language processing centers of our brains can only handle one stream of semantic meaning at a time, this puts learners in the position of trying to read and listen at the same time, which overloads the language center and depresses comprehension. Richard Mayer's Principles of Multimedia Learning 37 lay out many useful principles for screen and visual design that are beyond the scope of this Article, but the critical principle to remember is that instructors should avoid displaying lots of text while they are talking. Likewise, displaying visuals that are not instructionally meaningful, like most clip art, also decreases learning because such visuals are distracting.

The best case is to come up with visuals that complement the spoken instruction. For instance, if the instructor is describing a series of events, a timeline highlighting major points would be more effective than a bulleted list of events. It would give learners a memorable, instructionally meaningful visual to attach knowledge to and would leverage the visual processing centers and language processing centers in a complementary fashion. Likewise, an instructor describing a process can display a flowchart while they explain. Tables and diagrams are additional examples of ways to organize information visually.

Information does not always lend itself to visualization (for example, principles of workpaper documentation). In these situations, instructors should favor the display of relevant artifacts, such as the actual wording of the professional standards, rather than bulleted lists of points. That said, for those times when bulleted lists are necessary, instructors should consider the minimum number of words needed in the list in order to make the organizational structure of the ideas clear. Instructors sometimes object to this practice on the basis that the slides become the notes that learners later refer back to. If that is a concern, consider putting the verbose explanations in the speaker notes and sending those notes out as a learner guide. Alternatively, instructors sometimes object because they use the detailed slides as a crutch to ensure they remember everything they want to cover. In those cases, separate speaker notes for the instructor are preferable because they do not interfere with learning.

All of the instructional principles described above apply to both teaching in a classroom and teaching via technology, whether teaching in real time in virtual classrooms or designing on-demand, self-paced learning resources. It is important to emphasize that, all else being equal, the medium that a course is delivered in (classroom, virtual classroom, on-demand, etc.) does not affect the amount of learning that happens.
Live classrooms are not inherently more effective than eLearning and vice versa, at least in terms of learning outcomes. The same content delivered with the same design will deliver the same instructional outcomes no matter the medium. 38,39 This feels counterintuitive, but it is supported by decades of research. Pedagogy trumps medium. That said, not all instructional designs are feasible in all media. For example, self-paced eLearning does not lend itself to small-group cooperative problem solving, and classroom learning does not lend itself to adaptive learning that branches based on the needs of individual learners. It is also worth noting that the reputation of eLearning sometimes suffers due to the availability of cheaply made or poorly designed courses that do not take advantage of the strengths of the medium. Due perhaps to its ubiquity, the reputation of classroom learning is more resistant to this availability bias even though its quality also suffers when instructors rely heavily on passive lectures.

Instructors are starting to embrace blended learning as a way to take advantage of the strengths of each medium. Blended learning is the mixing of different methods of course delivery. For instance, a course that asks learners to complete an eLearning module first, then come to a live classroom portion, then communicate as a class in the following weeks via a virtual classroom would be an example of blended learning. One form of blended learning, the flipped classroom, turns the traditional classroom model on its head by pushing direct instruction (lecture, examples, explanations) to recorded media that can be viewed anywhere so that learners then come to class to apply what they learned to solve problems. Flipped classrooms take advantage of the strengths of on-demand learning (it can be paused and reviewed; advanced eLearning can include animations, branching, interactivity, and individualized learning paths based on assessments) and of classroom learning (cooperative learning and individualized attention from instructors).

However, as has been made obvious during the Covid-19 pandemic, 40 moving from classroom instruction to online instruction is challenging. This is partly due to issues of equity, as many students do not have access to the necessary hardware or stable high-speed Internet connections, but also to the reality that tools for online learning cannot readily reproduce some common classroom pedagogical approaches. Whole-class discussions to explore ill-structured problems, for instance, are hampered by low resolution, lag, and the inability to see everyone in the class at once. As much as possible, then, instructors should do the following:

• Blend classes across prerecorded media and live virtual classrooms. They should create and assign short recordings as prerequisites centered on the conceptual content and then focus the live time on applied problem solving and answering questions.

• Make liberal use of polling questions to ensure that instructors are aware of how well learners are mastering the material. Instead of "Any questions?" or "Ready to move on?", they should ask application-based questions that match the learning objectives. It is also important to have a plan for what to do if a critical mass of learners are not demonstrating mastery. The more active the learning, the better.
One instructor at the author's firm gained a reputation for asking polling questions to the point where someone complained: "He asked so many polling questions that I had trouble getting work done at the same time." All instructors of virtual learning should seek the reputation of not giving learners time to multitask.

• Use breakout rooms for small group problem solving. Large group discussions are a challenge online, but small group breakouts are an increasingly common feature in virtual learning platforms. The positive effects of cooperative learning disappear by the time group sizes reach six people, so keep group sizes to three or four. 41 Groups can be assigned randomly, but under certain circumstances it may be useful to group people manually by their areas of expertise, for example, to create connections or to ensure the groups have the right set of skills to solve the problems. Instructors should pop into individual breakout rooms to get updates and to ask probing "What if?" questions to ensure that groups understand the problems conceptually.

• Employ multiple instructors or a moderator to field questions typed in through the chat feature. A moderator is especially handy for dealing with technical issues that come up related to the webcast technology, but can also be useful for gathering content questions and feeding them to instructors at appropriate times.

• Time instruction so it coincides closely with real-world usage, or, better yet, per the recommendation above about spaced learning, space instruction so that it unfolds over time but culminates near the time when learners are likely to use the skills in the real world.

There is also something to be said for taking chances in instruction and finding ways to be vulnerable and different, particularly in eLearning. 27 By its nature, there is no formula for how to be different. It usually means being vulnerable, but it also means, returning to the key principle at the top of the paper, seeking feedback and being deliberate about evaluation in order to figure out what really works.

This Article has focused mainly on live instruction, whether in a classroom or at a distance, as on-demand instruction is still more expensive and difficult to produce. However, the advantages of being able to access just the right instruction at the moment of need and move at one's own pace are compelling and can fit well as part of a larger instructional strategy. The evolution of on-demand eLearning over the past 60 years has seen text-based, mainframe-housed instruction delivered via plasma displays and created on specialized systems 42 grow into immersive multimedia that can be created using a wide variety of tools and housed on the Internet for access by all. Matching learners to the right on-demand instruction at the right time, however, remains a challenge. Learners may not know that just the right eLearning exists to help them solve a problem, or they may not know where or how to find it, or they may not even know that they need it. Two encouraging developments may help us overcome this challenge.

The first is falling barriers to production. A couple of decades ago, one needed to learn relatively expensive and complicated tools to create self-paced online instruction. Things have changed quite a bit.
The wife of the author of this article teaches adult English language learners in a large local district; a few years ago, she decided it would be useful to be able to point her learners to online resources for additional help beyond her classroom, but could not find any suitable resources. She would not characterize herself as having high technological literacy, but she was able, with only minimal coaching and support, to use screen capture tools to record short videos that she could post and link to. She has learned how to augment these videos with online quizzes. All of her tools, mostly QuickTime for recording, Google Forms for quizzes, and Google Drive to house the videos, have been free to use and access, including being free of advertising. This accessibility of tools has allowed her to create exactly the right materials for the needs of her learners, and analytics have shown her that her resources have been used far more widely than her own classroom.

Even as the technology is becoming more accessible, time is still a barrier. In corporate settings, every hour of live, face-to-face training takes on average 43 h to plan and deliver. Interactive, self-paced eLearning takes several times as long, approaching 200 h per hour of instruction, to prepare. 43 One would expect live virtual classroom courses to fall somewhere in between, depending on the extent of incorporation of interactive elements like polling questions that have to be programmed ahead of time.

The second trend of interest is the potential for bots to help connect people to the right learning at the right time. Today, on-demand instruction is used because either someone has curated it and assigned it to learners, or learners have gone looking for help and were able to find it, either on the web or in a learning management system they have access to. It is not hard, though, to picture digital assistants that are aware of what someone is trying to achieve and can point out resources they did not even know existed. For instance, if a business consultant had a client meeting on their calendar, a digital assistant could in theory look at that client's industry and proactively suggest recent trending articles relevant to that industry. Or a digital assistant for an auditor might notice that they are confronting an accounting issue they have not encountered before, ask them a few questions to gauge conceptual understanding, and then point them either to resources that they have seen before as a means of review or to new resources. Digital learning assistants may not be here tomorrow, but universities have already experimented with training bots based on the work done by human teaching assistants, 44 suggesting that the future may not be that far away.
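The article describes such assistants only speculatively. As a rough illustration of the kind of matching logic involved, the sketch below pairs a learner's current task with tagged on-demand resources, surfacing unseen material as new and previously completed material as review. Every resource title, tag, and the matching rule itself is a hypothetical assumption invented for this example, not an existing product or API.

```python
# Hypothetical sketch of a rule-based "digital learning assistant".
# It matches a learner's current task against tagged on-demand resources,
# surfacing new material first and previously seen material as review.

from dataclasses import dataclass, field

@dataclass
class Resource:
    title: str
    tags: set

@dataclass
class Learner:
    name: str
    seen: set = field(default_factory=set)   # titles already completed

LIBRARY = [
    Resource("Revenue recognition basics", {"accounting", "revenue"}),
    Resource("Lease accounting update", {"accounting", "leases"}),
    Resource("Trends in retail supply chains", {"retail", "industry"}),
]

def suggest(learner: Learner, task_tags: set, limit: int = 3):
    """Rank resources by tag overlap with the current task; unseen items first."""
    scored = []
    for r in LIBRARY:
        overlap = len(r.tags & task_tags)
        if overlap:
            scored.append((r.title in learner.seen, -overlap, r.title))
    scored.sort()   # unseen (False) sorts before seen (True), then highest overlap
    return [(title, "review" if seen else "new") for seen, _, title in scored[:limit]]

# An auditor facing an unfamiliar lease-accounting issue:
print(suggest(Learner("A. Auditor", seen={"Revenue recognition basics"}),
              task_tags={"accounting", "leases"}))
```

Whether the matching is done with simple tags, as here, or with more sophisticated models, the point from the article stands: the value of such an assistant lies in surfacing relevant resources at the moment of need.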
■ REFERENCES

E-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning
Evaluating Training Programs: The Four Levels
A Review and Meta-Analysis of the Nomological Network of Trainee Reactions
Fudging the Numbers: Distributing Chocolate Influences Student Evaluations of an Undergraduate Course. Teaching of Psychology
Guide for Constructing Self-Efficacy Scales. Adolescence and Education
The Bottom Line on ROI: Basics, Benefits, & Barriers to Measuring Training & Performance Improvement
Training Evaluation in Italian Corporate Universities: A Stakeholder-Based Analysis
Brinkerhoff, R. The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training
Evaluation and Continuous Improvement with a Community Focus
A Critical Analysis of Evaluation Practice: The Kirkpatrick Model and the Principle of Beneficence. Evaluation and Program Planning
Are We Doing the Right Thing?: Food for Thought on Training Evaluation and Its Context
A Room with a Viewpoint: Using Social Norms to Motivate Environmental Conservation in Hotels
Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices: A Randomized Clinical Trial
The Simplest Way to Inspire Change
When Corrections Fail: The Persistence of Political Misperceptions
Compliance without Pressure: The Foot-in-the-Door Technique
Nosek, B. A. Estimating the Reproducibility of Psychological Science
Flawed Self-Assessment: Implications for Health, Education, and the Workplace
A Framework for the Evaluation of the Effectiveness of Adjunct Questions and Objectives
Designing Successful E-Learning
Instructional Design Models for Well-Structured and Ill-Structured Problem-Solving Learning Outcomes
Using Cognitive Tools to Represent Problems
Dilemmas in a General Theory of Planning
The Relation between Problem Categorization and Problem Solving among Experts and Novices
The Curse of Expertise: The Effects of Expertise and Debiasing Methods on Prediction of Novice Performance
Variability of Worked Examples and Transfer of Geometrical Problem-Solving Skills: A Cognitive-Load Approach
Tales of Disaster: The Role of Accident Storytelling in Safety Teaching
Distributed Practice in Verbal Recall Tasks: A Review and Quantitative Synthesis
Spacing Learning Events Over Time: What the Research Says
Distance Education and Technological Advancement
Media Will Never Influence Learning
What We Lose When We Go From the Classroom to Zoom
Effects of Within-Class Grouping on Student Achievement: An Exploratory Model
How Long Does It Take to Create Learning?
A Teaching Assistant Named Jill Watson | Ashok Goel | TEDxSanFrancisco