title: ABET Accreditation During and After COVID19 - Navigating the Digital Age
date: 2020-12-01
journal: IEEE Access
DOI: 10.1109/access.2020.3041736

Engineering accreditation agencies and governmental educational bodies worldwide require programs to evaluate specific learning outcomes information to verify the attainment of student learning and establish accountability. Ranking and accreditation pressures have resulted in programs adopting shortcut approaches that collate cohort information with minimally acceptable rigor for Continuous Quality Improvement (CQI). With tens of thousands of engineering programs seeking accreditation, qualifying program evaluations that are based on reliable and accurate cohort outcomes is becoming increasingly complex and high stakes. Manual data collection processes and vague performance criteria yield inaccurate or insufficient learning outcomes information that cannot be used for effective CQI. Additionally, due to the COVID19 global pandemic, many accreditation bodies have cancelled onsite visits and either deferred or announced virtual audit visits for upcoming accreditation cycles. In this study, we examine a novel meta-framework to qualify state of the art digital Integrated Quality Management Systems for three engineering programs seeking accreditation. The digital quality systems utilize authentic OBE frameworks and assessment methodology to automate collection, evaluation and reporting of precision CQI data. A novel Remote Evaluator Module that enables successful virtual ABET accreditation audits is presented. A theory based mixed methods approach is applied for evaluations. Detailed results and discussions show how various phases of the meta-framework help to qualify the context, construct, causal links, processes, technology, data collection and outcomes of comprehensive CQI efforts. Key stakeholders such as accreditation agencies and universities can adopt this multi-dimensional approach for employing a holistic meta-framework to achieve accurate and credible remote accreditation of engineering programs.

Outcome Based Education (OBE) is an educational theory that bases every component of an educational system on essential outcomes. At the conclusion of the educational experience, every student should have achieved the essential or culminating outcomes. Classes, learning activities, assessments, evaluations, feedback, and advising should all help students attain the targeted outcomes [1]-[7], [67]. OBE models have been adopted in educational systems at many levels around the world [8], [9]. A list of current signatories of the Washington Accord presents strong evidence of a global migration towards OBE [10]. The Accreditation Board for Engineering and Technology (ABET) has been a founding member of the Washington Accord since 1989 [11]. Recently, the Canadian Engineering Accreditation Board (CEAB) updated its accreditation criteria to adopt the OBE model [12]. In 2014, the National Commission of Academic Accreditation and Assessment (NCAAA) in Saudi Arabia was established, using the OBE model [13].
This shift makes institutions focus more on assessing the expected learning outcomes rather than the quality of the offered curriculum. However, competition to improve rankings of programs has forced many institutions to pursue minimal requirements (for speed) during accreditation processes [14]-[16], [67]. Accreditation was the prime driver for outcomes assessment [14], [15] and the topic of more than 1,300 journal articles between 2002 and 2004 [17]. Consequently, several aspects of established accreditation processes in many institutions may not truly reflect the paradigm and principles of authentic OBE [16]-[22], [67]. An exhaustive systematic study of 99 research articles by Cruz, Saunders-Smits and Groen (2019) [23] concluded that, due to global accreditation requirements, the number of published studies from 2000 to 2017 related to assessment and evaluation of transversal skills had significantly increased. They observed that international quality standards for assessment and evaluation of transversal skills such as communication, innovation/creativity, lifelong learning or teamwork were undefined and deficient. Specifically, inadequate standards for the language of learning outcomes, validity and reliability issues with assessments, and vague rubrics all hampered the evaluation of transversal skills. The current format of measuring the ABET Engineering Accreditation Commission (EAC) revised 7 Student Outcomes (SOs) and associated Performance Indicators (PIs), and evaluating the alignment of the Program Educational Objectives (PEOs), is a cumbersome affair for programs and institutions that utilize manual processes. The general advice provided was to be selective in using assessments for measuring these SOs to minimize overburdening faculty and program efforts for accreditation [11], [24], [67]. This may be acceptable for the fulfillment of accreditation criteria, but from the student-centered point of view of the OBE model, it does not facilitate CQI. Consequently, assessments become deficient and tend to be summative rather than formative, even though good assessment practice refers to all activities which can provide necessary feedback to revise and improve instruction and learning strategies [25], [26]. Additionally, the learning outcomes data measured by most engineering institutions is rarely classified into all three learning domains of the revised Bloom's taxonomy [27] and their corresponding categories for levels of learning. Generally, institutions classify courses of a program curriculum into three levels: Introductory, Reinforced, and Mastery, with outcomes assessment data measured only for the Mastery level courses in order to streamline the documentation and effort needed for program evaluations. This approach presents a major deficiency for CQI in a student-centered OBE model because performance information collected at just the Mastery level arrives at the final phase of a typical quality cycle and is too late for implementing remedial efforts. Instead, student outcomes and performance information should progress from the elementary to advanced levels and must be measured at all course levels for the entire curriculum [24], [28], [29], [67].
A holistic approach for a CQI model would include a systematic measurement of PIs in all three Bloom's domains of learning and provide information on attainment of learning within each domain's learning levels. Compliance for outcomes assessment has been quoted by many [22], [30]-[36], [67] as a major issue in achieving realistic CQI. Many faculty members are not keen to get involved in the assessment process, mostly because the manual assessment and evaluation tools employed lack integration of essential components, require manual data entry and multiple analytical computations, and often yield results which do not accurately represent the actual state of student learning. Instructors are, therefore, unable to realize the tangible benefits of using valid outcomes assessment processes that enhance teaching and learning in an authentic OBE model. Myriad complexities attributed to improper tools that do not integrate multiple components of direct/indirect outcomes assessment for identification of failures, remedial actions and CQI may be identified as the root cause for the lack of faculty involvement. Therefore, there is a dire need to explore ways to improve faculty engagement in the assessment process at the course and program levels. A paper-free, web-based digital system with a user-friendly interface would help encourage faculty participation while integrating multiple outcomes assessment processes for CQI. The indispensable necessity of state of the art digital solutions to automate and streamline outcomes assessment for achieving excellent CQI results and accreditation has been adequately explained in the research literature [20], [32], [34]-[37], [67]. ABET's CQI criterion CR4 requires programs to track quality improvement resulting from corrective actions for failures in student performance extracted from evaluating outcomes at the course and program levels [11]. Gloria Rogers' training slides suggest that quality processes can take about 6 years to fully complete a cycle of assessment and evaluation activity. Therefore, ABET evaluators generally require 6 years of CQI data to be on record with programs and at least 2 years of well documented course materials, SOs based objective evidence and other CQI information as display material during audit visits [11]. A detailed study of an accreditation effort in Canada, in 2011, estimated that the University of Alberta, Edmonton engineering programs spent more than a million dollars, collected more than a ton of data and exhausted more than 16,000 hours of preparation time for the Canadian Engineering Accreditation Board (CEAB) accreditation visit [19]. Similarly, engineering programs worldwide allocate staggering amounts of time and resources for preparing CQI data and display materials for accreditation; unfortunately, since they employ manual CQI processes, assessment and evaluation data is often deficient and lacks the rigor and quality required by a student-centered authentic OBE model to attain the required standards of holistic learning. Jeffrey Fergus, chair of the ABET Engineering Accreditation Commission (EAC), echoed a similar opinion, describing ABET's Criterion 4 (CR4), Continuous Improvement, as the most challenging for engineering programs worldwide [39].
Several aspects of manual CQI models have been highlighted as problematic, such as the standards of learning outcomes statements, vague performance criteria, lack of topic-specific analytic rubrics, reliability and validity issues with assessment and evaluation criteria, random sampling of outcomes data, lack of proper alignment, lack of comprehensive coverage of Bloom's three domains of learning, lengthy quality and evaluation cycles, and inability to achieve real-time learning improvements in cohorts [21]-[26], [28], [29], [42], [43], [44], [46], [49], [67]. Several digital solutions have been proposed in recent literature to alleviate the aforementioned issues with manual CQI systems [19], [20], [32], [34]-[37], [40], [41], [45]-[48], [67]. In consideration of the latest groundbreaking developments related to digital automation of CQI processes, several accreditation bodies such as ABET have incorporated special terms in their accreditation policy (I.E.5.b.(2)) to accommodate engineering programs that choose to maintain digital display materials for accreditation audits [11]. Several ABET symposia conducted in the last 5 years have consistently presented digital technology as a viable option for automating the otherwise cumbersome manual CQI processes [11]. Additionally, the COVID19 global pandemic conditions, by force majeure, have altered the normal protocol of onsite accreditation visits. Many accreditation agencies have either deferred or announced virtual visits for the upcoming accreditation cycles [10]-[13], [18], [50], [51]. Virtual visits would mandate engineering programs to maintain digital documentation for reporting CQI information to enable remote audits. Therefore, the current prolonged pandemic conditions have resulted in an unplanned and inadvertent boom in the digitization of education. This means both accreditation agencies and engineering programs face challenges in developing guidelines and frameworks for implementing CQI systems using practical digital solutions that are based on authentic OBE frameworks and fulfill the requirements of international engineering quality standards. The two top standards of the Council for Higher Education Accreditation (CHEA) recognition criteria, as stated by Eaton (2012), are 1) Advance academic quality: accreditors have a clear description of academic quality and clear expectations that the institutions or programs they accredit have processes to determine whether quality standards are being met; and 2) Demonstrate accountability: accreditors have standards that call for institutions and programs to provide consistent, reliable information about academic quality and student achievement to foster continuing public confidence and investment [38]. Now, with thousands of engineering programs seeking accreditation in the US alone, and given the list of issues prevalent in CQI processes, qualifying credible program evaluations based on reliable and accurate outcomes information is becoming increasingly complex, high stakes and far reaching. Accreditation agencies are faced with the challenging task of implementing high standards, encompassing auditing frameworks and processes with fully trained staff to remotely examine and qualify CQI systems employed by engineering programs. The IEA and ABET have, therefore, not indicated any changes in their accreditation criteria after COVID19 [10], [11].
However, the auditing frameworks should encompass essential OBE theory, best practices for assessment, use of digital quality systems, and automated data collection and reporting mechanisms, to remotely audit programs' CQI efforts for the attainment of SOs. In this research, we explore a meta-framework for examining Integrated Quality Management Systems (IQMS) implemented at the Faculty of Engineering programs for ABET accreditation using Mixed Methods Theory Based Impact Evaluations (MMTBIE) [52]. Evaluations that focus on summative program outcomes are sometimes called impact evaluations [53]. The Evaluation Gap Working Group (2006) concluded, in its consideration of credible program evaluations, that many impact evaluations fail to provide strong evidence because, even when changes are observed after the program has been initiated, the evaluators are often unable to demonstrate that the changes were likely caused by the underlying program, potentially leading at best to unsubstantiated evidence, and at worst to misleading or even harmful conclusions [53]. A succinct statement of research findings made by the Evaluation Gap Working Group (2006) clearly sums up the general state of current program interventions: ''Of the hundreds of evaluation studies conducted in recent years, only a tiny handful were designed in a manner that makes it possible to identify program impact'' (p. 17) [53]. Onwuegbuzie and Hitchcock (2017) stated that programs in education that are currently taking place across various countries, or are being planned, need rigorous impact evaluations that provide trustworthy evidence of change for future decision making [52]. They noted that, to date, the majority of impact evaluations across various fields, including the field of education, have involved the use of quantitative methods, namely, experimental methods, quasi-experimental methods, and non-experimental methods [54]. However, as per the work of James Bell Associates (2008), qualitative methods are preferred over quantitative ones, especially when examining process effects, whereby data are collected on a regular and continual basis to monitor and describe how specific services, activities, policies, and procedures are being implemented throughout the program [55]. Qualitative methods are also employed when conducting a theory-based impact evaluation, wherein the causal chain from inputs to outcomes and impact is mapped out, and the assumptions underlying the intervention are tested [56]; or when conducting what is known as a participatory impact assessment, whereby staff work with local stakeholders to develop their own evaluation [57]. The principle of mixing methods has a long history in program evaluation work [58] which continues to the present [59], but unfortunately, mixed methods techniques have probably been underutilized in impact evaluations. Onwuegbuzie and Hitchcock (2017) emphasized a strong need for an evaluation meta-framework that is comprehensive, flexible, and meets the enhanced complexity of programs. Their work provided a new and comprehensive definition of impact evaluations, called comprehensive impact evaluation, that draws out the importance of collecting and analyzing both quantitative and qualitative data, thereby resulting in a rigorous approach that can allow for strong inferences.
Based on Donaldson's (2007) [60] view, they expanded mixed methods impact evaluations by incorporating evaluation theory (i.e., guiding criteria that indicate what an appropriate evaluation is and how evaluation should be conducted), social science theory (i.e., a framework for understanding the nature and etiology of desired or undesired outcomes and for developing intervention strategies for influencing those outcomes), and/or program theory (i.e., checking assumptions that underlie the specific interventions and how they are expected to bring about change). Building on White's (2009) work [61] dealing with theory-based impact evaluations, they outlined an 8-phase MMTBIE: Phase 1: understand the local and broader context; Phase 2: understand the construct(s) of interest; Phase 3: map out the causal chain that explains how the intervention is expected to produce the intended outcomes; Phase 4: collect quantitative and qualitative data to test the underlying assumptions of the causal links; Phase 5: determine the type and level of generalizability and transferability; Phase 6: conduct a rigorous evaluation of impact; Phase 7: conduct a rigorous process analysis of links in the causal chain; and Phase 8: conduct a meta-evaluation of the process and product of the MMTBIE. The phase boundaries are somewhat arbitrary, but, based on prior work related to theory-based impact evaluations, they follow the general steps for evaluation. In this study, we discuss various aspects of relevant phases of this meta-framework and utilize key elements of the 8 phases as indicators to examine the CQI processes implemented at the Electrical Engineering (EE), Civil Engineering (CE) and Mechanical Engineering (ME) programs of the Islamic University in Madinah. Engineering programs seeking accreditation and quality standards in a digital age during and after the COVID19 global pandemic would, therefore, benefit from publications that provide detailed and practical guiding frameworks based on an authentic OBE model, to help implement state of the art digital quality management systems that seamlessly automate collection and reporting of CQI data for remote audits. Program evaluation using the novel meta-framework presented in this study would help accreditation auditors consider a range of aspects such as the context, construct, causal links, processes, technology, data collection and outcomes results of CQI activity required for credible remote audits of automated digital quality systems. The driving force behind this research is to examine the benefits and limitations of the application of essential theory of an authentic OBE model for the implementation of a holistic and comprehensive educational process that maximizes opportunities for the attainment of successful student learning. The objective is to conduct a MMTBIE of the state of the art IQMS implemented at the Faculty of Engineering's EE, CE and ME programs (2014-20) using digital technology and OBE methodology for ABET accreditation. In particular, the researchers sought to answer research questions that would help the engineering programs fulfill the ABET EAC criterion related to CR4, Continuous Improvement [11]. Do the IQMS implemented at the EE, CE and ME programs:
1. Adequately fulfill essential elements of the philosophy, paradigm, premise and principles of authentic OBE?
2. Comprehensively cover all aspects of ABET's outcomes assessment model?
3. Include sustainable instruments or processes for data collection and reporting of learning outcomes information?
4. Provide a listing and description of the assessment processes used to gather the data upon which the evaluation of each student outcome is based? [67]
A summary of a qualitative comparison of various types of CQI data and key aspects of the manual and automated approaches for their reporting is presented. Finally, we explore the application of a meta-framework for examining IQMS for achieving ABET accreditation, based on a recent study proposing MMTBIEs [52] that outlined 8 phases following the general steps for evaluation. In this study, we will utilize the recommended conditions, actions, and specific questions of the 8 phases of the MMTBIE meta-framework as indicators to examine the IQMS implemented at the Faculty of Engineering. The results of this study provide a multidimensional approach for rigorous remote verification of increasingly complex and high stakes evaluations based on reliable and accurate cohort learning outcomes, for engineering programs and accreditation agencies. The MMTBIE approach presented comprehensively fulfills the requirements of the ABET accreditation criterion CR4 on Continuous Improvement. The findings of this study are expected to inform decisions by accreditation bodies and engineering programs on the right course of action during and after the COVID19 global pandemic for collection, documentation and reporting of the massive amounts of CQI data required for remote engineering accreditation audits. Appendices attached to this paper provide necessary evidentiary information related to the processes and technology implemented in the 6 PDCA quality cycles, tools/instruments used, survey results, relevant meeting minutes of program level quality improvement decisions, samples of CQI reports, and tabulated results of evaluation using the meta-framework. Table 1 summarizes the evidentiary information provided in the appendices. The MMTBIE of the IQMS implemented at the Faculty of Engineering EE, CE and ME departments from 2014 to 2019 involved 43 faculty members and 823 students from multiple cohorts of the 4-year bachelor of science programs. A selective literature review related to engineering program evaluations for accreditation was completed to conduct an effective OBE theory based qualitative analysis of CQI systems. We primarily considered research on accreditation topics in popular engineering education and educational psychology journals and conference proceedings spanning the last 15 years. The results of the literature review were parsed using an OBE theory based qualitative analysis of CQI systems to yield the summary below:
1. Accreditation is the prime driver for outcomes assessment for most higher education institutions in the US and worldwide [14], [15], [17].
2. Most essential principles of authentic OBE philosophy and paradigm are neither targeted nor achieved [14], [19], [21]-[24], [28], [29], [42], [46], [49], [55], [62], [67].
3. Learning models are generally not understood and used comprehensively as the founding framework for CQI efforts [3], [16], [42], [49], [62], [67].
4. The definition of COs and PIs is deficient and lacks alignment with actual learning activities [3], [16], [28], [29], [42], [49], [62]-[64], [67].
5. PIs are mostly generic and lack the required specificity to achieve required validity and reliability in assessment and evaluation [16], [21], [23], [24], [26], [28], [29], [35], [40]-[42], [46], [49], [67]-[69].
6. Most rubrics are generic, simplistic and vague, and lack the necessary detail to accurately assess several hundred complex student learning activities of any engineering specialization [16], [21], [24], [42], [67]-[69].
7. The majority of assessment models target learning activity only in the cognitive domain; learning activities related to the psychomotor and affective domains are mostly not assessed [6], [16], [20]-[22], [24], [28], [29], [40]-[42], [46], [49], [62], [64], [67]-[69].
8. CQI information for all students is not collected, documented or reported [5], [19], [22], [24], [26], [28], [29], [35], [40]-[42], [43], [44], [53].
9. Most program evaluations are based on a small set of random samples of student activity [20], [22], [24], [28], [29], [35], [40], [41], [44], [46], [67], [70]-[74].
10. Independent raters are used to apply generic rubrics to past course portfolios [35], [41], [42], [46], [67], [70], [72], [74].
11. Real-time corrective actions using formative assessments are rarely implemented [22], [24], [28], [29], [35], [36], [41], [43], [44], [67], [72].
12. Course evaluations do not incorporate appropriate weightage for aggregating outcomes results from various types of assessments [22], [24], [40], [46], [49], [67].
13. Program evaluations do not incorporate appropriate weightage for aggregating multiple course and skill levels [20], [24], [36], [40], [41], [46], [49], [63], [67].
14. CQI efforts are not realistic, and programs mostly employ reverse engineering to link corrective actions to outcomes evaluation results [24], [34], [36], [37], [41], [43], [46], [67].
15. Academic advising systems are not based on student outcomes information [35], [67].
The findings of this selective literature review helped identify several major issues with prevalent manual CQI systems, and also reinforced our opinions developed from first-hand observations over more than a decade of regional and international accreditation and consulting experience. In summary, the issues highlighted contradict fundamental OBE frameworks, reflect obsolete assessment practices, or work against the fundamental principles of CQI. Programs and accreditation agencies should ensure that authentic OBE theory is the foundational source from which assessment concepts are induced; those concepts, in turn, support the development of practical frameworks to implement sustainable CQI systems. Accountability of programs through evaluating student achievement is intrinsically important for governing bodies and key stakeholders, as it reinforces and provides monitoring of high standards and rigor in engineering programs. The philosophy, paradigm, premises and principles of authentic OBE form the basis for theoretical frameworks that lead to the development of crucial models which act as the foundation of the IQMS implemented at the Faculty of Engineering. Several essential concepts are then induced from OBE theory, best practices for assessment and ABET Criterion 4, CR4, on continuous improvement. Essential techniques and methods based on this conceptual framework are then used to construct a practical framework consisting of automation tools, modules and digital features of a state of the art, web-based software, EvalTools® [48].
As shown in Figure 1, EvalTools® facilitates seamless implementation of CQI processes based on an authentic OBE model, consisting of 6 comprehensive Deming-Shewhart (1993) PDCA (Plan-Do-Check-Act) quality cycles. Educational institutions following the OBE model should ensure all learning activities, assessments, evaluations, feedback, and advising help students to attain the targeted outcomes. International and regional QA agencies and academic advising organizations strongly recommend that educational institutions implement all CQI processes based on learning outcomes. To better understand the scope of this research and the limitations of prevalent CQI systems 'following' outcomes-based approaches, we begin with a brief introduction to some essential elements of OBE which were developed by associates at the High Success Network [1], [2]. The keys to having an outcomes-based system are:
a. Developing a clear set of learning outcomes around which all of the educational system's components can be focused; and
b. Establishing the conditions and opportunities within the educational system that enable and encourage all students to achieve those essential outcomes.
OBE's two key purposes that reflect its ''Success for all students and staff'' philosophy are:
a. Ensuring that all students are equipped with the knowledge, competence, and qualities needed to be successful after they exit the educational system; and
b. Structuring and operating educational systems so that those outcomes can be achieved and maximized for all students.
OBE's 4 power principles are:
a. Clarity of focus: Firstly, this helps educators establish a clear picture of the desired learning outcomes they want and provides students with indications of their expected performance [3], [4]. Secondly, student success on this demonstration becomes the top priority for instructional planning and designing student assessment [3], [4]. Thirdly, the clear picture of the desired learning outcome is the starting point for curriculum, instruction, and assessment planning and implementation, all of which must perfectly align with the targeted outcomes [3], [4]. And fourthly, the instructional process in the classroom begins with the teacher's actions, sharing, explaining, and modeling the outcome from day one and continually thereafter, clearly indicating what is required so that the ''no surprises'' philosophy of OBE can be fully realized. This enables students and teachers to work together as partners toward achieving a visible and clear goal [1]-[3].
b. Expanded Opportunity: requires staff to give students more than one chance to learn important things and to demonstrate that learning. Initially, those who implemented OBE applied this approach to small segments of learning that students could accomplish in relatively short amounts of time. But the definition of outcomes and their demonstration has expanded dramatically over the past two decades, which has forced a rethinking of the entire concept of opportunity and how it is structured and implemented in educational institutions [1]. There are at least five dimensions of opportunities: Time, Methods and Modalities, Operational Principles, Performance Standards, and Curriculum Access and Structuring, which are all significant in expanding students' opportunities for learning and success [1], [2], [5].
c. High Expectations: means increasing the level of challenge to which students are exposed and raising the standard of acceptable performance they must reach to be called ''finished'' or ''successful''.
OBE systems have applied this principle to three distinct aspects of academic practice: standards, success quotas, and curriculum access. First, most OBE systems have raised the standard of what they will accept as completed or passing work. This is done, of course, with the clarity of focus, expanded opportunity, and design down principles [1], [2], [4]. As a result, students are held to a higher minimum standard than ever before. Second, most OBE systems have changed their thinking about how many students can or should be successful. They have abandoned bell-curve or quota grading systems in favor of criterion-based systems, and this change of perspectives and practices reinforces the previous strategy [1], [2], [4]. Third, realizing most students will rise only to the level of challenge they are afforded, many OBE systems have eliminated low-level courses, programs, or learning groups from the curriculum [1], [2], [4], [5].
d. Design Down: means staff begin their curriculum and instructional planning where they want students to ultimately end up and build back from there. This challenging but powerful process becomes clear when we think of outcomes as falling into three broad categories: culminating, enabling, and discrete. Culminating outcomes define what the system wants all students to be able to do when their official learning experiences are complete [3], [4]. In fully developed OBE systems, the term ''culminating'' is synonymous with exit outcomes. But in less fully developed systems, culminating might apply to what are called program outcomes and course outcomes [3], [4]. Enabling outcomes are the key building blocks on which those culminating outcomes depend. They are truly essential to students' ultimate performance success. Discrete outcomes, however, are curriculum details that are ''nice to know'' but not essential to a student's culminating outcomes [3], [4]. The design down process is governed by the ''Golden Rules''. At its core, the process requires staff to start at the end of a set of significant learning experiences - its culminating point - and determine which critical learning components and building blocks of learning (enabling outcomes) need to be established so that students successfully arrive there. The term ''mapping back'' is often used to describe the first golden rule. The second rule states that staff must be willing to replace or eliminate parts of their existing programs that are not true enabling outcomes [3], [4]. Therefore, the challenges in a design down process are both technical - determining the enabling outcomes that truly underlie a culminating outcome - and emotional - having staff be willing to eliminate familiar, favorite, but unnecessary, curriculum details [2]-[5].
From a future-focused, transformational perspective, the four defining principles of OBE are restated as [3]:
a. Clarity of Focus on future role-performance abilities of significance.
b. Continuous Opportunities to engage in and develop role-performance abilities.
c. High Engagement in authentic contexts that advance performance abilities.
d. Bring role-performance learning and engagement down to young learners too.
In summary, all components of educational systems that implement an OBE model should focus on aiding all students to successfully attain the targeted outcomes and achieve the intended learning aimed at by international standards of engineering education and curricula [11], [65].
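To make the ''design down / map back'' idea concrete, the check below is a minimal Python sketch; the outcome codes and mappings are hypothetical and do not represent the Faculty's actual tooling. Given a map from culminating outcomes to the enabling outcomes that build toward them, it flags any culminating outcome that nothing in the curriculum currently enables.

```python
# Hypothetical design-down map: culminating outcomes (e.g. program-level SOs)
# mapped to the enabling outcomes (e.g. COs/PIs) that build toward them.
design_down_map = {
    "SO1": ["CO_1.2", "CO_3.1"],
    "SO2": ["CO_2.4"],
    "SO3": [],            # no enabling outcome yet -> a design-down gap
}

def unsupported_outcomes(mapping: dict[str, list[str]]) -> list[str]:
    """'Map back' check: return culminating outcomes with no enabling outcomes."""
    return [so for so, enablers in mapping.items() if not enablers]

print(unsupported_outcomes(design_down_map))  # ['SO3']
```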
Based on authentic OBE theory, best practices for assessment and ABET accreditation requirements, several concepts were formulated to aid in the development of models, tools, techniques, methods and processes that act as essential guidelines for employing practical frameworks to implement the IQMS at the Faculty of Engineering. The following sections elaborate on conceptual frameworks dealing with selecting learning models; defining goals, objectives, outcomes and performance indicators; developing rubrics; curriculum design; course delivery; assessment and evaluation; and CQI efforts. An important observation made by the Faculty of Engineering is that Bloom's 3 learning domains present an easier classification of specific PIs for realistic outcomes assessment compared with other models that categorize learning domains as knowledge, cognitive, interpersonal, communication, IT, numerical and/or psychomotor skills [13]. In addition, categories of learning domains which seem very relevant for engineering industry and career-related requirements may not be easy to implement practically when it comes to classification, measurement of PIs, and realistic final results for evaluating CQI. A hypothetical Learning Domains Wheel, as shown in Figure 2, was developed by the Faculty of Engineering to analyze the popular learning domains models available, including Bloom's, with a perspective of realistic measurement of outcomes based on a valid PIs classification that does not result in a vague indicator mechanism for CQI in engineering education [24]. Learning domains categories mentioned in this paper specifically refer to broad categories with well-defined learning levels selected for the classification of specific PIs [24]. The Learning Domains Wheel was implemented with Venn diagrams to represent details of the relationship of popular learning domains categories, interpersonal skills, and the types of knowledge [24]. A detailed analysis of the coverage of required engineering knowledge and skills sets for popular learning models, including the NCAAA 5 domains model [13], presented valid and logical arguments based on issues related to redundancy in selecting domains for PIs classification. The cognitive domain involves acquiring factual and conceptual knowledge, dealing with remembering facts and understanding core concepts. Procedural and metacognitive knowledge focus on problem-solving, which includes problem identification, critical thinking and metacognitive reflection. Remembering facts, understanding concepts and problem solving are essential, core and universal cognitive skills that would apply to all learning domains [7], [75]. Problem identification, definition, critical thinking and metacognitive reflection are some of the main elements of problem-solving skills. The main elements of problem-solving skills apply to all levels of learning for the three domains. Activities related to any learning domain require operational levels of four kinds of knowledge: factual, conceptual, procedural and metacognitive [75], proportional to the expected degree of proficiency of skills for effective execution of tasks. For example, successfully completing psychomotor tasks for solving problems involves acquiring specialized factual, conceptual, procedural and metacognitive knowledge of various physical processes with acceptable levels of proficiency.
Similarly, an affective learning domain activity, such as implementing a code of professional ethics, involves acquiring factual, conceptual, procedural and metacognitive knowledge related to industry standards, application processes, levels of personal responsibility and impact on stakeholders. Hence, the psychomotor and affective domain skills overlap with the cognitive domain for the necessary factual, conceptual, procedural and metacognitive areas of knowledge [24]. Learning domains categories such as interpersonal, IT, knowledge, cognitive, communication, numerical skills etc. exhibit significant areas of overlap, as shown in the Learning Domains Wheel in Figure 2. A high-level grasp of the relationship of these categories informs the process of selecting learning domain categories. For example, interpersonal skills, as shown in Figure 2, form too broad a category, thereby presenting serious problems in PIs classification and realistic outcomes measurement when grouped with other skills sets such as learning domains categories [24]. Numerical skills are used for decision-making activities in the affective domain and also for the effective execution of psychomotor activities in physical processes. Numerical skills are an absolute subset of cognitive skills for any engineering discipline. IT skills cover some areas of the psychomotor (connection, assembly, measurement, etc.), affective (safety, security, etc.) and cognitive (knowledge of regional standards, procedural formats, etc.) domains. Leadership and management skills require effective communication and teamwork [24]. This large overlap of skills within multiple learning domains presents a serious dilemma to engineering programs in the PIs classification and measurement process. A difficult choice must be made whether to select the most appropriate learning domain category and discard the others or repeat mapping similar PIs to multiple learning domain categories for each classification [24]. Defining the learning levels for the overlapping categories to precisely classify PIs would also be challenging [24]. Finally, learning domain categories with significant areas of overlap would result in the repeated measurement of common PIs in multiple domains and the accumulation of too many types of PIs in any single learning domain category, thus obscuring specific measured information. Therefore, for practical reasons the categories of learning domains have to be meticulously selected with a primary goal of implementing a viable PIs classification process to achieve realistic outcomes measurement for program evaluation [24]. Crucial guidelines were logically derived from the Learning Domains Wheel for the selection of the learning domains categories and are listed as follows [24]:
1. Very broad learning domains categories consist of many skill sets that will present difficulty in the classification of PIs when grouped with other categories and will result in the redundancy of outcomes data; for example, interpersonal skills grouped with IT, communication or psychomotor, etc.
2. Avoid selection of any two skills sets as learning domains categories when one is an absolute subset of another. Just select either the most relevant one or the one which is a whole set. For example, select cognitive or numeric skills, but not both; if both are required, select cognitive as a category since it is a whole set. Numeric skills, its subset, can be classified as a cognitive skill.
3. If selecting a certain skill set that is a whole set as a learning domains category, then it should not contain any other skills sets which are required to be used as learning domains categories; e.g., do not select affective as a learning domains category (since it is a whole set) if you also plan on selecting teamwork skills as a category.
4. A learning domain category could contain skills sets which will not be utilized for PIs classification; e.g., an affective learning domain category containing leadership, teamwork and professional ethics skills sets; leadership, teamwork and professional ethics will NOT be learning domain categories but will be classified as affective domain skill sets.
Bloom's 3 domains, cognitive, affective and psychomotor, are not absolute subsets of one another. They contain skill sets as prescribed by the 11 or 7 ABET EAC SOs, which are not learning domains categories. Therefore, Bloom's 3 learning domains satisfy the selection guidelines derived from the Learning Domains Wheel and facilitate a relatively easier classification process for specific PIs [24]. Calculation of term-wide weighted average values for ABET SOs using this classification of specific PIs resulted in realistic outcomes data, since most of the PIs were uniquely mapped to each of the 3 domains with minimal overlap and redundancy [24]. A ''design down'' [2]-[5] mapping model was developed, as shown in Figure 3, exhibiting authentic OBE design-down flow from goals, PEOs, SOs, course objectives and COs to PIs. This figure illustrates trends in levels of breadth, depth, specificity and details of technical language related to the development and measurement of the various components of a typical OBE ''design down'' process [2]-[5]. Goals and objectives are futuristic in tense and use generic language for broad application. The term 'w/o' (without) in the figure highlights the essential characteristics of goals and objectives. Goals and objectives do not contain operational action verbs, field-specific nominal subject content, or performance scales. Student and course outcomes do not contain performance scales. Performance scales should be implemented with the required descriptors in rubrics [68]. PIs should be specific to collect precise learning outcomes information related to various course topics and phases of a curriculum while addressing various levels of proficiency of a measured skill [24], [42]. Adelman's thorough work strengthens our argument that the required language of learning outcomes for cognitive and psychomotor learning activities should be specific [16]. He assertively states that verbs describing a cognitive or psychomotor operation act on something, i.e. they have a specific nominal context. The nominal context can be discipline/field-specific, e.g. error analysis in chemistry; an art exhibit in 2-D with 3 media. Field-specific statements are endemic to learning outcome statements in Tuning projects. Finally, without a specific nominal context you do not have a learning outcome statement [16].
3) BLOOM'S 3 DOMAINS TAXONOMIC LEARNING MODEL AND 3-SKILLS GROUPING METHODOLOGY; IDEAL LEARNING DISTRIBUTION
Figure 4 shows the design flow for the creation of holistic learning outcomes and their PIs for all courses corresponding to Introductory, Reinforced and Mastery levels spanning the curriculum.
The Faculty of Engineering programs studied past research [24], [41], which grouped Bloom's learning levels in each domain based on their relation to the various teaching and learning strategies. With some adjustments, a new 3-Level Skills Grouping Methodology [24], [41], as shown in Table 2, was developed for each learning domain with a focus on grouping activities which are closely associated with a similar degree of skills complexity. Ideally, all courses should measure the elementary, intermediate and advanced level skills with their COs, specific PIs and associated assessments. However, Introductory level courses should measure a greater proportion of the elementary level skills with their COs, PIs and assessments. On the other hand, Mastery level courses should measure more of the advanced, but fewer intermediate and elementary level skills [24], [35], [41]. Figure 5 indicates an ideal learning level distribution of COs and PIs for the Introductory, Reinforced and Mastery level courses. The measurement of outcomes and PIs designed following such an ideal distribution results in a comprehensive database of learning outcome information, which facilitates a thorough analysis of each phase of the learning process and enables a comparatively easier mechanism for early detection of student performance failures at any stage of a student's education [24], [35], [41]. The OBE model was chosen due to the many benefits discussed earlier and to fulfill regional and ABET accreditation standards. ABET accreditation criteria CR2: PEOs; CR3: SOs; and CR4: Continuous Improvement [11] have been implemented in the assessment model, which require that programs make decisions using assessment data collected from students and other program constituencies, thus ensuring a quality program improvement process. This also requires the development of quantitative/qualitative measures to ensure students have satisfied the COs, which are measured using a set of specific PIs/assessments and, consequently, the program level ABET SOs [11], [35], [36], [40]-[42], [46], [49], [67], [70]. Figure 6 shows the outcomes assessment model adopted by the Faculty of Engineering. The assessment model involves activities such as a comprehensive review of the PEOs, SOs, PIs/assessments and COs leading to further improvement in the program. All activities in the various phases of the CQI process actively involve faculty members [24], [35], [42], [67]. Cross (2005) observed: ''I believe that we should be giving more attention to small-scale assessments conducted continuously in college classrooms by discipline-based teachers to determine what students are learning in that class. . . The advantage of thinking small in assessment is that the classroom is the scene of the action in education. If the ultimate purpose of assessment is to improve teaching and learning, then the results of a successful assessment must eventually bear directly on the actions of teachers in their classrooms. This means that the feedback from any assessment must reach classroom teachers and be perceived by them as relevant to the way they do their jobs. One way to do that, albeit not the only way, is to start in the classroom collecting assessment data that teachers consider relevant'' [76]. Due to accreditation requirements for assessment and evaluation, the majority of programs have planned assessments and satisfaction ratings on a macro level.
These are generally referred to as outcomes assessment measures [77] and involve using standardized tests, focus groups, independent raters, and vague, generic rubrics. However, these plans do not adequately assess student learning goals specific to the university's program, nor do they provide information that would help instructors improve student learning in their courses. On the other hand, reinforcing Cross's (2005) opinions [76], well-planned course level assessments can provide better opportunities for collection of SOs data for accreditation evaluations. Course embedded assessments are also referred to as ''classroom-based'' assessments. Course embedded assessment is the process of using artifacts generated through routine classroom activities to assess the achievement of SOs. Teaching materials and routine classroom assignments are designed to align with COs and corresponding PIs. Ammons and Mills (2005) clearly state the benefits of alignment of embedded assessment to instructors: ''Course-embedded assessment may have strong appeal to faculty who want to engage in a systematic way of reflecting on the relationship between teaching and learning'' [78]. Embedded assessments build on the daily work (assignments, exams, course projects, reports, etc.) of students and faculty members. These assessments help avoid the use of external independent raters that are usually employed for rescoring past course portfolios for accreditation purposes. According to Ammons and Mills (2005), the major benefit of course embedded assessment is that ''the instruments can be derived from assignments already planned as part of the course, data collection time can be reduced'' [78]. Gerretson and Golson (2004) stated that the advantage of assessment at the classroom level is that it ''uses instructor grading to answer questions about students learning outcomes in a nonintrusive, systematic manner'' [79]. A composite advantage of course embedded assessments, in regards to the fulfillment of accreditation requirements, is that they can be used at the course level to help instructors determine attainment of COs, and can be used at the program level to assist in measuring to what degree the program level SOs are being met. Embedded assessment is not just of interest to the instructor teaching the course, but also to other faculty members in the program whose courses build on the knowledge and skills learned in the course [78]. The basis of the embedded assessment model in FCAR is the EAMU performance vector [46], [80], [81]. The EAMU performance vector [81], [82] counts the number of students that passed the course whose proficiency for that outcome was rated Excellent, Adequate, Minimal, or Unsatisfactory as defined by: Excellent: scores >= 90%; Adequate: scores >= 75% and < 90%; Minimal: scores >= 60% and < 75%; and Unsatisfactory: scores < 60%. Program faculty report failing COs, SOs, PIs, comments on student indirect assessments and other general issues of concern in the respective course reflections section of the FCAR. Based upon these course reflections, new action items are generated by faculty. Old action items are carried over into the FCAR for the same course if offered again. Modifications and proposals to a course are made with consideration of the status of the old action items [46], [80], [81].
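As an illustration of how these EAMU counts can be produced automatically from course-embedded assessment scores, the following is a minimal Python sketch using the thresholds stated above; the function names and sample scores are hypothetical and are not drawn from the EvalTools or FCAR implementations.

```python
from collections import Counter

def eamu_category(score: float) -> str:
    """Map a percentage score to an EAMU proficiency category using the stated thresholds."""
    if score >= 90:
        return "E"  # Excellent
    if score >= 75:
        return "A"  # Adequate
    if score >= 60:
        return "M"  # Minimal
    return "U"      # Unsatisfactory

def eamu_vector(scores: list[float]) -> dict[str, int]:
    """Count how many students fall into each EAMU category for one outcome."""
    counts = Counter(eamu_category(s) for s in scores)
    return {cat: counts.get(cat, 0) for cat in "EAMU"}

# Example: per-student percentage scores on the assessments mapped to one CO/PI.
print(eamu_vector([95, 88, 72, 55, 91, 78]))  # {'E': 2, 'A': 2, 'M': 1, 'U': 1}
```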
Combining Spady's (1992, 1994a, b) fundamental guidelines related to the language of outcomes [3]-[5], key concepts from Adelman's work (2015) on verbs and nominal content [16], and some essential details on the hierarchical structure of outcomes from Mager's work (1962) [83] led to a consistent standard for learning outcome statements that were accurately aligned to the course delivery using a structured format for COs and specific PIs. Essential principles for learning outcome statements are summarized below:
1. Intended outcomes must be measurable.
2. The language of outcomes should describe what learners do, using operational action verbs.
3. The conditions of learning activities should be described by nominal subject content.
4. The level of acceptable performance must be clearly indicated (PIs).
5. Multiple statements can be used for each learning outcome (PIs).
These essential principles for learning outcome statements help develop detailed design rules for COs and PIs. This enables holistic course delivery that is tightly aligned to outcomes, with achievement of an ideal learning distribution in all 3 domains of learning and sequential coverage of all major topics [35], [42], [64], [67].
1. The PIs should be approximately aligned to the operational action verbs and nominal subject content in COs.
2. The PIs should be at a similar skill level as the corresponding activity in the CO.
3. The PIs should align with the complexity and methods used in assessments planned to measure the corresponding learning activities mentioned in the CO.
4. The PIs should be more specific than COs and indicate names of techniques, standards, theorems, technology, methodology etc.
5. The PIs should provide major steps for analyzing, solving, evaluating, classifying etc. so they can be utilized to develop hybrid rubrics.
6. Several PIs should be used to assess multiple learning activities relating to multiple domains and the 3-level skills.
Figure 7 shows an example of the detailed COs design methodology for an EE program's course. The design of COs and their PIs was meticulously completed by using appropriate action verbs and subject content, thus rendering the COs, their associated PIs, and assessments at a specific skill level: elementary, intermediate or advanced. Figure 8 shows an example from a ME course (ABET SOs 'a-k' example, applicable to SOs '1-7'). In this example, CO_7: Calculate and measure velocity and flow rate of fluid dynamics problems using Bernoulli equations; and its associated specific PI_11_39: Analyze the friction effects in viscous fluid flow in a circular pipe; calculate the Reynolds number to classify as laminar or turbulent flow; obtain the friction factor by extracting from Moody's charts (turbulent flow) or by using analytical equations (laminar flow); calculate the major and minor pressure losses for laminar and turbulent flows using pressure drop equations; measured by assessment Final Exam Q3, are of similar complexity and at the same level of learning. The corresponding category of learning is reinforced-cognitive-analyzing. Therefore, COs would be measured by PIs and assessments strictly following the 3-Level Skills Grouping Methodology [24], [41].
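The alignment rules above lend themselves to a simple machine-checkable representation. The sketch below is illustrative only; the records, field names and skill-level labels are hypothetical and do not reflect the EvalTools data model. Each PI carries a learning domain, a skill level and the assessment that measures it, and a helper flags PIs whose skill level drifts from their parent CO's level (design rule 2).

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceIndicator:
    code: str          # e.g. "PI_11_39" (taken from the example above)
    domain: str        # "cognitive" | "psychomotor" | "affective"
    skill_level: str   # "elementary" | "intermediate" | "advanced"
    assessment: str    # assessment that measures this PI, e.g. "Final Exam Q3"

@dataclass
class CourseOutcome:
    code: str
    skill_level: str
    indicators: list[PerformanceIndicator] = field(default_factory=list)

    def misaligned_pis(self) -> list[str]:
        """Return PI codes whose skill level differs from the parent CO's level."""
        return [pi.code for pi in self.indicators
                if pi.skill_level != self.skill_level]

# Hypothetical encoding of the CO_7 / PI_11_39 example; the 'advanced' label is illustrative.
co7 = CourseOutcome("CO_7", "advanced", [
    PerformanceIndicator("PI_11_39", "cognitive", "advanced", "Final Exam Q3"),
])
print(co7.misaligned_pis())  # [] -> the PI sits at the same skill level as its CO
```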
The hybrid rubric is a combination of the holistic and analytic rubric, developed to address the issues related to (a) validity: the precision and accuracy of an assessment's alignment with outcomes and PIs; and (b) inter/intra-rater reliability: the detail and specificity of acceptable student performances; when dealing with the assessment of complex and very specialized engineering activities. The hybrid rubric is an analytic rubric embedded with a holistic rubric to cater to the assessment of several descriptors that represent all the required major steps of the specific student learning activity for each PI/dimension listed [42]. The hybrid rubric's advantage is reinforced by the finding of an exhaustive empirical study that reviewed 75 studies on rubrics, summarized their benefits, and concluded that the greatest benefit comes from rubrics that are analytic, topic-specific, and complemented with exemplars and/or rater training [11]. Figure 9 shows an ABET SO 'e' specific PI dealing with problem-solving, ''Simplify a given algebraic Boolean expression by applying the k-map and express in POS form'', and its hybrid rubric. The hybrid rubric also contains a column to indicate the percentage of total score allocation for each descriptor (major step of learning activity) corresponding to a certain PI [42]. The scales implemented are obtained from Estell's FCAR [80]: the E, A, M and U performance vectors [46], [80], [81] that stand for the Excellent: (100-90)%, Adequate: (89-75)%, Minimal: (74-60)% and Unsatisfactory: (0-59)% categories respectively. Spady's four power principles of authentic OBE [2]-[5] are applied here as guidelines for the development and implementation of specific PIs and hybrid rubrics [42]:
1. Clarity of focus: Subject specialists within a program form sub-groups to select appropriate course content, topics, learning activities and their skills/complexity levels based on student standards for the development of specific PIs and their hybrid rubrics. The language of specific PIs and hybrid rubrics should have sufficient transparency in meaning to promote easy faculty comprehension and application, resulting in perfect implementation of scientific constructive alignment and use of the ''unique assessments'' philosophy [22], [24], [35], [38], [48], [49], [50], [62], [63], [67], [69], where a single assessment does not map to more than one specific PI. The language of the specific PIs and descriptors should have an approximate correspondence with student learning activities, so that both students and faculty can clearly understand the various scales of performance expectations [42].
2. High expectations: The Excellent scale 'E' of the hybrid rubric should clearly identify the required steps for excellent performance in using a specific major method, say 'Mi', for performing a certain task. A major method would be a complex engineering activity involving several unique steps for completing a specific task. There should be only one specific hybrid rubric designed to assess one major method or technique applied to complete a particular task. Any alternative major methods, say 'M1, M2, ..., Mn', that complete the same task, say 'T', and are deemed necessary curricular content by the instructor, should be assessed independently, with rubrics of their own. This would eradicate the possibility of producing ''excellent'' performing engineering graduates who have partial knowledge of necessary curricular content or lack required engineering skills [42].
3. Expanded opportunity: Use hybrid rubrics and their descriptors to rate assessments consistently. Give students prior notice of what is expected by rehearsing example problems using the developed hybrid rubrics. Provide feedback on students' graded work, clearly highlighting performance issues. Use criterion-based standards and provide opportunities to improve based on minimally required expectations [42]. Employ weighted averaging to scientifically aggregate combinations of various types of assessments and student performances [20], [24], [36], [40], [41], [46], [49], [63], [67]. Strictly avoid using pure averaging to conduct a quantitative evaluation of outcomes assessments [5].
4. Design down: Develop PIs and hybrid rubrics in close alignment with the institutional mission, PEOs, SOs and COs. To achieve this, mission statements and PEOs should be designed scientifically, avoiding vague and redundant language. Learning outcome and PI information should be used to implement scientific constructive alignment, developing and aligning assessments with their teaching/learning strategies, scoring, evaluation, feedback and CQI efforts [42].
The hybrid rubrics support and facilitate instruction and the intelligent design of outcomes assessments. An important point to note is that, based on the type of student learning activity, the dimensions of a hybrid rubric can consist of interdependent, sequential steps, such as steps 1, 2, 3 and 4; or independent, non-sequential components, such as semantics of the English language, structure of a system, theoretical/mathematical model, operational information, neat sketches, etc. The dimensions of rubrics can also be a combination of these two types of information. The detailed specific/generic PIs model adopted by the Faculty of Engineering enables the development of hybrid rubrics whose dimensions provide a maximum spread of breadth and depth of a course topic or student learning activity. The weightage distribution of the various steps or components of the rubrics conveniently supports the development and implementation of grading for assessments targeting various knowledge and skill levels. The comprehensive breadth and depth of content covered in the dimensions of hybrid rubrics enables instruction and provides detailed guidance to students in various learning activities related to problem-solving, design, experimentation, teamwork, report writing, presentations, etc. Faculty members are not bound to apply the entire content of the developed hybrid rubrics to the design of all assessments in a course. They can flexibly extract necessary content from the comprehensive rubrics to design assessments that target measurement of the required skills and/or knowledge corresponding to specific levels of learning in a course. Instructors can select specific dimensions, or portions of multiple dimensions, of rubrics and apply their corresponding grading distribution to the design of an assessment. Consider the example shown in Figure 10: if a faculty member would like to use steps 1, 2 and 3 of a comprehensive rubric that has 4 interdependent, sequential steps sharing 10, 20, 35 and 35% of the total weightage respectively, then the designed assessment can contain three parts corresponding to the required steps 1, 2 and 3, with parts 1, 2 and 3 assigned a grading distribution of 15, 30 and 55%, respectively.
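As a concrete illustration of the Figure 10 renormalization described above, the short Python sketch below recomputes the grading distribution for a selected subset of rubric steps. The function name and rounding are illustrative assumptions rather than the EvalTools implementation, and the exact split quoted in the text (15, 30 and 55%) appears to include some additional rounding by the authors.

```python
# Hypothetical helper illustrating the Figure 10 example: a rubric with four
# sequential steps weighted 10, 20, 35 and 35% of the total; an assessment
# built from steps 1-3 renormalizes the selected weights to sum to 100%.

def renormalize_selected_steps(step_weights, selected):
    """Return the grading distribution (%) for the selected rubric steps.

    step_weights : dict mapping step number -> % of total rubric weightage
    selected     : iterable of step numbers used in the assessment
    """
    chosen = {s: step_weights[s] for s in selected}
    total = sum(chosen.values())
    return {s: round(100.0 * w / total, 1) for s, w in chosen.items()}

if __name__ == "__main__":
    rubric = {1: 10, 2: 20, 3: 35, 4: 35}
    print(renormalize_selected_steps(rubric, [1, 2, 3]))
    # -> {1: 15.4, 2: 30.8, 3: 53.8}, i.e. approximately the 15/30/55 split quoted above
```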
The CE, EE and ME programs initiated the development and implementation of hybrid rubrics in 2017, targeting major learning activities in fundamental engineering courses. According to the assessment plan, by the end of 2019 hybrid rubrics covered the major engineering knowledge areas and skill sets related to most of the core engineering courses. Implementation of hybrid rubrics in instruction and assessment design will be an ongoing parallel effort involving intense tuning and continuous improvement. Once the implementation of rubrics achieves an acceptable standard for core engineering courses, with significant benefits to instruction and assessment design, instructors can then focus on developing and implementing hybrid rubrics for elective courses. Table 3 shows an ABET SO 'k' (techniques, tools and skills) ME program specific PI, ''[abet_PI_11_54] (Psychomotor: Adaptation) Draw the stress transformation for the plane stress condition in mechanical components using Mohr's circle, graphically, using geometrical instruments or AutoCAD; extract the orientation and direction of the state of stress from the given element; compute the stress transformation for the plane stress condition in mechanical components using the Mohr's circle graphical method; extract the information related to principal stresses, orientation and direction of other stresses from the solution; determine the normal and shear stresses for a given orientation'', and its hybrid rubric. The hybrid rubric also contains a column to indicate the percentage of the total score allocated to each descriptor and the EAMU scales. The Office of Quality and Accreditation at the Faculty of Engineering has developed elaborate, step-by-step instructional videos on developing hybrid rubrics for the CE, ME and EE programs [84]-[86]. The ME 224 Mechanics of Materials course final exam Q6 example (term 382, ABET SOs 'a-k') illustrates how course outcomes and their PIs are used to develop hybrid rubrics and apply them in instruction and assessments. This example shows how CO5 and PI_11_54 accurately align to final exam Q6 and how its hybrid rubric (Table 3) is used to develop the grading policy (Table 4). CO6, ''Calculate stress transformation on different planes in a member subjected to normal and shear loading'', utilizes these PIs; the exam instructs students to use the attached graph paper for any calculation on that page, with the graph properly labeled for all values and axes. Final Exam Question 6 has been allocated 20 points, of which 2 points are for extracting information from the element, 3 points for marking the graph properly, 10 points for accurately drawing the point followed by the correct Mohr's circle on the labelled graph, and 5 points for extracting information from the Mohr's circle. Realistic measurements of learning outcomes are achieved by assigning weights (similar to what has been suggested regarding the relevance of weights for learning outcomes measurement by Moon, 2007 [6], and Liu and Chen, 2012 [81]) to different assessments according to a combination of their course grading policy and type.
The first rationale, in order of priority, is the type of assessment: higher weight is assigned to laboratory/design-related assessments compared to purely theoretical assessments, since laboratory/design work covers all three domains of Bloom's taxonomy [27], cognitive, psychomotor and affective (as suggested by Salim, Hussain & Haron, 2013 [62]); similarly, final exams are weighted over quizzes, since a final exam is more comprehensive and better designed than a quiz and students are generally more prepared for a final exam, with many of their skills reaching a higher level of maturity and proficiency by then [41]. The second rationale in priority is to account for the percentage contribution of the given assessment to the final grade, which is derived from the course grading scale [41]. Table 5 shows the 4 course formats developed by the Faculty of Engineering at the Islamic University to calculate the weighting factors for different assessment types. The rationale for developing a standardized assessment template for the Faculty of Engineering programs is: a) to classify four kinds of course formats (refer to Table 5): i. courses without labs and without project/term paper; ii. courses without labs and with project/term paper; iii. courses with labs and without project/term paper; and iv. courses with labs and with project/term paper; b) to classify assessments as initial, culminating, complex, etc., and emphasize major assessment components that are holistic and a true reflection of actual student learning involving the 3 domains of learning: cognitive, psychomotor and affective; and c) to develop appropriate weighting factors for different assessments in the various course formats to accurately reflect the combination of grading scale and level of learning. Faculty first select the course format which matches their course design to obtain the multiplication factors for different assessment types. Then, for a specific assessment type in the given course, its final weighting factor (%) is calculated as the product of its course grading scale percentage and multiplication factor [24], [35]. The formula for calculating the Weighting Factor (WF) for specific assessments is shown in Equation (1). Several software applications, including True Outcomes R, are cited in the literature for outcomes assessment due to the inadequacy of Blackboard R [32]. EvalTools R 6 is chosen as the platform for outcomes assessment instead of Blackboard R since it is the only tool that employs the FCAR and EAMU performance vector methodology [35], [41], [46], [48], [80]. This methodology facilitates the embedded assessments model by using existing curricular scores for outcomes measurement and assists in achieving a high level of automation of the data collection process. Mead and Bennet (2009) have also explicitly stated the practical efficacy of embedded assessments aligned with learning outcomes, thus avoiding unwanted resources spent on creating additional assessments [29]. Unfortunately, the focus of their work is predominantly on cognitive skills. They specifically mention the development of specific performance criteria and associated rubrics to be able to effectively create assessments that are accurately aligned to target student engineering activities in courses. The enhanced FCAR + Specific PIs methodology employed by EvalTools R provides effective CQI with embedded assessment technology and supports holistic coverage of curriculum delivery across all 3 domains and associated learning levels of Bloom's taxonomy.
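The product in Equation (1) can be sketched as follows. The multiplication factors and the 40% grading-scale share used here are hypothetical placeholders (the real values come from Table 5 and each course's grading policy), chosen only so that the result matches the 80% WF later reported for the CE_201 Final Exam Q1.

```python
# Sketch of the assessment-level weighting factor in Equation (1):
# WF(%) = (assessment's % in the course grading scale) x (multiplication factor
# for its type in the selected course format).  The factors below are
# hypothetical placeholders, not the published Table 5 values.

HYPOTHETICAL_FORMAT_1_FACTORS = {   # course format 1: no labs, no project/term paper
    "quiz": 0.25,
    "homework": 0.25,
    "midterm": 1.0,
    "final_exam": 2.0,
}

def weighting_factor(grading_scale_pct, assessment_type, factors):
    """Equation (1): product of grading-scale percentage and type multiplication factor."""
    return grading_scale_pct * factors[assessment_type]

# Example: a final exam assumed to be worth 40% of the course grade under these factors
print(weighting_factor(40, "final_exam", HYPOTHETICAL_FORMAT_1_FACTORS))  # -> 80.0
```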
The EvalTools R 6 FCAR module provides summative and formative options and consists of the following components: course description, COs indirect assessment, grade distribution, COs direct assessment, assignment list, course reflections, old action items, new action items, and SOs and PIs assessment [35], [41], [46], [48], [80]. The web-based software EvalTools R 6 provides electronic integration of the Administrative Assistant System (AAS), Learning Management System (LMS), Outcomes Assessment System (OAS) and Continuous Improvement Management System (CIMS), facilitating faculty involvement for realistic CQI [35], [41], [46], [48], [80]. The CIMS feature electronically integrates Action Items (AIs) generated from program outcomes term reviews with the Faculty of Engineering standing committees' meetings, task lists and overall CQI processes. Figure 12 shows the architecture of EvalTools R 6. EvalTools R 6 uses a database abstraction layer to interface with the database [48]. This design allows an interface to any database; however, MySQL is used as the primary database server. Session and class files are separate from the presentation layers. The architecture shown in Figure 12 has proven adaptive and agile for design changes or add-on modules [48]. EvalTools R is designed for day-to-day classroom activity and for gauging whether learning and teaching delivery is meeting standards. Its outcomes assessment module, in particular, integrates proven best assessment practices, including a rubric-driven assessment model and an FCAR assessment model. The EvalTools R product suite comprises the following independent yet integrated products [48]:
• EvalTools R Survey: an online survey system that handles the end-of-term survey, alumni survey, senior-exit survey, employer survey and other customizable surveys.
• EvalTools R LMS: covers the essential elements for managing day-to-day classroom activities such as lessons, assignments, grade book, etc.
• EvalTools R OAS: an Outcomes Assessment System that is unique in its class and covers best assessment practices. It has a proven 14-year record of aiding universities with ABET accreditation; recently, it has also enabled universities to achieve excellent results with Middle States accreditation.
• EvalTools R CIMS: a Continuous Improvement Management System which electronically integrates the corrective actions generated by outcomes assessment and evaluation with the concerned stakeholders.
The FCAR was initially developed by John K. Estell, Commissioner, Computing Accreditation Commission (CAC), ABET Inc. The FCAR has gradually expanded to include Performance Indicators (PIs) [46] and, later, classification of PIs according to Bloom's three domains and their learning levels [24], [41], [42], [67]. The Performance Vector Table (PVT) is explained later in this section. The PVT facilitates the collection of outcomes data for all students assessed in a class [82]. Results of outcomes assessments are evaluated based on performance criteria which have been published in much-cited research on FCAR evaluations [46], [80]. The FCAR presents a structured format for the presentation of various aspects of course evaluations. The FCAR template utilized in the web-based software EvalTools R 6 provides formative and summative options for real-time and deferred-action-based course evaluations.
Two diagnostic options are available for faculty course evaluation purposes: i) FCAR basic, which displays old ported actions, new actions, reflections and EAMU vector results, without plots, for all assessments corresponding to each CO; and ii) FCAR analytic, which displays detailed histogram plots of student performances in all assessments, with their weighting factors, corresponding to each CO [46], [48], [67]. The overall FCAR structure consists of multiple items indicated in Figures A1-A6 of Appendix A. Figure A7 of Appendix A shows the process flow [42] for the FCAR + specific/generic PIs model classified per Bloom's 3 domains using the 3-Level Skills Grouping Methodology adopted by the EE, ME and CE programs at the Islamic University in Madinah [24], [41], [42]. The FCAR model implements ABET criteria, which require the development of quantitative/qualitative measures to ensure students have satisfied the COs, which are measured using a set of specific or generic PIs/assessments and, consequently, the program-level ABET SOs. Course faculty are directly involved in the teaching and learning process and interact closely with all enrolled students. An ideal CQI cycle would therefore include the course faculty in most levels of its process, to generate and execute action items that can directly target real-time improvement in student performance in ongoing courses. Models that involve program faculty or assessment teams who are not directly involved with the enrolled students will not support real-time CQI, which is an essential element of an authentic OBE system. The noteworthy aspect of the model shown in Figure A7 is that course faculty are involved in most CQI processes, whether at the course or program level [41], [42], [67]. The FCAR methodology applies various performance criteria for outcomes assessment and evaluation of individual students, class groups or programs [24], [41], [42], [46], [80]. Table 6 illustrates the EAMU PI levels, the heuristic rules for the PVT, and the heuristic rules to classify performance vectors in the PVT [24], [35], [41], [45], [46]. An important point to note is that the descriptors for the EAMU scales shown in Table 6 are generic and applied to all PIs, unless instructors opt to apply the topic-specific descriptors of hybrid rubrics for assessing certain PIs of interest. Figure B1 (Appendix B) shows the performance vector for a civil engineering course, CE_201 Statics, with the performances of 16 students for seven Course Outcomes (COs). In this clipped portion of the entire table generated by EvalTools R 6, COs 1 and 2 are assessed for all 16 students in the class using multiple assessments. Aggregation of different types of assessments aligned to a specific learning outcome at the course level is achieved using a scientific weighted averaging scheme. This scheme gives priority to certain types of assessments over others based on their coverage of the learning domains, their percentage of the course grading scale, and the maturity of students' learning at the time of taking the assessments. Details of this weighted averaging approach have been provided by Hussain, Addas and Mak (2016) [24]. CO1, ''Define fundamental concepts of statics, system of units and perform basic unit conversions'', is assessed for every student in the class using multiple relevant assignments, such as quiz 1 (QZ_1) and midterm-1 question 1 (Mid Term-1 Q-1), which are aligned to specific performance indicators and aggregated together using this scientific weighted averaging scheme.
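A minimal sketch of this weighted averaging at the level of a single student and CO is given below, assuming hypothetical scores and weighting factors for the two CE_201 assessments named above; it illustrates the aggregation idea, not the EvalTools computation itself.

```python
# Minimal sketch: aggregate several assessments mapped to one CO for one student
# using a weighted average.  The assessment names follow the CE_201 CO1 example;
# the scores and weighting factors are hypothetical placeholders.

def co_score(assessments):
    """assessments: list of (score_pct, weighting_factor) tuples for one CO."""
    total_wf = sum(wf for _, wf in assessments)
    return sum(score * wf for score, wf in assessments) / total_wf

student_co1 = [
    (85.0, 1.25),   # QZ_1: assumed 85% at an assumed WF of 1.25%
    (72.0, 80.0),   # Mid Term-1 Q-1: assumed 72% at an assumed WF of 80%
]
print(round(co_score(student_co1), 1))  # -> 72.2, dominated by the heavier assessment
```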
The performance vector provides details of each student's performance in the multiple assessments aligned to PIs corresponding to all the COs. Figure 14 summarizes the aggregate scores achieved for all COs and their EAMU vectors for the CE_201 Statics course. EvalTools R 6, employing the FCAR assessment model, facilitates electronic storage of the outcomes and assessment information collected for each student from several courses in every term. The FCARs from each course are further processed into a PVT for each SO.
Assessment-level WFs calculation procedure: The EvalTools R 6 weighting factor calculation procedure for the EAMU performance vector methodology facilitates the allocation of weights to different types of assessments. Assessments such as final exams capture student performances at maximum levels of maturity and therefore deserve a higher weightage compared to other, initial assessments. Similarly, assessments that involve complex learning activities related to multiple learning domains, such as engineering design, also necessitate their dominance in the overall outcomes aggregation. The steps employed by EvalTools R 6 to calculate the EAMU vectors are [24], [35], [46], [81]:
1. Faculty use the EvalTools R 6 Assignment Setup Module to identify an assignment with a set of specific questions, or split an assignment to use a specific question or sub-question for outcomes assessment, with relatively high coverage of a certain PI mapping to a CO and ABET SO (for EAMU calculation).
2. EvalTools R 6 removes students who received DN, F, W or I in a course from the EAMU vector calculations, and enters the scores on the selected assignments and questions for the remaining students.
3. For each student, EvalTools R 6 calculates the weighted average percentage on the assessments, i.e. the set of questions selected by faculty. Weights for assessments are set according to the product of their percentage in the course grading scale and the multiplication factor based on the course format (refer to Table 5), and entered in the weighting factor section of the Assignment Setup Module.
4. EvalTools R 6 uses the average percentage to determine how many students fall into the EAMU categories using the pre-selected EAMU assessment criteria (refer to Table 6).
5. EvalTools R 6 calculates the EAMU average rating by rescaling a weighted average based on a 3-point scale to 5, as shown in Equation (2).
Table 7 shows an example of how EAMU vectors are computed for a specific PI. Assessments HW3 and HW8 are selected for measuring a specific PI, ABET_PI_5_3. These assessments are weighted according to the course grading policy and multiplication factor; say the weights are 5% for HW3 and 7% for HW8. The percent-weighted score is computed by Equation (3) as the weighted average of the two assessment scores. The PI EAMU classification for each student in the class, as indicated in the second column, is obtained from this weighted average percentage. The PI EAMU vector (3, 1, 1, 2) for the entire class, in the last column, is obtained from the count of students belonging to each of the categories, defined as: Excellent: scores >= 90%; Adequate: scores >= 75% and < 90%; Minimal: scores >= 60% and < 75%; and Unsatisfactory: scores < 60%. In this case, there are 3 students with scores in the E category, 1 student in A, 1 student in M, and 2 students in U. Finally, the weighted average of the EAMU vector for this specific PI_5_3 is 2.86, obtained as per Equation (2).
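The class-level EAMU computation described in steps 1-5 and Table 7 can be sketched as follows. The thresholds follow the categories quoted above; since the paper does not reproduce the body of Equation (2), the rescaling shown (a 3-point E/A/M/U scale rescaled to 5) is an assumed reading, but it does reproduce the 2.86 reported for the (3, 1, 1, 2) vector.

```python
# Sketch of the class-level EAMU vector and its weighted average rating.
# Thresholds follow the text (E >= 90, A >= 75, M >= 60, U < 60).  The exact
# form of Equation (2) is not reproduced in the paper; the rescaling below is
# an assumption that reproduces the 2.86 reported for (3, 1, 1, 2) in Table 7.

def eamu_category(score_pct):
    if score_pct >= 90: return "E"
    if score_pct >= 75: return "A"
    if score_pct >= 60: return "M"
    return "U"

def eamu_vector(weighted_scores):
    counts = {"E": 0, "A": 0, "M": 0, "U": 0}
    for s in weighted_scores:
        counts[eamu_category(s)] += 1
    return counts

def eamu_average(counts):
    points = {"E": 3, "A": 2, "M": 1, "U": 0}          # 3-point scale
    n = sum(counts.values())
    raw = sum(points[k] * c for k, c in counts.items()) / n
    return round(raw * 5 / 3, 2)                       # rescale to a 5-point value

scores = [95, 92, 90, 80, 65, 40, 55]   # hypothetical percent-weighted scores for 7 students
vec = eamu_vector(scores)               # -> {'E': 3, 'A': 1, 'M': 1, 'U': 2}, as in Table 7
print(eamu_average(vec))                # -> 2.86
```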
Figure B3 (Appendix B) shows a CE_201 Statics course example of the Final Exam Q1 WF calculation using Equation (1). Course format 1 from Table 5 is applied, since there are no lab or project assessments in this course. The WF for Final Exam Q1 is calculated as 80%. Figure B4 (Appendix B) shows a portion of the analytical FCAR for the CE_201 Statics course. CO6 is aggregated from assessments QZ4 (WF 1.25%), Final Exam Q1 (WF 80%) and Q2 (WF 80%). Final Exam Q1 and Q2 dominate the QZ4 contribution in the overall weighted average computation of CO6. The philosophy behind implementing this Hierarchy-Frequency Weighting-Factor Scheme (HFWFS) for program learning domains evaluations is to consider a combination of two critical factors: (a) to implement a hierarchy of skills by giving prevalence to the assessments that measure skills of the highest order; for example, mastery-advanced level PIs have a higher prevalence than reinforced-advanced level PIs; and (b) to consider the counts of assessments implemented at a certain learning level, since outcomes assessment is directly equivalent to learning. Table 8 shows the calculation of the weighting factors for the various learning levels of the Mastery, Reinforced and Introductory courses, which are then applied to the PIs measured at the given course levels to compute the final program ABET SO 'a' value. The detailed calculation for each column in Table 8 has been reported in past research [41] and is summarized here. The Learning Distribution % (LD), Equation (4), gives the percentage of the total assessments implemented in all courses for each learning level; Table 8 shows that for ABET SO 'a' (SO_1), 6 assessments out of 70 were implemented in reinforced-level courses measuring intermediate-level PIs for the composite of all 3 domains, so assessments at this level accounted for 8.57% of learning. The Progressive Distribution % (PD), Equation (5), is calculated by summing the LD values according to the hierarchy of the skills levels, so that Mastery courses and advanced skill levels are assigned the highest progressive distribution value. The Relative Distribution % (RD), Equation (6), is calculated by dividing PD(i) by LD(m), the non-zero minimum value (learning level 'm') of the set of LD values corresponding to learning levels 1 to i. The Weighting Factors WF(i) for the various measured learning levels, given by Equation (7), are calculated by multiplying LD(i) with RD(i); Table 8 lists these for ABET SO 'a' (SO_1). This section also illustrates how the weighted average value of 3.42 for ABET SO 'f' (SO_6), highlighted in Table 9, is obtained. The values in the rightmost column, WF(i), of Table 9 are the weights for the different learning levels related to ABET SO 'f'. Figure B5 shows the detailed list of specific PIs measured by the CE program in term 382 (Spring 2018) for ABET SO 'f' (SO_6), classified according to Bloom's 3 domains and learning levels. Table 9 shows the weighted average values, the weighting factors WF(i) for the learning levels, and Bloom's learning levels for the specific PIs measured in reinforced- and introductory-level courses for the ABET SO 'f' program evaluation. Figure B6 (Appendix B) shows the WFs defined for the 3-Level Skills (Advanced, Intermediate and Elementary) measured in Mastery, Reinforced and Introductory courses. Since the assessments corresponding to SO 'f' (SO_6) in term 382 (Spring 2018) covered PIs and skills targeting an advanced level of the affective domain (Internalization) in only Mastery and Reinforced courses, the WFs for the other skill levels were defined as zero and thus not taken into account.
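The HFWFS calculation in Equations (4)-(7), together with the final SO aggregation, can be sketched as below. Ordering the learning levels from lowest to highest and giving the mastery/advanced level the largest cumulative (progressive) value is an assumed reading of the description above; the counts are illustrative, except that 6 of 70 assessments reproduces the 8.57% learning distribution quoted from Table 8.

```python
# Sketch of the Hierarchy-Frequency Weighting-Factor Scheme (Equations 4-7)
# and the final SO aggregation.  Levels are assumed ordered low -> high in the
# skills hierarchy, so the mastery/advanced level accumulates the highest
# progressive value.  Counts are illustrative placeholders.

def hfwfs_weighting_factors(counts_by_level):
    """counts_by_level: assessment counts per learning level, ordered low -> high."""
    total = sum(counts_by_level)
    ld = [100.0 * c / total for c in counts_by_level]               # Eq. (4)
    pd = [sum(ld[: i + 1]) for i in range(len(ld))]                 # Eq. (5)
    wf = []
    for i in range(len(ld)):
        nonzero = [v for v in ld[: i + 1] if v > 0]
        rd = pd[i] / min(nonzero) if nonzero else 0.0               # Eq. (6)
        wf.append(ld[i] * rd)                                       # Eq. (7)
    return ld, wf

def so_value(avg_and_wf):
    """Aggregate (EAMU average, WF) pairs into the final SO value."""
    return sum(a * w for a, w in avg_and_wf) / sum(w for _, w in avg_and_wf)

ld, wf = hfwfs_weighting_factors([0, 6, 64])   # e.g. 6 of 70 assessments at one level
print(round(ld[1], 2))                          # -> 8.57 (% learning distribution)
# The same aggregation in so_value() reproduces the SO 'f' figure derived in the
# next paragraph: sum(avg_i * WF_i) / sum(WF_i) = 3162.25 / 925 = 3.42.
```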
As a worked example, consider ABET_PI_6_5, shown in Table 9. It is classified at the Internalizing level of the affective domain per Bloom's learning model, and as an Advanced skill level per the 3-Level Skills Grouping Methodology [24], [41]. This PI is measured in a Mastery course, CE_482, Contracts and Construction Engineering, and has an EAMU value of 3.21. From Figure B6 (Appendix B), the PI weighting factor for the Advanced level is 300. The column labeled Avg * WF displays 963, the product of the EAMU weighted average value 3.21 and the PI weighting factor 300. The final ABET SO 'f' weighted average value is calculated according to Equation (8): the sum of the values in column Avg * WF is 3162.25, which is divided by 925, the sum of the column WF, giving 3.42, as highlighted in yellow in Table 9. The Faculty of Engineering programs have developed a state-of-the-art digital database of specific and generic PIs, classified per Bloom's 3 domains and their learning levels, through a very exhaustive and elaborate ongoing process. The database comprehensively measures engineering activities corresponding to the ABET SOs at various skill levels in the Introductory, Reinforced and Mastery level courses, while fulfilling the Washington Accord engineering graduate attributes, knowledge, and professional competency profiles [68]. Figure C1 (Appendix C) shows a sample specific PI for ABET SO '2', targeting design within realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability considerations. Justification: student learning activity related to design for ABET SO '2' is classified at an internalization learning level of Bloom's affective domain. Since the students' learning activity is driven by design objectives related to the fulfillment of customer requirements and realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability, this PI is classified at an internalization learning level. A second sample PI (Affective: Internalizing values) reads: ''Contribute actively to prepare the team contract in collaboration with team members and faculty; define conditions of the team contract such as general policy, operations, scope of project, team project roles, major assignments, meeting schedule, communications and policy for conflict resolution; elaborate individual and team member strengths and weaknesses with faculty and colleagues related to the definition of team roles; collect and verify CVs appropriately aligned to required roles; submit a signed team contract with finalized assignment of team roles''. Justification: student learning activity related to teamwork for ABET SO '5' is classified at an internalization learning level of Bloom's affective domain. Students actively participate in preparing a team contract that covers general policy, operations, scope of project, team project roles, major assignments, meeting schedule, communications and policy for conflict resolution. Since the efficacy of the team operation depends on the team contract defining professional ethics, this PI is classified at an internalization learning level. EvalTools R provides dual features to instructors, who can either program rubrics tailored to the assessments in their courses or utilize the rubrics aligned to the PIs databases. The hybrid rubrics are used by both students and instructors for estimating the level of performance and verifying score marking for various assessments. The database consists of rubrics related to PIs for the 7 ABET SOs, classified according to the three domains of Bloom's taxonomy.
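Purely as an illustration of how such a PI database might be organized (this is a hypothetical structure, not the EvalTools schema), a single specific PI with its hybrid rubric could be represented as follows; the descriptor weights echo the 2/3/10/5-of-20-point allocation described earlier for ME 224 Final Exam Q6, and the advanced skill-level grouping assigned to the adaptation level is an assumption.

```python
# Hypothetical record structure for one specific PI and its hybrid rubric,
# illustrating the classification described above (ABET SO mapping, Bloom's
# domain and level, 3-Level skill grouping, and weighted rubric descriptors).
# Illustrative sketch only; not the EvalTools database schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class RubricDescriptor:
    step: str            # major step of the learning activity
    weight_pct: float    # share of the total score for this descriptor

@dataclass
class SpecificPI:
    pi_id: str           # e.g. "ABET_PI_11_54"
    statement: str       # full PI text (abbreviated here)
    abet_so: str         # legacy 'a'-'k' or revised SO_1 ... SO_7
    bloom_domain: str    # "cognitive" | "psychomotor" | "affective"
    bloom_level: str     # e.g. "analyzing", "adaptation", "internalizing"
    skill_level: str     # "elementary" | "intermediate" | "advanced"
    rubric: List[RubricDescriptor] = field(default_factory=list)

example = SpecificPI(
    pi_id="ABET_PI_11_54",
    statement="Draw the stress transformation for plane stress using Mohr's circle ...",
    abet_so="k",
    bloom_domain="psychomotor",
    bloom_level="adaptation",
    skill_level="advanced",   # assumed 3-Level grouping for this level
    rubric=[RubricDescriptor("Extract information from the element", 10.0),
            RubricDescriptor("Label the graph properly", 15.0),
            RubricDescriptor("Draw the point and the Mohr's circle", 50.0),
            RubricDescriptor("Extract results from the Mohr's circle", 25.0)],
)
```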
As shown in Figure 13, the Faculty of Engineering Quality and Accreditation (QA) Office, in close coordination with all faculty members of the EE, ME and CE programs, has employed a 3-year rotating plan for the development and implementation of a sophisticated database of hundreds of rubrics. The plan was implemented in term 391 (Fall 2018), wherein 3 rubrics were developed covering major learning activities in every core course, for application in Midterms 1 and 2 and the Final exams. The second iteration, for the development of a set of 3 additional rubrics, began in term 411 (Spring 2020). Table 10 shows the COs, PIs and rubrics implemented in the CE, EE and ME programs for various types of learning activities, such as teamwork, safety regulations, professional ethics, experimentation, capstone design, problem solving, report writing, poster and oral presentations, and metacognition in lifelong learning. Appendix C (PIs and Hybrid Rubrics) provides samples of the PIs and hybrid rubrics databases on the EvalTools R platform. The complete assessment strategy for each measured ABET SO and the estimation of program-level competencies are provided in the term summary of the 3-phase SOs, PIs and learning domains evaluation modules [24], [41], [67]. The term summary contains detailed information on the types of assessments used, their course levels, counts, learning distributions, and the skill levels of the associated performance indicators [24], [41], [67]. Any existing deficiencies in the current assessment models for measured ABET SOs are identified through a detailed 3-phase program term review process conducted by faculty members. Student performances at the course level are measured using PIs and then aggregated at the program level with scientific weighting factors for the corresponding term to contribute to the final SO value [24], [41], [67]. Figure D1 (Appendix D) shows a sample PIs evaluation snapshot with the revised 7 ABET SOs results for the EE program in term 391 (Fall 2018). In this case, SO_7, related to lifelong learning, is examined for any failing PIs. Color-coded results correspond to the performance criteria and heuristic rules mentioned in Table 6. As shown in Figure D2 (Appendix D), the PIs evaluations list the SOs results with contributing courses, which can be accessed using the activate FCAR options. This enables reviewers to audit any potential issues with course reflections and subsequent actions. The PIs and SOs evaluations focus on failing SOs and PIs for analysis and discussions relating to improvement. Figure D3 (Appendix D) shows the PI review comments for PI_4_7 and ABET SO '4' for the EE program in term 391. All the reviewers' comments from the PIs evaluations are rolled up into the SOs evaluation executive summary report. A cut portion of the executive summary report showing ABET SO_5 for the EE program in term 391 (Fall 2018) is shown in Figure 14. An overall summary with the final status of performance for revised ABET SO 5 is shown as Meeting Expectations. A list of reviewers and failing PIs, with any documented corrective actions, is reported in the executive summary. EvalTools R provides the following program term review evaluation reports in printable Word or PDF format [41], [67]: a) SO executive summary; b) detailed SO/PI executive summary; c) SO/PI Performance Vector Table (PVT) summary; and d) course reflections/action items. Cut portions of these reports are presented in Appendix E for better understanding.
In the programs' term review learning domains evaluation, estimated learning distributions in Bloom's 3 domains and their 3 skills levels are compared with target ideal values to generate several CQI activities such as the modification or development of: teaching and learning activities; course outcomes; course topics; and assessments and associated PIs to correct the existing learning distribution deficiencies [41] . The FCAR embedded assessment methodology, Hierarchy Frequency Weighting-Factors Scheme (HFWFS) combined with digital technology, promotes easy development and usage of formative assessments, making each phase of the course, curriculum delivery transparent to all stakeholders and provides precise information of where and why performance weaknesses exist for timely remedial actions. The implemented assessment and evaluation methodology encourages faculty to use relevant information for real-time modifications. The generation of assessments and their mapping to specific PIs for measurement followed up with failure identification, and remedial action is a total faculty affair, thereby creating the ideal situation for CQI in engineering education [24] , [41] , [42] , [67] . Since assessments are equivalent to learning in the OBE model [25] , [89] , the Faculty of Engineering has decided to consider the type of assessments, their frequency of implementation, and the learning levels of measured specific PIs in Bloom's 3 domains for courses and overall program evaluations [41] . At the course level, the types of assessments are classified using the course formats chart to calculate their weighting factors [24] , [35] , [41] , [67] , which are then applied using the setup course portfolio module of EvalTools R 6 [23] . The results can be seen in the FCAR and are used for course evaluations. The program level ABET SO evaluations employ a weighting scheme HFWFS, which considers the frequency of assessments implemented in courses for a given term to measure PIs related to specific learning levels of Bloom's domains [41] . Figure 15 shows the EE program term 382 (Spring, 2018) composite learning domains evaluation data for 11 ABET SOs. For each SO, the counts of total assessments and their aggregate average values are tabulated for each learning level [41] . Figure 15 also shows the overall percentage learning distribution in each learning level for all the 11 ABET SOs. The counts of assessments in various learning levels and their calculated values for all 11 ABET SOs are displayed for each learning domain [41] . The ABET SO 'a' (SO_1) is highlighted for understanding. The bottom portion of Figure 15 shows average values calculated on a 5.0 scale for the cognitive, affective and psychomotor domains, providing a good overall indication of how the program has performed in each learning domain. The pie chart indicates the EE program term 382 outcomes assessment activity percentage distribution in the 3 Bloom's learning domains. Figure 38 shows analytical results of learning distribution for 11 ABET SOs in the individual cognitive, affective and psychomotor-Bloom's domains of learning. A detailed term review report for each program was compiled with information on efforts for improvement targeting comprehensive coverage of each ABET SO to achieve curriculum delivery according to the Ideal Learning Distribution Model. 
Figure D5 (Appendix D) shows the learning domains composite and individual ABET SOs learning domain evaluations review reports for the EE program for a specific term, in which the ABET SOs coverages of the Bloom's 3 domains and their learning levels, categorized as per the 3-Skills Level Methodology, are studied and discussed. In the left column, a report of discussion and reflections for composite learning domains evaluation and learning distribution for individual SOs are indicated, where the overall percentage distribution of learning in the 3 domains, ABET SOs coverages, are analyzed and comments entered with a possible categorization of serious and other types of concerns for corrective action. In the right column, corrective actions for both composite and individual SOs learning domain evaluations are reported for follow up activity related to improvement in teaching/learning strategy, infrastructure, administrative process, or refinement of the current term's SOs assessment plan. The Faculty of Engineering has implemented student advising systems employing the FCAR + specific/generic PIs classified per Bloom's domains and 3-Levels Skills Grouping methodology, and EvalTools R [67] , [90] . A YouTube video also presents some detail of the features of this module and how individual student skills data is collected by using specific PIs, course assessments and integrated by faculty into academic advising [88] . Figure 16 illustrates a list of ABET SOs for previous (a-k) criteria calculated from PIs measurements for a typical student evaluation. The student skills SOs data is realistic and corresponds closely with actual student performances since 16 essential elements of precision assessment have been implemented to ensure outcomes data is as accurate as possible [24] , [41] , [67] , [90] . Figure 16 shows how the ABET SO data is computed for each individual student. The PVT methodology of the automated FCAR facilitates the term wise collection of all (a-k or 7) SOs assessment data for each student. Appropriate WFs are applied to various assessments and skill types to obtain a high level of accuracy in the final outcomes data computations. Advisors and students can review analytical detail regarding student outcome performances and use the diagnostic features of EvalTools R advising module to obtain precise term wise information regarding contributing courses and various types of assessments [67] , [90] . Figure F1 (Appendix F) clearly indicates the type of assessments, EAMU scale and score, WF, term and overall average PI score. b: ACADEMIC ADVISING REPORTING USING EVALTOOLS R 6 Advisors are electronically assigned advisees on the advising module of EvalTools R . Advisors create digital repositories of meetings information with their advisees using EvalTools R . The benefit of this digital system is the ease of access and quick traceability into the history of student meetings and notes. The program coordinator can upload the current degree plans for advisor access. As shown in Figure F2 (Appendix F), advisors upload necessary documentation like academic plans, transcripts or any other pertinent information for advising or career guidance. All notes added by the advisor can be either made visible to students or strictly confidential for access by the advisor and the program coordinator [67] , [90] . Advisors can very easily verify whether students actually access their advising notes so that follow up actions in future meetings are adequately planned. 
The Faculty of Engineering programs are intending to implement advanced features related to the evaluation of professional development and lifelong learning using the advising module provided by EvalTools R . OBE is an educational theory that bases every component of an educational system around essential outcomes. At the conclusion of educational experiences, every student should have achieved essential or culminating outcomes. Classes, learning activities, assessments, evaluations, feedback, and advising should all help students attain the targeted outcomes. The National Academic Advising Association (NACADA) guidelines for academic advising also state that each institution must develop its own set of student learning outcomes and the methods to assess them [91] . NACADA states student learning outcomes for academic advising are ''an articulation of the knowledge and skills expected of students as well as the values they should appreciate as a result of their involvement in the academic advising experience''. These learning outcomes answer the question, ''What do we want students to learn as a result of participating in academic advising?'' [91] . Assessment of student learning should be a part of every advising program. ABET Criterion 1 for accreditation specifically states ''Student performance must be evaluated. Student progress must be monitored to foster success in attaining student outcomes, thereby enabling graduates to attain program educational objectives. Students must be advised regarding curriculum and career matters'' [11] . So individual student skills data or results would be both a fundamental requirement and pivotal base for the entire academic advising process to initiate and continue successfully. In fact, the ongoing and continual assessment of individual student skills would actually be the litmus test for a successful academic advising process. Figure F3 (Appendix F) shows how an elementary form of academic advising based on outcomes was initiated by engineering programs in the term 381 (Fall 2017). Currently, advisors report the failing ABET SOs and have a general discussion of the composite results with students. Eventually, the intention is to gradually expand the scope of this advising process to interact with both students and their course instructors with valuable feedback for enhancement of target skills derived from advising meetings based on outcomes [67] , [90] . The main categories for corrective actions shown in Figure 17 for Faculty of Engineering programs' CQI process flow are program and course level actions. Faculty members perform assessment and evaluation, failure analysis of course outcomes, and write reflections, then generate real-time and deferred course level actions. The sequential content of course topics, WFs, and corresponding PIs data for assessments facilitate the application of formative corrective approaches for real-time mediation of student performance failures. Other actions related to any deficiency in culminating assessments, course topics, lecture outline etc. may necessitate deferred actions that will be applied by the instructor in the next offering of the course. As shown in Figure 17 some course actions are not the scope of the faculty and are therefore elevated in program term reviews as program-level actions to be electronically transferred with appropriate prioritization to concerned administrative committees for closure. 
The program and administrative actions are either elevated or transferred to the concerned committee, or are generated by the committee itself. As shown in Figure G1 (Appendix G), meeting minutes consist of items such as brainstorming, selected agenda items and included or generated AIs. Attendance sheets and any other documentation related to meetings are uploaded in the meeting minutes' folders. Each meeting is assigned a unique electronic ID and is closed once finalized by the chair of the program. AIs are either generated or transferred electronically. AIs are prioritized as Urgent (2 weeks closing time), High (3 weeks closing time), Normal (1 month closing time), Medium (2 months closing time) or Low (3 months closing time). Each AI is assigned a unique electronic ID and consists of a time stamp and assignee and assigner information. The status of an AI is either open or closed, and relevant remarks are entered by the assignee/assigner at the time of a change of status. Figure G2 (Appendix G) shows a sample window of AIs in the tasks list for the ME program committee in the CIMS module of EvalTools R. As shown in Figure G3 (Appendix G), multiple folders have been created for the EE, ME and CE program committees to maintain digital information corresponding to program-level CQI activity in various categories such as ABET, ME Committee, Course Folders, NCAAA, Program Term Reviews, etc. The ME program's ABET folder, as shown in Figure G4 (Appendix G), consists of several subfolders; the objective evidence for CQI activity related to the PDCA quality cycles Q 1 and Q 2 is indicated in parentheses for these subfolders. Figure G5 (Appendix G) shows the data collected as objective evidence over several terms for CQI activity related to PDCA quality cycle Q 2 (course evaluation, feedback, and improvement) for the EE program. Committee meetings folders, as shown in Figure G6 (Appendix G), have been created for CE program meetings based on the month and their electronic ID. Meeting minutes and associated documentation are uploaded to the corresponding folders. NCAAA folders contain any documentation related to the Saudi Arabian national accreditation agency, NCAAA [13]. The reviews folders shown in Figure G7 (Appendix G) consist of evidential documentation related to PDCA quality cycle Q 3 (Program Term Review), such as the executive summary, SO/PI PVT, and course reflections/actions reports for the SOs, PIs and Learning Domain Evaluations related to the CE program term reviews, which are conducted every term. The Faculty of Engineering has studied various options for developing its assessment methodology and systems [19], [20], [22], [28], [29], [32], [34], [36], [37], [43]-[45], [47], [80] to establish actual CQI and not just to fulfill ABET accreditation requirements [11]. The following points summarize the essential elements chosen by the Faculty to implement state-of-the-art assessment systems for achieving realistic CQI in engineering education [24], [41], [90]: 1. OBE assessment model. 2. ABET EAC outcomes assessment model employing PEOs, the 11/7 ABET EAC SOs and PIs to measure COs. 3. Measurement of outcomes information at all course levels of a program curriculum: Introductory, Reinforced and Mastery. 4. The FCAR utilizing the EAMU performance vector methodology. 5. Well-defined performance criteria for course and program levels. 6.
A digital database of specific PIs and their hybrid rubrics, classified per Bloom's revised 3 domains of learning and their associated levels (according to the 3-Level Skills Grouping Methodology). 7. Unique assessment mapping to one specific PI. 8. Scientific constructive alignment for designing assessments, to obtain realistic outcomes data representing information for one specific PI per assessment. 9. Integration of direct, indirect, formative and summative outcomes assessments for course and program evaluations. 10. Calculation of program- and course-level ABET SOs and COs data based upon weights assigned to the type of assessments, PIs and course levels. 11. Program as well as student performance evaluations considering their respective measured ABET SOs and associated PIs as a relevant indicator scheme. 12. The Program Term Review module of EvalTools R consisting of 3 parts: a) Learning Domains Evaluation, b) PIs Evaluation and c) ABET SOs Evaluation. 13. A student academic advising module based on measured learning outcomes data. 14. Electronic integration of the Administrative Assistant System (AAS), the Learning Management System (LMS), the Outcomes Assessment System (OAS) and the Continuous Improvement Management System (CIMS), facilitating faculty involvement for realistic CQI. 15. Electronic integration of AIs generated from program outcomes term reviews with the Faculty of Engineering standing committees' meetings, task lists and overall CQI processes (CIMS feature). 16. Customized web-based software EvalTools R facilitating all of the above. According to the process flow for the FCAR + Generic/Specific PIs model, which implements OBE principles and ABET accreditation criteria, the PEOs, SOs, COs, PIs and hybrid rubrics have to be developed, implemented, assessed, evaluated for deficiencies and improved based on subsequent actions for CQI. Therefore, elaborate CQI processes embedded in quality (Plan, Do, Check and Act) PDCA cycles, as proposed by Deming and Shewhart [65], have been implemented in the CE, EE and ME programs at IU. Table 11 shows some detail regarding the process, participants, and frequency of the assessment and evaluation activity implemented in the various PDCA quality cycles to establish an IQMS for achieving holistic learning. A list of the major assessment and evaluation activities related to the various PDCA quality cycles is provided below for a better understanding of SOs in a course - Q 1 (every term); 4. Review performance criteria and perform any major modification of the program PIs database - Q 4 (3-year cycle); 5. Develop educational strategies and assessments aligned to performance criteria - Q 1 (every term); 6. Develop, implement and review rubrics and assessment methods used to assess performance criteria - Q 1 (every term); 7. Collect and evaluate course-level direct/indirect SOs assessment data, report findings and create actions - Q 1 (real-time throughout the term); 8. Implement course actions - Q 2 (termwise); 9. Evaluate program SOs data, report findings and create actions - Q 3 (termwise); 10. Implement program actions - Q 6 (termwise). Deming championed the work of Walter Shewhart, including statistical process control, operational definitions, and what Deming called the ''Shewhart Cycle'', which has evolved into PDCA for CQI [65].
The four phases of a typical CQI cycle are: 1) PLAN: developing the educational plan; 2) DO: implementing the plan; 3) CHECK: monitoring processes/results, conducting failure analysis, and implementing a plan to identify any variations from required processes or deficiencies in intermediate or final results; and 4) ACT: generating and implementing appropriate corrective actions to remediate the observed deficiencies or mitigate projected failures. The Faculty of Engineering implemented a state-of-the-art IQMS consisting of 6 PDCA quality cycles based on authentic OBE principles, using the web-based digital platform EvalTools R, to achieve holistic engineering education for all students. The PDCA quality cycles are designed to employ digital automation wherever necessary for integrating the various comprehensive quality monitoring, feedback and improvement processes to establish effective CQI. The PDCA cycles aid in the fulfillment of the required ABET accreditation criteria for CQI. Specifically, they establish CQI processes related to the development, implementation, monitoring, feedback and improvement of the programs' PEOs, SOs, COs, PIs and hybrid rubrics. A comprehensive CQI process flow consisting of six PDCA quality cycles is shown in Figure 18, is explained in the following sections and is listed below. Q 1 : COs, PIs and hybrid rubrics development. Figure 19 lists learning activities, course topics and assessments as course inputs. The course inputs provide a fundamental guiding framework for the development of COs, PIs and their hybrid rubrics. Based on evaluation results, faculty members may decide to modify any aspect of the course inputs to remediate identified deficiencies. Upon completion of the first iteration, the CE, EE and ME program committees reviewed the subsequent improvement in the quality of teaching/learning and reported their recommendations for any modifications to the implemented rubrics. In some cases, an additional set of rubrics was developed from a select group of PIs remaining in the database that targeted other major learning activities in core courses, to enhance the overall quality of student learning. The improvement activities in the current 2019-2020 academic cycle involve the application of some modifications to existing rubrics and the development of select additional rubrics. An example of the development of COs, PIs and hybrid rubrics for a mechanical engineering course, ME_323, Theory of Machines, is explained in Section 1 of Appendix J. Table J1 shows a list of COs and their PIs corresponding to the revised ABET SOs (1-7) for course ME_323. The sequential order of the COs and PIs targets major learning activities corresponding to the main course topics, comprehensively covers Bloom's 3 learning domains and their learning levels, achieves an ideal course learning distribution, and fulfills the required standards of engineering knowledge and skills. Samples of the application of rubrics to various learning activities, such as capstone design and experimental work, are also shown in Section 1 of Appendix J. The PDCA quality cycle Q 2 , shown in Figure 20, consists of processes that ensure the proper completion of all course work each term. Firstly, it ensures the course syllabi contain accurate information and are provided to students in the first week of the term; this is followed by a mid-term audit (FCAR midterm checklist) of COs, PIs, teaching/learning strategies, etc., and a final End of Term (EOT) check for the completion of all course assessment, evaluation and feedback-for-improvement processes.
Any deficiency uncovered during any stage of this quality cycle is communicated to the concerned faculty members for correction. The Quality and Accreditation (QA) office works in coordination with the ABET coordinator of the engineering program to effectively manage all activities in this cycle. Once the EOT is approved by the QA office, it is presented to the supervisor of Quality Development (QD) for authentication and subsequent reporting in EvalTools R 6. The final authentication clears the way for the program to proceed to the program term review evaluations. Section 2 of Appendix J shows samples of the syllabi audit, FCAR midterm and EOT checklists. Details of the various administrative committees, with their respective members, are set up electronically employing the EvalTools R 6 CIMS module. Each committee maintains a schedule of action items with details on their assignment and priority level, discussions, brainstorming, creation/closure dates, and status information. Any committee can add new action items or review existing ones for status updates and closure. The advanced features of the CIMS module provide each committee with the functionality to categorize an action item according to the given range of priority levels: low, normal, medium, high or urgent. The action items are sorted electronically according to their priority levels. Transfer or elevate features allow committees to move action items which are beyond their scope or responsibility to another appropriate department or committee within the Faculty of Engineering or the University for completion. Appendix G elaborates on the CIMS by providing relevant screenshots that present the module's essential features. A specific program term review committee reviews the measured ABET SOs and related PIs information, considering this a good indicator scheme, and concludes its report with significant analysis and discussion as to whether a certain ABET SO is Below, Meeting or Exceeding Expectations for the program in a designated term [24], [35], [41], [67]. Section 3 of Appendix J provides samples of the program term review process. The Program Term Review module of EvalTools R 6 consists of three parts: i) Learning Domains Evaluation, ii) PIs Evaluation and iii) ABET SOs Evaluation. The PIs and SOs evaluations are focused on failing SOs and PIs for analysis and discussions relating to improvement. Weighted average values of ABET SOs and PIs, with a scientific color-coding scheme as per the PVT heuristic rules shown in Table 6, indicate failures for investigation. Courses contributing to failing PIs and SOs are examined. The Faculty of Engineering has published elaborate YouTube video presentations that detail the automation of outcomes assessment, showing CIMS features such as the elevation of action items from the FCAR to the task lists of standing committees for actual CQI [87], [88].
D. PDCA QUALITY CYCLE Q 4 : PIs 3-YEAR MULTI-TERM REVIEW
As shown in Figure 22, the Faculty of Engineering programs conduct a PIs multi-term review every 3 years to check the validity of the PIs with regard to technical content, learning level classification, relevancy to industry, and alignment to the program SOs, COs, curriculum and student learning activity. Any recommendations for modification of the PIs database are approved by a program council meeting. Issues related to redundancy, futuristic content or basic inaccuracies were uncovered in the last multi-term PIs review, conducted in term 382 (Spring 2018).
Multiple examples of the major types of modifications to the CE, EE and ME PIs databases, with their justifications, are reported in Table J17 (Appendix J). The Faculty of Engineering programs' assessment model includes a culminating PDCA quality cycle Q 5 , a multi-term program SOs review, which is conducted every three years (see Figure 23, the PDCA quality cycle Q 5 SOs multi-term review process flow for the ME program). This review entails a thorough trend analysis of all program SOs by the program faculty. Almost 6 terms of outcomes data are collected and reviewed for overall improving trends of performance. If more than 80% of the SOs display a positive trend, then the program multi-term SO review results in an Exceeding Expectations decision. If 60% to 80% of the program SOs display an improving trend, then the decision is Meeting Expectations. When more than 60% of the program SOs display a negative trend in overall performance, the multi-term SO review results in a Below Expectations decision. A Below Expectations decision necessitates an examination of the language, content and scope of the failing SOs, besides several other corrective actions. A detailed report of recommendations for improvement, including any modifications to the SOs, is sent to the EAC for review and approval (Sections 5.i, ii, iii, iv and v of Appendix J provide the 5-year multi-term SOs executive summary, performance criteria and trend analysis reports). The Faculty of Engineering programs' multi-term outcomes data is a summary aggregation of thousands of outcomes assessment data points collected over 5 years from the termwise program and course evaluation results. These comprise reflections, actions, discussions and decisions based on a detailed review of information from the FCARs and the COs, PIs and SOs program evaluations. In summary, the [2014-18] multi-term SOs (a-k) trend analyses resulted in a Meeting/Exceeding Expectations decision for the three engineering programs (Section 5.v of Appendix J). The results of these reports have had a strong multi-dimensional impact on the opinions of all stakeholders of the engineering programs (students, alumni, faculty, employers), stimulating their response, involvement and eventual contribution to several types of corrective actions (refer to the EAC committee review meeting, Section 6.iii of Appendix J). These actions have improved multiple aspects of the Faculty's education process at different levels, ranging from teaching/learning strategies, enhancement of direct/indirect assessments, quality of advising, curriculum standards, infrastructure and facilities, and sustainability of CQI processes, to expanded institutional support.
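The decision thresholds stated above for the multi-term SO review can be sketched as a small helper. The bands are taken from the text, which does not specify every case (for example, 50% improving and 40% declining), so the sketch falls back to a review flag there as an assumption.

```python
# Sketch of the multi-term SO review decision rule described above: more than
# 80% of SOs improving -> Exceeding Expectations; 60-80% improving -> Meeting
# Expectations; more than 60% declining -> Below Expectations.  Cases outside
# the stated bands are flagged for review (an assumption, not from the text).

def multiterm_so_decision(improving, declining, total_sos):
    pct_up = 100.0 * improving / total_sos
    pct_down = 100.0 * declining / total_sos
    if pct_up > 80:
        return "Exceeding Expectations"
    if 60 <= pct_up <= 80:
        return "Meeting Expectations"
    if pct_down > 60:
        return "Below Expectations"
    return "Review required"   # band not specified in the text

print(multiterm_so_decision(improving=9, declining=2, total_sos=11))  # -> Exceeding Expectations
```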
The phrases underlined in Figure J34 are those that relate directly to the EE, CE and ME PEOs. Table J29 (Appendix J) shows the PEOs statements for the ME, CE and EE programs. Table J28 (Appendix J) shows the relationship between the PEOs and various parts of the University mission statement. As shown in Figure 26, the constituencies of the CE, ME and EE programs are undergraduate students, alumni, faculty and industry. The influence and inputs of the constituencies on the PEOs review process are described below.
Industry inputs: The EAC represents industry for the engineering programs. The EAC consists of engineers, engineering managers, and business leaders from the local industry, as well as educators from academia in CE, ME and EE program-related disciplines. It is an advisory committee that serves the engineering programs. The primary charter of this group is to:
1. provide advice and counsel on curriculum, faculty-industry interaction, outcomes assessment, and program development;
2. identify technical needs of the regional industry in general and/or individual companies in particular for research, development, and continuing education; and
3. as part of its objectives, promote joint research and development projects and grants.
Student inputs: Undergraduate academic work is assessed in every one of the core courses, and students' comments on the course-exit survey are reviewed each term, providing input for program improvement. Faculty use EvalTools R as their learning management toolset for posting their course materials and assignments. Course outcomes are automatically displayed to students whenever they access their course materials or assignments in EvalTools R. Students are also well informed of the key assignments for each course that are collected as objective evidence in the course portfolio.
Alumni inputs: Alumni input is sought one to three years after graduation from the program to judge whether what they learned from the program allowed them to perform as expected.
Faculty inputs: The faculty, who are at the heart of the assessment process, not only plan the learning process and deliver courses and labs, but also assess effectiveness at the course level at the end of each term. Faculty members are required to write reflections on each course they teach, review and close action items accordingly, and suggest new action items where appropriate. Utilizing EvalTools R, the Program Term Review Committees review course portfolios along with new action items suggested by faculty each term to determine whether the action items are appropriate for the next cycle of course offerings.
As described above, all program constituents are included in the program assessment process and provide feedback on the program. The PEOs assessment process is conducted in an iterative cycle, beginning with the University mission, which in turn influences the departments' PEOs. The departments used the ABET SOs (a-k) as the student outcomes in the first PEO review, which is conducted every 5 years. With these in mind, each course outcome and assessment method is carefully examined for better coordination among courses and set up to reach complete coverage of the student outcomes for achieving the PEOs. ABET recently changed its criteria regarding the assessment of PEOs attainment and currently requires only a review of their relevancy [11], [35].
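Complete coverage of the student outcomes across the curriculum, as noted above, is a precondition for achieving the PEOs. The short Python sketch below illustrates one way such a coverage check could be automated from a course-to-SO curriculum map; the course codes and mappings are hypothetical placeholders, not the programs' actual curriculum data.

from collections import defaultdict

ABET_SOS = set("abcdefghijk")   # legacy SOs (a)-(k) used in the first PEO review cycle

# Hypothetical curriculum map: course -> SOs addressed by its course outcomes.
curriculum_map = {
    "EE 201": {"a", "b"},
    "EE 305": {"b", "c", "e"},
    "EE 421": {"a", "e", "k"},
    # ... remaining courses of the program
}

def so_coverage(curriculum):
    """Return which courses cover each SO and which SOs remain uncovered."""
    coverage = defaultdict(list)
    for course, sos in curriculum.items():
        for so in sos:
            coverage[so].append(course)
    missing = ABET_SOS - set(coverage)
    return coverage, missing

coverage, missing = so_coverage(curriculum_map)
if missing:
    print("SOs without any covering course outcome:", sorted(missing))

In practice such a check would be run over the full curriculum map maintained in the digital platform rather than the three illustrative courses shown here.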
ABET also removed its requirement for employer surveys due to the difficulties programs faced in obtaining alumni employment information. As such, even though the attainment of PEOs and the incorporation of employer surveys are crucial, the programs focus their PEOs assessment process primarily on a review of their relevancy. Since the PEOs address a timeline of three to five years after a student graduates, the PEOs review cycle is conducted once every five years to gauge PEO relevancy to the needs of the program's constituents. Since any corrective actions are based on the results of student outcomes assessment, which involves a different cycle of assessment, it is clear that SOs assessment provides a major input for gauging whether or not the PEOs are eventually met. Figure 27 illustrates the review process of the EE, CE and ME PEOs based on the Sundaram model (2013) [92]; the figure reproduces Sundaram's model. The review process seeks input and insight for gauging the success of achieving the PEOs from two different avenues: the external view of meeting the PEOs as they are intended, and the internal view of meeting the PEOs by providing the necessary skill sets to prepare students. The focus is on addressing the following two questions:
1. How well do the programs prepare students for the intended PEOs?
2. How well are the programs' graduates really doing in the workforce?
For the PEOs assessment process, the data are rolled up and gathered primarily from two sources:
1) the external view of meeting PEOs, based on alumni and employer feedback (not indicated in Sundaram's model, but conveniently accomplished using the EvalTools R remote survey suite shown in Section 6.ii, indirect assessments, Appendix J); and
2) the internal view of meeting PEOs from the SOs attainment process (refer to Tables J34, J35 and J36 of Section 6.ii, Appendix J).
However, for the PEOs evaluation of relevancy, in addition to the inputs from the PEOs assessment process, the crucial question is: how relevant are our PEOs in meeting the needs of the constituents? To address the needs of the constituents, the review process seeks feedback primarily from the EAC and faculty members. The results of PEO attainment provide key direction to the discussions (see Section 6.iii EAC review, Appendix J). Any action items generated from the review process, such as changes to PEOs at this level (Level 1 PEO assessment), may have a substantial impact on the programs' educational practices. Action items will ultimately flow down to Level 2 SOs assessment and then Level 3 COs assessment, as indicated in Figure 27. The assessment of the attainment of SOs is based on data from senior-exit, alumni and employer surveys, and on roll-up data from the embedded course assessment process that uses the FCAR as the main vehicle for assessment (refer to Section 6.ii, Appendix J). The PDCA quality cycle Q 1 : COs, PIs and Rubrics Development, as mentioned in Section VI.A, involves course design based on design rules for the development of COs and PIs (Section IV.C.2) and rubrics aligned to three major learning activities assessed in the midterm and final examinations (Section IV.C.3, Hybrid Rubrics). Figure 13 also indicates the hybrid rubrics development and implementation process for the 2018-19 cycle.
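As a rough illustration of how a hybrid rubric row derived from a specific PI might be represented and scored, the following Python sketch uses hypothetical level names, score weights, field names and an invented PI statement; the programs' actual hybrid rubrics and scoring scales are those defined in Section IV.C.3 and implemented in EvalTools R.

from dataclasses import dataclass
from typing import Dict

@dataclass
class RubricRow:
    # Hypothetical hybrid-rubric row: one specific PI with a descriptor per performance level.
    pi_id: str
    pi_statement: str
    descriptors: Dict[str, str]     # performance level -> observable behavior

# Assumed level weights on a 0-1 scale (illustrative only).
LEVEL_WEIGHTS = {"excellent": 1.0, "adequate": 0.8, "minimal": 0.6, "unsatisfactory": 0.3}

def score_activity(rows, awarded_levels):
    """Aggregate one student's rubric result for a learning activity into a 0-1 score."""
    total = sum(LEVEL_WEIGHTS[awarded_levels[row.pi_id]] for row in rows)
    return total / len(rows)

# Hypothetical usage for a single rubric row.
row = RubricRow("ME-PI-5.1", "Formulates a free-body diagram for a planar system",
                {"excellent": "complete and correctly labeled",
                 "adequate": "minor labeling errors",
                 "minimal": "missing key forces",
                 "unsatisfactory": "incorrect or absent"})
print(score_activity([row], {"ME-PI-5.1": "adequate"}))   # -> 0.8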
Table 12 indicates the average time spent on COs, PIs and hybrid rubrics development activities, based on inputs from the QA office and considering variations in course format and instructor expertise. The Faculty of Engineering EE, ME and CE programs conducted detailed surveys in coordination with the QA office to estimate the additional time spent by instructors on documentation and reporting for implementing online course portfolios using the FCAR + generic/specific PIs automated model (PDCA cycle Q 2 ). Several faculty and lecturer inputs were collected for various courses to determine maximum workloads related to score entry, PIs mapping, and naming, scanning and uploading documentation for Low (L), Medium (M) and High (H) samples of student graded work. Figure H1 (Appendix H) shows an example of such inputs collected for an electrical engineering course, EE 421, Wireless and Mobile Communications. Table 13 shows a quantitative analysis of the time spent by faculty in performing various course level data reporting activities. The data reporting activities common to any assessment/evaluation system, whether automated or manual, are score entry, scanning, naming and uploading student work. Therefore, the data reporting activities taken into account for estimating any additional time spent are those specific to the FCAR + generic/specific PIs model, such as score entry, mapping PIs for split questions, and creating reflections and action items for failing COs and PIs. The finding was that faculty spent an additional 5 to 8 hours per course, for an average class of 15 students, depending on whether the course involved lab and/or project work. As part of continuous improvement efforts to reduce the data collection workload for faculty, the Quality and Accreditation Committee decided in its meeting on October 4th, 2016 that the EE, ME and CE programs' faculty members would scan and upload the Low, Medium and High student work samples as objective evidence. Based on academic freedom, this program-wide decision did not restrict faculty members who wished to scan and upload the assignments of all students for record-keeping or student feedback purposes. Additionally, Islamic University policy limits enrollment to a maximum of 25 students per course section. Additional staff currently on study leave, pursuing higher degrees, are expected to return in the coming academic years. The faculty strength is expected to increase, resulting in a smaller and more widely distributed workload for each instructor. Based on these enrollment limits and favorable current and projected faculty strengths, the researchers anticipate a sustainable CQI process in the near future. The time spent by faculty members on program-level CQI activities was also considered in order to accurately estimate the sustainability of the overall CQI processes for the automated FCAR + generic/specific PIs model using EvalTools R. The program level activities include:
1. program term reviews, which involve SOs, PIs and Learning Domains evaluations (PDCA Cycle Q 3 );
2. the multi-term PIs review conducted every 3 years (PDCA Cycle Q 4 );
3. the multi-term SOs trend analysis conducted every 3-5 years (PDCA Cycle Q 5 ); and
4. the PEOs analysis and review (PDCA Cycle Q 6 ).
Program term review SOs evaluations examine only failing PIs (20-25% of the total PIs assessed in a term), involve all the program faculty, and are comprehensively completed in 3-hour sessions.
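Because the term reviews concentrate on failing PIs and SOs flagged by the PVT, a brief Python sketch of the flagging idea may help. The EAMU band cut-offs, weights and thresholds below are assumptions for illustration only; the actual heuristic rules are those of Table 6 and the EvalTools R PVT implementation.

from collections import Counter

# Assumed EAMU score bands (normalized 0-1); the real cut-offs are defined in EvalTools R.
BANDS = [("E", 0.9), ("A", 0.75), ("M", 0.6)]    # anything lower counts as "U"

def eamu_vector(scores):
    """Count students per Excellent/Adequate/Minimal/Unsatisfactory band for one PI or CO."""
    vec = Counter({"E": 0, "A": 0, "M": 0, "U": 0})
    for s in scores:
        for band, cutoff in BANDS:
            if s >= cutoff:
                vec[band] += 1
                break
        else:
            vec["U"] += 1
    return vec

def flag_for_review(vec, avg_threshold=0.7, u_fraction=0.3):
    """Illustrative heuristic: flag the item if its weighted average is low
    or too many students fall in the unsatisfactory band."""
    weights = {"E": 1.0, "A": 0.8, "M": 0.6, "U": 0.2}
    n = sum(vec.values())
    weighted_avg = sum(weights[b] * c for b, c in vec.items()) / n
    return weighted_avg < avg_threshold or vec["U"] / n > u_fraction

scores = [0.95, 0.82, 0.78, 0.55, 0.40]   # hypothetical class results for one PI
print(flag_for_review(eamu_vector(scores)))   # -> True, so this PI would be investigated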
Multi-term PIs and SOs evaluations are completed every 3-5 years in two or three 2-hour sessions. Table 14 indicates the breakdown of time spent by faculty members on various program level CQI activities. The time spent on both course and program level CQI activity is practically manageable, and the current assessment and automated QA processes have been seamlessly implemented at the Faculty of Engineering programs for the last 6 years (since Fall 2014). Six years of course, program level and administrative committee CQI data are available in a Google cloud-based digital environment [67]. State-of-the-art digital technology and an enhanced FCAR methodology using the web-based software EvalTools R are employed to automate the data collection, assessment, evaluation, feedback and closing-the-loop processes [48]. Specific features or modules of the web-based software EvalTools R promote social distancing during the COVID19 pandemic through remote operations and help achieve a significant reduction in manpower and resource expenditure, with optimizations in multiple CQI tasks as listed below:
1. Embedded assessments technology, coupled with a few mouse clicks for PIs mapping, easily facilitates the collection of assessment information for all the PIs at the course level.
2. PIs assignment to course outcomes is linked directly from the course assessment setup feature and, subsequently, PI data is automatically generated for program SOs evaluations.
3. The PVT feature facilitates the collection of outcomes data for all students, with no additional data collection effort beyond the routine course work required of instructors.
4. The development of hybrid rubrics is supported through direct derivation from the detailed language of the specific PIs statements listed in digital databases.
5. CQI documentation related to student, course and program level assessments, analytical diagnostics, curriculum maps, evaluation reports (FCAR, course action items matrix, SOs/PIs program evaluations executive summary and multi-term trend analysis), and grade books is automatically generated in standard digital formats.
6. Remote course exit, senior exit, employer, alumni and faculty satisfaction surveys are all conducted electronically, and the statistical data are assimilated into the necessary evaluation reports for QA purposes.
7. The CIMS module connects corrective actions generated from the program term reviews' SOs based evaluations to 20 standing administrative committees. The module provides a state-of-the-art electronic repository consisting of thousands of corrective actions and other CQI information. An enormous saving of faculty resources, otherwise spent collecting and reporting CQI information related to committee work, is achieved by instant access to an exhaustive digital repository of administrative committee meetings, minutes, corrective actions tied to outcomes, the status of actions, time stamps, ownership details, etc.
EvalTools R is a state-of-the-art web-based software that employs the FCAR embedded assessment methodology and effectively integrates LMS, AAS, OAS and CIMS systems [48] into one digital platform. EvalTools R provides seamless automation of CQI processes, and its Remote Evaluator Module collects, organizes and reports massive amounts of CQI data for remote accreditation audits [67]. Figure 28 shows the evaluator module site map, indicating menu tabs for program assessment, program evaluation, program committees, survey instruments, course syllabi and student advising.
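To give a feel for how such a site map can be represented, the following Python sketch organizes the evaluator dashboard as a simple mapping from the menu tabs named above to examples of the CQI evidence reachable under them. The grouping of items is an assumption for illustration; the authoritative layout is that of the EvalTools R Remote Evaluator Module itself (Figure 28 and Table 15).

# Hypothetical grouping of evidence under the evaluator dashboard tabs named in Figure 28.
EVALUATOR_SITE_MAP = {
    "program assessment": ["SO/PI assessment plans", "curriculum maps", "PIs and rubrics databases"],
    "program evaluation": ["FCARs", "SOs executive summaries", "multi-term trend analyses"],
    "program committees": ["meeting minutes", "corrective actions and their status"],
    "survey instruments": ["course exit", "senior exit", "alumni", "employer", "faculty satisfaction"],
    "course syllabi": ["approved syllabi", "syllabi audit checklists"],
    "student advising": ["advising notes and records"],
}

def evidence_index(site_map):
    """Flatten the site map into (tab, evidence item) pairs for a quick audit checklist."""
    return [(tab, item) for tab, items in site_map.items() for item in items]

for tab, item in evidence_index(EVALUATOR_SITE_MAP):
    print(f"{tab}: {item}")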
Every major ABET accreditation display data requirement is comprehensively covered, and CQI information is conveniently integrated into a user-friendly dashboard. Table 15 lists all the major tabs on the dashboard, the corresponding CQI information and the coverage of ABET criteria 1 to 8 [11]. Appendix I presents some samples of data and brief explanations of the CQI information related to the various tabs listed in Table 15. Millions of documents of evidential CQI data for the EE, CE and ME programs are available in a cloud-based environment for remote accreditation audits. Various types of CQI data, and essential aspects for achieving their high quality, are derived from the specific requirements of the ABET accreditation criteria related to CR2: PEOs, CR3: Student Outcomes and CR4: Continuous Improvement. Specifically, PEOs review and attainment, SOs assessment methodology and plan, PIs alignment with assessments and the application of rubrics, accurate program and course level evaluations, integration with indirect assessments, implementation of corrective actions, and the ability to achieve real-time and deferred improvements are some of the items extracted from ABET accreditation criteria CR2, CR3 and CR4 for developing requirements for the various types of acceptable CQI data. Table 16 shows a summary of a qualitative comparison of various types of CQI data and key aspects for manual and automated systems. Several types of CQI data, such as the PEOs review, SOs and PIs assessments, course materials, surveys, committee actions, and course and program evaluations, are required display material for accreditation audits. Digital platforms such as EvalTools R offer seamless collection, documentation and reporting of massive amounts of reliable and valid CQI data, thereby making remote audits a convenient and practically feasible affair for both programs and evaluators. The current global pandemic conditions, coupled with the overall benefits of automated CQI systems, present compelling justification for engineering programs and accreditation bodies to shift to digital platforms and to automated collection, reporting and presentation of CQI data for remote accreditation audits. Engineering programs seeking accreditation should therefore make insightful decisions to actively search for appropriate digital solutions that support authentic OBE assessment methodology for implementing automated CQI systems that enable remote accreditation audits. Accreditation agencies, on the other hand, should seriously consider promoting the digitization of CQI data and offer training programs to their evaluators for a smooth migration to remote accreditation audits in the coming years, during the COVID19 pandemic and beyond. In this research, we applied a meta-framework for evaluating the IQMS implemented at the Faculty of Engineering programs for achieving ABET accreditation. The meta-framework is based on a recent study proposing MMTBIEs [52]. That study outlined 8 phases of the meta-framework, based on prior research following the general steps for evaluation. We utilize the recommended essential steps/aspects of the 8 phases of the meta-framework to extract the required examination criteria. Context refers to the social, political, economic, and cultural milieu in which the intervention, treatment, program, or policy takes place. The local and broader context of the MMTBIE has to be understood in this phase.
The objective of the IQMS implemented at the Faculty of Engineering programs was to help its beneficiaries, the enrolled students, attain the ABET SOs during and upon completion of the period of study, and attain the PEOs a few years (3-5 years) after graduation, during employment. The MMTBIE conducted should verify and confirm cohorts' attainment of the ABET SOs during and upon completion of a period of study, and attainment of the PEOs a few years after graduation, during employment. Onwuegbuzie and Hitchcock (2017) adopted Bronfenbrenner's (1979) ecological systems model [96] to conduct evaluations at one or more of Bronfenbrenner's four levels:
a) Micro-MMTBIEs, Level 1: MMTBIEs wherein an intervention, treatment, program, or policy subjected to one or more persons or groups occurs within his/her/their immediate environment(s);
b) Meso-MMTBIEs, Level 2: MMTBIEs wherein an intervention, treatment, program, or policy subjected to one or more persons or groups occurs within other systems, and also within the interaction of these systems, in which he/she/they spend time;
c) Exo-MMTBIEs, Level 3: MMTBIEs wherein an intervention, treatment, program, or policy subjected to one or more persons or groups occurs within systems by which he/she/they might be influenced but of which he/she/they are not explicitly a member; and
d) Macro-MMTBIEs, Level 4: MMTBIEs wherein an intervention, treatment, program, or policy subjected to one or more persons or groups is studied within the larger sociocultural world or society surrounding him/her/them.
Based on Bronfenbrenner's (1979) ecological systems model [96], the IQMS implemented at the Faculty of Engineering EE, CE and ME programs involves evaluations at all four levels, which are outlined in Table K1 (Appendix K). From Table K1, it is evident that the local and broader context of the MMTBIE adequately incorporates regional and international standards for quality in education by examining attainment of the ABET SOs during and upon completion of study, and attainment of the PEOs a few years after graduation, during employment. The second phase, understanding the construct(s) of interest, is accomplished by conducting an extensive review of the literature. As part of the literature review, the evaluator should identify the relevant frameworks that underlie the evaluation, namely the theoretical, conceptual and practical frameworks [97]. The construct(s) of interest then help obtain either input or output variables for the evaluation. Section IV: Theoretical, Conceptual and Practical Frameworks presents a detailed discussion of the relevant frameworks based on an exhaustive literature review, and Figure 1 conveniently displays all the essential elements of the theoretical, conceptual and practical frameworks that facilitate the seamless implementation of CQI processes consisting of 6 comprehensive PDCA quality cycles. Table K2 (Appendix K) presents details on the relevant frameworks, construct(s) of interest and variables, with references to the corresponding subsections of Section IV of this research paper. As mentioned in Section IV, authentic OBE theory forms the basis for the theoretical frameworks that lead to the development of the crucial models which act as the foundation of the IQMS implemented at the Faculty of Engineering. Several essential concepts are then induced from OBE theory, assessment best practices and ABET criterion 4 (CR4) on continuous improvement.
Essential techniques and methods based on this conceptual framework are then realized as a practical framework consisting of the automation tools, modules and features of a state-of-the-art digital platform, the web-based software EvalTools R. EvalTools R is effectively used for the seamless implementation of the IQMS comprising six PDCA quality cycles Q 1 to Q 6 . A highly structured and systematic description of the theoretical, conceptual and practical frameworks, and of their constructs and variables, that adequately fulfills accreditation criteria and achieves the required quality standards would therefore qualify the IQMS for ABET accreditation. We utilize a framework proposed by Onwuegbuzie and Hitchcock (2017) to validate the mapping of the causal links of the IQMS implemented at the Faculty of Engineering EE, CE and ME programs [52]. The framework refers to the requirements of White's (2009) model, which suggests that the causal chain be mapped to determine how the intervention is expected to produce the intended outcome(s) [61]. That is, the causal chain links inputs to outcomes and impacts [61]. Some form of theory (evaluation theory, social science theory, and/or program theory) should govern the mapping of causal links. As part of mapping out the causal chain, the potential directionality should also be assessed to ensure that the observed outcomes and impacts are the result of the project activities, and not vice versa [61]. A set of supplemental evaluation questions also has to be answered for rigorous analyses [55]. Evaluators should use a mixed methodological approach to collect both qualitative and quantitative data in order to rigorously evaluate all the underlying assumptions of the causal links of a given program intervention. In Table K4 (Appendix K), we provide a summary of the data collection activity for the qualitative or quantitative evaluations occurring in each PDCA quality cycle, with references to the various sections of this research paper. An extensive distribution of comprehensive qualitative and quantitative analyses is presented in Table K4 for assessing the underlying assumptions in each PDCA cycle, thereby qualifying the program interventions at the Faculty of Engineering as credible MMTBIEs that fulfill the mixed methodological approach requirements for phase 4 of the meta-framework proposed by Onwuegbuzie and Hitchcock (2017) [52]. Onwuegbuzie and Hitchcock (2017) present several important aspects evaluators need to consider regarding statistical data related to the qualitative and quantitative analyses conducted in program interventions. Specifically, they discussed aspects such as sample size, statistical power for quantitative data, reaching saturation for qualitative data, and types of generalizability and transferability [52]. Their work highlighted the possibility of discrepancy between the planned sample characteristics, sample size(s) and sampling scheme(s) and those that end up being realized as a result of factors such as mortality, nonresponse, untrustworthy responses, and the like.
Onwuegbuzie and Hitchcock (2017) categorically state: "Impact evaluators need to be able to select a sample size large enough and information-rich enough to assess impact heterogeneity, bearing in mind that impact can vary as a function of factors such as intervention design, participant (i.e., beneficiary) characteristics, time, and characteristics of the community (e.g., urbanicity, population size, socioeconomic setting, socio-cultural factors)" [52]. Quantitative data that are subjected to inferential statistics should achieve adequate statistical power, and qualitative data should reach data and theoretical saturation to increase validity and credibility. Impact evaluators also have to determine the type and level of generalizability and transferability. Onwuegbuzie, Slate, Leech, and Collins (2009) identified five types of generalization: external statistical generalizations (making generalizations, predictions, or inferences on evaluation data and findings yielded from a representative statistical [i.e., optimally random and large] sample to the population from which the sample was drawn); internal statistical generalizations (which involve making generalizations and predictions from evaluation data and findings obtained from one or more representative or elite study participants); analytic generalizations (wherein the evaluator is striving to generalize a particular set of [case study] results to some broader theory); case-to-case transfer (i.e., making generalizations or inferences from one case to another [similar] case); and naturalistic generalization (i.e., the stakeholders make generalizations entirely, or at least in part, from their personal or vicarious experiences) [98]. In this phase, we apply the main aspects mentioned by Onwuegbuzie and Hitchcock (2017) to examine the validity and credibility of the qualitative and quantitative statistical data and the type and level of generalizability and transferability [52]. To better understand the entries in Table K5 (Appendix K), we explain some fundamental aspects related to the sampling methodology and the accuracy of the outcomes evaluation employed at the Faculty of Engineering. Direct assessment outcomes data is evaluated at both the course and program level. Outcomes data is collected at the course level using the embedded assessments of the FCAR and PVT technology. This facilitates collecting assessment data for all enrolled students in a class. There are two types of sampling that occur in relation to course-level assessment data. The first type is related to selecting the most appropriate assessments for measuring a specific PI and CO when the instructor has a choice of more than two assessments for each PI [35]. The second type of sampling deals with the selection of a set of students for any specific PI and CO data being assessed. It is important to note here that manual CQI systems are generally frugal in their selection of either the number of assessments or the sample size of students considered for assessments, due to time and resource constraints. The FCAR and PVT technology of EvalTools R enables collecting outcomes data for several assessments and all students in the class [24], [41], [46], [67]. Therefore, the course level statistical outcomes data is comprehensive and heterogeneous, as it represents the complete set of course cohorts in all classes. Other crucial aspects of the course level statistical quantitative data are validity and reliability.
Since we are dealing with outcomes assessment, several factors contribute to the attainment of a high level of accuracy for assessment data. The Faculty of Engineering programs ensure quality standards for assessment and evaluation data by employing the following:
a. implementation of Bloom's Mastery Learning Model [99] to develop and administer the curriculum;
b. adoption of the gold standards of Mager's [83] and Adelman's [16] outcomes design principles;
c. classification of COs and specific PIs as per Bloom's three domains and their learning levels, with assignment of electronic indices for tracking and automated EAMU average computations [24];
d. development and implementation of hybrid rubrics for major course learning activities [42];
e. implementation of unique assessments (where multiple PIs cannot map to a single assessment) [22], [24], [35], [38], [48], [49], [50], [62], [63], [67], [69];
f. implementation of tight scientific constructive alignment of outcomes to assessments using rigorous quality assurance processes [24], [35], [67]; and
g. implementation of course level weighting factors for aggregating outcomes data from various types of assessments [24], [35], [41], [67].
Unlike manual systems, which advocate limited sampling from select courses in the final phases of the engineering curriculum [70]-[74], the program level SOs, PIs and learning domains evaluations conducted at the Faculty of Engineering programs collect data from all courses at all levels of the curriculum [24], [41], [42], [67]. Once again, the FCAR and EAMU vector technology helps EvalTools R collect and extract specific assessment information from all relevant courses for program-level outcomes evaluations. Section IV.D.1 FCAR and PVT discusses the HFWFS program level weighting factor scheme and the accuracy of program level skills aggregation achieved through its application. Section IV.E Practical Framework - Summary of Digital Technology and Assessment Methodology presents a detailed discussion of several essential elements incorporated into the Faculty of Engineering assessment model, ensuring high standards of validity and reliability. Therefore, both the sample size and the statistical power of the quantitative data for course and program level evaluations are qualified for the MMTBIEs of the Faculty of Engineering programs. The qualitative data were collected from surveys and from program and administrative committee reviews. Samples of surveys and relevant results are provided in Section 6.ii PEOs Assessment Data of Appendix J. In general, the program and administrative committee reviews have maximum response rates due to mandatory attendance requirements. The surveys conducted were comprehensive, in a 5-point Likert format, with the questions aligned to the required knowledge and skills of students. The surveys also provided participants with the opportunity to give feedback in the form of comments. The student surveys related to course exit and senior exit (refer to Tables J40, J41, J42, J43, J44 and J55 for the CE, ME and EE programs, respectively) achieved an overall average response rate of more than 70% for the indirect assessment data collected over the last 6 years. The surveys pertaining to alumni (refer to Tables J47, J48 and J49 for the CE, ME and EE programs, respectively), employers (Table J58 shows a 3-year summarized result for the CE, ME and EE programs) and the EAC (refer to Table J33) received even higher response rates, as mentioned in Sections 6.i and 6.ii PDCA Quality Cycle Q6: PEOs 5-Years Review Process of Appendix J.
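A simple illustration of how such response rates can be aggregated term by term is given below; the per-term counts are hypothetical placeholders, and only the reported target of an overall average above 70% comes from the programs' results.

# Hypothetical per-term course-exit survey counts (illustrative numbers only).
course_exit_surveys = [
    {"term": "371", "responses": 180, "enrolled": 240},
    {"term": "372", "responses": 205, "enrolled": 260},
    {"term": "381", "responses": 190, "enrolled": 250},
]

def overall_response_rate(surveys):
    """Unweighted average of per-term response rates, as a first approximation."""
    rates = [s["responses"] / s["enrolled"] for s in surveys]
    return sum(rates) / len(rates)

print(f"Overall average response rate: {overall_response_rate(course_exit_surveys):.0%}")
# With these illustrative numbers the result is about 77%, above the 70% target.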
The quality of the responses received from the alumni, the EAC and employers was deemed fairly acceptable, based on a qualitative examination of responses and comments. The responses indicated involvement, considered opinions and valuable comments targeting quality improvement. On the other hand, student feedback related to course exit surveys reflected a lack of understanding and involvement on the part of the majority of students. Consequently, the QA office has recorded this observation and begun implementing a series of remedial actions to address this deficiency. Therefore, with regard to saturation of the qualitative data collected from student course exit surveys, we regard this as not yet achieved and undergoing improvement. The reasons are not related to any deficiency in the underlying theory or framework, but rather to the interest and involvement of the students. Onwuegbuzie and Hitchcock (2017) state that in Phase 6, programs should conduct a rigorous evaluation of impact, either prospectively (i.e., beginning during the design phase of the intervention) or retrospectively (i.e., usually conducted after the implementation phase), by using a credible counterfactual, which measures what would have happened to beneficiaries of the intervention in its absence, with the impact being estimated by comparing counterfactual outcomes to those observed under the intervention [52]. They state that selecting an appropriate counterfactual is a vital task in this phase and suggest using a control or comparison group to define the counterfactual outcomes. The control group has to be identified in a way that avoids selection, confounding factors, and contamination, any of which might lead to a spurious relationship between the intervention and its outcome [52]. The Faculty of Engineering had approved budgetary support and manpower to complete the required tasks related to fulfillment of the requirements for ABET accreditation and preparation for an audit visit at the end of 2019. The allocation of resources or manpower required for the creation and management of additional efforts with control groups, for comparison of processes and outcomes against a counterfactual without any planned intervention, was not institutionally recognized, since it is not mandated by ABET, Washington Accord or NCAAA accreditation criteria [11], [13], [66]. Therefore, a robust and accurate alternative, multi-term SOs data evaluations (2014-2018) with trend analysis, was employed to confirm the impact of the implementation of the IQMS at the Faculty of Engineering. In our opinion, the multi-term SOs data was a better option for studying the impact of the intervention, since the outcomes data was quantitative, valid and reliable, collected from direct assessments using state-of-the-art digital technology, under the strict supervision of quality assurance staff, and following world-class assessment best practices. Several issues related to the management of control groups and the strict regulation of interference conditions with the intervention were thereby avoided entirely. Multi-term executive summary reports are provided in Section 5.ii ME Program Sample - Multi-term Executive Summary Report for ABET SOs (Tables J55, J56 and J57, Appendix J). In phase 7, we conduct a rigorous process analysis of all the PDCA quality cycles by reviewing the frameworks, construct(s) of interest, inputs and outputs, and make observations to confirm the fulfillment of the intermediate and final outcomes of the intervention.
PDCA quality cycle Q 1 deals with the development of the course and curriculum delivery plans and is based on authentic OBE theory. Specifically, Bloom's Mastery Learning Model [99] is implemented to help students progress from lower-order to higher-order thinking skills using the Ideal Course Learning Distribution Model presented earlier in Section IV.B.3 Ideal Learning Distribution. The QA office has thoroughly audited the course work in each term for the CE, ME and EE programs to ensure compliance with the design rule standards for COs, PIs and hybrid rubrics. Several hundred thousand documents related to complete course work portfolios for the CE, EE and ME programs, for the fall, spring and summer terms covering this period, are available in a cloud-based environment. The EvalTools R Remote Evaluator Module was used effectively for remote audits by ABET evaluators for 6 weeks prior to the actual visit at the end of November 2019. The quality cycle Q 2 involves intensive quality management efforts to ensure monitoring and control of all course work using an FCAR checklist, according to the standards and models described in Section IV.B Conceptual Framework - Models. The FCAR checklist consists of qualitative and quantitative components, as noted in Table K4. The EOT checklist covers essential course activity from all the courses and is completed following comprehensive audits conducted after the final exams each term. EOT approval by the supervisor, QD, clears the way for program term reviews, which consist of Learning Domains, PIs and SOs evaluations. Section VI.C PDCA Quality Cycle Q3: Program Term Review - Learning Domains, PIs and SOs Evaluation provides in-depth explanations of this phase of program evaluations. The course work from a given term acts as input and is aggregated to collect SOs and PIs data from FCAR information. Learning domains evaluations help in managing curriculum delivery by monitoring the counts of assessments for each SO and the learning levels of the three learning domains. The outputs of this quality cycle are updated SO assessment plans, curriculum maps and evaluation reports such as the SOs executive summary, SO/PI PVT data, the course action items matrix and the learning domains evaluation results. The various qualitative and quantitative analyses employed in quality cycle Q 3 are shown in Table K6 (Appendix K). Course work and CQI data for all the terms from the fall of 2014 to date are complete and uploaded to a cloud-based environment. PIs created in quality cycle Q 1 every term are stored in a digital database and are comprehensively reviewed every 3 years in the PDCA quality cycle Q 4 . The last review was conducted in spring 2018. Section IV.D.2 Specific PIs Database elaborates on the philosophy behind the classification of PIs according to the affective, cognitive and psychomotor domains of Bloom's taxonomy. Any redundant, inaccurate or misplaced PIs are corrected, and the database, along with any affected rubrics, is updated. Table J17 in Section 4 PDCA Quality Cycle Q4: PIs 3 Year Multi-Term Review of Appendix J shows samples of PIs modifications for the EE, ME and CE programs. The multi-term SOs average values are rolled up in a multi-term SOs executive summary for review by the program and external advisory committees. The results of the multi-term summary for the EE, ME and CE programs indicated stabilization of SOs results towards the spring of 2018 (Section 5.i, Appendix J).
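The decision rule for the multi-term review described earlier (cycle Q 5 ) classifies the overall result by the share of SOs showing improving trends. The Python sketch below applies those stated thresholds; the trend computation and the sample numbers are simplified illustrations, since the programs use full regression-based trend and forecast analyses.

def so_trend(values):
    """Crude trend for one SO across terms: average of the later half minus
    the earlier half (the programs use regression and forecasting instead)."""
    mid = len(values) // 2
    return sum(values[mid:]) / (len(values) - mid) - sum(values[:mid]) / mid

def multi_term_decision(so_series):
    """Apply the review thresholds stated in the text to the per-SO trends."""
    trends = [so_trend(v) for v in so_series.values()]
    improving = sum(t > 0 for t in trends) / len(trends)
    declining = sum(t < 0 for t in trends) / len(trends)
    if improving > 0.8:
        return "Exceeding Expectations"
    if improving >= 0.6:
        return "Meeting Expectations"
    if declining > 0.6:
        return "Below Expectations"
    return "Further review required"   # outside the stated bands

# Hypothetical multi-term SO averages over six terms (0-1 scale, illustrative only).
example = {
    "SO-a": [0.68, 0.70, 0.71, 0.73, 0.74, 0.76],
    "SO-b": [0.72, 0.71, 0.73, 0.74, 0.76, 0.78],
    "SO-c": [0.65, 0.66, 0.64, 0.67, 0.69, 0.70],
}
print(multi_term_decision(example))   # -> Exceeding Expectations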
The trend and forecast analyses for most SOs indicated upcoming improvement in the following year's SOs results, thereby yielding a Meeting Expectations result for the EE and ME programs and an Exceeding Expectations result for the CE program (Sections 5.iii, 5.iv and 5.v, Appendix J). The review committees concluded that assessment instrument quality and its application to teaching and learning, with follow-up to the closure of several hundred real-time and deferred corrective actions, contributed to the overall improvement in the attainment of SOs. The evidentiary improvement information obtained from the multi-term SOs data, coupled with the thousands of CQI data points collected through rigorous qualitative and quantitative analyses in each PDCA quality cycle, clearly pointed to tight alignment models connecting outcomes with teaching, learning, assessments, feedback and the improvement of student learning. Therefore, from the process analysis conducted in phase 7, as shown in Table K6, we find that all elements of the causal links assiduously follow the stated theoretical, conceptual and practical frameworks and work in a tightly cascaded connection to directly contribute to an overall improvement in the attainment of the EE, ME and CE program SOs. The PDCA quality cycle Q 6 : PEOs 5-year review constitutes the meta-analysis proposed by Onwuegbuzie and Hitchcock (2017) in phase 8, since it integrates a rigorous mixed methods evaluation of both the process and the product of the IQMS implemented at the Faculty of Engineering programs. The External Advisory Committee, with adequate representation from faculty, alumni and industry, forms an integral part of the review and analysis efforts in the PDCA quality cycle Q 6 . The process evaluation part involves a mixed methods analysis of the EE, ME and CE programs' vision, mission, PEOs, SOs, curriculum, Capstone design and industrial training courses, and CQI systems and processes. The product evaluation part comprises a mixed methods analysis of the EE, ME and CE programs' multi-term SOs results and of alumni, senior exit and employer feedback, followed by trend analysis and forecasting of the SOs multi-term data. From the engineering programs' perspective, the contribution to the development of the graduate attributes ends with the education process, that is, with the course of study. Therefore, the multi-term SOs executive summary report (2014-18) is considered a comprehensive and conclusive internal representation of the knowledge and skills of cohorts who are the product of a complete and full quality cycle of the education process at the Faculty of Engineering programs. For an external source of feedback on PEOs information, data is collected from graduates who are now alumni pursuing challenging careers in industry. The engineering programs endeavor to collect critical information from employers and alumni as to how the engineering education offered to these cohorts actually helped them in their careers and future growth prospects. Key aspects of the information gathered from surveys pertain to their application of the theory learnt during their education to real-life engineering problem-solving, design and experimentation activity; transversal skills; entrepreneurship activity, professional development and career growth; community service, research and consulting contributions; and positive cultural and societal impact through exemplary morals derived from Islamic ethics.
Engineering programs usually collect this critical information using various mechanisms and tools such as Likert surveys, invited focus groups, and outreach programs. The feedback received is reviewed carefully by both the program and external advisory committees to understand the areas of weakness and strength in the education process so that appropriate remedial actions can be developed to effectively target specific improvements. Table K7 (Appendix K) shows the qualitative and quantitative process analyses employed for PDCA quality cycles Q 1 to Q 5 . The last portion of Table K7 shows the PDCA quality cycle Q 6 , which is the meta-analysis phase 8 of the MMTBIE of the Faculty of Engineering EE, ME and CE programs, involving both process and product evaluations. The product evaluation deals with aspects related to the attainment of the PEOs a few years after graduation. The process analyses cover a qualitative review of the curriculum, Capstone project design work, industrial training experience, the teaching/learning process, CQI systems, labs and other infrastructure matters. The quantitative analyses involve a review of the multi-term SOs executive summary reports and trend analyses, along with the COs, PIs and SOs data for the capstone design and industrial training courses. The qualitative and quantitative analyses conducted for the process and product evaluations in the PDCA quality cycle Q 6 involve multiple levels of audits that include the program committee, the QA office, the QD supervisor and, finally, the External Advisory Committee. The rigorous QA procedures based on authentic frameworks, coupled with an exhaustive array of qualitative and quantitative analyses for both the process and product evaluations of the Faculty of Engineering IQMS, qualify the MMTBIE in phase 8 as credible, since they adequately fulfill the criteria presented by Onwuegbuzie and Hitchcock (2017) [52]. The driving force behind this research is to examine the benefits and limitations of applying the essential theory of an authentic OBE model for the implementation of a holistic and comprehensive educational process that maximizes opportunities for the attainment of successful student learning. The objective is to remotely conduct, during global pandemic conditions, a MMTBIE of the state-of-the-art IQMS implemented at the Faculty of Engineering's EE, CE and ME programs (2014-20) using digital technology and OBE methodology to achieve ABET accreditation. Yes. The outcomes data collection and reporting processes are sustainable and have been implemented systematically and seamlessly since Fall 2014. Roughly two million documents of evidentiary data, in the form of course materials, student work and CQI information, are available in a cloud-based environment. ABET evaluators were provided access to this display material using the EvalTools R Remote Evaluator Module. Section VII Sustainability of Course and Program Level CQI Processes provides a detailed explanation of the sustainability of the data collection and reporting processes. 4: DO THE IQMS IMPLEMENTED AT THE EE, CE AND ME PROGRAMS PROVIDE A LISTING AND DESCRIPTION OF THE ASSESSMENT PROCESSES USED TO GATHER THE DATA UPON WHICH THE EVALUATION OF EACH STUDENT OUTCOME IS BASED? (ABET, 2019 EAC CRITERIA, SELF-STUDY TEMPLATE, CR4: SECTION A.1) Yes. Table 11 in Section V. Description of Assessment Process and Activity lists the assessment and evaluation activity, timeline and ownership for all six PDCA quality cycles Q 1 to Q 6 . Yes. Table 11 in Section V.
Description of Assessment Process and Activity lists the frequency of assessment and evaluation activity in all six PDCA quality cycles Q 1 to Q 6 . The CQI activity is managed by the IQMS using EvalTools R, which integrates AAS, LMS, OAS and CIMS. The CIMS feature provides significant savings in terms of CQI activity documentation, tracking and history. Section IV.D.7 CIMS provides a detailed explanation of its features and capabilities. Yes. Table 6 in Section IV.D.1 FCAR and PVT provides an elaborate explanation of the performance criteria and heuristic rules, which clearly indicate the expected level of attainment for all the SOs. However, any hybrid rubrics implemented by instructors can define additional performance criteria for specific assessments. DO THE IQMS IMPLEMENTED AT THE EE, CE AND ME PROGRAMS PROVIDE TOOLS AND RESOURCES TO EFFECTIVELY DOCUMENT AND MAINTAIN THE RESULTS OF EVALUATIONS? (ABET, 2019) Yes. The CIMS feature provides task lists for both program and administrative committees that show generated action items, history, remarks, status, ownership, time stamps, etc. This feature presents significant savings of manpower and resources, which are otherwise spent in the tracking, extraction and preparation of hard copies in organized formats as display material by manual CQI systems (refer to Appendix G: CIMS). Yes. The EE, CE and ME programs' successful attainment of ABET accreditation in 2020, for a full six-year period up to 2026, with a majority of strengths and without any deficiency, concern or weakness, is a credible testimony to the practicality and global quality standards of the digital IQMS [11]. The theoretical, conceptual and practical frameworks discussed in this research paper present to engineering programs a perfectly viable methodology and practical digital technology, based on OBE models, that fully satisfy the IEA's Washington Accord accreditation criteria, graduate attributes and competency profiles [10]. The meta-analyses, sustainability evaluation, and EvalTools R Remote Evaluator Module presented in this study prove that digital technology based on authentic OBE methodology can indeed be the panacea for the challenges faced by both engineering programs and accreditation agencies in implementing IQMS and conducting credible remote audits in an uncharted digital age, during and after the COVID19 global pandemic. Notwithstanding this majority of positive aspects, one limitation of our system is that the scanning of paper documents is currently performed by either the lecturers or teaching assistants. Future research can target the development of state-of-the-art digital systems that automate the outcomes assessment development and scoring processes. This technology would integrate with and enhance existing digital systems to significantly reduce the overhead related to the overall time spent by faculty in the outcomes assessment process and the scanning work done by lecturers. Specifically, the Faculty of Engineering QA office intends to pursue ground-breaking automation technology to push the frontiers in outcomes assessment by including optical character recognition features in remote online marking and scoring tools to assess digital versions of hard copies of student exam sheets fed into high-end, large-scale scanners with barcode reading capability. The barcoding on digital copies of students' exams would help align them with corresponding exam templates that automatically map to the COs, specific PIs, rubrics and SOs.
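To indicate the kind of alignment logic such a future system could use, the short Python sketch below maps a scanned exam's barcode to a hypothetical exam template whose questions are pre-mapped to COs and PIs. All identifiers, template contents and the mapping itself are invented for illustration and do not describe an existing EvalTools R capability.

# Hypothetical exam templates keyed by barcode: each question is pre-mapped to a CO and PI.
EXAM_TEMPLATES = {
    "EE421-FIN-2021": {
        "Q1": {"co": "CO-2", "pi": "EE-PI-4.1", "max_score": 10},
        "Q2": {"co": "CO-3", "pi": "EE-PI-4.3", "max_score": 15},
    },
}

def map_scanned_exam(barcode, question_scores):
    """Attach CO/PI mappings to OCR-extracted question scores for one scanned exam."""
    template = EXAM_TEMPLATES[barcode]
    return [
        {
            "question": q,
            "co": template[q]["co"],
            "pi": template[q]["pi"],
            "normalized": score / template[q]["max_score"],
        }
        for q, score in question_scores.items()
    ]

# Example: scores that an OCR/marking tool might extract for one student's exam.
print(map_scanned_exam("EE421-FIN-2021", {"Q1": 8, "Q2": 12}))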
This technology would automate the outcomes mapping, manual score entry, and file scanning and uploading efforts, thereby resulting in enormous savings of manpower and other resources. Additionally, Zoom video conferencing is to be integrated into version 7 of EvalTools R, scheduled to roll out in early 2021, supporting virtual tours of lab facilities and faculty/student interviews and thereby significantly enhancing remote audit capabilities. Such cutting-edge innovations in digital technology can dramatically revolutionize the implementation of OBE quality systems for higher education and accreditation, especially during the COVID19 global pandemic and beyond. According to Eaton (2015), retired president of the CHEA, accreditation is the primary means by which colleges, universities and programs assure quality to students and the public. Accredited status is a signal to students and the public that an institution or program meets at least threshold standards for its faculty, curriculum, student services and libraries [100]. The two top standards of the CHEA's recognition criteria (Eaton, 2012) are: 1) advance academic quality: accreditors have a clear description of academic quality and clear expectations that the institutions or programs they accredit have processes to determine whether quality standards are being met; and 2) demonstrate accountability: accreditors have standards that call for institutions and programs to provide consistent, reliable information about academic quality and student achievement to foster continuing public confidence and investment. Student achievement and accountability are widely regarded as the biggest challenges to improving the quality of higher education in the world today [38]. In order to meet these challenges, an OBE model for student learning, along with several quality standards in higher education, has been adopted by accreditation agencies and educational institutions globally over the past two decades [10]-[13], [18], [51]. The Washington Accord lays down international quality standards based on learning outcomes for engineering accreditation. Graduate attributes, knowledge and problem-solving profiles specify the technical and transversal knowledge and skills which students should attain during and after the completion of engineering education [66]. Accreditation standards require engineering programs to demonstrate student learning outcomes with established and sustainable CQI processes based on clearly defined performance criteria. ABET's criterion 4 is regarded by many educators as the most challenging for engineering programs to fulfill. To drive the point home, instead of citing several sources, we quote Fergus (2012), chair of ABET's Engineering Accreditation Commission, ABET fellow and chairperson of the accreditation committee at the Minerals, Metals and Materials Society (TMS): "Establishing, implementing and documenting processes to determine if graduates are meeting expectations and if students are attaining student outcomes is a significant challenge. For a continuous improvement process to be effective, it must be sustainable. Collecting assessment data at a rate that cannot be maintained and in amounts that cannot be properly evaluated is counterproductive. Data should be collected continuously at rates that do not detract from educating students and in amounts that can be evaluated to provide useful information on the effectiveness of the program.
If data is being collected that is not providing useful information, then the process should be modified to obtain useful data; improvement of the process is part of continuous quality improvement." [39] Two essential points arise from this statement, as confirmed through the findings of this research and more than a decade of intensive consultation and accreditation experience of the authors. Firstly, continuous improvement based on outcomes assessment is, by far, the most challenging aspect of accreditation. Secondly, both accreditation agencies and programs have to decide how to proceed when precariously balancing the need for data quality against the type and amount of data, sampling models, and the frequency and methods of collection. According to the OBE, assessment and quality experts referred to in the introduction to this paper, the two aspects related to data are interchangeable. Sufficient amounts of relevant and valid data have to be sampled appropriately, collected using precision methods and evaluated accurately. Without collecting data in all courses and for multiple assessments in the various phases of course and curriculum delivery, programs can never attain real-time CQI, since they do not have sufficient data to indicate failures for timely remedial action. Any CQI model which does not solve the problems at hand, but relies on a deferment plan, does not fulfill the requirements of CQI at all. Such a CQI model does not address the urgent learning needs of enrolled cohorts, but is instead based on a program-centered model. Another major challenge for accreditation agencies is to substantiate claims of "OBE" if all student outcomes data is not included. The Washington Accord and ABET have announced student-centered education systems employing Bloom's Mastery Learning Model [99] and Taxonomy [27], but they do not seem to fulfill the gold standard of OBE, viz., establishing educational systems in which "all students can learn and succeed". Students cannot learn and succeed, especially if they cannot access basic information relating to their attainment of outcomes, which is an essential requirement for gauging student achievement and establishing accountability for engineering programs. Obviously, and as per the literature cited in the introductory sections of this research, most accredited programs using manual CQI systems and processes do not assess all students, due to the massive amounts of data involved and the huge costs in terms of time and other resources needed for data collection and reporting. The literature review of this research highlighted several issues with manual CQI systems and also cited references to digital solutions adopted by several programs. ABET has also been showcasing digital solutions in its symposia for almost a decade. However, probably due to commercial and practical reasons, there has been no mandate for digital platforms, and thousands of programs in the US and across the globe are still using manual CQI systems. Additionally, the looming international crisis due to the COVID19 global pandemic, which seems likely to be a prolonged affair with severely limited regional and international movement and travel, has resulted in drastic changes to the format of education delivery globally. The COVID19 global pandemic conditions have also, by force majeure, affected the normal protocol for onsite accreditation visits. Many accreditation bodies, including ABET, have either deferred or announced virtual audits for upcoming accreditation cycles.
The limitations of manual CQI systems, coupled with the global crisis conditions caused by the COVID19 pandemic, have forced both accreditation agencies and engineering programs to rethink the role of digital solutions as a panacea for remote and virtual audits. The key question is whether digital solutions will be the necessary or preferred choice for engineering programs pursuing renewal or initial accreditation. Obviously, the answer to this question will unfold in the coming years, based upon how readily engineering programs collectively respond to accreditation requirements with digital solutions. Unfortunately, the global COVID19 pandemic does not absolve programs from accountability to students and the public for meeting the required standards of engineering education. Virtual accreditation audits will need to place a greater focus on the quality of digital CQI display data so that programs can establish credibility and meet accreditation requirements. Contrary to some uninformed opinions, there are no simplistic quantitative metrics that can be recommended to accreditation agencies and programs for verifying the accuracy and credibility of rigorous program evaluations. In fact, engineering programs may also ingeniously review some aspects of the assessment methodology and technology presented in this research and creatively produce enhanced authentic OBE assessment models or digital tools. As suggested by Onwuegbuzie and Hitchcock (2017), the credibility and rigor of an evaluation rest on many aspects, such as the use of mixed methods for analyses, accurate theoretical and conceptual frameworks, an appropriate context for evaluations, constructs of interest, well-defined causal links, meta-analyses of processes and products, and the quality of outcomes data [52]. The evaluation results and the Appendix K tables reported in this study thoroughly examine all these aspects using the 8 phases of a comprehensive meta-framework and provide detailed guidelines for a multi-dimensional mixed methods approach to achieve credible MMTBIEs. Essential elements that ensure the quality of CQI data, such as sampling schemes, data and theoretical saturation for qualitative analyses, statistical power of quantitative data, generalizability and transferability, sustainability, and data collection and reporting methods, have been adequately discussed in this research. We also show how the embedded assessment methodology using the FCAR and PVT with specific PIs and hybrid rubrics presents significant savings to instructors and helps ensure that outcomes data are valid, reliable and tightly aligned to learning activities. The documentation and reporting features of EvalTools R could help programs actively facilitate social distancing norms, since both faculty and students can interact remotely and exchange digital versions of necessary educational information such as outcomes results, advising notes, syllabi, lessons, online assessments, assignments and gradebook results. The most arduous task, maintaining a trail of CQI history all the way up to closed corrective actions, is transformed into a seamless and entirely manageable digital affair with the help of the CIMS module.
The Remote Evaluator Module provides accreditation auditors with an all-in-one remote display dashboard with tabs for conveniently accessing a wealth of evidential information, such as course portfolios; curriculum maps; performance criteria and heuristics rules; course and program evaluation results; PEOs, SOs, PIs and rubrics databases; single-term and multi-term SOs and executive summary reports; SOs-based objective evidence; complete CQI history, including detailed committee activity; and advising records [90]. The discussions on the limitations of current digital technology and the proposed solutions present an exciting new frontier of research dealing with the automated development of outcomes-based assessments and their remote marking capabilities. The detailed theoretical and practical frameworks presented in this research provide comprehensive information for the implementation of an IQMS. The evaluation results and discussions provide valuable insights into conducting credible program interventions by showing how the various phases of a novel meta-framework help to qualify comprehensive digital CQI systems. In conclusion, the findings of this study offer both accreditation agencies and engineering programs significant exposure to the overwhelming benefits of an outcome-based digital IQMS for seamless management of automated data collection and reporting to enable credible remote accreditation audits during the COVID19 global pandemic and beyond.
No potential conflict of interest was reported by the authors. There is no funding for this project.
WAJID HUSSAIN (Senior Member, IEEE) is a renowned world expert on authentic OBE, QA processes, outcomes assessment, and program evaluation for accreditation using digital technology and software. He has extensive experience supporting and managing outcomes assessment and CQI processes to fulfill regional and ABET accreditation requirements. He joined academia after an intensive engineering career in Silicon Valley, with more than 20 years of mass-production expertise in the billion-dollar microprocessor manufacturing industry. Over the last two decades, he has managed scores of projects related to streamlining operations through state-of-the-art technology and digital systems, with significant experience working with ISO standard quality systems. He received the LSI Corporation Worldwide Operations Review 1999 Award for his distinguished contributions to quality improvement systems. He was the Lead Product Engineer supporting the Portal Player processor for Apple's iPod, among many other world-famous products at LSI Corporation. He led the first 'tuning' efforts in the Middle East by developing a complex database of thousands of outcomes and hundreds of rubrics for the engineering disciplines at the Islamic University. He developed and implemented state-of-the-art digital Integrated Quality Management Systems for the engineering programs and led their successful ABET accreditation efforts. His research interests include CQI using digital technology, quality and accreditation, outcomes assessment, education and research methods, and VLSI manufacture. He is currently a reviewer for several international conferences. He has been an invited keynote speaker or presenter at more than 40 international OBE and education conferences. He is a Senior Member of the IEEE Education Society, a Board Member of the IN4OBE, and a member of the AALHE and ASEE.
WILLIAM G. SPADY received the Ph.D. degree from the University of Chicago, in 1967.
He began his academic career as a Professor with the Graduate School of Education, Harvard University. This was followed by major national leadership positions in education and the founding of his own national consulting company in 1991. Since then, he has lectured at major educational conferences throughout the world on cutting-edge approaches to a range of topics related to OBE, leadership, human potential, paradigm change, and learner empowerment. He is recognized across the world as a dynamic and compelling consultant and presenter. He has published nine highly acclaimed professional books, scores of journal articles, and many solicited chapters in the books of others. A list of his books can be found on his website. He has been the subject of three doctoral dissertations and has been honored with a Center on Transformational Leadership and Learning in the Philippines bearing his name. His experience and expertise in Outcome-Based Education are unmatched globally. Known internationally as ''The Father of OBE,'' he has earned a reputation as the recognized worldwide authority on future-focused, paradigm-shifting, personally-empowering approaches to Transformational OBE. For over four decades, he has spearheaded major OBE initiatives throughout North America, South Africa, Australia, The Philippines, Saudi Arabia, and the United Arab Emirates on expanding the vision, shifting the paradigm, and improving the performance of learners, educators, and educational systems. This work has bolstered his recognized expertise in organizational change, transformational leadership development, strategic organizational design, and elevated models of learning and living.
He is the lead person for ABET Accreditation at the EE Department, IUM, and his efforts in implementing continuous improvement and quality in outcome-based education led to the successful ABET Accreditation of the EE program in 2020. He has also authored and coauthored several journal and IEEE proceedings publications. His current research interests include next-generation millimeter-wave (mm-wave) radio-over-fiber systems, mm-wave/THz signal generation with mode-locked lasers, and antenna design/characterization for Wi-Fi/IoT/UAV/FANET/5G systems and millimeter-wave frequency bands. He is an active reviewer for many reputed IEEE journals and letters.
LINDSEY CONNER is an internationally renowned education expert known for her research on innovation in education and teaching. Her commitment to social justice and her prioritization of specific actions underpin her philosophy and leadership to empower people to stretch their potential for social mobility. She was previously the Deputy Pro-Vice-Chancellor (Education), and the Associate Dean Postgraduate Research and Dean (Education) at the University of Canterbury, New Zealand. Her leadership in education is drawn from nearly 26 years as a researcher and teacher educator in science education, including research on the application of technologies within disciplines and in cross-disciplinary contexts. She has also led an international partnership project on implementing ICT in schools and was the Director of the Science and Technology Education Research Hub, which has received international acclaim. She has a strong international profile and led the seven-country Pacific Circle Consortium research project on Teacher Education for the Future. In 2013, she was an invited consultant to NIER (Japan), working with the Ministry of Education Japan on infusing competencies across the curriculum.
She was also a funded visiting fellow at SEAMEO RECSAM, Penang (2013) and a consultant in developing science teaching standards for the South East Asian Ministries of Education (2014), which have been implemented in 11 countries. She has developed courses and supported researchers from universities in Bangladesh, China, Malaysia, and South Korea. Previously, she was New Zealand's Coordinator for the OECD Innovative Learning Environments Project and a Commissioner for the New Zealand Olympic Education Committee.
REFERENCES
Organizing for results: The basis of authentic restructuring and reform
Beyond traditional outcome-based education
Outcome-based education's empowering essence
Choosing outcomes of significance
Outcome-Based Education: Critical Issues and Answers
Linking Levels, Learning Outcomes and Assessment Criteria
Bologna Process-European Higher Education Area
Teaching Strategies for Outcome Based Education
Developments in outcome-based education
Outcome-based education: The future is today
Accord Signatories
Accreditation Criteria
Accreditation Resources and Criteria
Saudi Arabian National Center for Academic Accreditation and Evaluation, Saudi Arabia
Institutional Assessment Practices Across Accreditation Regions
Regional Accreditation and Student Learning Outcomes: Mapping the Territory
To imagine a verb: The language and syntax of learning outcomes statements. National Institute of Learning Outcomes Assessment (NILOA)
Higher education: Waking up to the importance of accreditation
Middle States Commission of Higher Education. Principles for Good Practices: Regional Accrediting Commissions
An engineering accreditation management system, presented at the 2nd Conf.
ACAT: A Web-based software tool to facilitate course assessment for ABET accreditation, presented at the 7th Int.
Programme outcomes assessment models in engineering faculties
Continuous improvement in the assessment process of engineering programs
Evaluation of competency methods in engineering education: A systematic review
Engineering program evaluations based on automated measurement of performance indicators data classified into cognitive, affective, and psychomotor learning domains of the revised Bloom's taxonomy, presented at the ASEE Annu.
Inside the black box: Raising standards through classroom assessment
Taxonomy of Educational Objectives: The Affective Domain
Practical framework for Bloom's based teaching assessment of engineering outcomes
Work in progress: Practical framework for engineering outcomes-based teaching assessment-A catalyst for the creation of faculty learning communities
Assessment essentials: Planning, implementing, and improving assessment in higher education (review)
Ensure program quality: Assessment a necessity, presented at the IEEE Eng.
Web-based course information system supporting accreditation
Can ABET really make a difference?
Work in progress: Engaging faculty for program improvement via EvalTools: A new software model, in Proc.
A digital integrated quality management system for automated assessment of QIYAS standardized learning outcomes
A Web-based course assessment tool with direct mapping to student outcomes
Automating outcomes based assessment
An overview of US accreditation. Council of Higher Education Accreditation
Program improvement through accreditation
Digitally Automated Assessment of Outcomes Classified Per Bloom's Three Domains and Based on Frequency and Types of Assessments
Quality improvement with automated engineering program evaluations using performance indicators based on Bloom's 3 domains
Specific, generic performance indicators and their rubrics for the comprehensive measurement of ABET student outcomes
Performance measurement and continuous improvement of undergraduate engineering education systems
Developing a comprehensive assessment program for engineering education
Systematic means for identifying and justifying key assignments for effective rules-based program evaluation
Integrated FCAR model with traditional rubric-based model to enhance automation of student outcomes evaluation process
Automating Outcomes Based Assessment
Assessment of program outcomes for ABET accreditation
Framework and Accreditation System Awarded by an Authorised Agency to a HEI (Higher Education Institution)
Accreditation Documents and Criteria
A meta-framework for conducting mixed methods impact evaluations: Implications for altering practice and the teaching of evaluation
When Will We Ever Learn? Improving Lives Through Impact Evaluation
Practical Program Evaluation: Assessing and Improving Planning, Implementation, and Effectiveness
Evaluation Brief: Conducting a Process Evaluation
A theory driven evaluation perspective on mixed methods research
Monitoring and impact assessment of community-based animal health projects in southern Sudan: Towards participatory approaches and methods
Toward a conceptual framework for mixed-method evaluation designs
Mixed Methods Research and Culture-Specific Interventions: Program Design and Evaluation
Program Theory-Driven Evaluation Science: Strategies and Applications
Theory-based impact evaluation: Principles and practice
An instrument for measuring the learning outcomes of laboratory work
Connecting the Dots: Developing Student Learning Outcomes and Outcome-Based Assessments
Industrial training courses-A challenge during the COVID19 pandemic
The New Economics for Industry, Government
Graduate Attributes and Professional Competencies
Beyond outcomes accreditation
The use of scoring rubrics: Reliability, validity and educational consequences
Improving instruction and assessment via Bloom's taxonomy and descriptive rubrics
Assessment Methodologies for ABET Accreditation: Success Factors and Challenges
Developing an accreditation process for a computing faculty with focus on the IS program
Office of Assessment. Office of the Provost
Small Sample Size
A program assessment guide: Best practices for designing effective assessment plans, academic affairs
Problem Solving: Much More Than Just Design
Feedback in the Classroom: Making Assessment Matter. American Association for Higher Education Assessment Forum
Outcomes assessment of accounting majors
Course-embedded assessments for evaluating cross-functional integration and improving the teaching-learning process
Synopsis of the use of course-embedded assessment in a medium sized public university's general education program
Improving upon best practices: FCAR 2.0, in Proc.
Selective and objective assessment calculation and automation, ACMSE'12
Performance assessment of EC-2000 student outcomes in the unit operations laboratory
Preparing Instructional Objectives: A Critical Tool in the Development of Effective Instruction
Developing CE Rubrics. Wajid Workshop Hybrid Rubrics CE
Developing EE Rubrics. Wajid Workshop Hybrid Rubrics EE
Developing ME Rubrics. Wajid Workshop Hybrid Rubrics ME
Automated Engineering Program Evaluations-Learning Domain Evaluations-CQI Specific Performance Indicators
What assessment can and cannot do, Pedagogiska Magasinet
Outcome based education-Student's outcomes data for implementation of effective developmental advising using digital advising systems, Higher Education Pedagogies, Routledge, Taylor & Francis, to be published
Concept of Academic Advising
Drafting program educational objectives for undergraduate engineering degree programs
Saudi Arabian National Vision 2030, Saudi Arabia
UNESCO (2019), document IBE/2019/WP/CD/30 REV, UNESDOC Digital Library
Transformational OBE. OBE Evolution
The Ecology of Human Development: Experiments by Nature and Design
On the theoretical, conceptual, and philosophical foundations for research in mathematics education
Mixed data analysis: Advanced integration techniques
Learning for mastery
Accreditation and recognition in the United States
ACKNOWLEDGMENT
This research is based on the results of a rigorous 5-year program for the implementation of a comprehensive OBE model involving curriculum, teaching, learning, advising and other academic and quality assurance processes for the CE, ME and EE engineering departments at the Faculty of Engineering, Islamic University in Madinah. The program efforts were directly led by the corresponding author and three of the co-authors, who at the time were the Director of the Quality and Accreditation Office at the Faculty of Engineering, Islamic University, and the ABET coordinators for the three departments. The authors thank the faculty members for their cooperation and support in completing the necessary quality assurance and academic teaching processes that enabled the collection of the data required for the final results.