The Association for Psychological Science has selected studies from the University of Notre Dame for a flash talk at the association’s annual meeting May 26-29 in Chicago. In these studies, the researchers focused on how the shift to mostly online, off-campus environments affected testing for both students and faculty at universities.
The studies were led by Teresa Ober, assistant research professor of psychology, and Ying (Alison) Cheng, professor of psychology and associate director of the Lucy Family Institute for Data and Society. Based on a survey administered to 996 undergraduate students throughout the U.S., the research team found that, even after accounting for demographic variables (gender, race/ethnicity, parental educational attainment), students who reported greater pandemic-induced stress tended to have greater test anxiety and less confidence in their computer skills. Female students also reported less confidence in their computer skills and greater pandemic-related stress when completing assessments. Students from racial/ethnic minority groups underrepresented in STEM, by contrast, reported greater confidence in their computer skills.
The team also surveyed 145 faculty members teaching at more than 80 U.S. institutions. Instructors provided feedback on how they handled the shift to online learning during the 2020-21 academic year when assessing student knowledge and learning. They shared how they prioritized content, adjusted their grading, prepared students for exams, navigated the difficulties of administering assessments online and handled issues of academic integrity. These insights will inform recommendations for best practices in creating more equitable online and remotely administered testing.
Below, Ober and Cheng share more about their research.
It seems as if the pandemic offered an opportunity for impactful applied research. Is that what prompted you to do this research on students and faculty?
The COVID-19 pandemic certainly disrupted conventional research and teaching practices. Events such as a pandemic raise many questions about how people naturally cope with or adapt to a new set of circumstances. As educational and quantitative psychologists, some of our recent collaborative work has started to examine different dimensions of online learning and assessment. Given our work in this space, we were interested in understanding the assessment experiences of college students and the practices of college instructors during a period of disruption that not only upended the conventional process of administering in-person assessments, but also likely affected students’ capacity to concentrate and process information (i.e., their cognitive load). We use the term “assessments” to refer broadly to tests, quizzes and exams, as well as other assessment formats.
You found that female students appeared to assess themselves as having lower computer self-efficacy and greater pandemic-related stress in assessment contexts, and that underrepresented minority students assessed themselves as having greater computer self-efficacy. Please explain the significance and ramifications of this.
It may be helpful to note that self-efficacy is a belief about one’s ability on a task or in a knowledge domain; while it tends to be related to one’s actual performance, many other factors also explain differences in performance.
Our survey results indicated that female students were more skeptical than male students about their ability to use computers effectively, and were also more likely to report greater stress when completing assessments during the pandemic. This is an unfortunate finding, but one in keeping with past research suggesting that female students are more inclined to underestimate their ability to use technology and to experience greater test anxiety. Both factors are associated with lower performance, and some evidence even suggests that low confidence and high anxiety can actually decrease performance.
At first glance, underrepresented minority students rating themselves higher in computer self-efficacy seems a promising finding: students from groups historically underrepresented and underserved in STEM (based on National Science Foundation estimates of participation in the STEM workforce) are confident in their ability to use computers. However, one implication is that such students may be less inclined to ask for help completing a task on a computer when they could in fact benefit from seeking it.
For the faculty you surveyed, can you give examples of notable answers about how they pivoted?
Concerns around academic integrity and cheating clearly emerged in the responses gathered from students and teachers. Interestingly, we found two general perspectives on handling academic integrity in tests administered online. Some instructors described specific methods for detecting cheating (e.g., computing correlations between test-takers’ answers), while others described ways to mitigate cheating (e.g., administering more conceptual and open-ended exam questions). In addition, some instructors noted it was very difficult to balance the aim of assessing student learning against the need to mitigate cheating in a practical and respectful way; some responses even stated that the experience caused them to re-evaluate their purpose and intent in administering class assessments.
Based on this research, will you make recommendations for best practices?
There are several possible implications of these findings. One is that exposing college instructors to a range of online and remote teaching skills could be broadly beneficial. Another concerns assessment design principles: it might help for faculty to recognize the situational nature of learning, including considering how different test formats affect test-takers’ experiences and performance.
Interestingly, many of the same general themes emerged for instructors and students, albeit from slightly different angles. These themes included issues related to academic integrity and cheating, as well as the appropriateness of administering assessments online (with respect to factors such as time allocation, device access, internet connectivity and the ability to get assistance when needed). In this regard, receptivity to the needs of test-takers should help in developing more valid and useful assessments of their learning. It’s also important to acknowledge that our studies have not fully considered how the circumstances affected students with disabilities who may have needed accommodations; this deserves further attention in follow-up research.
Are there other assessment scenarios that this research could be applicable to?
Within the field of education, there is growing interest in the feasibility of administering standardized assessments for licensure, certification or credentialing purposes in online and remote contexts, or of having test-takers use their own devices to complete assessments that are proctored in person. Clearly there are concerns about doing so in a way that remains consistent across test-takers and doesn’t jeopardize the validity or fairness of the assessment and the interpretation of scores.
There is also evidence that medical assessments are increasingly shifting to online, computerized formats. The issue of computer self-efficacy and the technology divide, especially for patients in impoverished areas, is another concern that deserves close investigation.