title: Development and validation of students' digital competence scale (SDiCoS)
authors: Tzafilkou, Katerina; Perifanou, Maria; Economides, A. A.
date: 2022-05-16
journal: Int J Educ Technol High Educ
DOI: 10.1186/s41239-022-00330-0

Towards the transition to blended and remote education, evaluating the levels of students' digital competence and designing educational programs to advance it is of paramount importance. Existing validated digital competence scales usually ignore either important digital skills or new socio-technological innovations. This study proposes and validates a comprehensive digital competence scale for students in higher education. The suggested instrument includes skills in online learning and collaboration, social media, smart and mobile devices, safety, and data protection. The scale was evaluated on a sample of 156 undergraduate and postgraduate students just before and at the beginning of the COVID-19 crisis. The final scale is composed of 28 items and six digital competence components. The evaluation study revealed valid results in terms of model fit criteria, factor loadings, internal validity, and reliability. Individual factors like the students' field of study, computer experience, and age revealed significant associations with the scale components, while gender revealed no significant differences. The suggested scale can be useful for the design of new actions and policies towards remote education and the development of adult learners' digital skills.

Digital competence (DC) traditionally reflects a person's ability to use digital technologies in a critical, collaborative, and creative way; to be considered competent in a domain, a person should also have the relevant knowledge, skills, and attitude (European Commission, 2019a; Marusic & Viskovic, 2018; Suwanroj et al., 2017, 2018). A student's perceived digital competence reflects his/her Information and Communication Technologies (ICT)-based knowledge and skills that can be used to perform ICT-related tasks (Meng et al., 2019). Recent works confirm that students' perceived ICT competence significantly affects their academic achievement (Park & Weng, 2020) and highlight the importance of understanding the global ICT trends in mobile, Internet, and social media use (We Are Social & Hootsuite, 2020). The European Commission (2020) also reports that such skills of social media and mobile use should be included in the Digital Competence and New Skills Agenda. Research shows that there are several 'barriers' to supporting young adults' digital skills development; such barriers include poor access to technology and limited support networks (Eynon & Geniets, 2016). The authors also explain that lack of experience and of digital skills decreases the perceived usefulness of the Internet in young people's lives. Also, according to Cullinan et al. (2021), one in six higher education students is at risk of poor Internet access, posing a significant barrier to attending their courses during the pandemic. The European Commission (EC, 2018a) admits that there is an urgent need to speed up the exchange of good practices in the field of adult digital education.
Attempting to measure and quantify students', teachers', or citizens' digital skills, several studies have developed methodologies to identify the key components of digital competence (e.g., All Aboard!, 2015; European Commission, 2019a). The newest version of the European Digital Competence Framework (DigComp 2.0) describes which skills are required to use digital technologies "in a confident, critical, collaborative and creative way to achieve goals related to work, learning, leisure, inclusion and participation in our digital society" (European Commission, 2019a). Several other frameworks suggest different versions of a digital competence framework (e.g., ESCO, 2019; Fraillon et al., 2019; UNESCO, 2018), while recent studies attempt to extend previous DC scales by including contemporary skills of critical thinking, communication, etc. (Peart et al., 2020). However, these studies mainly concern the general population and are not student-oriented. Most importantly, the recently emerged digital skills regarding mobile/e-learning, mobile/e-commerce, and social media activities are considered in only a limited number of studies (e.g., Perifanou & Economides, 2019b; Lee et al., 2015). Finally, several research studies have focused on measuring students' digital skills across different contexts and regions using existing students' DC frameworks, but only a few (e.g., Alarcón et al., 2020; Kuzminska et al., 2018) attempted to quantitatively evaluate or adjust the applied scales.

Motivated by the research gap described above, this study seeks to quantitatively adjust and evaluate a recent instrument on students' DC, forming a validated students' digital competence scale (SDiCoS) that can be applied to university students in the context of remote education. The suggested validated scale is based on a recently proposed framework and instrument (Perifanou & Economides, 2019a) which aims at measuring individuals' digital skills and knowledge of today's computer and Internet use, as well as social media and mobile activities. Also, since previous studies reported effects of personal factors on students' digital skill components (He & Zhu, 2017; Tømte & Hatlevik, 2011) and online learning (Yu, 2021), this study also seeks to explore potential differences in the DC components between different groups of students. Towards this goal, the main research objectives are formed as follows:

RQ1: To develop and quantitatively validate a scale to measure students' digital competencies considering the context of remote education.
RQ2: To explore significant differences in students' digital skills between different groups of students, including their gender, age, field of study, and experience in computer use.

Overall, the findings can contribute to the design of a comprehensive DC scale that considers recent technological trends and covers both undergraduate and postgraduate students. The scale might also be practically useful for the design and implementation of actions or policies to detect DC gaps and reinforce adult learners' digital competence in remote and blended learning.

Several previous studies examined the structure of digital competence models and instruments by applying statistical methods. Many of those studies (e.g., Oberländer et al., 2020; Tondeur et al., 2017; Touron et al., 2018) performed first- and/or second-order confirmatory factor analyses (CFA).
Other studies performed exploratory factor analyses (EFA) to identify the main components that form a digital competence scale (e.g., an Internet skills scale, technology/ICT literacy, etc.) for either students' (e.g., Lau & Yuen, 2014; van Deursen et al., 2016) or teachers' digital skills (e.g., Siddiq et al., 2016; Touron et al., 2018). Furthermore, much of the research on students' DC concerns the examination of structural relationships between the components (e.g., Aesaert et al., 2015; Hatlevik et al., 2015; Schmid & Petko, 2019), or has been implemented outside the educational context, mainly focusing on the employment sector (e.g., Oberländer et al., 2020). Table 1 selectively presents the scale size, components, and validation methods of previous quantitative studies that designed DC scales (for students, teachers, or other individuals) in the context of higher, secondary, or primary education across different regions. As depicted in Table 1, only a few studies have been validated on populations of undergraduate students and/or in European countries. Second, none of the cited studies has employed a partial least squares structural equation modeling (PLS-SEM) approach to identify or confirm a digital competence measurement scale, although PLS-SEM has been argued to be more reliable for confirmatory factor analysis than covariance-based (CB-SEM) approaches (Asyraf & Afthanorhan, 2013). Meanwhile, several studies (Marusic & Viskovic, 2018; Suwanroj et al., 2017, 2018) examined the structure of digital competence instruments by applying qualitative approaches (e.g., expert views and/or combined/review-based approaches).

Recently, Perifanou and Economides (2019a, 2019b) proposed a digital competence actions framework and an accompanying instrument. The initial instrument of this study (Perifanou & Economides, 2019a) was composed of 56 items and four dimensions, namely (i) Access, Search and Find; (ii) Use, Store, Manage, Evaluate and Delete; (iii) Communicate, Collaborate, and Share; and (iv) Create, Apply, Modify, Combine, Solve and Protect. The items in the four dimensions considered new digital innovations (e.g., social media and smart devices), as well as ethical and responsible behavior. For the needs of this study, some items were adjusted through rephrasing or adding explanatory comments and examples. Five experts in the field of Technology Enhanced Learning (TEL) reviewed the instrument's items regarding the wording and the quality of the items, to minimize misperceptions. Then, after an initial PLS-SEM evaluation of the responses to the adjusted questionnaire, several items were removed due to low internal consistency scores, finally forming a six-component instrument of 28 items. Thus, the initial four dimensions were restructured into six components, and the initial 56 items were reduced to 28. The proposed components are (1) Search, Find, Access (SFA); (2) Develop, Apply, Modify (DAM); (3) Communicate, Collaborate, Share (CCS); (4) Store, Manage, Delete (SMD); (5) Evaluate (EV); and (6) Protect (PR). The final instrument is presented in the Appendix. All the items are measured on a 5-point Likert scale (1: strongly disagree to 5: strongly agree). The terms used in the questionnaire were explained to the participants as follows: "Smart device = smartphone, tablet, laptop, pc, camera, navigator, game console, smart TV, etc.; Object = document, picture, movie, software, app, etc.".
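Since all items share the same 1-5 coding, a per-component score is simply the mean of that component's item responses. The following Python sketch illustrates this; the item column names and the 5/5/5/4/4/5 split of the 28 items across the six components are hypothetical placeholders, as the actual grouping is given in the Appendix:

```python
import pandas as pd

# Hypothetical column layout: one column per questionnaire item,
# coded 1-5 (1 = strongly disagree ... 5 = strongly agree).
COMPONENTS = {
    "SFA": ["sfa1", "sfa2", "sfa3", "sfa4", "sfa5"],  # Search, Find, Access
    "DAM": ["dam1", "dam2", "dam3", "dam4", "dam5"],  # Develop, Apply, Modify
    "CCS": ["ccs1", "ccs2", "ccs3", "ccs4", "ccs5"],  # Communicate, Collaborate, Share
    "SMD": ["smd1", "smd2", "smd3", "smd4"],          # Store, Manage, Delete
    "EV":  ["ev1", "ev2", "ev3", "ev4"],              # Evaluate
    "PR":  ["pr1", "pr2", "pr3", "pr4", "pr5"],       # Protect
}

def component_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one mean score per respondent for each DC component."""
    return pd.DataFrame({
        name: responses[items].mean(axis=1)
        for name, items in COMPONENTS.items()
    })

# Usage (hypothetical file name): one row per student, one column per item.
scores = component_scores(pd.read_csv("sdicos_responses.csv"))
```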
During the period between January and February 2020, the DC questionnaire was distributed in written form to students in two different undergraduate university courses (e-Commerce and e-Business; Information Systems in Management), and in April 2020 it was sent out online to three postgraduate programmes (including Information Systems and Digital Marketing). The questionnaire items were measured on a five-point Likert scale from "Strongly disagree" to "Strongly agree". The questionnaire also asked for some social and academic information (gender, age, experience in mobile and computer use, average grade in the last semester, etc.). In total, 300 students were invited to participate in the survey voluntarily and anonymously. All participants were asked to consent to their voluntary and anonymous participation in the study. It was not possible to identify any respondent, and all ethics standards were met according to the university's internal committee. Several students did not complete the questionnaire, and after eliminating the invalid answers the final working sample was 156 students: 80 undergraduates and 76 postgraduates. The respondents' sociodemographic characteristics are presented in Table 2.

Structural Equation Modelling (SEM) is considered one of the most important statistical developments in the social sciences (Hair et al., 2011). SEM comprehensively and efficiently models the relationships among multiple independent and dependent constructs (the structural model) simultaneously (Gefen et al., 2000; Hair et al., 2010). Moreover, SEM not only assesses the structural model but also evaluates the measurement model (Gefen et al., 2000, 2011). Researchers applying SEM can choose between covariance-based analysis (CB-SEM) and partial least squares (PLS-SEM) (Gefen et al., 2000; Hair et al., 2011). Recently, researchers introduced methods that provide consistent PLS-SEM estimations that can be used complementarily with, or as alternatives to, CB-SEM (Bentler & Huang, 2014; Dijkstra, 2014; Dijkstra & Henseler, 2015). Contrary to previous studies in the literature that used mainly CB-SEM approaches, this study applied a hybrid PLS-SEM and CB-SEM approach to evaluate the suggested scale in terms of internal consistency, composite reliability, convergent validity, and discriminant validity. PLS-SEM was applied for the following reasons:

• According to the suggestions of Bentler and Huang (2014), Dijkstra (2014), and Dijkstra and Henseler (2015), who proved that PLS-SEM can consistently mimic common CB-SEM approaches, PLS-SEM is an appropriate approach to study and validate the structure of a model. In this study, the primary scale validation is based on PLS-SEM CFA, mainly because of the non-normality observed in the data (Shapiro & Wilk, 1965), the small sample size (Hair et al., 2014), and the adequacy of the method compared to CB-based approaches, as suggested in Asyraf and Afthanorhan (2013) and Rigdon (2012).
• Furthermore, as recommended by Hair et al. (2011, p. 144), a PLS-SEM approach should be implemented if "the goal is predicting key target constructs or identifying key 'driver' constructs" or if "research is exploratory or an extension of an existing structural theory". Conversely, a CB-SEM approach should be chosen if "the goal is theory testing, theory confirmation or comparison of alternative theories". A minimal sketch of what a CFA measurement-model specification looks like in open-source tooling is shown below.
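The analyses reported below were run in SmartPLS and AMOS, which are GUI tools. Purely as an illustrative sketch (not the authors' actual tooling), a CB-style CFA of the six-factor measurement model could be specified in Python with the semopy package, using the same hypothetical item names as above:

```python
import pandas as pd
import semopy  # pip install semopy

# Lavaan-style syntax: each latent DC component is "measured by" (=~)
# its questionnaire items. Item names are hypothetical placeholders
# for the 28 SDiCoS items listed in the paper's Appendix.
MODEL_DESC = """
SFA =~ sfa1 + sfa2 + sfa3 + sfa4 + sfa5
DAM =~ dam1 + dam2 + dam3 + dam4 + dam5
CCS =~ ccs1 + ccs2 + ccs3 + ccs4 + ccs5
SMD =~ smd1 + smd2 + smd3 + smd4
EV  =~ ev1 + ev2 + ev3 + ev4
PR  =~ pr1 + pr2 + pr3 + pr4 + pr5
"""

responses = pd.read_csv("sdicos_responses.csv")  # hypothetical file, 1-5 coded items

model = semopy.Model(MODEL_DESC)
model.fit(responses)                      # maximum-likelihood-type estimation

print(semopy.calc_stats(model).T)         # chi2, RMSEA, CFI, TLI, NFI, ...
print(model.inspect(std_est=True))        # standardized factor loadings
```

The fit indices returned by calc_stats belong to the same family (chi-square, RMSEA, CFI, TLI, NFI) as those reported in the results that follow.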
Although many researchers focus on comparing the model estimates obtained with CB-SEM and PLS-SEM, the two methods are complementary rather than competitive. Based on the above, our methodological approach consisted of the following steps:

i. A PLS-SEM CFA was applied to primarily test the model structure validation, using the SmartPLS software;
ii. A CB-based CFA replication was applied, using the AMOS software, to further examine the factor loadings and model fit values;
iii. A second-order CFA was conducted, using the AMOS software, to further validate the results and examine whether a broad latent factor of students' DC is composed of the six distinct DC factors.

Finally, to examine any significant differences among students across the DC components, we applied non-parametric statistical tests. We conducted a Mann-Whitney test to examine gender differences, and Kruskal-Wallis tests to examine differences based on the students' field of study and experience in computer use.

The results of the PLS-SEM analysis suggest an acceptable model fit, with NFI = 0.667 and χ² = 843.442, according to the adopted acceptance criteria (Byrne, 2010; Hair et al., 2010; Kline, 2011). The Root Mean Square Error of Approximation (RMSEA = 0.088) was higher than 0.08 but below 0.10; values in the 0.05-0.10 range are usually considered acceptable (Bandalos, 2018; Browne & Cudeck, 1992). Also, the factor loadings were all acceptable (> 0.5) (Awang et al., 2010), and all Cronbach's alpha values demonstrated internal consistency (Dijkstra & Henseler, 2015). The bootstrapping results indicated that all t-values (> 1.96) and p-values (< 0.01) were statistically significant. Composite reliability (CR) values indicate internal consistency (Gefen et al., 2000), and average variance extracted (AVE) values indicate convergent validity (Bagozzi & Yi, 1988; Chin, 2010; Fornell & Larcker, 1981), as depicted in Table 3. As depicted in Table 4, the suggested students' DC measurement model supports the discriminant validity between the constructs (Fornell & Larcker, 1981).

A CB-SEM approach was also applied, using the AMOS software and maximum likelihood estimation, to reinforce and compare the findings. The CB-SEM analysis validated the factor loadings of all items, although it indicated lower values. The approach revealed good results in terms of model fit: χ²/df = 2.02, probability level = 0.000, RMSEA = 0.080. However, the comparative fit index (CFI = 0.84) and the Tucker-Lewis index (TLI = 0.80) revealed values slightly below the suggested thresholds, or marginally acceptable (Bandalos, 2018; Browne & Cudeck, 1992; Carmines & McIver, 1981; Hoyle, 1995; Muthén & Muthén, 2012). Table 5 illustrates the unstandardized and standardized parameter estimates; as depicted, the critical ratio (C.R.) of the constructs is greater than 1.96 and all estimates are statistically significant at the 0.001 level (Hair et al., 2010).

A second-order CFA was finally conducted via the AMOS software. The results indicated a good fit of the SDiCoS model (Bandalos, 2018; Muthén & Muthén, 2012): RMSEA = 0.080, χ²/df = 2.04, and a significant p-value (p = 0.000). However, the incremental fit indices (TLI = 0.84) show values below 0.9, and the Hoelter values are below 200, indicating that the sample size is not ideal, mainly for a CB-based approach.
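To make the reliability criteria behind Tables 3 and 4 concrete, CR and AVE can be computed directly from standardized loadings via the standard Fornell-Larcker formulas. A minimal sketch, with made-up loading values for illustration:

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)), where l are the
    standardized loadings and 1 - l^2 the indicator error variances."""
    s = loadings.sum()
    return s ** 2 / (s ** 2 + (1.0 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loadings ** 2).mean()

# Illustrative (made-up) standardized loadings for one construct:
lam = np.array([0.72, 0.68, 0.81, 0.75])
print(f"CR  = {composite_reliability(lam):.3f}")       # conventional cutoff: > 0.70
print(f"AVE = {average_variance_extracted(lam):.3f}")  # conventional cutoff: > 0.50

# Fornell-Larcker discriminant validity check (as in Table 4): the square
# root of each construct's AVE should exceed that construct's correlations
# with every other construct.
```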
Although the CFI (0.83) remains above 0.8, it falls below the conventional 0.9 threshold; the model is nevertheless accepted, since the RMSEA meets the criterion that "a value less than 0.10 or of 0.08 (in a more conservative version)" indicates a good model fit (Hu & Bentler, 1999). Overall, we can conclude that both the first- and second-order CB-SEM models are broadly as valid as the PLS-SEM model, which indicated strong validity and reliability scores.

This study also examined potential differences between groups of students according to (i) gender, (ii) age, (iii) field of study (programme), and (iv) experience in computer use, comparing mean scores across the six DC constructs, as defined in RQ2.

(Table 5 caption: Results of the CB-SEM CFA of the 28-item SDiCoS students' digital competence scale. Notes: *** denotes a significant p-value in the AMOS output; the critical ratio (C.R.) of the constructs is greater than 1.96 and the standardized estimates are significant (Hair et al., 2010).)

Interestingly, gender showed no significant differences. This finding agrees with recent reports regarding the digital skills of young adult females and males across Europe (European Commission, 2019b), although there is contradictory evidence as well (e.g., He & Zhu, 2017). Moreover, since previous studies (e.g., Burnett et al., 2010; Terzis & Economides, 2012; Tzafilkou et al., 2016) revealed significant gender differences in perception and acceptance of computer-related tasks, the present results are encouraging for the worldwide endeavor to eliminate the persistent gender gap in computing (European Commission, 2018b). However, similar studies of secondary education students (Hinostroza et al., 2015) revealed no gender differences in computer-related learning skills.

As presented in Table 6, age revealed significant associations (p < 0.05) with the factors of SMD and PR. Students between 25 and 35 showed the highest levels in both constructs, while the youngest group (18-24) expressed the lowest scores. This result implies that undergraduate students face difficulties with, or lack skills in, protection and file management tasks. It deserves serious consideration since, according to Eurostat (2020), younger Europeans (20-24) tend to use the Internet, text, and multimedia much more frequently than older groups (25-64); however, they might lack some essential 'out-of-Internet' or 'out-of-social-media' skills like file management and file/data protection.

Although computer experience was significantly related to only one component (SMD), the field of study showed several significant associations, in the components of DAM, CCS, SMD, and PR. The postgraduate students in Digital Marketing expressed the highest scores across all the DC components, while the undergraduate programme of e-Commerce and e-Business showed the lowest values. However, most of the postgraduate students participated in the survey during the COVID-19 crisis; hence, future research should examine whether this situation affected their responses and caused the differences in the group comparisons.

Overall, the comparative results of this study can be generalized, since the participants reflect a representative sample of higher education students in Greece in terms of gender and age. Moreover, students from different programmes (undergraduate and postgraduate) are considered in the study. However, different programmes (e.g., in different fields) or different regions might involve significant differences in the students' characteristics. Hence, more research should be conducted on different student populations to reinforce and validate the findings.
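For readers wishing to replicate the group comparisons above, the Mann-Whitney and Kruskal-Wallis tests are available in SciPy. A sketch, assuming a hypothetical data frame holding the six component scores alongside gender and programme columns:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("sdicos_scores.csv")  # hypothetical file: component scores + demographics
COMPONENTS = ["SFA", "DAM", "CCS", "SMD", "EV", "PR"]

for comp in COMPONENTS:
    # Two groups (gender): Mann-Whitney U test.
    male = df.loc[df["gender"] == "male", comp]
    female = df.loc[df["gender"] == "female", comp]
    u, p_u = stats.mannwhitneyu(male, female, alternative="two-sided")

    # Three or more groups (study programme): Kruskal-Wallis H test.
    groups = [g[comp].dropna() for _, g in df.groupby("programme")]
    h, p_h = stats.kruskal(*groups)

    print(f"{comp}: Mann-Whitney p = {p_u:.3f}, Kruskal-Wallis p = {p_h:.3f}")
```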
The main objective of this study (RQ1) was to develop and validate SDiCoS, a new students' digital competence scale encompassing several digital skills essential to the pre-, during-, and post-pandemic context of emergency remote education (ERE). The suggested model was based on a comprehensive instrument and framework designed by Perifanou and Economides (2019a, 2019b), which was informed by and extended previous DC frameworks (DIGCOMP, UNESCO, ESDF, ESCO, ICILS, etc.). The resulting six-factor, 28-item scale has been validated using a hybrid CFA approach combining PLS-SEM with CB-SEM, using the SmartPLS and AMOS software. Results indicate that the PLS-SEM CFA produced valid values of construct validity and reliability and met the model fit criteria, while the CB-SEM approach revealed a similar model fit (RMSEA = 0.08) but lower factor loadings. These findings are in accordance with Asyraf and Afthanorhan (2013), who explained this issue by arguing that PLS-SEM is more appropriate for CFA on non-normally distributed data.

Compared to previous quantitative studies (e.g., Alarcón et al., 2020; Kong et al., 2019; Peart et al., 2020; Suwanroj et al., 2019; Touron et al., 2018), the present study is the only one presenting a hybrid approach in which both PLS-SEM and CB-SEM are implemented for CFA-based scale validation. Furthermore, contrary to previous studies that suggested too short (e.g., Lee et al., 2015) or quite long (e.g., Peart et al., 2020; Touron et al., 2018) instruments, SDiCoS proposes a comprehensive model of six components and 28 items, providing a practical and easy-to-use instrument for future research on students' DC. SDiCoS includes all the essential components derived from previous popular frameworks, adjusted to present technological trends.

SDiCoS is a validated scale of students' digital competence that considers all six important skill components: (1) Search, Find, Access (SFA); (2) Develop, Apply, Modify (DAM); (3) Communicate, Collaborate, Share (CCS); (4) Store, Manage, Delete (SMD); (5) Evaluate (EV); and (6) Protect (PR). Previous scales of students' digital competence either ignore important components such as 'Protect' (e.g., Elstad & Christophersen, 2017; Koc & Barut, 2016; Lau & Yuen, 2014; Lee et al., 2015; Siddiq et al., 2016; Suwanroj et al., 2019) or take a completely different approach by considering components such as "parental ICT attitude" (Aesaert et al., 2015), "language integration" (Hatlevik et al., 2015), or "Internet political activism" (Choi et al., 2017). Furthermore, only a few scales have been validated for undergraduate university students (Elstad & Christophersen, 2017; Koc & Barut, 2016; Kuzminska et al., 2018; Lee et al., 2015; Suwanroj et al., 2019) or postgraduate university students (Choi et al., 2017). Finally, the current quantitative study is the only one (among the scale development and validation studies) carried out in a South European country (Greece) focusing on higher education students' digital skills. Thus, the current study seeks to contribute to awareness of students' DC across different regions and towards the design of homogeneous students' DC scales worldwide. The SDiCoS scale is useful to reveal skill polarities and gaps in the students' DC among the examined components.
For example, as described in the results, younger students expressed lower perceived skills in protection and file management tasks, although they are more actively engaged in Internet and social media activities compared to older age groups of students. SDiCoS would be useful to the following stakeholders:

a. Policymakers and decision-makers at national and international levels who are responsible for taking strategic decisions on education, digital technologies, employment, the economy, etc.;
b. Directors of formal and continuing education institutes who work on setting goals, measuring, and providing training and certification regarding their students' digital competence;
c. Educators at educational institutes who design curricula and syllabi for formal and informal training;
d. Teachers, in service and in training, who would improve their digital competence and integrate digital technologies into their teaching practice;
e. Teachers who would become aware of their students' digital competence needs and take appropriate actions;
f. Researchers on the use of digital technologies and on individuals' digital competence and digital skills;
g. Instructional designers and educational institutions that plan to shape their teaching and learning strategies in the context of blended and online learning.

For example, SDiCoS could help policymakers who aim to identify students' digital competence level to:

• Design and organize educational adjustments and reforms, such as the emergent shift to remote education during the COVID-19 crisis, or the adjustments needed for a smooth transition to, and/or maintenance of, blended and online learning. To successfully design this shift, policymakers should know the level of students' (and teachers') digital competence, among other issues;
• Design and financially support massive and specialized training on digital technologies to fight discrimination, the digital divide, and the exclusion of citizens with low digital competence, and to boost innovation, employability, and participation in the digital market and digital society (e.g., e-commerce, e-banking, e-government). Although 82% of European individuals aged 16 to 24 have basic or above-basic overall digital skills, only 60% of European individuals aged 25 to 64 have such skills (Eurostat, 2020).

The suggested SDiCoS can be used to design short-term sessions or extra ICT training when needed, to assist young and older students in acquiring all the basic digital skills that they potentially lack. Furthermore, the validated SDiCoS can serve internally as a practical and useful tool to evaluate students' perceived digital competence in higher and continuing education institutions, including their knowledge and skills of recent technological trends like social media and mobile use.

One main limitation of this study is the small sample size for the CFA. Although the sample size is sufficient for the PLS-SEM approach, further research on larger populations is encouraged. Also, the COVID-19 crisis emerged during data collection. This situation might have affected the responses of the students who responded remotely due to the university closure, and further research should be conducted to explore the role of COVID-19 in the students' perceived DC items. Furthermore, this study examined DC differences only with respect to gender, age, field of study, and computer experience.
Future researchers should investigate other factors that may affect DC, or examine ERE-specific components such as skills in remote synchronous collaboration and text-based online learning. Finally, it would be interesting to conduct future research at a later stage of the pandemic, to examine whether and how the students' DC has improved.

This study develops and validates the SDiCoS scale to measure students' digital competence. The proposed scale takes into consideration recent technological trends and previous studies on DC frameworks, and provides the conceptual basis for understanding the main DC components in the context of remote education. The generated six-factor scale is composed of the following DC components: (1) Search, Find, Access; (2) Develop, Apply, Modify; (3) Communicate, Collaborate, Share; (4) Store, Manage, Delete; (5) Evaluate; and (6) Protect.

Regarding RQ1, the validity of SDiCoS was tested through both PLS-SEM and CB-SEM methods. The PLS-SEM based CFA demonstrated the validity of SDiCoS, resulting in strong consistency and reliability and acceptable model fit criteria. A CB-SEM replication of the CFA and a second-order CFA were also conducted to complement, compare, and reinforce the findings. Regarding RQ2, the statistical analysis indicated significant differences across the SDiCoS constructs between different groups of students, according to their age, field of study, and computer experience. The SDiCoS model is usable for both undergraduate and postgraduate students in higher education and can be used to measure students' digital competence across the main DC components, covering recently emerged technological trends like remote/online education, social media, smart devices, and mobile and safety skills.

References
Profiling the digital readiness of higher education students for transformative online learning in the post-soviet nations of Georgia and Ukraine
Emergency remote teaching in higher education: Mapping the first global online semester
Alternative ways of assessing model fit
Structural equation modeling with AMOS
Gender differences and programming environments: Across programming populations
Analyzing models with unobserved variables: Analysis of covariance structures
How to write up and report PLS analyses
What it means to be a citizen in the internet age: Development of a reliable and valid digital citizenship scale
The disconnected: COVID-19 and disparities in access to quality broadband for higher education students
PLS' Janus face - Response to Professor Rigdon's 'Rethinking partial least squares modeling: In praise of simple methods'
Consistent and asymptotically normal PLS estimators for linear structural equations
Perceptions of digital competency among student teachers: Contributing to the development of student teachers' instructional self-efficacy in technology-rich classrooms
Digital competencies, European skills, competences, qualifications and occupations
Digital Education Action Plan
Increase in gender gap in the digital sector - Study on women in the digital age
The Digital Competence Framework 2.0
Women in Digital
European skills agenda for sustainable competitiveness, social fairness and resilience
Individuals' level of digital skills
The digital skills paradox: How do digitally excluded youth develop skills to use the internet?
Evaluating structural equation models with unobservable variables and measurement error
IEA international computer and information literacy study 2018 assessment framework
An update and extension to SEM guidelines for administrative and social science research
Structural equation modeling and regression: Guidelines for research practice
Multivariate data analysis: A global perspective
Partial least squares structural equation modeling (PLS-SEM): An emerging tool in business research
An assessment of the use of partial least squares structural equation modeling in marketing research
Digital diversity among upper secondary students: A multilevel analysis of the relationship between cultural capital, self-efficacy, strategic use of information and digital competence
Factors (not) affecting what students do with computers and internet at home. Learning, Media and Technology
The structural equation modeling approach: Basic concepts and fundamental issues
Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives
Development of youth digital citizenship scale and implication for educational setting
Development and validation of New Media Literacy Scale (NMLS) for university students. Computers in Human Behavior
Development and validation of an instrument for measuring digital empowerment of primary school students
Digital competency of the students and teachers in Ukraine: Measurement, analysis, development prospects
Developing and validating of a perceived ICT literacy scale for junior secondary school students: Pedagogical and educational contributions
Understanding new media literacy: The development of a measuring instrument
ICT competencies of students
Measurement invariance of the ICT engagement construct and its association with students' performance in China and Germany: Evidence from PISA 2015 data
Delphi study for the design and validation of a questionnaire about digital competences in higher education
Mplus user's guide
Digital competencies: A review of the literature and applications in the workplace
OECD Skills Outlook
The relationship between ICT-related factors and student academic achievement and the moderating effect of country economic indexes across 39 countries: Using multilevel structural equation modelling
Development of the digital and socio-civic skills (DIGISOC) questionnaire. Educational Technology Research and Development
An instrument for the digital competence actions framework
The digital competence actions framework
Rethinking partial least squares path modeling: In praise of simple methods
Does the use of educational technology in personalized learning environments correlate with self-reported digital skills and beliefs of secondary-school students? Computers and Education
An analysis of variance test for normality (complete samples)
Teachers' emphasis on developing students' digital information and communication skills (TEDDICS): A new construct in 21st century education. Computers and Education
Investigating digital competencies for undergraduate students at Nakhon Si Thammarat Rajabhat University
Confirmatory factor analysis of the essential digital competencies for undergraduate students in Thai higher education institutions
Development of digital competency domains for undergraduate students in Thailand
Computer based assessment: Gender differences in perceptions and acceptance
Gender-differences in self-efficacy ICT related to various ICT-user profiles in Finland and Norway: How do self-efficacy, gender and ICT-user profiles relate to findings from PISA
Developing a validated instrument to measure preservice teachers' ICT competencies: Meeting the demands of the 21st century
Construct validation of a questionnaire to measure teachers' digital competence (TDC)
Gender-based behavioral analysis for end-user development and the 'RULES' attributes. Education and Information Technologies
National standards for essential digital skills
A global framework of reference on digital literacy for
Development and validation of the Internet Skills Scale (ISS)
Global Digital Insights. Retrieved May 30
The effects of gender, educational level, and personality on online learning outcomes during the COVID-19 pandemic

Acknowledgements: Not applicable.

Appendix: See Table 7.

Authors' contributions: MP and AE designed the proposed instrument. KT statistically validated and adjusted the suggested instrument. All authors reviewed the related literature. All authors read and approved the final manuscript.

Funding: No funding was received.

Availability of data and materials: The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Competing interests