Abstract
Background: Since the emergence of e-government in developing countries, several different measurement metrics in the form of models and frameworks have been utilised to evaluate e-government projects. Whilst e-government assessment topologies have developed over time, to the best of the researchers' knowledge no measurement metrics exist to assess e-government service gaps. Consequently, failure to assess e-government service gaps makes it difficult to take well-founded improvement actions, as these gaps are not obvious to the designers and developers of e-government systems.
Objective: The objective of this study was to explore dimensions or constructs that could contribute to the development of a multidimensional model for assessing e-government service gaps.
Methodology: An integrative literature review was conducted in Ebscohost, Wiley Online Library, Springer Link, Science Direct, Taylor and Francis journals, Sage Research Methods, JSTOR, Google Scholar, Emerald and the Electronic Journal of Information Systems in Developing Countries (EJISDC) using relevant search strings. The extracted articles were subjected to construct analysis, in which constant-comparative analysis, thematic analysis and an evaluation function were used to cluster the dimensions extracted from evaluation metrics according to their themes or constructs.
Results: Themes and constructs extracted from existing evaluation metrics resulted in the development of a multidimensional model that could be used for assessing e-government service gaps. Accordingly, the model consists of the following constructs: system functionality; service delivery; and service gaps.
Conclusion: The findings imply that the model can be used as a prescriptive tool during the design phase (pre-implementation phase) or in scaling up e-government projects and as an evaluation tool in the post-implementation phase.
Keywords: multi-dimensional model; assessing e-government; e-government service; service gaps; E-GSGAM.
Introduction
The term e-government is generally understood to mean the use of information technologies such as wide area networks, the Internet and mobile computing by government agencies to interact with citizens, businesses and other arms of government (Ngonzi & Sewchurran 2019). E-government is one of the foundations of the drive to transform public service delivery. By implementing e-government, the majority of public services are expected to be provided electronically. The areas that have shown the most significant progress in this transformation drive include, but are not limited to, e-procurement; e-invoicing; e-payment; e-licensing; e-archiving; e-tendering; e-taxation; e-voting; e-democracy; e-submission; e-rental; e-compliance; e-assessment; e-participation; e-visa; e-health; e-learning; e-court; online passport, birth registration and permit applications; and online company registration (Baheer, Lamas & Sousa 2020; Mukamurenzi, Grönlund & Islam 2019). Indeed, e-government is playing a critical role in transforming public services.
Accordingly, the transformation drive in public service is facilitated by the following e-government delivery models: Government-to-Government (G2G); Government-to-Employees (G2E); Government-to-Business (G2B); and Government-to-Citizens (G2C) (Ahmad et al. 2019; Ramdan, Azizan & Saadan 2014; Voutinioti 2014). G2G represents the backbone platform for e-government adoption, implementation and utilisation across the entire country (Voutinioti 2014); G2E represents the internal relationship between the government and its employees (Ramdan et al. 2014); the G2B service delivery model denotes an online platform that enables government and business organisations to do business electronically (Ahmad et al. 2019); and G2C enables citizens to interact and transact with government wherever they are (Ramdan et al. 2014).
Various studies have observed that these delivery models are widely used to demarcate e-government and form the basic models for assessing, evaluating and delivering e-government services (Alsaif 2014; Bayona & Morales 2017; Lessa 2019; Ramdan et al. 2014). However, Al-Balushi et al. (2016) argued that as e-government service delivery models mature, their services may progressively overlap. Nevertheless, whether the services overlap or not, the models are susceptible to service gaps if they are not correctly implemented. A service gap is the extent to which e-government services are not fulfilled to the satisfaction of the intended beneficiaries (users) of the e-government system (Herdiyanti et al. 2018). Hence, service gaps must be evaluated across e-government delivery models.
Keeping this in mind, the study proposes a model that focuses on multiple e-government delivery models (G2G, G2B and G2C), thus shifting from previous studies, which have traditionally evaluated e-government in isolation by focusing their assessment effort on a single delivery model. Most e-government assessment metrics have centred on G2C, although the majority of e-government systems are designed with multiple delivery models (Ahmad et al. 2019; Brown et al. 2017). Hence, assessing e-government service gaps across multiple delivery models is critical to determining service deficiencies in an e-government system in its entirety.
Background to the study
Since the emergence of e-government in developing countries, several different measurement metrics in the form of models and frameworks have been utilised to evaluate e-government projects. These include, but are not limited to, E-Government Development Index (EGDI) (Dias 2020); modified service quality (SERVQUAL) measurement instrument (Ahmad et al. 2019); DeLone and McLean model (DeLone & McLean 2003); Technology Acceptance Model (TAM) (Sebetci 2015); Diffusion of Innovation (DOI) theory (Shuib, Yadegaridehkordi & Ainin 2019); Technology-Organisation-Environment (TOE) framework (Zabadi 2016); Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, Thong & Xu 2016); and Layne and Lee maturity model (Layne & Lee 2001).
Whilst these measurement metrics provide a theoretical underpinning for evaluating e-government projects, they have nevertheless not escaped criticism from e-government scholars. For instance, Kunstelj and Vintar (2004) argued that EGDI is likely to distort e-government measurement because most countries tend to launch e-government through the 'quick fix, quick wins' principle to attain high rankings. Besides, EGDI has a limited number of constructs and does not highlight the multidimensional nature of electronically provided services such as e-government. Moreover, SERVQUAL, one of the most referenced models for evaluating service gaps, is failing to keep pace with continuous developments in information systems such as e-government because the model was designed before the emergence of the e-government concept (Ahmad et al. 2019). In addition, the SERVQUAL measurement does not sufficiently capture attributes of e-services, such as interactivity and intangibility, which are driven by the tremendous advancement of technology.
On the other hand, the DeLone and McLean model has been criticised as incomplete for not emphasising the service quality implications of e-government projects (Ramdan et al. 2014), whereas it has been argued that TAM focuses only on measuring the intention to accept technology in settings where its use is voluntary, thereby ignoring mandatory technologies such as e-government, where citizens have limited choice about whether to accept the technology or not (Ahmad et al. 2019).
Furthermore, maturity models, which evaluate e-government against consistent stages of development, such as online presence, interaction, transaction, fully integrated and transformed e-government and digital democracy, treat e-government in a linear and incremental fashion (Perkov & Panjkota 2017). In practice, however, these stages are likely to develop concurrently depending on the established priorities of a country in implementing e-government projects, the evolving needs and values of citizens, and where the benefits of e-government are situated. Perkov and Panjkota (2017:103) argued that 'the conceptualisation of e-government maturity no longer holds for evaluating e-government as its goals and targets are constantly evolving in response to evolving values and the needs of citizens'. Thus, maturity models are susceptible to linearity as they do not take into consideration the dynamic nature of the deployment of e-government projects.
From the foregoing, it can be concluded that whilst e-government assessment topologies have developed over time, to the best of the researchers' knowledge no measurement metrics exist to assess e-government service gaps. Consequently, failure to assess e-government service gaps 'makes it difficult to take well-founded improvement actions' (Mukamurenzi et al. 2019:2), as these gaps are not obvious to the designers and developers of e-government systems. Thus, this research is but one of many contributions towards actually closing e-government service gaps in developing countries, exploring dimensions that could contribute to the development of a multidimensional model for assessing e-government service gaps.
Motivation of the study
This study was motivated by the following remarks from Sigwejo and Pather (2016):
The criticisms of [existing] measures are that they are 'first generation metrics' designed for developed countries, as opposed to developing countries; hence, the need to re-evaluate and customise the [measurement elements], establishing which ones are important and suitable for a typical African e-government service, which has a high failure rate compared to developed countries. (p. 2)
Major research question
Which dimensions and measurement elements used in existing e-government evaluation metrics can be synthesised to form a multidimensional model for assessing e-government service gaps in the context of a developing country?
Research methodology
In this study, the fundamental methodology was an integrative literature review supported by the constant-comparative method, thematic analysis and an evaluation function. The integrative review gave direction for constructing the conceptual model based on findings from prior studies and existing e-government assessment topologies. According to Torraco (2016:404), 'an integrative review of literature is a distinctive form of research that uses existing literature to create new frameworks, models, perspectives and knowledge from emerging or mature topics'. Here, the integrative review was used to address an emerging topic because the quest for e-government in developing countries is still an ongoing process. The procedure for conducting the integrative review in this study involved two steps: identification and retrieval of relevant studies, followed by construct analysis.
Identification and retrieval of relevant studies
During data collection, articles were searched through electronic databases, including Ebscohost, Wiley Online Library, Springer Link, Science Direct, Taylor and Francis journals, Sage Research Methods, JSTOR, Google Scholar, Emerald and the Electronic Journal of Information Systems in Developing Countries (EJISDC), which is one of the leading Information and Communications Technology for Development (ICT4D) journals. Keywords used to collect data included: 'e-government evaluation', 'e-government assessment', 'e-government evaluation model' and 'framework for assessing e-government'. Boolean logic operators (AND, OR) were used to widen the search (Ecker & Skelly 2010), whilst filters and phrase searches were utilised to refine the search to the specific topic (McGowan 2009). Abstracts, introductions and backgrounds, methods and discussions were carefully examined to justify the inclusion of the articles.
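To illustrate how the keywords and Boolean operators described above combine, the following is a minimal sketch only; the exact query syntax differs across databases, and the combined string shown here (including the 'developing countries' filter) is an illustrative assumption rather than the literal query submitted to each platform.

```python
# Minimal sketch of the search-string construction (not the literal database queries).

keywords = [
    "e-government evaluation",
    "e-government assessment",
    "e-government evaluation model",
    "framework for assessing e-government",
]

# OR widens the search across synonymous phrasings; quoting keeps each phrase intact.
broad_query = " OR ".join(f'"{k}"' for k in keywords)

# AND narrows the result set; the context filter below is an illustrative assumption.
focused_query = f'({broad_query}) AND "developing countries"'

print(focused_query)
```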
Furthermore, the snowball sampling technique was used to identify relevant articles (Wohlin 2014). The researchers used the citizen-centric framework for assessing e-government effectiveness by Sigwejo and Pather (2016) as the start set, as proposed by Wohlin (2014). Accordingly, Sigwejo and Pather (2016) argued that:
The [existing] models and frameworks were designed based on evaluation dimensions derived from developed countries, which may differ from those of developing countries; therefore, rather than just adopting these existing measures, it seems far more logical to re-evaluate and customise the [measurement elements], establishing which ones are important and suitable for a typical African e-government service. (p. 2)
Likewise, snowball sampling enabled the researchers to identify quality studies on e-government evaluation by following references of references in previous works. Furthermore, by using snowball sampling, the researchers aimed to collect as many articles on e-government evaluation as possible. The process of data collection iterated until the researchers could not find frameworks and models with new dimensions and measurable elements. Hence, the search process was terminated on the basis of theoretical saturation (Ma & Kinchin 2010), the point at which further inquiry no longer offers new data about the study.
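The iterative snowballing procedure can be summarised as a loop that stops when no new dimensions or measurable elements emerge. The sketch below is a schematic rendering of that stopping rule under stated assumptions, not the actual tooling used; extract_dimensions and get_references are hypothetical placeholders for the manual reading and reference-following performed by the researchers.

```python
# Schematic sketch of backward snowballing with a theoretical-saturation stopping rule.
# extract_dimensions(paper) and get_references(paper) are hypothetical placeholders.

def snowball(start_set, extract_dimensions, get_references):
    seen_papers = set(start_set)
    frontier = list(start_set)
    dimensions = set()

    while frontier:
        new_dimensions = set()
        next_frontier = []
        for paper in frontier:
            # Collect any dimensions not already recorded from this paper.
            new_dimensions |= extract_dimensions(paper) - dimensions
            # Follow references of references (backward snowballing).
            for ref in get_references(paper):
                if ref not in seen_papers:
                    seen_papers.add(ref)
                    next_frontier.append(ref)
        if not new_dimensions:  # theoretical saturation: further inquiry adds no new data
            break
        dimensions |= new_dimensions
        frontier = next_frontier

    return dimensions, seen_papers
```

In practice these steps were carried out manually, paper by paper; the loop merely formalises the stopping rule.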
Construct analysis
According to Roy et al. (2012:35), ‘constructs represents different variables which are useful to understanding the phenomenon’. They are conceptualised as unidimensional or multidimensional depending on the degree of their abstraction (Kim 2017; Palotti, Zuccon & Hanbury 2018). Conceptually, a construct is construed as unidimensional when it can be measured using a single indicator, item or element (Kim 2017). On the other hand, a multidimensional construct pertains to a number of different but related dimensions regarded as a single theoretical concept (Palotti et al. 2018). Construct analysis was conducted to ensure that the model was appropriately specified.
The constant-comparative method (Eastwood, Jalaludin & Kemp 2014) was used together with thematic analysis (Maguire & Delahunt 2017) to analyse the constructs for developing the conceptual model. In constant-comparative analysis, each portion of data is compared with all other sections of relevant data. The method was considered appropriate for construct analysis because the researchers needed to identify the constructs, and their measurement dimensions, that are suitable for developing a multidimensional model. Furthermore, the constant-comparative method was used to ensure that there was no substantial overlap of dimensions; that is, that dimensions for assessing e-government service gaps did not belong to more than one construct (theme). This was further achieved by creating a taxonomy table (see Table 3) for organising and comparing each extracted measurable element with the other elements in the same group as well as in the other groups. Thus, elements that were close to each other were grouped together.
On the other hand, thematic analysis is a systematic process of identifying patterns and/or themes within qualitative data in order to group related elements (Maguire & Delahunt 2017). According to Nowell et al. (2017), a theme is conceived to be a thread of fundamental meaning, revealed at the interpretative level, that unifies ideas regarding the subject of inquiry. Thematic analysis is regarded as a flexible approach to analysing qualitative data because of its theoretical freedom. During data analysis, thematic analysis was used to cluster the measurable elements extracted from evaluation metrics according to their themes or constructs. Table 1 shows how the constant-comparative method was used together with the thematic approach as a means of qualitative data analysis.
TABLE 1: Constant-comparative method used together with the thematic approach.
Stage 1: Read through the e-government assessment topologies
After a comprehensive exploration and analysis of contemporary and related literature, the researchers read through the e-government assessment topologies in order to gain an understanding of the constructs and dimensions essential for e-government assessment.
Stage 2: Identify, define and describe measurement dimensions
During the analysis, the researchers identified and extracted the dimensions that were found relevant for developing the initial constructs. A total of 21 dimensions (see Table 2) were identified from various e-government assessment topologies. Moreover, to aid the process of constant-comparative analysis, the definition of each dimension extracted from the e-government assessment topologies was checked against the literature to determine its thematic inclination, as there could be a thin line between constructs. Thus, the researchers organised the dimensions in a table and defined them to create textual data that would facilitate thematic analysis. The dimensions and their definitions or descriptions are presented in Table 2.
TABLE 2: Selected scholarly definitions or descriptions of measurement dimensions (elements) of e-government.
Stage 3: Identifying constructs (themes)
Having identified the content for the 21 dimensions, the researchers began to analyse them thematically (Al-Debei & Avison 2010). The researchers looked for pertinent narratives in each definition or description to identify key concepts or phrases. It is important to note that only a single concept or phrase was identified from each definition or description. This was also important to ensure that dimensions did not belong to more than one construct or theme. In addition, dimensions that had similar definitions were merged in the taxonomy table. Applying thematic analysis to the extracted definitions and descriptions of the dimensions facilitated the building of a taxonomy that categorises the different dimensions into the three exclusive constructs or themes presented in Table 3.
TABLE 3: The taxonomy for organising constructs and dimensions.
Thus, thematic analysis led to the identification of the following multidimensional constructs or themes: functionality, delivery and service gaps. Dimensions whose definition or description was related to the technical attributes of the system were grouped under the functionality construct whilst those that related to the delivery capabilities of the system were grouped under the delivery construct.
In contrast, narratives that highlighted system performance were grouped under the service gaps theme, as the performance of a system determines whether there is a gap or not. The three constructs or themes developed through thematic analysis were perceived by the researchers as fitting to encapsulate the 21 dimensions extracted from the e-government assessment topologies. Besides encapsulating the measurement dimensions, the constructs were also regarded as suitable for representing the theoretical abstraction of the phenomenon.
Apart from the definitions and descriptions of the dimensions, the constructs were also determined by taking into account that an e-government system needs to perform certain functions, deliver comprehensive e-services and satisfy users. Furthermore, to assist the constant-comparative analysis, the researchers briefly describe the three identified constructs below.
Functionality: The functionality of the e-government system is defined by Sigwejo and Pather (2016) as the extent to which government systems perform as expected by their users. This construct defines how the e-government system functions, that is, the correct technical functioning of the e-government system. The elements of functionality include, but are not limited to, responsiveness, navigation, reliability, interactivity and completeness.
Delivery: Delivery of e-government services is the electronic distribution of public services to offer a dependable service experience to a specific user-group using appropriate delivering channels. It is defined by Ahmad et al. (2019) as a continuous, recurring procedure for developing and delivering user-centric public services using technology. Accordingly, an effective e-government service delivery depends on accessibility, efficiency, accuracy, relevance, timeliness, completeness and transparency (Ahmad et al. 2019).
Service gaps: Pena et al. (2013) defined service gaps as the gap between the expectations of customers and the services provided to them. Specific to this study, the e-government service gap is the extent to which e-government services are not fulfilled to the satisfaction of the intended beneficiaries (businesses and citizens) of the e-government system (Herdiyanti et al. 2018), either because the system is constrained in delivering the required services or because some of the expected services are not being provided.
Stage 4: Mapping constructs and dimensions
Dimensions extracted from the e-government assessment topologies were mapped onto the three constructs using a taxonomy table (see Table 3) for organising constructs and dimensions. In essence, 'taxonomy is a systemising mechanism utilised to map any domain, system, or concept, as well as a conceptualising tool for different constructs and elements' (Al-Debei & Avison 2010:361). The mapping process in this study was refined using constant-comparative analysis to ensure that dimensions aligned with the appropriate constructs. The outcome of this mapping strategy is a taxonomy comprising three unique constructs or themes and their respective dimensions.
Furthermore, using the evaluation function, dimensions were mapped into the same construct or theme when they met the following criteria (a minimal sketch of the resulting mapping follows the list):
- Individually, they are thematically analogous; that is, they convey matching or closely related semantics and ideas about the construct or theme.
- They have contextual relationships that complement each other; thus, they become more useful in assessing e-government service gaps if clustered.
- The clustered dimensions as a whole articulate a distinctive compositional facet of the e-government assessment construct.
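As an illustration of the outcome of this mapping, the taxonomy can be represented as a simple dictionary keyed by construct, with the dimension-to-construct assignments taken from Table 3 and the laws of interaction presented in the next section. This is a minimal sketch of the result, not of the analysis procedure itself; the check at the end mirrors the constant-comparative rule that no dimension belongs to more than one construct.

```python
# Sketch of the taxonomy produced by the mapping; Table 3 holds the authoritative version.

taxonomy = {
    "functionality": ["responsiveness", "flexibility", "integration", "ease of use",
                      "interactivity", "reliability", "intangibility"],
    "delivery": ["efficiency", "sufficiency", "accessibility", "accuracy",
                 "relevance", "timeliness", "transparency"],
    "service gaps": ["expected performance", "actual performance"],
}

# Constant-comparative check: every dimension maps to exactly one construct,
# and 16 of the 21 extracted dimensions were retained.
all_dimensions = [d for dims in taxonomy.values() for d in dims]
assert len(all_dimensions) == len(set(all_dimensions)), "dimension assigned to >1 construct"
assert len(all_dimensions) == 16
```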
Of the 21 dimensions presented in Table 2, 16 were mapped into the three constructs and used in the following subsections to develop the conceptual model for assessing e-government service gaps. However, in order to avoid the inclusion of redundant dimensions in the development of the conceptual model, five dimensions were dropped for the following reasons:
- Convenience is defined by the extant literature as similar to efficiency.
- Completeness referred to the degree to which services provided by an e-government system are sufficient to meet citizen expectations, thus overlapping with sufficiency.
- Availability of e-government services also entailed the accessibility of e-government services to the citizens.
- Navigation was regarded as an indicator of ease of use.
- Personalisation was perceived by the researchers as unsuitable for assessing e-government in the developing context because it is normally achieved only by highly mature (seamless) e-government systems.
The final constructs and dimensions are presented in Table 3 before the development of the conceptual model.
Proposed model: E-Government Service Gap Assessment Model (E-GSGAM)
In constructing the model, the researchers adopted a merger approach, as proposed by Li and Shang (2019), in which various measurement elements from the models and frameworks reviewed in this study are combined to form a multidimensional model. Based on the integrative literature review, the researchers propose that the assessment of e-government service gaps can be performed from three possible dimensions (constructs): functionality, delivery and service gaps. These dimensions are depicted in Figure 1, together with their measurement elements where possible. The constructs and dimensions are translated into the model on the basis of laws of interaction. Laws of interaction are statements of the relationship between the constructs and dimensions of the model (Holton & Lowe 2007). Thus, 'the laws of interaction are those (statements) that describe the existing relation between the theory's concepts (units) and that show the cause-effect relations between the concepts …' (Campos, Atondo & Quintero 2014:81). The statements of interaction specify the manner in which constructs and dimensions should interact with each other in the model; constructs and dimensions can be adequately mapped in the model only if the nature of the interaction is established accurately. The following four laws of interaction were used to construct the conceptual model (an illustrative sketch of how the laws combine follows the list):
Law of interaction 1: Functionality of the e-government system is enhanced by responsiveness, flexibility, integration, ease of use, interactivity, reliability and intangibility.
Law of interaction 2: Delivery of the e-government system is enhanced by efficiency, sufficiency, accessibility, accuracy, relevance, timeliness and transparency.
Law of interaction 3: Functionality and delivery capabilities of e-government influence the actual performance and expected performance of the system.
Law of interaction 4: Actual performance and the expected performance of the e-government system determine the e-government service gaps.
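To make the four laws concrete, the sketch below scores a hypothetical system: dimension ratings under the functionality and delivery constructs feed both the expected and the actual performance, and the service gap is the shortfall of actual against expected performance. The rating scale, the averaging rule and the example numbers are illustrative assumptions; the model itself does not prescribe a particular computation.

```python
# Illustrative scoring of the four laws of interaction (all numbers and the
# averaging rule are assumptions for demonstration, not part of the model).

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Laws 1 and 2: functionality and delivery are characterised by their dimensions.
# Each dimension is rated twice on an assumed 1-5 scale: what users expect and what they observe.
expected = {"responsiveness": 5, "reliability": 5, "ease of use": 4,   # functionality
            "accessibility": 5, "timeliness": 4, "transparency": 4}    # delivery
actual   = {"responsiveness": 3, "reliability": 4, "ease of use": 4,
            "accessibility": 3, "timeliness": 2, "transparency": 3}

# Law 3: functionality and delivery jointly shape expected and actual performance.
expected_performance = mean(expected.values())
actual_performance = mean(actual.values())

# Law 4: the service gap is determined by actual versus expected performance.
service_gap = expected_performance - actual_performance
print(f"expected={expected_performance:.2f}, actual={actual_performance:.2f}, gap={service_gap:.2f}")
```

Any aggregation rule could be substituted; the point is only that Laws 1 to 3 feed the two performance values and Law 4 reads the gap off their difference.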
FIGURE 1: E-government service gap assessment model (E-GSGAM).
Conclusion, limitations, recommendation and contributions
Assessment of e-government service gaps is a necessary condition for achieving both quality of service and user satisfaction. Whilst models and frameworks have developed over time, metrics that cover the assessment of e-government service gaps are absent from the extant literature. The evaluation of e-government service gaps is therefore still missing and requires particular attention. Accordingly, the researchers conclude that without identifying service gaps, it will be difficult for governments to deploy e-government systems that provide comprehensive services.
Based on the four laws of interaction, a conceptual model for assessing e-government service gaps was developed. In the conceptual model, the functionality and delivery constructs represent the independent variables of the study. Individually or jointly, the functionality and delivery constructs influence the expected performance and the actual performance of the e-government system. Furthermore, following the laws of interaction, it can be concluded that service gaps can be measured using the expected performance and the actual performance of the e-government system.
The integrative review conducted in this study was restricted in the quantity and quality of the research papers considered for inclusion. Whilst this presents a limitation of the study, it also points to an avenue for further research, which should consider collecting data from the users of e-government systems as well as testing and validating the conceptual model in a developing-country context.
Theoretically, the findings add knowledge to the body of literature concerning the evaluation of e-government service gaps. The elements and constructs identified in this study form the foundation for developing a multidimensional model for assessing e-government service gaps. The study also contributes to current themes in e-government research in developing countries, such as e-government programme evaluation and e-services. Overall, the study fills a knowledge gap on how e-government service gaps can be assessed using a model grounded on a merger approach.
Practically, the model can be used as a prescriptive tool during the design phase (pre-implementation phase) or in scaling up e-government projects and as an evaluation tool in the post-implementation phase. The model will also enable the field of e-government to develop practical solutions and close e-government service gaps. Furthermore, it can be used for quality control or assurance during pilot testing of an e-government project and other similar e-services outside the e-government research community. Hence, the model will contribute significantly beyond the e-government domain.
Acknowledgements
Competing interests
The authors affirm that they have no financial or personal relationships which may have improperly influenced them in writing this article.
Authors’ contributions
Both authors contributed equally towards the research and the writing of the article.
Ethical consideration
This article followed all ethical standards for carrying out research.
Funding information
The authors declare that they did not receive grants from any funding agency in the public, commercial or not-for-profit organisations.
Data availability statement
Data sharing is not applicable to this article because the researchers did not collect primary data.
Disclaimer
The information contained in this article reflects the views and opinions of the authors and not of their institutions.
References
Abu-Shanab, E., Khasawneh, R., Li, Y., Shang, H., Herdiyanti, A., Adityaputri, A.N. et al., 2014, ‘Developing service quality using gap model – A critical study’, IOSR Journal of Business and Management 7(2), 92–100. https://doi.org/10.13189/aeb.2019.070204
Ahmad, K.M., Campbell, J., Pathak, R.D., Belwal, R., Singh, G., Naz, R. et al., 2019, ‘Satisfaction with e-participation: A model from the citizen’s perspective, expectations, and affective ties to the place’, African Journal of Business Management 7(1), 157–166. https://doi.org/10.1007/978-3-642-22878-0_36
Al-Balushi, F.M., Bahari, M., Abdul Rahman, A. & Hashim, H., 2016, ‘Conceptualization of e-government integration studies’, Journal of Theoretical and Applied Information Technology 89(2), 439–449.
Al-Debei, M.M. & Avison, D., 2010, ‘Developing a unified framework of the business model concept’, European Journal of Information Systems 19(3), 359–376. https://doi.org/10.1057/ejis.2010.21
Albar, H.A.M., Dahlan, A.A., Yuhefizar, E. & Napitupulu, D., 2017, ‘E-government service quality based on e-GovQual approach case study in West Sumatera province’, International Journal on Advanced Science, Engineering and Information Technology 7(6), 2337–2342. https://doi.org/10.18517/ijaseit.7.6.4226
Alsaif, M., 2014, ‘Factors affecting citizens’ adoption of e-government moderated by socio and cultural values in Saudi Arabia’, in Proceedings of the 13th European Conference on E-Government, July, England: Acpi, pp. 578–586.
Baheer, B.A., Lamas, D. & Sousa, S., 2020, ‘A systematic literature review on existing digital government architectures: State-of-the-art, challenges, and prospects’, Administrative Sciences 10(2), 25. https://doi.org/10.3390/admsci10020025
Bayona, S. & Morales, V., 2017, ‘E-government development models for municipalities’, Journal of Computational Methods in Sciences and Engineering 17(S1), S47–S59. https://doi.org/10.3233/JCM-160679
Brown, A., Fishenden, J., Thompson, M. & Venters, W., 2017, ‘Appraising the impact and role of platform models and Government as a Platform (GaaP) in UK Government public service reform: Towards a platform assessment framework (PAF)’, Government Information Quarterly 34(2), 167–182. https://doi.org/10.1016/j.giq.2017.03.003
Campos, H.M., Atondo, G.H. & Quintero, M.R., 2014, ‘Towards a theory for strategic posture in new technology based firms’, Journal of Technology Management & Innovation 9(2), 77–85. https://doi.org/10.4067/S0718-27242014000200006
DeLone, W.H. & McLean, E.R., 2003, ‘The DeLone and McLean model of information systems success: A ten-year update’, Journal of Management Information Systems 19(4), 9–30. https://doi.org/10.1080/07421222.2003.11045748
Dias, G.P., 2020, ‘Global e-government development: Besides the relative wealth of countries, do policies matter?’, Transforming Government: People, Process and Policy 14(3), 381–400. https://doi.org/10.1108/TG-12-2019-0125
Eastwood, J.G., Jalaludin, B.B. & Kemp, L.A., 2014, ‘Realist explanatory theory building method for social epidemiology: A protocol for a mixed method multilevel study of neighbourhood context and postnatal depression’, SpringerPlus 3(1), article 12. https://doi.org/10.1186/2193-1801-3-12
Ecker, E. & Skelly, A., 2010, ‘Conducting a winning literature search’, Evidence-Based Spine-Care Journal 1(1), 9–14. https://doi.org/10.1055/s-0028-1100887
Eze, U.C., Huey Goh, M., Yaw Ling, H. & Har Lee, C., 2011, ‘Intention to use e-government services in Malaysia: Perspective of individual users’, Communications in Computer and Information Science 252(2), 512–526. https://doi.org/10.1007/978-3-642-25453-6_43
Gebremichael, G.B. & Singh, A.I., 2019, ‘Customers’ expectations and perceptions of service quality dimensions: A study of the hotel industry in selected cities of Tigray Region, Ethiopia’, African Journal of Hospitality, Tourism and Leisure 8(5), 1–15.
Gupta, M.P. & Jana, D., 2003, ‘E-government evaluation: A framework and case study’, Government Information Quarterly 20(4), 365–387. https://doi.org/10.1016/j.giq.2003.08.002
Herdiyanti, A., Adityaputri, A.N. & Astuti, H.M., 2018, ‘Understanding the quality gap of information technology services from the perspective of service provider and consumer’, Procedia Computer Science 124, 601–607. https://doi.org/10.1016/j.procs.2017.12.195
Holton, E.F. & Lowe, J.S., 2007, ‘Toward a general research process for using Dubin’s theory building model’, Human Resource Development Review 6(3), 297–320. https://doi.org/10.1177/1534484307304219
Jaeger, P. & Matteson, M., 2009, ‘E-government and technology acceptance: The case of the implementation of section 508 guidelines for websites’, Electronic Journal of E-Government 7(1), 87–98.
Khameesy, N.E., Magdi, D. & Khalifa, H., 2017, ‘A proposed model for enhance the effectiveness of e-government web based portal services with application on Egypt’s government portal’, Egyptian Computer Science Journal 41(1), 22–37.
Kim, S., 2017, ‘Comparison of a multidimensional to a unidimensional measure of public service motivation: Predicting work attitudes’, International Journal of Public Administration 40(6), 504–515. https://doi.org/10.1080/01900692.2016.1141426
Kunstelj, M. & Vintar, M., 2004, ‘Evaluating the progress of e-government development: A critical analysis’, Information Polity 9(3), 131–148. https://doi.org/10.3233/IP-2004-0055
Layne, K. & Lee, J., 2001, ‘Developing fully functional e-government: A four stage model’, Government Information Quarterly 18(2), 122–136. https://doi.org/10.1016/S0740-624X(01)00066-1
Lessa, L., 2019, ‘Sustainability framework for e-government success: Feasibility assessment’, ACM International Conference Proceeding Series Part F1481, 231–239. https://doi.org/10.1145/3326365.3326396
Li, Y. & Shang, H., 2019, ‘Information & management service quality, perceived value, and citizens’ continuous-use intention regarding e-government: Empirical evidence from China’, Information & Management 57(3), 103197. https://doi.org/10.1016/j.im.2019.103197
Ma, D.S. & Kinchin, I.M., 2010, ‘Using concept mapping to enhance the research interview’, International Journal of Qualitative Method 9(1), 52–68. https://doi.org/10.1177/160940691000900106
Maguire, M. & Delahunt, B., 2017, ‘Doing a thematic analysis: A practical, step-by-step guide for learning and teaching scholars’, Aishe-J 3(3), 3351–3354.
McGowan, J., 2009, ‘Literature searching’, in P. Tugwell, B. Shea, M. Boers, P. Brooks, L.S. Simon, V. Strand & G. Wells (eds.), Evidence-based rheumatology, pp. 1–18.
Mukamurenzi, S., Grönlund, Å. & Islam, S.M., 2019, ‘Improving qualities of e-government services in Rwanda: A service provider perspective’, Electronic Journal of Information Systems in Developing Countries 85(5), 1–16. https://doi.org/10.1002/isd2.12089
Ngonzi, T. & Sewchurran, K., 2019, ‘User-stakeholders’ responsiveness: A necessary input for achieving in e-governance transformation in developing countries’, Electronic Journal of Information Systems in Developing Countries 85(6), 1–16. https://doi.org/10.1002/isd2.12107
Nowell, L.S., Norris, J.M., White, D.E. & Moules, N.J., 2017, ‘Thematic analysis: Striving to meet the trustworthiness criteria’, International Journal of Qualitative Methods 16(1), 1–13. https://doi.org/10.1177/1609406917733847
Palotti, J., Zuccon, G. & Hanbury, A., 2018, ‘MM: A new framework for multidimensional evaluation of search engines’, in CIKM ’18: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, Proceedings, pp. 1699–1702. https://doi.org/10.1145/3269206.3269261
Palvia, S.C.J. & Sharma, S.S., 2007, ‘E-government and e-governance: Definitions/domain framework and status around the world’, Foundations of e-government, pp. 1–12, viewed n.d., from http://www.iceg.net/2007/books/1/1_369.pdf.
Patsioura, F., 2014, ‘Evaluating e-government’, in Evaluating websites and web services: Interdisciplinary perspectives on user satisfaction, pp. 1–18. https://doi.org/10.4018/978-1-4666-5129-6.ch001
Pena, M.M., Maria, E., Maria, D., Tronchin, R. & Melleiro, M.M., 2013, ‘The use of the quality model of Parasuraman, Zeithaml and Berry in health services’, Revista da Escola de Enfermagem da USP 47(5), 1227–1232. https://doi.org/10.1590/S0080-623420130000500030
Perkov, J. & Panjkota, A., 2017, ‘Indicators and metrics for e-government maturity model in Croatia’, Poslovna izvrsnost 11(2), 85–105. https://doi.org/10.22598/pi-be/2017.11.2.85
Ramdan, S.M., Azizan, Y.N. & Saadan, K., 2014, ‘E-government systems success evaluating under principle Islam: A validation of the Delone and Mclean model of Islamic information systems success’, Academic Research International 5(2), 72–85, viewed from www.journals.savap.org.pk.
Rana, N.P., Dwivedi, Y.K., Lal, B., Williams, M.D. & Clement, M., 2017, ‘Citizens’ adoption of an electronic government system: Towards a unified view’, Information Systems Frontiers 19(3), 549–568. https://doi.org/10.1007/s10796-015-9613-y
Roberts, T. & Hernandez, K., 2019, ‘Digital access is not binary: The 5 ‘A’s of technology access in the Philippines’, Electronic Journal of Information Systems in Developing Countries 85(4), e12084. https://doi.org/10.1002/isd2.12084
Roy, S., Tarafdar, M., Ragu-Nathan, T.S. & Marsillac, E., 2012, ‘The effect of misspecification of reflective and formative constructs in operations and manufacturing management research’, Electronic Journal of Business Research Methods 10(1), 34–52.
Sebetci, Ö., 2015, ‘A TAM-based model for e-government: A case for Turkey’, International Journal of Electronic Governance 7(2), 113–135. https://doi.org/10.1504/IJEG.2015.069503
Shuib, L., Yadegaridehkordi, E. & Ainin, S., 2019, ‘Malaysian urban poor adoption of e-government applications and their satisfaction’, Cogent Social Sciences 5(1), 1565293. https://doi.org/10.1080/23311886.2019.1565293
Sigwejo, A. & Pather, S., 2016, ‘A citizen-centric framework for assessing e-government effectiveness’, Electronic Journal of Information Systems in Developing Countries 74(1), 1–27. https://doi.org/10.1002/j.1681-4835.2016.tb00542.x
Taherdoost, H., Sahibuddin, S. & Jalaliyoon, N., 2014, ‘Features’ evaluation of goods, services and e-services; Electronic service characteristics exploration’, Procedia Technology 12, 204–211. https://doi.org/10.1016/j.protcy.2013.12.476
Tarmizi, H., 2016, ‘E-government and social media: A case study from Indonesia’s Capital’, Journal of E-Government Studies and Best Practices 2016, article 10. https://doi.org/10.5171/2016.514329
Torraco, R.J., 2016, ‘Writing integrative literature reviews: Using the past and present to explore the future’, Human Resource Development Review 15(4), 404–428. https://doi.org/10.1177/1534484316671606
Venkatesh, V., Thong, J.Y.L. & Xu, X., 2016, ‘Unified theory of acceptance and use of technology: A synthesis and the road ahead’, Journal of the Association of Information Systems 17(5), 328–376. https://doi.org/10.17705/1jais.00428
Voutinioti, A., 2014, ‘Determinants of user adoption of e-government services: The case of Greek local government’, International Journal of Technology Marketing 9(3), 234. https://doi.org/10.1504/ijtmkt.2014.063854
Waller, P., Irani, Z., Lee, H. & Weerakkody, V., 2014, ‘Lessons on measuring e-government satisfaction: An experience from surveying government agencies in the UK’, International Journal of Electronic Government Research 10(3), 37–46. https://doi.org/10.4018/ijegr.2014070103
Wohlin, C., 2014, ‘Guidelines for snowballing in systematic literature studies and a replication in software engineering’, in Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering – EASE’14, pp. 1–10. https://doi.org/10.1145/2601248.2601268
Zabadi, A.M., 2016, ‘Adoption of information systems (IS): The factors that influencing IS usage and its effect on employee in Jordan telecom sector (JTS): A conceptual integrated model’, International Journal of Business and Management 11(3), 25. https://doi.org/10.5539/ijbm.v11n3p25
Zhou, R., Wang, X., Shi, Y., Zhang, R., Zhang, L. & Guo, H., 2019, ‘Measuring e-service quality and its importance to customer satisfaction and loyalty: An empirical study in a telecom setting’, Electronic Commerce Research 19(3), 477–499. https://doi.org/10.1007/s10660-018-9301-3