Patient Satisfaction Scale for Hospitalized COVID-19 Patients: Development and Psychometric Properties
Wunadavalli, Laxmi Tej; Satpathy, Sidhartha; Satapathy, Sujata; Singh, Sheetal; Singh, Angel Rajan; Kumar Chadda, Rakesh; Tiwari, Shraddhesh Kumar; Barre, Vijay Prasad
J Patient Exp, 2022-03-23. DOI: 10.1177/23743735221086762

Objective: Patients' appraisal of the health care delivery system and services during COVID-19 could be an important yardstick for hospital administrators and policy makers. The study attempted to develop and test the psychometric properties of a new patient satisfaction scale for COVID-19 patients. Methods: A total of 446 hospitalized COVID-19 patients in a tertiary care hospital designated for COVID-19 care constituted the sample. The factor structure of the scale was obtained using exploratory factor analysis (EFA). Internal consistency, split-half reliability, and validity (content, convergent, and divergent) were also evaluated. Results: Item reduction resulted in a 21-item scale consisting of three factors, namely COVID-19-focused treatment facility, COVID-19-appropriate hospital facility, and COVID-19-specific daily needs service facility. It demonstrated excellent internal consistency and reliability (Cronbach's alpha [α]: 0.93; split-half reliability: 0.90), excellent content validity, and adequate convergent and divergent validity. The scale had no floor effects. Inter-index correlations were significant. Conclusion: To our knowledge, this scale is the first such psychometrically robust self-rated scale for patients' perception of hospital services during COVID-19. Available in both Hindi and English, the scale provides a quick measure of patient experience regarding COVID-19-specific hospital services.

As a complex and multifaceted construct, patients' experience is largely personal and subjective. This subjectivity is a result of patients' continuous cognitive and emotional appraisal of various aspects of hospital services/facilities in a given context and time period. The individual differences in such appraisals depend largely on patients' characteristics (particularly sociodemographic and personality characteristics), their prior and changing expectations of the hospital, their perceptions of the hospital's genuine attempt to address their needs, their constant comparison between hospitals, their trust in the hospital, and the health conditions for which treatment is sought. In addition, hospital characteristics such as type of ownership (government or private), reputation and track record (the preexisting general perception or reputation of a hospital and the opinions of significant others), availability of basic and advanced treatment facilities, and fees and charges can also influence patients' cognitive appraisal and thereby affect their actual experience of the hospital and the resultant satisfaction or dissatisfaction. These dynamic patient- and hospital-related variables, which shape patients' appraisal, perception, and satisfaction, persist by and large even during a highly contagious global pandemic, although patients' expectations and perception of control may be greatly affected by such a pandemic.
As people's understanding, reactions, acceptance, and response to COVID-19 varied, health care providers and hospitals across the globe struggled to revamp and rearrange existing healthcare services (1) in terms of occupational safety, communication methods, infection control, intake and discharge protocols, disposal of dead bodies, boarding and lodging facilities, and cost. Again, patients' experience during COVID-19 can be shaped by other key determinants such as the severity of the COVID-19 condition and type of treatment availed, the hospital environment and infection control mechanisms, the quality of hospital services, the overall behavior of the staff, the cost of hospitalization, and postdischarge (follow-up) facilities (2). In addition, the unprecedented fear, apprehension, anxiety, uncertainty, and the resultant mental health condition of people in the COVID-19 situation can also affect patients' appraisal, experience, and satisfaction with a hospital stay to a large extent. Moreover, satisfied patients are reportedly more adherent to treatment recommendations (3,4); therefore, patients' experience is worth exploring during COVID-19. Further, assessing patients' experience through a psychometrically sound measuring tool can reduce subjectivity and biases in assessment and provide a standardized basis for comparing hospitals across countries. Findings obtained with such a tool are generally considered more objective and scientific indicators (5) and hence can help in revamping and optimizing health resources in resource-limited countries during the COVID-19 pandemic. The recently reported, largely qualitative analyses of patients' satisfaction during COVID-19 (4-) highlighted specific issues in assessing patients' satisfaction/experience, for example, "knowing about COVID-19," "planning for, and responding to COVID-19," "being infected," "life in isolation and the room," and "post-discharge life". In addition, patients' perceptions of their illness and experience were also explored in themes such as concern, fears and frustration, and a change in outlook. In addition to methodological deficiencies (small sample sizes in three studies, and study design), these studies did not assess the COVID-19-specific/COVID-19-focused hospital services/facilities availed by hospitalized patients per se, but rather covered COVID-19 patients in general. Therefore, the findings, though important, may have less direct implications for hospitals. Again, in the absence of any structured and psychometrically sound tool, such evaluations lack the sound methodology and focused analysis needed to assess and quantify patients' experience. Since the 1980s, however, the emphasis has been on measuring patients' experience objectively, either through questionnaires or scales, in different settings including primary care and hospitals (8). Though many such tools lacked or did not report psychometric robustness (9), instruments such as the Assessment of Patient Satisfaction Scale-15, the Primary Care Satisfaction Scale-11, and the Hospital Consumer Assessment of Healthcare Providers and Systems ("Hospital CAHPS" or "HCAHPS") Adult Visit Survey have better psychometric properties.
Various tools have also been validated on different types of patients; for example, the Patient Satisfaction and Preference Questionnaire (PASAPQ) Direct Comparison Version was validated in asthma and COPD patients, a Patient Satisfaction with Doctor-Patient Interaction scale was validated during the severe acute respiratory syndrome outbreak, and the Quality of Care Questionnaire has been validated for Swedish health care environments. To our knowledge, no patient satisfaction scale has been validated for COVID-19 patients. Since the COVID-19 situation is a uniquely grave one, requiring hospitals to introduce several systematic changes to meet health care standards and ensure the safety of patients and hospital staff, it is imperative to measure patients' satisfaction with these changes in the health care delivery system. Outlining these parameters of inpatient experience may help hospitals deliver better quality health care services. Therefore, the objective of the study was to develop and test the psychometric properties of a new scale for COVID-19 inpatients, referred to in this paper as the COVID-19 Patient Satisfaction Scale-21 (CovidPSS-21).

Data were collected between May and October 2020 from a designated COVID-19 treatment center within the largest tertiary care medical college and hospital in India. The scale development process started in May 2020, during the initial stage of COVID-19, and the items were developed in three phases. Phase-I consisted of (1) a thorough literature survey of published articles using existing patient satisfaction/experience scales for COVID-19 inpatients, (2) multiple semi-structured interactions with hospitalized patients, and (3) structured interviews with professionals directly involved in COVID-19 care hospitals. As we did not have direct physical access to the patients, telephonic interactions with patients and in-person interactions with professionals directly managing COVID-19 care hospitals served as the primary sources of item generation.

1. Screening of literature: Each measurement tool had its own characteristics, design intention, target population, and usage focus. A total of 15 scales/tools consisting of 389 items were reviewed by 3 authors for possible adaptation. Scale domains and items in these scales were discussed, and 15 items were agreed upon for possible inclusion in the draft scale. These items were then modified in terms of language, sentence structure, and response format to make them more culturally applicable to Indian patients, without, however, changing the core theme/domain of the item. For example, Item No. 2 in the Patient Satisfaction Questionnaire Short Form (PSQ-18) (10), "I think my doctor's office has everything needed to provide complete care," was modified in our draft scale as "I believe that the hospital had everything needed to provide complete medical care" (Item No. 14 in our current scale). Similarly, Item 9 in the Picker Patient Experience Questionnaire (PPEQ-15) (11), "Did you find someone on the hospital staff to talk to about your concerns?", was modified in our draft scale as "General behaviour of the staff was cordial" (Item No. 11).

2. Once the first list of 138 admitted patients was received, one of the authors, who was not involved in reviewing the scales, data entry, or analysis, randomly selected 15 patients (by calling out any 15 numbers between 1 and 138) for semi-structured telephonic interactions.
However, the patient satisfaction scale development was one of the deliverables of a COVID-19 intramural mental health research project; therefore, the purpose of these semi-structured interactions was primarily to initiate the provision of psychosocial support and to identify and address patients' unmet material needs in the hospital. Finding out their views on the patient experience in the hospital was a small part of this process. Three authors were involved in this interaction process. We also assessed patients' comfort with online information exchange, keeping in mind the planned circulation of Google Forms for data collection. This also helped us gauge two things: first, whether the patients were ready to participate in the larger study (important from an ethical point of view), and second, what these patients expected from the hospital and their reasons for dissatisfaction.

3. A total of 11 people, consisting of hospital administration faculty (n = 4), nurses (n = 3), and clinical psychologists (n = 4) working with hospitalized COVID-19 patients in a COVID-19-designated hospital, were included in the structured interviews. An interview guide with seven thematic issues pertaining to patient satisfaction during COVID-19 hospitalization was prepared. These open themes (e.g., patients' general satisfaction, hospital services, communication with hospital staff and the treating team, financial implications, COVID-19-specific safety and infection control measures, and scale characteristics) addressed factors that could affect the satisfaction of hospitalized patients (e.g., "Kindly let us know what factors would affect the general satisfaction of hospitalized COVID-19 patients and why," "Will free treatment of COVID-19 affect patient experience/satisfaction, and if yes, to what extent," and so on).

On the basis of the analysis of the data gathered from these three steps, a rough draft of the items was prepared. A draft scale with 43 items (15 items pooled and modified from existing scales and 28 items based on the semi-structured interactions with patients and the structured interviews with professionals involved in COVID-19 care) spread across 5 domains (general hospitality, nursing care and treatment, interpersonal communication, isolation and infection control facility, and financial burden) was prepared in English. The response options were arranged on a 5-point Likert scale (Strongly Agree/Satisfied = 5, Agree/Satisfied = 4, Not Sure = 3, Disagree/Dissatisfied = 2, Strongly Disagree/Dissatisfied = 1). The score range (for the final 21-item scale) was 21-105, where higher scores indicated higher patient satisfaction.

Phase-2: Two trained clinical psychologists and one psychologist (all three native Hindi speakers, professionally trained through the English medium of instruction and proficient in both English and Hindi) translated the scale into Hindi following WHO back-translation guidelines. Items matching the original English items at 90%-95% were retained; for items where agreement among the three translators did not reach 90%-95%, alternative Hindi words were elicited without changing the meaning and assessment purpose of the original English wording.

Content Validity of the Scale: The draft scale was then sent to six subject matter experts (from the hospital administration discipline) to establish content validity. Feedback was obtained in two ways: item-specific and overall scale feedback.
All items with more than 50% expert agreement were retained (12). During the face validity check, the length of the scale and the overlap/repetitiveness of items were considered in detail, and the scale was refined accordingly with the consensus of six authors. Further, the content validity ratio (CVR) was calculated for each item; only items scoring between 0.8 and 1 were retained immediately. Items scoring between 0.5 and 0.8 were taken up for further discussion and retained based on clinical significance or refinement suggested by the experts. Items scoring below 0.5 were discarded. Thus, in total, 22 items were finalized at this stage.

Phase-3: The 22 items were pilot tested on a sample of 5 adult COVID-19 patients isolated in a small COVID-19 ward of a tertiary care hospital. Pilot testing had two components (13): examining the extent to which the questions reflect the domain construct being studied, and examining the extent to which the questions validly enquire about patients' satisfaction with COVID-19-related services and facilities. One item on food outsourcing was deleted at this stage, as all the patients found it irrelevant (no one was allowed to deliver food to the COVID-19 ward and all meals were supplied by the hospital only). Thus, the initial 43 items in the draft scale were reduced to 21 items (5 of the 15 items initially pooled and modified from existing scales, plus 16 of the 28 new items in the draft scale), which were retained for statistical analysis. The time taken to complete the scale varied from 5 to 8 min. Patients' feedback on item difficulty (to understand or to respond to) was rated on a single item on a 3-point Likert scale: not at all difficult, moderately difficult, and very difficult. All the participants rated it as not at all difficult.

Study Population: The participants were 446 adult hospitalized COVID-19 patients in a designated COVID-19 care center of a tertiary care hospital, other than the COVID-19 ward in which the pilot testing of the scale was done. The following assessment tools were used to collect data for establishing the convergent and divergent validity of the newly developed tool. A Sociodemographic and Clinical Proforma was constructed for the study, consisting of information on age, gender, socioeconomic status, education, occupation, monthly income, marital status, and state of domicile. Clinical information on co-morbidities, family history of COVID-19 status, family history of psychiatric illness, date of admission, current symptoms, and death due to COVID-19 in the family was also sought. Self-Reporting Questionnaire-20 (SRQ-20) (14): A total of 20 dichotomous (true or false) items assessing psychological distress was included. Hospital Anxiety and Depression Scale (HADS) (15): The HADS, containing 14 questions, 7 each for rating anxiety and depression, was used to screen for clinically significant anxiety and depression in patients. A score of 8 or more was considered the cut-off for the probable presence of the respective state. Multidimensional Scale of Perceived Social Support (MSPSS) (16): The MSPSS, containing 12 items with a 7-point Likert-type response format, was used to measure patients' perception of the adequacy of the emotional and social support received from family, friends, and significant others.
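Before turning to the data-collection procedure, the expert-rating step described earlier in this section can be illustrated with a short sketch. It applies Lawshe's content validity ratio, CVR = (Ne − N/2)/(N/2), as given in the analysis section, together with the retention thresholds reported above (retain at 0.8-1, re-discuss at 0.5-0.8, discard below 0.5). The essentiality counts are hypothetical and not the study's actual expert data; the handling of the exact boundary values is an assumption.

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR = (Ne - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

def triage_item(cvr: float) -> str:
    """Retention rule as described in the text (boundary handling assumed)."""
    if cvr >= 0.8:
        return "retain"
    if cvr >= 0.5:
        return "discuss with experts"
    return "discard"

# Hypothetical ratings from a six-expert panel (not the study data):
# number of experts who rated each item as essential
ratings = {"item_A": 6, "item_B": 5, "item_C": 3}
for item, n_essential in ratings.items():
    cvr = content_validity_ratio(n_essential, n_experts=6)
    print(f"{item}: CVR = {cvr:+.2f} -> {triage_item(cvr)}")
```

With six experts, unanimous agreement gives CVR = 1 (retain), five of six gives 0.67 (discuss), and three of six gives 0 (discard), mirroring the triage described above.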
Subsequent to the Institute Ethics Committee approval (IEC-320/27.04.2020), a list of potential participants was obtained through the hospital administration department of the COVID-19 care centers. A WhatsApp group was formed, and patients were informed and requested to participate in the study. The options of filling in hard copies or Google Forms (through mobile smartphones) were explained to the participants through WhatsApp and in person (whenever feasible). The hard copies were circulated in all rooms of one COVID-19 care center at a time, and when patients were discharged, circulation was repeated as per the list of newly admitted patients. Another WhatsApp group with the new list of patients was then formed. All patients in all the groups were repeatedly reminded to read the study information and then consent to participation. Informed written consent was sought through hard copies of the data sheets in the traditional style of data collection. Hard-copy data sheets were collected and opened in due course, following the COVID-19 protocol restricting visits to the COVID-19 ward and guarding against fomite transmission. All Google Form responses were automatically stored in an Excel sheet, and the data from hard copies were entered into the same sheet. All but six responses were hard-copy data sheets, so it was decided to include these six Google Form responses with the remaining data.

Reliability: The reliability of the CovidPSS-21 and its domains was assessed using Cronbach's alpha (α). In addition, split-half reliability, corrected with the Spearman-Brown formula, was assessed by comparing the odd versus even items of the scale. These methods evaluate internal consistency and the degree to which the content or specific items of the scale contribute to error variance.

Validity: The content validity of each scale item was obtained using the CVR, calculated according to the formula CVR = (Ne − N/2)/(N/2) developed by Lawshe (12), where Ne is the number of experts who rated an item as essential and N is the total number of experts in the review panel. Pearson's correlation coefficient with the MSPSS was used for convergent validity, and correlations with the SRQ-20 and HADS were used for divergent validity.

Factor Analysis: Before conducting factor analysis, the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's Test of Sphericity (BTS) were obtained to ensure that the data were adequate and appropriate for exploratory factor analysis (EFA), while the Kolmogorov-Smirnov (K-S) test was used to check the normality of the distribution. EFA was done using principal component analysis (PCA). The varimax method of orthogonal rotation was used for rotating factors. Factors with an eigenvalue greater than 1 were extracted, with a minimum factor loading of 0.40 for retention of items.

Statistical Analysis: Data analysis was done using the Statistical Package for the Social Sciences (SPSS), version 21.0 (17). A total of 761 patients were invited; 461 returned the filled-in tools (response rate 60.58%), and after removing 15 invalid sheets (half-filled or unfilled tools), 446 valid responses were included in the analysis. Thus, the sample consisted of 446 hospitalized COVID-19 patients (male = 356; female = 90). The age of the sample ranged from 18 to 80 years, with a mean age of 35.9 years (SD = 11.78). The highest number of patients was in the 26-35 years age group (36.5%), followed by the 16-25 years group (Figure 1).
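The reliability indices described in the statistical analysis above (Cronbach's α and odd-even split-half reliability corrected with the Spearman-Brown formula) can be reproduced outside SPSS with a few lines of NumPy/pandas. The sketch below is illustrative only and assumes an item-by-respondent data frame with one column per scale item; the simulated data are random, so the coefficients it prints will be near zero, unlike real scale data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half_spearman_brown(items: pd.DataFrame) -> float:
    """Correlate odd vs. even item totals, then apply the Spearman-Brown correction."""
    odd = items.iloc[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
    even = items.iloc[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)                   # Spearman-Brown prophecy formula

# Example with simulated responses (446 respondents x 21 items, 5-point scale);
# this is NOT the study data, only a demonstration of the computation.
rng = np.random.default_rng(0)
sim = pd.DataFrame(rng.integers(1, 6, size=(446, 21)),
                   columns=[f"item_{i}" for i in range(1, 22)])
print(round(cronbach_alpha(sim), 3), round(split_half_spearman_brown(sim), 3))
```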
Scale Description: The CovidPSS-21 has 21 items on a 5-point response format (ranging from strongly agree/satisfied = 5 to strongly disagree/dissatisfied = 1); thus, scores ranged from 21 to 105, with a mean of 91.14 ± 9.89. The data skewness was −0.486, which was within −0.5 to +0.5. The scale kurtosis was 1.1, just slightly outside the range of −1.00 to +1.00, suggesting a slightly peaked distribution. The minimum score was 42 and the maximum was 105. Additionally, when the skewness and kurtosis values were divided by their standard errors, the resulting values fell within the range of −1.96 to +1.96, indicating a normal distribution (19). No respondent scored the lowest possible score of 21 on the scale, suggesting no floor effect. However, 64 (14.35%) respondents had the maximum score of 105, suggesting the presence of a ceiling effect, which was still within the historically acceptable limit for floor/ceiling effects of 15% (20).

Measurement Errors: The systematic and random errors in scale measurement were kept in regular check during the pilot test, data collection, removal of incomplete data sheets, and double-checking during data entry. First, telephonic feedback was obtained from the respondents during the pilot regarding how easy or hard the CovidPSS-21 was and how the testing environment affected their performance. Second, data were collected by the first author only, to reduce the risk of inadvertent error. Third, two other authors (SS and ST) thoroughly double-checked the data sheets for incompleteness or left-out items. All data entry for computer analysis was done by one author (ST), and at least half of the data were randomly verified by another author (SS). Fourth, the authors also ensured that there were no missing data, and statistical procedures were run to normalize the data to adjust for measurement error.

Factor Analysis: The KMO measure of sampling adequacy was 0.934, which was "meritorious" and adequate for conducting EFA (21). The BTS was significant (p < .001), indicating significant correlations among the items for factor analysis. Data normality was acceptable, as the K-S value was significant (p < .05) after square root transformation. Since all 21 items had factor loading values of more than 0.40, no item was deleted during factor analysis; hence, Cronbach's α after each item deletion was not calculated. The initial EFA extracted three factors, which explained 60.384% of the cumulative variance in the data. Of the three factors, COVID-19-appropriate hospital facility had nine items, COVID-19-focused treatment facility contained seven items, and COVID-19-specific daily needs services facility had five items. The factor structure was considered stable (22,23). The placement of items was found to be by and large domain appropriate. Figure 2 shows the scree plot of the initial eigenvalues for each possible factor; the point of inflection was at 3, and therefore all three factors were considered when examining the cross-loading of items. As mentioned earlier, the factor loadings of all 21 items were above 0.40; hence, each item was retained in the factor on which it had the highest loading (24). However, despite a higher loading elsewhere, one item on communication (Item No. 18) was moved, on the authors' agreement, from COVID-19-specific daily needs facility to COVID-19-appropriate hospital facility in view of its relevance to the latter factor.
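For readers who wish to replicate the factor-analytic steps reported above (KMO, Bartlett's test, PCA extraction with varimax rotation, eigenvalue > 1, loading cut-off of 0.40), the sketch below shows one possible route in Python. It assumes the third-party factor_analyzer package; the authors used SPSS 21.0, so this is an approximate re-implementation under that assumption, not their code, and the variable `data` is a hypothetical respondents-by-items data frame.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def run_efa(items: pd.DataFrame, n_factors: int = 3, loading_cutoff: float = 0.40):
    # Suitability checks reported in the paper: Bartlett's Test of Sphericity and KMO
    chi_square, p_value = calculate_bartlett_sphericity(items)
    kmo_per_item, kmo_total = calculate_kmo(items)
    print(f"Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}, KMO = {kmo_total:.3f}")

    # Principal-component extraction with varimax (orthogonal) rotation
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="varimax")
    fa.fit(items)

    eigenvalues, _ = fa.get_eigenvalues()
    print("Eigenvalues > 1:", [round(e, 2) for e in eigenvalues if e > 1])

    loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                            columns=[f"F{i + 1}" for i in range(n_factors)])
    # Assign each item to the factor with its highest absolute loading,
    # keeping only items that clear the 0.40 cut-off
    kept = loadings[loadings.abs().max(axis=1) >= loading_cutoff]
    assignment = kept.abs().idxmax(axis=1)
    return loadings.round(3), assignment

# Usage (with a respondents x 21-item DataFrame named `data`):
# loadings, assignment = run_efa(data)
```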
Subsequently, factor naming was done based on a review of the underlying construct of each item. Thus, in the final structure of the scale, COVID-19-appropriate hospital facility had 10 items (Item Nos. 4, 5, 6, 11, 12, 13, 14, 18, 19, and 20), COVID-19-focused treatment facility contained 7 items (Item Nos. 7, 8, 9, 15, 16, 17, and 21), and COVID-19-specific daily needs services facility had 4 items (Item Nos. 1, 2, 3, and 10). Table 1 presents the final three-factor model of the CovidPSS-21 as it emerged from the EFA. The homogeneity of each index was estimated by means of Cronbach's α. Cronbach's α for the CovidPSS-21 was 0.932, indicating excellent internal consistency. The alpha value was high for factor 1 (0.926) and factor 2 (0.844) and acceptable for factor 3 (0.726). Split-half reliability, assessed with the Spearman-Brown coefficient and the Guttman split-half coefficient, was good: 0.905 for the whole scale and 0.898, 0.788, and 0.725 for the three factors, respectively. The inter-domain correlation matrix indicated high, significant positive correlations of factor 1 with factor 2 (r = 0.706, p < .01) and factor 3 (r = 0.714, p < .01). The new CovidPSS-21 scale had excellent content validity (0.912). It also had a significant positive correlation with the MSPSS (r = 0.156, p < .01); thus, the size of the correlation (Hinkle & Wiersma, 2003) indicated that the scale had adequate convergent validity. The negative correlations of the CovidPSS-21 with the SRQ (r = −0.168, p < .01) and the HADS (r = −0.121, p < .05 for anxiety and r = −0.114, p < .05 for depression) reflected that psychological distress, depression, and anxiety among hospitalized COVID-19 patients were conceptually distinct from the construct measured by the CovidPSS-21, indicating satisfactory divergent validity.

It is accepted that, despite the difficulty in pinpointing what satisfaction entails, if patients are dissatisfied, health care has not achieved its goal (25). Moreover, how patient experience is measured is an important field of study in healthcare, particularly when legal frameworks in many countries have mandated the use of measures of the quality of care, such as patient satisfaction, to evaluate the effectiveness of health care services (26). The quantitative approach provides accurate methods to measure patient satisfaction through standardized questionnaires (self-reported, interviewer-administered, or by telephone) (27,28). In the absence of a patient satisfaction/experience tool specific to COVID-19 patients, we developed and examined the psychometric properties of a new patient satisfaction scale (CovidPSS-21), sampling hospitalized COVID-19 patients at a time when hospitals were gearing up to meet safety and treatment standards under evolving COVID-19 protocols. The sample size of 446 for the development and psychometric testing of the CovidPSS-21 met many standards in the literature, such as an ideal respondent-to-item ratio of 10:1 (42), 300 respondents after initial pretesting (Clark & Watson, 1995), a range of 200-300 being appropriate for factor analysis (29,30), a minimum of 300-450 being required to observe acceptable comparability of patterns with replication required if the sample size is < 300 (29), and a "very good" rating on a graded scale of sample sizes for scale development (31).
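The convergent and divergent validity coefficients reported above are simple Pearson correlations between the CovidPSS-21 total and the totals of the comparison measures (MSPSS, SRQ-20, HADS anxiety and depression). The sketch below shows that computation with scipy; the column names and the data frame `df` are assumptions for illustration, not the study's variable names.

```python
import pandas as pd
from scipy.stats import pearsonr

def validity_correlations(df: pd.DataFrame) -> pd.DataFrame:
    """Correlate the scale total with each comparison measure.

    Assumed columns: 'covid_pss_total', 'mspss_total', 'srq20_total',
    'hads_anxiety', 'hads_depression'. A positive r with the MSPSS supports
    convergent validity; small negative r with SRQ-20/HADS supports divergent
    validity, as reported in the text.
    """
    comparisons = ["mspss_total", "srq20_total", "hads_anxiety", "hads_depression"]
    rows = []
    for col in comparisons:
        r, p = pearsonr(df["covid_pss_total"], df[col])
        rows.append({"measure": col, "r": round(r, 3), "p": round(p, 4)})
    return pd.DataFrame(rows)

# Usage: validity_correlations(df) on the assembled study data frame
```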
The development of 43 items in the item generation stage was more than twice the number of final CovidPSS-21 items, in line with existing standards suggesting that the initial pool of items should be at least twice as long as the desired final scale (32,33). The data skewness value of −0.486 indicated a fairly symmetrical distribution (34). However, the scale kurtosis was 1.1, just marginally outside the range of −1.00 to +1.00, suggesting a slightly peaked distribution. Additionally, when the skewness and kurtosis values were divided by their standard errors, the resulting values fell within the range of −1.96 to +1.96, indicating a normal distribution (19). No respondent scored the lowest possible score of 21, suggesting no floor effect. However, 64 (14.35%) respondents had the maximum score of 105, suggesting the presence of a ceiling effect, which was still within the historically acceptable limit for floor/ceiling effects of 15% (20), although others have suggested limits of up to 10% (35) and even 5% (36). The minimum score was 42 and the maximum was 105. This minimal ceiling effect, however, raised the mean score of the CovidPSS-21. The higher mean score on the CovidPSS-21 could be attributed to a few factors: (1) there were no financial implications of the inpatient hospital facility provided to the patients, particularly in the context of economic slowdown, job loss, and uncertainty about future occupational opportunities, and we assume that free treatment in a government hospital (compared with the huge cost of COVID-19 treatment at any private hospital) may actually boost satisfaction; (2) the score could be hospital specific, as this was the best-rated medical college and hospital in the country; and (3) telephonic interactions with a few patients indicated that single rooms with attached bath-toilets helped them maintain their privacy and a feel-good factor when they compared themselves with patients in other key government hospitals functioning largely with dormitory facilities. Also, the hospital's high reputation and the various systems in place could perhaps have shaped patients' perceptions, maintaining their trust and hope for positive disease and treatment outcomes, which are key to patient satisfaction during the COVID-19 pandemic (37,38).

Item reduction of the CovidPSS-21 focused on applicability in routine COVID-19 care in designated COVID-19 care centers of a large hospital. Therefore, three item-reduction steps were followed (39,40): (1) eliminating redundancy, or avoiding overlap between items, by targeting inter-item correlations ≥ 0.70; (2) safeguarding a sufficient scale distribution by avoiding floor effects (our scale had no floor effects); and (3) establishing consistency by removing items not loading ≥ 0.40 on one of the newly found factors (41,42). This three-factor model was also reflected in the scree plot, where three points of inflection above an eigenvalue of 1.0 were noted. Although we did not calculate the item difficulty index, during the pilot testing of the CovidPSS-21 all the respondents reported the difficulty level of items as moderate (42). The three-factor model that emerged in the present study demonstrated the multidimensionality of the scale. Earlier studies have also reported the importance of items in multiple areas that contribute to patient satisfaction (26).
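The distributional checks discussed above (floor/ceiling percentages against the 15% benchmark, and skewness/kurtosis divided by their standard errors against ±1.96) can be written out as below. The standard-error formulas used here are the common large-sample approximations sqrt(6/n) and sqrt(24/n), which is an assumption on our part; SPSS reports slightly different exact values.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def distribution_checks(total_scores, min_score=21, max_score=105, benchmark=0.15):
    x = np.asarray(total_scores, dtype=float)
    n = x.size

    # Floor/ceiling effects: share of respondents at the scale extremes
    floor_pct = np.mean(x == min_score)
    ceiling_pct = np.mean(x == max_score)

    # Skewness and excess kurtosis, with approximate standard errors
    g1, g2 = skew(x), kurtosis(x)          # kurtosis() returns excess kurtosis
    se_skew, se_kurt = np.sqrt(6 / n), np.sqrt(24 / n)
    z_skew, z_kurt = g1 / se_skew, g2 / se_kurt

    return {
        "floor_pct": round(100 * floor_pct, 2),
        "ceiling_pct": round(100 * ceiling_pct, 2),
        "floor_ok": floor_pct <= benchmark,
        "ceiling_ok": ceiling_pct <= benchmark,
        "normal_by_z": abs(z_skew) < 1.96 and abs(z_kurt) < 1.96,
        "skew": round(g1, 3),
        "kurtosis": round(g2, 3),
    }

# Usage: distribution_checks(df["covid_pss_total"]) on the scale totals
```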
Psychometric Properties: A Cronbach's α of 0.70 has often been regarded as an acceptable threshold for reliability; however, values between 0.80 and 0.95 are preferred for the psychometric quality of scales (41-44). Owing to the unavailability of a patient satisfaction scale for COVID-19-specific health care facilities, we had to use the MSPSS, which is conceptually close to satisfaction but not equivalent. A traditional patient satisfaction scale for hospitalized patients (e.g., the PSQ-18 (45)) could have been used for better convergent validity, but keeping in mind the social isolation and restrictions on physical communication, we thought a social support scale would be a better instrument. The CovidPSS-21 also had high item content validity (CVR = 0.91) and demonstrated adequate convergent and divergent validity. The content validity reflected an expert consensus on evidence of content relevance, representativeness, and technical quality (46). Thus, it can be considered a robust scale, with psychometric properties including reliability and validity demonstrated on a relatively large, representative sample across a variety of measures, as suggested by Cook and Beckman (47), Salmond (48), DeVellis (49), and Pittman and Bakes (50). This scale also provides interpretable data and scientifically robust results, as suggested by Cano and Hobart (51). For measuring divergent validity, we used entirely different constructs: psychological distress as measured by the SRQ and anxiety-depression as measured by the HADS. This follows the suggestion in the literature that a new instrument should be differentiated from behavioral manifestations of other constructs that, on theoretical grounds, can be expected not to be related to the construct underlying the new instrument (42). Both scales are psychometrically sound and very commonly used globally.

In analyzing the discriminative responsiveness of the scale, we examined the ability of the instrument to make distinctions between groups of respondents. The finding of no statistically significant gender differences on the CovidPSS-21 was in contrast to studies that reported females to be significantly less satisfied than males, attributed to female inpatients generally having less positive experiences in the areas of communication about medicines, discharge information, and facility cleanliness (52-55). Extrapolating from this finding, it is safe to say that, perhaps because of the COVID-19-related no-touch, no-contact safety protocols, communication with the treating team was minimal, the modes of communication were the same for both genders, and facility cleanliness received great emphasis from the hospital; therefore, there were no gender differences on the CovidPSS-21. Interestingly, the finding that younger people (here, aged 16-30 years) were significantly less satisfied than patients aged 31-45 years, which is also supported by earlier patient satisfaction scale development studies, demonstrated that the CovidPSS-21 has discriminative responsiveness (42,53,54).

There can be a few advantages of using the CovidPSS-21 during the COVID-19 situation. First, one specific advantage of the CovidPSS-21 over other traditional measures of patient satisfaction is that it contains items related entirely to COVID-19-specific situations (e.g., social isolation, difficulty in physical communication, no family attendant, no visitors, no financial implications, etc.) enforced by the hospital as per COVID-19 safety guidelines (the individual items and their factor loadings are given in Table 1).
This removes the risk of introducing rater biases, thereby accurately capturing the first-hand experience of COVID-19-specific hospital services among hospitalized patients. Second, since it has sound psychometric properties, it can serve as an objective indicator of tasks for improvement in a specific domain with low mean scores. Third, since the items are not hospital specific, both private and government hospitals can use this scale to improve their services and facilities.

Although 446 was a reasonable sample size for a 21-item scale, scale generalizability can be achieved through larger samples in government and private hospitals. The ceiling effect could have been minimized by including samples from other government-owned hospitals. Physicians could have been included in the key interviews to widen the scope of the patient satisfaction items. Because of discharge and COVID-19-specific protocols, we could not administer the CovidPSS-21 repeatedly to the same group of patients to eliminate the risk of systematic errors; future validation studies may address this, for example through online data collection. Future studies can also establish cross-cultural validity in their own sociocultural contexts and undertake a confirmatory factor analysis to verify the factor structure.

The CovidPSS-21 is a psychometrically sound scale for measuring satisfaction among hospitalized COVID-19 patients. It can be useful for both government and private hospitals in identifying gaps in their services, facilities, and treatment so as to improve their COVID-19-specific health care facilities and ultimately broaden the pool of satisfied patients.
Surgery in COVID-19 patients: operational directives
Impact of COVID-19 pandemic on patient satisfaction and surgical outcomes: a retrospective and cross sectional study
Physician communication and patient adherence to treatment: a meta-analysis
Switching doctors: predictors of voluntary disenrollment from a primary physician's practice
Patient satisfaction survey as a tool towards quality improvement
Application and preliminary outcomes of remote diagnosis and treatment during the COVID-19 outbreak: retrospective cohort study
Incorporating SPACES recommendations to the COVID-19 ward care approach at the Royal Bournemouth Hospital
SARS-CoV-2 infection and COVID-19: the lived experience and perceptions of patients in isolation and care in an Australian healthcare setting
Rapid implementation of a COVID-19 remote patient monitoring program
Validation of a patient satisfaction questionnaire in primary health care
Assessing the practicing physician using patient surveys: a systematic review of instruments and feedback methods
The Patient Satisfaction Questionnaire Short Form (PSQ-18)
The Picker patient experience questionnaire: development and validation using data from in-patient surveys in five countries
A quantitative approach to content validity
Improving Survey Questions: Design and Evaluation
A user's guide to the Self-Reporting Questionnaire (SRQ). Geneva: Division of Mental Health, World Health Organization
The hospital anxiety and depression scale
The multidimensional scale of perceived social support
IBM SPSS Statistics for Windows, released 2012
Revised Kuppuswamy's socioeconomic status scale: explained and updated
The SAGE Dictionary of Statistics
Quality criteria were proposed for measurement properties of health status questionnaires
Sample size in factor analysis
An item selection procedure to maximize scale reliability and validity
Health measurement scales: a practical guide to their development and use
Patient satisfaction - does it matter? Qual Assur Health Care
Different combining process between male and female patients to reach their overall satisfaction
Patient satisfaction measurement: current issues and implications
Predictors of patient satisfaction with hospital health care
Relation of sample size to the stability of component patterns
Factor-analytic methods of scale development in personality and clinical psychology
A First Course in Factor Analysis
Handbook of Psychological Testing
Handbook of psychology: research methods in psychology
Concepts in the analysis of qualitative data
Interviews and focus groups
Psychometric properties of the PROMIS physical function item bank in patients with spinal disorders. Spine (Phila Pa 1976)
Measuring patient satisfaction: a cross sectional study to improve quality of care at a tertiary care hospital. Healthline
Patient satisfaction: the new rules of engagement. The Health Care Blog
Methodological approaches to shortening composite measurement scales
Introduction to Psychometric Theory
Coefficient alpha and the internal structure of tests
Part B: Results regarding scales constructed from the Patient Satisfaction Questionnaire and measures of other health care perceptions
Best practices for developing and validating scales for health, social, and behavioral research: a primer
Current concepts in validity and reliability for psychometric instruments: theory and application
Randomized controlled trials: methodological concepts and critique
Scale Development: Theory and Applications
Measurement and instrument design
The problem with health measurement
A patient survey system to measure quality improvement: questionnaire reliability and validity
Patient satisfaction surveys subsequent to hospital care: problems of sampling, non-response and other losses
Response to questionnaire

We acknowledge the AIIMS intra-mural funding for this research work. Author contributions: Dr. Laxmi Tej Wunadavalli - data collection and manuscript editing; Dr. Sujata Satapathy - conceptualization, literature review, and manuscript reading; Dr. Sujata Satapathy - conceptualization, literature review, data analysis, supervision of data entry, manuscript writing, and editing; Dr. Sheetal Singh - scale development process; Dr. Angel Rajan Singh - scale development process; Dr. Rakesh Kumar Chadda; Shraddhesh Kumar Tiwari - data entry, data analysis, and manuscript literature review; Dr. Vijay Prasad Barre - scale development process. The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the All India Institute of Medical Sciences New Delhi (grant number A-Covid 61). Sujata Satapathy https://orcid.org/0000-0002-7315-7581