
RCPHN : Research in Community and Public Health Nursing

Review Article
A Systematic Review of Questionnaires Measuring eHealth Literacy
Jung-Won Ahn1, Mi Young Kim2
Research in Community and Public Health Nursing 2024;35(3):297-312.
DOI: https://doi.org/10.12799/rcphn.2024.00752
Published online: September 30, 2024

1Associate Professor, Department of Nursing, Gangneung-Wonju National University, Wonju, Korea

2Associate Professor, College of Nursing, Hanyang University, Seoul, Korea

Corresponding author: Mi Young Kim College of Nursing, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, Korea Tel: +82-2-2220-0704 E-mail: miyoung0@hanyang.ac.kr
• Received: August 8, 2024   • Revised: September 9, 2024   • Accepted: September 9, 2024

© 2024 Korean Academy of Community Health Nursing

This is an Open Access article distributed under the terms of the Creative Commons Attribution NoDerivs License. (https://creativecommons.org/licenses/by-nd/4.0) which allows readers to disseminate and reuse the article, as well as share and reuse the scientific material. It does not permit the creation of derivative works without specific permission.

  • Purpose
    This review aims to summarize the characteristics of currently used questionnaires measuring eHealth literacy and assess the quality of their psychometric properties in self-reported assessments within community settings.
  • Methods
    The systematic analysis was conducted using the COnsensus-based Standards for the selection of health Measurement INstruments checklist to evaluate the methodological quality of studies on measurement properties.
  • Results
    A total of 21 studies, including 19 questionnaires, were reviewed. The findings indicated that the quality of psychometric assessments for eHealth literacy was generally rated as 'good,' with most studies addressing multiple aspects of reliability and validity. Internal reliability, content validity, hypothesis testing, and responsiveness were particularly well-supported, each receiving over 10 sufficient ratings. However, there was limited evidence regarding measurement errors, test-retest reliability, criterion validity, and analyses of floor and ceiling effects.
  • Conclusion
    This study supports the informed selection of eHealth literacy measurement tools and improved reporting of their validity and reliability, thereby strengthening the credibility of future research.
Introduction
The concept of health literacy has expanded since it was first defined as the level of ability to obtain, process, and understand basic health information and services necessary for individuals to make appropriate health decisions [1,2]. The World Health Organization identified health literacy as a key competency for promoting individual health, and later emphasized the need for a strategy to expand it into an organizational capacity that enables health equity [3]. Since the 2000s, the United States, Europe, Australia, and China have established and promoted health literacy policies at the national level [4-6]. In Korea, eHealth literacy was included as a key task in the 5th Comprehensive National Health Promotion Plan (2030) announced in 2021 [7].
Recent advancements in information and communication technology, alongside the Fourth Industrial Revolution, have introduced a paradigm shift in medical services. The integration of big data, artificial intelligence, the Internet of Things, and medical technology has increased society's reliance on these technologies [8]. In the past, health information was primarily obtained from medical professionals and textbooks. However, with the rapid development of information and communication technology, the ability to search for, understand, and utilize health information online using computers and mobile devices has become crucial [8,9]. This shift has given rise to new terms such as eHealth, digital health, and mHealth, which describe the medical service environment leveraging these technologies [9,10]. eHealth involves providing health-related information online and using information and communication technologies, such as the Internet, to manage health and chronic diseases [10]. As the use of these technologies in healthcare grows, transitioning from traditional health literacy to eHealth literacy has become essential.
eHealth literacy encompasses the ability to effectively access, evaluate, understand, and apply health information from the Internet to address health issues. It includes effective search techniques, such as assessing quality, reliability, validity, and sources, as well as ethical considerations, like protecting one's own information, respecting and protecting others' information, and being a responsible digital citizen [9,10]. eHealth literacy extends beyond basic functional health literacy to include communication skills, interactive literacy, and critical literacy that help individuals interpret diverse information and derive meaningful insights [10,11].
In early studies, eHealth literacy was measured using unstructured questions about whether people could understand information from media such as computers and TV [12]. The first published structured self-report measurement tool was the eight-item eHealth Literacy Scale (eHEALS). Norman and Skinner stated that six abilities—digital literacy, health literacy, information literacy, scientific literacy, media literacy, and computer literacy—interact to enable people to effectively utilize health information [10,13].
Since the development of eHEALS, the use of mobile devices has become more widespread. In response, the scope of measurement tools has recently expanded to include interactive technologies utilizing information and communication technology [11,14], as well as digital content creation [15], in the assessment of eHealth literacy. Lee and colleagues conducted a systematic review of eHealth literacy measurements published up to March 2021, which included seven measurement tools and covered studies exploring various factor structures of the widely used eHEALS [16]. Although it has been less than 20 years since eHEALS was first published, the concept of digital literacy is rapidly evolving with the advent of advanced digital technologies, such as generative artificial intelligence like the Chat Generative Pre-trained Transformer (ChatGPT), which is quickly becoming accessible to the general population. Consequently, ongoing research on eHealth literacy is anticipated both nationally and internationally.
Therefore, it is crucial to choose a measurement tool that accurately assesses the relevant concepts when evaluating eHealth literacy in the community population. Utilizing the evaluation results can aid in developing appropriate interventions and policies. In this study, we conducted a qualitative evaluation of the characteristics and psychometric properties of tools designed to measure eHealth literacy across all age groups in the community. We used the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist [17] as a framework for evaluating the measurement properties of self-report questionnaires. The findings are intended to offer guidance for selecting appropriate tools in future research and to provide foundational data for exploring research directions in the development and evaluation of eHealth literacy measurement tools.
Methods
Study Design
This study is a systematic review of research on questionnaires developed and evaluated to measure eHealth literacy, conducted in accordance with the COSMIN guidelines and the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) [17,18]. We focused on the four key elements suggested for systematic reviews of patient-reported outcome measures: 1) the construct; 2) the population; 3) the type of instrument; and 4) the measurement properties of interest [17,18]. The construct under review was eHealth literacy, and the population included adolescents and adults living in the community. The type of instrument was self-report questionnaires, encompassing all relevant measurement properties of these tools.
Eligibility Criteria
The inclusion criteria for this systematic review were as follows: (1) studies that reported the development and validation process of a self-reported questionnaire measuring eHealth literacy of adolescents and adults within community settings; (2) studies that re-evaluated the validity of an existing tool and categorized the domains differently from previous studies; (3) studies published in English or Korean; and (4) studies published in academic journals. The exclusion criteria were as follows: (1) studies for which the full text could not be found; (2) studies that measured only health literacy, excluding digital attributes; (3) studies that measured only digital literacy, excluding health-related attributes; (4) studies that measured eHealth literacy targeting patients with specific diseases; (5) studies that aimed to verify the cross-cultural validity of an existing tool; and (6) review studies.
Data Collection
Data collection included a comprehensive search of both domestic and international databases for literature published up until June 2024. The search was limited to English and Korean languages and used keywords and MeSH terms relevant to prior research. Databases searched included PubMed, CINAHL, Google Scholar, and RISS, with continuous manual searches for newly identified literature. Keywords such as ‘digital health literacy,’ ‘ehealth,’ ‘health literacy,’ ‘mhealth,’ ‘measurement development,’ and ‘psychometrics’ were employed. The search strategy used Boolean operators (AND/OR) and truncation (*) to enhance the sensitivity and specificity of the search. The search strategy outlined in the systematic review protocol for eHealth literacy measurement tools [19] was applied, along with the measurement properties and exclusion filters for PubMed [20] from COSMIN, to identify any tools that may have been omitted in previous search results.
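For illustration only, a Boolean query of the kind described above (a hypothetical example, not the registered search string) might look like the following in PubMed:

```
("digital health literacy" OR ehealth OR mhealth OR "health literacy")
AND (psychometric* OR validat* OR "measurement development" OR questionnaire)
```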
Data Extraction
A total of 413 studies were identified through various academic sites: 83 from PubMed, 32 from CINAHL, 269 from Google Scholar, 13 from RISS, and 16 from manual searching. After excluding 47 duplicates, 366 studies were selected in the first round. Following a review of titles and abstracts, 318 studies were excluded for not meeting the research topic and selection criteria, leaving 48 studies for the second round. After reviewing the full texts of 48 studies, we excluded 27, including those that were translated versions of the original tool or had the same construct validity results as the original tool (n=12), studies that measured eHealth literacy for specific diseases such as diabetes or COVID-19 (n=5), studies that did not report enough statistical analysis results or failed to report the questionnaire development process (n=3), and review papers (n=5). Ultimately, we selected 21 studies for the review (Figure 1).
To examine the characteristics of the tools included in the final analysis, data were extracted on the author and publication year, instrument name, target population and location, total number of items, domains and the number of items in each domain, scoring system, reliability, and validity. The extracted data were organized into a table format. To ensure the reliability of the data description, the information was compiled according to a standardized data description format, and its consistency was reviewed independently by two researchers (Table 1).
Data Evaluation
The quality assessment of the psychometric properties of the questionnaire was evaluated as ‘sufficient (+)’, ‘insufficient (-)’, or ‘indeterminate (?)’ based on the updated COSMIN checklist criteria [17,18]. First, internal consistency was evaluated as (+) if Cronbach's alpha(s) were ≥ 0.70 for each unidimensional scale or subscale, and (-) if they were lower. Second, reliability was evaluated using intra-class correlation (ICC), test-retest reliability, or weighted Kappa, with a threshold of ≥ 0.70 for a (+) rating. Third, measurement error was assessed following several criteria: whether the two tests were conducted independently, whether the time interval between tests was appropriate, whether the subject's condition and the test environment remained stable without changes during the interval, and whether the smallest detectable change or limits of agreement were below the minimal important change.
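As a concrete illustration of the internal consistency criterion, the minimal Python sketch below (hypothetical data and helper names, not part of the review methodology) computes Cronbach's alpha for one subscale and applies the ≥ 0.70 rule:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = item_scores.shape[1]
    sum_item_var = item_scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

def cosmin_rating(estimate: float, threshold: float = 0.70) -> str:
    """'+' if the reliability estimate meets the COSMIN threshold, '-' otherwise."""
    return "+" if estimate >= threshold else "-"

# Hypothetical 5-item subscale (1-5 Likert) answered by six respondents.
scores = np.array([
    [4, 4, 5, 3, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 5, 4, 4, 4],
    [1, 2, 2, 1, 2],
])
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}, rating = {cosmin_rating(alpha)}")
```

The same threshold logic applies to ICC and weighted kappa estimates under the reliability criterion.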
Fourth, content validity was evaluated based on the clarity of the measurement goal, the definition of the target population, the measurement concept, item selection, and the understandability of the tool. It was also assessed whether subject and expert validity investigations were included. Fifth, construct validity was evaluated by examining whether exploratory factor analysis (EFA) and/or confirmatory factor analysis (CFA) were conducted within the framework of classical test theory, and whether Rasch analysis was performed within the framework of item response theory. For EFA, a rating of (+) was given if the explanatory power was 50% or higher. For CFA, a rating of (+) was given if the Comparative Fit Index (CFI) or Tucker-Lewis Index (TLI) was greater than 0.95, the Root Mean Square Error of Approximation (RMSEA) was less than 0.06, or the Standardized Root Mean Square Residual (SRMR) was less than 0.08. Sixth, in hypothesis testing for construct validity, evaluations were based on whether hypotheses were set in advance and whether the direction and size of the hypotheses were presented and consistent.
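The CFA thresholds above can be stated compactly. The sketch below encodes the rating rule exactly as written, returning (+) when any one reported index meets its criterion and (?) when no index is reported (the function name is illustrative):

```python
from typing import Optional

def rate_cfa_fit(cfi: Optional[float] = None, tli: Optional[float] = None,
                 rmsea: Optional[float] = None, srmr: Optional[float] = None) -> str:
    """Rate CFA model fit: '+' if CFI or TLI > 0.95, RMSEA < 0.06, or SRMR < 0.08;
    '?' (indeterminate) if no fit index was reported; '-' otherwise."""
    if all(v is None for v in (cfi, tli, rmsea, srmr)):
        return "?"
    sufficient = ((cfi is not None and cfi > 0.95) or
                  (tli is not None and tli > 0.95) or
                  (rmsea is not None and rmsea < 0.06) or
                  (srmr is not None and srmr < 0.08))
    return "+" if sufficient else "-"

print(rate_cfa_fit(cfi=0.99, tli=0.98, rmsea=0.06))             # '+' (CFI and TLI pass)
print(rate_cfa_fit(cfi=0.90, tli=0.89, rmsea=0.10, srmr=0.09))  # '-' (no index passes)
```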
Seventh, cross-cultural validity/measurement invariance was assessed to determine whether the translated measurement tool accurately reflected the items of the original tool, considering cultural characteristics. A (+) rating was given if there were no differences in factors such as age, gender, and language between groups, or if the differential item functioning (DIF) was less than 0.02 (McFadden's R²). Eighth, criterion validity was evaluated based on the degree of correlation with a gold standard. A rating of (+) was assigned if the correlation with the gold standard was 0.70 or higher, or if the area under the curve (AUC) was 0.70 or higher.
Responsiveness was assessed by evaluating whether hypotheses regarding time intervals, directions, and magnitudes were set in advance. A (+) rating was given if the correlation between tests was 0.50 or higher, if the degree of agreement between results and hypotheses testing was 75% or higher, or if the AUC was 0.70 or higher. Additionally, a (+) was assigned if the result value conformed to the hypothesis or if a cut-off value was determined and met the criteria. The evaluation of floor and ceiling effects was based on whether each was assessed at 15%.
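To make the floor and ceiling criterion concrete, the sketch below flags an effect when more than 15% of respondents obtain the lowest or highest possible total score (the conventional reading of the 15% rule; the data and helper name are hypothetical):

```python
def floor_ceiling_flags(total_scores: list, min_score: float,
                        max_score: float, cutoff: float = 0.15) -> dict:
    """Flag floor/ceiling effects when the share of respondents at the minimum
    or maximum possible total score exceeds the 15% cutoff."""
    n = len(total_scores)
    floor_share = sum(1 for s in total_scores if s == min_score) / n
    ceiling_share = sum(1 for s in total_scores if s == max_score) / n
    return {"floor_effect": floor_share > cutoff,
            "ceiling_effect": ceiling_share > cutoff}

# Hypothetical 8-item scale scored 1-5, so totals range from 8 to 40.
totals = [8, 8, 8, 12, 20, 25, 31, 38, 40, 40]
print(floor_ceiling_flags(totals, min_score=8, max_score=40))
# {'floor_effect': True, 'ceiling_effect': True}  (30% at floor, 20% at ceiling)
```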
Results
Characteristics of Included Questionnaires
The general characteristics of the questionnaires included in the study are shown in Table 1. A total of 21 studies were reviewed, covering 19 distinct questionnaires. Among these, three studies (A3, A12, A18) focused on the eHEALS developed by Norman and Skinner, analyzing it as a single factor [13], as two factors (information acquisition and application) [21], or as three factors (awareness, skill, and evaluation) [22]. The studies were published from 2006 onward: one by 2010, four more by 2015, ten more by 2020, and six between 2021 and June 2024. The number of domains varied from one to nine, and the diversity in domain names made quantification challenging, as they were often divided or grouped into multiple categories. Many questionnaires included domains such as ‘access to digital services,’ ‘evaluating mHealth information,’ ‘digital/operational skills,’ and ‘understanding.’ Other domains included ‘communication,’ ‘motivation,’ ‘privacy,’ ‘application,’ ‘content creation,’ and ‘confidence.’
The number of items in each questionnaire ranged from 8 to 35. Participants in the studies were between 13 and 93 years old and lived in the community. Research was conducted in the United States, China, and the United Kingdom with approximately 3 to 4 studies in each country, followed by Korea, Taiwan, and Denmark with 2 studies each. Most questionnaires used Likert scale responses, with 12 studies employing a 5-point Likert scale. Regarding reliability, 21 studies reported Cronbach’s α, 4 reported test-retest reliability, 3 reported composite reliability, and 1 reported ICC. For validity, 14 studies reported EFA, 11 reported CFA, 6 reported both EFA and CFA, 1 study used Rasch analysis, and 1 study did not report the validity results.
Evaluation of Psychometric Properties
The results of the quality assessment of the psychometric properties of the eHealth literacy measurement tools are shown in Table 2. Cronbach’s α was reported in all studies for internal consistency assessment. Of these, 11 studies reported Cronbach’s α of .70 or higher for each domain, while 10 studies reported results below .70 or provided reliability data only for the entire questionnaire, which were evaluated as unsatisfactory. Regarding reliability, five studies reported test-retest reliability or ICC, with only two studies (A7, A9) meeting the criterion of .70 or higher [23,24]. No studies reported measurement error.
Content validity was reported in 15 studies, but two studies (A11, A12) did not meet the criteria, lacking adequate evaluation through external experts or pilot studies despite meeting the measurement purpose and research targets [13,25]. All studies except one study (A15), which did not report structural validity [26], conducted and reported EFA, CFA, or both. A total of 11 studies were rated as (+) for structural validity, while 9 studies were rated as (-). Statistical issues included factor loadings below .50 in four studies (A3, A6, A14, A17) [21,27-29], EFA explanatory power below 50% in one study (A1) [30], insufficient statistical analysis results in two studies (A8, A11) [25,31], and unmet model fit criteria in two studies (A16, A20) [15,32]. For hypothesis testing, 14 studies were rated as (+), 3 as (-), and 4 as (?). In studies rated as (+), hypotheses were set based on variables such as gender, age, education level, health level, or criterion validity, and the results supported these hypotheses.
Cross-cultural validity was rated as (+) in 6 studies, (-) in 7 studies, and (?) in 8 studies. Studies that met the criteria verified differential item functioning for age and gender and included procedures such as translation and back-translation and cognitive interviews for foreign language versions. Criterion validity was reported in six studies (A7, A9, A13, A19-21) [14,15,23,24,33,34], with one study (A13) [14] reporting criterion validity but receiving a (-) rating. Responsiveness was rated as (+) in 12 studies, (-) in 7 studies, and (?) in 2 studies. Six studies reported floor and ceiling effects (A3, A5-7, A10, A19) [21,23,27,33,35,36], while 15 studies did not report these effects.
In summary, six instruments received five or more sufficient (+) ratings, with A7 having the most, followed by A9, A21, A6, A19, and A20 [15,24,27,33,34]. Five instruments received zero or one sufficient (+) rating, specifically A1, A8, A11, A14, and A15 [25,26,28,30,31]. Among these, A15 had one insufficient (-) rating, A8 and A14 each had three insufficient (-) ratings, and A1 and A11 each had five insufficient (-) ratings (Appendix 1).
Discussion
In this study, we systematically examined the characteristics and psychometric properties of 19 questionnaires from 21 studies, developed to measure eHealth literacy through self-reported assessments targeting the general population in community settings. The COSMIN checklist was utilized for the evaluation [17,18]. Analysis of the publication years revealed that five studies, including the eHEALS developed in 2006 [13], were published in the first ten years after that initial publication, ten studies in the following period up to 2020, and six studies in the past four years. This indicates growing interest in digital literacy and in research on measurement tool development. Notably, many of these studies were published in countries such as the United States, China, and the United Kingdom, likely due to the implementation of related policies in these regions [3,4,6,7]. Since the last systematic review of eHealth literacy instruments, papers have been published in several countries, including Korea [15,23], China [24,34], Indonesia [26], and Belgium [32]. This shift confirms that research, which was previously conducted primarily in the United States, Canada, and European countries, is now being actively pursued in Asia.
Some studies focused on adolescents and young adults [13,21,24-26], but most included all age groups, demonstrating that the newly developed digital literacy tools are applicable across various age groups in the community. The sample sizes in these studies were sufficient for statistical reliability and validity verification [37]. The domains of the tools, as presented in Table 1, showed varying levels of distinction, making categorization complicated. Questionnaires developed within the past 10 years have diversified, encompassing broader concepts of eHealth literacy. Consequently, various domains have emerged, such as ‘privacy’ [22,33], ‘mobile-based patient–doctor communication’ [25,26], and ‘content creation’ [27,28].
eHEALS was the first questionnaire developed among eHealth literacy measurement instruments; it has been translated into 18 languages and used in over 26 countries [16]. In this study, we included three studies [13,21,22] that analyzed eHEALS as having one to three factors. For the single-factor study, both reliability and validity were sufficient. However, in the EFA of the two-factor study, the factor loadings were insufficient, ranging from .32 to .91, although internal reliability was sufficient [21]. In the study comparing the factorial validity of the three-factor structure among baby boomers in the US, UK, and New Zealand, structural validity was supported by CFA. However, internal reliability was insufficient, as the authors reported Cronbach’s α only for the entire tool and did not demonstrate it for each factor [22].
Boston University operates the Health Literacy Tool Shed (tuftsmedicine.org), a database of health literacy measurement tools that also covers eHealth literacy instruments. As of July 2024, 280 health literacy tools were registered, nine of which were eHEALS-related, including versions for specific targets and translations [38]. Given that not all published tools are listed, the actual number of existing health literacy and digital literacy measurement tools is likely higher. While developing various measurement tools is crucial, the significance of this study lies in building a comprehensive database and analyzing the tools through systematic reviews, enabling future researchers to select appropriate tools and conduct suitable validation analyses.
Among the 10 quality assessment areas suggested by COSMIN, the most consistently reported psychometric index for the eHealth literacy measurement tools was internal consistency, with Cronbach’s α reported in all studies. However, three studies reported only the reliability of the entire tool without reporting by subscale, and seven did not meet the COSMIN standard of ≥.70 for each unidimensional subscale [17,39]. On the other hand, test-retest reliability and ICC, which indicate the stability of an instrument, and measurement error were reported in few or no studies. This suggests that future questionnaire development and validation studies should report Cronbach’s α for each subscale, as well as test-retest results and measurement error.
In terms of validity, construct validity and content validity were reported in most studies. For construct validity, EFA, CFA, or both were performed, except in one study [26]. Quality assessment results were rated insufficient where statistical analysis results were not sufficiently reported; in most such cases, factor loadings by item were not reported. Where factor loadings were presented in a figure, low resolution made it difficult to read the numbers. Therefore, it is necessary to include factor loadings by item as an appendix or to provide figures with adequate resolution. For instruments rated insufficient due to low factor loadings or poor model fit, future validation studies using the same tool are necessary to re-evaluate psychometric quality, as there are currently no comparative studies available.
The quality assessment of content validity showed that most studies explained the aim of instrument use and the target population, but two studies lacked a sufficient process or explanation of expert validation or pilot surveys that could confirm the relevance, comprehensiveness, and comprehensibility of the items [40], and six studies did not provide any such explanation. Criterion validity was reported in only six studies. Since criterion validity requires a gold-standard tool or a short version of the instrument, it is often difficult to find a suitable comparator [41]. Accordingly, many studies instead establish hypotheses about relevant variables based on prior research and then verify those hypotheses. In this study, quality assessments for hypothesis testing were conducted, with 14 studies meeting the criteria, while four did not provide any hypotheses.
Only five studies met the quality assessment criteria for cross-cultural validity. Given the nature of digital-related topics, differences in results by age are expected; differential item functioning analysis can therefore be conducted to determine whether results differ by gender, nationality, or language, apart from age [35]. In most cases, however, the relevant content was either missing or insufficient. Some studies considered and reported floor and ceiling effects, but only six did so, indicating a need for more active reporting.
One limitation of this study is the potential for publication bias, as it cannot rule out biases from sources such as dissertations, conference presentations, and research reports. Furthermore, apart from eHEALS, no questionnaire in this review was evaluated in more than one study, limiting opportunities for a comprehensive quality appraisal and restricting the potential to make consistent recommendations for specific measurement tools.
We recommend using measurement tools such as A6, A7, A9, and A19-21, which have shown strong methodological evaluation results in various studies [15,23,24,27,33,34]. Additionally, conducting systematic reviews that compile the findings from these studies is recommended. As more research is conducted, quality appraisals of the same tool will become feasible. An exception is eHEALS, which was included in three studies in this review. It has demonstrated satisfactory quality in previous research [16] and is easy to apply due to its eight-item format. However, its limitation in assessing only a narrow scope of eHealth literacy should not be overlooked [11].
Today, eHealth literacy extends beyond basic digital device usage and Internet navigation. It now includes the ability to search for trustworthy information, engage in bi-directional communication [31,34], contribute to content creation within a community [26,33], and protect privacy [31,33]. Therefore, researchers should carefully select eHealth literacy measurement tools that align with their research objectives, rather than relying solely on tools that are easy to use or were widely used in the past. Recent studies have incorporated these broader aspects of eHealth literacy. Notably, the number of items in the six studies with a high number of sufficient (+) ratings ranged from 24 to 44, excluding shortened versions. For example, A7, which had the highest number of sufficient (+) ratings, was derived from a 47-item tool and shortened to 11 items. Since its validity and reliability were well established, it should be used and validated repeatedly in future research. The field of eHealth literacy measurement is expected to expand significantly, particularly toward performance-based methods [37] that go beyond self-report formats. We also recommend review studies of tools focusing on performance measurement rather than self-report formats.
Conclusion
In this study, a systematic review was conducted on questionnaires developed to evaluate eHealth literacy among adolescents and adults in community settings using the 2018 COSMIN guidelines. The review highlighted that while aspects such as construct validity, internal consistency, hypothesis testing, cross-cultural validity, criterion validity, and responsiveness were frequently verified, important areas such as measurement error, test-retest reliability, and floor or ceiling effects were often overlooked. Based on these findings, we provide guidance for the selection and use of appropriate measurement tools in future research. Future studies should address these gaps by adhering closely to the COSMIN methodology, which would enhance the robustness of eHealth literacy assessments. Additionally, psychometric evaluations should aim for comprehensive reporting of measurement properties to ensure the tools are applicable and reliable across diverse populations and settings.

Conflict of interest

The authors declared no conflict of interest.

Funding

This paper was supported by research funds for newly appointed professors of Gangneung-Wonju National University in 2022.

Authors’ contributions

Jung-Won Ahn contributed to conceptualization, data curation, formal analysis, funding acquisition, methodology, visualization, writing-original draft, review & editing, investigation, supervision, and validation. Mi Young Kim contributed to conceptualization, formal analysis, methodology, writing-original draft, review & editing, investigation, resources, supervision, and validation.

Data availability

Please contact the corresponding author for data availability.

Acknowledgments

None.

Figure 1.
PRISMA flowchart of the search process.
Table 1.
Characteristics of Included Studies
[No.] Author (year) Name of instrument Population (N) Contents Reliability Validity
Items Domains (number of items) Scoring system
[A1] Chinn & McCarthy (2013) All Aspects of Health Literacy Scale (AAHLS) Aged 15~82 (38.0±15.4) yrs, (N=146) community in UK 14 Functional health literacy (4)/ 3-point Likert scale, 1 (rarely) to 3 (often) Cronbach's α=.42~.82 EFA, 58.9% of total variance explained; construct validity r=.19~.59
Communicative health literacy (3)/
Critical health literacy (7)
[A2] Fuzhi et al. (2019) Health Information Literacy (HIL) Aged 45~65 (56.4±7.9) yrs, (N=1,132) rural China 14 Health information-seeking (4)/ 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree) Cronbach's α=.51~.74 EFA, 69.8% of total variance explained; construct validity r=.23~.44
Health information evaluation (5)/
Health information consciousness (3)/
Health information application (2)
[A3] Holch & Marwood (2020) The eHealth Literacy Scale (eHEALS) Aged 20.1±2.2 yrs, (N=188) undergraduate students in UK 8 Information acquisition (3)/ 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree) Cronbach's α=.77~.90 EFA, FL=.32~.91, 64.6% of total variance explained
Information application (5)
[A4] Hsu et al. (2014) eHealth Literacy Scale (eHLS) College students† (N=525) in Taiwan 12 Functional eHealth literacy (4)/ 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree) Cronbach's α=.70~.83, CFA, GFI=0.95, AGFI=0.93, CFI=0.95, RMSEA=0.06
Interactive eHealth literacy (4)/
Critical eHealth literacy (4)
[A5] Karnoe et al. (2018) eHealth Literacy Assessment (eHLA) toolkit Aged 18~60+ yrs†, (N=475) outpatient clinic and community in Denmark 44 Functional health literacy (10)/ 4-point set of response options, (different in each domain) Cronbach's α=.59~.94 Rasch analysis Anderson CLR, df, p-value was acceptable
Health literacy self-assessed (9)/
Familiarity with health and health care (5)/
Knowledge of health and disease (6)/
Technology familiarity (6)/
Technology confidence (4)/
Incentives for engaging with technology (4)
[A6] Kayser et al. (2018) The eHealth Literacy Question-naire (eHLQ) Adult 18~64 yrs & older adult 65≤ yrs†, (N=475) from community and health care settings with chronic conditions in Denmark 35 Using technology to process health information (5)/ 4-point Likert scale 1 (strongly disagree) to 4 (strongly agree) Composite reliability coefficient =.75~.87; Cronbach's α=.77~.86 CFA, FL=.43~.84, PPP=0.27
Understanding of health concepts and language (5)/
Ability to actively engage (5)/
Feel safe and in control (5)/
Motivated to engage (5)/
Access to digital services (6)/
Digital services that suit individual needs (4)
[A7] Kim et al. (2024) Mobile based digital Health Literacy Scale-Short Form 11 (MHLS-SF11) Aged 20~60+ yrs†, (N=299) online survey in Republic of Korea 11 Health care (3)/ 4-point Likert scale Cronbach's α=.90~.92; ICC r=.75~.83; test-retest satisfactory CFA, FL=.73~.92, CFI=0.99, TLI=0.98, RMSEA=0.06
Disease prevention (4)/
Health promotion (4)
[A8] Koopman et al. (2014) The Patient Readiness to Engage in Health Internet Technology (PRE–HIT) instrument Aged 27~75 yrs† with chronic disease, (N=200) community medicine clinics in USA 28 Health information need (5)/ 4-point Likert scale 1 (strongly disagree) to 4 (strongly agree) Cronbach's α=.57~.87; test-retest r=.60~.85 EFA, items removed FL<.3, other results not mentioned
Computer/Internet experience, expertise (4)/
Computer anxiety (4)/
Preferred mode of interaction (5)/
Relationship with doctor (3)/
Cell phone expertise (2)/
Internet privacy concerns (2)/
No news is good news (3)
[A9] Liu et al. (2021) eHealth Literacy Scale Web 3.0 (eHLS-Web3.0) Aged 17~25 (20.5±1.4) yrs, (EFA N=393) & (CFA N=741) college students in China 24 Acquisition (8)/ 5-point Likert scale 1 (strongly disagree) to 5 (strongly agree) Composite reliability =.93~.96; Cronbach's α=.91~.96; test-retest r=.86, EFA, FL=.42~.98, 71.7% of total variance explained; CFA, FL=.68~.90, CFI=0.92, TLI=0.91, RMSEA=0.06, SRMR=0.05
Verification (6)/
Application (10)
[A10] Liu et al. (2020) Digital Health Literacy Assessment (DHLA) Aged 20+ yrs†, (N=1,588) community in Taiwan 10 Digital health literacy (6)/ 5-point set of response options Cronbach's α=.77~.92 EFA, FL=.63~.93, 76.6% of total variance explained
Belief in medicine (3)/
Belief in folk remedies (1)
[A11] Niemelä et al. (2012) Everyday Health Information Literacy (EHIL) Aged 16~18 yrs†, (N=217) secondary school in Finland 10 •   EHIL 5-point Likert scale 1 (strongly disagree) to 5 (strongly agree) Cronbach's α=.56 EFA, data is not available
Motivation (4)/
Confidence (3)/
Evaluation (2)/
•   EHIL8‡(1) – difficulties in understanding information
[A12] Norman and Skinner (2006) The eHealth Literacy Scale (eHEALS) Aged 13~21 (15.0±1.2) yrs, (N=664) secondary school in Canada 8 Single domain (8) 5-point Likert scale 1 (strongly disagree) to 5 (strongly agree) Cronbach's α=.88; EFA, FL=.60~.84, 56.0% of total variance explained
item-total correlation r=.51~.76; test-retest r=.40~.68
[A13] Paige et al. (2019) Transactional Aged 40+ (65.0±1.5) yrs, (N=283) from a research registry in USA 18 Functional (4)/ 5-point Likert scale 1 (strongly disagree) to 5 (strongly agree) Cronbach's α=.87~.92 EFA; CFA, FL (lambda=0.53~1.00), CFI=0.95, TLI=0.94, RMSEA=0.07, SRMR=0.06
eHealth Literacy Instrument (TeHLI) Communicative (5)/
Critical (5)/
Translational eHealth literacy (4)
[A14] Petrič et al. (2017) Extended eHealth Literacy Scale (eHEALS-E) Aged 15~90 (40.0±10.3) yrs, (N=644) registered user of online health community in Slovenia 20 Awareness of sources (3)/ 5-point Likert scale 1 (strongly disagree) to 5 (strongly agree) Cronbach's α=.52~.81 EFA, FL=.41~.78; CFA, CFI=0.94, RMSEA=0.06, SRMR=0.06
Recognizing quality and meaning (3)/
Understanding information (4)/
Perceived efficiency (4)/
Validating information (3)/
Being smart on the Net (3)
[A15] Rachmani et al. (2022) Digital Health Literacy Competencies for citizens (DHLC) Aged 13~68 (37.6±12.7) yrs, (N=383) community in Indonesia 26 A: Digital competency 8-point Likert scale 0 (unable to do) to 7 (very easy to do the activity and can solve the problems) Cronbach's α=.97; item-total correlation r=.45~.83 Data is not available
Information & data literacy (1)/
Communication & collaboration (6)/
Digital content creation (1)/
Safety (5)/ Problem solving (5)/
B: Health information literacy
Access (2)/ Management (2)/
Integration (2)/ Evaluation (2)
[A16] Scherrenberg et al. (2023) Digital Health Readiness Questionnaire (DHRQ) Aged 62.6±15.1 yrs, (N=315) outpatient patients in Belgium 15 Digital usage (4)/ 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree) Cronbach's α=.74~.94 CFA, CFI=0.91, TLI=0.90, RMSEA=0.10, SRMR=0.07
Digital skills (5)/
Digital literacy (3)/
Digital health literacy (3)/
*Additional: digital learnability (5)
[A17] Seçkin et al. (2016) electronic-Health Literacy Scale (e-HLS) Aged 18~93 (48.8±16.4) yrs, (N=710) national web-based research panel in the USA 19 Action (13)/ 5-point Likert scale, 1 (never/ strongly disagree) to 5 (always/strongly agree) Cronbach's α=.93, item total correlation r=.11~.78 EFA, FL=.45~.86, 65.0% of total variance explained; CFA, FL=.42~.86, CFI=0.94, NFI=0.92, RMSEA=0.08
Trust (4)/
Communication (2)
[A18] Sudbury-Riley et al. (2017) The eHealth Literacy Scale (eHEALS) Baby boomers, aged ±60 yrs†, (N=996) from the USA, UK & NZ 8 Awareness (2)/ 5-point Likert scale 1 (strongly disagree) to 5 (strongly agree) Cronbach's α=.92(USA), .93(UK), .91 (NZ) CFA, FL=.73~.92, CFI=0.98~0.99, TLI=0.97~0.98, RMSEA=0.04
Skills (3)/
Evaluation (3)
[A19] van der Vaart & Drossaert (2017) The Digital Health Literacy Instrument (DHLI) Aged 18~84 (46.4±19.0) yrs, (N=200) general population in Netherlands 28 Self-reported scale 4-point scale 1 (very easy or never) to 4 (very difficult or often) Cronbach's α=.57~.89; ICC=.49~.81 EFA, FL=.56~.90, 76.0% of total variance explained
Operational skills (3)
Navigation skills (3)
Information searching (3)
Evaluating reliability (3)
Determining relevance (3)
Adding content (3)
Protecting privacy (3)
& Performance-based items (7)
[A20] Yoon et al. (2022) Digital Health Technology Literacy Assessment Questionnaire (DHTL-AQ) Aged 20~84 (46.5±13.0) yrs, (N=590) in Republic of Korea 34 Digital functional literacy Yes/No or 4-point scale 1 (self-implementation) to 4 (do not know), cutoff value 22 out of 34 Cronbach's α=.87~.94 EFA, FL>.50; CFA, FL=.60~.91, CFI=0.82, TLI=0.81,
Knowledge based RMSEA=0.09, SRMR=0.07
Use of an app (9)/
Task based
ICT terms (11)/ ICT icons (9)/
Digital critical literacy
Evaluating reliability & rele-vance of health information (5)
[A21] Zhang & Li (2022) Problem-Based mHealth Literacy Scale (PB-mHLS) Aged 18+ yrs†, (N=552 & 433) mobile phone users in China 33 mHealth desire (3)/ 5-point Likert scale 1 (strongly disagree) to 5 (strongly agree) Composite reliability coefficient=.78~.88; Cronbach's α=.86~.95 EFA, FL=.55~.90, 78.7% of total variance explained; CFA, FL=.64~.90,
mobile phone operational skills (5)/ GFI=0.87, CFI=0.92,
Acquiring mHealth information (4)/ NFI=0.92, RMSEA=0.06
Acquiring mHealth services (3)/
Understanding of medical terms (3)/
Mobile-based patient–doctor communication (5)/
Evaluating mHealth information (6)/
mHealth decision-making (4)

†No mean ages reported; ‡independent question.

Anderson CLR=Anderson conditional likelihood ratio; AGFI=adjusted goodness of fit index; CFA=confirmatory factor analysis; CFI=comparative fit index; df=degree of freedom; EFA=exploratory factor analysis; FL=factor loading; GFI=goodness of fit index; ICC=intraclass correlation coefficient; NFI=normed fit index; PPP=posterior predictive p-value; RMSEA=root mean square error of approximation; SRMR=standardized root mean square residual; TLI=Tucker-Lewis fit index; yrs=years.

Table 2.
Overall Rating and Quality of Evidence of Each Instrument
[No.] Author (year) Measurement Internal consistency Reliability Measurement error Content validity Structural validity Hypotheses testing Cross-cultural validity Criterion validity Responsiveness Floor & ceiling effects
[A1] Chinn & McCarthy (2013) All Aspects of Health Literacy Scale (AAHLS) ? ? + ? ?
[A2] Fuzhi et al. (2019) Health Information Literacy (HIL) ? ? ? + + ? ? + ?
[A3] Holch & Marwood (2020) The eHealth Literacy Scale (eHEALS) + ? ? ? ? ? +
[A4] Hsu et al. (2014) eHealth Literacy Scale (eHLS) + ? ? ? + + ? ? + ?
[A5] Karnoe et al. (2018) eHealth Literacy Assessment (eHLA) toolkit ? ? + + + ? +
[A6] Kayser et al. (2018) The eHealth Literacy Questionnaire (eHLQ) + ? ? + + + ? + +
[A7] Kim et al. (2024) Mobile based digital Health Literacy Scale-Short Form 11 (MHLS-SF11) + + ? + + + + + + +
[A8] Koopman et al. (2014) The Patient Readiness to Engage in Health Internet Technology (PRE–HIT) instrument ? + ? ? ? ? ?
[A9] Liu et al. (2021) eHealth Literacy Scale Web 3.0 (eHLS-Web3.0) + + ? + + + + + + ?
[A10] Liu et al. (2020) Digital Health Literacy Assessment (DHLA) + ? ? + + + ? +
[A11] Niemelä et al. (2012) Everyday Health Information Literacy (EHIL) ? ? ? ? ?
[A12] Norman and Skinner (2006) The eHealth Literacy Scale (eHEALS) + ? + + + ? + ?
[A13] Paige et al. (2019) Transactional eHealth Literacy Instrument (TeHLI) + ? ? + + + ? ?
[A14] Petrič et al. (2017) Extended eHealth Literacy Scale (eHEALS-E) ? ? + ? ? ? ?
[A15] Rachmani et al. (2022) Digital Health Literacy Competencies for citizens (DHLC) ? ? + ? ? ? ? ? ?
[A16] Scherrenberg et al. (2023) Digital Health Readiness Questionnaire (DHRQ) + ? ? + + ? + ?
[A17] Seçkin et al. (2016) electronic-Health Literacy Scale (e-HLS) ? ? ? + ? + ?
[A18] Sudbury-Riley et al. (2017) The eHealth Literacy Scale (eHEALS) ? ? ? + + ? ? + ?
[A19] van der Vaart & Drossaert (2017) The Digital Health Literacy Instrument (DHLI) ? ? + + + + +
[A20] Yoon et al. (2022) Digital Health Technology Literacy Assessment Questionnaire (DHTL-AQ) + ? ? + + + + ?
[A21] Zhang & Li (2022) Problem-Based mHealth Literacy Scale (PB-mHLS) + ? ? + + + + + + ?

+=sufficient; –=insufficient; ?=indeterminate

  • 1. Simonds SK. Health education as social policy. Health Education & Behavior. 1974;2(1):1–10. https://doi.org/10.1177/10901981740020S102
  • 2. Nutbeam D. The evolving concept of health literacy. Social Science & Medicine. 2008;67(12):2072–2078. https://doi.org/10.1016/j.socscimed.2008.09.050
  • 3. World Health Organization. Shanghai declaration on promoting health in the 2030 agenda for sustainable development: 9th Global Conference on Health Promotion. Shanghai, China: World Health Organization; 2016 November. Report No.: WHO/NMH/PND/17.5.
  • 4. European Commission. European citizens’ digital health literacy. Brussels: European Commission; 2014. 221 p.
  • 5. Nutbeam D. Health literacy as a population strategy for health promotion. Japanese Society of Health Education and Promotion. 2017;25(3):210–222. https://doi.org/10.11260/kenkokyoiku.25.210
  • 6. U.S. Department of Health and Human Services. Healthy People 2030: Understanding and improving health [Internet]. Maryland: Office of Disease Prevention and Health Promotion. [cited 2024 Jul 31]. Available from: https://health.gov/healthypeople
  • 7. Park DJ, Koh KW, Lee JY. Direction of national policy for health literacy in Korea. Korean Journal of Health Education and Promotion. 2022;39(4):1–14. https://doi.org/10.14367/kjhep.2022.39.4.1
  • 8. Bhavnani SP, Narula J, Sengupta PP. Mobile technology and the digitization of healthcare. European Heart Journal. 2016;37(18):1428–1438. https://doi.org/10.1093/eurheartj/ehv770
  • 9. World Health Organization. Global strategy on digital health 2020-2025. Geneva: World Health Organization; 2021. 60 p.
  • 10. Norman C. eHealth literacy 2.0: Problems and opportunities with an evolving concept. Journal of Medical Internet Research. 2011;13(4):e125. https://doi.org/10.2196/jmir.2035
  • 11. Karnoe A, Kayser L. How is eHealth literacy measured and what do the measurements tell us? A systematic review. Knowledge Management & E-Learning. 2015;7(4):576–600.
  • 12. Haun JN, Valerio MA, McCormack LA, Sørensen K, Paasche-Orlow MK. Health literacy measurement: An inventory and descriptive summary of 51 instruments. Journal of Health Communication. 2014;19(2):302–333. https://doi.org/10.1080/10810730.2014.936571
  • 13. Norman CD, Skinner HA. eHEALS: The eHealth literacy scale. Journal of Medical Internet Research. 2006;8(4):e27. https://doi.org/10.2196/jmir.8.4.e27
  • 14. Paige SR, Stellefson M, Krieger JL, Miller MD, Cheong J, Anderson-Lewis C. Transactional eHealth literacy: Developing and testing a multi-dimensional instrument. Journal of Health Communication. 2019;24(10):737–748. https://doi.org/10.1080/10810730.2019.1666940
  • 15. Yoon J, Lee M, Ahn JS, Oh D, Shin SY, Chang YJ, et al. Development and validation of digital health technology literacy assessment questionnaire. Journal of Medical Systems. 2022;46(2):13. https://doi.org/10.1007/s10916-022-01800-8
  • 16. Lee J, Lee EH, Chae D. eHealth literacy instruments: Systematic review of measurement properties. Journal of Medical Internet Research. 2021;23(11):e30644. https://doi.org/10.2196/30644
  • 17. Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, Vet HCW, et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Quality of Life Research. 2018;27(5):1147–1157. https://doi.org/10.1007/s11136-018-1798-3
  • 18. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine. 2009;6(7):e1000097. https://doi.org/10.1371/journal.pmed.1000097
  • 19. Délétroz C, Allen MC, Sasseville M, Rouquette A, Bodenmann A, Gagnon MP. eHealth literacy measurement tools: A systematic review protocol. Systematic Reviews. 2022;11(1):205. https://doi.org/10.1186/s13643-022-02076-2
  • 20. Terwee CB, Jansma EP, Riphagen II, de Vet HC. Development of a methodological PubMed search filter for finding studies on measurement properties of measurement instruments. Quality of Life Research. 2009;18(8):1115–1123. https://doi.org/10.1007/s11136-009-9528-5
  • 21. Holch P, Marwood JR. eHealth literacy in UK teenagers and young adults: Exploration of predictors and factor structure of the eHealth literacy scale (eHEALS). JMIR Formative Research. 2020;4(9):e14450. https://doi.org/10.2196/14450
  • 22. Sudbury-Riley L, FitzPatrick M, Schulz PJ. Exploring the measurement properties of the eHealth Literacy Scale (eHEALS) among baby boomers: A multinational test of measurement invariance. Journal of Medical Internet Research. 2017;19(2):e53. https://doi.org/10.2196/jmir.5998
  • 23. Kim S, Cho KW, Kim M. Development of a Korean mobile based digital health literacy instrument: Validity and reliability assessment. The Korean Journal of Health Service Management. 2024;18(2):75–89. https://doi.org/10.12811/kshsm.2024.18.2.075
  • 24. Liu HX, Chow BC, Liang W, Hassel H, Huang YW. Measuring a broad spectrum of eHealth skills in the web 3.0 context using an eHealth literacy scale: Development and validation study. Journal of Medical Internet Research. 2021;23(9):e31627. https://doi.org/10.2196/31627
  • 25. Niemelä R, Ek S, Eriksson-Backa K, Huotari M-L. A screening tool for assessing everyday health information literacy. Libri. 2012;62(2):125–134. https://doi.org/10.1515/libri-2012-0009
  • 26. Rachmani E, Haikal H, Rimawati E. Development and validation of digital health literacy competencies for citizens (DHLC), an instrument for measuring digital health literacy in the community. Computer Methods and Programs in Biomedicine Update. 2022;2:100082. https://doi.org/10.1016/j.cmpbup.2022.100082
  • 27. Kayser L, Karnoe A, Furstrand D, Batterham R, Christensen K, Elsworth G, et al. A multidimensional tool based on the eHealth literacy framework: Development and initial validity testing of the eHealth literacy questionnaire (eHLQ). Journal of Medical Internet Research. 2018;20(2):e36. https://doi.org/10.2196/jmir.8371
  • 28. Petrič G, Atanasova S, Kamin T. Ill literates or illiterates? Investigating the eHealth literacy of users of online health communities. Journal of Medical Internet Research. 2017;19(10):e331. https://doi.org/10.2196/jmir.7372
  • 29. Seçkin G, Yeatts D, Hughes S, Hudson C, Bell V. Being an informed consumer of health information and assessment of electronic health literacy in a national sample of internet users: Validity and reliability of the e-HLS instrument. Journal of Medical Internet Research. 2016;18(7):e161. https://doi.org/10.2196/jmir.5496
  • 30. Chinn D, McCarthy C. All Aspects of Health Literacy Scale (AAHLS): Developing a tool to measure functional, communicative and critical health literacy in primary healthcare settings. Patient Education and Counseling. 2013;90(2):247–253. http://dx.doi.org/10.1016/j.pec.2012.10.019
  • 31. Koopman RJ, Petroski GF, Canfield SM, Stuppy JA, Mehr DR. Development of the PRE-HIT instrument: Patient readiness to engage in health information technology. BMC Family Practice. 2014;15(18):1–9. https://doi.org/10.1186/1471-2296-15-18
  • 32. Scherrenberg M, Falter M, Kaihara T, Xu L, van Leunen M, Kemps H, et al. Development and internal validation of the digital health readiness questionnaire: Prospective single-center survey study. Journal of Medical Internet Research. 2023;25:e41615. https://doi.org/10.2196/41615
  • 33. Van der Vaart R, Drossaert C. Development of the digital health literacy instrument: Measuring a broad spectrum of health 1.0 and health 2.0 skills. Journal of Medical Internet Research. 2017;19(1):e27. https://doi.org/10.2196/jmir.6709
  • 34. Zhang L, Li P. Problem-Based mHealth Literacy Scale (PB-mHLS): Development and validation. JMIR mHealth and uHealth. 2022;10(4):e31459. https://doi.org/10.2196/31459
  • 35. Karnoe A, Furstrand D, Christensen KB, Norgaard O, Kayser L. Assessing competencies needed to engage with digital health services: Development of the eHealth literacy assessment toolkit. Journal of Medical Internet Research. 2018;20(5):e178. https://doi.org/10.2196/jmir.8347
  • 36. Liu P, Wang JY, Lee ST. Relationship between levels of digital health literacy based on the Taiwan digital health literacy assessment and accurate assessment of online health information: Cross-sectional questionnaire study. Journal of Medical Internet Research. 2020;22(12):e19767. https://doi.org/10.2196/19767
  • 37. Kline RB. Principles and practice of structural equation modeling. 4th ed. New York: The Guilford Press; 2015. 494 p.
  • 38. Tufts Medical Center. Health Literacy Tool Shed: A database of health literacy measures [Internet]. [cited 2024 Jul 31]. Available from: https://healthliteracy.tuftsmedicine.org/
  • 39. Mokkink LB, Prinsen C, Patrick DL, Alonso J, Bouter L, De Vet HC, et al. COSMIN methodology for systematic reviews of patient-reported outcome measures (PROMs): User manual. 2018.
  • 40. Terwee CB, Prinsen CAC, Chiarotto A, Westerman MJ, Patrick DL, Alonso J, et al. COSMIN methodology for evaluating the content validity of patient-reported outcome measures: A Delphi study. Quality of Life Research. 2018;27(5):1159–1170. https://doi.org/10.1007/s11136-018-1829-0
  • 41. Lee EH. Psychometric properties of an instrument 3: Convergent, discriminant, known-groups, and criterion validity. Korean Journal of Women Health Nursing. 2021;27(3):176–179. https://doi.org/10.4069/kjwhn.2021.08.18
Appendix 1.
Studies Included in Systematic Review
[A1] Chinn D, McCarthy C. All Aspects of Health Literacy Scale (AAHLS): developing a tool to measure functional, communicative and critical health literacy in primary healthcare settings. Patient Education and Counseling. 2013;90(2):247-253. http://dx.doi.org/10.1016/j.pec.2012.10.019
[A2] Fuzhi W, Dan L, Weiwei S, Tingting Y, Dehua H, Wei P, et al. Health information literacy and barriers of online health information seeking among digital immigrants in rural China: a preliminary survey. SAGE Open. 2019;9(2):2158244019856946. https://doi.org/10.1177/2158244019856946
[A3] Holch P, Marwood JR. EHealth literacy in UK teenagers and young adults: exploration of predictors and factor structure of the eHealth literacy scale (eHEALS). JMIR Formative Research. 2020;4(9):e14450. https://doi.org/10.2196/14450
[A4] Hsu W, Chiang C, Yang S. The effect of individual factors on health behaviors among college students: the mediating effects of eHealth literacy. Journal of Medical Internet Research. 2014;16(12):e287. https://doi.org/10.2196/jmir.3542
[A5] Karnoe A, Furstrand D, Christensen KB, Norgaard O, Kayser L. Assessing competencies needed to engage with digital health services: Development of the eHealth literacy assessment toolkit. Journal of Medical Internet Research. 2018;20(5):e178. https://doi.org/10.2196/jmir.8347
[A6] Kayser L, Karnoe A, Furstrand D, Batterham R, Christensen K, Elsworth G, et al. A multidimensional tool based on the eHealth literacy framework: development and initial validity testing of the eHealth literacy questionnaire (eHLQ). Journal of Medical Internet Research. 2018;20(2):e36. https://doi.org/10.2196/jmir.8371
[A7] Kim S, Cho KW, Kim M. Development of a Korean mobile based digital health literacy instrument: validity and reliability assessment. The Korean Journal of Health Service Management. 2024;18(2):75-89. https://doi.org/10.12811/kshsm.2024.18.2.075
[A8] Koopman RJ, Petroski GF, Canfield SM, Stuppy JA, Mehr DR. Development of the PRE-HIT instrument: patient readiness to engage in health information technology. BMC Family Practice. 2014;15(18):1-9. https://doi.org/10.1186/1471-2296-15-18
[A9] Liu HX, Chow BC, Liang W, Hassel H, Huang YW. Measuring a broad spectrum of eHealth skills in the web 3.0 context using an eHealth literacy scale: Development and validation study. Journal of Medical Internet Research. 2021;23(9):e31627. https://doi.org/10.2196/31627
[A10] Liu P, Yeh LL, Wang JY, Lee ST. Relationship between levels of digital health literacy based on the Taiwan digital health literacy assessment and accurate assessment of online health information: Cross-sectional questionnaire study. Journal of Medical Internet Research. 2020;22(12):e19767. https://doi.org/10.2196/19767
[A11] Niemelä R, Ek S, Eriksson-Backa K, Huotari M-L. A screening tool for assessing everyday health information literacy. Libri. 2012;62(2):125-134. https://doi.org/10.1515/libri-2012-0009
[A12] Norman CD, Skinner HA. eHEALS: The eHealth literacy scale. Journal of Medical Internet Research. 2006;8(4):e27. https://doi.org/10.2196/jmir.8.4.e27
[A13] Paige SR, Stellefson M, Krieger JL, Miller MD, Cheong J, Anderson-Lewis C. Transactional eHealth literacy: Developing and testing a multi-dimensional instrument. Journal of Health Communication. 2019;24(10):737-748. https://doi.org/10.1080/10810730.2019.1666940
[A14] Petrič G, Atanasova S, Kamin T. Ill literates or illiterates? Investigating the eHealth literacy of users of online health communities. Journal of Medical Internet Research. 2017;19(10):e331. https://doi.org/10.2196/jmir.7372
[A15] Rachmani E, Haikal H, Rimawati E. Development and validation of digital health literacy competencies for citizens (DHLC), an instrument for measuring digital health literacy in the community. Computer Methods and Programs in Biomedicine Update. 2022;2:100082. https://doi.org/10.1016/j.cmpbup.2022.100082
[A16] Scherrenberg M, Falter M, Kaihara T, Xu L, van Leunen M, Kemps H, et al. Development and internal validation of the digital health readiness questionnaire: prospective single-center survey study. Journal of Medical Internet Research. 2023;25:e41615. https://doi.org/10.2196/41615
[A17] Seçkin G, Yeatts D, Hughes S, Hudson C, Bell V. Being an informed consumer of health information and assessment of electronic health literacy in a national sample of internet users: Validity and reliability of the e-HLS instrument. Journal of Medical Internet Research. 2016;18(7):e161. https://doi.org/10.2196/jmir.5496
[A18] Sudbury-Riley L, FitzPatrick M, Schulz PJ. Exploring the measurement properties of the eHealth Literacy Scale (eHEALS) among baby boomers: A multinational test of measurement invariance. Journal of Medical Internet Research. 2017;19(2):e53. https://doi.org/10.2196/jmir.5998
[A19] Van der Vaart R, Drossaert C. Development of the digital health literacy instrument: Measuring a broad spectrum of health 1.0 and health 2.0 skills. Journal of Medical Internet Research. 2017;19(1):e27. https://doi.org/10.2196/jmir.6709
[A20] Yoon J, Lee M, Ahn JS, Oh D, Shin SY, Chang YJ, et al. Development and validation of digital health technology literacy assessment questionnaire. Journal of Medical Systems. 2022;46(2):13. https://doi.org/10.1007/s10916-022-01800-8
[A21] Zhang L, Li P. Problem-Based mHealth Literacy Scale (PB-mHLS): Development and validation. JMIR Mhealth and Uhealth. 2022;10(4):e31459. https://doi.org/10.2196/31459

Figure 1. PRISMA flowchart of the search process.
Table 1. Characteristics of Included Studies

[A1] Chinn & McCarthy (2013): All Aspects of Health Literacy Scale (AAHLS)
  Population: Community in UK; aged 15~82 (38.0±15.4) yrs (N=146)
  Domains (14 items): Functional health literacy (4); Communicative health literacy (3); Critical health literacy (7)
  Scoring: 3-point Likert scale, 1 (rarely) to 3 (often)
  Reliability: Cronbach's α=.42~.82
  Validity: EFA, 58.9% of total variance explained; construct validity r=.19~.59

[A2] Fuzhi et al. (2019): Health Information Literacy (HIL)
  Population: Rural China; aged 45~65 (56.4±7.9) yrs (N=1,132)
  Domains (14 items): Health information-seeking (4); Health information evaluation (5); Health information consciousness (3); Health information application (2)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Cronbach's α=.51~.74
  Validity: EFA, 69.8% of total variance explained; construct validity r=.23~.44

[A3] Holch & Marwood (2020): The eHealth Literacy Scale (eHEALS)
  Population: Undergraduate students in UK; aged 20.1±2.2 yrs (N=188)
  Domains (8 items): Information acquisition (3); Information application (5)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Cronbach's α=.77~.90
  Validity: EFA, FL=.32~.91, 64.6% of total variance explained

[A4] Hsu et al. (2014): eHealth Literacy Scale (eHLS)
  Population: College students† in Taiwan (N=525)
  Domains (12 items): Functional eHealth literacy (4); Interactive eHealth literacy (4); Critical eHealth literacy (4)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Cronbach's α=.70~.83
  Validity: CFA, GFI=0.95, AGFI=0.93, CFI=0.95, RMSEA=0.06

[A5] Karnoe et al. (2018): eHealth Literacy Assessment (eHLA) toolkit
  Population: Outpatient clinic and community in Denmark; aged 18~60+ yrs† (N=475)
  Domains (44 items): Functional health literacy (10); Health literacy self-assessed (9); Familiarity with health and health care (5); Knowledge of health and disease (6); Technology familiarity (6); Technology confidence (4); Incentives for engaging with technology (4)
  Scoring: 4-point sets of response options (different in each domain)
  Reliability: Cronbach's α=.59~.94
  Validity: Rasch analysis; Anderson CLR (df, p-value) acceptable

[A6] Kayser et al. (2018): The eHealth Literacy Questionnaire (eHLQ)
  Population: Community and health care settings in Denmark, persons with chronic conditions; adults 18~64 yrs and older adults ≥65 yrs† (N=475)
  Domains (35 items): Using technology to process health information (5); Understanding of health concepts and language (5); Ability to actively engage (5); Feel safe and in control (5); Motivated to engage (5); Access to digital services (6); Digital services that suit individual needs (4)
  Scoring: 4-point Likert scale, 1 (strongly disagree) to 4 (strongly agree)
  Reliability: Composite reliability coefficient=.75~.87; Cronbach's α=.77~.86
  Validity: CFA, FL=.43~.84, PPP=0.27

[A7] Kim et al. (2024): Mobile based digital Health Literacy Scale-Short Form 11 (MHLS-SF11)
  Population: Online survey in Republic of Korea; aged 20~60+ yrs† (N=299)
  Domains (11 items): Health care (3); Disease prevention (4); Health promotion (4)
  Scoring: 4-point Likert scale
  Reliability: Cronbach's α=.90~.92; ICC r=.75~.83; test-retest satisfactory
  Validity: CFA, FL=.73~.92, CFI=0.99, TLI=0.98, RMSEA=0.06

[A8] Koopman et al. (2014): The Patient Readiness to Engage in Health Internet Technology (PRE-HIT) instrument
  Population: Community medicine clinics in USA, patients with chronic disease; aged 27~75 yrs† (N=200)
  Domains (28 items): Health information need (5); Computer/Internet experience and expertise (4); Computer anxiety (4); Preferred mode of interaction (5); Relationship with doctor (3); Cell phone expertise (2); Internet privacy concerns (2); No news is good news (3)
  Scoring: 4-point Likert scale, 1 (strongly disagree) to 4 (strongly agree)
  Reliability: Cronbach's α=.57~.87; test-retest r=.60~.85
  Validity: EFA, items with FL<.3 removed; other results not reported

[A9] Liu et al. (2021): eHealth Literacy Scale Web 3.0 (eHLS-Web3.0)
  Population: College students in China; aged 17~25 (20.5±1.4) yrs (EFA N=393; CFA N=741)
  Domains (24 items): Acquisition (8); Verification (6); Application (10)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Composite reliability=.93~.96; Cronbach's α=.91~.96; test-retest r=.86
  Validity: EFA, FL=.42~.98, 71.7% of total variance explained; CFA, FL=.68~.90, CFI=0.92, TLI=0.91, RMSEA=0.06, SRMR=0.05

[A10] Liu et al. (2020): Digital Health Literacy Assessment (DHLA)
  Population: Community in Taiwan; aged 20+ yrs† (N=1,588)
  Domains (10 items): Digital health literacy (6); Belief in medicine (3); Belief in folk remedies (1)
  Scoring: 5-point set of response options
  Reliability: Cronbach's α=.77~.92
  Validity: EFA, FL=.63~.93, 76.6% of total variance explained

[A11] Niemelä et al. (2012): Everyday Health Information Literacy (EHIL)
  Population: Secondary school in Finland; aged 16~18 yrs† (N=217)
  Domains (10 items): Motivation (4); Confidence (3); Evaluation (2); EHIL8‡ (1), difficulties in understanding information
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Cronbach's α=.56
  Validity: EFA; detailed data not available

[A12] Norman and Skinner (2006): The eHealth Literacy Scale (eHEALS)
  Population: Secondary school in Canada; aged 13~21 (15.0±1.2) yrs (N=664)
  Domains (8 items): Single domain (8)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Cronbach's α=.88; item-total correlation r=.51~.76; test-retest r=.40~.68
  Validity: EFA, FL=.60~.84, 56.0% of total variance explained

[A13] Paige et al. (2019): Transactional eHealth Literacy Instrument (TeHLI)
  Population: Research registry in USA; aged 40+ (65.0±1.5) yrs (N=283)
  Domains (18 items): Functional (4); Communicative (5); Critical (5); Translational eHealth literacy (4)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Cronbach's α=.87~.92
  Validity: EFA; CFA, FL (λ=0.53~1.00), CFI=0.95, TLI=0.94, RMSEA=0.07, SRMR=0.06

[A14] Petrič et al. (2017): Extended eHealth Literacy Scale (eHEALS-E)
  Population: Registered users of an online health community in Slovenia; aged 15~90 (40.0±10.3) yrs (N=644)
  Domains (20 items): Awareness of sources (3); Recognizing quality and meaning (3); Understanding information (4); Perceived efficiency (4); Validating information (3); Being smart on the Net (3)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Cronbach's α=.52~.81
  Validity: EFA, FL=.41~.78; CFA, CFI=0.94, RMSEA=0.06, SRMR=0.06

[A15] Rachmani et al. (2022): Digital Health Literacy Competencies for citizens (DHLC)
  Population: Community in Indonesia; aged 13~68 (37.6±12.7) yrs (N=383)
  Domains (26 items): A. Digital competency: Information & data literacy (1); Communication & collaboration (6); Digital content creation (1); Safety (5); Problem solving (5). B. Health information literacy: Access (2); Management (2); Integration (2); Evaluation (2)
  Scoring: 8-point Likert scale, 0 (unable to do) to 7 (very easy to do the activity and can solve the problems)
  Reliability: Cronbach's α=.97; item-total correlation r=.45~.83
  Validity: Data not available

[A16] Scherrenberg et al. (2023): Digital Health Readiness Questionnaire (DHRQ)
  Population: Outpatients in Belgium; aged 62.6±15.1 yrs (N=315)
  Domains (15 items): Digital usage (4); Digital skills (5); Digital literacy (3); Digital health literacy (3); additional: digital learnability (5)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Cronbach's α=.74~.94
  Validity: CFA, CFI=0.91, TLI=0.90, RMSEA=0.10, SRMR=0.07

[A17] Seçkin et al. (2016): electronic-Health Literacy Scale (e-HLS)
  Population: National web-based research panel in USA; aged 18~93 (48.8±16.4) yrs (N=710)
  Domains (19 items): Action (13); Trust (4); Communication (2)
  Scoring: 5-point Likert scale, 1 (never/strongly disagree) to 5 (always/strongly agree)
  Reliability: Cronbach's α=.93; item-total correlation r=.11~.78
  Validity: EFA, FL=.45~.86, 65.0% of total variance explained; CFA, FL=.42~.86, CFI=0.94, NFI=0.92, RMSEA=0.08

[A18] Sudbury-Riley et al. (2017): The eHealth Literacy Scale (eHEALS)
  Population: Baby boomers from the USA, UK & NZ; aged ±60 yrs† (N=996)
  Domains (8 items): Awareness (2); Skills (3); Evaluation (3)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Cronbach's α=.92 (USA), .93 (UK), .91 (NZ)
  Validity: CFA, FL=.73~.92, CFI=0.98~0.99, TLI=0.97~0.98, RMSEA=0.04

[A19] van der Vaart & Drossaert (2017): The Digital Health Literacy Instrument (DHLI)
  Population: General population in Netherlands; aged 18~84 (46.4±19.0) yrs (N=200)
  Domains (28 items): Self-reported scale: Operational skills (3); Navigation skills (3); Information searching (3); Evaluating reliability (3); Determining relevance (3); Adding content (3); Protecting privacy (3); plus performance-based items (7)
  Scoring: 4-point scale, 1 (very easy or never) to 4 (very difficult or often)
  Reliability: Cronbach's α=.57~.89; ICC=.49~.81
  Validity: EFA, FL=.56~.90, 76.0% of total variance explained

[A20] Yoon et al. (2022): Digital Health Technology Literacy Assessment Questionnaire (DHTL-AQ)
  Population: Republic of Korea; aged 20~84 (46.5±13.0) yrs (N=590)
  Domains (34 items): Digital functional literacy, knowledge-based: use of an app (9); task-based: ICT terms (11), ICT icons (9). Digital critical literacy: evaluating reliability & relevance of health information (5)
  Scoring: Yes/No or 4-point scale, 1 (self-implementation) to 4 (do not know); cutoff value 22 out of 34
  Reliability: Cronbach's α=.87~.94
  Validity: EFA, FL>.50; CFA, FL=.60~.91, CFI=0.82, TLI=0.81, RMSEA=0.09, SRMR=0.07

[A21] Zhang & Li (2022): Problem-Based mHealth Literacy Scale (PB-mHLS)
  Population: Mobile phone users in China; aged 18+ yrs† (N=552 & 433)
  Domains (33 items): mHealth desire (3); Mobile phone operational skills (5); Acquiring mHealth information (4); Acquiring mHealth services (3); Understanding of medical terms (3); Mobile-based patient–doctor communication (5); Evaluating mHealth information (6); mHealth decision-making (4)
  Scoring: 5-point Likert scale, 1 (strongly disagree) to 5 (strongly agree)
  Reliability: Composite reliability coefficient=.78~.88; Cronbach's α=.86~.95
  Validity: EFA, FL=.55~.90, 78.7% of total variance explained; CFA, FL=.64~.90, GFI=0.87, CFI=0.92, NFI=0.92, RMSEA=0.06

† No mean age reported; ‡ independent question.

Anderson CLR=Anderson conditional likelihood ratio; AGFI=adjusted goodness of fit index; CFA=confirmatory factor analysis; CFI=comparative fit index; df=degrees of freedom; EFA=exploratory factor analysis; FL=factor loading; GFI=goodness of fit index; ICC=intraclass correlation coefficient; ICT=information and communication technology; NFI=normed fit index; PPP=posterior predictive p-value; RMSEA=root mean square error of approximation; SRMR=standardized root mean square residual; TLI=Tucker-Lewis index; yrs=years.
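Because internal consistency in Table 1 is reported almost exclusively as Cronbach's α, and model fit most often as RMSEA, the standard textbook definitions of these two statistics are reproduced below for the reader's reference. These are the conventional formulas, not computations taken from the reviewed studies, and the thresholds mentioned afterward are commonly cited guidelines rather than criteria applied by this review.

$$
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\qquad
\mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^{2}-df}{df\,(N-1)},\,0\right)}
$$

Here k is the number of items, σ²(Y_i) the variance of item i, σ²(X) the variance of the total score, χ² and df the model chi-square and its degrees of freedom, and N the sample size. By widely used convention, α ≥ .70 is read as acceptable internal consistency and RMSEA ≤ .08 as acceptable model fit, which offers one way to contextualize the ranges reported above.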
Table 2. Overall Rating and Quality of Evidence of Each Instrument

Measurement properties rated (in order): internal consistency; reliability; measurement error; content validity; structural validity; hypotheses testing; cross-cultural validity; criterion validity; responsiveness; floor & ceiling effects.

[A1] Chinn & McCarthy (2013), All Aspects of Health Literacy Scale (AAHLS): ? ? + ? ?
[A2] Fuzhi et al. (2019), Health Information Literacy (HIL): ? ? ? + + ? ? + ?
[A3] Holch & Marwood (2020), The eHealth Literacy Scale (eHEALS): + ? ? ? ? ? +
[A4] Hsu et al. (2014), eHealth Literacy Scale (eHLS): + ? ? ? + + ? ? + ?
[A5] Karnoe et al. (2018), eHealth Literacy Assessment (eHLA) toolkit: ? ? + + + ? +
[A6] Kayser et al. (2018), The eHealth Literacy Questionnaire (eHLQ): + ? ? + + + ? + +
[A7] Kim et al. (2024), Mobile based digital Health Literacy Scale-Short Form 11 (MHLS-SF11): + + ? + + + + + + +
[A8] Koopman et al. (2014), The Patient Readiness to Engage in Health Internet Technology (PRE-HIT) instrument: ? + ? ? ? ? ?
[A9] Liu et al. (2021), eHealth Literacy Scale Web 3.0 (eHLS-Web3.0): + + ? + + + + + + ?
[A10] Liu et al. (2020), Digital Health Literacy Assessment (DHLA): + ? ? + + + ? +
[A11] Niemelä et al. (2012), Everyday Health Information Literacy (EHIL): ? ? ? ? ?
[A12] Norman and Skinner (2006), The eHealth Literacy Scale (eHEALS): + ? + + + ? + ?
[A13] Paige et al. (2019), Transactional eHealth Literacy Instrument (TeHLI): + ? ? + + + ? ?
[A14] Petrič et al. (2017), Extended eHealth Literacy Scale (eHEALS-E): ? ? + ? ? ? ?
[A15] Rachmani et al. (2022), Digital Health Literacy Competencies for citizens (DHLC): ? ? + ? ? ? ? ? ?
[A16] Scherrenberg et al. (2023), Digital Health Readiness Questionnaire (DHRQ): + ? ? + + ? + ?
[A17] Seçkin et al. (2016), electronic-Health Literacy Scale (e-HLS): ? ? ? + ? + ?
[A18] Sudbury-Riley et al. (2017), The eHealth Literacy Scale (eHEALS): ? ? ? + + ? ? + ?
[A19] van der Vaart & Drossaert (2017), The Digital Health Literacy Instrument (DHLI): ? ? + + + + +
[A20] Yoon et al. (2022), Digital Health Technology Literacy Assessment Questionnaire (DHTL-AQ): + ? ? + + + + ?
[A21] Zhang & Li (2022), Problem-Based mHealth Literacy Scale (PB-mHLS): + ? ? + + + + + + ?

+ = sufficient; – = insufficient; ? = indeterminate.

