Coping with the stresses of life, realizing one’s abilities, learning, working, and contributing to the community all rely on a person’s state of mental well-being, often referred to as mental health []. However, attaining this state of mental health remains a common challenge for over 1 billion of the world’s population [,], presenting a tremendous societal burden worldwide in terms of morbidity, quality of life, and premature mortality, among others [,]. A range of evidence-based approaches have been developed and globally adopted to address these mental health conditions, commonly regarded as biomedical, psychotherapeutic, and lifestyle-based interventions [-]. Biomedical treatments include pharmacological interventions (medications) such as antidepressants, mood stabilizers, and antipsychotics, as well as neuromodulation techniques, typically adopted for severe or treatment-resistant cases [,,,]. Psychotherapeutic treatments such as psychosocial (individual, group, or family) and cognitive-behavioral–based therapies are widely recommended as first-line treatments for common mental health conditions [,,]. Lifestyle-based interventions, including exercise, mindfulness, and social support, have shown positive effects as supplementary treatment approaches [,,]. Additionally, studies suggest that combining different treatment approaches yields the most effective outcomes for mental health [,-]. These interventions typically depend on in-person delivery, and while individuals experiencing mental health challenges want some form of treatment [-], multiple barriers hinder access []. These barriers include limited access to treatments, geographical and financial constraints, underresourced health care services [], and stigmatization [], which together leave a growing number of mental health complaints unaddressed. Accordingly, the rising incidence of mental health challenges places increasing demands on health care systems, exceeding the available resources [,].
Given these limitations, technological interventions have emerged as promising supplementary solutions for their potential to enhance the scalability, affordability, and accessibility of mental health care [,]. Among these technological interventions are digital mental health interventions (DMHIs), which are nonpharmacological, often therapy-oriented, and lifestyle-supportive tools [-]. They are commonly delivered via digital platforms, such as the internet, smartphone apps, SMS, and virtual reality, aimed at preventing or alleviating mental health conditions [,,]. DMHIs often use techniques such as cognitive behavioral therapy (CBT) or positive psychology [] and are applied in both clinical and general populations [-]. Examples include artificial intelligence–based virtual agents for mental health care [,], digital platforms for early interventions in young people with mental health challenges [,], mobile health interventions for suicide prevention [], virtual reality psychotherapy [,], and internet-delivered CBT []. Despite their potential, designing and implementing DMHIs present significant challenges. These include the lack of personalization, limited human resources, technical and ethical considerations, and difficulties in clinical integration []. Additionally, applying suitable evaluation strategies remains a complex task [,], further complicating the development and assessment of effective interventions. Existing standards [] (eg, ISO [International Organization for Standardization] 9241, 2019) and guidelines [] (eg, Interaction Design Foundation, 2015) for digital technology design and evaluation are often field-specific, making them difficult to translate across different disciplines []. While identifying design principles is more common in human-computer interaction (HCI), it is less frequent in clinical science [].
Efforts are being made to derive design principles for DMHIs from learning theories, such as repeated testing, interleaving, and spacing [], as well as adapting HCI principles to formulate guidelines []. Doherty et al [] emphasized that human-centered design (HCD) approaches, such as user-centered design and participatory design, can be adapted for mental health care technologies. They proposed guidelines such as designing for desired outcomes, collaborating with mental health professionals, adapting user-centered design for health care settings, and refining both system protocols and design during development. Evaluation guidelines included multiple stages of testing, evaluating with nonclinical users, using therapists as proxies, and monitoring unsuccessful cases. While these guidelines [] and principles [] offer a foundation for future DMHI design, Murray et al [] noted difficulties in building a consistent knowledge base for evaluating digital health interventions (DHIs). Rapid technological evolution, gaps between research and publication, and varying patients’ needs limit the usefulness of current guidelines for supporting design decisions. Michie et al [] further highlighted the need for scientific principles to guide DMHI design, evaluation, and implementation in health care, urging interdisciplinary collaboration to advance research methods.
Hrynyschyn et al [] conducted a scoping review of evaluation methods beyond randomized controlled trials (RCTs) for DHIs. They found that factorial designs, stepped-wedge designs, sequential multiple assignment randomized trials, and microrandomized trials are common approaches. These methods allow for intervention adaptation and component evaluation, yet challenges remain in establishing these approaches in research practice and addressing their limitations, particularly within collaborative design processes in mental health care. Similarly, Balcombe and De Leo [] focused on the evaluation of digital mental health (DMH) platforms and the DMHIs delivered through them. Their report highlighted the feasibility of DMH platforms and DMHIs, although the evidence for their effectiveness, quality, and usability is mostly heterogeneous and preliminary. In the context of design principles, Vial et al [] reported in their exploratory mapping review that attempts have been made to integrate HCD approaches into DMHI development. Nevertheless, these approaches rely very little on designers and design research. They provided suggestions for better reporting of HCD approaches in future research: (1) stating and defining the HCD approaches used in the design process and explaining why they were used; (2) describing the core elements of HCD activity, defining the steps and methods used, and explaining the extent to which actors were involved in the design process; and (3) indicating the number of designers involved in the design process, their design profession, and the manner of their contributions.
Research Aim

While previous research [,,] has explored existing design principles and evaluation approaches for DMHIs, there is a need to understand how these concepts are applied in research and design. Improved access to and understanding of design principles and evaluation approaches could enhance DMHI development, foster stakeholder collaboration, and lead to more effective implementation strategies [,]. Therefore, our study aims to review existing principles and approaches used in DMHI design and evaluation, providing an overview of their application and implications using a scoping review approach.
For this study, a principle refers to a fundamental guideline or framework derived from interdisciplinary knowledge, offering process guidance to improve the likelihood of successful DMHI development [,]. We refer to the methods or strategies used to evaluate DMHIs as an “approach.” Textbox 1 provides a description of what we mean by the terms design and evaluation.
Textbox 1. Definition of the terms used in this study.

Design: the design or development process of an application or digital intervention for mental health care, whether a proof of concept or a fully functional intervention.

Evaluation: examining or investigating the effectiveness, engagement, user experience (UX), usability, functionalities, and performance of any digital intervention for mental health care.

Our review makes 2 important contributions. First, we provide a comprehensive review of the principles and approaches used in the design and evaluation of DMHIs. Second, we propose 8 guidelines for DMHI design and evaluation based on the results of this review. Although we initially aimed to explore implementation strategies, none of the identified studies explicitly focused on DMHI implementation. Therefore, we concentrated on identifying implementation strategies recommended by the reviewed literature.
The research questions guiding this review are as follows: (1) What design principles and evaluation approaches are used in DMHIs? (2) How are these principles, approaches, or strategies applied in the DMHI development process?
The field of DMHI is relatively new; therefore, our research focuses on providing a review of existing principles and approaches for designing and evaluating DMHIs. Because this meant that a variety of study designs would be included, a scoping review is most appropriate for this study []. We followed widely accepted guidelines for reporting a scoping review: the JBI (Joanna Briggs Institute) Scoping Review Methodology [] and the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines [].
Inclusion and Exclusion Criteria

We designed the inclusion criteria to align with the research objectives of our study, which aims to provide an overview of how existing design principles and evaluation approaches are applied in the design and evaluation of DMHIs. The general idea of the inclusion criteria was to include only primary empirical studies reporting on the design, implementation, or evaluation of DMHIs. In addition, we defined the exclusion criteria in Textbox 2.
Textbox 2. The exclusion criteria used for this study.

- Type of publications: reviews, commentaries, letters to the editor, meta-analyses, literature studies, Delphi studies, framework developments, and conference abstracts.
- Papers that only present study protocols but do not present the execution and results.
- Studies focused on conditions other than mental conditions as a primary condition, or that never explicitly defined the mental health condition.
- Studies that do not explicitly mention interventions that are completely digital or internet-based but use digital tools only for distribution or recruitment instead of implementation or intervention.
- Papers that do not explicitly mention or report on any design, study, implementation, or evaluation of a DMHI.
- No (explicit) mention, description, or use of a principle, framework, strategy, or guideline in designing, implementing, or evaluating a DMHI.

Following JBI guidance and the PRISMA-ScR reporting standards, studies included in our review had to fulfill the following Population, Concept, and Context criteria:
(a) Population:
- Studies involving individuals or groups experiencing any type of mental health challenges, conditions, or illnesses (eg, depression, psychosis, or anxiety), irrespective of age, gender, or cultural background.
- Studies targeting users of digital interventions aimed at mental health promotion, prevention, treatment, or well-being from the general population.
- No restrictions were applied to specific populations (eg, clinical vs nonclinical), provided that the intervention or study focus addressed mental health outcomes explicitly.

(b) Concept:
- Studies that design, develop, implement, or evaluate any type of DMHI (eg, mobile apps, online therapy platforms, chatbots, web-based self-help tools, or virtual reality interventions).
- Included studies must explicitly reference, describe, or apply a principle, theoretical framework, model, strategy, or guideline during any phase of the DMHI lifecycle (including design, implementation, or evaluation).
- Eligible studies may use qualitative, quantitative, or mixed methods approaches and report on engagement, usability, effectiveness, or implementation outcomes.

(c) Context:
- Studies conducted in any geographical, cultural, or health care context, including community, clinical, educational, and workplace settings.
- The context must clearly involve mental health care, treatment, or well-being, ensuring the digital intervention is applied within a mental health objective.

Search Strategy

We used the Scopus and Web of Science databases to search for relevant literature to be included in this study. The search was conducted from January 2024 to February 2024 and included only English-language journal papers and conference papers. There were no limitations on the year of publication. We ran another search in January 2025 using the same databases and inclusion and exclusion criteria as before, but no new papers were included in the review.
Search Terms

The search structure combined appropriate keywords and controlled vocabulary terms for 5 concepts: DHIs, design, implementation, evaluation, and mental health. We checked previous literature reviews [,,,] to validate these terms. We based these search terms on the target intervention (eg, digital or online health interventions), condition (eg, mental disorder, depression, stress, or anxiety), and the research or project activity (design, implementation, and evaluation). A detailed overview of the search strings used for searching the Scopus database can be found in .
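As a purely schematic illustration of how such a string combines OR-linked terms within each concept and AND-links across concepts, the following sketch assembles a Scopus-style query (the placeholder terms are invented and are not the review’s actual search strings):

```python
# Illustrative only: schematic assembly of a Boolean search string.
# These placeholder terms are NOT the review's actual search strings.
concepts = {
    "intervention": ['"digital health"', '"online intervention"', '"mobile health"', "mhealth"],
    "condition": ['"mental health"', '"mental disorder"', "depression", "anxiety", "stress"],
    "activity": ["design*", "develop*", "implement*", "evaluat*"],
}

# OR within a concept, AND across concepts, wrapped in a TITLE-ABS-KEY field code.
blocks = [f"({' OR '.join(terms)})" for terms in concepts.values()]
query = f"TITLE-ABS-KEY({' AND '.join(blocks)})"
print(query)
```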
Selection of Sources of Evidence

All results were exported to Excel (Microsoft Corp) and Mendeley (Elsevier) reference management software for deduplication. The exported CSV files were then imported into Excel for title, abstract, and full-text screening. Two authors independently screened the selected studies based on title and abstract and resolved any discrepancies by consensus during discussions. Cohen κ was calculated to assess the intercoder agreement between the inclusion and exclusion codes, which showed excellent agreement (0.81) for screening titles and abstracts. A list of papers included and excluded after full-text screening is presented in [-].
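For readers unfamiliar with the statistic, a minimal sketch of how Cohen κ can be computed for two screeners’ include/exclude codes follows (the toy labels are invented for illustration):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen kappa for two raters' categorical codes of the same items."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[label] * c2[label] for label in c1) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude codes from two independent screeners:
r1 = ["include", "exclude", "exclude", "include", "exclude"]
r2 = ["include", "exclude", "include", "include", "exclude"]
print(round(cohen_kappa(r1, r2), 2))  # 0.62 for this toy data
```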
Data Items

The following data were extracted from the selected studies:
- Bibliographic information: title, first author, year of publication, and country.
- Study and participant characteristics: study design, study type, sample size, and age of participants.
- Characteristics of the digital intervention: type, name, purpose, targeted disorder, and features.
- Design, evaluation, and implementation strategies: design principles, evaluation approaches (including methods, tools, outcome measures, and data collection techniques), and implementation strategies for the intervention.

Data Synthesis

The data were divided into groups based on paper type: a code for whether the study aim was a combination of design (development) and evaluation or solely focused on evaluation of a DMHI (henceforth referred to as design and evaluation studies and evaluation studies, respectively). This provided a more concise approach to reporting the relevant principles or guidelines used for the research activity or paper type, as studies focused on DMHI design used different principles than those focused on evaluating a DMHI.
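As a minimal sketch of this coding step, assuming a hypothetical extraction sheet with a paper_type column (study names and codes invented for illustration):

```python
import pandas as pd

# Hypothetical extraction sheet: one row per included paper, coded by whether
# it covers design (development) and evaluation, or evaluation only.
papers = pd.DataFrame({
    "study": ["Study A", "Study B", "Study C", "Study D"],
    "paper_type": ["design_and_evaluation", "evaluation",
                   "evaluation", "design_and_evaluation"],
})

# Group the extracted records by paper type before synthesizing the
# principles and approaches within each group.
for paper_type, group in papers.groupby("paper_type"):
    print(paper_type, group["study"].tolist())
```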
A total of 401 papers were identified across the Scopus (193 papers) and Web of Science (208 papers) databases, with 116 duplicates removed. The titles and abstracts of the remaining 285 papers were screened based on the inclusion and exclusion criteria, leading to the exclusion of 250 papers. Full texts of the remaining 35 papers were then downloaded and assessed against the inclusion and exclusion criteria. After reviewing these 35 papers, 18 were excluded, resulting in 17 papers being included in this review. These papers highlighted the design principles, evaluation approaches, and design processes. Figure 1 illustrates the selection process of the studies.
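The selection funnel reduces to simple bookkeeping; a minimal sketch using the counts reported above:

```python
# Minimal bookkeeping of the selection funnel reported above.
identified = 193 + 208                # Scopus + Web of Science = 401
after_dedup = identified - 116        # duplicates removed -> 285
after_screening = after_dedup - 250   # title/abstract exclusions -> 35
included = after_screening - 18       # full-text exclusions -> 17
assert included == 17
```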
Figure 1. PRISMA flow diagram of the study process. PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses.

Study Characteristics

Table 1 provides an overview of the 17 papers included in the review (see [,-] for a detailed overview). The studies reported in the papers were published between 2011 and 2023 and were conducted in Europe (6 papers), North America (6 papers), Australia (2 papers), Asia (1 paper), Africa (1 paper), and both Europe and Africa (1 paper). Of the 17 papers, 11 focused on evaluation studies, and 6 covered both development and evaluation studies. In terms of study design, 8 were qualitative studies, 2 used mixed methods, and 7 were quantitative studies (3 RCTs, 2 retrospective observational studies, and 2 unspecified). Most digital interventions were mobile-based (7 papers), followed by web-based (5 papers); 4 papers combined web- and mobile-based interventions, and 1 paper described an internet-based intervention. The included papers are discussed further based on their respective categories: design and evaluation (mixed) and evaluation.
Table 1. General characteristics of papers (n=17).

Author, year of publication; country (a) | Study type (paper type) | Study design | Intervention type (name)
Xiang et al, 2023 []; United States | Development and comparative usability evaluation | Qualitative | Web-based (Empower@Home)
Shkel et al, 2023 []; United States | Evaluation of user experience and perception | Qualitative | Web-based (Overcoming Thoughts)
van Orden et al, 2022 []; Netherlands | Evaluation of effectiveness and efficiency | Exploratory naturalistic retrospective cohort study | Mobile and web-based (NiceDay)
Cuijpers et al, 2022 []; Lebanon | Evaluation of effectiveness | Single-blind, 2-arm pragmatic RCT (d) | Mobile and web-based (Step by Step)
Harty et al, 2023 []; Ireland | Evaluation of effectiveness | Quantitative

(a) Country of the study setting.
(b) Design or development and evaluation study.
(c) Evaluation study.
(d) RCT: randomized controlled trial.
Overview of Targeted Mental Health Conditions

Across the 17 papers included in this review, the studies targeted various mental health conditions. Depression was the most frequently targeted condition, investigated in 10 studies [,-,,,,,,]. Anxiety was also commonly examined, appearing alongside depression in 5 papers [,,,,] and in 1 study combined with other conditions such as obsessive-compulsive disorder, posttraumatic stress disorder, and panic disorder []. Panic disorder was also the primary focus of 1 additional study []. General mental well-being was explored in 3 studies [,,], while psychosis [] and emotional distress [] were examined in 1 study each. A detailed overview of the included papers and their corresponding target conditions is provided in .
Design and Evaluation Studies

Among the 17 included papers, 6 studies focused on the design and evaluation of DMHIs. Two of these studies were qualitative and focused on design and development, while 4 used mixed methods approaches that included both design and evaluation (mixed). All studies shared several common design principles, emphasizing user-centered and iterative design processes, often involving co-design with users and expert consultation or collaboration to ensure the intervention was both effective and user-friendly. Usability testing was a key evaluation approach across studies, with feedback loops used to refine and improve the interventions. Additionally, the mixed studies incorporated a focus on behavioral engagement through features such as mood tracking, real-life exercises, and gamification elements. Due to the various design principles and evaluation approaches used in the included studies, we categorized these into 5 groups. We also report the recommended implementation strategies highlighted in the studies. Table 2 provides an overview of the included studies.
Table 2. Design (development) and evaluation studies.

Study | Design principles for development and adaptation | Evaluation approaches | Recommendations | Recommended implementation strategies
Burchert et al, 2018 [] | User-centered adaptation process

(a) NR: not reported.
Design Principles Reported in the Studies

We identified 5 groups of design principles in the studies reviewed: (1) HCD approaches such as user-centered design, (2) iterative development, (3) engagement and motivation, (4) design specificity, and (5) security and accessibility. Among these, the principles of engagement and motivation and of iterative development were commonly used to enhance user experience and personalize interventions to the specific needs of target populations. These principles ensured that the designs were both relevant and adaptable. The reported design principles are detailed below.
For user-centered design, all studies emphasized directly involving users and relevant stakeholders in the design process. We synthesized this aspect into 2 approaches, as highlighted in the included studies: direct end user involvement and expert-driven collaboration.
First, for direct end user involvement, Burchert et al [] performed interviews, focus groups, and usability testing with real or intended users (Syrian refugees), focusing on their needs and how users interacted with the design, including barriers to implementation in practice. Similarly, Xiang et al [] and Pozuelo et al [] adopted co-design methods, engaging users as active collaborators in the design process. This approach emphasized accessibility, learning, and iterative refinement driven by user feedback. Geraghty et al [], on the other hand, used explorative qualitative interviews to capture user perspectives, adopting a person-based approach to ensure that the intervention aligned with users’ needs and contexts.
Second, for expert-driven collaboration, Stegemann et al [] prioritized expert inclusion and informal team testing, suggesting a collaborative process focused on expert knowledge as a guiding influence for their design process. Pozuelo et al [] also consulted experts during co-design but maintained a user-centered approach by including users in the co-design and iterating their prototype based on user feedback. Ferguson et al [] used a theory-centered design, involving focus groups with users to inform character and narrative design. Their approach balanced behavioral engagement features with user involvement to refine prototypes.
For iterative development, all studies adopted iterative development processes, focusing on continuously refining designs based on feedback. Studies relied primarily on either end user feedback or expert-driven iteration.
First, for end user feedback, Burchert et al [] used iterative prototyping and usability testing with the target users, allowing direct feedback to shape subsequent versions. Similarly, Xiang et al [] and Pozuelo et al [] adopted agile methods, iteratively refining their digital intervention based on user input. Geraghty et al [] focused on prototyping, integrating user feedback into successive iterations to improve the intervention’s relevance and functionality.
Second, for expert-driven iteration, Stegemann et al [] emphasized informal team testing and expert collaboration in their iterative process. This approach leaned on expert input rather than direct user interaction for refining prototypes. Ferguson et al [] combined task management principles with adaptive design, allowing game features to evolve during user interaction but supported by expert oversight and theoretical frameworks such as behavioral engagement.
For user engagement and motivation, the studies adopted varied approaches. These can be synthesized into 2 main strategies: user-centered and behavioral design, and gamified or emotional engagement.
First, for user-centered and behavioral design, Burchert et al [] used the Integrate, Design, Access, and Share framework, combining usability testing with free list interviews, key informant interviews, and focus group discussions to ensure the tool met users’ needs. This approach focused on tailoring the design to align with users’ behavior and preferences. Geraghty et al [] used a person-based approach, focusing on intrinsic motivation by offering users choices and avoiding directive or medicalized language to foster a sense of autonomy and engagement.
Second, for gamified or emotional engagement, Xiang et al [] integrated persuasive and emotional design elements, such as motivational quotes and animated storytelling, to create a deeper connection with users. Pozuelo et al [] and Ferguson et al [] incorporated gamification strategies, such as storytelling, mood reflection, and immediate-use rewards, to sustain user motivation and encourage long-term participation. Stegemann et al [] included functions that allowed users to track their behaviors by documenting panic-related events or daily summaries of their state and progress and providing feedback. They adopted a more data-driven approach to engagement, interpreting user actions rather than involving users directly in the design.
For design specificity, the studies varied in their emphasis on general versus specific design elements, with 2 studies highlighting the balance between general contextual adaptation and specific aesthetic design choices.
First, for general contextual adaptation, Burchert et al [] emphasized contextual adaptation as a core principle, focusing on aligning the intervention with users’ cultural and situational contexts. Their approach prioritized the overall process over detailed interface elements, ensuring flexibility in design to suit diverse user needs.
Second, for specific aesthetic design choices, Stegemann et al [] presented highly specific design choices, such as a Mondrian-style display, event-based design, minimal interface design, and casual information visualization. These choices focused on functional simplicity and an aesthetically pleasing user experience, prioritizing clarity and ease of interaction.
For security and accessibility, 2 studies addressed these through 2 approaches: designing for ease of use and ensuring data confidentiality.
First, for designing for ease of use, Xiang et al [] prioritized accessibility by incorporating large buttons, text descriptions for icons, high-contrast color schemes, and intuitive navigation. These features aimed to ensure usability for a broad demographic, including older adults and individuals in low-resource settings. Pozuelo et al [] extended accessibility by offering the intervention both online and offline, ensuring that users with limited internet access could still benefit from the tool.
Second, for ensuring data confidentiality, Pozuelo et al [] also incorporated security measures, such as password protection and an emergency button, to safeguard sensitive mental health data.
Categories of Evaluation Approaches

The included studies focused on evaluating the engagement and user experience of their respective interventions. We categorized the evaluation approaches used into 3 groups: (1) usability testing, (2) qualitative feedback and focus groups, and (3) iterative feedback and adaptation.
For usability testing, all studies used approaches that depended on the study or research at hand, creating a distinction between user-centered testing and internal or indirect testing.
First, for user-centered testing, Xiang et al [] and Geraghty et al [] conducted direct usability testing with target users during the development phase to ensure the design met user needs. Pozuelo et al [] used RCTs to collect structured feedback from participants, allowing for an evaluation of usability. Burchert et al [] implemented rapid appraisal techniques, including focus groups with the intended users, to verify findings and refine the iterations of their prototype.
Second, for internal or indirect testing, Ferguson et al [] focused on feedback analysis after the public release of their intervention, emphasizing long-term user engagement and adaptation. Stegemann et al [] focused on informal team testing, relying on internal feedback rather than direct user input during development.
For qualitative feedback and focus groups, qualitative feedback was a common approach for understanding user experiences, with studies using either exploratory interviews or participatory workshops as their primary method. These methods illustrate how qualitative feedback supports an in-depth understanding of user perspectives, enabling iterative refinement based on user experiences.
First, for exploratory interviews, Geraghty et al [] used think-aloud methods and explorative qualitative interviews to gather in-depth insights into user interactions and preferences.
Second, for participatory workshops, Pozuelo et al [] combined participatory workshops with focus groups to evaluate their intervention, involving users in collaborative sessions to refine the design.
For iterative feedback and adaptation, the studies adopted iterative feedback as a key approach for refining their intervention. This was, however, split between structured development loops and postrelease adaptation.
First, for structured development loops, Pozuelo et al [] used an ongoing feedback loop during the development and evaluation phases, continuously refining the intervention based on user input.
Second, for postrelease adaptation, Ferguson et al [] focused on engagement and feedback analysis after the public release of the app, allowing real-world user interactions to shape subsequent updates.
Recommended Implementation Strategies

Although our initial goal included exploring implementation strategies, none of the identified studies explicitly focused on DMHI implementation. However, some studies offered recommendations for implementing their interventions or highlighted barriers that could impact the implementation process. For instance, Xiang et al [] reported on the possible implementation of their intervention in real-life settings (Table 2). Their results identified features for improving peer support and barriers that might influence the experience of and engagement with their DMHI: mental health condition, internalized stigma, and perception of autonomy could influence users’ engagement, while the classification of users’ experience should not depend on login frequency. Burchert et al [], in turn, reported on the barriers that might influence the implementation of their intervention in real life: acceptability, credibility, and technical requirements (Table 2).
Evaluation Studies

Of the 17 papers reviewed, 11 focused on evaluating DMHIs using qualitative, quantitative, or mixed methods approaches. A total of 8 of these studies focused on evaluating the effectiveness of the intervention, typically using quantitative methods such as RCTs, retrospective observational studies, and quasi-experimental designs. Further, 3 studies specifically evaluated user experience, perception, and engagement with the DMHI using qualitative or mixed methods evaluation approaches. Table 3 presents an overview of the papers, including the evaluation approaches used in the studies. In the following sections, we discuss the various evaluation approaches used in the included studies and the recommended implementation strategies they report.
Table 3. Evaluation studies.

Study | Evaluation approaches (including methods and tools) | Recommendations | Recommended implementation strategies
Shkel et al, 2023 [] | RCT (a)

(a) RCT: randomized controlled trial.
(b) PHQ: Patient Health Questionnaire.
(c) DMHI: digital mental health intervention.
(d) WHO: World Health Organization.
(e) WHO-5: World Health Organization Well-Being Index.
(f) GAD-7: Generalized Anxiety Disorder-7 Scale.
(g) PSYCHLOPS: Psychological Outcome Profiles.
(h) DSM-5: Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition.
(i) NR: not reported.
(j) DSM-IV-TR: Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision.
(k) AUDIT-C: Alcohol Use Disorders Identification Test-Concise.
(l) WHOQOL-BREF: World Health Organization Quality of Life.
(m) UCLA: University of California, Los Angeles.
Evaluation Approaches Reported in Studies

The evaluation approaches reported by the studies can be categorized into 2 groups that influence the setup of a DMHI evaluation: (1) evaluation focus and (2) methodological approach (qualitative, quantitative, or mixed methods). These aspects highlight underlying decisions researchers face, which in turn shape the design and outcomes of their evaluations.
For evaluation focus, the studies highlighted 4 areas on which DMHIs were evaluated: effectiveness, user experience, implementation feasibility, and longitudinal tracking. These areas provide insights into user engagement, clinical integration, sustained intervention use, and DMHIs’ impact on symptom reduction.
First, for effectiveness, 8 studies concentrated on evaluating the effectiveness of DMHIs, often using quantitative methods such as RCTs, retrospective observational studies, or quasi-experimental designs. These methods are suitable for assessing measurable outcomes such as symptom reduction or behavior change. In contrast, 4 studies explicitly evaluated user experience, perception, and engagement with DMHIs, using qualitative or mixed methods approaches to gain deeper insights into user interactions and satisfaction. The evaluation focus of the experiment determined the type of methodological approach used; for example, studies assessing the effectiveness of the intervention or tracking symptoms primarily used quantitative approaches. Further, 4 studies conducted RCTs to assess the intervention outcomes. Studies also used standardized tools and quasi-experimental designs to measure changes in mental health symptoms over time [-]. Other studies used retrospective observational designs to assess real-world effectiveness in uncontrolled environments [,,].
Second, for user experience, 2 studies conducted user-centered evaluations focused on understanding user experiences, perceptions, satisfaction, and engagement with DMHIs. This approach provides insights into subjective experiences and contextual nuances by examining how the interventions meet users’ needs, how users interact with the intervention, and the practical challenges, which might be unique to individual users. Studies primarily used qualitative approaches such as focus groups, semistructured interviews, open-ended questions, and participatory workshops [,]. Qualitative analysis was applied to field notes and reflective logs to explore users’ phenomenological insights, offering a deep understanding of their lived experiences with the intervention. For example, Shkel et al [] integrated qualitative findings with quantitative measures, including crowdsourced support and health questionnaires, to contextualize user feedback on areas for improvement in their intervention. Valentine et al [] used a phenomenological approach to explore the lived experiences of patients through open-ended questions and reflective logs documenting user interactions and perceptions. These insights revealed themes related to engagement, emotional support, and practical challenges, providing actionable input for improving the design of the intervention.
Third, for implementation and feasibility, 2 studies evaluated the clinical outcomes and practical considerations of DMHIs, such as usability in health care settings and implementation barriers. To achieve this, the studies adopted a mixed methods approach. For instance, van Orden et al [] and Mayer et al [] examined the feasibility of integrating DMHIs into health care workflows while evaluating their effectiveness, reflecting a balance between evaluating clinical outcomes and addressing the real-world challenges that might occur during use.
Fourth, for longitudinal tracking, 3 longitudinal studies sought to evaluate the long-term impact of the interventions on users. These studies combined quantitative symptom tracking with qualitative user feedback to assess ongoing engagement and the durability of the interventions (eg, [,]). This evaluation approach provides insight into the long-term impact of and engagement with DMHIs over time, giving researchers foresight into which features, functions, or interaction qualities might become redundant over time.
For methodological approach, the studies illustrated a range of methods. For example, 4 studies used RCTs to assess the effectiveness of the interventions, while others opted for retrospective observational designs, which allow evaluation in real-world settings. Further, 2 studies used mixed methods approaches, combining qualitative and quantitative data to balance measurable outcomes with rich, contextual insights. Finally, 2 studies focused exclusively on qualitative methods to explore user experiences.
First, for quantitative approaches, the evaluation focus often influenced the approach or method used to assess the DMHIs. Quantitative studies used experimental designs and standardized measurement tools to evaluate the effectiveness, feasibility, and long-term impacts of the DMHIs. RCTs were particularly prominent in studies evaluating effectiveness and symptom tracking, emphasizing their role in establishing causal relationships and intervention efficacy. For example, Kerber et al [] used baseline, postintervention, and follow-up assessments to evaluate a self-guided transdiagnostic app, measuring its effects on mental health symptoms and quality of life over time [,,]. Nonrandomized approaches, such as retrospective observational studies, were also adopted to evaluate real-world applications of the DMHIs. Venkatesan et al [] used retrospective data to assess a therapy-supported app for depression and anxiety, combining standardized questionnaires with observational data to measure engagement, symptom improvement, and effectiveness.

As highlighted in the studies, standardized tools were consistently used in quantitative studies to ensure reliable, feasible, and comparable outcomes by measuring symptom severity, well-being, and functional outcomes. These tools include the PHQ-8 and PHQ-9 (Patient Health Questionnaire), which were frequently used to measure depression severity, as seen in studies by Cuijpers et al [], Mayer et al [], and Venkatesan et al []. The GAD-7 (Generalized Anxiety Disorder-7 Scale) was similarly used by Venkatesan et al [] and Klein et al [] to quantify anxiety symptoms. Some studies adopted broader well-being metrics, such as the WHO-5 (World Health Organization Well-Being Index) and the WHODAS-12 (World Health Organization Disability Assessment Schedule), as demonstrated by Cuijpers et al [], to provide a more comprehensive understanding of the intervention’s effects on overall functioning and symptom reduction. Klein et al [] incorporated specialized tools such as e-PASS and the Kessler-6 scale to evaluate anxiety treatments and assess a range of DSM-IV-TR (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision) disorders across multiple self-help e-therapy programs.

In addition to these tools, some studies focused on service-level and user-level metrics to assess their intervention at scale by analyzing real-world usage and outcomes. Harty et al [], for instance, conducted a retrospective observational study of a supported digital CBT service offered by the national health service. Routine outcome monitoring and repeated assessments further contributed to the evaluation of DMHIs by tracking changes in symptoms, satisfaction, and engagement over time. These approaches provided insights into both the immediate efficacy and the long-term impact of interventions.
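To illustrate how such standardized tools yield comparable outcomes across studies, the following sketch scores the PHQ-9 using its conventional severity cutoffs (5, 10, 15, and 20); the item responses shown are invented for illustration:

```python
# PHQ-9: nine items scored 0-3 each; total 0-27, with conventional
# severity bands (minimal, mild, moderate, moderately severe, severe).
def phq9_severity(item_scores):
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    bands = [(20, "severe"), (15, "moderately severe"),
             (10, "moderate"), (5, "mild"), (0, "minimal")]
    # Return the total score and the first band whose cutoff it meets.
    return total, next(label for cutoff, label in bands if total >= cutoff)

# Invented responses for one hypothetical participant:
print(phq9_severity([2, 1, 2, 1, 0, 1, 2, 1, 1]))  # (11, 'moderate')
```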
Second, for qualitative approaches, qualitative methodologies were frequently used in DMHI studies, especially those focused on assessing the user experience with the interventions. These methodologies provide insights into users’ needs, perceptions, and the contextual relevance of the DMHIs, capturing subjective nuances that influence the development, implementation, and usefulness of these interventions and that are not fully captured by quantitative methods. Of the 11 evaluation studies, 2 [,] exclusively adopted qualitative approaches to evaluate the user experience with DMHIs designed for multiple mental health challenges. Further, 3 qualitative approaches emerged across these studies: semistructured interviews and open-ended questions to gather detailed user feedback; field notes and reflective logs to capture contextual nuances and user behaviors; and qualitative analysis frameworks, such as phenomenological approaches, to explore deeper insights into the impact and usability of the DMHIs. For instance, Shkel et al [] combined an RCT with qualitative methodologies to evaluate a web-based intervention, Overcoming Thoughts, designed for anxiety and depression. By conducting semistructured interviews wi