Common psychiatric disorders, such as depression and anxiety, are prevalent and costly. Recent nationally representative data suggest that approximately 1 in 3 adults (and 1 in 2 young adults) in the United States struggles with anxiety or depressive symptoms []. Although efficacious mental health treatments exist [], demand for care outpaces its availability, leaving more than half of those with mental health needs unable to access care []. According to national data, barriers include cost, inadequate insurance coverage, a shortage of clinicians, geographic challenges, stigma, and insufficient health care provider diversity []. Alongside long-term health care system reforms, there is a need for immediate, scalable solutions to increase access to care.
Digital mental health interventions (DMHIs), delivered via smartphones or the internet, offer one promising solution. DMHIs can disseminate evidence-based interventions at scale and at low cost, and their inherent privacy can mitigate stigma. However, existing DMHIs have not yet fully realized their potential.
Problems With Efficacy, Engagement, and Trust

DMHIs range from unguided (ie, self-help apps and electronic learning modules) to guided (those that include varying degrees of human support). Because unguided DMHIs offer greater scalability at lower cost, they have been touted as ideal solutions to access problems. However, challenges with efficacy, engagement, and trust have undermined their utility [,]. In contrast, guided DMHIs have stronger evidence of efficacy, engagement, and trust.
First, meta-analytic evidence suggests that unguided (vs guided) DMHIs have lower effect sizes and are less effective for those with more severe psychopathology [,]. In addition, when given a choice between using a mental health app as stand-alone treatment or to augment in-person therapy, people with moderate and severe (vs mild) psychopathology prefer the latter []. Even internet-delivered cognitive behavioral therapy (iCBT) with minimal human support tends to be less acceptable than individual cognitive behavioral therapy (CBT) for people with higher clinical severity []. These findings suggest that the lack of human support may lower perceived quality and acceptability for many, especially those with higher clinical severity.
Second, unguided DMHIs suffer from low real-world engagement. Despite demonstrating efficacy in reducing depression and anxiety in randomized controlled trials (RCTs) [], many stand-alone mental health apps fail to sustain engagement after just 10 days in real-world settings []. Similarly, the dropout rates of iCBT are as high as 80%, despite iCBT’s demonstrated efficacy []. A recent systematic review found that DMHI engagement facilitators include greater digital literacy, more structured training in DMHI use, perceived DMHI relevance, and DMHI integration into daily life []. Because guided DMHIs can directly address such factors, they may be more capable of facilitating engagement []. Indeed, studies comparing unguided and guided versions of the same app show that the latter has better engagement and outcomes [,].
Third, mistrust of data handling hinders DMHI uptake []. Most DMHIs operate outside health care regulation and lack data use protections and transparency []. Examples of problematic data sharing practices and opaque privacy policies abound [] and have, at times, led to high-profile privacy violations []. Such incidents undermine trust [], contributing to low DMHI engagement [].
Taken together, these findings suggest that sacrificing human support for scalability can lower quality and acceptability. To reach people with mental health problems that range in clinical severity, it is key to retain human support, design trustworthy DMHIs, and use scalable engagement strategies (eg, support that enhances DMHI use, relevance, and digital literacy). However, the challenge remains: how can this all be done without sacrificing scalability?
Guided DMHIs: More Viable Solutions?

While guided DMHIs have stronger evidence, they too have been incomplete solutions for addressing barriers to access. Many simply add digital components to existing face-to-face care [-], which neither lowers cost nor eases the workforce shortage. To address these barriers, some guided DMHIs have replaced clinicians with less costly supporters (eg, coaches and nonprofessionals) []. While the effect sizes of those DMHIs are higher than those of unguided DMHIs [], it remains unclear when and for whom support is best provided by a clinician, coach, or nonspecialist []. Until this is known, retaining a trained clinician in care remains an important ethical consideration in DMHIs.
In addition, a key implementation challenge inherent to guided DMHIs remains unresolved: As DMHIs differ from traditional in-person services, they do not readily fit within traditional clinician workflows; as such, the addition of digital tools and data can feel burdensome to clinicians []. As a result, when DMHIs transition from RCTs to real-world settings, uptake tends to be low, and these tools are quickly abandoned []. Minimizing the added workload for clinicians is essential for the successful implementation of guided DMHIs. However, examples of effective DMHI integration into health care settings remain scarce [].
A Practical Solution: Ensuring Scalability, Quality, Engagement, and Trust

A more comprehensive solution is that of the Digital Clinic, an innovative guided DMHI that combines brief clinician-delivered treatment via telehealth with between-session support from an app and a nonspecialist called a digital navigator [,], as seen in Figure 1. Each of the model's components addresses leading access barriers while supporting effectiveness.
Figure 1. The Digital Clinic. In the Digital Clinic, (A) the patient receives brief therapy sessions of evidence-based transdiagnostic treatment, provided via telehealth by a trained clinician. (B) The mindLAMP app is integrated into care, enabling real-world skills practice and measurement-based care, including digital phenotyping data collection. (C) Brief weekly check-ins are also held via telehealth with a digital navigator, who provides technology support, shares key data insights, and encourages sustained app engagement. (D) The digital navigator also shares data highlights from the app with the clinician, who then uses app data to inform clinical decision-making and enhance patient care in subsequent sessions.

First, to ensure both scalability and quality, the model offers brief evidence-based transdiagnostic treatment by a trained clinician (A). Brief evidence-based interventions are more efficacious than treatment as usual for depression and anxiety [-]. Their brevity addresses the workforce shortage by freeing up clinicians faster; retaining the clinician in care increases the likelihood of reaching people with higher clinical severity.
Second, for greater scalability and impact, the mental health app mindLAMP is integrated into care to facilitate real-world skills practice and enable measurement-based care (B). Recent meta-analytic evidence suggests that supplementing standard interventions with an app has additive effects [], possibly increasing therapeutic dose without taking up additional clinical resources []. In addition, measurement-based care is a well-established yet underutilized approach to increasing care quality [-]. To facilitate measurement-based care, mindLAMP streamlines questionnaire administration and data visualization; data are discussed in sessions, informing clinical decision-making. The app’s digital phenotyping capacity also enables the collection of behavioral data, offering additional insights for clinical care [].
Third, to support patient app engagement and clinician data integration, a digital navigator is included in this model (C). Designed to support innovative technology-enhanced care models, the broad purpose of the digital navigator role is to make technology usable and useful for patients and clinicians [-]. In the Digital Clinic, digital navigators support patients during weekly check-ins in several ways: they explain data use and privacy policies to facilitate trust in mindLAMP (which is designed to meet stringent privacy and security standards); they provide technical support and app use training, as needed; they share data highlights with patients; and they encourage patients to use the app in ways their clinician has recommended. Digital navigators also offer data insights to clinicians that can inform patient care, so that data enhance care without adding to clinician workloads []. As digital navigators do not need clinical expertise, this role preserves the model's scalability and cost-effectiveness.
Finally, remote delivery methods are prioritized to enhance scalability and mitigate stigma, with therapy sessions and digital navigator check-ins conducted via telehealth. Telehealth is acceptable to patients [], and brief evidence-based treatment via telehealth is noninferior to its in-person equivalent []. In addition, some data suggest that digital approaches can address stigma [].
Prior research has not focused on or evaluated the feasibility and acceptability of a comprehensive care model incorporating all of these components to address leading access barriers. An earlier report on the development of the Digital Clinic showed that this model is promising []. In this study, we conducted an open trial to evaluate the Digital Clinic model’s feasibility, acceptability, and potential efficacy in treating patients with common mental health problems ranging in clinical severity.
A total of 10 clinicians were involved in this study (5 master’s and 3 doctoral mental health counseling students from local colleges, a postdoctoral-level clinical psychologist, and a licensed psychiatrist). Clinicians identified as White (n=6), Asian (n=3), and Black (n=1), and had a background in evidence-based treatment. In addition, 16 nonclinician volunteers with an interest in digital interventions were involved as digital navigators in the study [,]. Digital navigators were either college students (n=8) or had recently earned bachelor’s degrees (n=8). They had no prior clinical experience, as the digital navigator role does not require clinical expertise but rather active listening skills, which can be taught in a curriculum [].
Procedures

Recruitment and Enrollment Procedures

Individuals were screened on the basis of the inclusion and exclusion criteria shown in Textbox 1.
Textbox 1. Participant inclusion and exclusion criteria.

Inclusion criteria

Aged at least 18 years
Able to speak English
Ownership of an Android or Apple phone

Exclusion criteria

Severe intellectual or attentional deficits that would interfere with participation in therapy
Acute suicidality requiring a higher level of care
Current enrollment in a more intensive care program (ie, inpatient treatment, partial hospitalization, or detox rehabilitation program)

For potential participants who were in higher levels of care, we recommended transitioning to the Digital Clinic after discharge, allowing it to serve as a step-down, transitional care option. Participants with subclinical anxiety or depression severity scores were not screened out, to avoid denying care to those whose distress may not have been captured by these measures, but they were later excluded from the analyses.
Participants were recruited primarily through referrals from primary care at 2 hospitals in the eastern region of Massachusetts, Beth Israel Deaconess Medical Center and Beth Israel Deaconess Hospital—Needham. A psychiatrist and member of the research team held informational sessions twice a year for primary care doctors from these 2 hospitals. These sessions described the Digital Clinic, its offerings, eligibility criteria, and how to introduce it to potentially eligible patients. Referrals were then made directly from primary care providers (PCPs) to the psychiatrist on the research team, with no direct outreach to or recruitment of primary care patients. Recruitment began in August 2022. Upon receipt of a referral, participants were sent a web-based screening questionnaire via email. Once this form was completed and potential eligibility was confirmed, participants were invited to attend a virtual appointment with a trained digital navigator, who offered details about the intervention and all of its components. If a participant chose to enroll in the clinic during this appointment, the digital navigator scheduled their first therapy appointment and administered a baseline questionnaire.
Intervention Procedures

Enrolled participants were offered 8 weeks of brief app-augmented treatment based on the Unified Protocol (UP) for Transdiagnostic Treatment of Emotional Disorders [], as further described in the Intervention Description section below. Treatment was provided free of charge and involved weekly therapy sessions with a trained clinician, weekly check-ins with a digital navigator, and regular use of the mental health app mindLAMP. All sessions and check-ins were conducted via telehealth.
Training and Competency

A brief therapy manual based on the UP was created for this study. This manual contained session-by-session guides, including guidance for clinicians on integrating the app and its data into care. Clinicians first received several in-person training sessions in evidence-based treatment. These sessions, led by a licensed clinical psychologist, amounted to approximately 8 hours of training and focused on understanding UP principles and conducting therapy based on the manual. Once clinicians began seeing participants, their adherence and competency were monitored closely in weekly supervision. Individual supervision (with a psychologist or psychiatrist) focused on conceptualizing patient problems within the UP framework and conducting treatment in line with evidence-based principles.
Digital navigators were offered 4 in-person training sessions that amounted to a total of 10 hours of training. The lead digital navigator of the research team led these sessions, which included didactic information and experiential learning through role plays. Topics covered included how to conduct the introductory session and the weekly check-ins with participants, how to troubleshoot technical issues that may arise along the way, how to understand digital phenotyping data collected via mindLAMP, and how to support digital literacy by helping participants interpret and understand their own data. Digital navigators were also trained to handle special considerations that could arise in their support meetings with clients: if a participant expressed suicidal thoughts or intent, for example, the digital navigator was trained to escalate the concern to a designated clinical supervisor. Similarly, if a client sought therapeutic advice, digital navigators were trained to encourage the participant to direct those inquiries to their clinician in the next session and to use the app for support between sessions. Upon completing this 10-hour training curriculum, digital navigators conducted 2 supervised live appointments with participants before being approved to conduct appointments on their own. These training guidelines have been described in detail elsewhere [,].
Intervention Description

The Digital Clinic is a blended care program that offers brief evidence-based treatment augmented by a mobile app (mindLAMP) [] and a digital navigator. The purpose of integrating mindLAMP into care is 2-fold: to help patients acquire and generalize new skills and to collect psychosocial insights that inform treatment. The role of the digital navigator is to support app use, helping to resolve any difficulties (technological or motivational) that may interfere with the patient's ability to benefit from app use. The digital navigator role has been integrated into this model in light of research showing that app engagement often correlates with clinical improvement yet tends to decline when patients are given a stand-alone app without support []. The digital navigator was also introduced into the model to avoid overburdening clinicians with app and data management tasks in addition to their therapy-related responsibilities.
The Digital Clinic intervention has 2 phases. Phase 1 involved app use with digital navigator support: participants began the 8-week program with 2 weeks of app use supported by brief weekly check-ins with a digital navigator. The goal of this period was to help participants become accustomed to completing daily and weekly self-report measures on the app and to allow the app to begin collecting digital phenotyping data. Digital phenotyping is defined as the moment-by-moment quantification of the “individual-level human phenotype in-situ using data from smartphones and other personal digital devices” [] and in this study included several behavioral metrics: steps, movement, screen time, and a sleep estimate. Highlights of these and other data from the app were shared with participants during the digital navigator check-ins, where digital navigators also offered support for continued app use. Phase 2 involved therapy sessions and continued app use along with digital navigator support: participants met weekly for 6 weeks with a clinician, who provided transdiagnostic treatment based on the UP. Sessions lasted 45 to 50 minutes, with an hour-long intake. In each session, mindLAMP data were reviewed with the participant, and UP skills were discussed and assigned as home practice through the app. Therapists shared their screen at key points in each session to review mindLAMP data together with the participant, including week-by-week symptom fluctuation graphs and home practice data.
The UP was selected as the basis of care in the Digital Clinic because its transdiagnostic approach aligned with the clinic's primary aim of increasing access to care. This approach enhances scalability by training clinicians to deliver a single therapy that can be applied to a wide range of presentations. The UP is an emotion-focused CBT that targets reactivity and avoidance, 2 underlying mechanisms that perpetuate distress across various forms of psychopathology. The therapy begins with conceptualization of the patient's problems within the UP framework. The therapist then offers psychoeducation on the adaptive function of emotions and helps the patient learn to self-monitor their mood and identify the 3 components of emotional experiences (ie, cognitive, physiological, and behavioral). The goal is to help the patient begin to tolerate and understand, rather than habitually react to, unpleasant emotions. Core UP interventions include mindfulness, cognitive flexibility, countering avoidance, and exposure (including interoceptive, emotional, and situational). A termination session consolidates learning and assists patients in creating a plan to independently practice skills and thus continue making gains after termination. The brief treatment manual designed for the Digital Clinic offers guidance on all of these topics, as seen in Table 1.
Table 1. Topics covered in the Digital Clinic manual.a

Session | Focus | mindLAMP home practice module
Intake | Problem assessment, history, goals |

aThe Digital Clinic manual focuses on emotion-focused cognitive behavioral therapy–based skill-building interventions that support adaptive coping. The manual is based on the Unified Protocol. Therapists are trained to adhere to its core principles but deliver it flexibly, that is, by slowing down the pace, emphasizing certain interventions more than others, or adding an adjunctive module to tailor treatment to the patient's needs.
bNot applicable, given that it is the last session, where participants were encouraged to continue practicing all skills learned during their time in the Digital Clinic.
Materials

mindLAMP App and Dashboard

mindLAMP is an open-source mental health app designed to be easily customizable to meet the needs of different populations and to be integrated into care. mindLAMP comes with an accompanying dashboard that can be accessed on a desktop by both patient and clinician. LAMP stands for the app's 4 prominent navigation tabs: Learn (psychoeducation modules), Assess (self-report measures), Manage (interactive modules for skills practice), and Portal (visualizations of patient data). mindLAMP also has digital phenotyping capabilities and can automatically collect various types of behavioral data (eg, steps and screen time) without the patient having to enter them. mindLAMP offers a wide variety of sensors, including metrics derived from Apple SensorKit that patients can opt in to share. The sensors used in the Digital Clinic are the accelerometer, ambient light, nearby devices (detected through Bluetooth proximity), and screen state. mindLAMP has been described in more detail elsewhere [,].
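To illustrate how raw sensor streams become the behavioral metrics described above, the following minimal R sketch (R being the language used for this study's analyses) derives a daily screen-time estimate from screen-state events. The flat event format, column names, and function name are hypothetical simplifications, not mindLAMP's actual sensor schema or API.

```r
# Hypothetical sketch: estimate daily screen time (minutes) from screen-state
# events. Assumes a data frame with `timestamp` (Unix seconds) and `state`
# (1 = screen on, 0 = screen off); mindLAMP's real schema differs.
estimate_daily_screentime <- function(events) {
  events <- events[order(events$timestamp), ]
  on_idx <- which(events$state == 1)
  # Pair each screen-on event with the next screen-off event
  durations <- vapply(on_idx, function(i) {
    off <- which(events$state == 0 & events$timestamp > events$timestamp[i])
    if (length(off) == 0) NA_real_ else events$timestamp[off[1]] - events$timestamp[i]
  }, numeric(1))
  day <- as.Date(as.POSIXct(events$timestamp[on_idx], origin = "1970-01-01"))
  # Sum on-screen seconds per calendar day and convert to minutes
  tapply(durations, day, function(x) sum(x, na.rm = TRUE) / 60)
}
```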
Digital Clinic Manuals

Clinicians were trained with the Digital Clinic manual for conducting brief, app-augmented, UP-based therapy via telehealth. The manual was written by a licensed clinical psychologist and includes guidance for integrating the app and its data into care. Digital navigators were trained with the Digital Navigator manual, which details the protocol for the digital navigator role. Protocols for digital navigator sessions were previously published as part of the Digital Clinic Implementation Manual [], along with detailed descriptions of how the digital navigator role can be adapted for different clinical settings [].
Measures

Feasibility

Feasibility of recruitment was assessed in 2 ways. First, we calculated the proportion of approached participants who enrolled in the clinic. Approached participants were those we first confirmed to be eligible upon referral and thus invited to an introductory informed consent meeting. Second, we calculated the proportion of participants who agreed to participate after learning what was involved in the Digital Clinic program during the introductory meeting. For both metrics, a feasibility rate of at least 70% was considered good; for the first metric, at least 36% was considered acceptable, given that 36% is the rate of treatment initiation upon receiving a new depressive episode diagnosis in primary care, according to large-scale research [].
Feasibility of retention was determined by calculating the proportion of participants who completed the entire 8-week program. A feasibility rate of 70% was considered good, based on large-scale research that found a 30% attrition rate for in-person therapy in high-income countries []; a feasibility rate of 76% was considered ideal, based on recent RCT findings showing a 24% attrition rate for blended CBT [] (ie, CBT that blended in-person treatment and iCBT components).
Adherence to mindLAMP home practice was determined by the frequency of activities completed in mindLAMP. Adherence was deemed good if at least 70% of participants used the app to complete home practice (ie, self-monitoring and UP-based skills practice) on at least half the days of their total time in the clinic (ie, 8 weeks). Therapist adherence to the manual was closely monitored in ongoing supervision and was deemed good if at least 75% of a random selection of 40% of all clinical notes described session content in line with the Digital Clinic therapy manual (ie, in line with UP core principles and interventions). Digital navigator adherence to the digital navigator protocol was assessed via checklists that each digital navigator submitted after each meeting with a patient. Preexisting templated checklists for each check-in covered such topics as introducing the clinic structure, setting up and demonstrating the app, reviewing data highlights, and troubleshooting technology issues. Adherence was considered good if at least 75% of a random selection of 10% of all digital navigator meeting checklists showed perfect adherence.
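For illustration, the app-adherence criterion above might be computed as in the following R sketch; the activity log data frame and its column names are hypothetical.

```r
library(dplyr)

# Hypothetical activity_log: one row per completed mindLAMP home practice
# activity, with columns participant_id and completed_at (a date-time).
app_adherence <- activity_log %>%
  mutate(day = as.Date(completed_at)) %>%
  distinct(participant_id, day) %>%            # count each active day once
  count(participant_id, name = "active_days") %>%
  mutate(adherent = active_days >= 56 / 2)     # half of the 8-week (56-day) program

mean(app_adherence$adherent)                   # benchmark: at least 0.70
```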
Feasibility of quantitative measures was deemed acceptable if at least 80% of participants completed questionnaires at each time point. Finally, feasibility of the digital format of the program was assessed with 1 question in the postintervention questionnaire regarding hurdles to digital access (“What was the biggest hurdle you encountered regarding access to the Digital Clinic?”). Feasibility was deemed good if at least 75% endorsed “No significant hurdles encountered” rather than the other options (ie, “difficulty getting stable Wi-Fi,” “difficulty finding a quiet place for clinician sessions,” “difficulty using mindLAMP,” or other self-reported hurdles).
Acceptability

Satisfaction with key aspects of the intervention was evaluated using several questions in the postintervention questionnaire. For clinician satisfaction, participants rated “How supported did you feel by your clinician?” on a scale of 1 (not supported at all) to 5 (very supported). Digital navigator satisfaction was assessed with 4 items (“What was the quality of time you spent with your Digital Navigator?” “What was the quality of information provided by the Digital Navigator?” “The Digital Navigator was willing to understand my questions and concerns,” and “The Digital Navigator explained things in a way I understood.”), each rated on a scale of 1 (low satisfaction) to 5 (high satisfaction). These 4 items were averaged to create a composite digital navigator satisfaction score. App satisfaction was assessed with “How would you rank the mindLAMP user experience?” rated from 1 (very difficult to use) to 5 (very easy to use). Acceptability was deemed good if these components of the Digital Clinic were each rated at least 4, on average.
A total of 2 additional indicators of acceptability were assessed: therapeutic alliance with the clinician, measured via the Working Alliance Inventory-Short Revised (WAI-SR) [], and digital working alliance, or the perception of the app as a helpful therapeutic tool, measured via the Digital Working Alliance Inventory (DWAI) []. Both measures were administered weekly via mindLAMP, and the scores closest to the program midpoint (ie, within ±10 days) were used in this study. For the WAI-SR, which has demonstrated good validity and reliability [,], participants rated 12 items (eg, “What I am doing in therapy gives me a new way to look at my problem”) from 1 (seldom) to 5 (always), summed to yield a total score ranging from 12 to 60. For the DWAI, which follows the same structure as the WAI and has also shown good reliability and validity [], participants rated 6 items (eg, “I believe the app tasks will help me to address my problem”) from 1 (strongly disagree) to 7 (strongly agree), yielding a summed total score ranging from 6 to 42. Although normative data for determining cutoffs for these scales are not available, it has been suggested that a score of at least 42 is considered positive or high on the WAI-SR []. Our benchmarks for good acceptability were thus a minimum average score of 42 on the WAI-SR and a corresponding minimum average score of 30 on the DWAI.
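As a sketch of the “closest to the midpoint” selection rule, the following R code keeps, for each participant, the weekly WAI-SR administration nearest the program midpoint and within the ±10-day window; the data frame and column names are hypothetical.

```r
library(dplyr)

# Hypothetical wai_weekly: one row per weekly WAI-SR administration, with
# columns participant_id, assess_date, midpoint_date, and wai_total.
midpoint_wai <- wai_weekly %>%
  mutate(days_from_mid = as.numeric(assess_date - midpoint_date)) %>%
  filter(abs(days_from_mid) <= 10) %>%                  # within the +/-10-day window
  group_by(participant_id) %>%
  slice_min(abs(days_from_mid), n = 1, with_ties = FALSE) %>%
  ungroup()
```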
Potential Efficacy

Depressive symptom severity and anxiety symptom severity were assessed with the 9-item Patient Health Questionnaire (PHQ-9) [] and the 7-item Generalized Anxiety Disorder scale (GAD-7) [], respectively. Participants completed these measures at the preintervention, postintervention, and 3-month follow-up time points via a web-based questionnaire and weekly via mindLAMP during the intervention period. On each measure, participants rated from 0 (not at all) to 3 (nearly every day) how much each symptom bothered them over the past 2 weeks. Scale items were then summed to yield a total PHQ-9 score (0-27) and GAD-7 score (0-21). The PHQ-9 has demonstrated construct and criterion validity and excellent internal reliability (α=.89) [], as has the GAD-7 (α=.92) []. Scores on these 2 scales were also summed to derive the Patient Health Questionnaire Anxiety and Depression Scale (PHQ-ADS), a measure of comorbid depressive and anxiety symptom severity with strong convergent and construct validity and high internal consistency reliability (α=.88) []. This measure was included because our treatment is transdiagnostic and targets comorbid disorders.
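The scoring just described reduces to simple item sums; a minimal R sketch (item column names hypothetical) is as follows.

```r
# Hypothetical `responses`: one row per participant per time point, with item
# columns phq1..phq9 and gad1..gad7, each scored 0-3.
responses$phq9    <- rowSums(responses[paste0("phq", 1:9)])  # total score, 0-27
responses$gad7    <- rowSums(responses[paste0("gad", 1:7)])  # total score, 0-21
responses$phq_ads <- responses$phq9 + responses$gad7         # PHQ-ADS, 0-48
```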
Secondary Clinical Outcomes

Emotion regulation self-efficacy, the hypothesized mechanism of treatment [], was measured via the Patient Reported Outcomes Measurement Information System (PROMIS) Item Bank (version 1.0)—Self-Efficacy for Managing Emotions Short Form 8a [], which contains 8 items rated from 1 (I am not at all confident) to 5 (I am very confident). These items are summed to yield a total score from 8 to 40, with higher scores indicating higher levels of self-efficacy for managing negative emotions. This brief scale has good psychometric properties, including high internal consistency (α=.90-.95) []. Flourishing, a measure of psychosocial functioning, was measured with the 8-item Flourishing Scale []. Rated from 1 (strongly disagree) to 7 (strongly agree), items on this scale are summed to yield a total score from 8 to 56, with higher scores representing greater psychological resources []. This scale also has good psychometric properties and high internal consistency (α=.86). Functional impairment was measured with the Sheehan Disability Scale, a 5-item assessment of impairment in 3 domains: work or school, social life, and family life. The 3 items assessing these domains are rated from 0 (not at all) to 10 (extremely) and yield a total summed score of 0 (unimpaired) to 30 (highly impaired). The Sheehan Disability Scale is a psychometrically sound instrument, with good internal consistency (α=.83) [].
Data Analytic Plan

Analyses were conducted using R (version 4.2.1). Guided by published guidelines for feasibility studies [], we computed descriptive statistics to assess feasibility and acceptability and then conducted paired samples t tests (2-tailed, with Cohen d effect sizes) to examine within-group pre-post differences as a marker of potential efficacy. As commonly reported in the therapy outcomes literature, we also computed clinically significant improvement and remission rates to further assess potential efficacy. Clinically significant improvement was determined by the proportion meeting the minimum clinically important difference (MCID) thresholds established in prior empirical research: 4 points for the PHQ-9 and GAD-7 [] and 6 points for the PHQ-ADS []. Per published guidelines, remission was defined as a score of <8 on the PHQ-9 [] and <8 on the GAD-7 [,], which corresponds to <16 on the PHQ-ADS.
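For concreteness, the pre-post analyses described above can be sketched in R as follows for the PHQ-9 (the GAD-7 and PHQ-ADS analyses are analogous); the data frame and column names are hypothetical, and Cohen d is computed here in its paired form (mean change divided by the SD of change scores).

```r
# Hypothetical `phq9_set`: completers with clinical-range baseline PHQ-9,
# with columns phq9_t1 (baseline) and phq9_t2 (end of intervention).
change <- phq9_set$phq9_t1 - phq9_set$phq9_t2

t.test(phq9_set$phq9_t1, phq9_set$phq9_t2, paired = TRUE)  # 2-tailed paired t test
mean(change) / sd(change)                                  # Cohen d (paired form)

mean(change >= 4)             # clinically significant improvement (PHQ-9 MCID = 4)
mean(phq9_set$phq9_t2 < 8)    # remission (PHQ-9 score < 8)
```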
In preparation for the analyses, for participants who did not complete the questionnaire at the end of the intervention (n=36), we obtained PHQ-9 and GAD-7 scores from mindLAMP if they had completed these measures in the app within 10 days of their last therapy session (n=27). Before conducting each type of potential efficacy analysis, we excluded data from participants with baseline subclinical scores on the PHQ-9 (n=74), the GAD-7 (n=89), or the PHQ-ADS (n=81). A total of 7 participants experienced significant life events during the 8-week period, including the death of a close loved one (n=5, 71%) and homelessness due to eviction (n=1, 14%) or fire (n=1, 14%). Although we offered these individuals care, we excluded their data from the analyses because these events would have prevented adequate participation in and response to brief treatment. We also excluded data from 1 individual who was wrongly referred to the clinic for a physical rather than a psychological condition.
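A sketch of this analysis-set construction for the PHQ-9, with hypothetical column names and flags and a placeholder for the subclinical cutoff (not restated here), might look as follows.

```r
library(dplyr)

phq9_clinical_cutoff <- 10  # placeholder; use the study's subclinical threshold

phq9_set <- completers %>%
  # Backfill missing end-of-intervention scores from mindLAMP when the in-app
  # PHQ-9 was completed within 10 days of the last therapy session
  mutate(phq9_t2 = coalesce(
    phq9_t2_questionnaire,
    if_else(abs(days_app_phq9_from_last_session) <= 10, phq9_app, NA_real_)
  )) %>%
  # Retain only participants with clinical-range baseline scores
  filter(phq9_t1 >= phq9_clinical_cutoff) %>%
  # Drop cases excluded for major life events or misreferral
  filter(!excluded_for_life_event, !misreferred)
```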
Ethical Considerations

This study was reviewed by the Beth Israel Deaconess Medical Center institutional review board before being conducted and was approved as a quality improvement project (reference #2022D000016). All participants provided consent as part of the introductory meeting to the Digital Clinic, during which a digital navigator informed them of all aspects of the clinic, including the collection of data and its use in the clinic and afterward. As part of the baseline questionnaire, participants signed an informed consent and acknowledgment of services form, which outlined the use of data and the limits of confidentiality in treatment and asked participants for their explicit consent to have their deidentified data used in aggregate with others' data for research purposes. Data were collected in Health Insurance Portability and Accountability Act (HIPAA)-compliant systems. Participants did not receive compensation for participating in the Digital Clinic.
Results

Participants

A total of 215 participants completed the Digital Clinic program (n=136, 63.3%, cisgender women; n=73, 34%, cisgender men; and n=6, 2.8%, nonbinary), with a mean age of 41 (SD 14) years. Overall, 70.2% (151/215) identified as White, 10.7% (23/215) as Asian, 9.3% (20/215) as Black or African American, 7% (15/215) as Hispanic or Latinx, 2.8% (6/215) as Middle Eastern or North African, 0.5% (1/215) as Native Hawaiian or Pacific Islander, and 0.5% (1/215) as biracial.
Feasibility of Recruitment

Of the 401 individuals approached after initial eligibility was confirmed, 289 decided to enroll in the clinic, a 72.1% (289/401) recruitment rate (good, per the 70% benchmark). The proportion of participants who agreed to enroll after understanding all of the components of the Digital Clinic during their first introductory meeting was 87.8% (289/329; excellent, per the 70% benchmark). The recruitment and enrollment flowchart is shown in Figure 2.
Figure 2. Recruitment and enrollment flowchart. Data for this study come from a clinic with a constant flow of patients. “Ongoing” in this flowchart thus refers to participants who were still receiving treatment at the Digital Clinic at the time that data collection ended for this particular study. “Completed” refers to participants who completed a full course of care (ie, phase 1 and phase 2, as described in the Intervention Description section of the Methods). DN: digital navigator; EHR: electronic health record.

Feasibility of Retention

After excluding 31 participants still in progress from the 289 enrolled, 83.3% (215/258) completed the 8-week program, an excellent retention rate (given benchmarks of 70%-76%).
Adherence to mindLAMP Home Practice

Results indicated that 73.5% (158/215) of participants used the app on at least half the days of their total time in the clinic (benchmark 70%).
Therapist Adherence

A review of clinical notes from a random selection of 51 cases revealed that 86.6% (265/306) of all session notes closely adhered to the UP core principles and interventions described in the Digital Clinic manual. The average adherence rate within each participant's course of care was 87% (SD 17%), indicating that adherence was present in at least 5 of the 6 sessions, on average. Regarding the 4 core UP interventions, 98% (50/51) of cases focused on mindfulness practice, 78% (40/51) on cognitive flexibility, 76% (39/51) on countering avoidance, and 41% (21/51) on exposure. The Digital Clinic manual permits the use of non-UP adjunctive evidence-based interventions as needed, provided they align with the case conceptualization; session notes describing adjunctive interventions most often focused on assertive communication skills practice (ie, the DEARMAN [Describe, Express, Assert, Reinforce, Mindful, Appear Confident, and Negotiate] skill from dialectical behavior therapy), additional assessment and problem-solving, relaxation (eg, progressive muscle relaxation and deep breathing), gratitude practice, and values clarification.
Digital Navigator Adherence

A review of a random selection of 22 cases' digital navigator meeting checklists showed that 98.9% (196/198; benchmark 75%) had perfect adherence.
Feasibility of Quantitative Measures

For the 215 participants who completed the intervention, questionnaire completion rates were 100% (215/215) at baseline, 83.3% (179/215) at the end of the intervention, and 39.1% (84/215) at 3-month follow-up. (After excluding those with baseline subclinical scores on all 3 symptom measures [PHQ-9, GAD-7, and PHQ-ADS; n=52], the 3-month follow-up rate was still low: 68/163, 41.7%.) While the baseline and postintervention questionnaire feasibility rates met or exceeded our 80% benchmark, the follow-up rate did not.
Feasibility of the Digital Format

Regarding hurdles to digital access, 72.1% (129/179) endorsed no significant hurdles, falling slightly below the 75% benchmark. Regarding challenges, 11.2% (20/179) reported difficulty using mindLAMP, 10.6% (19/179) difficulty finding a quiet place for clinician sessions, 3.4% (6/179) difficulty getting stable Wi-Fi, 2.2% (4/179) challenges remembering to do home practice on the app, and 0.6% (1/179) other self-reported challenges.
Acceptability

The average clinician, digital navigator, and mindLAMP user experience satisfaction ratings were 4.81 (SD 0.52), 4.61 (SD 0.62), and 4.18 (SD 0.79), respectively (benchmark: at least 4). The average midpoint WAI-SR and DWAI scores were 50.15 (SD 7.86) and 32.10 (SD 6.74), respectively, exceeding the benchmarks of 42 (WAI-SR) and 30 (DWAI).
Potential Efficacy

For those who entered the clinic with depression severity in the clinical range, the average baseline PHQ-9 score was in the moderate to moderately severe range (mean 13.39, SD 4.53) and fell to the subclinical range (mean 7.79, SD 4.61) by the end of the intervention, a statistically significant mean reduction of 5.61 (95% CI 4.72-6.49; t126=12.50; P<.001), with a large effect size (Cohen d=1.11). Gains were maintained at the 3-month follow-up for those who completed the follow-up questionnaire and had an end-of-intervention PHQ-9 score (n=55, 43%), with scores still in the subclinical range (mean 7.42, SD 4.60) and not significantly different from scores at the end of the intervention (t54=1.38; P=.17).
For those who entered the clinic with anxiety severity in the clinical range, the average baseline GAD-7 score was in the moderate range (mean 12.93, SD 3.67) and fell to the subclinical range (mean 7.35, SD 4.19) by the end of the intervention, a statistically significant mean reduction of 5.58 (95% CI 4.73-6.43; t113=13; P<.001), with a large effect size (Cohen d=1.22). Gains were maintained at 3-month follow-up for those who completed the follow-up questionnaire and had a postintervention GAD-7 score (n=48, 42%), with scores still in the subclinical range (mean 6.88, SD 4.65) and not significantly different from scores at the end of the intervention (t47=0.62; P=.54).
For those who entered the clinic with comorbid depressive and anxiety symptoms in the clinical range, the average baseline PHQ-ADS score was in the moderate range (mean 25.51, SD 7.05) and fell to the subclinical range (mean 15.01, SD 8.30) by the end of the intervention, a statistically significant mean reduction of 10.50 (95% CI 8.95-12; t121=13.40; P<.001), with a large effect size (Cohen d=1.21). Gains were maintained at 3-month follow-up for those who completed the follow-up questionnaire and had a postintervention PHQ-ADS score (n=52, 43%), with scores still in the subclinical range at 3-month follow-up (mean 14.56, SD 8.77) and not significantly different from scores at the end of the intervention (t51=1.09; P=.28).
From baseline to the end of the intervention, 68.5% (87/127), 76.3% (87/114), and 70.5% (86/122) of those who completed the intervention experienced at least a 25% symptom decrease on the PHQ-9, GAD-7, and PHQ-ADS, respectively. Rates of clinically significant improvement (ie, a reduction equal to or greater than the MCID) were 63.8% (81/127), 66.7% (76/114), and 73% (89/122) on the PHQ-9, GAD-7, and PHQ-ADS, respectively. Remission rates were 52.8% (67/127), 59.6% (68/114), and 55.7% (68/122) on the PHQ-9, GAD-7, and PHQ-ADS, respectively. These per protocol rates, along with the intent-to-treat rates, are shown in Table 2, and the rates calculated separately by baseline severity level are shown in Table 3.
Table 2. Rates of overall clinically significant improvement and remission (N=215).

aT1: baseline time point.
bT2: end-of-intervention time point.
cClinically significant improvement, also known as clinically meaningful change.
dRemission: rates of remission at T2.
eRemission or mild at T2: proportion of participants whose T2 symptoms were mild or in remission if baseline symptoms were moderate or severe, or in remission if baseline symptoms were mild.
fDep: depressive symptoms, measured by the 9-item Patient Health Questionnaire scale.
gPP: per protocol sample (ie, participants with a baseline clinical-range score who completed the intervention and had a T2 score). Percentages represent the proportion of PP participants out of all completers with a baseline clinical-range score, regardless of T2 questionnaire completion.
hAnx: anxiety symptoms, measured by the 7-item Generalized Anxiety Disorder scale.
iComorb: comorbid symptoms of depression and anxiety, measured by the Patient Health Questionnaire Anxiety and Depression Scale (PHQ-ADS).
jITT: intent-to-treat.
kITT: intent-to-treat sample, which includes those with missing T2 scores either due to dropout or T2 questionnaire noncompletion.
lITT MI: missing T2 scores were imputed using the mean imputation method (ie, the PP T2 mean of the severity subgroup to which the participant belongs).
mBOCF: baseline observation carried forward.
nITT BOCF: missing T2 scores were imputed using the baseline observation carried forward method, a conservative worst-case scenario approach that assumes participants with missing T2 scores would have had no improvement.
Table 3. Potential efficacy rates (N=215).

Baseline severity | Values, n (%) | T1a, mean (SD) | T2b, mean (SD) | Clinically significant changec (%) | Remissiond (%) | Remission or mild at T2e (%)

Depressive symptoms (PHQ-9f)

aT1: baseline time point.
bT2: posttreatment time point.
cClinically significant improvement, also known as clinically meaningful change.
dRemission: rates of remission at T2.
eRemission or mild at T2: proportion of participants whose T2 symptoms were either mild or better (if moderate or severe at baseline) or in remission (if mild at baseline).
fPHQ-9: 9-item Patient Health Questionnaire.
gPer-protocol sample (ie, participants who completed the intervention and had a T2 score).
hNot applicable.
iITT: intent-to-treat.
jIn