Abstract
Background: Memory and learning deficits are among the most impactful and longest-lasting symptoms experienced by people with chronic traumatic brain injury (TBI). Despite the persistence of post-TBI memory deficits and their implications for community reintegration, memory rehabilitation is restricted to short-term care within structured therapy sessions. Technology shows promise to extend memory rehabilitation into daily life and to increase the number and contextual diversity of learning opportunities. Ecological momentary assessment and intervention frameworks leverage mobile phone technology to assess and support individuals’ behaviors across contexts and have shown benefits in other chronic conditions. However, few studies have used regular outreach via text messaging for adults with chronic TBI, and none have done so to assess and support memory.
Objective: This study aimed to develop and test the usability of memory ecological momentary intervention (MEMI), a text message–based assessment and intervention tool for memory in daily life. MEMI is designed to introduce new information, cue retrieval of the information, and assess learning across time and contexts. We tested MEMI via an iterative, user-centered design process to ready it for a future trial.
Methods: We developed MEMI by leveraging automated text messages for prompts using a REDCap (Research Electronic Data Capture)/Twilio interface linking to the Gorilla web-based behavioral experimental platform. We recruited 14 adults with chronic, moderate-severe TBI from the Vanderbilt Brain Injury Patient Registry to participate in 3 rounds of usability testing: one round of ThinkAloud sessions using the platform and providing real-time feedback to an experimenter (n=4) and 2 rounds of real-world usability testing in which participants used MEMI in their daily lives for a week and provided feedback (n=5/round). We analyzed engagement and quantitative and qualitative user feedback to assess MEMI’s usability and acceptability.
Results: Participants were highly engaged with MEMI, completing an average of 11.8 out of 12 (98%) possible sessions. They rated MEMI as highly usable, with scores on the System Usability Scale across all rounds equivalent to an A+ on a standardized scale. In semistructured interviews, they stated that MEMI was simple and easy to use, that daily retrieval sessions were not burdensome, and that they perceived MEMI as helpful for memory. We identified a few small issues (eg, instruction wording) and made improvements between usability testing rounds.
Conclusions: Testing MEMI with adults with chronic TBI revealed that this technology is highly usable and favorably rated for this population. We incorporated feedback regarding users’ preferences and plan to test the efficacy of this tool in a future clinical trial.
doi:10.2196/59630
Keywords
Introduction
Overview
Every 21 seconds, one person in the United States sustains a traumatic brain injury (TBI) [
]. Deaths caused by TBI have decreased over the last 20 years, but there has been no corresponding reduction in the rate of TBI-related disability [ ]. TBI is increasingly recognized as a chronic condition, rather than an injury with finite recovery [ ]. Symptoms, including memory and learning deficits, may persist for the rest of a patient’s life [ ]. Yet, rehabilitation services are constrained to the acute and subacute phases of injury [ - ]. This leaves patients to cope with chronic disability over a lifetime without skilled support [ , ].

While the barriers to community reintegration after TBI are multifactorial [
], memory and the ability to (re)learn words and concepts are critical for rehabilitation potential and participation at school or work [ - ]. Memory disorders represent a key target for chronic care as they are among the most reported, costly, and lasting deficits after injury [ - ]. Memory deficits are present in more than half of patients with moderate-severe TBI [ ] and can be detected in patients 10 years postinjury [ , ]. Yet, there has been limited progress in developing memory therapies for TBI in recent decades [ ].

Learning is a complex and multifaceted phenomenon. Memory is not a process that occurs during a single event, but rather is an iterative process that repeats across multiple phases (encoding, consolidation, and retrieval) [
]. The act of memory retrieval is itself a form of learning that changes the nature and strength of that memory and its relations to other memories in the neocortex [ , ]. The memory literature distinguishes between short-term memory (in which a person remembers something when tested shortly after it has been encoded) and long-term memory (in which a person consolidates, or strengthens, a memory so that it is accessible and usable in flexible contexts long after it has been encoded) [ ]. Existing clinical and research approaches that assess and treat memory in a single session are inadequate to capture and support the iterative process of encoding, consolidating, and retrieving a memory to build it into a long-term representation [ , ].

Multiple memory systems support dynamic elements of learning [
]. Considerable effort has been directed toward understanding the role of the declarative memory system in TBI outcomes. The declarative memory system depends critically on the hippocampus and medial temporal lobes [ ], which are highly vulnerable to mechanisms of TBI [ - ]. This memory system binds arbitrary elements of an experience (eg, a word’s form to its meaning or a person’s name to their face) into lasting representations and facilitates the retrieval, recombination, and use of those representations in novel contexts [ ]. Given this vulnerability, any memory assessment or treatment in TBI must consider the declarative memory system’s role in binding multiple elements of an experience to form a memory. Canonical neuropsychological assessments of memory for verbal material such as word lists, like the Rey Auditory Verbal Learning Test [ ] and the California Verbal Learning Test [ ], are less sensitive to the range and severity of deficits in declarative, or relational, memory that are common after TBI [ , ] and can be expanded upon by learning tasks that require individuals to form arbitrary relations between elements of an experience (eg, verbal and visual) and enact those relations over time and context.

Memory disorders should be assessed and treated in context [
, ]. Deficits that manifest in daily life may not be evident in a short session in a controlled context, such as a medical visit, therapy session, or neuropsychological evaluation [ , ]. For example, recent work has shown that adults with chronic TBI do not strengthen (consolidate) their memories over time at the same rate as noninjured peers [ ]. This deficit, which results in a growing learning gap, can only be identified by multiple memory assessments conducted throughout the learning process. Evidence in the cognitive neuroscience literature also indicates that people with and without brain damage benefit when the context of learning is diverse across time and space [ - ]. In memory rehabilitation, each retrieval of information is both an opportunity for assessment and (re)learning [ ]. Contextual diversity includes retrieving information at dispersed times and in different locations (ie, spaced retrieval [ , ]), mimicking the real-world nature of learning [ , , ]. Yet, rehabilitation sessions often occur in a constrained context (ie, the same room at the same time of day).

There is an opportunity to increase the contextual diversity of memory rehabilitation using technology.
]. Technology has consistently shown benefits as an intervention tool in chronic conditions [ - ], and text messaging has been used to support daily behaviors, such as medication adherence, in conditions like diabetes [ , ]. Yet, few studies [ - ] have used regular outreach via text messaging for adults with chronic TBI, and none have used an ecological momentary intervention framework to assess and support memory [ ].

Objective
We developed memory ecological momentary intervention (MEMI) to address the dual goals of (1) assessing memory over time after TBI and (2) increasing the contextual diversity of retrieval opportunities to support memory for better long-term recall. We conducted 3 rounds of iterative usability testing with adults with chronic, moderate-severe TBI to identify and address any issues with functionality or acceptability before our evaluation of MEMI’s efficacy for assessing and supporting memory in a future trial.
Methods
Intervention Development
We developed a prototype for MEMI using simple text messaging with a mobile web interface. Text messages were delivered using Twilio integrated with REDCap (Research Electronic Data Capture) [
]. Participants clicked on a link in the text message to complete memory tasks on the Gorilla behavioral experiment platform [ ]. We conducted internal testing with 8 members of our research team before moving MEMI to iterative usability testing as described below.
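For readers unfamiliar with this pipeline, the sketch below shows the core delivery step in Python: sending one automated text message containing a web task link through Twilio's REST client. This is an illustration only; MEMI used REDCap's built-in Twilio integration rather than custom scripts, and the credentials, phone numbers, and URL shown are placeholders.

```python
# Illustrative sketch only: MEMI delivered prompts through REDCap's built-in
# Twilio integration, not custom code. Credentials, phone numbers, and the
# task URL below are placeholders.
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder credentials
AUTH_TOKEN = "your_auth_token"
FROM_NUMBER = "+16155550100"  # placeholder study number

def send_memi_prompt(to_number: str, task_url: str, minutes: int = 5) -> str:
    """Send one MEMI prompt containing a link to a web-based memory task."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    body = (
        "It is time to complete your memory task. "
        f"This should take about {minutes} minutes: {task_url}"
    )
    message = client.messages.create(to=to_number, from_=FROM_NUMBER, body=body)
    return message.sid  # Twilio's identifier for the queued message

# Example (placeholder participant number and Gorilla task link):
# send_memi_prompt("+16155550123", "https://app.gorilla.sc/...")
```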
Functionality

Overview
Participants received text message prompts to complete MEMI sessions throughout the week. The schedule was composed of 4 primary activities: training (1 time point; day 1), an immediate assessment (1 time point; day 1), AM/PM retrievals (10 time points; days 2‐6), and a delayed assessment (1 time point; day 7). This resulted in 12 possible sessions for each participant (1 session for training and immediate assessment, 10 retrieval sessions, and 1 session for delayed assessment).
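The sketch below lays out this 12-session structure for a single participant, given the morning and evening prompt times they choose (described in the next paragraph); the session labels and the assumption that the day-7 delayed assessment is prompted at the morning time are illustrative rather than taken from the study's REDCap configuration.

```python
# Illustrative layout of one participant's MEMI week; the actual schedule was
# configured through REDCap/Twilio. The session labels, and the assumption that
# the day-7 delayed assessment uses the morning time, are illustrative.
from datetime import date, datetime, time, timedelta

def build_memi_schedule(start: date, am: time, pm: time) -> list:
    """Return the 12 MEMI sessions: training + immediate assessment (day 1),
    AM and PM retrievals (days 2-6), and a delayed assessment (day 7)."""
    sessions = [{"day": 1, "type": "training_and_immediate_assessment",
                 "send_at": datetime.combine(start, am)}]
    for offset in range(1, 6):  # days 2-6: two retrieval prompts per day
        day = start + timedelta(days=offset)
        sessions.append({"day": offset + 1, "type": "retrieval",
                         "send_at": datetime.combine(day, am)})
        sessions.append({"day": offset + 1, "type": "retrieval",
                         "send_at": datetime.combine(day, pm)})
    sessions.append({"day": 7, "type": "delayed_assessment",
                     "send_at": datetime.combine(start + timedelta(days=6), am)})
    return sessions  # 1 + 10 + 1 = 12 sessions

# Example: a participant who chose 7 AM and 7 PM prompts
week = build_memi_schedule(date(2024, 1, 1), time(7, 0), time(19, 0))
assert len(week) == 12
```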
contains a visualization of the MEMI schedule. Participants selected the timing of their morning and evening text message prompts, and the experimenter entered these times into REDCap. Twilio then delivered the text message prompts at the chosen times. Each text message contained a link to a memory task on the Gorilla platform [ ] and an approximate task duration (eg, “It is time to complete your memory task. This should take about 5 minutes”; contains a sample message). We describe the memory tasks below.

Context Checks
Before every MEMI session (training, retrieval, or assessment), we asked participants to indicate their spatial context (
). This will allow future assessment of the number of spatial contexts experienced by participants throughout the week, as well as whether training and assessment sessions were completed in the same spatial context.

Training
At the beginning of the week, participants completed a 15-minute training in which they saw each of 16 novel pieces of information (items) 4 times. The items were presented via visual stimuli to ensure that all participants received the same exposure (eg, headphones were not required). The number of exposures was initially limited to keep the length of training and the burden of using MEMI low.
contains example images from MEMI training.

Immediate and Delayed Assessments
Immediately after the training, participants completed an assessment with several recall/recognition tasks designed to assess multiple levels of learning (immediate assessment). First, participants were asked to free type all the items that they learned (ie, free recall, the most robust indicator of learning [
]). Then, they responded to recall cues (eg, seeing an image and having to provide the name for that image). These tasks were designed to tap into memory for both individual elements of an experience (eg, a word form) and the relations between elements (eg, matching a word’s form to its meaning). The entire assessment took approximately 5‐8 minutes to complete. Participants completed the same assessment again on day 7 to assess their long-term retention of the trained information (delayed assessment). For both the immediate and delayed assessments, participants did not receive feedback on their responses. The goal of these sessions was to assess the memories that participants form immediately after training (encoding, immediate assessment) and how those memories changed over the course of the memory process (delayed assessment).

Retrieval Sessions
On days 2‐6, participants received text messages each morning and evening cueing them to complete short (5‐8 min) retrieval sessions on a subset of the trained items. Each retrieval session included 8 of the 16 trained items, and the items were cycled such that each item had the same number of retrieval opportunities over the course of the week. Participants completed a mix of the same free- and cued-recall tasks from the assessments. However, the retrieval sessions included an added component: after participants responded to each cued recall item, they saw the correct answer to ensure they gained exposure to the trained information. Notably, MEMI did not indicate if the participants’ responses were correct or incorrect to avoid corrective feedback that may discourage them from using the system.
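The exact cycling rule is not detailed above, but one simple scheme consistent with this description is to alternate fixed halves of the 16-item set across the 10 retrieval sessions, giving every item 5 retrieval opportunities; the sketch below illustrates that assumption.

```python
from collections import Counter

# One cycling scheme consistent with the description above: alternate fixed
# halves of the 16 trained items across the 10 retrieval sessions so that each
# item is retrieved the same number of times (5). The study's exact cycling
# rule is not specified here, so this scheme is an assumption.
def cycle_items(items, n_sessions=10, per_session=8):
    halves = [items[:per_session], items[per_session:2 * per_session]]
    return [halves[s % 2] for s in range(n_sessions)]

sessions = cycle_items([f"item_{i:02d}" for i in range(16)])
counts = Counter(item for session in sessions for item in session)
assert set(counts.values()) == {5}  # every item gets 5 retrieval opportunities
```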
contains sample items from MEMI retrieval sessions.

Content
We designed MEMI to flexibly deliver any target items, with the goal that this system will eventually be customizable to the information that people want to learn. To ensure that MEMI works across information types, we tested 2 types of items in usability testing. Half of the participants received novel word prompts like the ones in
and . The other half learned about names, occupations, and favorite colors associated with images of faces.

Ethical Considerations
This work was approved by the Vanderbilt Human Research Program (Institutional Review Board #221667). Study procedures complied with the Helsinki Declaration. All participants gave written informed consent to participate in this study.
Usability Testing
Sample and Recruitment
Participants were recruited from the Vanderbilt Brain Injury Patient Registry [
]. All participants in the Registry are in the chronic phase (>6 months postinjury) of moderate-severe TBI. We chose to focus on people with moderate-severe TBI as they are more likely to experience lasting memory deficits than people with mild TBI [ , ]. We chose to focus on the chronic phase of injury because we wanted to understand how people with chronic TBI use MEMI, given our eventual goal to extend memory treatment to daily life and the chronic phase of injury. TBI severity was determined using the Mayo Classification System [ ], and participants met at least one of these criteria: (1) Glasgow Coma Scale <13 within the first 24 hours of acute care admission, (2) positive neuroimaging finding (acute CT findings or lesions visible on chronic MRI), (3) loss of consciousness >30 minutes, or (4) posttraumatic amnesia >24 hours. Participants were 18 to 55 years of age; age was restricted to exclude developmental effects of TBI and conservatively limit the effects of age-related cognitive decline. They had no other neurological or cognitive disabilities aside from the qualifying brain injury. All participants had to own a mobile phone to participate in the intervention.

At the start of usability testing, we contacted a group of Registry participants that was selected to be diverse with regard to sex, age, and education to garner a wide range of perspectives on MEMI. We also intentionally recruited participants who may benefit from the integration of accessibility features within MEMI (eg, participants with hemiparesis who may use the text-to-speech tool). Each participant received an email and had the opportunity to respond to the email to indicate interest or schedule a phone call to learn more about the study. When a participant expressed interest, they were assigned to the round of usability testing that was ongoing at that time.
Procedures
Overview
We conducted usability testing across multiple, iterative rounds. We used 2 formats to address separate but complementary usability evaluation goals [
, ]. The first testing round was in a ThinkAloud format [ - ] so that experimenters could observe participants’ use of MEMI and quickly address any usability challenges in an intensive and collaborative way. The subsequent testing rounds were in a real-world usability format so that participants could report on how MEMI worked for them across settings and in daily life.

ThinkAloud Usability Testing
In the first round of usability testing (n=4), participants used MEMI and verbally shared their thoughts on the system with an experimenter in real time. The experimenter could observe moments when participants were unsure or struggled to complete a task. In addition, participants were asked to provide feedback out loud (a method called ThinkAloud [
- ]) as they underwent the entire MEMI process (from receiving a text message to completing the memory tasks via the web) for a training, an assessment, and a single retrieval session. After completing the ThinkAloud session, participants responded to standardized feedback measures and completed a semistructured interview to assess usability and acceptability.

Real-World Usability Testing
Our next 2 rounds of usability testing (n=5/round) were conducted using a real-world format, so that participants used MEMI in their daily lives for a week and provided feedback on the experience. Participants met with an experimenter at the start of the week to provide consent and go through a handout describing key components of MEMI (
). Next, they used MEMI each day for a week (see ). Finally, they met with the experimenter again the day after they finished using MEMI to complete standardized feedback measures and a semistructured interview to assess usability and acceptability.

Participant Characteristics Available From the Brain Injury Patient Registry
Demographic and Injury Characteristics
Demographic and injury characteristics were collected from available medical records and a semistructured participant interview when participants joined the Brain Injury Patient Registry. These included participant age, sex, years of education, and time since injury, as well as acute injury characteristics used to categorize severity.
Memory
Participants in the Brain Injury Patient Registry also completed 2 neuropsychological assessments of episodic memory as part of their participation in the Registry: the Rey Auditory Verbal Learning Test [
] and the NIH Toolbox Picture Sequence Memory Test [ , ]. contains more information about the memory assessments.

Measures Collected During Usability Testing
Digital Literacy
We assessed digital literacy using an adapted version of the Digital Health Literacy Scale, a validated 3-item measure of digital health care literacy [
]. Scores range from 0 to 12, with higher scores indicating better digital literacy. The original scale includes questions about using applications, setting up a video chat, and solving basic technical problems using a cell phone, computer, or another electronic device. Because our study focused on a mobile phone–based system, we modified the questions to focus on mobile phone use ( ).

Engagement
We quantified the number and proportion of available sessions (out of a possible 12: 1 training session, 10 retrieval sessions, and 1 final memory test) completed by each participant in the real-world usability rounds. We did not prompt participants to complete sessions beyond the automated text messages they received. All participants were compensated the same amount, regardless of the number of sessions that they completed, so there was no financial incentive for higher engagement.
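A minimal sketch of this engagement metric, the count and percentage of the 12 available sessions completed per participant, is shown below; the participant IDs and completion counts are hypothetical.

```python
# Minimal sketch of the engagement metric: sessions completed out of the 12
# available per participant, reported as a count and a percentage. The
# participant IDs and completion counts below are hypothetical.
def engagement(sessions_done: int, sessions_available: int = 12):
    return sessions_done, round(100 * sessions_done / sessions_available, 1)

completed = {"P01": 12, "P02": 11, "P03": 12}  # hypothetical completion counts
for pid, done in completed.items():
    n, pct = engagement(done)
    print(f"{pid}: {n}/12 sessions ({pct}%)")
```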
User Feedback
The same measures were used across all rounds of usability testing (ThinkAloud and real-world) to allow for direct comparison of results. We assessed usability using 10 items adapted from the System Usability Scale, a widely used standardized questionnaire for perceived usability [
, ]. Response options ranged from 1 (strongly disagree) to 5 (strongly agree) for each item. System Usability Scores range from 0 to 100, with higher scores indicating higher usability [ ].Following practical guidance [
], we modified some System Usability Scale items to make them more accessible for participants with TBI ( ). For example, we changed “I found the system unnecessarily complex” to “I found the system too complicated.” We also added clarifying examples in some cases. For example, we changed “I found the various functions in this system were well integrated” to “I found that the parts of this system worked well together. For example, it was clear how to get to the memory questions from the text messages.”

We also asked 2 items to assess acceptability, which we reported separately: “using the system could be helpful for my memory” and “using the system was convenient.” Response options ranged from 1 (strongly disagree) to 5 (strongly agree).
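For reference, the standard System Usability Scale scoring that converts ten 1-5 responses into a 0-100 score is sketched below. It assumes the conventional alternating item polarity of the original scale (odd-numbered items positively worded, even-numbered items negatively worded); the responses shown are hypothetical.

```python
# Standard System Usability Scale scoring (0-100): odd-numbered items are
# positively worded and contribute (response - 1); even-numbered items are
# negatively worded and contribute (5 - response); the sum is multiplied by
# 2.5. This sketch assumes the modified items kept the scale's conventional
# alternating polarity; the responses below are hypothetical.
def sus_score(responses):
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i == 0 is item 1
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 2, 5, 1]))  # -> 97.5
```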
After survey completion, we conducted a short semistructured interview with each participant to solicit additional feedback on usability and acceptability. We asked all participants what they liked best about MEMI, suggestions for improvement, if the scheduled text messages worked well in their daily lives, and whether the MEMI schedule was burdensome. We also asked each participant tailored questions based on the feedback they provided on the System Usability Scale and the acceptability items (eg, why they found the system useful for their memory).
Analyses
We calculated descriptive statistics using R (version 4.2.1; The R Foundation). Interviews were audio-recorded, and key statements were transcribed. We took a pragmatic approach to analyze participant feedback quickly between rounds and change the intervention as needed. Between rounds, author ELM identified actionable areas for improvement from participants’ feedback. Authors ELM and LSM decided if feedback should be incorporated before moving to the next round. Usability testing concluded when participants no longer provided substantive feedback on MEMI, as 3 rounds of usability testing with 4 to 5 participants per round is considered sufficient to identify most usability problems [
, ].

Results
Participant Characteristics
Demographic and Injury Characteristics
Participants were on average 38.1 (SD 12.3, range 22-54) years old. Half of the sample was female. A total of 4 participants out of 14 (28.6%) had a high school education, 6 out of 14 (42.9%) had a college education, and 4 out of 14 (28.6%) had advanced degrees. Mean time since injury was 6.6 (SD 8.1, range 1‐23) years. Injury etiologies included motor vehicle accidents (8/14, 57.1%), falls (2/14, 14.2%), nonmotorized vehicle accidents (1/14, 7.1%), being hit by a car while walking (1/14, 7.1%), and one other etiology that is not named here to protect participant confidentiality. One participant had hemiparesis affecting the ability to type.
separates participant demographic data by usability round. contains demographic and injury information about participants with TBI.

Characteristics | Total (n=14) | ThinkAloud (n=4) | Real-world 1 (n=5) | Real-world 2 (n=5)
Age (in years), mean (SD) | 38.1 (12.3) | 40.5 (15.6) | 42.0 (9.7) | 32.4 (12.3)
Sex (female), n (%) | 7 (50%) | 2 (50%) | 2 (40%) | 3 (60%)
Education (in years), mean (SD) | 15.4 (2.7) | 15.0 (2.0) | 14.4 (3.6) | 16.8 (2.3)
Months since injury, mean (SD) | 79.6 (84.5) | 100.8 (96.8) | 101.0 (113.4) | 41.4 (21.0)
Digital literacy, mean (SD)a | 11.2 (1.1) | 12.0 (0.0) | 10.8 (1.3) | 11.2 (1.3)

aModified Digital Health Literacy Scale; scores range 0‐12, with higher scores indicating more digital literacy.
Memory
As a group, this sample exhibited deficits in episodic memory on neuropsychological testing. There was a range of memory abilities within the group.
contains memory scores from neuropsychological testing.

Digital Health Literacy
The sample’s self-reported digital literacy using mobile phones was high, with an average score of 11.2/12 (SD 1.1, range 9‐12) on the mobile phone–focused Digital Health Literacy Scale.
Usability Testing
Changing the target content presented between faces and words did not change user engagement or usability scores (
), so we report results from all participants as a group below.

Outcomes | Total (n=14) | ThinkAloud (n=4) | Real-world 1 (faces; n=5) | Real-world 2 (words; n=5)
Engagement, number of sessions completed (out of 12), mean (SD) | 11.8 (0.4) | N/Aa | 11.6 (0.5) | 12.0 (0.0)
Engagement, proportion of sessions completed (%), mean (SD) | 98.3 (0.0) | N/Aa | 96.7 (0.0) | 100.0 (0.0)
System Usability Scale score, mean (SD) | 91.4 (8.6) | 95.0 (5.4) | 87.5 (10.9) | 93.8 (8.3)

aN/A: not applicable.
Engagement
Participants in both rounds of real-world usability testing exhibited high engagement with MEMI. As a group, participants completed an average of 11.8 (SD 0.4) out of 12 available sessions. All participants completed the training, immediate assessment, and delayed assessment; the few missed sessions (3/120 total possible sessions) were midweek retrieval sessions.
contains engagement by round.

Usability and Acceptability
Quantitative Feedback
On the System Usability Scale, participants rated MEMI’s usability highly across all rounds. The average score across all rounds was 91.4 (SD 8.6), which is rated as an A+ score in the 96‐100 percentile range on a standardized scale [
]. Scores in each round ( ) qualified as A+ scores. All but 2 items were rated in the favorable range by all participants. On item 1 (“I would be open to using this system in the future”), 1 participant did not agree because they are trying to cut down on their mobile phone use. On item 7 (“I think that most people with traumatic brain injury would learn to use this system very quickly”), several participants stated that they were unsure because of the range of possible outcomes and ability levels after TBI. Otherwise, participants rated MEMI favorably across all components of usability: intuitive design (ie, understandability, clarity of instructions and tasks), subjective satisfaction (ie, enjoyment of use), and efficiency of use (ie, effectiveness in reaching desired goals).

Items added to assess acceptability indicated that participants thought MEMI was convenient to use (average score 4.7/5, SD 0.6) and helpful for memory (average score 4.8/5, SD 0.4).
Qualitative Feedback
In semistructured follow-up interviews, several themes emerged from participant feedback. Below, we report common themes along with representative quotes from participants; participant information is reported in brackets (gender, age range, and usability testing round).
Participants Found MEMI to Be Simple and Easy to Use
Consistent with scores on the System Usability Scale, participants noted that MEMI was easy to access and use. They noted that the consistency of the system allowed them to set expectations and complete the tasks.
I thought it was really easy to use. It just had a really nice flow.
[female, 26-30 y, real-world 2]
As far as ease of use, it’s pretty much self-explanatory. If I can do it, anybody can do it.
[female, 51-55 y, real-world 1]
And the more simple something is, the less chance of something going wrong. It’s simple, but it accomplishes what it needs to accomplish.
[female, 51-55 y, real-world 1]
Participants Drew a Clear Distinction Between the Ease of Use and the Difficulty of the Memory Tasks
Several participants reported that the system was easy to use, but the memory tasks were very difficult.
My answers weren’t confident, but I didn’t feel like it was a challenge to use it. Remembering all that was a challenge for the noggin!
[male, 41-45 y old, real-world 1]
I could use some tips on how to remember stuff, but the system was easy to use.
[male, 41-45 y old, real-world 1]
Participants Did Not Find MEMI Burdensome
No participants reported finding the MEMI schedule burdensome when asked directly. All participants said that the sessions were short, and several reported that they liked the predictability of the scheduled messages.
I think after the first day and the second day, I got more used to the flow of it. I was even able to say, oh, and step outside, like at a friend’s house. And I know how long it takes me, so I feel more confident that I can open and finish it.
[female, 26-30 y, real-world 2]
I really liked it because it was very simple, and even on the days I couldn’t do it immediately, I knew I had something to do at 7AM and 7PM. And it was so simple because like, you just clicked on the link and knocked it out.
[female, 19-25 y, real-world 2]
Participants Felt MEMI Was Useful for Memory
All participants agreed that MEMI was useful for their memory. They identified 4 distinct reasons for this usefulness: noticing improvements in the memory task, tracking patterns in memory, the benefits of scheduled memory tasks, and diverse, cued repetitions.
contains sample quotes from participants in each of these areas.

Theme | Sample quotes
Noticing improvement |
Tracking patterns in one’s own memory |
Consistent, scheduled memory tasks each day |
Diverse, cued repetitions |
Participant Suggestions
We asked each participant for suggestions about how to improve MEMI. We incorporated 2 pieces of feedback after the ThinkAloud usability round: a request for a clear image indicating how to turn the phone to portrait mode to complete the memory tasks and a request for specific instructions on every item reminding participants to guess if they are unsure. We received positive feedback on each of these changes in subsequent rounds.
We also received feedback from individual participants that we did not incorporate because it was mentioned by only a single participant and was not feasible in the current MEMI framework. One participant stated a preference to avoid technology: “I just feel like I am trying to, I don’t like phones. Like…I’m trying to use it less and less. Phones are amazing, they’re so powerful. But we need to not waste our lives there” [male, 21‐25 y old, real-world 2]. Another would have preferred to use MEMI on a desktop computer: “Probably most people would rather the cell phone. I even see people filling out applications on their cell phones. But for me, get to a computer” [male, 51‐55 y old, ThinkAloud]. Other participants requested changes to MEMI’s cueing structure based on personal preference, for example, a desire to slow down prompts, the opportunity to compare the correct answer to what they had written, or the option to use the words in a sentence.
Accessibility Modifications and MEMI
Two participants used MEMI in conjunction with other accessibility features that assist them with motor and cognitive deficits. One participant who found typing time-consuming due to hemiparesis used a text-to-speech feature to input their responses. This participant noted that they did have to correct occasional misspellings but otherwise found that the feature integrated well with MEMI because the tasks were in their phone’s web browser, although the tasks did take longer to complete. Another participant reported that they left MEMI text messages marked as unread until they completed the tasks, a strategy they use to compensate for memory deficits in their daily correspondence.
Discussion
Principal Findings
Technology-based interventions provide an opportunity to extend the delivery of memory assessment and support for people with TBI. Text message–based ecological momentary intervention, specifically, may present a minimally burdensome way to extend memory rehabilitation to daily life. We developed MEMI, the first such ecological momentary intervention for memory: a text message–based system designed to assess and support memory using short sessions across time and context. We tested MEMI’s usability and acceptability among participants with TBI. Participants in all rounds of testing had favorable opinions of the system and responded frequently to the twice-daily memory task prompts.
Overall, participants were satisfied with MEMI and provided favorable ratings for its ease of use, intuitive design, enjoyment of use, and efficiency of use. The system provided consistent, structured memory support and allowed participants to monitor patterns in their own memory. Participants did not find twice-daily memory sessions to be burdensome and appreciated the opportunity to choose session times that were tailored to their schedules. Participants with accessibility challenges (eg, hemiparesis) were able to use MEMI with other accessibility features (eg, text-to-speech), which was possible because MEMI’s tasks take place on a phone’s integrated web browser.
We collected both quantitative and qualitative user feedback and system-collected engagement data to fully understand users’ experiences, improve functionality, and solve technical issues. Through these multiple data sources, we made improvements to MEMI and received unanticipated but actionable feedback on future research directions (eg, ways to change MEMI’s prompt levels that might enhance both memory and the user experience). This work highlights the need for participatory, user-centered design in all aspects of TBI rehabilitation research.
Usability testing is a necessary first step in user-centered intervention development, and this work suggests that MEMI may be usable for a wide range of people with chronic moderate-severe TBI. This study was conducted in a small sample of individuals with TBI, but we recruited the sample to be diverse with regard to sex, age, and education level. We also intentionally recruited participants who may benefit from integrating accessibility features with MEMI. There was also a range of memory abilities within this sample, and the sample exhibited deficits in episodic memory as a group on neuropsychological testing. Subsequent studies will include larger samples to understand MEMI’s acceptability and efficacy across the full range of heterogeneous post-TBI outcomes.
MEMI is distinct from existing models of memory rehabilitation (eg, weekly therapy sessions) because it leverages technology to deliver assessments and prompts repeatedly in daily life, consistent with a context-sensitive rehabilitation approach [
]. It expands on existing technology-based memory interventions, which have largely been app-based and focused on using assistive technology to support prospective behaviors (eg, using a reminders app to take medications) [ , - ]. We are aware of one study [ ] in which participants received text messages containing target information (therapy goals) but were not asked to retrieve the information and respond themselves until tested at weekly intervals. By contrast, MEMI prompts retrieval to monitor and support an individual’s learning of specific information over time and across context. With its delivery of ecological momentary assessment and intervention via text messaging, MEMI may provide an approach to memory care that is more flexible and context-sensitive than traditional therapy but also more structured and informative about identifying specific learning patterns at the level of the individual than existing technology-delivered tools.

Next Steps
MEMI was designed to extend the assessment and treatment of memory disorders beyond existing therapy contexts. This tool has multiple potential uses, including improving the assessment of the memory and learning processes in daily life for both clinical and research purposes. Using this tool in research may expand our understanding of how TBI affects memory over time and context, which can lead to new approaches to memory rehabilitation. Clinicians may program MEMI for patients to use between sessions to support memory for target items and to understand patterns in patients’ learning over time (eg, if disruptions occur in encoding, consolidation, or retrieval). Understanding learning patterns at the level of the individual will allow for a personalized medicine approach to memory treatment. After MEMI has undergone further testing to ensure its feasibility, acceptability, and efficacy, the system will be customizable to the information that people want to learn to increase the ecological validity of memory care.
Now that we have determined the usability of the tool, the next steps will be evaluating longer-term engagement, examining the feasibility of different prompt schedules and content, and then establishing efficacy in accordance with the ORBIT model of behavioral intervention development [
, ].

Limitations
Although we did not specifically incentivize engagement with the study’s compensation structure, participants were compensated for providing feedback on their experience. Engagement with MEMI may be lower outside of a compensated research study or over a longer period of time. Furthermore, participants may have been reluctant to provide critical feedback due to compensation or their participation in the Brain Injury Patient Registry, although all participants were encouraged to provide their honest opinions. Some of our feedback items and interview questions lacked established validity and reliability data, but composing our own items allowed us to target our research questions with face validity.
Conclusions
Usability testing is a necessary first step in a participatory design process to ensure that effects identified in clinical trials are due to the intervention as intended and not impacted by difficulty with understanding or using the intervention. Because this is the first examination of an ecological momentary intervention for memory, it was critical to include people with a range of abilities in usability testing. Iterative usability testing of MEMI using multiple data sources revealed that MEMI is highly engaging and usable for people with TBI. Our findings emphasize the need for flexible, daily memory care in TBI and the importance of including people with TBI in a user-centered design process and using multiple sources of data to understand participant perspectives.
Acknowledgments
We sincerely thank the participants who gave their time and insights to this project. ELM’s time was supported in part by the Vanderbilt Faculty Research Scholars Program (grant KL2TR002245) from the National Center for Advancing Translational Sciences. This work was funded by a grant (T32 HS026122) from the Agency for Healthcare Research and Quality and another grant (R01 NS110661) to MCD. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.
Conflicts of Interest
None declared.
This includes all portions of the Multimedia Appendix referred to in the manuscript: (1) handout describing the memory ecological momentary intervention for participants, (2) participant and injury characteristics, (3) memory characteristics of the study sample, (4) modified System Usability Scale items, and (5) modified Digital Health Literacy Scale items.
PDF File, 348 KB

References
- Report to Congress on traumatic brain injury in the United States: epidemiology and rehabilitation. Centers for Disease Control and Prevention; 2015.
- Roozenbeek B, Maas AIR, Menon DK. Changing patterns in the epidemiology of traumatic brain injury. Nat Rev Neurol. Apr 2013;9(4):231-236. [CrossRef] [Medline]
- Dams-O’Connor K, Juengst SB, Bogner J, et al. Traumatic brain injury as a chronic disease: insights from the United States traumatic brain injury model systems research program. Lancet Neurol. Jun 2023;22(6):517-528. [CrossRef]
- Morrow EL, Patel NN, Duff MC. Disability and the COVID-19 pandemic: a survey of individuals with traumatic brain injury. Arch Phys Med Rehabil. Jun 2021;102(6):1075-1083. [CrossRef] [Medline]
- Dahdah MN, Barnes S, Buros A, et al. Variations in inpatient rehabilitation functional outcomes across centers in the traumatic brain injury model systems study and the influence of demographics and injury severity on patient outcomes. Arch Phys Med Rehabil. Nov 2016;97(11):1821-1831. [CrossRef] [Medline]
- Ylvisaker M, Adelson PD, Braga LW, et al. Rehabilitation and ongoing support after pediatric TBI: twenty years of progress. J Head Trauma Rehabil. 2005;20(1):95-109. [CrossRef] [Medline]
- Hoepner JK, Keegan LC. “I Avoid Interactions With Medical Professionals as Much as Possible Now”: health care experiences of individuals with traumatic brain injuries. Am J Speech Lang Pathol. Mar 23, 2023;32(2S):848-866. [CrossRef]
- Ownsworth T, McKenna K. Investigation of factors related to employment outcome following traumatic brain injury: a critical review and conceptual model. Disabil Rehabil. Jul 8, 2004;26(13):765-783. [CrossRef] [Medline]
- Morrow EL, Duff MC. Sleep supports memory and learning: implications for clinical practice in speech-language pathology. Am J Speech Lang Pathol. May 8, 2020;29(2):577-585. [CrossRef] [Medline]
- Vakil E. The effect of moderate to severe traumatic brain injury (TBI) on different aspects of memory: a selective review. J Clin Exp Neuropsychol. Nov 2005;27(8):977-1021. [CrossRef] [Medline]
- Murray LL, Ramage AE, Hopper A. Memory impairments in adults with neurogenic communication disorders. Semin Speech Lang. 2001;22(2):127-136. [CrossRef] [Medline]
- Morrow EL, Duff MC. Word learning as a window to memory and rehabilitation outcomes in traumatic brain injury. Am J Speech Lang Pathol. Mar 23, 2023;32(2S):956-965. [CrossRef] [Medline]
- Tsai YC, Liu CJ, Huang HC, et al. A meta-analysis of dynamic prevalence of cognitive deficits in the acute, subacute, and chronic phases after traumatic brain injury. J Neurosci Nurs. Apr 1, 2021;53(2):63-68. [CrossRef] [Medline]
- Zec RF, Zellers D, Belman J, et al. Long-term consequences of severe closed head injury on episodic memory. J Clin Exp Neuropsychol. Oct 2001;23(5):671-691. [CrossRef] [Medline]
- Velikonja D, Tate R, Ponsford J, et al. INCOG recommendations for management of cognition following traumatic brain injury, part V: memory. J Head Trauma Rehabil. 2014;29(4):369-386. [CrossRef] [Medline]
- Cohen NJ, Banich MT. Memory. In: Banich MT, editor. Cognitive Neuroscience and Neuropsychology. 2nd ed. Houghton-Mifflin; 2003:323-364.
- Oppenheim GM, Dell GS, Schwartz MF. The dark side of incremental learning: a model of cumulative semantic interference during lexical access in speech production. Cognition. Feb 2010;114(2):227-252. [CrossRef] [Medline]
- Oppenheim GM, Dell GS, Schwartz MF. Cumulative semantic interference as learning. Brain Lang. Oct 2007;103(1-2):175-176. [CrossRef]
- Morrow EL, Mayberry LS, Duff MC. The growing gap: a study of sleep, encoding, and consolidation of new words in chronic traumatic brain injury. Neuropsychologia. Jun 6, 2023;184:108518. [CrossRef] [Medline]
- Eichenbaum H, Cohen NJ. From Conditioning to Conscious Recollection: Memory Systems of the Brain. Oxford University Press; 2001.
- Bigler ED, Johnson SC, Anderson CV, et al. Traumatic brain injury and memory: the role of hippocampal atrophy. Neuropsychology. 1996;10(3):333-342. [CrossRef]
- Palacios EM, Sala-Llonch R, Junque C, et al. Long-term declarative memory deficits in diffuse TBI: correlations with cortical thickness, white matter integrity and hippocampal volume. Cortex. Mar 2013;49(3):646-657. [CrossRef] [Medline]
- Rabinowitz AR, Levin HS. Cognitive sequelae of traumatic brain injury. Psychiatr Clin North Am. Mar 2014;37(1):1-11. [CrossRef] [Medline]
- Schmidt M. Rey Auditory Verbal Learning Test: A Handbook. Western Psychological Services; 1996.
- Delis DC, Kramer JH, Kaplan E, Ober BA. California verbal learning test--second edition (CVLT –II). APA PsycTests. 2000. [CrossRef]
- Morrow EL, Dulas MR, Cohen NJ, Duff MC. Relational memory at short and long delays in individuals with moderate-severe traumatic brain injury. Front Hum Neurosci. 2020;14:270. [CrossRef] [Medline]
- Rigon A, Schwarb H, Klooster N, Cohen NJ, Duff MC. Spatial relational memory in individuals with traumatic brain injury. J Clin Exp Neuropsychol. Jan 2, 2020;42(1):14-27. [CrossRef]
- Ylvisaker M. Context-sensitive cognitive rehabilitation after brain injury: theory and practice. Brain Impair. May 1, 2003;4(1):1-16. [CrossRef]
- McAndrews MP, Cohn M, Gold DA. Infusing cognitive neuroscience into the clinical neuropsychology of memory. Curr Opin Behav Sci. Apr 2020;32:94-101. [CrossRef]
- Stark C, Stark S, Gordon B. New semantic learning and generalization in a patient with amnesia. Neuropsychology. 2005;19(2):139-151. [CrossRef]
- Seabrook R, Brown GDA, Solity JE. Distributed and massed practice: from laboratory to classroom. Appl Cognit Psychol. Jan 2005;19(1):107-122. [CrossRef]
- Cepeda NJ, Pashler H, Vul E, Wixted JT, Rohrer D. Distributed practice in verbal recall tasks: a review and quantitative synthesis. Psychol Bull. May 2006;132(3):354-380. [CrossRef] [Medline]
- Gordon KR. The advantages of retrieval-based and spaced practice: implications for word learning in clinical and educational contexts. Lang Speech Hear Serv Sch. Oct 2, 2020;51(4):955-965. [CrossRef] [Medline]
- Middleton EL, Rawson KA, Verkuilen J. Retrieval practice and spacing effects in multi-session treatment of naming impairment in aphasia. Cortex. Oct 2019;119:386-400. [CrossRef] [Medline]
- Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1-32. [CrossRef] [Medline]
- Low JK, Manias E. Use of technology-based tools to support adolescents and young adults with chronic disease: systematic review and meta-analysis. JMIR Mhealth Uhealth. Jul 18, 2019;7(7):e12042. [CrossRef] [Medline]
- Celler BG, Lovell NH, Basilakis J. Using information technology to improve the management of chronic disease. Med J Aust. Sep 1, 2003;179(5):242-246. [CrossRef] [Medline]
- Blake H. Mobile phone technology in chronic disease management. Nurs Stand. 2008;23(12):43-46. [CrossRef] [Medline]
- Milani RV, Bober RM, Lavie CJ. The role of technology in chronic disease care. Prog Cardiovasc Dis. May 2016;58(6):579-583. [CrossRef]
- Nam S, Griggs S, Ash GI, et al. Ecological momentary assessment for health behaviors and contextual factors in persons with diabetes: a systematic review. Diabetes Res Clin Pract. Apr 2021;174:108745. [CrossRef] [Medline]
- Mayberry LS, Mulvaney SA, Johnson KB, Osborn CY. The messaging for diabetes intervention reduced barriers to medication adherence among low-income, diverse adults with type 2. J Diabetes Sci Technol. Jan 2017;11(1):92-99. [CrossRef] [Medline]
- Vaezipour A, Whelan BM, Wall K, Theodoros D. Acceptance of rehabilitation technology in adults with moderate to severe traumatic brain injury, their caregivers, and healthcare professionals: a systematic review. J Head Trauma Rehabil. 2019;34(4):E67-E82. [CrossRef] [Medline]
- Leopold A, Lourie A, Petras H, Elias E. The use of assistive technology for cognition to support the performance of daily activities for individuals with cognitive disabilities due to traumatic brain injury: the current state of the research. NeuroRehabilitation. 2015;37(3):359-378. [CrossRef] [Medline]
- Kirsch NL, Shenton M, Spirl E, et al. Web-based assistive technology interventions for cognitive impairments after traumatic brain injury: a selective review and two case studies. Rehabil Psychol. 2004;49(3):200-212. [CrossRef]
- Juengst SB, Terhorst L, Nabasny A, et al. Use of mHealth technology for patient-reported outcomes in community-dwelling adults with acquired brain injuries: a scoping review. Int J Environ Res Public Health. Feb 23, 2021;18(4):2173. [CrossRef] [Medline]
- Rabinowitz A, Hart T, Wilson J. Ecological momentary assessment of affect in context after traumatic brain injury. Rehabil Psychol. Nov 2021;66(4):442-449. [CrossRef] [Medline]
- Lenaert B, Colombi M, van Heugten C, Rasquin S, Kasanova Z, Ponds R. Exploring the feasibility and usability of the experience sampling method to examine the daily lives of patients with acquired brain injury. Neuropsychol Rehabil. Jun 2019;29(5):754-766. [CrossRef] [Medline]
- Mitchell RJ, Goggins R, Lystad RP. Synthesis of evidence on the use of ecological momentary assessments to monitor health outcomes after traumatic injury: rapid systematic review. BMC Med Res Methodol. Apr 22, 2022;22(1):119. [CrossRef] [Medline]
- Ownsworth T, Mitchell J, Griffin J, Bell R, Gibson E, Shirota C. Electronic assistive technology to support memory function after traumatic brain injury: a systematic review of efficacy and user perspectives. J Neurotrauma. Aug 2023;40(15-16):1533-1556. [CrossRef] [Medline]
- Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. Apr 2009;42(2):377-381. [CrossRef] [Medline]
- Anwyl-Irvine AL, Massonnié J, Flitton A, Kirkham N, Evershed JK. Gorilla in our midst: an online behavioral experiment builder. Behav Res Methods. Feb 2020;52(1):388-407. [CrossRef] [Medline]
- Duff MC, Morrow EL, Edwards M, et al. The value of patient registries to advance basic and translational research in the area of traumatic brain injury. Front Behav Neurosci. 2022;16:846919. [CrossRef] [Medline]
- Malec JF, Brown AW, Leibson CL, et al. The Mayo Classification System for Traumatic Brain Injury Severity. J Neurotrauma. Sep 2007;24(9):1417-1424. [CrossRef]
- Maramba I, Chatterjee A, Newman C. Methods of usability testing in the development of eHealth applications: a scoping review. Int J Med Inform. Jun 2019;126:95-104. [CrossRef] [Medline]
- Jaspers MWM. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. Int J Med Inform. May 2009;78(5):340-353. [CrossRef]
- Rubin J, Chisnell D. Handbook of Usability Testing. Wiley Publishing; 2008.
- National Institutes of Health. NIH toolbox scoring and interpretation guide for the ipad. 2016. URL: https://www.nihtoolbox.org/app/uploads/2022/05/Toolbox_Scoring_and_Interpretation_Guide_for_iPad_v1.7-5.25.21.pdf [Accessed 2024-11-19]
- Heaton RK, Akshoomoff N, Tulsky D, et al. Reliability and validity of composite scores from the NIH toolbox cognition battery in adults. J Int Neuropsychol Soc. Jul 2014;20(6):588-598. [CrossRef]
- Nelson LA, Pennings JS, Sommer EC, Popescu F, Barkin SL. A 3-item measure of digital health care literacy: development and validation study. JMIR Form Res. Apr 29, 2022;6(4):e36043. [CrossRef] [Medline]
- Lewis JR. The system usability scale: past, present, and future. Int J Hum Comput Interact. Jul 3, 2018;34(7):577-590. [CrossRef]
- Bangor A, Kortum PT, Miller JT. An empirical evaluation of the system usability scale. Int J Hum Comput Interact. Jul 29, 2008;24(6):574-594. [CrossRef]
- Brooke J. SUS: a retrospective. J Usability Stud. 2013;8(2):29-40.
- Sauro J. A Practical Guide to the System Usability Scale. Createspace Independent Publishing Platform; 2011.
- Virzi RA. Refining the test phase of usability evaluation: how many subjects is enough? Hum Factors. Aug 1992;34(4):457-468. [CrossRef]
- Hwang W, Salvendy G. Number of people required for usability evaluation. Commun ACM. May 2010;53(5):130-133. [CrossRef]
- Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4:114-123.
- Lannin N, Carr B, Allaous J, Mackenzie B, Falcon A, Tate R. A randomized controlled trial of the effectiveness of handheld computers for improving everyday memory functioning in patients with memory impairments after acquired brain injury. Clin Rehabil. May 2014;28(5):470-481. [CrossRef]
- Bos HR, Babbage DR, Leathem JM. Efficacy of memory aids after traumatic brain injury: a single case series. NeuroRehabilitation. 2017;41(2):463-481. [CrossRef] [Medline]
- Dowds MM, Lee PH, Sheer JB, et al. Electronic reminding technology following traumatic brain injury: effects on timely task completion. J Head Trauma Rehabil. 2011;26(5):339-347. [CrossRef] [Medline]
- Culley C, Evans JJ. SMS text messaging as a means of increasing recall of therapy goals in brain injury rehabilitation: a single-blind within-subjects trial. Neuropsychol Rehabil. Jan 2010;20(1):103-119. [CrossRef] [Medline]
- Czajkowski SM, Powell LH, Adler N, et al. From ideas to efficacy: the ORBIT model for developing behavioral treatments for chronic diseases. Health Psychol. Oct 2015;34(10):971-982. [CrossRef] [Medline]
- Czajkowski S, Freedland K, Powell L. The “nuts & bolts” of behavioral intervention development: using the ORBIT model to develop behavioral treatments for chronic diseases. Presented at: IBTN Workshop; May 26, 2018; Montreal, Quebec, Canada.
Abbreviations
MEMI: memory ecological momentary intervention
REDCap: Research Electronic Data Capture
TBI: traumatic brain injury
Edited by Boris Schmitz; submitted 17.04.24; peer-reviewed by Amaryllis-Chryssi Malegiannaki, Mireille Gagnon-Roy; final revised version received 13.09.24; accepted 16.09.24; published 26.11.24.
Copyright© Emily L Morrow, Lyndsay A Nelson, Melissa C Duff, Lindsay S Mayberry. Originally published in JMIR Rehabilitation and Assistive Technology (https://rehab.jmir.org), 26.11.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Rehabilitation and Assistive Technology, is properly cited. The complete bibliographic information, a link to the original publication on https://rehab.jmir.org/, as well as this copyright and license information must be included.