Abstract
Technology-based interventions in the field of disability and rehabilitation, which serve assistive, therapeutic, and/or service delivery functions, are considered complex due to the skills required of providers and recipients, degree of individual tailoring, and diversity of use settings. Feasibility studies are an important step in the evolution of complex interventions that can help refine the intervention, inform implementation, and prevent wasted resources. However, guidance is lacking regarding specific considerations for feasibility studies of technology-based interventions in disability and rehabilitation, which leaves researchers and developers reliant on resources from other fields that do not address important technology properties. To advance the field, context-specific definitions, considerations, and evaluation dimensions must be explicitly outlined to ensure that feasibility studies are constructively designed to meet the unique needs of these interventions. In this viewpoint article, we (1) propose a definition and framework for feasibility studies within the specific context of technology-based disability and rehabilitation interventions, (2) highlight important and unique imperatives for feasibility studies of these interventions, and (3) articulate relevant feasibility dimensions and associated evaluation criteria for these interventions. Building on previous work, we distinguish between feasibility studies, wherein we focus on iterative intervention refinement by addressing key development questions (eg, usability), and pilot studies, which are small-scale versions of a larger study that will evaluate intervention outcomes. Integrating previous typologies, we present 13 feasibility dimensions relevant to technology-based interventions and provide sample evaluation criteria, focusing on the intervention itself rather than study design considerations (eg, trial management). 
This information may be useful for research and development communities (academic, clinical, or industry) to inform comprehensive feasibility studies that examine unique aspects of technology-based interventions to promote real-world impact. This contribution encourages greater harmonization of terminology and evaluation methods to streamline interpretation and comparison across studies.
JMIR Rehabil Assist Technol 2026;13:e79026. doi: 10.2196/79026
Introduction
Technology-Based Interventions in Disability and Rehabilitation
Innovative technology-based solutions are increasingly sought to meet the evolving needs of health and social service systems, particularly since the onset of the COVID-19 pandemic [-]. In the field of disability and rehabilitation, technology-based interventions involve structured use of technological devices or systems to achieve targeted goals related to health, function, or participation. Technology-based interventions can be broadly classified as serving assistive, therapeutic, and/or service delivery purposes []. Assistive products, such as wheelchairs, hearing aids, and communication devices, help optimize a person’s functioning to reduce disability []. Therapeutic interventions restore or improve function through the use of technologies including robotics, alternative reality systems, and exercise-based video games. Technology-mediated service delivery models replace or augment existing services to improve care access or maximize efficiency through approaches like telerehabilitation, web or mobile apps, and smart home monitoring systems. Technology-based interventions encompass the core technology products, such as the devices, software, or platform, as well as services (eg, training) and systems (eg, smart home system) that enable use of the intervention. Target users of such interventions may include rehabilitation patients, people with disabilities, clinicians, or care partners.
Regardless of intended function and target population, technology-based interventions are considered complex for several reasons. They often require providers and recipients to learn specific skills and necessitate customization or tailoring to individuals and/or specific contexts or settings of use []. Varying these factors may also engender technical complexity (eg, privacy, data security, reliability, usability, and scalability) that needs to be managed. Rapid testing may help uncover design requirements that are not identified during initial design activities and guide iterative intervention refinement.
Feasibility and Pilot Studies
Feasibility is a vital facet in the Medical Research Council (MRC) framework for complex intervention development and evaluation []. Dictionaries define feasibility as the possibility, capability, likelihood, or reasonableness that something can be made, done, or achieved [,]. In the context of health research, feasibility studies refer to preliminary research before a main study to determine whether future work can and should be done, and to inform the design of such work []. According to the MRC framework, feasibility studies should address both the intervention itself (ie, content and delivery), as well as study design and process considerations for future evaluation (eg, recruitment, data collection, outcome measurement) []. Common aspects of feasibility studies include a focus on process rather than outcomes, exploration of key areas of uncertainty, and evaluation of multiple dimensions based on predetermined success or progression criteria [,]. Thorough feasibility studies prior to larger-scale evaluation contribute to better interventions, stronger evaluation designs, greater likelihood of successful implementation, and fewer wasted resources [,].
Historically, the terms feasibility and pilot have been used interchangeably and inconsistently in the literature, causing confusion about their scope and objectives [,]. In response to this lack of terminological clarity and growing interest in the field, Eldridge et al [] developed an international consensus–based conceptual framework for feasibility and pilot studies that has been widely cited. According to their framework, feasibility studies ask whether something can be done, if it should be done, and how it should be done. Pilot studies are considered a subset of feasibility studies that ask similar questions but do so by conducting a small-scale version of a future study (or parts of it) []. It is now generally accepted that feasibility studies represent an overarching type of research concerning whether something will work, with pilot studies as a subset that specifically aim to test the operation of a future larger study [,,].
Rooted in phased clinical trial practices, the Eldridge et al [] framework focuses on studies conducted in preparation for a randomized controlled trial (RCT). However, RCTs are often inappropriate for technology-based interventions in disability and rehabilitation due to practical challenges (eg, need for prolonged intervention use across diverse settings, difficulty matching the speed of technology advances), scientific limitations (eg, lack of masking and limited generalizability to real-world contexts), and ethical concerns of withholding interventions with observable benefits (see Wang et al [] for a detailed description of these issues). Given the challenges associated with reliance on phased clinical trial practices, new approaches have been proposed to generate evidence for technology-based interventions in disability and rehabilitation [,]. An alternate field-specific conceptualization of feasibility studies is thus warranted.
Feasibility Evaluation of Technology-Based Interventions
To address the challenges above, Wang et al [] proposed the Framework for Accelerated and Systematic Technology-Based Intervention Development and Evaluation Research (FASTER), a three-phased approach to evidence generation informed by the MRC guidance: (1) development, (2) progressive usability and feasibility evaluation, and (3) scaled evaluation and implementation []. The aim of the feasibility phase is to progressively refine the intervention prototype through small-scale iterative usability testing and assessment of other process concerns (eg, acceptability, adherence, and intervention safety) []. By uncovering insights on how to improve intervention features, the feasibility phase is essential to promote future uptake and sustained use []. Within the feasibility phase, FASTER promotes a variety of research designs and methodologies to gain a comprehensive understanding of the intervention, its users, and their contexts of use. Teams are encouraged to progressively address issues identified in previous work and tailor the feasibility evaluation to key research questions, uncertainties, and intervention maturity []. However, no guidance is offered on what aspects of feasibility may be relevant.
To our knowledge, no existing frameworks delineate aspects of feasibility relevant for technology-based interventions. While previous work has provided guidance on the design, key features, and areas of evaluation for feasibility studies across various health-related fields (eg, pharmaceuticals [], public health [], psychology [], and rehabilitation [,]), these resources do not account for aspects of feasibility related to interactions between users, technology, and the environment (eg, usability and technical performance), or unique considerations for disability and rehabilitation populations (eg, duration and diversity of intervention use). Specification is important since feasibility studies are applied differently across fields. Technical feasibility studies within engineering or computer science disciplines generally assess whether a system can be developed to perform its intended functions within defined constraints and are therefore typically undertaken in highly controlled settings. Within business settings, feasibility studies assess the likelihood of success for a proposed commercial product or service considering factors like market demand and financial viability. A comprehensive harmonizing framework is needed to provide field-specific guidance for feasibility studies of technology-based interventions, that is, studies addressing their potential to perform as expected and deliver intended outcomes.
A harmonized framework could promote more efficient and theoretically driven intervention design, development, and evaluation. Currently, researchers and developers resort to borrowing and “translating” concepts from sources in other fields, a time-consuming undertaking that requires deep familiarization with new literature. This translation leads to differences in interpretation and application of concepts, making it difficult to compare and consolidate findings across studies. Without unified and cohesive resources, research in this area remains conceptually and empirically fragmented, slowing the pace and undermining the quality of innovation. To advance the field amidst rapid digitization of tools, services, and programs, a comprehensive framework is needed that outlines feasibility considerations, dimensions, and evaluation criteria pertinent to technology-based interventions. Such a framework will help researchers and developers achieve consistency and comparability, providing a foundation to develop best practices for studying the feasibility of technology-based interventions for disability and rehabilitation.
Aims
This viewpoint article aims to (1) propose a definition and framework for feasibility studies within the context of technology-based disability and rehabilitation interventions, (2) highlight important and unique imperatives for feasibility studies of these interventions, and (3) articulate feasibility dimensions and evaluation criteria relevant to these interventions. We focus on the feasibility of the intervention itself rather than study design and process considerations (eg, trial management) for which suitable field-specific guidance is available [-]. It is hoped that this discussion encourages comprehensive feasibility studies that examine unique aspects of technology-based interventions to promote real-world research impact.
Authorship Identity Statement
For transparency, we highlight key identities of the authorship team informing this viewpoint. The authors have varied disciplinary backgrounds (kinesiology, occupational therapy, cognitive sciences, rehabilitation science, biomedical engineering, and computer science) and degrees of clinical, research, and lived experiences related to rehabilitation and assistive technologies. Our experiences span diverse populations (eg, traumatic brain injury, respiratory conditions, stroke, and dementia), technologies (eg, telerehabilitation, smart homes, mobility devices, artificial intelligence, robotic rehabilitation, and assistive devices), and user groups (eg, care recipients, partners, providers, industry, and policymakers), across the lifespan (children to older adults). These experiences shape our assumptions about the iterative multifaceted nature of technology-based intervention research and inform our interpretation of the literature.
Defining Feasibility Studies of Technology-Based Interventions
Building on conceptual definitions from Eldridge et al [] and FASTER [], we propose the following adapted definitions of feasibility and pilot studies for application to the context of technology-based intervention research in the field of disability and rehabilitation.
Together, feasibility and pilot studies aim to gather insights about intervention features and their delivery [,,]. This involves a focus on issues related to the intervention (eg, practicality, usability, and safety) and its implementation, as well as gaining a comprehensive understanding of user experience with the intervention across use contexts [,,]. Once an intervention is sufficiently developed and data support its feasibility, pilot studies may be undertaken as part of the scaled evaluation and implementation phase [,]. Pilot studies have a greater focus on outcomes and may be more likely to consider additional scientific, logistical, or study process factors relevant to the design of future larger definitive studies, such as recruitment and retention rates, data completion and quality, and effect size estimates for efficacy measures [,]. At this stage, technology-based interventions should be evaluated with target users in real-world settings to enable realistic measurement of use and abandonment, effects on health or social outcomes, and economic considerations [,]. Pilot studies can be particularly valuable to inform such research and scale evaluation across time periods, user groups, care contexts, or geographic locations.
The next section highlights imperatives for evaluating the feasibility of technology-based interventions in disability and rehabilitation, followed by a summary of potential feasibility dimensions and associated evaluation criteria. To reiterate, we focus on the feasibility of the technology-based intervention itself, rather than study design and process considerations for future larger evaluation studies. Hence, factors related to study procedures (eg, recruitment, data collection, research infrastructure) are not included. Readers are directed to other works that discuss these aspects of study feasibility for intervention research, which are readily applicable to technology-based interventions [-]. We recognize there is overlap between intervention and study feasibility, and as such, information presented below may be useful for either purpose.
Feasibility study: preliminary investigation of a concept, a part, or the whole of a new technology-based intervention to assess its viability and inform its iterative refinement by addressing uncertainty about key development and design questions, such as how well an intervention prototype can be used (ie, usability), whether it should be used, and how it should be used.
Pilot study: specific type of feasibility study that is a small-scale version of part or all of a larger future study that evaluates outcomes, such as the use, effectiveness, or impacts of a new technology-based intervention. Pilot studies may be conducted in either controlled or naturalistic real-world contexts and require the intervention to be administered. Pilot studies involve target end users, or participants who share some of the characteristics or experiences of the target end users or have knowledge of their experiences and contexts. Decisions about the study context and participants depend on factors such as technology readiness or ethical concerns (eg, anticipated risk to target end users).
Imperatives for Evaluating the Feasibility of Technology-Based Interventions
Developing a new technology-based intervention is an iterative nonlinear process that often involves cycles of prototyping, evaluation, and refinement []. Several rounds of feasibility testing and design modification may be warranted before an intervention is considered for larger-scale evaluation. While most interventions require some form of feasibility study, the inclusion of technology and the unique needs of disability and rehabilitation populations introduce additional complexities that merit consideration. Based on our collective research and clinical experiences, literature review, and application of the FASTER framework [], we synthesize 10 imperatives that feasibility studies for technology-based disability and rehabilitation interventions should address. These imperatives relate to target user populations; technology and intervention design; and regulatory, policy, and funding constraints. While some are not entirely unique to technology-based interventions, the inclusion of technology complicates how these imperatives are addressed.
Target user population needs
- Accounting for diversity in user groups and target aims.
- Technology-based interventions in disability and rehabilitation may have several different user groups and various target health or functional outcomes []. The profile of individuals receiving these interventions may also be complicated by heterogeneity, comorbidities or concurrent needs, and involvement in multiple interventions [].
- Triangulation of multiple research methods and/or data sources provides a more complete understanding of feasibility and maximizes knowledge gained about the intervention from smaller sample sizes [].
- Identifying meaningful user outcomes for health and social function.
- Technology-based interventions for disability and rehabilitation populations serve functions beyond the medical model (curing symptoms or impairments), addressing issues related to health and social outcomes, such as management of chronic conditions, participation, and social inclusion.
- While the focus of feasibility studies is not to evaluate intervention efficacy, exploring mechanisms of action and identifying the potential impacts and meaningful outcomes for users can help refine intervention features and inform future studies [].
- Creating conditions for meaningful feedback from different user groups.
- Multiple user groups may interact with or be impacted by rehabilitation and assistive technology interventions, such as persons with a disability or clinical condition, clinicians, caregivers, and family members.
- Given that different user groups may have distinct needs, values, capacities, and perspectives, creative testing approaches may be needed to simulate the experiences of various groups using the intervention in real-world conditions for them to provide meaningful contextualized feedback.
- There may be stigma associated with the intervention that requires care to ensure that users feel comfortable testing it and providing feedback.
Technology and intervention considerations
- Navigating technological complexities.
- Technology-based interventions in disability and rehabilitation may be complex due to the number of technology components encompassed and their flexibility (ie, customizability) [], skills required by service providers and recipients [], degree of individual tailoring required [], and diversity of use settings []. These factors are complicated by variable levels of digital literacy and technical expertise among service providers and recipients who must learn and adapt to the technology.
- Diverse study designs and methods from technical and clinical disciplines are thus needed to generate evidence for the feasibility and benefits of technology-based interventions [].
- Evaluating technology properties in real-world settings.
- Evaluating technical performance (eg, system performance and reliability) and usability is critical to the development of new technology-based interventions []. These factors should be considered in early feasibility testing and require unique approaches to evaluation.
- At the feasibility stage, intervention performance and usability should ideally be explored beyond a controlled lab environment [,]. Real-world testing considers how technology properties (eg, components and functions) may be affected by environmental conditions such as lighting, flooring, or background noise.
- Monitoring adaptations needed to accommodate user needs.
- Modifications to a rehabilitation or assistive intervention and its application may be needed to accommodate individual user needs or specific use settings. Adaptability to enable such modifications is a key feature of implementation success that should be addressed in feasibility testing.
- Monitoring the frequency, extent, and impact of intervention modifications at the feasibility stage can provide important information regarding adaptability and help identify overly rigid features as well as opportunities for increased flexibility.
- Incorporating relevant support services.
- Support services are often required to facilitate technology-based interventions, including eligibility and fit assessment, device customization, user training, product servicing or maintenance, and technical support [].
- Incorporating aspects of these support services during feasibility testing can provide a more realistic evaluation of how the intervention may function in real-world environments.
- Capturing appropriate duration and diversity of use.
- Technology-based rehabilitation and assistive interventions are often not used discretely but rather for prolonged or lifelong periods in various settings (eg, hospital or care center, home, workplace, community), each of which may be associated with unique privacy, ethical, social, or security concerns.
- Evaluating feasibility at appropriate intervals in diverse real-world contexts is important to identify changes in user experience that may occur during activities with differing demands, in unique environments, or due to changing individual circumstances over time (eg, decline in health status or new social environment).
Regulatory, policy, and funding constraints
- Navigating and obtaining regulatory approvals.
- Some rehabilitation and assistive technologies may be classified as medical devices requiring regulatory approval (eg, from Health Canada, the US Food and Drug Administration, or the European Medicines Agency).
- Depending on jurisdictional requirements, specific outcomes or data may be needed to obtain relevant approvals, documentation of which may be initiated at the feasibility stage (eg, history of risk management and safety plans).
- Development teams should be aware of applicable device classifications and associated requirements so they can be addressed in subsequent studies.
- Examining variability in funding and reimbursement structures.
- Funding sources (eg, publicly funded health care, third-party insurance, and self-pay) and reimbursement policies for technology-based interventions vary across jurisdictions, meaning there may be uncertainty about optimal approaches to providing access and establishing commercial feasibility.
- Data related to the cost of the intervention and/or potential cost savings may support market research and commercial feasibility studies to establish a clear pathway for providing access.
Evaluating the Feasibility of Technology-Based Interventions in Disability and Rehabilitation
Here we outline our approach to consolidate potential areas of focus for feasibility studies of technology-based interventions in disability and rehabilitation. Broadly, we “translated” existing concepts from other fields to better fit technology-based interventions by addressing important considerations in this field (see Imperatives for Evaluating the Feasibility of Technology-Based Interventions) and identified additional areas where needed. The first author identified sources by searching Google Scholar and PubMed using keywords including “technology,” “intervention,” and “feasibility” and reviewing forward and backward references. Commonalities across sources were clustered, synthesized, and organized into dimensions that were reviewed by other authors and refined through discussion. Based on our knowledge and experiences in the field, we developed guiding questions for each dimension by adapting existing descriptions from the literature where possible and selected sample evaluation criteria. Throughout this process, conflicting ideas were iteratively resolved through deliberation until consensus was reached.
Most feasibility aspects included here are not new, but rather a collection of those described in other fields [,,-], which we curated and adapted to this unique context. In particular, we drew upon guidance from Bowen et al [] who proposed 8 areas of focus for feasibility studies in public health research that are relevant to technology-based interventions: (1) acceptability, (2) demand, (3) implementation, (4) practicality, (5) adaptation, (6) integration, (7) expansion, and (8) limited efficacy testing []. These areas were a foundation upon which we supplemented additional properties relevant to technology-based interventions.
Dimensions Overview
The table below summarizes feasibility dimensions for technology-based interventions with associated guiding questions and sample evaluation criteria. We use the term “dimension” to describe aspects of feasibility, while evaluation criteria refer to the types of information used to address the guiding questions. These criteria include relevant constructs or properties for qualitative inquiry, specific measured parameters and outcomes, and potential analysis tools or frameworks. Each dimension is discussed within the subsections below, including literature examples to illustrate specific points. These dimensions encompass clinical, technical, and commercial properties, but they are not exhaustive or prescriptive. Rather, they are a starting point for determining pertinent issues and research questions on an individual basis, which may be addressed in a single study or through the course of multiple studies. As some dimensions are highly interrelated and evaluation criteria may be relevant across multiple dimensions, readers are encouraged to select dimensions and criteria based on the unique context and maturity of their intervention, providing appropriate justification.
| Dimension | Guiding question | Sample evaluation criteria |
| --- | --- | --- |
| Technical performance | To what extent does the performance of a technology-based intervention satisfy its requirements and meet its intended goals over a specified period of time in a given context? | |
| Safety | To what extent is a technology-based intervention associated with risk of harm? | |
| Tolerability | To what extent can a technology-based intervention be used without experiencing unacceptable discomfort? | |
| Acceptability | To what extent is a technology-based intervention judged by users and other interest holders as suitable, appropriate, or satisfactory towards addressing its intended aim(s)? | |
| Demand | To what extent is a technology-based intervention likely to be used by its intended users for its intended purpose? | |
| Usability | To what extent does a technology-based intervention enable users to accomplish its stated goal(s) effectively, efficiently, and to their satisfaction? | |
| Adaptability | To what extent can a technology-based intervention be modified to meet the needs of diverse users and use contexts? | |
| Practicality | To what extent can a technology-based intervention be used in its intended setting(s) given resource constraints such as space, equipment, personnel, and time? | |
| Implementation | To what extent can a technology-based intervention be deployed and used as intended in environments with varying levels of control? | |
| Integration | To what extent can a technology-based intervention be incorporated into existing systems? | |
| Impact | To what extent does a technology-based intervention show promise in eliciting positive changes for its target population? | |
| Ethicality | To what extent is a technology-based intervention aligned with ethical values? | |
| Commercial viability | To what extent does a technology-based intervention show promise towards being economically viable for commercialization? | |
a Adapted from Garrett et al [].
b Adapted from Bowen et al [].
c Based on the ISO 9241-11:2018 definition of usability.
Technical Performance
Users of technology-based interventions rely on systems that perform consistently to achieve their goals. Technical flaws and system errors are frequently cited as barriers to technology adoption [,]. Technical performance examines how well a technology-based intervention satisfies its functional requirements and meets its intended goals over a specified time period in a given context. This reflects aspects of technology performance evaluation from systems and design engineering []. Whereas usability focuses on interaction between the user and the technology, technical performance is more concerned with the function of the technology itself.
In systems engineering, technical performance measures refer to measurable qualities that determine whether a system satisfies key technical requirements []. Relevant technical performance measures for a project depend on the type of technology and its intended functions. The table below summarizes select technical performance measures with examples from interventions in disability and rehabilitation. While these properties may be initially addressed during intervention development, the feasibility stage extends performance evaluation to real-world contexts to identify features requiring refinement [].
| Measure | Definition | Example |
| --- | --- | --- |
| Accuracy | The closeness of agreement between an observed value and an accepted reference value (ground truth). | In a feasibility study of a sensor motion detection system for fall prevention in hospitalized older adults, Ferrari et al [] assessed the accuracy of sensor output data by establishing its correlation with observer ratings of participant movement from video recordings. |
| Responsiveness | How well a system adjusts to commands or cues promptly. | Fang et al [] evaluated the responsiveness of a Rotational Orthosis for Walking with Arm Swing system by assessing whether a measurable movement was produced that was close to the user’s target joint trajectory. |
| Reliability | Likelihood of a system performing its intended function without error for a given period of time under a specified set of conditions. | Awad et al [] assessed the reliability of the ReWalk ReStore soft robotic exosuit by measuring device malfunctions during usage. |
| Availability | Likelihood that a system is operational at a given point in time under a given set of environmental conditions. | Marschollek et al [] assessed the technical feasibility of a smart home monitoring system by measuring the downtime of installed systems in relation to their up-times. |
a According to the American Society for Quality [].
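Two of the measures above lend themselves to simple quantification: availability is commonly expressed as uptime divided by total observed time, and reliability can be summarized as mean time between failures (MTBF). The minimal sketch below illustrates these calculations; the function names and figures are hypothetical and not drawn from the cited studies.

```python
def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Fraction of the observation period during which the system was operational."""
    total = uptime_hours + downtime_hours
    return uptime_hours / total if total > 0 else 0.0


def mean_time_between_failures(operating_hours: float, n_failures: int) -> float:
    """Average operating time between recorded malfunctions (MTBF)."""
    return operating_hours / n_failures if n_failures > 0 else float("inf")


# Hypothetical example: a home monitoring system logged 720 h of uptime and
# 80 h of downtime over one month, with 4 malfunctions during operation.
print(round(availability(720, 80), 2))      # 0.9
print(mean_time_between_failures(720, 4))   # 180.0
```

Reporting such parameters alongside the conditions of use (setting, duration, environmental factors) can help teams distinguish technical faults from usability problems during feasibility testing.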
Safety
Several authors identify safety as a critical component of intervention feasibility, and it is among the most common measures reported by feasibility studies in rehabilitation research []. Safety refers to the risk of harm imposed on intervention recipients or others affected by the intervention []. In some cases, safety is closely related to technical performance. “Design for safety” is an approach whereby products and systems are purposefully engineered to prioritize the safety of people who interact with them, with specific features designed to proactively address safety risks []. For example, a robotic exoskeleton could be made “fail-safe” by defaulting into a safe system state upon detecting an operational error to prevent injury caused by excessive force applied to a limb []. Thus, intervention safety may depend on appropriate technical performance.
Safety is typically operationalized as the extent to which an intervention is associated with adverse events, such as injuries or disease exacerbations. Measuring the frequency and severity of adverse events provides information about the risks and potential harm of an intervention in a given setting. Defining adverse events prior to commencing evaluation helps ensure relevant incidents are captured and properly managed. For technology-based interventions, researchers may wish to evaluate users’ ability to complete tasks safely while using the device. This can be accomplished through behavioral observation and coding safety incidents and near misses. Perceived safety from the perspective of users and members of their support system may also provide valuable insight, particularly for devices that involve loss of control (eg, robotic exoskeletons), or those that target vulnerable populations like people with dementia.
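As a minimal illustration of the approach described above, adverse events that were predefined before evaluation can be tallied by severity and normalized by exposure. The records and counts below are hypothetical.

```python
# Hypothetical adverse event records (participant, event, severity) logged
# against definitions agreed on before the evaluation began.
from collections import Counter

events = [
    ("P1", "skin irritation", "mild"),
    ("P3", "fall", "moderate"),
    ("P1", "skin irritation", "mild"),
]
exposure_hours = 480.0  # total device-hours across all participants

by_severity = Counter(severity for _, _, severity in events)
event_rate = len(events) / exposure_hours  # events per device-hour

print(dict(by_severity), f"rate={event_rate:.4f} events/device-hour")
```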
Wang et al [] provide an example of multidimensional safety evaluation in a study evaluating contact sensors for powered wheelchairs among older adults with dementia []. A multiple single-subject mixed methods design was used, involving quantification of safety-related incidents observed by the investigator during testing with older adults in nursing homes, and qualitative exploration of perceived safety via debrief interviews with participants and focus groups with staff. Investigator observations showed that while the sensors successfully detected most obstacles, minor design faults (eg, gaps between panels) led to some performance errors, and more sensors were needed for complete environmental coverage to minimize injury risk. The qualitative analysis revealed that resident and staff perceptions of safety were mixed, leading the authors to identify a need for further safety improvements by increasing sensor reliability and coverage []. This study demonstrates the close relationship between safety and technical performance and underscores the importance of assessing and redesigning for safety when developing technology-based interventions for people with cognitive impairment.
Tolerability
Related to safety is the concept of tolerability. The US Food and Drug Administration defines tolerability as the degree to which adverse effects can be tolerated by participants []. For technology-based interventions, we conceptualize tolerability as the degree to which users can engage with the intervention without experiencing unacceptable discomfort. Tolerability is particularly important for interventions that are associated with discomfort, such as neuromodulation [], brain-computer interfaces [], and prosthetic devices [], although it may also be relevant for other types of interventions that serve populations with sensitivities (eg, chronic pain, headaches, and autism spectrum disorder). It has important implications for other dimensions such as acceptability, usability, demand, implementation, and ultimately adoption. For example, device fit and comfort are critical factors in long-term use and abandonment of upper limb prosthetics []. Assessing tolerability can also help determine appropriate treatment dosage.
Self-report measures are useful to capture side effects or changes in participants’ condition while interacting with the intervention. For example, symptoms or discomfort can be rated before, during, and after intervention use with condition-specific instruments, or generic subjective rating scales (eg, Visual Analog Scale, Verbal Rating Scale, Numeric Rating Scale, and Borg Scale) []. Validated device-specific comfort questionnaires are also available for some interventions, such as wheelchairs [] and wearable computers []. When testing interventions in real-world settings, ecological momentary assessment (EMA) can be a useful approach. EMA involves repeatedly sampling participants’ current condition in real time within their natural environment []. Technology-based interventions offer unique opportunities to embed EMA (eg, mobile app prompts), allowing teams to gather detailed and context-rich data on participant discomfort, stress, fatigue, and frustration. By assessing users’ condition at various time points in everyday environments, EMA can provide valuable insight into factors that influence tolerability and help monitor cumulative stress from long-term use.
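As one possible analysis of such EMA data, momentary discomfort ratings can be aggregated by context to flag conditions that strain tolerability. The ratings, contexts, and participant identifiers below are hypothetical.

```python
# Hypothetical EMA export: (participant, context, discomfort rating 0-10).
from collections import defaultdict
from statistics import mean

ema_records = [
    ("P1", "home", 2), ("P1", "work", 5), ("P1", "work", 6),
    ("P2", "home", 1), ("P2", "transit", 4), ("P2", "work", 3),
]

by_context = defaultdict(list)
for _, context, rating in ema_records:
    by_context[context].append(rating)

# Higher context means may indicate settings where tolerability suffers.
context_means = {ctx: mean(ratings) for ctx, ratings in by_context.items()}
print(context_means)
```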
Some objective methods of assessing comfort have been described. Physiological measures (eg, heart rate, skin conductance, and muscle activity) can identify adverse stress responses associated with the intervention. Pressure sensors and movement analysis techniques have been used to capture compensatory movements that indicate discomfort []. Finally, observational analysis of participants’ behavior during the intervention can help identify signs of discomfort, fatigue, or frustration [].
Intervention usage (eg, frequency, duration, or maintenance) is sometimes reported as a measure of tolerability by indicating whether demands are appropriate, or if participants avoid the intervention due to discomfort. In such cases, it should be clear why usage metrics are framed as measures of tolerability rather than engagement or adherence, and how information about tolerability is inferred. For example, a study on a smart glasses social communication aid for people with autism assessed tolerability based on participants’ ability to wear the device during coaching sessions []. The authors were interested in how well participants tolerated the smart glasses due to their sensory and cognitive challenges, rather than whether they chose to use the smart glasses (ie, engagement) or comply with a prescribed treatment plan (adherence).
A good example of tolerability evaluation can be found in a case study from Ryan et al [] on the integration of transcranial direct current stimulation (tDCS) into a pediatric inpatient brain injury rehabilitation program []. Individual session tolerance was monitored using pre- and postsession ratings of common tDCS side effects (eg, headache, fatigue, itching, and tingling). A postintervention Tolerability Questionnaire [] was also used to assess tolerance across the study period: the participant ordered a list of common activities from most to least enjoyable (eg, playing games, watching TV, going to the dentist, and getting a needle) and rated their tDCS experience relative to those activities. With minimal symptom exacerbation within sessions and a favorable rating of tDCS relative to other activities, the authors concluded that tDCS was well tolerated by the participant and could be incorporated into inpatient rehabilitation [].
Acceptability
Acceptability is a foundational aspect of feasibility that describes how individuals involved in an intervention react to it []. While empathizing with users to understand their needs and preferences is a vital step in designing an intervention, perspectives may change once users actually experience it, underscoring the need for user involvement throughout development and evaluation []. According to Nielsen [], technology is accepted when it satisfies the needs and requirements of its users and other interest holders. Thus, acceptability can be defined as the extent to which users of a technology-based intervention and other interest holders judge it to be suitable, appropriate, or satisfactory for addressing its intended aim. Acceptability is an important predictor of use behavior that precedes technology adoption []. Ultimately, successful implementation relies on procedures that are acceptable and meet the needs of individuals who deliver, receive, or otherwise engage with an intervention [,].
Efforts have been made to better understand and operationalize acceptability. Several theories have been proposed to explain technology acceptance, the most prominent of which include the Technology Acceptance Model (TAM and TAM2), the Combined Technology Acceptance and Theory of Planned Behavior Model, the Motivational Model, and Innovation Diffusion Theory []. Combining principles from many models, the Unified Theory of Acceptance and Use of Technology (UTAUT and UTAUT2) posits that acceptance and intention to use technology are dependent on one’s beliefs about performance expectancy (utility toward one’s goals), effort expectancy (ease of use), and facilitating conditions (organizational and technical support), as well as subjective norms regarding their use of the technology, all of which are moderated by gender, age, experience, and voluntariness of use []. For health care interventions, Sekhon et al [] developed the Theoretical Framework of Acceptability, which conceptualizes acceptability through 7 constructs: (1) affective attitude, (2) burden, (3) perceived effectiveness, (4) ethicality, (5) intervention coherence, (6) opportunity costs, and (7) self-efficacy []. It is clear that the acceptability of technology-based interventions in health care is complex and multidimensional, with several interacting factors related to beliefs and attitudes that overlap with other feasibility dimensions.
Satisfaction is considered an indicator of acceptability that can be measured quantitatively using questionnaires []. Sekhon et al [] developed the Theoretical Framework of Acceptability Questionnaire, a general measure reflecting the Theoretical Framework of Acceptability constructs that can be adapted for various health care interventions []. Other generic satisfaction scales include the Client Satisfaction Questionnaire [], Acceptability of Intervention Measure [], and Intervention Appropriateness Measure []. For technology-based interventions, the TAM questionnaire evaluates acceptance based on perceived usefulness and ease of use []. The list below highlights acceptability and satisfaction questionnaires relevant to technology-based interventions []. Comprehensive reviews of satisfaction and acceptability tools for assistive technologies [] and mobile health programs [] are also available.
Generic health care intervention measures
- Theoretical Framework of Acceptability Questionnaire []
- Client Satisfaction Questionnaire []
- Acceptability of Intervention Measure []
- Intervention Appropriateness Measure []
Generic technology measures
- Technology Acceptance Model Questionnaire []
- Unified Theory of Acceptance and Use of Technology Questionnaire []
Assistive technologies
- Quebec User Evaluation of Satisfaction with Assistive Technology 2.0 []
- Almere Technology Acceptance Questionnaire []
- Remote Monitoring Satisfaction Survey []
- Satisfaction with Prosthesis Questionnaire []
- Satisfaction with Amplification in Daily Life Questionnaire []
Therapeutic technologies
- Physiotherapy Mobile Acceptance Questionnaire []
- Exergame Enjoyment Questionnaire []
- Virtual Rehabilitation User Satisfaction Evaluation Questionnaire []
- Immersive Experience Questionnaire []
- Questionnaire for User Interaction Satisfaction []
Telehealth and mobile service delivery
- Telemedicine Satisfaction Questionnaire []
- Telemedicine Satisfaction and Usefulness Questionnaire []
- Telehealth Satisfaction Scale [-]
- Service User Technology Acceptability Questionnaire []
- Website Satisfaction Scale []
- Digital Health Acceptability Questionnaire []
Although satisfaction questionnaires are simple quantifiable measures, qualitative methods often elicit a deeper understanding of user experiences and acceptability at the feasibility stage [,]. Interviews and focus groups can be conducted after users have tested an intervention to gather perspectives regarding appropriateness of the intervention and delivery approach to inform iterative refinement. Qualitative inquiry is particularly valuable to probe the unique and nuanced aspects of technology acceptability in the field of disability and rehabilitation, such as social implications of observable assistive technologies across use contexts.
Importantly, intervention maturity and stage of development influence acceptability. For instance, early stages may involve low-fidelity intervention prototypes that users trial on a single occasion and must imagine the experience of living with in real-life circumstances. Conversely, high-fidelity prototypes in later stages allow users to trial the intervention in real-world environments, over longer periods, and in more diverse contexts. Acceptability should thus be viewed as a dynamic attribute that warrants consideration throughout evaluation.
Demand
Technology innovations in disability and rehabilitation are designed to address unmet needs by providing a new solution or service not available through conventional means. However, the likelihood that an intervention will be used to serve this function cannot be fully understood until it is in the hands of its target population. Based on interaction and experience with a prototype, demand evaluates the likelihood that a technology-based intervention will be used by intended users []. Demand is closely tied to acceptability, as users are more likely to engage with an intervention that is deemed suitable or satisfactory. By estimating future use, demand also has implications for the commercial viability of technology-based interventions.
A relatively straightforward indicator of demand is user engagement. Cavanaugh [] describes engagement as efforts by users to start and continue with an intervention. Engagement differs from adherence in that it implies an active and volitional choice to interact with an intervention in a meaningful way, rather than passive compliance with a prescribed plan. Therefore, it reflects the extent to which a technology-based intervention is likely to be used, particularly for those that involve free use in uncontrolled contexts, such as home-based exergames, virtual reality programs, and assistive devices. Evidence across various types of technology-based interventions shows that user engagement is an important predictor of future use and intervention success []. In digital health research, engagement has been conceptualized as a multidimensional construct involving observable behavior (ie, usage parameters) and subjective experience characterized by attention, interest, and affect []. Behavioral usage parameters relevant to gauging demand include the frequency, duration, and depth (ie, variety or intensity) of usage []. For interventions that use websites, mobile apps, gaming systems, or other digital devices, these metrics may be captured automatically through system logs. The User Engagement Scale measures engagement experience based on aesthetic appeal, focused attention, novelty, perceived usability, felt involvement, and endurability [].
Engagement should ideally be monitored longitudinally, as diminished use and abandonment are common in digital health and assistive technology interventions [,]. While engagement may be high initially, it often tapers over time as novelty fades. Considering this “law of attrition,” Eysenbach [] recommends analyzing determinants of engagement with eHealth programs and characteristics of subgroups that maintain and discontinue engagement []. Feedback from participants regarding perceived value of the intervention and interest in continuing its use in their routine practice or daily activities can help understand demand. Qualitative inquiry is valuable to obtain a nuanced understanding of engagement by addressing questions such as how, why, and when users choose to engage with the intervention, and to appreciate psychological or social experiences of interacting with the intervention. This can help drive intervention refinement to promote more meaningful and sustained use, ultimately increasing demand.
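The behavioral usage parameters and attrition analysis described above can be sketched as follows. The session logs, user identifiers, and week numbering are hypothetical, standing in for automatically captured system logs.

```python
# Hypothetical session logs: (user, study week, session minutes).
from collections import defaultdict
from statistics import mean

logs = [
    ("U1", 1, 20), ("U1", 1, 15), ("U1", 2, 10), ("U1", 3, 5),
    ("U2", 1, 30), ("U2", 2, 25),
    ("U3", 1, 12),
]

sessions_per_user = defaultdict(int)      # frequency of use
minutes_per_user = defaultdict(int)       # duration of use
active_users_by_week = defaultdict(set)   # basis for retention over time
for user, week, minutes in logs:
    sessions_per_user[user] += 1
    minutes_per_user[user] += minutes
    active_users_by_week[week].add(user)

n_users = len(sessions_per_user)
retention = {week: len(users) / n_users
             for week, users in sorted(active_users_by_week.items())}

print("mean sessions per user:", round(mean(sessions_per_user.values()), 2))
print("retention by week:", retention)  # tapering values reflect the "law of attrition"
```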
Usability
Rehabilitation and assistive technology interventions aim to provide a useful and usable tool to accomplish the goal of optimizing function and reducing disability. According to the International Organization for Standardization (ISO) 9241-11:2018, usability refers to how well a system, product, or service can be used by its target users in a specific context to achieve their goals with effectiveness, efficiency, and satisfaction []. Usability is a common area of evaluation for rehabilitation and assistive technologies with important implications for their acceptance, adoption, and effectiveness [,]. It is a key component of feasibility studies that is particularly important for iteratively refining intervention features [].
The ISO emphasizes the multidimensional nature of usability, highlighting effectiveness, efficiency, and satisfaction as key elements []. Similarly, Nielsen [] identifies 5 usability heuristics in his textbook Usability Engineering: (1) learnability, (2) efficiency, (3) memorability, (4) errors, and (5) satisfaction. Building on these conceptualizations, Quesenbery [] coined the 5E’s of usability for user experience design: effectiveness, efficiency, engagement, error tolerance, and ease of learning. The table below summarizes these usability attributes. While not exhaustive, these attributes are a useful starting point for evaluating usability.
| Framework | Usability attribute | Definition |
| ISO Standard 9241-11:2018 [] | Effectiveness | Accuracy and completeness with which users achieve intended goals or outcomes |
| | Efficiency | Resources used (eg, time, human effort, cost, and materials) relative to results achieved |
| | Satisfaction | Extent to which the user’s physical, cognitive, and emotional responses that result from the use of a system meet the user’s needs and expectations |
| Nielsen’s Usability Engineering [] | Learnability | How easily a technology can be learned such that the user can rapidly begin to use it to perform the necessary tasks |
| | Efficiency | How well the user can achieve a high level of productivity with minimum resource requirements (eg, time, money, and effort) |
| | Memorability | How easily the user can remember how to use the technology such that they can return to it easily, with minimal relearning, after a period of disuse |
| | Errors | The extent to which errors are minimized and catastrophic errors are avoided |
| | Satisfaction | How pleased the user is with their experience of the technology |
| Quesenbery’s 5E’s of Usability [] | Effectiveness | How completely and accurately the user achieves their goals using the technology |
| | Efficiency | The speed with which the user can complete work accurately using the technology |
| | Engaging | How pleasant, satisfying, or interesting it is for the user to interact with the technology |
| | Error tolerance | How well the technology prevents errors and helps the user recover from errors that occur |
| | Easy to learn | How well the product supports the user in initial orientation and deeper learning |
Design engineering practices provide various approaches for evaluating usability. According to Nielsen [], usability can be tested directly through behavioral measures (what people do) or indirectly assessed based on attitudinal beliefs of the user (what people think). Direct methods of usability testing consider how technology is actually used by measuring behavior or performance [,]. This can involve quantifiable performance metrics or qualitative methods such as participant observation and/or review of materials produced by the user []. Relevant performance measures depend on goals of the intervention and may include accuracy, task completion time, and errors (recoverable and unrecoverable) []. Think-aloud protocols, which involve users verbalizing their thoughts while interacting with technology, are a valuable strategy to identify usability problems with small samples. Finally, observational analysis techniques involve coding participant behavior while using the technology [].
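The direct performance measures named above can be summarized from trial records as in the sketch below. The tasks, timings, and error counts are hypothetical.

```python
# Hypothetical scripted usability trials:
# (task, completed, seconds, recoverable errors, unrecoverable errors).
from statistics import mean

trials = [
    ("set alarm", True, 42.0, 1, 0),
    ("set alarm", True, 35.5, 0, 0),
    ("set alarm", False, 90.0, 2, 1),
    ("log symptoms", True, 61.0, 0, 0),
]

completion_rate = sum(t[1] for t in trials) / len(trials)
mean_time_completed = mean(t[2] for t in trials if t[1])  # completed tasks only
total_errors = sum(t[3] + t[4] for t in trials)

print(f"completion {completion_rate:.0%}, "
      f"mean time {mean_time_completed:.1f} s, errors {total_errors}")
```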
Indirect or attitudinal methods of usability evaluation explore user perspectives regarding their experience interacting with technology, which can be done through questionnaires, interviews, focus groups, or other methods [,]. Some argue these measures of user attitudes reflect technology acceptability rather than usability per se []. Nevertheless, subjective experience questionnaires are the most common method of usability evaluation across assistive and rehabilitation technology intervention studies [,]. The list below presents usability questionnaires for technology-based interventions. Qualitative feedback through posttesting interviews or focus groups is recommended to gain a deeper understanding of why usability problems exist and how they should be addressed [,].
The most appropriate usability evaluation methods depend on project context and intervention maturity. Available options also depend on the context in which the intervention is studied, for example, under natural settings in real-world target environments, scripted use within a controlled laboratory environment, or only perceptions and expectations about hypothetical use []. Combining direct and indirect measurement often provides a more comprehensive understanding of usability by not only identifying apparent usability issues but also exploring why they exist and how they can be resolved []. Early usability studies might benefit most from think-aloud, observation, and focus group methods within scripted use contexts to gather rich data targeting specific issues with small sample sizes []. At the feasibility stage, real-world use is important, and usability evaluation might involve a combination of performance measures, observation, questionnaires, and interviews []. For larger and longer-term outcome studies, performance measures are likely most informative []. Readers are directed to Liu et al [] for a thorough discussion of usability measurement, including examples from rehabilitation and assistive technology interventions.
- System Usability Scale []
- Usefulness, Satisfaction, and Ease of Use Questionnaire []
- NASA Task Load Index []
- Telehealth Usability Questionnaire []
- mHealth Application Usability Questionnaire []
- Usability Scale for Assistive Technology-Wheeled Mobility []
- Usability Scale for Assistive Technology-Computer Access []
- Accessible Usability Scale []
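Of the questionnaires above, the System Usability Scale has a standard scoring rule: each of its 10 items is rated 1 to 5, odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is multiplied by 2.5 to yield a 0 to 100 score. A compact implementation follows; the sample responses are hypothetical.

```python
# System Usability Scale (SUS) scoring: 10 items rated 1-5, scaled to 0-100.
def sus_score(ratings):
    """ratings: list of 10 item responses, each 1-5, item 1 first."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected 10 ratings between 1 and 5")
    # Index 0, 2, ... are odd-numbered items; index 1, 3, ... are even-numbered.
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(ratings))
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # hypothetical responses → 82.5
```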
Adaptability
Technology-based interventions for disability and rehabilitation are characterized by diversity in user populations and use settings []. Device modifications or personalized intervention features may therefore be needed to accommodate individual users. Adaptability is the degree to which a technology-based intervention can be modified to meet the needs of diverse users and use contexts []. Evaluating adaptability can help define the scope of the target population, identify overly rigid features, and determine opportunities for increased flexibility to enable personalization.
Adaptability encompasses the frequency, extent, and reasons for intervention modifications, as well as any unique user needs that could not be accommodated through modifications. Adaptable interventions are expected to achieve similar outcomes despite changes in features or delivery format to accommodate individual needs []. Thus, it is important to assess the impact of modifications, including whether implementation processes, user experiences, and outcomes are consistent across modified versions []. Addressing adaptability also provides an opportunity to test the processes that enable personalized modifications, such as fit assessments, trial periods, technical support, and professional follow-up.
The importance of considering the adaptability of early prototypes is evident in a study from Pellichero et al [], which evaluated the usability and usefulness of a blind spot sensor system for powered wheelchairs []. Although customization of sensor mounting, feedback configuration (ie, sounds, lights, or vibration), and sensor parameters (detection distances) were available through an associated smartphone app, the study was conducted with a standardized set-up preventing customization. The authors subsequently postulated that insufficient customization may have limited participants’ use of the system during the 2-month home trial period and influenced perceived usability and usefulness. Customization of the sensor system according to individuals’ abilities, preferences, and environment was thus recommended for future research and implementation []. This example illustrates the interrelatedness of feasibility dimensions, highlighting how adaptability has important implications for evaluating other aspects of intervention feasibility (eg, usability and demand). It also demonstrates how allowing opportunities for customization and assessing subsequent modifications during early-stage evaluations can help optimize interventions to accommodate diverse user needs.
Practicality
The best of interventions will not be successful unless they are practical within user and organizational capacities. Practicality refers to how well an intervention can be used in its intended setting given resource constraints like space, equipment, personnel, and time [,]. Literature across an array of technology-based interventions consistently highlights practical factors as important barriers to clinical implementation [-]. Disability and rehabilitation interventions may be deployed in hospitals or care centers, clinics, homes, or other community settings that have variable resource capacities, making practicality an important consideration.
Physical factors related to practicality include space and equipment needs for intervention use and storage [,]. System interoperability and compatibility with existing equipment and infrastructure are also important; for example, an exergame deployed for home use should run on conventional recreational gaming systems and integrate successfully with commercial wearable heart rate monitors. Personnel required to facilitate intervention use should be assessed, including the level of training and expertise needed [,]. Time limitations are a well-documented barrier to implementation of rehabilitation technologies, encompassing the duration of training, set-up, use, and take-down [,].
Implementation
To understand whether an intervention will work, it is important to explore the process of how it is delivered and received []. The implementation dimension evaluates how well a technology-based intervention can be deployed and used as intended in target settings with varying levels of control. This is important given the diversity of contexts in which technology-based disability and rehabilitation interventions are used []. Examining implementation helps understand how well the intervention can be delivered and what supports are needed to enable successful delivery []. It can also inform the training and support required for users to learn and apply the technology appropriately. The context of use, whether a controlled laboratory environment, the home, or a community setting, is an important consideration for determining the conditions under which the intervention is likely to succeed and identifying features requiring adaptability [,,].
Process evaluations assessing implementation describe the quality or integrity with which an intervention is delivered by monitoring adherence and fidelity []. Adherence is traditionally defined as the extent to which the behavior of an intervention recipient corresponds with the prescribed program []. Whereas we describe engagement above as an active choice to interact with the intervention in a meaningful way, we use the term adherence in reference to whether participants use the intervention as prescribed in its delivery []. Adherence is a common measure among feasibility studies of rehabilitation interventions that can be quantified via self-report or observation as the proportion of intervention components (eg, appointments, training sessions, modules, and tasks) completed by participants relative to those prescribed []. These measures, which translate well to technology-based interventions serving therapeutic or service delivery functions and may be captured through automated system logs, can help infer the burden of intervention demands and appropriateness of procedures []. For assistive technologies, adherence is poorly defined [] and infrequently reported []. In reporting guidelines for assistive technology outcomes, Tuazon and Jutai [] identify frequency, duration, and continuance (or discontinuance) of use as measures of adherence. We argue that these usage metrics reflect adherence when the intervention is used as recommended, but may be more appropriately viewed as measures of engagement in conditions of free voluntary use. Given the apparent similarity and overlap between adherence and engagement, we encourage authors to thoughtfully consider and rationalize their application.
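Quantifying adherence as the proportion of prescribed components completed, as described above, is straightforward when prescribed components and completion logs are available. The session identifiers below are hypothetical.

```python
# Hypothetical prescribed intervention components and completion records
# (eg, drawn from automated system logs or self-report).
prescribed = {"week1_session1", "week1_session2", "week2_session1",
              "week2_session2", "week3_session1", "week3_session2"}
completed = {"week1_session1", "week1_session2", "week2_session1",
             "week3_session1"}

adherence = len(prescribed & completed) / len(prescribed)
missed = sorted(prescribed - completed)  # components to probe in debrief interviews

print(f"adherence {adherence:.0%}; missed: {missed}")
```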
Fidelity refers to whether the intervention is delivered as intended []. Self-report checklists and/or observational methods may be used to assess the degree to which intervention delivery and use align with key features of the intended design. This includes components or ingredients of the intervention and the level of competence demonstrated by intervention providers [], which can help determine the training required to achieve proficiency and fidelity. A benefit of technology-based interventions is the possible integration of screen, audio, or video recordings that capture user behavior, enabling assessment of fidelity or adherence. For interventions that require complex technology operation, user perceptions regarding the ease of achieving fidelity may be informative and can be assessed using the Agency for Healthcare Research and Quality Feasibility Assessment for Wide-Scale Implementation (Criterion 2) []. These approaches to fidelity assessment work well for technology-based interventions with therapeutic or service delivery functions, for which treatment theory can be readily defined and interventions can be manualized, allowing active ingredients to be monitored []. In a provocative discussion, Lenker et al [] highlight historical inadequacy of fidelity reporting in assistive technology research, which they attribute to poorly defined treatment protocols, temporal delays between device use and evaluation, and lack of available measurement tools to capture intervention attributes. To strengthen the quality of assistive technology research, they recommend greater inclusion of treatment theory, better specificity in intervention characterization, and routine fidelity monitoring [].
The distinction between intervention and study feasibility is particularly blurred within the implementation dimension. For example, the fidelity by which clinicians deliver a tele-rehabilitation intervention and the extent to which recipients adhere to the program will likely have implications for both the success of the intervention itself (ie, whether benefits are achieved by recipients), as well as the scientific integrity and outcome of research studies evaluating the intervention. A clear rationale for evaluating implementation should be specified based on development stage (eg, refining intervention or study procedures).
Integration
Organizational support and alignment with system-level priorities are frequently cited as facilitators to technology implementation in disability and rehabilitation practices [-]. The integration dimension considers the “fit” of a technology-based intervention in terms of how well it can be integrated into existing systems within its target settings []. A focus on system-level integration is important as it likely affects implementation quality and eventual technology adoption. Importantly, there may be multiple systems into which a technology-based intervention must integrate. For example, therapeutic exergames may be initially introduced by therapists within a clinical setting and subsequently integrated into individual or family routines at home for continued practice. Assessing integration across all relevant systems can help resolve incongruencies and proactively enact strategies that support technology transfer.
Several system-level factors have been reported in the literature that influence integration of technology-based interventions in disability or rehabilitation practice. Reviews identify a lack of intervention fit within organizational capacities and workflows as important barriers to technology adoption, which may include referral processes [], scheduling [,], service delivery models [], and length of therapy sessions [,]. Organizational culture and policy, particularly regarding technology in practice, is another important factor that can facilitate or hinder integration [,].
The Agency for Healthcare Research and Quality Implementation Feasibility Assessment evaluates integration factors by having service providers rate intervention fit with organizational capabilities (Criterion 1) and alignment with policies and incentives (Criterion 2) []. The Program Sustainability Assessment Tool can be used to assess organizational structures and processes that promote sustained integration []. From the perspective of clients and families, the ease of integrating a technology-based intervention in their home environment and daily routine is a priority [].
Impact
Performance expectancy, or perceived usefulness, is a leading determinant of acceptance and use of technology-based interventions among therapists, caregivers, and people with disabilities [,,]. The impact dimension evaluates the promise or potential that a technology-based intervention shows in eliciting positive change for its target population. This dimension has been renamed from “limited efficacy,” described by Bowen et al [], to recognize the diverse aims of technology-based interventions and the methods through which their benefits may be evaluated. Although the focus of feasibility research is process rather than outcomes, this stage is an opportunity to gather preliminary evidence on participant responses to the intervention [,,,].
Various research designs involving quantitative, qualitative, or mixed methods approaches may be used at the feasibility stage to explore potential impacts of technology-based interventions. Preliminary effects can be measured using descriptive observational studies, case reports or series, single-subject designs, interrupted time series designs, or one-group pretest-posttest designs []. Outcomes of interest for care recipients include symptoms, functional performance, participation, mobility, quality of life, and other psychosocial constructs []. The purpose of evaluating impact during the feasibility stage is not to definitively establish effectiveness but rather to test assessment procedures that accompany the intervention, evaluate appropriateness and sensitivity of outcome measures, and inform features of intervention design including duration and dosage []. Qualitative feedback is essential to accomplish these goals by exploring user perceptions regarding perceived benefits and the meaningfulness of outcome measures [].
For care providers, the usefulness of technology in supporting care is an important driver of adoption [], indicating that perceived job performance or quality of care are relevant measures of impact. For technologies designed to reduce therapist workload, such as autonomous robots and exoskeletons, impact may also be measured in terms of subjective task effort or of stress and burnout, which are common among care providers []. Alleviating caregiver burden is a target of some interventions, for which impact may be investigated using the Caregiver Assistive Technology Outcome Measure []. System-level impacts are also relevant to technology-based interventions and may include measures of resource use, service efficiency, and access to care [-].
Ethicality
Ethical concerns are barriers to adoption and commercialization across a variety of technology-based approaches in disability and rehabilitation [-]. Common ethical concerns include challenges related to privacy, security, equity, stigma, autonomy, and power [-]. There are calls for more proactive consideration of ethics during development, evaluation, implementation, and policymaking processes for technology-based interventions []. The ethicality dimension considers the degree to which a technology-based intervention aligns with values of biomedical and social ethics. Ethical evaluation during feasibility studies can help anticipate and address concerns that may affect eventual adoption or policy [].
Ethical concerns arise in the field of disability and rehabilitation when two or more biomedical or societal values conflict, for example, when protecting the safety of older adults through smart home monitoring technologies compromises their personal privacy []. Applied ethics frameworks outline considerations for identifying and navigating these conflicts. Principles of bioethics and technology ethics may be useful for identifying value conflicts among technology-based interventions. Bioethics, concerned with ethical issues related to biomedicine and health care, includes 4 primary principles: (1) autonomy (personal choice), (2) beneficence (provision of benefit), (3) nonmaleficence (avoidance of harm), and (4) justice (fairness and equity) []. Technology ethics addresses issues related to technological advancements; relevant concepts overlap with bioethics and include human welfare, ownership and property, privacy, freedom from bias, universal usability, trust, autonomy, informed consent, accountability, identity, calmness, and environmental sustainability [].
Based on these concepts, ethical domains relevant to the intervention can be compiled in consultation with users and other interest holders []. As the intervention is trialed during feasibility testing, conflicts across these domains can be monitored through discussion with users and reflective questioning about how the technology is used and any unexpected secondary effects []. A Socratic approach can be applied, posing morally relevant questions to highlight value issues related to health technologies []. The Model for Ethical Evaluation of Sociotechnical Arrangements (MEESTAR), designed specifically for assistive technologies, is another tool to guide dialog on ethical analysis []. It comprises 7 ethical dimensions (care, autonomy, safety, justice, privacy, participation, and self-conception) assessed across 4 levels (harmless; ethically sensitive with possibility of compensation; extreme ethical sensitivity requiring monitoring; opposition to use) and 3 perspectives (individual, organizational, and societal). Key questions are provided to support ethical discussion across the 7 dimensions.
An example of ethical evaluation during feasibility testing can be found in a study by Kernebeck et al of an app- and sensor-based assistive technology intervention for caregivers of people with dementia []. In addition to exploratory outcome measures, the study involved a process evaluation of implementation, mechanisms of impact, and intervention context, including ethical implications. The ethical evaluation was a MEESTAR-based workshop conducted after the intervention with the study team, interest holders, and advisory board members. The authors highlight how this process helped orient the design of their intervention toward the values of independence, privacy, and safety from multiple perspectives [].
Commercial Viability
Cost and affordability are priorities in decisions regarding rehabilitation and assistive technology interventions from the perspectives of clinicians, care recipients, families, health care organizations, and policymakers [,,,]. Yet, economic factors are infrequently and inadequately captured in research on these interventions []. The commercial viability dimension evaluates the degree to which a technology-based intervention shows promise of being economically viable for commercialization. While often reserved for later implementation stages, gathering data on economic considerations at the feasibility stage can help refine intervention features to better fit the market, improve cost-efficiency, and support commercialization.
Various economic evaluation methods are described in the complex intervention and health technology assessment literature. Cost-utility and cost-effectiveness analyses compare the costs and health outcomes of a new intervention against a comparator (eg, current standard of care), but use different measures of health outcome []. Cost-effectiveness analyses employ natural units of clinical effect, such as life-years or cases of disability, while cost-utility analyses, a subtype of cost-effectiveness analysis, use generic measures of health status reflecting both morbidity and mortality (eg, disability-adjusted life-years) []. Cost-benefit analyses express costs and benefits in financial terms and aim to capture a broader range of economic costs, including those not directly related to health outcomes (eg, cost of travel to care) []. Cost-utility analyses are favored in policy decision-making regarding health technologies because they allow comparison across conditions and intervention approaches [].
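To make the distinction between these analyses concrete, all of them rest on an incremental comparison of costs and effects against the comparator; the notation below is illustrative and not drawn from the cited sources:

```latex
% Incremental cost-effectiveness ratio (ICER); notation is illustrative
\[
\mathrm{ICER} \;=\; \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}
\]
```

In a cost-effectiveness analysis, the effect term is expressed in natural clinical units (eg, cases of disability avoided), whereas in a cost-utility analysis it is expressed in generic units such as disability-adjusted life-years averted, which is what enables comparison across conditions and intervention approaches.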
The comprehensive economic evaluations described above may not be appropriate at the feasibility stage. During early development and testing of health technologies, Sculpher et al [] recommend that economic evaluation focus on defining and understanding the costs of existing practice that the new technology may help alleviate, as well as on exploratory economic modeling of potential cost-effectiveness using data from existing studies or limited prospective data collection []. Relevant costs to consider for assistive and rehabilitation interventions include device production, assessment and training services, staffing, and servicing or maintenance. In terms of economic value, potential savings can be characterized through factors such as improved health outcomes, reduced costs of care, increased productivity, and decreased caregiver burden []. These exploratory approaches to economic feasibility can inform intervention refinement, for example, by redesigning device features using more cost-efficient methods or improving the efficiency of delivery services.
Discussion
Overview
Feasibility studies are of growing importance in intervention research, and frameworks are available to guide this work across health disciplines; however, none specifically addresses technology-based interventions, which are inherently complex and necessitate iterative development and evaluation approaches []. This viewpoint helps define the scope of feasibility studies for technology-based interventions and highlights considerations for this work in the field of disability and rehabilitation. Key contributions and practical uses are discussed below.
Contributions and Recommendations
Chiefly, this article is intended to support researchers and developers (in academic, clinical, or industry settings) by providing clarity on terminology in feasibility studies of technology-based interventions. Reviews of feasibility research across health disciplines have identified inconsistent use of terminology and variability in methods of evaluation [,]. Synthesizing concepts from feasibility literature and research on rehabilitation and assistive technology interventions, we outline various dimensions of feasibility specifically relevant to technology-based interventions and highlight potential criteria for their evaluation (). This can inform the design of comprehensive feasibility studies that reflect unique technological considerations and the complex factors that influence rehabilitation intervention outcomes. Integrating engineering practices and digital health concepts, we collated conceptual definitions and practical examples () to facilitate dialog between technical developers and clinical research teams. The compiled measurement instruments ( and ) can help tailor data collection approaches and tools for specific types of technology-based interventions. Ultimately, our contributions serve to harmonize terminology and evaluation methods, enable critical selection of feasibility dimensions and criteria, and facilitate interpretation and comparison across studies.
To our knowledge, this is the first attempt to consolidate information on feasibility studies specifically for technology-based interventions. While further refinement may occur in time, this work provides essential background material to support study design decisions. Those seeking to apply the information in this article should be aware of general best practices for feasibility studies [,,]. We recommend that teams carefully select and justify the most appropriate dimensions and evaluation criteria according to their project needs, relevant uncertainties, and field-specific considerations (). These decisions are influenced by various contextual factors, such as intervention maturity, risk profile, and setting; sample reflection questions are provided to support critical decision-making. Setting a priori success criteria for each feasibility dimension promotes rigor within and between studies for evidence building. Importantly, failure to achieve criteria may signal the need for intervention refinement (eg, software improvements to enhance usability). The process of feasibility evaluation is thus highly iterative, whereby multiple rounds of testing may be warranted to progressively refine intervention prototypes until comprehensive success criteria are met []. For instance, it may be appropriate to first establish user acceptability and demand with early-stage prototypes for interventions involving technology that is expensive or difficult to produce, while high-risk interventions with potential harms or discomforts necessitate an early focus on safety and technical performance.
This work is also relevant for improving alignment between research and policy on technology-based interventions. The feasibility dimensions we delineate align with and extend current practices in health technology assessment, the comprehensive review process that informs policy recommendations on the adoption of new technologies in health systems []. While health technology assessment approaches differ between jurisdictions, they share a focus on factors beyond clinical evidence of safety and effectiveness alone [,]. Health technology assessment processes around the world recognize the importance of organizational integration, economic and resource efficiency, social implications, equity, ethical ramifications, and user and public perspectives in decision-making [,]. These multidimensional factors and their application to technological interventions are not specifically articulated in the existing feasibility literature and thus may not be considered until implementation, after significant resources have already been invested in development and outcome evaluation. By considering these factors in the early stages of evaluation, potential issues can be proactively identified and addressed through refinement of intervention features and delivery approaches, which we emphasize as a key aim of feasibility studies for technology-based interventions. In doing so, innovations can be advanced with the greatest likelihood of attaining policy support for widespread adoption, to the benefit of people with disabilities or rehabilitation needs and the communities that support them.
Future Directions
The ideas in this viewpoint represent the experiences and opinions of a small authorship team. While we hope it offers a useful guide for research and development, it is a starting point for future work. Broader discussion and engagement among interest holders in this space are needed to establish consensus on feasibility considerations and dimensions in this field, and it is our hope that this article serves as a catalyst for such dialog. Future directions building on this work may include the development of decision aids (eg, checklists, workflows, and case examples) and reporting guidelines to ensure adequate consideration, justification, and operationalization of relevant feasibility issues during the intervention development process. Decision tools may also help guide the process of iteratively refining and advancing intervention prototypes. Methods of weighting or prioritizing feasibility dimensions could also be useful in the context of limited resources or competing priorities.
Acknowledgments
This article was written in partial fulfillment of academic requirements for a graduate-level individual reading and research course (Foundations in Rehabilitation and Assistive Technology) within the Rehabilitation Sciences Institute, Temerty Faculty of Medicine, University of Toronto. The authors wish to thank the other students from this course, Erica Dove and Chaitali Desai, for their support and feedback related to this work. The authors also thank the anonymous reviewers for their feedback, which has helped to improve this article.
Funding
No direct funding was obtained to support this work. JS received graduate scholarship funding support from the Temerty Faculty of Medicine (University of Toronto), Ontario Graduate Scholarship, and Canadian Institutes of Health Research.
Authors' Contributions
JS researched and drafted the manuscript, which was subsequently reviewed and edited in collaboration with ASH and RHW. All authors approved the final version of the manuscript prior to submission for publication.
Conflicts of Interest
None declared.
References
- Dahl T, Boulos M. Robots in health and social care: a complementary technology to home care and telehealthcare? Robotics. 2013;3(1):1-21. [CrossRef]
- Martin S, Kelly G, Kernohan WG, McCreight B, Nugent C. Smart home technologies for health and social care support. Cochrane Database Syst Rev. Oct 8, 2008;2008(4):CD006412. [CrossRef] [Medline]
- Blandford A, Wesson J, Amalberti R, AlHazme R, Allwihan R. Opportunities and challenges for telehealth within, and beyond, a pandemic. Lancet Glob Health. Nov 2020;8(11):e1364-e1365. [CrossRef]
- Wang RH, Kenyon LK, McGilton KS, et al. The time is now: a FASTER approach to generate research evidence for technology-based interventions in the field of disability and rehabilitation. Arch Phys Med Rehabil. Sep 2021;102(9):1848-1859. [CrossRef] [Medline]
- Bauer SM, Elsaesser LJ, Arthanat S. Assistive technology device classification based upon the World Health Organization’s, International Classification of Functioning, Disability and Health (ICF). Disabil Rehabil Assist Technol. 2011;6(3):243-259. [CrossRef] [Medline]
- Petticrew M. When are complex interventions “complex”? When are simple interventions “simple”? Eur J Public Health. Aug 2011;21(4):397-398. [CrossRef] [Medline]
- Skivington K, Matthews L, Simpson SA, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. Sep 30, 2021;374:n2061. [CrossRef] [Medline]
- Feasibility English meaning. Cambridge Dictionary. URL: https://dictionary.cambridge.org/dictionary/english/feasibility [Accessed 2026-03-14]
- Dictionary.com. URL: https://www.dictionary.com/browse/feasibility [Accessed 2025-02-02]
- Eldridge SM, Lancaster GA, Campbell MJ, et al. Defining feasibility and pilot studies in preparation for randomised controlled trials: development of a conceptual framework. PLoS ONE. 2016;11(3):e0150205. [CrossRef] [Medline]
- Gadke DL, Kratochwill TR, Gettinger M. Incorporating feasibility protocols in intervention research. J Sch Psychol. Feb 2021;84:1-18. [CrossRef] [Medline]
- Bowen DJ, Kreuter M, Spring B, et al. How we design feasibility studies. Am J Prev Med. May 2009;36(5):452-457. [CrossRef] [Medline]
- Arain M, Campbell MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. Jul 16, 2010;10(1):67. [CrossRef] [Medline]
- Whitehead AL, Sully BGO, Campbell MJ. Pilot and feasibility studies: is there a difference from each other and from a randomised controlled trial? Contemp Clin Trials. May 2014;38(1):130-133. [CrossRef] [Medline]
- Donald G. A brief summary of pilot and feasibility studies: Exploring terminology, aims, and methods. Eur J Integr Med. Dec 2018;24:65-70. [CrossRef]
- Viswanathan P, Wang RH, Sutcliffe A. Smart wheelchairs in assessment and training (SWAT): state of the field. AGE-WELL NCE; 2018. URL: https://agewell-nce.ca/wp-content/uploads/2019/01/SWAT-State-of-the-field_FINAL.pdf [Accessed 2026-03-25]
- Thabane L, Ma J, Chu R, et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. Jan 6, 2010;10(1):1. [CrossRef] [Medline]
- Tickle-Degnen L. Nuts and bolts of conducting feasibility studies. Am J Occup Ther. 2013;67(2):171-176. [CrossRef] [Medline]
- Orsmond GI, Cohn ES. The distinctive features of a feasibility study: objectives and guiding questions. OTJR (Thorofare N J). Jul 2015;35(3):169-177. [CrossRef] [Medline]
- Lawson DO, Mellor K, Eddy S, et al. Pilot and feasibility studies in rehabilitation research: a review and educational primer for the physiatrist researcher. Am J Phys Med Rehabil. Apr 1, 2022;101(4):372-383. [CrossRef] [Medline]
- Kho ME, Thabane L. Pilot and Feasibility Studies in Rehabilitation: Moving into the Next Decade. University of Toronto Press; 2020. [CrossRef]
- Dobkin BH. Progressive staging of pilot studies to improve phase III trials for motor interventions. Neurorehabil Neural Repair. 2009;23(3):197-206. [CrossRef] [Medline]
- Clayback D, Stanley R, Leahy J, et al. Standards for assistive technology funding: what are the right criteria? Knowledge Translation for Technology Transfer. 2014;(KT4TT). URL: https://ktdrr.org/ktlibrary/articles_pubs/Standards_for_Assistive_Technology_Funding.pdf [Accessed 2026-03-25]
- Aschbrenner KA, Kruse G, Gallo JJ, Plano Clark VL. Applying mixed methods to pilot feasibility studies to inform intervention trials. Pilot Feasibility Stud. 2022;8(1):1-13. [CrossRef]
- Liu L, Cruz AM, Rincon AMR. Technology acceptance, adoption, and usability: arriving at consistent terminologies and measurement approaches. In: Everyday Technologies in Healthcare. CRC Press; 2019:319-338. [CrossRef]
- Garrett C, Levack D, Rhodes R. Using technical performance measures. Presented at: 47th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit; Jul 31, 2011. [CrossRef]
- Alper S, Raharinirina S. Assistive technology for individuals with disabilities: a review and synthesis of the literature. J Spec Educ Technol. Mar 2006;21(2):47-64. [CrossRef]
- Fager SK, Burnfield JM. Patients’ experiences with technology during inpatient rehabilitation: opportunities to support independence and therapeutic engagement. Disability and Rehabilitation: Assistive Technology. Mar 2014;9(2):121-127. [CrossRef]
- Handbook of Reliability, Availability, Maintainability and Safety in Engineering Design. Springer; 2009. URL: http://link.springer.com/10.1007/978-1-84800-175-6 [CrossRef]
- Ferrari M, Harrison B, Rawashdeh O, et al. Clinical feasibility trial of a motion detection system for fall prevention in hospitalized older adult patients. Geriatr Nurs (Lond). May 2012;33(3):177-183. [CrossRef]
- Fang J, Xie Q, Yang GY, Xie L. Development and feasibility assessment of a rotational orthosis for walking with arm swing. Front Neurosci. 2017;11:32. [CrossRef] [Medline]
- Awad LN, Esquenazi A, Francisco GE, Nolan KJ, Jayaraman A. The ReWalk ReStore soft robotic exosuit: a multi-site clinical trial of the safety, reliability, and feasibility of exosuit-augmented post-stroke gait rehabilitation. J NeuroEngineering Rehabil. Dec 2020;17(1). [CrossRef]
- Marschollek M, Becker M, Bauer JM, et al. Multimodal activity monitoring for home rehabilitation of geriatric fracture patients – feasibility and acceptance of sensor systems in the GAL-NATARS study. Informatics for Health and Social Care. Sep 2014;39(3-4):262-271. [CrossRef]
- Quality glossary of terms, acronyms & definitions. ASQ. URL: https://asq.org/quality-resources/quality-glossary [Accessed 2023-04-11]
- Guidance for Industry E9 Statistical Principles for Clinical Trials. 1998. URL: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/e9-statistical-principles-clinical-trials [Accessed 2023-03-29]
- About design for safety. URL: https://www.tal.sg/wshc/topics/design-for-safety/about-design-for-safety [Accessed 2025-02-18]
- Roderick SN, Carignan CR. An approach to designing software safety systems for rehabilitation robots. Presented at: 9th International Conference on Rehabilitation Robotics, 2005 ICORR 2005; Jun 28, 2005. [CrossRef]
- Wang RH, Gorski SM, Holliday PJ, Fernie GR. Evaluation of a contact sensor skirt for an anti-collision power wheelchair for older adult nursing home residents with dementia: safety and mobility. Assist Technol. Sep 2011;23(3):117-134. [CrossRef]
- Friel KM, Gordon AM, Carmel JB, Kirton A, Gillick BT. Pediatric issues in neuromodulation: safety, tolerability and ethical considerations. In: Mapping and Modulating The Developing Brain. Academic Press; 2016:131-149. [CrossRef]
- Kögel J, Wolbring G. What it takes to be a pioneer: ability expectations from brain-computer interface users. Nanoethics. Dec 2020;14(3):227-239. [CrossRef]
- Biddiss E, Chau T. Upper-limb prosthetics: critical factors in device abandonment. Am J Phys Med Rehabil. Dec 2007;86(12):977-987. [CrossRef] [Medline]
- Pearson EJM. Comfort and its measurement--a literature review. Disabil Rehabil Assist Technol. Sep 2009;4(5):301-310. [CrossRef] [Medline]
- Crane BA, Holm MB, Hobson D, Cooper RA, Reed MP. Responsiveness of the TAWC tool for assessing wheelchair discomfort. Disability and Rehabilitation: Assistive Technology. Jan 2007;2(2):97-103. [CrossRef]
- Knight JF, Baber C. A tool to assess the comfort of wearable computers. Hum Factors. 2005;47(1):77-91. [CrossRef] [Medline]
- McKeon A, McCue M, Skidmore E, Schein M, Kulzer J. Ecological momentary assessment for rehabilitation of chronic illness and disability. Disabil Rehabil. Apr 2018;40(8):974-987. [CrossRef] [Medline]
- Keshav NU, Salisbury JP, Vahabzadeh A, Sahin NT, Power B. Social communication coaching smartglasses: well tolerated in a diverse sample of children and adults with autism. JMIR Mhealth Uhealth. Sep 21, 2017;5(9):e140. [CrossRef] [Medline]
- Ryan JL, Beal DS, Levac DE, Fehlings DL, Wright FV. Integrating transcranial direct current stimulation into an existing inpatient physiotherapy program to enhance motor learning in an adolescent with traumatic brain injury: a case report. Phys Occup Ther Pediatr. 2023;43(4):463-481. [CrossRef] [Medline]
- Garvey MA, Kaczynski KJ, Becker DA, Bartko JJ. Subjective reactions of children to single-pulse transcranial magnetic stimulation. J Child Neurol. Dec 2001;16(12):891-894. [CrossRef] [Medline]
- Nielsen J. Usability Engineering. Morgan Kaufmann; URL: https://books.google.ca/books?hl=en&lr=&id=95As2OF67f0C&oi=fnd&pg=PR9&dq=nielsen+usability+engineering&ots=3cGxDkesYs&sig=8d8t3IeZRw8AW2S_WBXrsBooIHk#v=onepage&q=nielsen%20usability%20engineering&f=false [Accessed 2026-03-14]
- Sekhon M, Cartwright M, Francis JJ. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework. BMC Health Serv Res. Jan 26, 2017;17(1):88. [CrossRef] [Medline]
- Momani AM, Jamous MM, Hilles SMS. Technology acceptance theories. International Journal of Cyber Behavior, Psychology and Learning. 2017;7(2):1-14. URL: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/IJCBPL.20170401 [CrossRef]
- Venkatesh V, Thong JYL, Xu X. Unified theory of acceptance and use of technology: a synthesis and the road ahead. Journal of the Association for Information Systems. 2016;17(5):328-376. URL: https://papers.ssrn.com/abstract=2800121 [Accessed 2026-03-31]
- Sekhon M, Cartwright M, Francis JJ. Development of a theory-informed questionnaire to assess the acceptability of healthcare interventions. BMC Health Serv Res. Dec 2022;22(1). [CrossRef]
- Attkisson CC, Zwick R. The client satisfaction questionnaire. Eval Program Plann. Jan 1982;5(3):233-237. [CrossRef]
- Weiner BJ, Lewis CC, Stanick C, et al. Psychometric assessment of three newly developed implementation outcome measures. Implementation Sci. Dec 2017;12(1). [CrossRef]
- Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. Sep 1, 1989;13(3):319-340. [CrossRef]
- Tao G, Charm G, Kabacińska K, Miller WC, Robillard JM. Evaluation tools for assistive technologies: a scoping review. Arch Phys Med Rehabil. Jun 2020;101(6):1025-1040. [CrossRef] [Medline]
- Hajesmaeel-Gohari S, Khordastan F, Fatehi F, Samzadeh H, Bahaadinbeigy K. The most used questionnaires for evaluating satisfaction, usability, acceptance, and quality outcomes of mobile health. BMC Med Inform Decis Mak. Jan 27, 2022;22(1):22. [CrossRef] [Medline]
- Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. Sep 1, 2003;27(3):425-478. [CrossRef]
- Demers L, Weiss-Lambrou R, Ska B. The Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0): an overview and recent progress. TAD. Jan 1, 2002;14(3):101-105. [CrossRef]
- Heerink M, Kröse B, Evers V, Wielinga B. Assessing acceptance of assistive social agent technology by older adults: the Almere model. Int J of Soc Robotics. Dec 2010;2(4):361-375. [CrossRef]
- Finkelstein SM, MacMahon K, Lindgren BR, et al. Development of a remote monitoring satisfaction survey and its use in a clinical trial with lung transplant recipients. J Telemed Telecare. Jan 2012;18(1):42-46. [CrossRef]
- Samitier CB, Guirao L, Costea M, Camós JM, Pleguezuelos E. The benefits of using a vacuum-assisted socket system to improve balance and gait in elderly transtibial amputees. Prosthetics & Orthotics International. 2016;40(1):83-88. [CrossRef]
- Cox RM, Alexander GC. Validation of the SADL Questionnaire. Ear Hear. Apr 2001;22(2):151-160. [CrossRef]
- Blumenthal J, Wilkinson A, Chignell M. Physiotherapists’ and physiotherapy students’ perspectives on the use of mobile or wearable technology in their practice. Physiother Can. 2018;70(3):251-261. [CrossRef] [Medline]
- Fitzgerald A, Huang S, Sposato K, Wang D, Claypool M, Agu E. The exergame enjoyment questionnaire (EEQ): an instrument for measuring exergame enjoyment. Presented at: Hawaii International Conference on System Sciences; Jan 7, 2020. [CrossRef]
- Gil-Gómez JA, Manzano-Hernández P, Albiol-Pérez S, Aula-Valero C, Gil-Gómez H, Lozano-Quilis JA. USEQ: a short questionnaire for satisfaction evaluation of virtual rehabilitation systems. Sensors (Basel). Jul 7, 2017;17(7):1589. [CrossRef] [Medline]
- Jennett C, Cox AL, Cairns P, et al. Measuring and defining the experience of immersion in games. Int J Hum Comput Stud. Sep 2008;66(9):641-661. [CrossRef]
- Chin JP, Diehl VA, Norman LK. Development of an instrument measuring user satisfaction of the human-computer interface. Presented at: Conference on Human Factors in Computing Systems - Proceedings; May 1, 1988:213-218; Washington, DC, United States. [CrossRef]
- Yip MP, Chang AM, Chan J, MacKenzie AE. Development of the Telemedicine Satisfaction Questionnaire to evaluate patient satisfaction with telemedicine: a preliminary study. J Telemed Telecare. 2003;9(1):46-50. [CrossRef] [Medline]
- Bakken S, Grullon-Figueroa L, Izquierdo R, et al. Development, validation, and use of English and Spanish versions of the telemedicine satisfaction and usefulness questionnaire. J Am Med Inform Assoc. 2006;13(6):660-667. [CrossRef] [Medline]
- Hirani SP, Rixon L, Beynon M, et al. Quantifying beliefs regarding telehealth: Development of the whole systems demonstrator service user technology acceptability questionnaire. J Telemed Telecare. May 2017;23(4):460-469. [CrossRef] [Medline]
- Bol N, van Weert JCM, de Haes H, et al. Using cognitive and affective illustrations to enhance older adults’ website satisfaction and recall of online cancer-related information. Health Commun. 2014;29(7):678-688. [CrossRef] [Medline]
- Haydon HM, Major T, Kelly JT, et al. Development and validation of the Digital Health Acceptability Questionnaire. J Telemed Telecare. Dec 2023;29(10_suppl):8S-15S. [CrossRef] [Medline]
- O’Cathain A, Hoddinott P, Lewin S, et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Trials. Dec 2015;16(S2):1-13. [CrossRef]
- Cavanagh K. Turn on, tune in and (don’t) drop out: engagement, adherence, attrition, and alliance with internet-based interventions. In: Oxford Guide to Low Intensity CBT Interventions. Oxford University Press; 2010:227-234. [CrossRef]
- Donkin L, Christensen H, Naismith SL, Neal B, Hickie IB, Glozier N. A systematic review of the impact of adherence on the effectiveness of e-therapies. J Med Internet Res. Aug 5, 2011;13(3):e52. [CrossRef] [Medline]
- Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med. Jun 2017;7(2):254-267. [CrossRef] [Medline]
- O’Brien HL, Cairns P, Hall M. A practical approach to measuring user engagement with the refined user engagement scale (UES) and new UES short form. Int J Hum Comput Stud. Apr 2018;112:28-39. [CrossRef]
- Petrie H, Carmien S, Lewis A. Assistive technology abandonment: research realities and potentials. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer Verlag; 2018:532-540. [CrossRef]
- Eysenbach G. The law of attrition. J Med Internet Res. Mar 31, 2005;7(1):e11. [CrossRef] [Medline]
- ISO 9241-11:2018 - Ergonomics of human-system interaction — Part 11: Usability: Definitions and concepts. URL: https://www.iso.org/standard/63500.html [Accessed 2023-04-10]
- Arthanat S, Bauer SM, Lenker JA, Nochajski SM, Wu YWB. Conceptualization and measurement of assistive technology usability. Disabil Rehabil Assist Technol. Jul 2007;2(4):235-248. [CrossRef] [Medline]
- Tuena C, Pedroli E, Trimarchi PD, et al. Usability issues of clinical and research applications of virtual reality in older people: a systematic review. Front Hum Neurosci. 2020;14:93. [CrossRef] [Medline]
- Nielsen J. Usability Engineering. Google Books. URL: https://books.google.ca/books?id=95As2OF67f0C [Accessed 2023-04-10]
- Quesenbery W. Balancing the 5Es: usability. Research Gate. URL: https://www.researchgate.net/publication/327937198_Balancing_the_5Es_Usability [Accessed 2023-04-10]
- Medina JLP, Acosta-Vargas P, Rybarczyk Y. A systematic review of usability and accessibility in tele-rehabilitation systems. In: Assistive and Rehabilitation Engineering. IntechOpen; Apr 15, 2019. [CrossRef]
- Simor FW, Brum MR, Schmidt JDE, Rieder R, De Marchi ACB. Usability evaluation methods for gesture-based games: a systematic review. JMIR Serious Games. Oct 4, 2016;4(2):e17. [CrossRef] [Medline]
- Rohrer C. When to use which user-experience research methods. Nielsen Norman Group; 2014. URL: https://www.xdstrategy.com/wp-content/uploads/2018/08/When-to-Use-Which-User-Experience-Research-Methods-2014-10-12-Print.pdf [Accessed 2026-03-25]
- Brooke J. SUS: a “quick and dirty” usability scale. In: Usability Evaluation In Industry. CRC Press; 1996:207-212. [CrossRef]
- Gao M, Kortum P, Oswald F. Psychometric evaluation of the USE (Usefulness, Satisfaction, and Ease of Use) questionnaire for reliability and validity. SAGE Publications; 2018:1414-1418. [CrossRef]
- Parmanto B, Lewis AN Jr, Graham KM, Bertolet MH. Development of the Telehealth Usability Questionnaire (TUQ). Int J Telerehab. 2016;8(1):3-10. [CrossRef]
- Zhou L, Bao J, Setiawan IMA, Saptono A, Parmanto B. The mHealth App Usability Questionnaire (MAUQ): development and validation study. JMIR Mhealth Uhealth. Apr 11, 2019;7(4):e11500. [CrossRef] [Medline]
- Arthanat S, Wu YWB, Bauer SM, Lenker JA, Nochajski SM. Development of the Usability Scale for assistive technology-wheeled mobility: a preliminary psychometric evaluation. TAD. Jan 1, 2009;21(3):79-95. [CrossRef]
- Gosselin LR, Arthanat S. Reliability and validity of the usability scale for assistive technology for computer access: a preliminary study using video-based evaluation. Assist Technol. Nov 2, 2021;33(6):350-356. [CrossRef]
- Accessible usability scale - test usability with assistive technology users. Fable. URL: https://makeitfable.com/accessible-usability-scale/ [Accessed 2023-04-14]
- Pellichero A, Best KL, Routhier F, Viswanathan P, Wang RH, Miller WC. Blind spot sensor systems for power wheelchairs: obstacle detection accuracy, cognitive task load, and perceived usefulness among older adults. Disabil Rehabil Assist Technol. Oct 3, 2023;18(7):1084-1092. [CrossRef]
- Glegg SMN, Levac DE. Barriers, facilitators and interventions to support virtual reality implementation in rehabilitation: a scoping review. PM R. Nov 2018;10(11):1237. [CrossRef]
- Zander V, Gustafsson C, Landerdahl Stridsberg S, Borg J. Implementation of welfare technology: a systematic review of barriers and facilitators. Disabil Rehabil Assist Technol. Aug 18, 2023;18(6):913-928. [CrossRef]
- Moorcroft A, Scarinci N, Meyer C. A systematic review of the barriers and facilitators to the provision and use of low-tech and unaided AAC systems for people with complex communication needs and their families. Disabil Rehabil Assist Technol. Oct 2019;14(7):710-731. [CrossRef] [Medline]
- Liu L, Miguel Cruz A, Rios Rincon A, Buttar V, Ranson Q, Goertzen D. What factors determine therapists’ acceptance of new technologies for rehabilitation – a study using the Unified Theory of Acceptance and Use of Technology (UTAUT). Disabil Rehabil. 2015;37(5):447-455. [CrossRef] [Medline]
- Moore GF, Audrey S, Barker M, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. Mar 19, 2015;350:h1258. [CrossRef] [Medline]
- Chakrabarti S. What’s in a name? Compliance, adherence and concordance in chronic psychiatric disorders. World J Psychiatry. Jun 22, 2014;4(2):30-36. [CrossRef] [Medline]
- Tuazon JR, Jahan A, Jutai JW. Understanding adherence to assistive devices among older adults: a conceptual review. Disabil Rehabil Assist Technol. Jul 4, 2019;14(5):424-433. [CrossRef]
- Tuazon JR, Jutai JW. Toward guidelines for reporting assistive technology device outcomes. Disabil Rehabil Assist Technol. Oct 2021;16(7):702-711. [CrossRef] [Medline]
- Fournier AK, Wasserman MR, Jones CF, et al. Developing AHRQ’s feasibility assessment criteria for wide-scale implementation of patient-centered outcomes research findings. J Gen Intern Med. Feb 2021;36(2):374-382. [CrossRef] [Medline]
- Van Stan JH, Whyte J, Duffy JR, et al. Rehabilitation treatment specification system: methodology to identify and describe unique targets and ingredients. Arch Phys Med Rehabil. Mar 2021;102(3):521-531. [CrossRef] [Medline]
- Lenker JA, Fuhrer MJ, Jutai JW, Demers L, Scherer MJ, DeRuyter F. Treatment theory, intervention specification, and treatment fidelity in assistive technology outcomes research. Assist Technol. 2010;22(3):129-138. [CrossRef] [Medline]
- Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program Sustainability Assessment Tool: a new instrument for public health programs. Prev Chronic Dis. Jan 23, 2014;11:130184. [CrossRef] [Medline]
- Vourganas I, Stankovic V, Stankovic L, Kerr A. Factors that contribute to the use of stroke self-rehabilitation technologies: a review. JMIR Biomed Eng. 2019;4(1):e13732. [CrossRef]
- Sriram V, Jenkinson C, Peters M. Informal carers’ experience of assistive technology use in dementia care at home: a systematic review. BMC Geriatr. Jun 14, 2019;19(1):160. [CrossRef] [Medline]
- Shin HR, Um SR, Yoon HJ, et al. Comprehensive senior technology acceptance model of daily living assistive technology for older adults with frailty: cross-sectional study. J Med Internet Res. Apr 10, 2023;25:e41935. [CrossRef] [Medline]
- Bridgeman PJ, Bridgeman MB, Barone J. Burnout syndrome among healthcare professionals. Am J Health Syst Pharm. Feb 1, 2018;75(3):147-152. [CrossRef] [Medline]
- Mortenson WB, Demers L, Fuhrer MJ, Jutai JW, Lenker J, DeRuyter F. Development and preliminary evaluation of the caregiver assistive technology outcome measure. J Rehabil Med. May 2015;47(5):412-418. [CrossRef] [Medline]
- Tenforde AS, Hefner JE, Kodish‐Wachs JE, Iaccarino MA, Paganoni S. Telehealth in physical medicine and rehabilitation: a narrative review. PM R. May 2017;9(5S). [CrossRef]
- Patel S, Park H, Bonato P, Chan L, Rodgers M. A review of wearable sensors and systems with application in rehabilitation. J NeuroEngineering Rehabil. Dec 2012;9(1). [CrossRef]
- Laut J, Porfiri M, Raghavan P. The present and future of robotic technology in rehabilitation. Curr Phys Med Rehabil Rep. Dec 2016;4(4):312-319. [CrossRef] [Medline]
- Sundgren S, Stolt M, Suhonen R. Ethical issues related to the use of gerontechnology in older people care: A scoping review. Nurs Ethics. Feb 2020;27(1):88-103. [CrossRef]
- Zwijsen SA, Niemeijer AR, Hertogh C. Ethics of using assistive technology in the care for community-dwelling elderly people: an overview of the literature. Aging Ment Health. May 2011;15(4):419-427. [CrossRef] [Medline]
- Ienca M, Kressig RW, Jotterand F, Elger B. Proactive ethical design for neuroengineering, assistive and rehabilitation technologies: the Cybathlon Lesson. J NeuroEngineering Rehabil. Dec 2017;14(1). [CrossRef]
- Wang RH, Tannou T, Bier N, Couture M, Aubry R. Proactive and ongoing analysis and management of ethical concerns in the development, evaluation, and implementation of smart homes for older adults with frailty. JMIR Aging. Mar 9, 2023;6(1):e41322. [CrossRef] [Medline]
- Beauchamp T, Childress J. Principles of biomedical ethics: marking its fortieth anniversary. Am J Bioeth. Nov 2019;19(11):9-12. [CrossRef] [Medline]
- Friedman B, Kahn PH Jr. Human values, ethics, and design. In: The Human-Computer Interaction Handbook. CRC Press; 2007:1267-1292. [CrossRef]
- Hofmann B, Droste S, Oortwijn W, Cleemput I, Sacchini D. Harmonization of ethics in health technology assessment: a revision of the Socratic approach. Int J Technol Assess Health Care. Jan 2014;30(1):3-9. [CrossRef] [Medline]
- Ethische Fragen im Bereich altersgerechter Assistenzsysteme [Ethical questions in the field of age-appropriate assistive systems]. Miteinander durch Innovation [Web page in German]. URL: https://www.interaktive-technologien.de/service/publikationen/ethische-fragen-im-bereich-altersgerechter-assistenzsysteme [Accessed 2023-04-13]
- Kernebeck S, Holle D, Pogscheba P, et al. A tablet app- and sensor-based assistive technology intervention for informal caregivers to manage the challenging behavior of people with dementia (the insideDEM Study): protocol for a feasibility study. JMIR Res Protoc. Feb 26, 2019;8(2):e11630. [CrossRef] [Medline]
- O’Donnell JC, Pham SV, Pashos CL, Miller DW, Smith MD. Health technology assessment: lessons learned from around the world—an overview. Value Health. Jun 2009;12 Suppl 2:S1-S5. [CrossRef] [Medline]
- Mathes T, Jacobs E, Morfeld JC, Pieper D. Methods of international health technology assessment agencies for economic evaluations--a comparative analysis. BMC Health Serv Res. Sep 30, 2013;13(1):371. [CrossRef] [Medline]
- Albala SA, Kasteng F, Eide AH, Kattel R. Scoping review of economic evaluations of assistive technology globally. Assist Technol. Dec 1, 2021;33(sup1):50-67. [CrossRef]
- Turner HC, Archer RA, Downey LE, et al. An introduction to the main types of economic evaluations used for informing priority setting and resource allocation in healthcare: key features, uses, and limitations. Front Public Health. 2021;9(1236):722927. [CrossRef] [Medline]
- Sculpher M, Drummond M, Buxton M. The iterative use of economic evaluation as part of the process of health technology assessment. J Health Serv Res Policy. Jan 1997;2(1):26-30. [CrossRef] [Medline]
- Draborg E, Gyrd-Hansen D, Poulsen PB, Horder M. International comparison of the definition and the practical application of health technology assessment. Int J Technol Assess Health Care. 2005;21(1):89-95. [CrossRef] [Medline]
Abbreviations
| EMA: ecological momentary assessment |
| FASTER: Framework for Accelerated and Systematic Technology-based intervention development and Evaluation Research |
| ISO: International Organization for Standardization |
| MRC: Medical Research Council |
| RCT: randomized controlled trial |
| TAM: Technology Acceptance Model |
| tDCS: transcranial direct current stimulation |
| UTAUT: Unified Theory of Acceptance and Use of Technology |
Edited by Javad Sarvestan; submitted 13.Jun.2025; peer-reviewed by Holger Heppner, Rachel Stockley; final revised version received 18.Feb.2026; accepted 19.Feb.2026; published 14.Apr.2026.
Copyright © Josh Shore, Amy S Hwang, Rosalie H Wang. Originally published in JMIR Rehabilitation and Assistive Technology (https://rehab.jmir.org), 14.Apr.2026.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Rehabilitation and Assistive Technology, is properly cited. The complete bibliographic information, a link to the original publication on https://rehab.jmir.org/, as well as this copyright and license information must be included.

