Published in Vol 8, No 4 (2021): Oct-Dec

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/29610.
Adaptability of Assistive Mobility Devices and the Role of the Internet of Medical Things: Comprehensive Review


Review

1Department of Electrical, Electronic and Computer Engineering, Central University of Technology, Bloemfontein, South Africa

2Council for Scientific and Industrial Research, Pretoria, South Africa

*these authors contributed equally

Corresponding Author:

Elisha Didam Markus, BSc, MSc, PhD

Department of Electrical, Electronic and Computer Engineering

Central University of Technology

20 President Brand St, Bloemfontein Central

Bloemfontein, 9301

South Africa

Phone: 27 744062563

Email: emarkus@cut.ac.za


Background: With the projected upsurge in the percentage of people with some form of disability, there has been a significant increase in the need for assistive mobility devices. However, for mobility aids to be effective, such devices should be adapted to the user’s needs. This can be achieved by improving the confidence of the acquired information (the interaction between the user, the environment, and the device) in line with design specifications. Therefore, there is a need for a literature review on the adaptability of assistive mobility devices.

Objective: In this study, we aim to review the adaptability of assistive mobility devices and the role of the internet of medical things in terms of the acquired information for assistive mobility devices. We review internet-enabled assistive mobility technologies and non–internet of things (IoT) assistive mobility devices. The reviewed technologies will provide awareness of the current state of adaptive mobility technology and serve as a reference source of information for health care professionals and researchers.

Methods: We performed a literature review search on the following databases of academic references and journals: Google Scholar, ScienceDirect, Institute of Electrical and Electronics Engineers, Springer, and websites of assistive mobility and foundations presenting studies on assistive mobility found through a generic Google search (including the World Health Organization website). The following keywords were used: assistive mobility OR assistive robots, assistive mobility devices, internet-enabled assistive mobility technologies, IoT Framework OR IoT Architecture AND for Healthcare, assisted navigation OR autonomous navigation, mobility AND aids OR devices, adaptability of assistive technology, adaptive mobility devices, pattern recognition, autonomous navigational systems, human-robot interfaces, motor rehabilitation devices, perception, and ambient assisted living.

Results: We identified 13,286 results (excluding titles that were not relevant to this study). Then, through a narrative review, we selected 189 potential studies (189/13,286, 1.42%) from the existing literature on the adaptability of assistive mobility devices and IoT frameworks for assistive mobility and conducted a critical analysis. Of the 189 potential studies, 82 (43.4%) were selected for analysis after meeting the inclusion criteria. On the basis of the type of technologies presented in the reviewed articles, we proposed a categorization of the adaptability of smart assistive mobility devices in terms of their interaction with the user (user system interface), perception techniques, and communication and sensing frameworks.

Conclusions: We discussed notable limitations of the reviewed studies. The findings revealed that an improvement in the adaptation of assistive mobility systems would require a reduction in training time and avoidance of cognitive overload. Furthermore, sensor fusion and classification accuracy are critical for achieving real-world testing requirements. Finally, the trade-off between cost and performance should be considered in the commercialization of these devices.

JMIR Rehabil Assist Technol 2021;8(4):e29610

doi:10.2196/29610

Keywords



The Internet of Things

Internet technology has experienced remarkable progress since its early stages. It has become a vital transmission framework aiming to connect anyone and anything at any time to any service [1]. The basic idea of the internet of things (IoT) is to allow an autonomous and secure connection and exchange of data between real-world devices and applications [2]. IoT has become a crucial factor in next-generation technology and the whole business spectrum. It is the seamless interconnection of uniquely identifiable smart objects, sensors, and informatics systems within today’s internet infrastructure with extended benefits. Typically, benefits include the advanced interconnectivity of these devices, systems, and services that go beyond machine-to-machine scenarios [3]. The impact of IoT has led to its application in several fields for enhancing network operation and the user’s quality of experience [1]. These fields include transportation, health care, industrial automation, and public safety management [4].

Smart Health Care and Assistive Mobility

Health care is an attractive application area for IoT [5]. IoT has the potential to give rise to many medical apps, such as remote health monitoring and control, fitness programs, chronic disease management, and elderly care [3]. For instance, with a monitoring app, the patient can transmit daily or weekly blood pressure readings. This enables their physician to detect a problem and intervene earlier. Smart health care can be referred to as an organic whole of conventional mobile devices used with wearable medical devices, assistive mobility devices, and IoT gadgets (such as implantable or ingestible sensors). This can also be referred to as the internet of medical things (IoMT). This organic whole enables continuous patient monitoring and treatment, even when patients are at their homes. Examples of these assistive mobility devices are pressure monitors, glucometers, smartwatches, smart walkers, smart wheelchairs, smart contact lenses, and way finders [6].

With an increase in the percentage of people with some form of disability [7-10], assistive mobility has become an important aspect of research and has gained a lot of attention from researchers in recent years. Mobility has to do with an individual’s ability to move his or her body within an environment and the ability to manipulate objects. This ability can be hampered by impaired body functions or structures and limit the individual’s functioning, independence, and overall well-being [11]. Assistive mobility is a broad term used to refer to the use of aid (of any kind) to improve the mobility of an impaired individual.

Technology has been a tool used by researchers and companies to address the limitations in mobility caused by some form of impairment. For this reason, literature reviews and surveys have been conducted on assistive technologies for individuals with some form of disability. Literature reviews have been conducted on specific assistive mobility technologies (such as smart wheelchairs, scooters [12,13], and smart canes [14]), on gait rehabilitation devices (such as smart walkers, lower-limb exoskeletons, and smart crutches) [15-19], and on how these technologies have addressed the mobility limitations of impaired individuals. However, a review of all the elements needed to adapt assistive mobility devices to the user, in terms of the information used, requires more attention.

Related literature review papers have paid attention to specific elements needed in the adaptability of mobility devices, such as the survey of alternative input and feedback methods, including haptic [20], visual, and auditory [21,22] methods, as sensory replacement and sensory augmentation for certain sensory impairments and the survey of computer vision (CV) and machine learning techniques [23,24] for autonomous driving. More closely related surveys [25] approached the categorization of assistive technology based on users’ needs but concentrated on the cross-application of CV for categorization. An older review in 2012 [11] focused on the seamless integration of the capabilities of the user and the assistive technology for mobility. These related reviews highlighted the adaptability of assistive technologies as crucial in the technological advancement of mobility devices. However, we believe that an approach to the adaptability of assistive mobility devices in terms of information used has not been considered.

The objective of this study is to primarily focus on a literature review of the adaptability of assistive mobility devices and the role of IoMT in terms of the acquired information for assistive mobility devices. Internet-enabled assistive mobility technologies and non-IoT assistive mobility devices will be reviewed. The technologies reviewed will provide insight into some important themes and serve as a source and reference for information on adaptive assistive mobility technology to health care professionals and researchers. More specifically, we aim to contribute to the following:

  • Identifying the major areas crucial for the adaptability of internet-enabled assistive mobility technologies (such as smart wheelchairs, smart walkers, smart canes, and scooters) and other non-IoT assistive mobility devices (such as regular walkers, wheelchairs, canes, crutches, orthoses, and prostheses) to their intended users
  • Categorization of the adaptability of assistive mobility devices in terms of the acquired information into three major areas: user system interfaces (USIs), perception and sensor fusion techniques, and IoMT frameworks
  • Highlighting the role that IoMT plays in the adaptability of assistive mobility devices

We selected a list of studies and references to review the adaptability of assistive mobility devices and IoT frameworks for assistive mobility to be included in the literature search. The data sources used to search for the items to be included in this review were the following databases of academic references: Google Scholar (including ResearchGate), ScienceDirect, Institute of Electrical and Electronics Engineers, Springer, and websites of assistive mobility and foundations presenting studies on assistive mobility found through a generic Google search (including the World Health Organization website).

The search criteria included the following keywords and combinations thereof: assistive mobility OR assistive robots, assistive mobility devices, internet-enabled assistive mobility technologies, IoT Framework OR IoT Architecture AND for Healthcare, assisted navigation OR autonomous navigation, mobility AND aids OR devices, adaptability of assistive technology, adaptive mobility devices, pattern recognition, autonomous navigational systems, human-robot interfaces, motor rehabilitation devices, perception, and ambient assisted living.

As these combinations of data sources and keywords returned a vast number of results, we selected the following inclusion criteria to identify the most relevant sources: (1) language should be English, (2) date range should be in the past 12 years (2008-2020)—most articles were published within the past 5 years to reflect the state-of-the-art (since 2015), and older references were made to technologies that substantially shaped the future direction of assistive mobility devices—and (3) its relevance should be in internet-enabled assistive mobility technologies or non-IoT assistive mobility devices.

The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) criteria were applied [26]. The screening of titles and abstracts was performed by DAO and EDM and reviewed by DAO, EDM, and AMAM. Full texts were reviewed in a second screening.


Overview

After excluding results with titles that were not relevant to this study, the literature search identified 13,286 abstracts, of which 189 (1.42%) potential studies were selected for a detailed full text review.

We used the following exclusion criteria to identify the most relevant sources and reduce the number of literature search results: (1) no relevance to internet-enabled assistive mobility technologies or non-IoT assistive mobility devices in terms of the acquired information, (2) full text not available, (3) no report on promises for user adaptability as a result of simulation testing or using the technology, (4) no description of the technology, and (5) no additional contribution to the review findings compared with the previously reviewed articles.

Of the 189 potential studies, 82 (43.4%) remained for analysis after meeting the inclusion criteria. Some studies contributed to more than one section in this review (Figure 1).

To perform a literature review based on the type of technologies presented in the reviewed articles, we proposed a categorization of the adaptability of smart assistive mobility devices in terms of their interaction with the user (USI), perception techniques, and communication and sensing frameworks.

In recent years, advances in technology have helped to improve the quality and efficiency of assistive mobility devices. The use of traditional assistive mobility devices by users with some form of cognitive, sensory, or intellectual impairment requires the help of medical personnel or a caregiver for navigation assistance with difficult daily maneuvering tasks. To accommodate users who find operating standard mobility devices difficult or impossible, several researchers have used technologies originally developed for mobile robots to create smart mobility devices [27], such as smart wheelchairs and smart ambulatory devices. These assistive mobility devices are made smart by attaching computers, actuators, and sensor subsystems to the traditional assistive mobility device to provide easy maneuvering, system localization, object detection, and other sensory, cognitive, and health monitoring functions [12,28-30].

Figure 1. Flow diagram of search results. IoT: internet of things.

USI (Input and Output Methods)

Overview

With the advent of smart assistive mobility devices, some assistive devices have become too complex to use. In addition, improper characterization of the target users has resulted in numerous assistive mobility projects failing to transition to real-world use [31]. For this reason, the adaptability of assistive mobility devices is very important. Some users of assistive mobility devices have comorbidities, such as sensory impairment for users with spinal cord injury (SCI) or mental health challenges because of aging or depression. These impairments need to be taken into account in the design of efficient assistive mobility devices. Assistive mobility devices should be designed to continually evaluate and correct their actions based on their perception of the needs of the user [32].

The mobility impairment of patients can be largely classified into 2 functional groups. The first group includes individuals with a total loss of the ability to move by themselves and a high risk of confinement in bed; consequently, they suffer the effects of prolonged immobility. Examples are patients with complete SCI, advanced neurodegenerative pathologies, severe lower-limb osteoarthritis, and fractures of the spine or lower-limb bones. The suitable kind of assistive mobility technology for this group is the alternative device, such as wheelchairs and autonomous vehicles (AVs). The second group includes individuals with partial loss of mobility, presenting different levels of residual motor capacity that can be augmented by assistive mobility devices. The suitable type of assistive device for this group is the augmentation (rehabilitation) device, such as wearable orthoses and prostheses or external devices (canes, crutches, and walkers) [28,33].

Notwithstanding the functional group of mobility-impaired patients, USIs are a crucial element in the adaptability of assistive mobility devices. A USI has to do with the acquisition of information from the user, the interpretation of this acquired information, and the available feedback methods that can be understood by its intended users.

USIs for assistive mobility devices are categorized based on the type of sensors and actuators used for the acquisition of the user’s information. These include CV, brain-computer interface (BCI), and voice, touch, and haptic feedback [12]. The USI technologies presented below are categorized as follows: BCI, CV interface (CVI), and auditory and haptic interface.

BCI System

BCI generally refers to a system that measures and uses signals produced by the central nervous system. This interface enables useful functions for people with disabilities caused by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or SCI [34]. The basic components of the BCI are signal acquisition, signal processing, and the effector or output device [35]. Signal acquisition can be invasive or noninvasive [35]. Over the past decade, many educative literature reviews and surveys have been conducted and documented by researchers on the definition, mode of operation, classifications, functionality, and applications of BCI [34-37].

The adaptability of assistive mobility devices for users with neuromuscular disorders has led to the adoption of BCI as a suitable means of user-machine communication for simple mobility tasks. BCI offers limited navigation control capabilities to assistive mobility devices. To improve the navigational abilities offered by BCI, models proposed by researchers integrate BCI with other USIs and machine learning tools. For example, Rebsamen et al [38] and Iturrate et al [74] proposed P300-based BCI wheelchairs for the execution of commands for a set of predefined locations. Some auxiliary sensors were also integrated for collision avoidance during navigation. Long et al [39] proposed a hybrid BCI system comprising a motor imagery (MI)-based mu rhythm and the P300 potential. This model was designed for the directional and speed control of a brain-actuated simulated wheelchair or a real wheelchair. Kim et al [40] proposed a prototype that addressed a user’s loss of vision of their environment. The prototype uses the steady-state somatosensory evoked potential (SSSEP) paradigm to control a wheelchair by using specific frequencies and vibrations of different body parts to elicit brain responses. They also recommended the use of an auxiliary autonomous navigation system to improve performance. An asynchronous MI-based BCI control protocol was proposed by Carlson and del R Millan [41] to improve navigational control with the help of 10 sonar sensors and 2 webcams.
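For readers unfamiliar with how such BCI commands are derived, the following is a minimal sketch of a P300-style target versus nontarget classifier trained on epoched EEG features with linear discriminant analysis. The synthetic data, channel count, and window sizes are illustrative assumptions and do not reproduce the stepwise linear discriminant analysis, support vector machine, or Gaussian pipelines used in the cited prototypes.

```python
# Minimal sketch: classifying "target vs nontarget" EEG epochs for a P300-style
# BCI command selector. Synthetic data stands in for real recordings; the cited
# prototypes used their own classifiers on recorded EEG.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 400, 8, 120   # assumed epoch layout

# Synthetic epochs: "target" epochs get a small late positive deflection (~P300)
X = rng.normal(0, 1.0, size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)           # 1 = attended (target) stimulus
X[y == 1, :, 70:90] += 0.8                      # injected post-stimulus positivity

# Feature extraction: average amplitude in short time windows per channel
windows = np.split(X, 6, axis=2)                # six 20-sample windows
features = np.concatenate([w.mean(axis=2) for w in windows], axis=1)

# Linear discriminant analysis, commonly used for P300 detection
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

In a deployed system, a detected target stimulus would then be mapped to one of the predefined wheelchair destinations or movement commands described above.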

Furthermore, a teleoperation control for a robotic exoskeleton system based on the steady-state visual evoked potentials (SSVEPs) BCI and visual feedback was proposed by Qiu et al [42]. A camera was used to capture video for visual feedback, and a local adaptive fuzzy controller was used to drive the exoskeleton to track the intended trajectories in the human operator’s mind. The controller was also used to provide, in a convenient way, dynamic compensation with minimal knowledge of the dynamic parameters of the exoskeleton robot.

Auditory and Haptic Interface

Individuals with mobility impairments who also have visual, hearing, or tactile disabilities require the use of an alternative sensory ability for effective communication with assistive mobility devices. Auditory interfaces are designed to take advantage of hearing ability as a substitute for visual or tactile impairment. On the other hand, haptic interfaces are designed to take advantage of the users’ tactile ability as a substitute for visual, auditory, or motor impairment [43]. An extensive review has been conducted on haptic assistive technology as a means of communication for individuals with some form of sensory impairment, such as visually and auditorily impaired individuals [20,21,31,43]. Parker et al [22] also reviewed the positive effect of visual and auditory feedback on the motor skills of poststroke patients during gait rehabilitation. This subtopic presents recent auditory and haptic interface technologies for individuals with mobility impairments.

Haptic technology has been a beneficial USI for certain impaired users. It has found application in many areas for the monitoring of users’ progress and for navigational assistance. It has been successfully implemented in the design of exoskeletons (such as orthoses and wearable devices for grasping and assisted movement), smart walkers, smart crutches, and smart wheelchairs. Like haptic technology, auditory technology is also used as an alternative navigational control for individuals with mobility impairment and as a navigational guide or feedback for patients with visual impairments. Many researchers have integrated haptic or auditory technology for navigational control, navigation assistance, or feedback in assistive mobility devices.

Wearable devices such as the Jet Propulsion Laboratory (JPL) BioSleeve [44,45] and the wireless tongue drive system to smartphone (iPhone) electric-powered wheelchair interface (TDS-iPhone-PWC) [46] were designed using haptic technology for navigational control, whereas the MyoSuit [47] was designed for aided mobility. The JPL BioSleeve is a wearable, hands-free gesture recognition interface that decodes as many as 20 discrete hand and finger gestures and can estimate the continuous pose of the arm. It was designed using surface electromyography (EMG) sensors, an inertial measurement unit (IMU), and embedded software. The EMG and IMU acquire gesture and pose signals, whereas the embedded software classifies the signals and maps the result to commands. The wireless TDS-iPhone-PWC uses a tongue drive system (TDS) comprising a wearable TDS headset, a magnetic tongue barbell, a control unit, and magnetic sensors. The prototype wirelessly sends up to six distinct control commands to an iPhone for the navigation of a PWC after calibration training using a PC. The MyoSuit is a lightweight, lower-limb, soft, wearable robot (exoskeleton) for rehabilitation training that allows active contributions from users with residual mobility. It was designed to estimate interlimb angles and trunk postures using a five-segment body model derived from IMU data, and it determines which model is suitable for the user.
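As an illustration of the gesture-decoding step described above, the following sketch classifies windowed EMG and IMU features with a multiclass support vector machine and maps the predicted gesture to a device command. The synthetic features, channel counts, and command mapping are assumptions for illustration, not the JPL BioSleeve implementation.

```python
# Simplified sketch of EMG + IMU gesture classification with a multiclass SVM,
# in the spirit of wearable gesture interfaces; data and features are assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_windows, n_emg, n_imu = 600, 16, 6            # assumed channel counts
n_gestures = 5

# Synthetic per-window features: EMG mean absolute value + IMU mean per axis
labels = rng.integers(0, n_gestures, size=n_windows)
emg_mav = rng.random((n_windows, n_emg)) + 0.3 * labels[:, None]
imu_mean = rng.normal(0, 1, (n_windows, n_imu)) + 0.2 * labels[:, None]
features = np.hstack([emg_mav, imu_mean])

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25,
                                          random_state=0, stratify=labels)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")

# A recognized gesture would then be mapped to a device command (hypothetical map)
commands = {0: "stop", 1: "forward", 2: "left", 3: "right", 4: "speed_up"}
print(commands[int(model.predict(X_te[:1])[0])])
```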

Other examples of haptic-based technology for adaptive mobility include the smart cane [48], the intelligent control smart walker [49], and the assistive robotic transport for adults (ARTA) powered wheelchair platform with learned shared control [50]. The smart cane was designed using a force sensor for the measurement of the exerted weight and an IMU for pose estimation. The intelligent control smart walker was designed to use a force sensor to control acceleration. The ARTA powered wheelchair platform was designed to regulate the level of assistance between the user and the robot by matching the location and amount of offered assistance on different trajectories.

Some recent technologies integrate multiple technologies for the USI in the process of adapting mobility devices to a desired group of disabilities. An example is the electronic mobility cane (EMC) [51], which was designed using multiple sensors to construct a logical map of the surrounding environment and feed back priority information to the user without causing any information overload. Another example is the EyeCane [52], which is an electronic travel aid or electronic travel support that aims to increase the perception of the environment using multiple sensors for distance estimation, navigation, obstacle detection, and feedback to the user. The last example is the multiple controlled interfaces smart wheelchair [53], which was designed to accommodate a variety of impaired individuals. It is a prototype wheelchair with multiple control options (voice, gesture, and joystick input). Another recently explored area is CV-to-sound technology, which is further discussed in the following section.

CVI System

As humans, we perceive the 3D structure of the world around us with apparent ease [54]. The ambition for computers to see and understand the world just as humans do gave birth to the research field of CV. CV is a field of study that seeks to develop mathematical techniques that enable computers to interpret and understand the visual world (images and videos) accurately in the same way humans do. CV starts with the acquisition of data or capturing of information, which is done with the help of vision and depth (3D ranging) sensors, such as image-based sensors (mono and stereo or depth cameras), laser-based depth sensors (light detection and ranging, laser scanners, and infrared light), sound-based depth sensors (sound navigation and ranging and ultrasonic), and radio detection and ranging (radar) sensors [55].
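As a concrete illustration of depth acquisition with image-based sensors, the following sketch computes a disparity map from a rectified stereo pair with OpenCV and converts it to metric depth. The file names, focal length, and baseline are placeholder values, not parameters of any reviewed device.

```python
# Minimal sketch: disparity and depth from a rectified stereo pair with OpenCV.
# File names, focal length, and baseline are placeholders for illustration only.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder images
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters would be tuned per camera rig
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

focal_px, baseline_m = 700.0, 0.12          # assumed calibration values
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d

print("median scene depth (m):", float(np.median(depth_m[valid])))
```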

There are many applications of CV [56-60], and one such application is CV USI for adaptive assistive mobility devices. An example is the visual servoing-controlled wheelchair proposed by Pasteau et al [61]. The proposed smart wheelchair uses 3 cameras for autonomous corridor following and doorway passing. Another example is the autonomous scooter navigational system proposed by Mulky et al [13] to assist people with independent transportation challenges and recognition of the fine-grained world around them. This was achieved using a long-range eye-safe laser (up to 60 m) and a stereo vision camera. Finally, the user-adaptive control intelligent walker proposed by Chalvatzaki et al [62] used CVI technology (laser range finder) to estimate the human state and classify a patient’s mobility status.

CVI mostly integrates haptic or auditory technologies for user feedback and finds its applicability in user or environmental perception for assistive control, monitoring, and sensory substitution devices (SSDs). For instance, the sound of vision SSD technology [63-65] assists people with visual impairment with navigation by converting visual perception to (spatial) sound or haptic feedback. Usually, sound of vision SSDs comprise data acquisition operational modes, an image processing pipeline, and a feedback system [63-65]. An example of a multimodal USI is the iChair, a multimodal input platform that accepts commands from voice, touch, proximity switch, and head-tracking cameras and provides seamless access and control for users with severe disabilities [30].
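To illustrate the sensory substitution idea, the following toy sketch maps each column of a depth frame to a stereo tone whose pitch and loudness encode obstacle distance and whose panning encodes horizontal position. This particular mapping is an assumption chosen for clarity and is not the encoding used by the cited sound of vision SSDs.

```python
# Toy sketch of a sensory substitution mapping: each column of a depth map is
# turned into a tone whose pitch/loudness encodes distance and whose stereo pan
# encodes horizontal position. Mapping and frame size are illustrative only.
import numpy as np

def depth_column_to_tone(depth_m, column_frac, sr=16000, dur=0.2):
    """depth_m: nearest obstacle distance in this column; column_frac in [0, 1]."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    freq = np.interp(depth_m, [0.3, 5.0], [1200.0, 300.0])   # near -> high pitch
    gain = np.interp(depth_m, [0.3, 5.0], [1.0, 0.1])        # near -> loud
    tone = gain * np.sin(2 * np.pi * freq * t)
    left, right = (1.0 - column_frac) * tone, column_frac * tone  # simple panning
    return np.stack([left, right], axis=1)

depth_map = np.random.uniform(0.3, 5.0, size=(48, 64))       # fake depth frame
nearest_per_column = depth_map.min(axis=0)
frame_audio = np.concatenate([
    depth_column_to_tone(d, i / (len(nearest_per_column) - 1))
    for i, d in enumerate(nearest_per_column)
])
print("audio samples for one frame:", frame_audio.shape)
```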

CVI is the first phase toward autonomous navigation and is a crucial part of perception and multisensor fusion techniques [24].

Perception for Adaptability (Autonomous Navigation)

Autonomous navigation simply refers to the ability of a robot or vehicle to sense its environment and navigate accurately without human input or assistance [66]. AVs or autonomous robots (ARs) are meant to be intelligent enough to perceive, predict, decide, plan, and execute their decisions in the real world [24]. The main difference between AVs and ARs is that AVs address road networks where traffic rules have to be obeyed, whereas ARs have to cope with open environments without many specific rules to follow, with the sole aim of reaching the final destination [67]. There are six different levels of driving autonomy (Table 1), as published by the Society of Automotive Engineers International in 2021, ranging from no automation at level 0 to full automation at level 5 [66,68]. Following the Society of Automotive Engineers International definition, existing AVs and ARs in 2021 are not fully autonomous. Mobility aids can be seen as a type of AR, and the adaptability of a mobility aid is dependent on its ability to make intelligent navigational decisions with limited to no intervention by its user.

Table 1. Summary of the Society of Automotive Engineers (SAE) automation levels.
| SAE [66,68] level | DDTa: vehicle control | DDTa: environment monitoring (OEDRc) | Driving supervision (DDT fallback) | Scenarios (ODDb) |
| --- | --- | --- | --- | --- |
| 0: no driver automation | Driver | Driver | Driver | N/Ad |
| 1: driver assistant | Driver | Driver | Driver | Limited |
| 2: partial driving automation | Driver and vehicle | Driver | Driver | Limited |
| 3: conditional driving automation | Vehicle | Vehicle | Driver and vehicle | Limited |
| 4: high driving automation | Vehicle | Vehicle | Vehicle | Limited |
| 5: full driving automation | Vehicle | Vehicle | Vehicle | Unlimited |

aDDT: dynamic driving task.

bODD: operational design domain.

cOEDR: object and event detection and response.

dN/A: not applicable.

Generally, there are three main steps in the operation of an autonomous system (Figure 2): the perception stage (environmental perception and localization), the path planning stage, and the control stage. The perception stage, which is the first stage of a self-driving system, is a crucial aspect of autonomous navigation or self-driving robots. The perception stage mainly comprises environmental perception and localization [69]. The success of perception is largely dependent on the accuracy of the sensors used for data acquisition. A combination of sensors helps improve the accuracy and confidence needed for sound decision-making in environmental perception and autonomous navigation. Although there are high-accuracy sensors that can work alone without exhibiting the limitations common to regular sensors, their operating limits and high costs make them impractical for use in real-world applications [70]. The limitations common to regular sensors have therefore led to the need for multisensor fusion to improve accuracy and confidence. Multisensor fusion is the process of combining information from different sensors to provide a robust and complete description of the environment or process of interest [71]. Detailed literature about each stage has been reviewed [67,69,72,73].
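As a minimal worked illustration of why fusing sensors improves confidence, the following sketch fuses two noisy range readings (eg, an ultrasonic and a laser measurement) by inverse-variance weighting, the simplest static case of a Kalman-style measurement update; the sensor noise values are assumed for illustration.

```python
# Minimal sketch: fusing two noisy range measurements (e.g., ultrasonic + laser)
# by inverse-variance weighting. Sensor variances are assumed values.
import numpy as np

def fuse(z1, var1, z2, var2):
    """Return fused estimate and variance for two independent Gaussian readings."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

rng = np.random.default_rng(2)
true_range = 2.0                                  # metres to an obstacle
ultrasonic = true_range + rng.normal(0, 0.15)     # noisier, wide beam
laser = true_range + rng.normal(0, 0.03)          # more precise, narrow beam

estimate, variance = fuse(ultrasonic, 0.15**2, laser, 0.03**2)
print(f"ultrasonic={ultrasonic:.3f} m, laser={laser:.3f} m, "
      f"fused={estimate:.3f} m (std {variance**0.5:.3f} m)")
```

The fused estimate always has a lower variance than either reading alone, which is the basic motivation for combining complementary sensors before navigation decisions are made.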

Figure 2. Summary of an autonomous system.

Many recent assistive mobility technologies have made advancements in striving toward fully autonomous navigation, such as the technologies discussed in the USI (Input and Output Methods) section (Tables 2-4). Some examples include the P300 BCI-based controlled wheelchairs [38,74], as shown in Table 2. The authors designed the prototype to achieve a level of autonomy using cheap sensors. A bar code was used for global positioning, and a proximity sensor was used for collision avoidance. The use of multisensor fusion was not adopted in this prototype. Feature extraction classifiers (stepwise linear discriminant analysis and support vector machine [SVM]) were used to adequately process the BCI information needed to autonomously navigate the wheelchair from the point of command to its predefined location without the help of its user.

In Table 3, the multicontrolled wheelchair [53] used an algorithm for the control and execution of commands to check for predefined commands and execute them in the navigation and speed control of the wheelchair. An ultrasonic sensor was used for autonomous navigation.

The visual servoing-controlled wheelchair [61], as shown in Table 4, used CV with the classic Gaussian sphere projection framework and a line segmentation algorithm for corridor following, as well as a door detection and tracking framework (for indoor navigation tasks) and a 2D edge tracker inspired by the moving edges algorithm for autonomous doorway passing. In addition, the autonomous scooter navigation system [13], as shown in Table 4, used CV and a graph-based simultaneous localization and mapping algorithm for steering control and autonomous navigation.

Table 2. Brain-computer interface (BCI) technologies for adaptive assistive mobility devices.
| Brain signals and auxiliary sensors | Classifier for feature extraction | Output command | Contributions | Drawbacks |
| --- | --- | --- | --- | --- |
| P300 (laser scanner) [74] | Stepwise linear discriminant analysis | A predefined set of locations and stops | High accuracy, no training required, and autonomous navigation after successful selection | Low information transfer rate, predefined paths, limited testing scenarios, and possible fatigue after long focus period of the eye on the target stimulus |
| P300 (odometer, barcode scanner, and a proximity sensor) [38] | Support vector machine | A predefined set of locations and stops | Same as Iturrate et al [74] | Same as Iturrate et al [74]; in addition, a modified environment requires an update of the guiding path |
| MIa-based mu rhythm and the P300 [39] | One versus the rest common spatial patterns transformation matrix | Left, right, accelerate, and decelerate | Improved performance | Limited testing scenarios and possible fatigue after long focus period of the eye on the target stimulus |
| MI-based BCI (10 sonar sensors and 2 webcams) [41] | Gaussian classifier | Left, right, and keep moving forward | Spontaneous and shared control | Limited testing scenarios, requires extensive training, and limited classes (typically three) |
| Steady-state visual evoked potentials (camera and adaptive fuzzy controller) [42] | Frequency recognition algorithm based on multivariable synchronization index | Left, right, upwards, and downwards | Teleoperation control of an exoskeleton using a brain-machine interface | Possible fatigue after a long focus period of the eye on a target stimulus and a significant reduction in recognition accuracy for inexperienced subjects |
| Steady-state somatosensory evoked potential [40] | Regularized linear discriminant analysis | Turn left, turn right, and move forward | Spontaneous, first of its kind, and addressed the possible fatigue after a long focus period of the eye on the target stimulus | Only healthy subjects were used, with limited testing scenarios (two) |

aMI: motor imagery.

Table 3. Auditory and haptic interface technologies for adaptive assistive mobility devices.
| Technology name (type): additional sensors | Machine learning tools | Contributions | Drawbacks |
| --- | --- | --- | --- |
| TDS-iPhone-PWCa (haptic): magnetic sensors [46] | Sensor signal processing algorithm | An alternative USIb for people with spinal cord injury or upper limb paralysis | Tongue piercing can be a painful and uncomfortable option for some users; extensive training is required for calibration |
| Intelligent smart walker (haptic): force or torque sensor [49] | N/Ac | An intuitive rule-based speed controller for a smart walker | Young and healthy subjects were used, so the result is not a true representation of the typical users of the walker |
| EyeCane (CVId, haptic, and auditory): infrared emitters, auditory frequency actuator, and tactile actuator [52] | N/A | Low cost, lightweight, small and easy-to-use electronic travel aid for distance estimation and navigational assistance; long battery life (one whole day); intuitive to the user; short training time (<5 minutes) | Only an indoor experiment was conducted |
| Electronic mobility cane (CVI, haptic, and auditory): liquid detection, 6 ultrasonic sensors, a metal detector, a microvibration motor, and a mono earphone [51] | A novel algorithm named way-finding with reduced information overload | Offers real-time multiple obstacle detection and way-finding assistance simultaneously to patients with visual impairments via auditory (voice message) and tactile (vibration) feedback | Extensive training time (20 hours); the cognitive and perceptual load has not been ascertained |
| Jet Propulsion Laboratory BioSleeve (haptic): electromyography and IMUe sensors [44,45] | A multiclass support vector machine classifier | Intuitive control of robotic platforms by decoding as many as 20 discrete hand and finger gestures | Has not yet been integrated and tested with assistive mobility aids to determine its applicability |
| Smart cane (haptic): IMU and FSRf sensors [48] | C4.5 decision tree, artificial neural network, support vector machine, and naive Bayes | To monitor and distinguish between different walk-related activities during gait rehabilitation | Fall and near-fall detection was not considered in its design and implementation |
| An ARTAg power wheelchair platform (CVI and haptic): haptic controller, laser scanner, SICK laser measurement, and IMU sensor [50] | Gaussian process regression model | Implementation of a learned shared control policy from human-to-human interaction | The efficiency of the learning process is dependent on the human assistant, who is prone to errors and might miss the actual intent of the user |
| Multiple controlled interfaces smart wheelchair (haptic and auditory): microphone, joystick, leap motion, and ultrasonic sensor [53] | An algorithm for the control and execution of commands | Multiple control interfaces | Lack of details on the performance of each interface and limited testing scenarios |
| MyoSuit (haptic): IMU sensor and two electric motors [47] | N/A | Lightweight, soft wearable robot to aid users with a level of residual mobility during locomotion tasks | Only one incomplete spinal cord injury participant was selected for testing, so it is difficult to validate its performance |

aTDS-iPhone-PWC: tongue drive system to iPhone electric-powered wheelchair

bUSI: user system interface.

cN/A: not applicable.

dCVI: computer vision interface.

eIMU: inertial measurement unit.

fFSR: force sensitive resistor.

gARTA: assistive robotic transport for adults.

Table 4. Computer vision (CV) interface technologies for adaptive assistive mobility devices.
| Technology name (type): additional sensors | Machine learning tools | Contributions | Drawbacks |
| --- | --- | --- | --- |
| See ColOr (CV, auditory, and haptic): 3D Kinect, iPad, and Bone-Phones [63] | Multilayer artificial neural network for object classification, Kalman filter for tracking objects (finger), and randomized forest algorithm for object detection | A framework for the coupling of optical sensors in the context of range and color image registration and the development of a sonic code that maps colors and depth into musical instruments | Extensive training was required, and testing was limited to certain scenarios |
| Wearable mobility aid for patients with visual impairments (visual, auditory, and haptic): RGBDa, vibrotactile glove, and bone-conductive headsets [64] | Stereo vision algorithm and semiglobal matching algorithm; detection: random sample consensus algorithm and Kalman filter; categorization: convolution neural network | Improves on a preliminary prototype of Mattoccia [75], enabling dynamic autonomous mobility capability combining features of electronic travel support and self-localization support in a compact and lightweight setup | Patient feedback from the Mattoccia [75] study was considered, but results covering collision rate and cognitive and perceptual overload on tested subjects were not presented |
| Visual servoing-controlled wheelchair (vision): 1 camera for corridor following and 2 cameras for ADPb [61] | Classic Gaussian sphere projection framework, door detection and tracking framework, and a 2D edge tracker inspired by the moving edge algorithm | Addresses, in a secure way, the autonomous stability of the wheelchair’s position along corridors and also detects and passes through doorways using visual data | Human input in the control was not considered |
| iChair (vision, auditory, and haptic): high-definition camera, 3D scanner, 10 LEDsc, touch screen and voice recognition app, and head mouse [30] | Light communication algorithm, collision avoidance algorithm, and an emergency and stress detection algorithm | A multimodal input smart wheelchair to identify and classify objects, build 3D maps, and eventually facilitate autonomous navigation | A bug-free human trial has not yet been documented |
| CV for patients with visual impairment (vision, auditory, and haptic): a stereo RGBd camera (SC), a depth-of-field camera, and an IMUe [65] | Detection and tracking algorithm, support vector machine classifier, and class-specific extremal regions for text detection | Addresses the pervasiveness requirement as well as offers sensory substitution via sound feedback to patients with visual impairment | Outdoor performance showed clustering of several objects into a single one and errors in identifying lower parts of objects; no outdoor and usability test was documented |
| Autonomous scooter navigation (vision): MPU-9250 IMU, long-range laser, and stereo vision camera [13] | A graph-based simultaneous localization and mapping algorithm | Cost-effective and addresses the navigational and localization challenges in an unknown environment by a new hybrid far-field and near-field mapping solution | Extensive human testing has not been documented |
| User-adaptive intelligent robotic walker (vision): laser range finder [62] | Interacting multiple model particle filters with probabilistic data association framework, Viterbi algorithm (human gait estimation), support vector machine classifier, and unscented Kalman filter algorithm | Human state estimation, pathological gait parametrization, and characterization for classifying users associated with fall risk | A test to evaluate the performance of the control strategy with the robotic mobility assistive device and patients was not documented |

aRGBD: red green blue and depth.

bADP: autonomous doorway passing.

cLED: light emitting diode.

dRGB: red green blue.

eIMU: inertial measurement unit.

IoMT Frameworks: Impact of IoMT on the Adaptability of Assistive Mobility Devices

IoMT generally contributes to the adaptability of assistive mobility aids by enabling monitoring and control by users, caregivers, and medical personnel. The adaptability of assistive mobility devices involves the acquisition of information and the making of intelligent decisions based on the acquired information. This information is obtained from the environment and the user via a means of communication (usually an interface). USIs can send and receive information from the user (individuals with some form of disability) to the mobility aid via a communication channel that can be wired or wireless; examples are the JPL BioSleeve [44,45] and the TDS-iPhone-PWC interface (Table 3) [46], which can wirelessly control a mobility aid; the P300-based BCI (Table 2) [74], which controls a wheelchair via a wired USB channel; and the autonomous scooter navigation mobility aid [13], which connects its computing module to its hardware unit via a wired USB medium or a wireless Bluetooth medium. With the help of IoMT, interconnectivity between mobile devices and their environment and the storage or retrieval of relevant information for control, better autonomy, and monitoring are possible. Many recent surveys and reviews have been conducted on IoMT’s recent technologies, applications, challenges, and opportunities [3,76-78].

In recent years, many researchers have proposed IoMT frameworks for assistive devices that leverage or build on existing IoMT architectures and communication protocols and restructure them (using algorithms or management systems) to suit assistive technologies. For instance, Bae et al [79] proposed a network-based rehabilitation system for mobility aids (knee assistive devices), as shown in Table 5. The prototype framework distributes the control of the mobility device between the patient’s side and the physiotherapist’s side over a wireless network using the transmission control protocol for internet communication. A modified linear quadratic Gaussian algorithm was used to compensate for packet losses in the wireless network by modeling the losses as Bernoulli random variables. However, only simulations and experiments have been conducted. Therefore, its efficiency in tackling packet loss and its robustness against modeling uncertainties, such as interactions with human emotions, have not been evaluated in real-world scenarios.
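To make the packet-loss concern concrete, the following toy simulation drops actuation commands as independent Bernoulli events and holds the last delivered command, showing how the mean tracking error grows with the loss probability. It is only a didactic stand-in under assumed plant and controller parameters, not the modified linear quadratic Gaussian compensation used by Bae et al [79].

```python
# Toy simulation of Bernoulli packet loss on a networked control loop:
# commands sent over a lossy link are dropped with probability p, and the
# device holds the last received command. Plant and gains are assumed values.
import numpy as np

rng = np.random.default_rng(3)

def track_reference(loss_probability, steps=300, dt=0.01, kp=8.0):
    position, command = 0.0, 0.0
    errors = []
    for k in range(steps):
        reference = np.sin(2 * np.pi * 0.5 * k * dt)      # desired joint angle
        new_command = kp * (reference - position)          # controller side
        if rng.random() >= loss_probability:               # packet delivered?
            command = new_command                           # else: hold last value
        position += command * dt                            # simple first-order plant
        errors.append(abs(reference - position))
    return float(np.mean(errors))

for p in (0.0, 0.2, 0.5):
    print(f"loss p={p:.1f}: mean tracking error = {track_reference(p):.3f}")
```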

Table 5. Internet of medical things technologies for adaptive assistive mobility devices.
| Name of framework | Management system or algorithms | Contributions and functions | Drawbacks |
| --- | --- | --- | --- |
| NBRa system framework [79] | Modified linear quadratic Gaussian algorithm | Distributes the control of a mobility device between the patient’s side and the physiotherapist’s side; brings convenience to patients and therapists | Only simulations and experiments have been conducted |
| Global concept SEESb framework [82] | Intelligent transportation system | Designed to address the walking and orientation problem; functions: user tracking, sending of emergency error or alert messages to patients with visual impairment, obstacle detection, walked distance estimation, surface roughness estimation, and traffic light detection | Only one simple experiment has been conducted |
| SHSc framework [80] | Hybrid sensing network, the IoTd smart gateway, and the user interfaces for data visualization and management | Monitoring and tracking of patients, personnel, and biomedical devices in real time; collecting both environmental conditions and patient’s physiological parameters and delivering them to a control center | Use-case scenario testing has not been conducted except for fall detection of 1 patient |
| ROSe framework [81] | Navigation, localization, and pick and place algorithm | For the cooperation among SWCf and RWg; for the user to be able to interact with and control the SWC as well as any object connected to the RW | At present, the whole architecture has been tested in simulation only |

aNBR: network-based rehabilitation system.

bSEES: Smart Environment Explorer Stick.

cSHS: smart health care system.

dIoT: internet of things.

eROS: robotic operating system.

fSWC: smart wheelchairs.

gRW: robotic workstations.

Yusro et al [82] proposed the global concept Smart Environment Explorer Stick framework, which enhances the white cane to assist the navigation of patients with visual impairment. As shown in Table 5, it was designed to address the walking and orientation problem by assisting some of the walking and orientation functions and adopting an active multisensor (ultrasonic, camera, accelerometer, wheel encoder, compass, tactile point-wise, and audio feedback) context-awareness concept. Cellular IPv6 over low-power personal area network communication protocols and routing protocols for low-power and lossy networks were used to help patients with visual impairment move safely and easily in any environment (indoor and outdoor). However, only one simple experiment was performed. An IoT-aware architecture for smart health care systems (SHSs), applicable to the adaptability of assistive mobility devices, was proposed by Catarinucci et al [80] (Table 5). It promised to guarantee innovative services for the automatic monitoring and tracking of patients, personnel, and biomedical devices within hospitals and nursing institutes in real time. The SHS framework [80] relies on different but complementary technologies, specifically radio frequency identification, wireless sensor networks, and smart mobile devices, interoperating with each other through a constrained application protocol, IPv6 over low-power personal area network, or representational state transfer network infrastructure (Table 5). However, the SHS framework was proposed only to demonstrate feasibility, and it needs to be tested in various use-case scenarios to evaluate its performance. Furthermore, Foresi et al [81] proposed a robotic operating system framework that connects robotic workstations with a smart wheelchair via a Wi-Fi protocol. It was designed to improve the intelligent navigation of the wheelchair and enable interaction between the wheelchair, its user, and any object connected to the robotic workstation. However, only a simulation has been performed on the whole architecture, and a detailed evaluation of its performance is not available.
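For illustration, the following sketch shows how an assistive mobility node might report telemetry to a remote monitoring gateway over a simple REST call. The endpoint, device identifier, and payload fields are placeholders; the reviewed frameworks implement their own constrained application protocol, IPv6 over low-power personal area network, and representational state transfer infrastructures rather than this exact interface.

```python
# Illustrative sketch: a smart wheelchair node reporting telemetry to a remote
# monitoring gateway over HTTP. URL and payload fields are placeholders.
import time
import requests

GATEWAY_URL = "https://gateway.example.org/iomt/telemetry"   # placeholder endpoint

def read_sensors():
    """Stand-in for real device sensors (battery, vitals, obstacle range)."""
    return {
        "device_id": "wheelchair-001",        # placeholder identifier
        "timestamp": time.time(),
        "battery_pct": 78,
        "heart_rate_bpm": 82,
        "obstacle_distance_m": 1.4,
    }

def publish_once():
    payload = read_sensors()
    try:
        response = requests.post(GATEWAY_URL, json=payload, timeout=2.0)
        response.raise_for_status()
    except requests.RequestException as err:
        # On a lossy or unavailable link, the reading could be buffered locally
        # and retried, mirroring the packet-loss concerns discussed above.
        print("telemetry not delivered:", err)

if __name__ == "__main__":
    for _ in range(3):
        publish_once()
        time.sleep(1.0)
```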

Although IoMT assistive mobility device frameworks show promise for improving the adaptability of mobility aids, most proposed frameworks have not been tested. Testing is extremely important for evaluating their performance and applicability in adapting mobility aids to their intended users. Notable drawbacks common to IoMT frameworks, such as packet loss, user privacy and security, network robustness and scalability, and commercialization cost [1,83,84], need to be extensively evaluated.


User System Interaction (Input and Output Methods)

BCI Systems

BCIs can generally be categorized into four types: P300, SSVEP, event-related synchronization or desynchronization, and SSSEP. P300 is an endogenous response to an oddball stimulus: a positive wave is evoked in response to an event-related potential at a latency of 300 ms (P300). SSVEP is also an endogenous response and is a resonance phenomenon visually evoked in the brain signals by a stimulus modulated at a specific frequency; it occurs in response to the observation of a persistent oscillating visual stimulus. Unlike P300 and SSVEP, event-related synchronization or desynchronization is spontaneously induced by performing mental tasks, such as MI, mental arithmetic, or mental orientation. The SSSEP paradigm is evoked, endogenous, and spontaneous; the signal is generated in response to the feeling of touch or pressure [35,40,85].

Because of its high accuracy and the need for little to no training, P300 was used by Iturrate et al [74] and Rebsamen et al [38] for the BCI system in the design of their automated navigational wheelchairs. Both prototypes still had the drawbacks common to the P300 BCI, such as a low information transfer rate (successful orders per minute), the need for multiple trials for improved accuracy, and the fatigue that could occur as a result of the long focus period of the eye on the target stimulus. Other drawbacks included the limited testing scenarios conducted on both systems and the fact that only predefined locations could be reached. Rebsamen et al [38] used the path-following mode of operation [12] for automated navigation; therefore, a modification of the environment would require an update to the guiding path. Both prototypes were tested in a limited number of scenarios and only on healthy subjects (5). Long et al [39] adopted the hybrid BCI approach for the control of wheelchair direction and speed using P300 and MI. Emphasis was given to the importance of speed and the use of hybrid BCIs to improve performance and increase command options. Although accuracy (classification performance) was improved and speed control was achieved, testing was limited to only two scenarios (5 subjects for the first and 2 for the second). In addition, the fatigue that could occur as a result of the long focus period of the eye on the target stimulus was not addressed.

In an attempt to address the lack of spontaneity associated with P300 and SSVEP, Carlson and del R Millan [41] adopted an MI-based BCI to control a wheelchair. The prototype focused on shared control between the user and the wheelchair, that is, the ability of the wheelchair to take actions (autonomously navigate) based on the user’s input and its perceived surroundings (using CV). Drawbacks associated with MI BCI, such as limited classes (typically 3 to avoid difficulties in discriminating MI patterns), extensive training time (a few weeks to months), and long calibration times, were still evident. It took a much longer time (>160 seconds) for the 2 inexperienced MI BCI participants out of the 4 to complete the task. In addition, if shared control is not properly matched with the user, it could lead to degradation or loss of function and efficiency. Qiu et al [42] attempted to address the complex dynamic uncertainty and input saturation (leading to tracking error) common to exoskeleton robots by using vision compressive sensing, an SSVEP-based BCI (as a reference command), and an adaptive fuzzy controller for control. Limited testing was performed with 2 experienced and 1 inexperienced participant, and the results showed that training was required; experienced subjects had a significantly better recognition accuracy (approximately 14% difference). To combat the possible fatigue problem and loss of vision of the environment caused by the long focus period of the eye on a particular target stimulus, Kim et al [40] adopted the use of an SSSEP BCI in the control of a wheelchair. According to Kim et al [40], this prototype is the first of its kind. Although it tested significantly better than its MI BCI–controlled equivalent, tests were limited to healthy subjects (12) and were conducted mostly by experienced brain-machine interface subjects. In addition, only two testing scenarios were considered.

Auditory and Haptic Interface

Although many advances (in USIs) have been made in an attempt to accommodate individuals with varying disabilities, the extensive evaluation of the efficiency and applicability of these technologies requires more attention. Affordability, accurate detection of environmental sounds, avoidance of cognitive overload of the users, ease of use, weight of devices, and commercialization are important factors to be considered [15,20,21,31,43].

For instance, the JPL BioSleeve [44,45], a very promising interface for decoding a large number of gestures (dynamic and static hand positions) at high accuracy, integrates IMU signals with EMG for gesture recognition. Its intended goal of gesture recognition with high accuracy was achieved. However, it is still unclear for which category of users and devices it would be most suitable. Therefore, proper integration and testing need to be performed with existing mobility aids to determine its applicability. The TDS-iPhone-PWC [46] was designed to be an alternative USI for people with SCI or upper limb paralysis. Latched, unlatched, and semiproportional control strategies were used to send commands to the wheelchair. The commands included forward, backward, left, and right motions, as well as adjustable speed levels. The results showed that it could effectively be used both to access a computer and to drive a power wheelchair in a unified, wireless, unobtrusive, and wearable form. However, tongue piercing can be a painful process, and some patients would be uncomfortable with or find it difficult to use this option for control. In addition, the results showed that extensive training was required for proper calibration and improved performance (task time, number of collisions, and out of tracks).

The MyoSuit [47] focused mainly on comfort and weight while maintaining its efficiency in aiding its users (ie, people with incomplete SCI, stroke, and multiple sclerosis or muscle dystrophy). Using elastomer springs and a tendon driver unit, the MyoSuit was designed to act as an antigravity support during gait rehabilitation tasks. However, it was tested on only 1 patient with incomplete SCI, so it is difficult to evaluate its efficiency and applicability for gait rehabilitation. The proposed EMC [51] focused on the simultaneous detection of multiple obstacles at different levels (in terms of height and distance) and floor status. The EMC was designed using 6 ultrasonic sensors, a liquid detection sensor, a metal detection sensor, a wireless transceiver, and microcontroller circuits. Sensors were positioned on the stick to detect floor-level to head-level obstacles, as well as for leftward and rightward detection. The EMC effectively provided navigation assistance, and the categorization or prioritization of detected information was better than with the white cane. However, more training time was suggested (even after a lengthy 20-hour training period) to properly ascertain its cognitive and perceptual load in comparison with similar devices.

Promising devices, such as EyeCane [52] and intelligent smart walker [49], had drawbacks as certain testing scenarios were not considered. EyeCane was tested only indoors, and the intelligent smart walker was tested using healthy patients who do not truly represent the typical users of the walker. The smart wheelchair that was designed to accommodate multiple control interfaces lacked a detailed evaluation of the performance and intelligence of the wheelchair for each interface. An example scenario is how the wheelchair would differentiate the user’s voice from an outlier when an alternative command option is in use. Therefore, there is a need for more detailed testing and evaluation before these technologies become usable and acceptable to their intended users.

CVI Systems

CVIs play an important role in the perception of mobility devices for autonomous navigation and have been adopted in several technologies. For instance, See ColOr [63] was designed as a framework for the coupling of optical sensors in the context of range and color image registration. A sonic code was developed to map colors and depth into musical instruments. However, as it was the first of its kind, extensive training was required for the participants to master it, and testing was limited to certain scenarios (outdoor scenarios were not considered). A similar drawback was observed with the CV system for patients with visual impairment [65]. It was designed to address the pervasiveness requirement (by integrating both an infrared light–based depth sensor and a stereo vision system together with an IMU device) as well as to offer sensory substitution via sound feedback to patients with visual impairment. It was designed to work in any environment and illumination condition using sensor fusion techniques. The results seemed promising; however, the detection or 3D representation of small objects or objects close to the ground needed considerable improvement. In addition, testing was conducted only for indoor scenarios. The iChair was designed by Leaman et al [30] to accommodate a large range of impaired users by integrating multiple interfaces for control; however, no bug-free human trial has been documented. The same drawback was noted in the autonomous scooter [13], which was designed to be a cost-effective autonomous scooter that addresses the navigation and localization challenges in an unknown environment with a new hybrid far-field and near-field mapping solution.

The work toward autonomous navigation of mobility devices is ongoing and progressive but not without its challenges. An autonomous navigation system comprises many stages, so the overall performance can be hampered by even a small error in any one of them. The first stage, the perception stage, is crucial to the performance of an autonomous navigation system because it involves the acquisition and processing of information. This stage, to a very large extent, determines the adaptability of the mobility device to the needs of the user. Different USIs are used to accommodate users with varying impairments; however, the ability to adequately adapt the mobility device depends on the quality of the information it receives. Many of the reviewed technologies applied different machine learning tools (classifiers and algorithms) to help process the acquired information. An example is the SVM classifier used with the JPL BioSleeve in the studies by Assad et al [44] and Wolf et al [45] to classify gesture patterns. It achieved an accuracy as high as 96%; however, as stated by Anguita et al [86], its accuracy depended on the chosen model, the presence of noise, and the data size. Such drawbacks can be better assessed by comparing similar classifiers to determine which performs better for a particular technology, as was done by Wade et al [48].
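
To make the role of such classifiers concrete, the following is a minimal sketch of SVM-based gesture classification in the spirit of the BioSleeve studies [44,45]; the synthetic feature vectors, gesture labels, and scikit-learn pipeline are illustrative assumptions rather than the authors' actual implementation.

```python
# Minimal sketch of SVM-based gesture classification from EMG+IMU feature windows.
# The feature layout and gesture labels are illustrative assumptions, not the
# pipeline reported in the BioSleeve studies [44,45].

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic data: each sample is a feature vector combining EMG channel energies
# and IMU orientation features for one gesture window.
n_per_class, n_features = 100, 24
gestures = ["forward", "stop", "left", "right"]
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_features))
               for i, _ in enumerate(gestures)])
y = np.repeat(gestures, n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM; model choice and noise strongly affect accuracy, as noted by
# Anguita et al [86].
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```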

In recent years, the idea of fusing data acquired from multiple sensors to improve confidence has been widely adopted because of the complementary properties exhibited by different sensors. Although this has proven to be promising, it does not come without challenges [24,87]. Data fusion is mainly applied to CV; examples include the CV system for patients with visual impairment [65] and the autonomous scooter [13], both of which fused 2 sensors for improved performance. In the CV system for patients with visual impairment, a stereo red-green-blue camera (which is unreliable for depth estimation under poor illumination) was fused with an infrared light–based depth camera (which does not cope well with bright sunlight) in an attempt to improve the reconstructed 3D image output under any environmental condition. In the design of the autonomous scooter, long-range laser data were fused with those of a stereo vision camera to improve confidence under any environmental condition. Although data fusion shows promising results, its efficiency depends on the accuracy of the applied fusion methods. Fusion algorithms and the complementary properties of perception sensors and systems have been reviewed extensively by many researchers [24,69]. Some notable challenges in autonomous navigation and CV include improving the accuracy and robustness of data fusion, the trade-off between cost and performance, the self-localization problem, the detection of small or far-away objects in real time, training data sets and increased testing scenarios, the level of autonomy, and user training [23-25,60].
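
As a minimal illustration of how complementary depth sensors can be combined, the sketch below performs confidence-weighted fusion of two depth maps; the per-pixel confidence values are illustrative assumptions, whereas practical systems rely on calibrated noise models or learned fusion methods [24,69].

```python
# Minimal sketch of confidence-weighted fusion of two depth estimates (eg, stereo
# vision vs an infrared depth sensor). The per-pixel confidence model is an
# illustrative assumption, not the method used in the reviewed systems.

import numpy as np

def fuse_depth(depth_a, conf_a, depth_b, conf_b):
    """Fuse two depth maps pixel-wise, weighting each by its confidence."""
    conf_sum = conf_a + conf_b
    fused = np.where(conf_sum > 0,
                     (depth_a * conf_a + depth_b * conf_b) / np.maximum(conf_sum, 1e-6),
                     np.nan)  # no reliable measurement from either sensor
    return fused

if __name__ == "__main__":
    stereo = np.array([[2.0, 3.1], [4.0, np.nan]])
    stereo_conf = np.array([[0.9, 0.2], [0.8, 0.0]])  # stereo degrades in low texture/light
    ir_depth = np.array([[2.2, 3.0], [np.nan, 5.0]])
    ir_conf = np.array([[0.5, 0.9], [0.0, 0.7]])      # infrared depth degrades in sunlight
    # Zero-confidence pixels are set to 0 so NaNs do not propagate into the sum.
    print(fuse_depth(np.nan_to_num(stereo), stereo_conf,
                     np.nan_to_num(ir_depth), ir_conf))
```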

Limitations and Future Directions

Overview

This study presents a comprehensive review of the recent literature on the adaptability of assistive mobility devices in terms of the acquired information. Discussions presenting the technical details and notable findings of recent technologies have been reported. On the basis of the literature review, the following challenges and research directions are presented:

Improved Training Time and Avoidance of Cognitive Overload

Although estimates of the attention span of an average human being vary widely, research shows that attention declines as the required concentration time increases. Therefore, keeping interactions simple is widely accepted as better, and the same applies to the training time required of users with some form of disability [88-91]. As highlighted in the Brain-Computer Interface section under Discussion, most of the reviewed prototypes showed that training time requires more attention. In addition, in the Computer Vision Interfaces section, the training time needed for machine learning algorithms varied depending on the training data set, which could affect the decisions made in autonomously navigating assistive mobility devices [24]. More research could be conducted to improve the accuracy of BCI options with shorter training times, including hybrid BCIs. This could be achieved with the help of machine learning techniques or algorithms that study user inputs and behaviors to accurately predict commands and help reduce the number of failed commands. Finally, a widely accepted standard for validating the training time for both machine learning algorithms and BCIs in a USI could be developed. This would help researchers adequately compare results and monitor improvements concerning the adaptability of assistive mobility devices.
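
As an illustration of how such command prediction or filtering might reduce failed commands, the following is a minimal sketch of a majority-vote smoother applied to a stream of classifier outputs; the window size, agreement threshold, and command set are illustrative assumptions rather than a validated standard.

```python
# Minimal sketch of command smoothing for a noisy BCI or gesture classifier:
# a command is issued only when recent classifier outputs agree, reducing the
# number of spurious (failed) commands at the cost of a short decision delay.
# Window size and command names are illustrative assumptions.

from collections import Counter, deque

class CommandSmoother:
    def __init__(self, window=5, min_agreement=4):
        self.window = deque(maxlen=window)
        self.min_agreement = min_agreement

    def update(self, predicted_command):
        """Add one classifier output; return a command only when it is stable."""
        self.window.append(predicted_command)
        command, count = Counter(self.window).most_common(1)[0]
        return command if count >= self.min_agreement else None

if __name__ == "__main__":
    smoother = CommandSmoother()
    stream = ["left", "left", "forward", "left", "left", "left"]
    for raw in stream:
        print(raw, "->", smoother.update(raw))
```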

Accuracy

The data reveal that people who are accustomed to using their wheelchairs have little to no tolerance for new functional errors, and the same holds for every other assistive mobility device [92]. The highlighted technologies related to autonomous navigation (perception) and CV show that data fusion has become increasingly accepted as a way of improving accuracy. However, fusion also increases the complexity of the information to be processed, thereby presenting challenges in fusion, calibration, and classification accuracy [23-25,60]. Machine learning tools and algorithms used in processing this information also have varying strengths and weaknesses; similar to the SVM classifier highlighted earlier, their accuracy varies depending on the selected model and the level of noise. Future research could be directed toward improving the accuracy of mobile robots in unfamiliar environments, as this is typically the operating condition of assistive mobility devices.

IoMT Latency, Security, and Privacy

The integration of IoMT frameworks with the highlighted technologies shows considerable promise in improving the adaptability of assistive mobility devices to their users. With the IoMT technology option, data stored in the cloud can be analyzed and used for further research. The user's progress (for gait rehabilitation) can also be monitored, and some level of assistive control can be exercised by the user's stakeholders. However, IoMT technology also raises network scalability, user privacy, and security problems [1,83,84]. Most reviewed papers acknowledged the packet loss problem when remotely controlling mobility devices via an IoMT framework and proposed various management systems to combat it; however, only simulation tests were carried out. The scalability of these frameworks can only be known when real-world testing is performed. Frameworks such as the robot operating system–based framework [81] and the network-based rehabilitation system [79] may have major issues when implemented at a larger network scale. Further research could be conducted on management systems and algorithms developed to improve latency and compensate for packet loss. The developed frameworks should also indicate the number of devices that they can accommodate without any drop in performance; this could all be included in a comprehensive system validation. Finally, a widely accepted standard for validating these systems or prototypes could be developed to help researchers compare results across documented IoMT-based assistive mobility devices.
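
As a minimal illustration of one packet loss compensation strategy, the sketch below shows a device-side safety watchdog that reverts to a stop command when the IoMT link goes silent; the timeout value, command names, and interface are hypothetical and are not taken from any of the reviewed frameworks.

```python
# Minimal sketch of a device-side safety watchdog for remotely controlled mobility
# devices: if command packets stop arriving (latency spike or packet loss), the
# device fails safe to a stop instead of continuing on the last received command.
# The timeout value and command names are illustrative assumptions.

import time

class CommandWatchdog:
    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_command = "stop"
        self.last_received = time.monotonic()

    def on_packet(self, command):
        """Called whenever a command packet arrives over the IoMT link."""
        self.last_command = command
        self.last_received = time.monotonic()

    def effective_command(self):
        """Command actually forwarded to the motor controller."""
        if time.monotonic() - self.last_received > self.timeout_s:
            return "stop"  # fail safe when the link has been silent for too long
        return self.last_command

if __name__ == "__main__":
    wd = CommandWatchdog(timeout_s=0.2)
    wd.on_packet("forward")
    print(wd.effective_command())  # "forward"
    time.sleep(0.3)                # simulate packet loss on the link
    print(wd.effective_command())  # "stop"
```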

Performance Evaluation

In most of the reviewed papers, little attention was paid to real-world testing and to comparing related prototypes to evaluate performance. For these technologies to be deemed fit for their intended users, their performance needs to be properly evaluated and tested under varying conditions. Proper evaluation would help examine some notable drawbacks, such as ease of use (without the need for any special training), cognitive overload (during human-machine communication), and the ease of wearing these technologies (in terms of weight, while maintaining or improving their functionality) [15,20,21,31,43]. Some users of assistive mobility devices have comorbidities, such as mental health challenges associated with aging or depression; if the training time or the cognitive or perceptual load is high, the device will quickly be abandoned by its intended users. The discussions have shown that machine learning tools play a key role in the proper classification and processing of USI information as well as in the decision-making of these mobility devices; these account for the ability of these devices to navigate autonomously with high accuracy. Future research could focus on the standardization of performance evaluation methods and accepted testing conditions.

Another research direction is the design of prototypes for clearly defined users. As discussed in previous sections, specific USIs are most suitable for specific ailments. With the advent of many different USIs, there is a tendency to try to accommodate a wider range of users in a single prototype design. Tailoring assistive mobility devices to specific users or ailments would improve the performance and accuracy with which those devices adapt to their users.

For a mobility device to be termed adaptable, it has to meet certain requirements such as the following:

  1. Intelligent perception, that is, the device requires little or no effort to efficiently perceive its environment and make mobility decisions (such as obstacle avoidance and collision detection)
  2. Accurate self-localization of the user and device (user tracking)
  3. User-friendliness, that is, the movement speed and direction are controlled by the user subconsciously without the need for any special training; in addition, prompt and adequate control or feedback from and to the user is provided without cognitive overload, and communication with the necessary stakeholders is easy and secure

These requirements are necessary for developed assistive mobility technologies to be easily commercialized and to gain user acceptance (widespread adoption) [28,31]. They also reflect the need to evaluate the performance of mobility devices according to their major adaptability elements (ie, USIs, perception for adaptability [autonomous navigation], and the IoMT framework).

Conclusions

The research community has developed many promising technologies in the past decade, taking advantage of smart sensors, machine learning tools, and IoMT frameworks to offer mobility independence to impaired individuals. For users to benefit from these technologies, adaptability must be properly evaluated and considered from design to implementation. This study has reviewed recent assistive mobility device technologies to identify their adaptability to users in terms of USI, autonomous navigation (the perception stage), and connectivity. Tables have been presented to highlight the reviewed technologies according to the major adaptability elements. Furthermore, the review presents some notable limitations, which show the need for improved cohesion to effectively adapt these technologies to their users. The findings show that, for improved adaptability, more work needs to be done to reduce training time and cognitive overload in USIs and to improve fusion and classification accuracy; real-world scenario testing needs to be conducted and evaluated; and the trade-off between cost and performance needs to be considered in commercialization.

Acknowledgments

This work was supported in part by the Council for Scientific and Industrial Research, Pretoria, South Africa, through the Smart Networks collaboration initiative and internet of things–Factory Program (funded by the Department of Science and Innovation, South Africa).

Authors' Contributions

All authors were involved in the conceptualization and methodology of the study. DAO conducted the search strategy and selection process and wrote the original draft. EDM and AMAM performed an extensive review and commented on the original manuscript. All three authors approved the final manuscript.

Conflicts of Interest

None declared.

  1. Ud Din I, Guizani M, Hassan S, Kim B, Khan MK, Atiquzzaman M, et al. The internet of things: a review of enabled technologies and future challenges. IEEE Access 2019;7:7606-7640 [FREE Full text] [CrossRef]
  2. Fan T, Chen Y. A scheme of data management in the internet of things. In: Proceedings of the 2nd IEEE International Conference on Network Infrastructure and Digital Content. 2010 Presented at: 2nd IEEE InternationalConference on Network Infrastructure and Digital Content; Sept. 24-26, 2010; Beijing, China p. 110-114. [CrossRef]
  3. Riazul Islam SM, Kwak D, Kabir MH, Hossain M, Kwak KS. The internet of things for health care: a comprehensive survey. IEEE Access 2015;3:678-708. [CrossRef]
  4. Al-Fuqaha A, Guizani M, Mohammadi M, Aledhari M, Ayyash M. Internet of things: a survey on enabling technologies, protocols, and applications. IEEE Commun Surv Tutorials 2015;17(4):2347-2376. [CrossRef]
  5. Pang Z. Technologies and architectures of the internet-of-things (IoT) for health and well-being. In: Thesis and Dissertations - Royal Institute of Technology. Stockholm, Sweden: KTH - Royal Institute of Technology; 2013.
  6. Zeadally S, Siddiqui F, Baig Z, Ibrahim A. Smart healthcare: challenges and potential solutions using internet of things (IoT) and big data analytics. PSU Res Rev 2019 Oct 18;4(2):149-168. [CrossRef]
  7. Shen T, Afsar MR, Haque MR, McClain E, Meek S, Shen X. A human-assistive robotic platform with quadrupedal locomotion. IEEE Int Conf Rehabil Robot 2019 Jun;2019:305-310 [FREE Full text] [CrossRef] [Medline]
  8. Elmannai WM, Elleithy KM. A highly accurate and reliable data fusion framework for guiding the visually impaired. IEEE Access 2018;6:33029-33054. [CrossRef]
  9. Li W, Hu X, Gravina R, Fortino G. A neuro-fuzzy fatigue-tracking and classification system for wheelchair users. IEEE Access 2017;5:19420-19431. [CrossRef]
  10. Disability and health. World Health Organisation. 2020.   URL: https://www.who.int/news-room/fact-sheets/detail/disability-and-health [accessed 2020-05-19]
  11. Cowan RE, Fregly BJ, Boninger ML, Chan L, Rodgers MM, Reinkensmeyer DJ. Recent trends in assistive technology for mobility. J Neuroeng Rehabil 2012 Apr 20;9:20 [FREE Full text] [CrossRef] [Medline]
  12. Leaman J, La HM. A comprehensive review of smart wheelchairs: past, present, and future. IEEE Trans Human-Mach Syst 2017 Aug;47(4):486-499. [CrossRef]
  13. Mulky R, Koganti S, Shahi S, Liu K. Autonomous scooter navigation for people with mobility challenges. In: Proceedings of the IEEE International Conference on Cognitive Computing (ICCC). 2018 Presented at: IEEE International Conference on Cognitive Computing (ICCC); July 2-7, 2018; San Francisco, CA, USA p. 87-90. [CrossRef]
  14. Khan I, Khusro S, Ullah I. Technology-assisted white cane: evaluation and future directions. PeerJ 2018;6:e6058 [FREE Full text] [CrossRef] [Medline]
  15. Chen B, Ma H, Qin L, Gao F, Chan K, Law S, et al. Recent developments and challenges of lower extremity exoskeletons. J Orthop Translat 2016 Apr;5:26-37 [FREE Full text] [CrossRef] [Medline]
  16. Jiang J, Ma X, Huo B, Zhang Y, Yu X. Recent advances on lower limb exoskeleton rehabilitation robot. Recent Patents Eng 2017 Sep 27;11(3):A. [CrossRef]
  17. Shi D, Zhang W, Zhang W, Ding X. A review on lower limb rehabilitation exoskeleton robots. Chin J Mech Eng 2019 Aug 30;32(1):A. [CrossRef]
  18. Chen Y, Napoli D, Agrawal S, Zanotto D. Smart crutches: towards instrumented crutches for rehabilitation and exoskeletons-assisted walking. In: Proceedings of the 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob). 2018 Presented at: 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob); Aug. 26-29, 2018; Enschede, Netherlands p. 193-198. [CrossRef]
  19. Page S, Saint-Bauzel L, Rumeau P, Pasqui V. Smart walkers: an application-oriented review. Robotica 2016 Feb 10;35(6):1243-1262. [CrossRef]
  20. Shull PB, Damian DD. Haptic wearables as sensory replacement, sensory augmentation and trainer - a review. J Neuroeng Rehabil 2015 Jul 20;12:59 [FREE Full text] [CrossRef] [Medline]
  21. Black D, Hansen C, Nabavi A, Kikinis R, Hahn H. A Survey of auditory display in image-guided interventions. Int J Comput Assist Radiol Surg 2017 Oct;12(10):1665-1676 [FREE Full text] [CrossRef] [Medline]
  22. Parker J, Mountain G, Hammerton J. A review of the evidence underpinning the use of visual and auditory feedback for computer technology in post-stroke upper-limb rehabilitation. Disabil Rehabil Assist Technol 2011;6(6):465-472. [CrossRef] [Medline]
  23. Bresson G, Alsayed Z, Yu L, Glaser S. Simultaneous localization and mapping: a survey of current trends in autonomous driving. IEEE Trans Intell Veh 2017 Sep;2(3):194-220. [CrossRef]
  24. Feng D, Haase-Schutz C, Rosenbaum L, Hertlein H, Glaser C, Timm F, et al. Deep multi-modal object detection and semantic segmentation for autonomous driving: datasets, methods, and challenges. IEEE Trans Intel Transport Syst 2021 Mar;22(3):1341-1360. [CrossRef]
  25. Leo M, Medioni G, Trivedi M, Kanade T, Farinella G. Computer vision for assistive technologies. Comput Vis Image Underst 2017 Jan;154:1-15. [CrossRef]
  26. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg 2021 Apr;88:105906. [CrossRef] [Medline]
  27. Simpson RC, LoPresti EF, Cooper RA. How many people would benefit from a smart wheelchair? J Rehabil Res Dev 2008;45(1):53-71 [FREE Full text] [CrossRef] [Medline]
  28. Martins M, Santos C, Frizera A, Ceres R. A review of the functionalities of smart walkers. Med Eng Phys 2015 Oct;37(10):917-928. [CrossRef] [Medline]
  29. Lim CD, Wang C, Cheng C, Chao Y, Tseng S, Fu L. Sensory cues guided rehabilitation robotic walker realized by depth image-based gait analysis. IEEE Trans Automat Sci Eng 2016 Jan;13(1):171-180. [CrossRef]
  30. Leaman J, La H, Nguyen L. Development of a smart wheelchair for people with disabilities. In: Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). 2016 Presented at: IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI); Sept. 19-21, 2016; Baden-Baden, Germany p. 279-284. [CrossRef]
  31. Pawluk D, Bourbakis N, Giudice N, Hayward V, Heller M. Guest editorial: haptic assistive technology for individuals who are visually impaired. IEEE Trans Haptics 2015;8(3):245-247. [CrossRef] [Medline]
  32. Martins M, Santos C, Frizera-Neto A, Ceres R. Assistive mobility devices focusing on Smart Walkers: classification and review. Rob Auton Syst 2012 Apr;60(4):548-562. [CrossRef]
  33. Neto A, Elias A, Cifuentes C, Rodriguez C, Bastos T, Carelli R. Smart walkers: advanced robotic human walking-aid systems. In: Mohammed S, Moreno J, Kong K, Amirat Y, editors. Intelligent Assistive Robots. Cham: Springer; 2015:103-131.
  34. Shih JJ, Krusienski DJ, Wolpaw JR. Brain-computer interfaces in medicine. Mayo Clin Proc 2012 Mar;87(3):268-279 [FREE Full text] [CrossRef] [Medline]
  35. Ortiz-Rosario A, Adeli H. Brain-computer interface technologies: from signal to action. Rev Neurosci 2013;24(5):537-552. [CrossRef] [Medline]
  36. Arico P, Borghini G, Di Flumeri G, Sciaraffa N, Colosimo A, Babiloni F. Passive BCI in operational environments: insights, recent advances, and future trends. IEEE Trans Biomed Eng 2017 Jul;64(7):1431-1436. [CrossRef] [Medline]
  37. Abdulkader SN, Atia A, Mostafa MM. Brain computer interfacing: applications and challenges. Egypt Informatics J 2015 Jul;16(2):213-230. [CrossRef]
  38. Rebsamen B, Guan C, Zhang H, Wang C, Teo C, Ang MH, et al. A brain controlled wheelchair to navigate in familiar environments. IEEE Trans Neural Syst Rehabil Eng 2010 Dec;18(6):590-598. [CrossRef] [Medline]
  39. Long J, Li Y, Wang H, Yu T, Pan J, Li F. A hybrid brain computer interface to control the direction and speed of a simulated or real wheelchair. IEEE Trans Neural Syst Rehabil Eng 2012 Sep;20(5):720-729. [CrossRef] [Medline]
  40. Kim K, Suk H, Lee S. Commanding a brain-controlled wheelchair using steady-state somatosensory evoked potentials. IEEE Trans Neural Syst Rehabil Eng 2018 Mar;26(3):654-665. [CrossRef] [Medline]
  41. Carlson T, del R Millan J. Brain-controlled wheelchairs: a robotic architecture. IEEE Robot Automat Mag 2013 Mar;20(1):65-73. [CrossRef]
  42. Qiu S, Li Z, He W, Zhang L, Yang C, Su C. Brain–machine interface and visual compressive sensing-based teleoperation control of an exoskeleton robot. IEEE Trans Fuzzy Syst 2017 Feb;25(1):58-69. [CrossRef]
  43. Sorgini F, Caliò R, Carrozza MC, Oddo CM. Haptic-assistive technologies for audition and vision sensory disabilities. Disabil Rehabil Assist Technol 2018 May;13(4):394-421. [CrossRef] [Medline]
  44. Assad C, Karras J, Rodriguez J, Pivo E, Huang C, Wolf M, et al. Live demonstration: BioSleeve, a wearable hands-free gesture control interface. In: Proceedings of the IEEE Sensors Conference. 2016 Presented at: IEEE Sensors Conference; Oct. 30 - Nov. 3, 2016; Orlando, FL, USA p. 8287. [CrossRef]
  45. Wolf M, Assad C, Vernacchia M, Fromm J, Jethani H. Gesture-based robot control with variable autonomy from the JPL BioSleeve. In: Proceedings of the IEEE International Conference on Robotics and Automation. 2013 Presented at: IEEE International Conference on Robotics and Automation; May 6-10, 2013; Karlsruhe, Germany p. 1160-1165. [CrossRef]
  46. Kim J, Huo X, Minocha J, Holbrook J, Laumann A, Ghovanloo M. Evaluation of a smartphone platform as a wireless interface between tongue drive system and electric-powered wheelchairs. IEEE Trans Biomed Eng 2012 Jun;59(6):1787-1796 [FREE Full text] [CrossRef] [Medline]
  47. Haufe FL, Kober AM, Schmidt K, Sancho-Puchades A, Duarte JE, Wolf P, et al. User-driven walking assistance: first experimental results using the MyoSuit. IEEE Int Conf Rehabil Robot 2019 Jun;2019:944-949. [CrossRef] [Medline]
  48. Wade J, Beccani M, Myszka A, Bekele E, Valdastri P, Riesthal MD, et al. Design and implementation of an instrumented cane for gait recognition. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2015 Presented at: IEEE International Conference on Robotics and Automation (ICRA); May 26-30, 2015; Seattle, WA, USA p. 5904-5909. [CrossRef]
  49. Grondin S, Li Q. Intelligent control of a smart walker and its performance evaluation. In: Proceedings of the IEEE 13th International Conference on Rehabilitation Robotics (ICORR). 2013 Presented at: IEEE 13th International Conference on Rehabilitation Robotics (ICORR); June 24-26, 2013; Seattle, WA, USA p. 1-6. [CrossRef]
  50. Kucukyilmaz A, Demiris Y. Learning shared control by demonstration for personalized wheelchair assistance. IEEE Trans Haptics 2018 Jul 1;11(3):431-442. [CrossRef]
  51. Bhatlawande S, Mahadevappa M, Mukherjee J, Biswas M, Das D, Gupta S. Design, development, and clinical evaluation of the electronic mobility cane for vision rehabilitation. IEEE Trans Neural Syst Rehabil Eng 2014 Nov;22(6):1148-1159. [CrossRef] [Medline]
  52. Maidenbaum S, Hanassy S, Abboud S, Buchs G, Chebat D, Levy-Tzedek S, et al. The "EyeCane", a new electronic travel aid for the blind: technology, behavior and swift learning. Restor Neurol Neurosci 2014;32(6):813-824. [CrossRef] [Medline]
  53. Yashoda H, Piumal A, Polgahapitiya P, Mubeen M, Muthugala M, Jayasekara A. Design and development of a smart wheelchair with multiple control interfaces. In: Proceedings of the Moratuwa Engineering Research Conference (MERCon). 2018 Presented at: Moratuwa Engineering Research Conference (MERCon); May 30 - June 1, 2018; Moratuwa, Sri Lanka p. 324-329. [CrossRef]
  54. Szeliski R. Computer vision: algorithms and applications. Choice Rev Online 2011 May 01;48(09):48-5140. [CrossRef]
  55. Fisher RB, Konolige K. Range sensors. In: Siciliano B, Khatib O, editors. Springer Handbook of Robotics. Berlin, Heidelberg: Springer; 2008:521-542.
  56. Feng D, Feng M. Computer vision for SHM of civil infrastructure: from dynamic response measurement to damage detection – A review. Eng Struct 2018 Feb;156:105-117. [CrossRef]
  57. Patrício DI, Rieder R. Computer vision and artificial intelligence in precision agriculture for grain crops: a systematic review. Comput Electron Agri 2018 Oct;153:69-81. [CrossRef]
  58. Fang W, Love P, Luo H, Ding L. Computer vision for behaviour-based safety in construction: a review and future directions. Adv Eng Informatics 2020 Jan;43:100980. [CrossRef]
  59. Ibrahim M, Haworth J, Cheng T. Understanding cities with machine eyes: a review of deep computer vision in urban analytics. Cities 2020 Jan;96:102481. [CrossRef]
  60. Aggarwal J, Xia L. Human activity recognition from 3D data: a review. Pattern Recognit Lett 2014 Oct;48:70-80. [CrossRef]
  61. Pasteau F, Narayanan VK, Babel M, Chaumette F. A visual servoing approach for autonomous corridor following and doorway passing in a wheelchair. Rob Auton Syst 2016 Jan;75:28-40. [CrossRef]
  62. Chalvatzaki G, Papageorgiou X, Maragos P, Tzafestas C. User-adaptive human-robot formation control for an intelligent robotic walker using augmented human state estimation and pathological gait characterization. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018 Presented at: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); Oct. 1-5, 2018; Madrid, Spain p. 6016-6022. [CrossRef]
  63. Valencia G, Diego J. A computer-vision based sensory substitution device for the visually impaired (See ColOr). University of Geneva. 2014.   URL: http://archive-ouverte.unige.ch/unige:34568 [accessed 2021-10-01]
  64. Poggi M, Mattoccia S. A wearable mobility aid for the visually impaired based on embedded 3D vision and deep learning. In: Proceedings of the IEEE Symposium on Computers and Communication (ISCC). 2016 Presented at: IEEE Symposium on Computers and Communication (ISCC); June 27-30, 2016; Messina, Italy p. 208-213. [CrossRef]
  65. Caraiman S, Morar A, Owczarek M, Burlacu A, Rzeszotarski D, Botezatu N, et al. Computer vision for the visually impaired: the sound of vision system. In: Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW). 2017 Presented at: IEEE International Conference on Computer Vision Workshops (ICCVW); Oct. 22-29, 2017; Venice, Italy p. 1480-1489. [CrossRef]
  66. Ondrus J, Kolla E, Vertal P, Saric Z. How do autonomous cars work? LOGI 2019 - horizons of autonomous mobility in Europe. Transportation Research Procedia 2019:226-233 [FREE Full text] [CrossRef]
  67. Gonzalez D, Perez J, Milanes V, Nashashibi F. A review of motion planning techniques for automated vehicles. IEEE Trans Intell Transport Syst 2016 Apr;17(4):1135-1145. [CrossRef]
  68. On-Road Automated Driving (ORAD) Committee. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE Mobilus. 2021.   URL: https://saemobilus.sae.org/content/j3016_202104 [accessed 2021-10-01]
  69. Rosique F, Navarro PJ, Fernández C, Padilla A. A systematic review of perception system and simulators for autonomous vehicles research. Sensors (Basel) 2019 Feb 05;19(3):648 [FREE Full text] [CrossRef] [Medline]
  70. Fayyad J, Jaradat MA, Gruyer D, Najjaran H. Deep learning sensor fusion for autonomous vehicle perception and localization: a review. Sensors (Basel) 2020 Jul 29;20(15):1-34 [FREE Full text] [CrossRef] [Medline]
  71. Durrant-Whyte H, Henderson T. Multisensor data fusion. In: Siciliano B, Khatib O, editors. Springer Handbook of Robotics. Cham: Springer; 2016:867-896.
  72. Cohen J. AI... And the vehicle went autonomous. Towards Data Science. 2018.   URL: https://towardsdatascience.com/ai-and-the-vehicle-went-autonomous-e176c73239c6 [accessed 2021-10-01]
  73. Cohen J. Self-driving cars and localization. Towards Data Science. 2018.   URL: https://towardsdatascience.com/self-driving-car-localization-f800d4d8da49 [accessed 2021-10-01]
  74. Iturrate I, Antelis J, Kubler A, Minguez J. A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation. IEEE Trans Robot 2009 Jun;25(3):614-627. [CrossRef]
  75. Mattoccia S, Macri P. 3D glasses as mobility aid for visually impaired people. In: Computer Vision - ECCV 2014 Workshops. Cham: Springer; 2015:539-554.
  76. Alam MM, Malik H, Khan MI, Pardy T, Kuusik A, Le Moullec Y. A survey on the roles of communication technologies in IoT-based personalized healthcare applications. IEEE Access 2018;6:36611-36631. [CrossRef]
  77. Gerla M, Lee EK, Pau G, Lee U. Internet of vehicles: from intelligent grid to autonomous cars and vehicular clouds. In: Proceedings of the IEEE World Forum on Internet of Things (WF-IoT). 2014 Presented at: IEEE World Forum on Internet of Things (WF-IoT); March 6-8, 2014; Seoul, Korea (South) p. 241-246. [CrossRef]
  78. Baker SB, Xiang W, Atkinson I. Internet of things for smart healthcare: technologies, challenges, and opportunities. IEEE Access 2017;5:26521-26544. [CrossRef]
  79. Bae J, Zhang W, Tomizuka M. Network-based rehabilitation system for improved mobility and tele-rehabilitation. IEEE Trans Contr Syst Technol 2013 Sep;21(5):1980-1987. [CrossRef]
  80. Catarinucci L, de Donno D, Mainetti L, Palano L, Patrono L, Stefanizzi ML, et al. An IoT-aware architecture for smart healthcare systems. IEEE Internet Things J 2015 Dec;2(6):515-526. [CrossRef]
  81. Foresi G, Freddi A, Monteriu A, Ortenzi D, Pagnotta D. Improving mobility and autonomy of disabled users via cooperation of assistive robots. In: Proceedings of the IEEE International Conference on Consumer Electronics (ICCE). 2018 Presented at: IEEE International Conference on Consumer Electronics (ICCE); Jan. 12-14, 2018; Las Vegas, NV, USA p. 1-2. [CrossRef]
  82. Yusro M, Hou K, Pissaloux E, Shi H, Ramli K, Sudiana D. SEES: Concept and design of a smart environment explorer stick. In: Proceedings of the 6th International Conference on Human System Interactions (HSI). 2013 Presented at: 6th International Conference on Human System Interactions (HSI); June 6-8, 2013; Sopot, Poland p. 70-77. [CrossRef]
  83. Nasiri S, Sadoughi F, Tadayon M, Dehnad A. Security requirements of internet of things-based healthcare system: a survey study. Acta Inform Med 2019 Dec;27(4):253-258 [FREE Full text] [CrossRef] [Medline]
  84. Granjal J, Monteiro E, Sa Silva J. Security for the internet of things: a survey of existing protocols and open research issues. IEEE Commun Surv Tutorials 2015;17(3):1294-1312. [CrossRef]
  85. Bi L, Fan X, Liu Y. EEG-based brain-controlled mobile robots: a survey. IEEE Trans Human-Mach Syst 2013 Mar;43(2):161-176. [CrossRef]
  86. Anguita D, Ghio A, Greco N, Oneto L, Ridella S. Model selection for support vector machines: advantages and disadvantages of the Machine Learning Theory. In: Proceedings of the The 2010 International Joint Conference on Neural Networks (IJCNN). 2010 Presented at: The 2010 International Joint Conference on Neural Networks (IJCNN); July 18-23, 2010; Barcelona, Spain p. 1-8. [CrossRef]
  87. Xue J, Wang D, Du S, Cui D, Huang Y, Zheng N. A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars. Frontiers Inf Technol Electronic Eng 2017 Feb 4;18(1):122-138. [CrossRef]
  88. Bunce DM, Flens EA, Neiles KY. How long can students pay attention in class? A study of student attention decline using clickers. J Chem Educ 2010 Oct 22;87(12):1438-1443. [CrossRef]
  89. Ashford J, Schoffstall C, Reddick WE, Leone C, Laningham FH, Glass JO, et al. Attention and working memory abilities in children treated for acute lymphoblastic leukemia. Cancer 2010 Oct 01;116(19):4638-4645 [FREE Full text] [CrossRef] [Medline]
  90. Lee SE, Kibby MY, Cohen MJ, Stanford L, Park Y, Strickland S. Differences in memory functioning between children with attention-deficit/hyperactivity disorder and/or focal epilepsy. Child Neuropsychol 2016;22(8):979-1000 [FREE Full text] [CrossRef] [Medline]
  91. Atkins S, Sprenger A, Colflesh G, Briner T, Buchanan J, Chavis S, et al. Measuring working memory is all fun and games: a four-dimensional spatial game predicts cognitive task performance. Exp Psychol 2014;61(6):417-438. [CrossRef] [Medline]
  92. Padir T. Towards personalized smart wheelchairs: lessons learned from discovery interviews. Annu Int Conf IEEE Eng Med Biol Soc 2015;2015:5016-5019. [CrossRef] [Medline]


AR: autonomous robot
AV: autonomous vehicle
BCI: brain-computer interface
CV: computer vision
CVI: computer vision interface
EMC: electronic mobility cane
EMG: electromyography
IMU: inertial measurement unit
IoMT: internet of medical things
IoT: internet of things
MI: motor imagery
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PWC: powered wheelchair
SCI: spinal cord injury
SHS: smart health care system
SSD: sensory substitution device
SSSEP: steady-state somatosensory evoked potential
SSVEP: steady-state visual evoked potential
SVM: support vector machine
TDS: tongue drive system
TDS-iPhone-PWC: tongue drive system to smartphone (iPhone) electric-powered wheelchair
USI: user system interface


Edited by G Eysenbach; submitted 13.04.21; peer-reviewed by F Yu, C Smith; comments to author 15.06.21; revised version received 29.06.21; accepted 12.09.21; published 15.11.21

Copyright

©Daniel Ayo Oladele, Elisha Didam Markus, Adnan M Abu-Mahfouz. Originally published in JMIR Rehabilitation and Assistive Technology (https://rehab.jmir.org), 15.11.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Rehabilitation and Assistive Technology, is properly cited. The complete bibliographic information, a link to the original publication on https://rehab.jmir.org/, as well as this copyright and license information must be included.