This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Rehabilitation and Assistive Technology, is properly cited. The complete bibliographic information, a link to the original publication on https://rehab.jmir.org/, as well as this copyright and license information must be included.
Cerebral palsy (CP) is a physical disability that affects movement and posture. Approximately 17 million people worldwide and 34,000 people in Australia are living with CP. In clinical and kinematic research, goniometers and inclinometers are the most commonly used clinical tools to measure joint angles and positions in children with CP.
This paper presents collaborative research between the School of Electrical Engineering, Computing and Mathematical Sciences at Curtin University and a team of clinicians in a multicenter randomized controlled trial involving children with CP. This study aims to develop a digital solution for mass data collection using inertial measurement units (IMUs) and to apply machine learning (ML) to classify the movement features associated with CP to determine the effectiveness of therapy. The results were calculated without the need for Euler angle, quaternion, or joint angle calculations, reducing the time required to classify the data.
Custom IMUs were developed to record the usual wrist movements of participants in 2 age groups. The first age group consisted of participants approaching 3 years of age, and the second age group consisted of participants approaching 15 years of age. Both groups consisted of participants with and without CP. The IMU data were used to calculate the joint angle of the wrist movement and determine the range of motion. A total of 9 different ML algorithms were used to classify the movement features associated with CP. This classification can also confirm if the current treatment (in this case, the use of wrist extension) is effective.
Upon completion of the project, the wrist joint angle was successfully calculated and validated against Vicon motion capture. In addition, CP movement was classified as a feature using ML on raw IMU data. The random forest algorithm achieved the highest accuracy of 87.75% for the age range approaching 15 years, and the C4.5 decision tree achieved the highest accuracy of 89.39% for the age range approaching 3 years.
Anecdotal feedback from Minimising Impairment Trial researchers was positive about the potential for IMUs to contribute accurate data about active range of motion, especially in children, for whom goniometric methods are challenging. There may also be potential to use IMUs for continued monitoring of hand movements throughout the day.
Australian New Zealand Clinical Trials Registry (ANZCTR) ACTRN12614001276640, https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=367398; ANZCTR ACTRN12614001275651, https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=367422
Cerebral palsy (CP) is a condition that affects a person’s ability to move [
There are many different clinical classification systems for upper limb function in children with CP with different levels of complexity. In a review by McConnell et al [
As of early 2021, there is no single method for completely curing or preventing CP. Public health measures such as mandatory seatbelts, pool fencing, and rubella vaccinations are among the prevention methods currently in use [
Occupational therapists use upper limb orthoses for children with CP who have muscle overactivity caused by spasticity, but there is little evidence of the long-term effects of these methods [
General movement assessment, a noninvasive and cost-effective method for identifying babies at risk of CP, is also used [
In clinical research, the goniometer and inclinometer are used to measure joint angles in children with CP [
A general approach for capturing movement is the use of digital technologies, such as motion capture. Motion capture (also referred to as mo-cap or mocap) is the process of digitally recording the movement of people [
Another approach is to measure gesture control using electronic sensors, such as infrared (IR) light-emitting diodes. Gesture recognition software for advanced smartphones was presented by Kong et al [
With the development of inertial sensor technologies, IMU-based motion capture systems have been introduced in the study of human motion. IMUs comprise an accelerometer, gyroscope, and magnetometer that are connected to a microcontroller and can be used to capture orientation. In recent years, there have been several IMU-based motion capture research studies, such as studies of gait modulation in patients with foot drop problems [
An overview of all the relevant existing methods, including their advantages and disadvantages, can be seen in
Evaluation of existing methods.
Type of approach | Advantages | Disadvantages |
Goniometers [ | Low cost; can provide measurements very quickly | Lack of accuracy; does not provide long-term tracking of movement unless repeated multiple times; difficult when children are involved |
Video capture [ | Very accurate; can provide real-time orientation and active movement | Very costly; continued monitoring is not possible outside the motion capture studio; long setup time; facilities are not available to everyone |
IRa LEDb gesture recognition [ | Low cost; portable | Lack of accuracy; not possible for continued monitoring; mostly developed for entertainment use |
IMUc [ | Low cost; can provide a reasonably accurate orientation frame; low power consumption; portable | IMUs drift over time; the postprocessing of IMU data can be lengthy |
aIR: infrared.
bLED: light-emitting diode.
cIMU: inertial measurement unit.
This paper presents collaborative research between the School of Electrical Engineering, Computing and Mathematical Sciences at Curtin University and the investigator team of a multicenter RCT involving children with CP [
A second contribution of this paper is the application of ML to raw IMU data to classify the movement features associated with CP without the need for Euler angle, quaternion, or joint angle calculations. This means that the processing time is reduced because raw data are used for classification. This classification aims to investigate the existence of characteristics of CP movement, which is different from the clinical classification used for CP as a condition. This classification can also confirm if treatment (in this case, the use of wrist extension) is effective. After the initial data collection, 9 different ML algorithms were used to classify CP as a feature: the random forest algorithm achieved the highest accuracy of 87.75% for the age range approaching 15 years, and the C4.5 decision tree achieved the highest accuracy of 89.39% for the age range approaching 3 years. The result of this classification aligns with existing research in which ML is applied to classify foot drop using IMUs [
A custom-built IMU was developed to capture the hand movements of children with CP for this project. The IMU consisted of an MPU 9250, a custom-built Arduino Pro Mini, and a 2.4-GHz radio frequency (RF) radio. Each sensor was powered by a small 90 mAh, 3.7-V rechargeable lithium battery and could support up to 3 hours of nonstop measurement. The custom Arduino Pro Mini was previously designed by Dr Weiyang Xu as part of his thesis titled
The MPU 9250 (blue printed circuit board [PCB]), custom-built Arduino Pro Mini (green PCB), and RF module (red PCB); a comparison of the inertial measurement unit with an Australian five-cent coin; and the 3D printed case for the sensor [
The receiver dongle in the 3D printed case [
Specification of the inertial measurement unit (IMU).
Electronic module | Parameter | Value |
MPU 9250 IMU | Accelerometer FS range | ±2, ±4, ±8, and ±16 g |
MPU 9250 IMU | Gyroscope FS range | ±250, ±500, ±1000, and ±2000 °/s |
MPU 9250 IMU | Magnetometer FS range | ±1200 µT |
nRF24L01 transceiver | ISMa band operation | 2.4 GHz |
nRF24L01 transceiver | Air data rate | 250 kbps, 1 Mbps, and 2 Mbps |
nRF24L01 transceiver | Programmable output power | 0, −6, −12, or −18 dBm |
Arduino Pro Mini | Circuit operating voltage | 3.3 V or 5 V |
Arduino Pro Mini | Clock speed | 8 MHz (3.3 V version) or 16 MHz (5 V version) |
Arduino Pro Mini | Flash memory | 32 KB |
Arduino Uno | Circuit operating voltage | 5 V |
Arduino Uno | Clock speed | 16 MHz |
Arduino Uno | Flash memory | 32 KB |
aISM: Industrial, Scientific, and Medical.
The SPI is a synchronous, full-duplex serial bus standard that was introduced by Motorola to support communication between a master processor and multiple slaves [
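The defining property of SPI is that it is full duplex: every clock cycle shifts one bit out of the master and one bit out of the slave simultaneously, so a read always implies a write. As a minimal illustration (a pure-Python simulation of two 8-bit shift registers, not code from this project), one byte exchange can be sketched as:

```python
def spi_exchange(master_byte: int, slave_byte: int) -> tuple[int, int]:
    """Simulate one 8-bit SPI transfer: on each clock edge the master
    shifts a bit out on MOSI while the slave shifts a bit out on MISO,
    and each shifts the incoming bit into its own LSB."""
    master, slave = master_byte, slave_byte
    for _ in range(8):
        mosi = (master >> 7) & 1          # bit leaving the master
        miso = (slave >> 7) & 1           # bit leaving the slave
        master = ((master << 1) & 0xFF) | miso
        slave = ((slave << 1) & 0xFF) | mosi
    return master, slave                  # each now holds the other's byte

m, s = spi_exchange(0xA5, 0x3C)           # -> (0x3C, 0xA5)
```

After 8 clock cycles the two registers have swapped contents, which is why reading a register over SPI always requires clocking out a (possibly dummy) byte.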
The left diagram shows the serial peripheral interface (SPI) connection between Arduino Uno and the RF module, and the right diagram shows the SPI connection between the custom Arduino Pro mini and the RF module. MISO: Master In Slave Out; MOSI: Master Out Slave In; RF: radio-frequency; SCLK: Serial Clock.
The designed sensors needed to transfer data wirelessly to avoid hindering the hand movements of the participants in the project. Popular wireless communication technologies include Bluetooth, RF, WiFi, and infrared. Popular frequency ranges for wireless communication include the sub-GHz bands below 1 GHz (for long range) and the 2.4-GHz band (for short range). The proposed joint movement calculation system uses an nRF24L01 RF transceiver [
The I2C bus is a synchronous serial protocol originally developed by Philips Semiconductor (now known as NXP Semiconductors) in the early 1980s [
Schematic of the I2C connection between the custom Arduino Pro Mini and the inertial measurement unit. SCL: Serial Clock Line; SDA: Serial Data Line.
I2C timing diagram. SCL: Serial Clock Line; SDA: Serial Data Line.
The IMUs comprise an accelerometer, gyroscope, and magnetometer. Using sensor fusion techniques, an object’s orientation can be captured using differential equations describing its dynamic behavior, which can be derived from the Newton-Euler equations by means of the Euler angle parametrization [
The sensors collected raw acceleration and angular velocity in the X-, Y-, and Z-axes, and the results were postprocessed in MATLAB using a 2-sensor-based joint orientation algorithm. This algorithm shows the difference in relative movements between 2 sensors when they share the same frame and zero position [
Orientation of the MPU9250 inertial measurement unit chip, where X is Roll, Y is Pitch, and Z is Yaw [
Sensor placement showing sensor 1 connected to the back of the hand and sensor 2 connected above the wrist.
Using 2 sensors creates a relative system, so the rotation on the Y-axis or the orientation on the X-Z plane can simply be calculated using the following formula:
According to the tangent function, the angle β can be initially calculated using the acceleration from the X- and Z-axes, where β is the angle between the net acceleration and the acceleration on the X-Z plane. Therefore, the tangent of β can be calculated as follows:
The angles used in equation (2) can be seen in
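The tilt calculation described above can be sketched as follows. This is an illustrative reconstruction, not the project's MATLAB code: the use of atan2 in place of a raw tangent (to preserve the quadrant) is an implementation choice, and all function and variable names are assumptions.

```python
import math

def tilt_deg(ax: float, az: float) -> float:
    """Tilt about the Y-axis (ie, orientation in the X-Z plane) from
    static accelerometer readings, using atan2 to keep the quadrant."""
    return math.degrees(math.atan2(ax, az))

def joint_angle_deg(hand_ax, hand_az, wrist_ax, wrist_az):
    """With both sensor frames aligned, the wrist joint angle is simply
    the difference between the two sensors' tilt angles."""
    return tilt_deg(hand_ax, hand_az) - tilt_deg(wrist_ax, wrist_az)

# Gravity only (1 g) on each sensor: hand tilted 45 degrees, forearm flat.
angle = joint_angle_deg(
    math.sin(math.radians(45)), math.cos(math.radians(45)),  # hand sensor
    0.0, 1.0,                                                # wrist sensor
)
```

Here the hand sensor reports a 45° tilt and the forearm sensor 0°, giving a 45° wrist joint angle.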
The data sample rate for both sensors was set to 100 Hz, which reduced the difference in angular velocity measurements between each sample.
Unlike traditional yaw, pitch, and roll orientation systems, a reference plane was unnecessary in the present algorithm as both sensor axes were aligned so that the joint movement was equivalent to the orientation difference between the sensors. Therefore, only relative motion was used, and the impact from the environment was ignored [
The orientation of each individual sensor was calculated using the orientation reading and the angle moved during each sampling period; a complementary filter applied a high-pass filter to the main orientation tracker and adjusted it with a low-pass-filtered value from the accelerometer’s orientation measurement [
3D system for acceleration.
As the desired accuracy cannot be achieved by using only the acceleration, sensor fusion was used to increase the measurement accuracy by combining the data from both the accelerometer and the gyroscope. The accelerometer output was independent of each sample during the measurement period; therefore,
Here,
In the formula given above, n, m, and r are random integers and m is larger than 3. The total number of samples needs to be larger than n + (m−1) r + 100 m. These calculations lead to the following sensor fusion algorithm, which is based on a complementary filter:
where a, b, and c are the names of the measurement axes and n+1 is the current order of the sample.
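The complementary-filter fusion described above can be sketched as a per-sample update. The 100 Hz sample period matches this study; the filter coefficient (0.98) and the gyro bias used in the demonstration are assumed values for illustration, not parameters tuned for this project.

```python
import math

ALPHA = 0.98   # weight on the gyroscope (high-pass) path -- assumed value
DT = 0.01      # 100 Hz sample period, as used in this study

def complementary_step(angle_prev, gyro_rate, ax, az):
    """One filter update: integrate the gyro rate (deg/s) over DT, then
    blend in the accelerometer's tilt estimate to cancel long-term drift."""
    accel_angle = math.degrees(math.atan2(ax, az))
    return ALPHA * (angle_prev + gyro_rate * DT) + (1 - ALPHA) * accel_angle

# A stationary sensor with a 0.5 deg/s gyro bias: integrating the gyro
# alone would drift without bound, but the filter settles near the
# accelerometer's 0 deg answer.
angle = 10.0
for _ in range(1000):
    angle = complementary_step(angle, gyro_rate=0.5, ax=0.0, az=1.0)
```

After many iterations the estimate converges to a small fixed point (about 0.245° here) instead of drifting, which is the behavior the drift-removal step relies on.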
The results of these joint calculations were validated in the study by Sharif Bidabadi et al [
The IMU sensors generated time-series data from the accelerometer, gyroscope, and magnetometer around the 3 axes. First, small sections were removed from readings taken at the beginning of the experiments when the IMU sensors were not stabilized. Then, the remaining data collected by each sensor from each experiment in 3 orientations (ie, pitch, roll, and yaw) were converted into frequency-domain representations by performing fast Fourier transform. Converting data to the frequency domain can successfully capture the characteristics of gait motion, as shown by similar experiments in [
Each experiment was then labeled 0 for a typically developing child and 1 for a child with CP.
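The frequency-domain feature extraction above can be sketched as follows. This is a hedged illustration: a direct DFT stands in for the fast Fourier transform for brevity, the number of harmonics kept is an assumption, and the function names are illustrative rather than taken from the study's pipeline.

```python
import cmath
import math

FS = 100  # Hz, the sample rate used in this study

def dft_bin(signal, k):
    """One DFT coefficient: correlation of the signal with e^{-2*pi*i*k*n/N}."""
    n = len(signal)
    return sum(x * cmath.exp(-2j * math.pi * k * i / n) for i, x in enumerate(signal))

def fft_features(signal, n_harmonics=5):
    """Magnitude and phase of the first few non-DC bins of one orientation
    channel (pitch, roll, or yaw), returned as a flat feature list."""
    mean = sum(signal) / len(signal)
    centered = [x - mean for x in signal]
    bins = [dft_bin(centered, k) for k in range(1, n_harmonics + 1)]
    return [abs(b) for b in bins] + [cmath.phase(b) for b in bins]

# A 2 Hz sinusoid sampled for 1 s: the energy lands in bin 2 (= 2 Hz).
signal = [math.sin(2 * math.pi * 2 * i / FS) for i in range(FS)]
feats = fft_features(signal)
```

For a pure 2 Hz component over one second of data, the second magnitude feature dominates while the others stay near zero, which is the kind of movement signature the classifiers operate on.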
The problem of distinguishing typical hand movements from hand movements of children with CP constitutes a binary classification problem, that is, classification between two classes. Various algorithms can be constructed using different ML methods based on existing data that can be used to classify unseen data. This process is called supervised learning.
To decide between the 2 classes, ML algorithms for binary classification establish decision boundaries that separate the data points in the training data set from the 2 classes. This process relies on optimizing a cost function that varies between the algorithms. Most algorithms, such as logistic regression, support vector machine, decision trees, and neural networks, aim to construct a model with parameters that are learned from the training data set, whereas some algorithms operate directly on the data set, for example, k-nearest neighbors. Although there are numerous libraries and tools offering implementations of ML algorithms [
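As a minimal concrete instance of one of the algorithms named above, a k-nearest-neighbors binary classifier can be sketched in a few lines. The feature vectors below are toy values, not trial data, and the implementation is a teaching sketch rather than the Weka implementation used in this study.

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` (0 = typical, 1 = CP-like movement) by majority
    vote of the k nearest training points under Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy 2D features forming two separable clusters.
train = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.0, 0.3), 0),
         ((0.9, 0.8), 1), ((1.0, 1.1), 1), ((0.8, 0.9), 1)]
pred = knn_predict(train, (0.95, 1.0))
```

Unlike the model-building algorithms (logistic regression, decision trees, neural networks), this classifier operates directly on the stored data set at prediction time, as noted in the text.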
As a part of an Australia-wide CP research study called the
For this project, the aim was to capture CP movement as a feature by ML on the raw IMU data by focusing on the data collected during the stop sign task in the MIT and iWHOT. Each participant was asked to perform a simple stop sign motion to capture the maximum wrist joint angle as well as the maximum range of movement. To achieve this study’s aim, two separate experiments were run using participants who were approaching the age of 3 years from iWHOT and participants who were approaching the age of 15 years from MIT. From MIT, 263 samples from 89 participants with CP and 199 samples of typical movement data captured from 30 participants without CP were used. The participants without CP simulated typical movements to reach 199 samples. From iWHOT, 171 samples from 51 participants with CP and 149 samples from 20 participants without CP were used.
Ten-fold cross-validation (90% training and 10% testing in each fold) was used to train and test the classifier, as can be seen in the next section of this paper. The CP data were collected by the research teams working on the MIT and iWHOT trials according to ethically approved procedures (HREC REF 201406EP) and with signed, informed consent from all the participants’ parents or guardians. Deidentified data were used to produce ML results, which are analyzed in the
After the data were captured, they were processed and run through the different equations described in the joint calculation section of this paper. Through these calculations, the drift was removed, and the joint angle was calculated, the results of which are shown in
Raw data captured with the sensor connected to the hand (data without CP). CP: cerebral palsy.
Raw data captured with the sensor connected above the wrist (data without CP). CP: cerebral palsy.
Stop sign motion required by the participants.
Joint angle results from a participant without CP. CP: cerebral palsy.
The stop sign trials from participants with CP were captured using the same IMUs as those used in the previous group. The results of the raw data captured from the CP participants are shown in
Anecdotal feedback from MIT and iWHOT researchers was positive about the potential of IMUs to contribute accurate data about active ROM, especially in children for whom goniometric methods are challenging.
After the initial angles were calculated, several classical ML models were trained to create a classifier for the captured data. The Waikato Environment for Knowledge Analysis platform [
Raw data captured with the sensor connected to the hand (data with CP). CP: cerebral palsy.
Raw data captured with the sensor connected above the wrist (data with CP). CP: cerebral palsy.
Joint angle results from a participant with CP. CP: cerebral palsy.
The resultant evaluation metrics are accuracy, that is, the number of correctly classified instances over the total number of instances, and the area under the receiver operating characteristic (ROC) curve (AUC). The ROC curve maps the false positive rate as the x-coordinate and the true positive rate as the y-coordinate. Ten-fold cross-validation was adopted, splitting the data set into 10 parts, training the models with 9 parts, and testing with 1 part each time for a total of 10 times. The accuracy and AUC were obtained by averaging the 10 sets of results and taking the weighted average of the 2 classes. The baseline of the experiments was obtained from ZeroR, a classifier that predicts the class that occurs most often in the training data set as the label without considering other features.
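Both metrics can be computed directly from classifier scores: accuracy by thresholding, and AUC by using the fact that it equals the probability that a randomly chosen positive instance is scored above a randomly chosen negative one. A minimal sketch with toy scores (not trial results), and an assumed 0.5 threshold:

```python
def auc(scores, labels):
    """AUC as the normalized count of positive/negative pairs in which the
    positive example receives the higher score (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(scores, labels, threshold=0.5):
    """Fraction of instances whose thresholded score matches the label."""
    preds = [int(s >= threshold) for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

scores = [0.9, 0.4, 0.6, 0.7, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0]
a = auc(scores, labels)          # 8 of 9 positive/negative pairs ranked correctly
acc = accuracy(scores, labels)   # 4 of 6 instances correctly thresholded
```

This pairwise formulation is why AUC is insensitive to the decision threshold, while accuracy depends on where the threshold is placed.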
Machine learning results using Minimising Impairment Trial data, showing the best accuracy.
Algorithm | Accuracy (%) | AUCa |
OneR | 84.23 | 0.848 |
Logistic regression | 72.79 | 0.749 |
Naïve Bayes | 65.23 | 0.752 |
Bayes Net | 80.99 | 0.832 |
C4.5 decision tree | 74.95 | 0.740 |
Random forestb | 87.75 | 0.867 |
Multilayer perceptron | 80.35 | 0.865 |
Support vector machine | 79.70 | 0.794 |
K-nearest neighbors | 82.07 |  |
ZeroR |  |  |
aAUC: area under the curve.
bThe best accuracy and area under the curve values are italicized.
The ROC curves of 10 classification algorithms using the Minimising Impairment Trial data. The area under the curve values are the areas between the ROC curves and the x-axis. ROC: receiver operating characteristic.
Curiously, OneR uses only a single feature and achieves 84.23% classification accuracy. The algorithm uses the 91st feature, which is the phase shift corresponding to the second harmonic obtained from the hand sensor. This phenomenon may indicate that the most useful information for classification is recorded by the hand sensor and that omitting one sensor may be possible in the future.
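The OneR idea referenced above is simple enough to sketch: build a rule on a single feature by discretizing it and predicting each bin's majority class. The bin count, the equal-width binning, and the toy "phase shift" data below are all illustrative assumptions, not the Weka implementation or trial data.

```python
from collections import Counter

def one_r(feature_values, labels, n_bins=3):
    """OneR on one numeric feature: discretize into equal-width bins and
    predict each bin's majority class; unseen bins fall back to the
    overall majority class."""
    lo, hi = min(feature_values), max(feature_values)
    width = (hi - lo) / n_bins or 1.0
    bin_of = lambda v: min(int((v - lo) / width), n_bins - 1)
    majority = Counter(labels).most_common(1)[0][0]
    counts = {}
    for v, y in zip(feature_values, labels):
        counts.setdefault(bin_of(v), Counter())[y] += 1
    rule = {b: c.most_common(1)[0][0] for b, c in counts.items()}
    return lambda v: rule.get(bin_of(v), majority)

# Toy phase-shift feature: low values -> typical (0), high values -> CP (1).
phase = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
label = [0,   0,   0,    1,   1,   1]
classify = one_r(phase, label)
```

That a one-feature rule of this kind can be competitive is exactly the observation made about the hand sensor's second-harmonic phase shift.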
Machine learning results using Infant Wrist Hand Orthosis Trial data.
Algorithm | Accuracy (%) | AUCa |
OneR | 88.13 | 0.886 |
Logistic regression | 80.94 | 0.906 |
Naïve Bayes | 86.88 |  |
Bayes Net | 88.43 | 0.921 |
C4.5 decision treeb | 89.39 | 0.858 |
Random forest | 81.88 | 0.917 |
Multilayer perceptron | 81.25 | 0.937 |
Support vector machine | 83.75 | 0.783 |
K-nearest neighbors | 83.44 | 0.896 |
ZeroR |  |  |
aAUC: area under the curve.
bThe best accuracy and area under the curve values are italicized.
The ROC curves of 10 classification algorithms using the Infant Wrist Hand Orthosis Trial data. The area under the curve values are the areas between the ROC curves and the x-axis. ROC: receiver operating characteristic.
Upon completion of the project, the wrist joint angle was successfully calculated, and CP movement was classified as a feature using ML on raw IMU data.
There are some limitations to the IMU setup used in this study, such as the inherent drift of IMUs, which can be corrected by the drift mitigation techniques described in the methods; these techniques may prove problematic for longer trials. There were other issues during the data collection sessions, such as contact between the 2 sensors (hand and forearm) because of the small hands of some participants, or accidental touching of the sensors by the therapist while using the goniometer, both of which increased the noise in the data. Bugs in the data collection interface created for technicians also resulted in some corrupted data and data loss, which added to the preprocessing time of the ML section of this study. Finally, at the initial stages of the project, the scale of the accelerometer was set at ±2 g because the slower-moving trials rarely exceeded this value. Once free play situations were introduced, which usually contain rapid movement, particularly in younger children, it was observed that the accelerometer scale needed to be extended beyond this threshold, as clipping reduced accuracy and caused some data loss; the scale was therefore switched to ±16 g for faster trials.
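A saturation problem like the one described above can be caught automatically by flagging samples pinned at the configured full-scale range. A minimal sketch (the margin fraction and the example readings are assumed values for illustration):

```python
def clipped_fraction(samples, full_scale_g=2.0, margin=0.99):
    """Fraction of accelerometer samples at (or very near) the sensor's
    configured full-scale range -- a sign the range setting is too small
    for the movement being recorded."""
    limit = full_scale_g * margin
    return sum(abs(s) >= limit for s in samples) / len(samples)

# Rapid free-play movement saturating a ±2 g setting:
readings = [0.1, 1.9, 2.0, 2.0, -2.0, 0.5, 2.0, 1.2]
frac = clipped_fraction(readings)  # half the samples are pinned at full scale
```

Running such a check during collection would have flagged the ±2 g trials as clipped before the data were lost.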
As part of future work, real-time calculation of joint angle and orientation data can be implemented so that direct quaternions can be collected and used for this calculation. The research team involved in this paper began the preliminary work on this next step and plans to publish their results once the solution has been fully created. The sensor setup will also be updated to remove the reliance on a separate receiver dongle by switching the communication module to Bluetooth Low Energy transfer to a smartphone application. These changes to the user experience and the medium of transfer would improve data collection, enable better continued monitoring of children with CP, and allow quicker trial sessions in routine appointments for children with CP.
CONSORT-eHEALTH checklist (V 1.6.1).
AUC: area under the curve
CP: cerebral palsy
I2C: inter-integrated circuit
IMU: inertial measurement unit
IR: infrared
iWHOT: Infant Wrist Hand Orthosis Trial
MIT: Minimising Impairment Trial
ML: machine learning
RCT: randomized controlled trial
RF: radio frequency
ROC: receiver operating characteristic
ROM: range of motion
SPI: serial peripheral interface
WEKA: Waikato Environment for Knowledge Analysis
This research received no external funding; however, the Minimising Impairment Trial (MIT) was funded through the Australian Catholic University Research Fund (APP2013000413) and the National Health and Medical Research Council Centre for Research Excellence–Cerebral Palsy (grant APP1057997). Additional funding was received from the Perth Children’s Hospital Foundation and Monash Health. These trials were registered with the ANZ Clinical Trials Registry (ACTRN12614001276640 and ACTRN12614001275651). The authors would like to acknowledge the guidance provided by Dr Shiva Bidabadi in sensor setup, all the hard work of the occupational therapists who were in direct contact with patients with cerebral palsy, especially Ms Sherilyn Nolan, who helped explain the activities during the data collection trials, and Dr Weiyang Xu for his efforts in the validation of sensors. The cerebral palsy data were collected by the research teams working on the MIT trial according to ethically approved procedures (HREC REF 201406EP) and with signed, informed consent from all the participants’ parents and guardians. The deidentified data were used to produce the machine learning results.
Conceptualization of the study was by IM and SK. Data curation was conducted by CE, CI, AC, and CW. Formal analysis was conducted by SK. Investigation was conducted by SK and HA. Methodology was devised by SK and IM. Resources were provided by CE and CI. Software provision was by SK and HP. Supervision was conducted by IM and WL. Validation was carried out by SK. Visualization was conducted by SK and BB. Original draft was written by SK. BB, HA, CE, and CI were involved in reviewing and editing the paper.
None declared.