Published in Vol 11 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/57940.
Integrated Approach Using Intuitionistic Fuzzy Multicriteria Decision-Making to Support Classifier Selection for Technology Adoption in Patients with Parkinson Disease: Algorithm Development and Validation


1Department of Productivity and Innovation, Universidad de la Costa CUC, 58th Street #55-66, Barranquilla, Colombia

2School of Computing, Ulster University, Belfast, United Kingdom

3School of Transportation and Logistics, Istanbul University, Istanbul, Turkey

4Department of Emergency Aid and Disaster Management, Munzur University, Munzur, Turkey

5Department of Industrial Engineering, Institución Universitaria de Barranquilla, Barranquilla, Colombia

*these authors contributed equally

Corresponding Author:

Miguel Ortiz-Barrios, Prof Dr


Background: Parkinson disease (PD) is reported to be among the most prevalent neurodegenerative diseases globally, presenting ongoing challenges and increasing burden on health care systems. In an effort to support patients with PD, their carers, and the wider health care sector to manage this incurable condition, the focus has begun to shift away from traditional treatments. One of the most contemporary treatments includes prescribing assistive technologies (ATs), which are viewed as a way to promote independent living and deliver remote care. However, the uptake of these ATs is varied, with some users not ready or willing to accept all forms of AT and others only willing to adopt low-technology solutions. Consequently, to manage both the demands on resources and the efficiency with which ATs are deployed, new approaches are needed to automatically assess or predict a user’s likelihood to accept and adopt a particular AT before it is prescribed. Classification algorithms can be used to automatically consider the range of factors impacting AT adoption likelihood, thereby potentially supporting more effective AT allocation. From a computational perspective, different classification algorithms and selection criteria offer various opportunities and challenges to address this need.

Objective: This paper presents a novel hybrid multicriteria decision-making approach to support classifier selection in technology adoption processes involving patients with PD.

Methods: First, the intuitionistic fuzzy analytic hierarchy process (IF-AHP) was implemented to calculate the relative priorities of criteria and subcriteria considering experts’ knowledge and uncertainty. Second, the intuitionistic fuzzy decision-making trial and evaluation laboratory (IF-DEMATEL) was applied to evaluate the cause-effect relationships among criteria/subcriteria. Finally, the combined compromise solution (CoCoSo) was used to rank the candidate classifiers based on their capability to model the technology adoption.

Results: We conducted a study involving a mobile smartphone solution to validate the proposed methodology. Structure (F5) was identified as the factor with the highest relative priority (overall weight=0.214), while adaptability (F4) (D-R=1.234) was found to be the most influencing aspect when selecting classifiers for technology adoption in patients with PD. In this case, the most appropriate algorithm for supporting technology adoption in patients with PD was the A3 - J48 decision tree (M3=2.5592). The results obtained by comparing the CoCoSo method in the proposed approach with 2 alternative methods (simple additive weighting and technique for order of preference by similarity to ideal solution) support the accuracy and applicability of the proposed methodology. It was observed that the final scores of the algorithms in each method were highly correlated (Pearson correlation coefficient >0.8).
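The agreement reported between the ranking methods can be checked by computing the Pearson correlation between the methods' final score vectors. A minimal sketch, using hypothetical score vectors for 5 candidate classifiers rather than the study's reported values:

```python
import numpy as np

# Hypothetical final scores for 5 candidate classifiers under each
# ranking method (illustrative values only; not the study's data)
cocoso = np.array([2.56, 1.98, 2.21, 1.75, 2.05])
saw = np.array([0.84, 0.66, 0.73, 0.58, 0.69])
topsis = np.array([0.71, 0.55, 0.62, 0.48, 0.57])

r_saw = np.corrcoef(cocoso, saw)[0, 1]        # Pearson r, CoCoSo vs SAW
r_topsis = np.corrcoef(cocoso, topsis)[0, 1]  # Pearson r, CoCoSo vs TOPSIS
```

With scores that preserve the same ordering across methods, both coefficients exceed the 0.8 threshold cited above.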

Conclusions: The IF-AHP-IF-DEMATEL-CoCoSo approach helped to identify classification algorithms that do not just discriminate between good and bad adopters of assistive technologies within the Parkinson population but also consider technology-specific features like design, quality, and compatibility that make these classifiers easily implementable by clinicians in the health care system.

JMIR Rehabil Assist Technol 2024;11:e57940

doi:10.2196/57940

Keywords



Background

Advances in the economy, health care, science, and technology have significantly influenced demographics. Between 2000 and 2019, global average life expectancy increased by over 6 years to 73.4 years; however, healthy life expectancy has not kept pace [1]. Consequently, the years spent living with illness or disease have increased, with approximately 1 in 3 adults suffering from multiple chronic conditions, and 3 in 4 older adults living with 1 or more chronic conditions [2]. This has added unsustainable pressure on society’s ability to provide long-term economic care, promoting a renewed drive for innovative treatment.

One initiative has been to seek efficiencies in health care delivery through prescribing assistive technologies (ATs). ATs typically support health care outside traditional settings, aiding in remote monitoring of conditions, thereby promoting the independence of individuals and caregivers. Older users, however, who tend to be less familiar with technology advancements, remain hesitant to readily adopt ATs as a long-term, low-cost replacement for human care. Consequently, low acceptance rates, along with the requirement to update prescribed ATs as a condition evolves, remain a significant challenge to widespread adoption [3].

One mitigation is to preassess adoption likelihood so that the appropriate solutions are deployed, decommissioned, and replaced accordingly over time. A research challenge exists to appropriately identify and develop automated algorithms that can assess adoption likelihood. This paper investigates this challenge and extends our previous work, identifying the most appropriate classification algorithms to support AT assessment [4,5]. The novelty of this study also lies in the use of an integrated intuitionistic fuzzy multicriteria decision-making (MCDM) approach to dealing with this problem. This approach addresses uncertainty better with the nonmembership function [6], which helps better define the evaluations of decision makers [7], and minimizes information loss in operations with fuzzy numbers [8]. Specifically, we used intuitionistic fuzzy analytic hierarchy process (IF-AHP) to estimate initial criteria weights, intuitionistic fuzzy decision-making trial and evaluation laboratory (IF-DEMATEL) to evaluate interrelations among criteria, and combined compromise solution (CoCoSo) to rank classifiers. The study uncovered factors influencing the design of algorithms that can accurately prescribe AT. The results highlight scalability, adaptability, and performance as key criteria alongside ease of interpretation for confident deployment and the use of transparent, white-box algorithms to enhance usability and acceptance. The paper presents the findings using a case study considering technology adoption among patients with Parkinson disease (PD), which is a leading chronic condition affecting approximately 10 million people, with the majority of symptoms typically developing after age 50 [9].

In this paper, we begin by presenting related works to highlight the opportunities and challenges in this research domain, then describe the proposed methodological approach. Next, we present and critique the main findings of our work, and finally consolidate these observations toward summarizing the main scientific implications evidenced.

Review of the Literature

Statistical and machine learning (ML) approaches are increasingly promising in technology adoption modeling research. In particular, ML is vital in advancing and validating theoretical frameworks of technology adoption and improving their predictive power.

The most popular theories for technology adoption are the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) [10]. Both TAM and UTAUT suggest that technology use is impacted by an individual’s behavioral intention to use it. In the TAM, a person’s attitude to technology, determined by perceived usefulness and perceived ease of use, is used to measure intention to use [11]. UTAUT builds on this, in addition to other theoretical frameworks. In UTAUT, 4 constructs impacting intention to use are considered: (1) performance expectancy, (2) effort expectancy, (3) social influence, and (4) facilitating conditions. UTAUT additionally considers constructs of age, gender, voluntariness, and experience of use to temper expectations of intention to use by the individual.

Historically, researchers in technology adoption have considered 3 elements when modeling adoption: users, technology, and environment. They have constructed these elements within the frameworks mentioned above. Although frameworks such as these have made significant inroads in furthering our understanding of technology adoption, they are not without limitations. Both TAM and UTAUT have been criticized for being overly simplistic and focusing on a narrow perspective of individuals’ beliefs, perceptions, and usage intention. Additionally, several studies have highlighted that these theories no longer contribute new knowledge or understanding to technology adoption. Therefore, new ways of understanding technology adoption are required [11].

Recently, technology adoption researchers have highlighted an additional limitation of frameworks, including TAM and UTAUT. Specifically, these models have been developed focusing on explanatory and causal modeling techniques. This may overlook the nonlinearity and influence of technology-specific features such as design, quality, or compatibility [12]. With the rise of the availability of discrete data sources, such as that generated using digital health applications, there has been greater interest from the research community in data-driven approaches for technology adoption. ML approaches to technology adoption can be broadly split into 2 groups: predictive modeling and explanatory modeling [12]. Predictive modeling seeks to predict actual use behavior (adopt or not), while explanatory modeling is focused on the interaction between various constructs that influence the adoption of a specific technology.

As a data-driven modeling technique, an ML methodology empirically predicts the targeted output (adopted or not). Although it is possible to combine predictive modeling with explanatory modeling, research has shown that higher accuracy can be achieved using only predictive modeling. The 2 common approaches to developing a predictive model using ML for technology adoption are supervised and unsupervised learning. In supervised ML, the most common approach is to develop predictive models using classification or regression. Supervised ML models used for technology adoption have included multiple linear regression, support vector machine, multilayer perceptron, random forest, decision tree, or ensemble methods [12,13]. The model’s predictive accuracy is measured by comparing the performance of 1 or more ML algorithms. In contrast, unsupervised ML for technology adoption is developed by applying ML algorithms, typically based on clustering, to gain insight into the factors that inform adoption. To complement and enhance the performance of ML, feature selection techniques can be used to reduce the dimensionality of the data and improve the reliability of the model. Feature selection techniques help to exclude irrelevant factors that have a negligible impact on the model or are redundant.
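As a sketch of the supervised route described above, the following compares 3 of the named classifier families by cross-validated accuracy. The dataset is a synthetic stand-in generated with scikit-learn; the sample size, feature count, and model settings are illustrative assumptions, not the study's configuration:

```python
# Synthetic stand-in for an adoption dataset (features and labels are
# illustrative; the study's data are not reproduced here)
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
}
# 5-fold cross-validated accuracy per candidate classifier
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
```

Comparing such mean accuracies is the conventional "performance only" selection step that the MCDM approach in this paper augments with further criteria.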

The Technology Adoption and Usage Tool project aimed to model the adoption of mobile-based reminding solutions by people with dementia and their carers [12]. The project took an iterative approach to model development, using a unique and diverse dataset obtained by recruiting 335 participants. The dataset contained genealogical, medical, and demographic records created by combining data from the Cache County Study on Memory in Aging and the Utah Population Database. Participants were categorized into 4 groups: 3 types of nonadopter (1=willing but unable, 2=not willing and not able, 3=not willing but able) and 1 adopter group. The study assessed the ability to classify whether an individual would adopt the technology using various ML algorithms. Results showed that, when psychosocial and medical history information was included, the adoption model based on the k-nearest neighbors (k-NN) algorithm achieved a prediction accuracy of 99.41% [14]. The study also investigated the effect of feature selection on each algorithm, with information gain used to rank features in terms of discriminating power for classifications.
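The information gain ranking mentioned above can be approximated with scikit-learn's mutual information estimator. The data here are synthetic and purely illustrative; the cited study used genealogical, medical, and demographic attributes:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in data (8 hypothetical features, 3 of them informative)
X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=1)
gain = mutual_info_classif(X, y, random_state=1)   # information-gain proxy
ranking = np.argsort(gain)[::-1]  # feature indices, most informative first
```

Features at the tail of `ranking` are candidates for removal, which is how feature selection reduces dimensionality before classifier training.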

Ortiz et al [15] proposed a multicriteria decision-making approach for technology adoption modeling for people with dementia. This work applied a fuzzy analytic hierarchy process (FAHP) to estimate the initial weights of criteria and subcriteria. The decision-making trial and evaluation laboratory (DEMATEL) was then used to evaluate the relationship and feedback among criteria. The technique for order of preferences by similarity to ideal solution (TOPSIS) was then used to rank 3 classifiers (k-NN, naive Bayes, and decision tree) according to their ability to model technology adoption. Results showed that flexibility and design were the most relevant criteria, with overall weights of 0.235 and 0.260, respectively. Naive Bayes was the most suitable classifier, with a closeness coefficient of 67.7%. It was noted that there was room for further improvement of all models tested in terms of performance and scalability.

As highlighted by the related work, ML adoption models have seen significant improvements since being first developed. These models have been tailored to suit some use cases and technical solutions. The models have also been extended to include a range of constructs and demographics [14,16]. The likelihood of adoption is transient and spans not only the physical product design and characteristics of the individual but also the social settings and channels through which the technology is implemented and disseminated. Indeed, a user’s perception of technology’s ease of use and usefulness may change over time as the needs, capabilities, and perceptions of the individual and society change and technology capabilities advance.

Indeed, evidence suggests there are substantial benefits to be made for ML-based approaches to technology adoption [17]. Simple regression-based models have a demonstrable ability to predict individuals who are likely to adopt technology with an accuracy of over 90% [14]. Parameters used as input into these models have ranged from sociodemographic information, such as age and education, to measures of prior technology experience and perceived usefulness/ease of use. More recent models have expanded these input parameters to include detailed medical history [14]. It has also been possible, through the inclusion of additional processing steps of selecting features, to refine the adoption model and improve the generalization of the modeling process [14]. Adoption models have been evaluated and chosen solely based on performance (accuracy). There would be a benefit in paying closer attention to other important metrics when selecting a suitable classifier. As different classifiers and selection criteria can be considered for addressing this problem, this paper presents a hybrid MCDM approach to support classifier selection in technology adoption processes involving patients with PD. First, the IF-AHP is implemented to calculate the relative priorities of criteria and subcriteria considering experts’ knowledge and uncertainty. Second, the IF-DEMATEL is applied to evaluate the cause-effect relationships among criteria/subcriteria.

The methodology we propose differs from similar studies in the literature in terms of its theoretical and practical contributions. As a methodological contribution, MCDM methods are integrated. Thanks to MCDM, effective and reliable decisions can be made [18], and complex problems can be solved by breaking them into smaller parts [19]. MCDM is a methodology that guides decision-makers in structuring and solving decision and planning problems involving multiple criteria [20]. Decision-makers use MCDM methods to evaluate possible alternatives and determine how these alternatives affect the decision-making objective [21]. Furthermore, MCDM methods can help the decision-maker determine each criterion’s importance and identify trade-offs between these criteria. Thus, a comparative application can be performed with MCDM methods, and the best alternative solutions can be provided to decision-makers. Although decision-makers use MCDM methods in health care management, such as health care performance assessment [22] and measuring the efficiency of hospitals [23], their use in specialized areas, such as health care technology adoption, is rare [24]. As a practical contribution, the approach addresses a real need in PD care: because there is no cure for PD and the benefits of ATs vary from patient to patient, actual outcome data are rarely available when selecting an AT that will increase the patient’s quality of life. Determining the appropriate classification algorithm therefore involves vagueness and ambiguity.

In recent years, decision-makers have integrated multiple MCDM methods for complex problems [25]. Among MCDM methods, AHP, DEMATEL, and CoCoSo were used in this study. AHP does not require the complex mathematical calculations used in criteria weighting and allows the decision-maker to focus on each criterion [26]. Since the AHP method could not reflect the uncertainty of the decision-makers, a method named FAHP was developed by using fuzzy logic and AHP together [27]. However, FAHP was also criticized in the literature because it could not fully express decision-makers’ hesitancy. Therefore, the IF-AHP method is more effective in addressing the hesitations of decision-makers [28].

Unlike traditional MCDM methods, DEMATEL offers a more appropriate solution to real-world problems by considering the interactions between criteria [29,30]. Standard DEMATEL may often fail to represent the uncertainty encountered in real-world problems [31]. To overcome this situation, DEMATEL has been integrated with fuzzy logic [32], and intuitionistic fuzzy sets (IFS) have likewise been integrated with DEMATEL. The IF-DEMATEL calculation is almost the same as standard DEMATEL; the most apparent differences are the input data and the averaging method [33]. In IF-DEMATEL, decision-makers express their preferences with IFS, and for group information, the intuitionistic fuzzy weighted averaging (IFWA) operator is used [34].

CoCoSo is a method based on the integration of the recently developed weighted sum method and weighted product method [35,36]. CoCoSo provides a more robust solution than traditional MCDM methods [37]. It is integrated with AHP and DEMATEL in an IF environment. With IF-AHP, decision-makers were provided with the ability to express uncertainties better, and a more realistic evaluation was made. Similarly, cause and effect criteria were determined with IF-DEMATEL. Finally, candidate classifiers were ranked according to their transferability index using CoCoSo. Another contribution of the study to the literature is in the validation part: the proposed methodology was verified using a mobile phone app case study.

This literature review highlights several critical research gaps in technology adoption modeling. First, while statistical and ML approaches, particularly ML, hold great promise in advancing theoretical frameworks of technology adoption and improving their predictive power, there is a need for more nuanced models that account for nonlinear relationships and technology-specific features like design, quality, and compatibility. Additionally, the evaluation of classifiers has traditionally been based solely on performance (accuracy). However, other metrics should be taken into account for a more comprehensive assessment. The proposed MCDM approach offers a promising method for integrating various criteria and subcriteria to make more effective and reliable decisions in technology adoption processes.

To address these research gaps, this study introduces a hybrid MCDM approach to aid in selecting classifiers for technology adoption processes, specifically those involving patients with PD. First, considering both expert knowledge and uncertainty, the IF-AHP was utilized to determine the relative priorities of criteria and subcriteria. Next, the IF-DEMATEL was used to assess the cause-effect relationships among these criteria and subcriteria. Last, the CoCoSo was used to rank the potential classifiers based on their effectiveness in modeling technology adoption. A mobile smartphone solution case study was conducted to validate the proposed methodology.

A Brief Criticism and Gap Analysis in Technology Adoption Literature

The literature review explores the evolution and limitations of technology adoption modeling, emphasizing the growing significance of statistical and ML approaches. Although foundational theories like the TAM and UTAUT have shaped understanding by focusing on individual beliefs and perceptions, they are criticized for their simplicity, their narrow focus on individuals’ beliefs and intentions, and their neglect of broader contextual factors. In addition, they often fail to account for the complex interactions between users, technology, and the environment, as well as nonlinear relationships and technology-specific features like design, quality, and compatibility. Furthermore, these traditional theories no longer provide new insights into technology adoption, necessitating new approaches. Their reliance on explanatory and causal modeling techniques overlooks the potential of unobtrusive data sources and the inherent nonlinearity in adoption processes.

Recent research highlights the need for more sophisticated models that explain nonlinear relationships and technology-specific features such as design and compatibility. ML methods, categorized into predictive and explanatory modeling, offer promising avenues for enhancing predictive accuracy, although challenges remain in evaluating models beyond traditional metrics like accuracy. Current literature gaps include the overemphasis on performance metrics, neglecting other important evaluation criteria.

Integrating MCDM methods addresses some of these challenges by considering criteria such as flexibility and user preference, thus providing a more comprehensive approach to classifier selection. Empirical validations, such as the successful prediction of technology adoption among patients with dementia using ML algorithms, underscore the potential and the necessity for ongoing refinement and ethical considerations in technology adoption research. Recent studies suggest the use of MCDM methods in the selection of ML classifiers to address these limitations. Methods like the IF-AHP and IF-DEMATEL can evaluate a broader range of criteria, offering a more comprehensive understanding of technology adoption. Moreover, an important MCDM method such as CoCoSo successfully ranks the classifiers based on their effectiveness in modeling technology adoption. Thus, a study such as the present one is needed to validate these methodologies and explore their application in different technological contexts, such as health care technology adoption, where individual needs and preferences play a crucial role in adoption decisions.


Overview

A 5-step intuitionistic fuzzy MCDM approach is proposed to support classifier selection in technology adoption for people with PD (Figure 1). The validation process considers a mobile smartphone solution entailing 4 intervention categories: tipping, memory, walking, and voice. A detailed explanation of this framework is provided below.

Figure 1. Flow chart of the 5-step intuitionistic fuzzy MCDM approach. CoCoSo: combined compromise solution; IF-AHP: intuitionistic fuzzy analytic hierarchy process; IF-DEMATEL: intuitionistic fuzzy decision-making trial and evaluation laboratory; MCDM: multicriteria decision-making.

Step 1: Selection of a Decision-Making Group

This step is about establishing a decision-making team that compares criteria and subcriteria in both the IF-AHP and IF-DEMATEL phases. The team is expected to be familiar with both the criteria set in the structure of the problem and the alternative algorithms that are likely to be evaluated. Although determining the transferability indexes of the algorithms is the task of the CoCoSo model, this team must know the general outline of the problem. It is recommended that the experts enrolled in this team be highly educated and have substantial industry experience.

Step 2: Structuring the MCDM Network

The MCDM network is the decision structure formed by the selection criteria and subcriteria; it also includes the main goal and the alternative classification algorithms. At this stage, the literature and the opinions of field experts were used to define the decision criteria. Since the most crucial step in framing the problem is establishing this MCDM network, it should be kept in mind that ignoring a criterion or subcriterion that affects the selection process will affect the final decision and may lead to a wrong selection.

Step 3: Estimating Criteria and Subcriteria Relative Priorities Considering Uncertainty and Hesitancy

This section is about determining the relative importance levels of criteria and subcriteria under the uncertainty of the decision process and the decision-maker’s hesitation. At this point, the advantages of the membership and nonmembership functions that intuitionistic fuzzy sets offer for high uncertainty and hesitation in decision-making emerge. At the same time, their combination with the AHP algorithm provides both the individual advantages of the 2 concepts and the integrated advantages. Although AHP is easy to use and widespread in the literature, it falls short in responding to hesitancy in decision-making, and this gap can be remedied with IF-AHP. In this phase, the established expert group performs pairwise comparisons, and the relative importance of the criteria and subcriteria is found using the IF-AHP algorithm.

Step 4: Evaluation of Interdependence Among Decision Criteria/Subcriteria Under an Intuitionistic Fuzzy Environment

The reason for using the DEMATEL method (under an intuitionistic fuzzy environment) in this triple structure (IF-AHP, IF-DEMATEL, and CoCoSo) is to determine the relationships between the criteria and subcriteria (whose weights are obtained via the IF-AHP algorithm in the first phase) and to focus on the strength and presence of feedback. In other words, it determines which criteria/subcriteria are causes and which are effects regarding classifier selection decision-making in technology adoption for people with PD.

Step 5: Calculation of Transferability Index Per Algorithm

In the final phase of the triple structure, the CoCoSo MCDM method is used to compute the transferability index of each classification algorithm used in technology adoption for people with PD. This index value measures the algorithm’s capability to model the technology adoption. It is worth noting that IF-CoCoSo was not proposed for this case, considering that the indicators’ values of subcriteria are known and available. IF-CoCoSo is typically adopted when there is imprecise knowledge or a lack of data [38]. In this line, the crisp CoCoSo is enough to derive the transferability index without loss of information.
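The crisp CoCoSo computation used in this final step can be sketched as follows. The decision matrix and weights are hypothetical placeholders (3 candidate classifiers against 4 benefit subcriteria), and λ=0.5 is the customary balancing parameter in the CoCoSo literature:

```python
import numpy as np

def cocoso(X, w, lam=0.5, benefit=None):
    """Rank alternatives with the combined compromise solution (CoCoSo).
    X: (m alternatives x n criteria) decision matrix; w: criterion weights."""
    X = np.asarray(X, float)
    m, n = X.shape
    if benefit is None:
        benefit = [True] * n
    R = np.empty_like(X)
    for j in range(n):  # min-max normalization per criterion
        lo, hi = X[:, j].min(), X[:, j].max()
        R[:, j] = (X[:, j] - lo) / (hi - lo) if benefit[j] else (hi - X[:, j]) / (hi - lo)
    S = R @ w                 # weighted sum measure
    P = (R ** w).sum(axis=1)  # weighted power (product-form) measure
    ka = (S + P) / (S + P).sum()
    kb = S / S.min() + P / P.min()
    kc = (lam * S + (1 - lam) * P) / (lam * S.max() + (1 - lam) * P.max())
    # Final appraisal score: geometric and arithmetic combination
    return (ka * kb * kc) ** (1 / 3) + (ka + kb + kc) / 3

# Hypothetical scores: 3 classifiers x 4 subcriteria (higher is better)
X = [[0.92, 0.70, 0.80, 0.60],
     [0.88, 0.90, 0.60, 0.75],
     [0.95, 0.60, 0.70, 0.85]]
w = np.array([0.4, 0.2, 0.2, 0.2])
k = cocoso(X, w)  # the largest k identifies the preferred classifier
```

The highest final score plays the role of the transferability index ranking described above.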

IF-AHP Algorithm

The IF-AHP algorithm is an MCDM approach that integrates intuitionistic fuzzy set logic into the AHP algorithm. In addition to denoting the uncertainty and vagueness of human thought regarding the technology adoption context, the IF logic is used in this case to represent the knowledge level of experts, which may vary from one to the other, hinging upon educational background and experience [33,34,39]. This latter aspect cannot be typified by either type-2 fuzzy sets or hesitant fuzzy sets, which is the reason why they were discarded from this application. To explain in detail how the IF-AHP algorithm works, it is helpful to present some notations (basic math operations, defuzzification, aggregation operators, etc) about this fuzzy set extension. Atanassov [39] was the first to propose this fuzzy set extension. Since being presented, it has been applied to many decision problems in many different industries [40]. There are 2 functions for this type of fuzzy set: membership and nonmembership. The sum of the degrees of membership and nonmembership is at most 1; the remainder is the hesitancy degree. The step-by-step flow of the IF-AHP algorithm is as follows:

An intuitionistic fuzzy set “I” is defined by Equation 1 [41-43]:

$I = \{\langle x, \mu_I(x), \nu_I(x)\rangle \mid x \in X\}$ (1)

where X is a set in a universe of discourse; $\mu_I(x)$ refers to the degree of membership, $\nu_I(x)$ to the degree of nonmembership, and $\pi_I(x)$ to the degree of lack of knowledge for each $x \in X$:

$0 \le \mu_I(x) + \nu_I(x) \le 1$ (2)

$\pi_I(x) = 1 - \mu_I(x) - \nu_I(x), \quad x \in X$ (3)
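Equations 1-3 can be captured by a small value type that enforces the membership/nonmembership constraint and derives the hesitancy degree; a minimal sketch (the class name `IFN` is our own shorthand):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFN:
    """An intuitionistic fuzzy value (mu, nu) with derived hesitancy pi."""
    mu: float  # degree of membership
    nu: float  # degree of nonmembership

    def __post_init__(self):
        # Equation 2: 0 <= mu + nu <= 1
        if not (0.0 <= self.mu and 0.0 <= self.nu and self.mu + self.nu <= 1.0):
            raise ValueError("require mu, nu >= 0 and mu + nu <= 1")

    @property
    def pi(self) -> float:
        # Equation 3: pi = 1 - mu - nu (degree of lack of knowledge)
        return 1.0 - self.mu - self.nu
```

For example, the scale value "high influence" (0.75, 0.20) carries a hesitancy of 0.05.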

One of the critical aspects of intuitionistic fuzzy set notation is defuzzification. Anzilli and Facchinetti [44] and Ocampo and Yamagishi [43] proposed and used a different defuzzification method as in Equations 4 and 5.

$C_\varphi(I) = \{\langle x, \mu_I(x) + \varphi\pi_I(x), \nu_I(x) + (1-\varphi)\pi_I(x)\rangle \mid x \in X\}, \quad \varphi \in [0,1]$ (4)

$\mu_\varphi(x) = \mu_I(x) + \varphi\pi_I(x)$ (5)

$C_\varphi(I)$ is a defuzzification operator defined in Equation 4 under a usual fuzzy subset with the membership function given by Equation 5. Typically, $\varphi = 0.5$ is a solution of the minimization problem $\min_{\varphi \in [0,1]} d(C_\varphi(I), I)$, where $d$ refers to the Euclidean distance. With $\varphi = 0.5$, the membership function $\mu(x) = \frac{1}{2}\left(1 + \mu_I(x) - \nu_I(x)\right)$ characterizes the fuzzy set $C_{0.5}(I)$.
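A sketch of this defuzzification (Equations 4 and 5); with φ=0.5 it reduces to the closed form above:

```python
def defuzzify(mu_I, nu_I, phi=0.5):
    """Crisp membership per Equation 5: mu_I + phi * pi_I."""
    pi_I = 1.0 - mu_I - nu_I   # hesitancy degree (Equation 3)
    return mu_I + phi * pi_I

# With phi = 0.5, this equals 0.5 * (1 + mu_I - nu_I)
crisp = defuzzify(0.75, 0.20)
```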

We benefitted from the studies of Karacan et al [45] and Abdullah and Najib [46] in determining the triangular intuitionistic fuzzy numbers-based preference scale. The IF-AHP algorithm we used in this study is processed as follows:

The first step starts with determining the decision criteria and subcriteria regarding the selection of classification algorithms supporting effective AT allocation, fostering independent living while reducing the economic and social burden faced by patients with PD and their carers.

The main argument of IF-AHP as “pairwise comparisons” is made in the second step, following the scale of Karacan et al [45]. The scale has 5 points: “much more importance” (0.33, 0.27, 0.40), “more importance” (0.13, 0.27, 0.60), “equal importance” (0.02, 0.18, 0.80), “less importance” (0.27, 0.13, 0.60), and “much less importance” (0.27, 0.33, 0.40). The ternaries are in the form of (μI(x),vI(x),πI(x)), denoting belongingness (affirmation/agreement), nonbelongingness (negation/disagreement), and lack of knowledge (indeterminacy/abstention) levels [47].

The third step assigns a weighting coefficient to each expert who assessed the criteria and subcriteria. The triangular intuitionistic fuzzy scale proposed by Boran et al [48] is used. It is a 5-point scale with “very important” (0.90, 0.05, 0.05), “important” (0.75, 0.20, 0.05), “medium important” (0.50, 0.40, 0.10), “unimportant” (0.25, 0.60, 0.15), and “very unimportant” (0.10, 0.80, 0.10). The weight of each member of the expert team is computed with Equation 6.

ωk = (μk + πk(μk/(μk + vk))) / Σk=1…t (μk + πk(μk/(μk + vk))) (6)

Here, (μk, vk, πk) is the intuitionistic fuzzy number used to assess the kth expert, and ωk denotes the weight of the kth expert.
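A minimal sketch of Equation 6 in Python; the three expert ratings below are illustrative picks from the Boran et al [48] scale, not the panel used in this study:

```python
def expert_weights(ratings):
    """Expert weights from intuitionistic fuzzy ratings (Equation 6).

    Each rating is a triple (mu_k, v_k, pi_k); the raw score
    mu_k + pi_k * mu_k / (mu_k + v_k) is normalized over all experts,
    so the resulting weights sum to 1.
    """
    scores = [mu + pi * mu / (mu + v) for mu, v, pi in ratings]
    total = sum(scores)
    return [s / total for s in scores]

# Two "very important" experts and one "important" expert
w = expert_weights([(0.90, 0.05, 0.05),
                    (0.90, 0.05, 0.05),
                    (0.75, 0.20, 0.05)])
print([round(x, 4) for x in w])  # [0.3529, 0.3529, 0.2941]
```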

In the fourth step, the experts’ pairwise comparisons on the criteria and subcriteria are aggregated using the IFWA aggregation operator as in Equations 7 and 8.

rij = IFWAω(rij(1), rij(2), …, rij(t)) = ω1rij(1) ⊕ ω2rij(2) ⊕ … ⊕ ωtrij(t) (7)
IFWAω = (1 − Πk=1…t (1 − μij(k))^ωk, Πk=1…t (vij(k))^ωk, Πk=1…t (1 − μij(k))^ωk − Πk=1…t (vij(k))^ωk) (8)

Here, R(k) = [rij(k)]m×n is the intuitionistic fuzzy decision matrix of the kth expert, and rij = (μij, vij, πij).
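The IFWA aggregation of Equations 7 and 8 can be sketched as follows (the two comparison pairs and the expert weights are illustrative):

```python
from math import prod  # Python >= 3.8

def ifwa(pairs, weights):
    """Intuitionistic fuzzy weighted averaging operator (Equations 7 and 8).

    pairs: (mu, v) membership/nonmembership pairs from the experts.
    weights: expert weights summing to 1.
    Returns the aggregated triple (mu, v, pi) with pi = 1 - mu - v.
    """
    mu = 1 - prod((1 - m) ** w for (m, _), w in zip(pairs, weights))
    v = prod(nv ** w for (_, nv), w in zip(pairs, weights))
    return (mu, v, 1 - mu - v)

# Two experts (weights 0.6 and 0.4) comparing one criterion pair
agg = ifwa([(0.33, 0.27), (0.13, 0.27)], [0.6, 0.4])
print(tuple(round(x, 4) for x in agg))
```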

In the fifth step, the consistency ratio (CR) of the aggregated intuitionistic fuzzy decision matrix is computed. The traditional CR computation procedure of Saaty [49-53] is suggested for all types of fuzzy set extensions.

In the sixth step, the intuitionistic fuzzy weights of the aggregated intuitionistic fuzzy decision matrix are calculated using Equations 9 and 10.

w̄i = −(1/(n ln 2))(μi ln μi + vi ln vi − (1 − πi) ln(1 − πi) − πi ln 2) (9)
wi = (1 − w̄i) / (n − Σi=1…n w̄i) (10)

The ranks of the criteria and subcriteria are obtained in the seventh and final step of the IF-AHP algorithm. Note that nonnormalized values must be normalized before the final optimal values are determined.

IF-DEMATEL Algorithm

With the IF-AHP algorithm given above, we now turn to the IF-DEMATEL algorithm, which investigates the dependency relationships between the criteria in the second part. The notation here builds on the intuitionistic fuzzy set notation presented in the previous section. The steps of the IF-DEMATEL algorithm are as follows.

As performed at the beginning of IF-AHP, the first step involves determining the evaluation criteria and subcriteria inside the problem.

The second step of IF-DEMATEL is to build a direct-relation matrix. The expert team's evaluations are made by consensus. Here, a 2-tuple intuitionistic linguistic scale is used: “null influence” (0.1, 0.9), “low influence” (0.35, 0.6), “medium influence” (0.5, 0.45), “high influence” (0.75, 0.2), and “very high influence” (0.9, 0.1).

In the third step, the equivalent fuzzy subset’s related membership degree is computed by Anzilli and Facchinetti’s procedure [44], as detailed in the IF-AHP algorithm section. By this procedure, the intuitionistic fuzzy sets are converted to a corresponding standard fuzzy subset; thus, the “initial direct relation matrix” in standard fuzzy subsets is built.

In the fourth step, the standard fuzzy subset values are defuzzified; thus, a crisp initial direct relation matrix is built.

The fifth step normalizes the direct-relation matrix constructed in the previous step (step 4). The normalized direct-relation matrix (G) is computed following the traditional crisp DEMATEL procedure, as in Equations 11-13.

G = g⁻¹X (11)
g = max(max1≤i≤n Σj=1…n xij, max1≤j≤n Σi=1…n xij) (12)
X = [xij]n×n = [Σk=1…h wk xij(k) / Σk=1…h wk]n×n (13)

where wk denotes the weight of expert k, and X is the aggregated direct-relation matrix.

The sixth step is to form the total relation matrix (T) by Equation 14:

T = G(I − G)⁻¹ (14)

where I is the identity matrix. In this step, the net causes and effects are identified. Equations 15 and 16 give the computational formulas of the prominence (D + Rᵀ) and relation (D − Rᵀ) vectors.

D = [Σj=1…n tij]n×1 = [ti]n×1 (15)
R = [Σi=1…n tij]1×n = [tj]1×n (16)
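Equations 11-16 can be sketched with NumPy; the matrix below is the aggregated adaptability matrix reported later in Table 9, so the outputs should be close to Tables 10 and 11 (small differences stem from rounding):

```python
import numpy as np

# Aggregated direct-influence matrix for the adaptability subcriteria (Table 9)
X = np.array([[0.0,   2.175, 1.912],
              [2.650, 0.0,   2.225],
              [2.662, 2.662, 0.0]])

# Equations 11 and 12: scale by the largest row or column sum
g = max(X.sum(axis=1).max(), X.sum(axis=0).max())
G = X / g

# Equation 14: total relation matrix T = G (I - G)^-1
T = G @ np.linalg.inv(np.eye(len(X)) - G)

# Equations 15 and 16: row sums (D) and column sums (R)
D, R = T.sum(axis=1), T.sum(axis=0)
print(np.round(T, 3))      # close to Table 11
print(np.round(D + R, 3))  # prominence, close to Table 12
print(np.round(D - R, 3))  # relation, close to Table 12
```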

The seventh and final step of the IF-DEMATEL algorithm draws the (D + Rᵀ, D − Rᵀ) digraph map.

The CoCoSo Method

CoCoSo was proposed as a combination of the simple additive weighting (SAW), weighted aggregated sum product assessment, and multiplicative exponential weighting methods [35,37,54-56]. Its algorithm comprises the steps given below [37].

The first step is to generate the initial decision matrix, referred to in Equation 17. Here, i indexes the candidate classification algorithms, while j indexes the decision criteria and subcriteria regarding the selection of classification algorithms supporting effective AT allocation, fostering independent living, and reducing the economic and social burden faced by patients with PD, as mentioned in the IF-AHP and IF-DEMATEL sections.

A = [aij] (17)

In the second step, the initial decision matrix is normalized following Equations 18 and 19.

rij = (aij − mini aij) / (maxi aij − mini aij) for benefit criteria (18)
rij = (maxi aij − aij) / (maxi aij − mini aij) for cost criteria (19)
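Equations 18 and 19 can be sketched as follows, using the accuracy and computational time columns that appear later in Table 14; treating computational time (SF2) as a cost criterion is our assumption:

```python
def normalize_benefit(column):
    """Equation 18: min-max normalization for benefit criteria."""
    lo, hi = min(column), max(column)
    return [(a - lo) / (hi - lo) for a in column]

def normalize_cost(column):
    """Equation 19: reversed min-max normalization for cost criteria."""
    lo, hi = min(column), max(column)
    return [(hi - a) / (hi - lo) for a in column]

# Accuracy (SF1, benefit) and computational time (SF2, cost) from Table 14
print([round(r, 3) for r in normalize_benefit([73.36, 69.05, 76.98])])
print([round(r, 3) for r in normalize_cost([0.00011, 0.0000, 0.0017])])
```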

The third step of CoCoSo is to calculate the sum of weighted comparability (Si) and the power-weighted comparability sequence (Pi) for each alternative classification algorithm via Equations 20 and 21.

Si = Σj=1…n wj rij (20)
Pi = Σj=1…n (rij)^wj (21)

In the fourth step, 3 different aggregated appraisal scores (Mia, Mib, Mic) are introduced to compute the weights of each alternative classification algorithm via Equations 22-24.

Mia = (Pi + Si) / Σi=1…m (Pi + Si) (22)
Mib = Si/mini Si + Pi/mini Pi (23)
Mic = (λSi + (1 − λ)Pi) / (λ maxi Si + (1 − λ) maxi Pi) (24)

The fifth and last stage of CoCoSo finds the ranking of the alternative classification algorithms by sorting, in descending order, the Mi scores obtained via Equation 25.

Mi = (Mia Mib Mic)^(1/3) + (1/3)(Mia + Mib + Mic) (25)
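The scoring steps (Equations 22-25) can be sketched as follows; the Si and Pi values are taken from the application reported later in Table 16, so the resulting Mi scores should approximate that table:

```python
def cocoso_scores(S, P, lam=0.5):
    """Aggregated appraisal scores and final CoCoSo index (Equations 22-25)."""
    total = sum(s + p for s, p in zip(S, P))
    Ma = [(s + p) / total for s, p in zip(S, P)]                      # Eq 22
    Mb = [s / min(S) + p / min(P) for s, p in zip(S, P)]              # Eq 23
    Mc = [(lam * s + (1 - lam) * p) /
          (lam * max(S) + (1 - lam) * max(P)) for s, p in zip(S, P)]  # Eq 24
    return [(a * b * c) ** (1 / 3) + (a + b + c) / 3                  # Eq 25
            for a, b, c in zip(Ma, Mb, Mc)]

# S_i and P_i for the 3 candidate classifiers (Table 16)
M = cocoso_scores([0.5213, 0.5523, 0.8868], [11.929, 10.952, 14.990])
print([round(m, 4) for m in M])  # A3 (last entry) obtains the highest score
```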

Ethical Considerations

According to UK regulations (UK Research and Innovation, 2024 [57]), ethical approval was not required for this study as it did not involve human participants.


Overview

The proposed approach was implemented using the PD data derived from the iPhone app called mPower [58]. In detail, 74 adopters and 307 nonadopters were enrolled in this project. Each participant was required to undertake 4 activity types supported by the app: voice, tapping, walking, and typing. In all, 3 classification algorithms—naive Bayes, J48 decision tree, and lazy instance-based k-NN (IBK)—were candidates to predict AT adoption in this context, as stipulated in [59]. However, that study only focused on the performance indicators and did not consider other aspects of the app context, including usability, design, scalability, and flexibility. Such factors may limit the implementation of highly accurate algorithms in the clinical scenario, thereby limiting the exploitation of the app's benefits. Meanwhile, not assessing these aspects may trigger cost overruns for the health care system and have potential detrimental effects on patients with PD. This has also represented a challenge for data analytics experts, who must design classifiers highly adaptable to the environment and the changing dynamics of the health care sector. The following subsections describe how the multimethod MCDM framework was applied to indicate which algorithm should be selected to effectively discriminate the potential mPower adopters and nonadopters while considering the practical clinical scenario.

The Decision-Making Group

A pertinent decision-making team from the REMIND project Consortium [60] was needed to pinpoint the criteria/subcriteria importance and the interrelations in the decision model supporting technology adoption in patients with PD. In particular, the team participants were expected to: (1) define the decision factors integrating the classifier selection model; (2) undertake the necessary pairwise comparisons to obtain the relative priorities of the factors in the presence of uncertainty, vagueness, and hesitancy; (3) perform judgments to assess the significant cause-effect interrelations affecting the deployment of classifiers in the wild; and (4) contribute to the design of recommendations for improving the suitability/transferability of the classifiers concerning the real health care scenario. This intervention was guided by 1 researcher coauthoring this paper (MO-B) and had the participation of 8 experts from different disciplines whose profiles are described in Table 1. All these experts have been directly involved in designing assistive technology solutions for patients with PD and consequently have extensive knowledge of the decision-making scenario.

Table 1. Profile of experts enrolled in the classifier selection process.
Expert | Profession | Areas of expertise | Experience (years) | Current position
E1 | Biomedical engineer | Technology adoption modeling – mobile-based reminding solutions | 30 | Managing director
E2 | Informatics engineer | Artificial intelligence – pervasive and mobile computing | >10 | Researcher
E3 | Biomedical engineer | Ambient assisted living – pervasive and mobile computing | >10 | Senior lecturer
E4 | Computer science engineer | Pervasive and mobile computing | >10 | Senior lecturer
E5 | Electrical engineer | Image processing – artificial intelligence models | >10 | Professor
E6 | Computer science engineer | Health innovation – health technology | >10 | Professor
E7 | Informatics engineer | Artificial intelligence – pervasive and mobile computing | >10 | Data scientist

In this application, the project leader designed the classifier selection model by including the decision criteria/subcriteria and candidate algorithms elucidated with the aid of the decision-making group, the health care providers, the pertinent scientific literature, and the applicable health guidelines. Moreover, he trained the decision-makers to make correct judgments using the IF-AHP and IF-DEMATEL techniques. A virtual data-collection tool was prepared and later used by the participants, who finished all the necessary comparisons during a 1-hour session. This process raised awareness in the decision-making group of the factors AT developers should take into account when designing and deploying the classifiers in the actual health care context. Usually, data experts are inclined to enhance the performance of these algorithms without considering how they should be implemented in the wild. Therefore, including all these aspects will empower AT developers to comprehend the health care scenario and define action lines transforming classifiers into feasible technology adoption support for people with PD.

The Classifier Selection Network

The classifier selection network designed for underpinning technology adoption in patients with PD was studied together with the decision-making group to determine if it was suitable, coherent, reasonable, and deployable in the real world. The ensuing model (Figure 2) is composed of 5 factors, 16 subfactors, and 3 algorithms. Figure 3 outlines each element complemented by supplementary descriptions of the subfactors incorporated into the network.

The decision factors have been subdivided into more detailed aspects to provide a more complete panorama of the suitability of classifiers. At the same time, there is a need to pinpoint improvements that can be translated into more applicable algorithms. For instance, performance (F1) has 6 subelements: accuracy (SF1), computational time (SF2), (−) recall (SF3), (+) recall (SF4), (−) precision (SF5), and (+) precision (SF6). Accuracy is the number of correct classifications (adopter/nonadopter) divided by the total number of classifications. On the other hand, computational time refers to the velocity at which the classifier predicts whether the patient with PD can adopt the technology effectively. (−) Recall defines how well the classifier identifies the patients who cannot assume the assistive solution, which avoids potential adverse effects on their self-esteem and life expectancy. Meanwhile, (+) recall measures how well the algorithm identifies patients with PD who can suitably assume the technology, making it possible to upgrade their life quality while decreasing delayed intervention. On a different tack, (−) precision (SF5) measures the relation between the true negative cases and the predicted negative cases, while (+) precision (SF6) denotes the same ratio but considers positive cases.

Figure 2. The classifier selection network for underpinning technology adoption in people with Parkinson disease.
Figure 3. Description of classifier selection factors included in the network model.

Conversely, usefulness has been split into explainability (SF7) and model type (SF8). The first subcategory denotes whether the doctor/nurse can identify and interpret the technology adoption decision recommended by the algorithm for a specific patient with PD. Likewise, model type establishes whether the algorithm is black box or white box.

In the adaptability cluster, 3 decision elements are enlisted: missing value handling (SF9), classification with discrete and continuous variables (SF10), and online learning capability (SF11). Health care datasets are often characterized by presenting incomplete information and/or registration errors regarding critical patient data [61,62]; this is why it is necessary to determine if the classifier can deal with this problem without further affecting their functionality. In addition, it is essential to define if the classifier can cope with discrete and continuous patient metrics, as evidenced in all the big data systems supporting Parkinson-related health care services [63]. Furthermore, it is expected to have classifiers that can be adapted according to the dynamic context of PD and the health care scenario. In other words, the algorithm should evolve by including significant emerging features responding to the context changes.

Ultimately, the structure criterion comprises 5 aspects: complexity of data-gathering procedure (SF12), overtraining (SF13), number of features (SF14), access to validated datasets (SF15), and statistical classification (SF16). Complexity of data-gathering procedure establishes whether the algorithm imports the dataset from a low number of self-administered questions or retrospectively. On a different note, some classifiers experience overtraining difficulties, which indicate an apparent performance improvement but entail a worse generalization to the test data. This problem has been extensively reported in the ML literature and can only be noticed once real technology adoption is adequately detected [64,65]. In the implementation phase, it is preferable to use classifiers requiring few features to decide whether the patient with PD can adopt a particular solution; otherwise, the procedure supporting this decision will be time-consuming and less feasible in the real world. It is additionally expected that classifiers have access to validated data, as this allows them to avoid corrupted data that could affect their performance. Ultimately, statistical classification algorithms enable decision-makers to define which factors are more significant in technology adoption for people with PD. They provide coefficients whose magnitude and direction denote whether each variable substantially or hardly increases or decreases the adoption likelihood.

Intuitionistic Fuzzy Relative Priorities of Criteria and Subcriteria: The IF-AHP Application

The IF-AHP technique was used to compute the relative weights of criteria and subcriteria in the classifier selection network. In this regard, a virtual survey was designed to collate the comparisons based on the assessment scale suggested in the Intuitionistic Fuzzy Analytic Hierarchy Process section. Following this, coefficients were assigned to the decision-makers using the scheme proposed by Boran et al [48]. In this case, the decision-maker (experts; Ek) with the greatest relevance was E1 (0.2857), taking into account their comprehensive knowledge and background in the design and application of IT solutions for health care (Table 2). Afterward, the pairwise comparisons derived from the Es were aggregated by the IFWA operator (Equations 7 and 8). An example of this stage is presented in Table 3 for the flexibility subcriteria. This matrix was then normalized by Equations 9 and 10, as evidenced in Table 4. Table 5 depicts the resulting local weight and overall weights of factors and subfactors. The CR of each cluster was computed using Saaty’s approach [49,50]: factors (0.04), performance (0.002), usefulness (0), adaptability (0.06), and structure (0.01).

Table 2. Priorities of decision-makers.
Expert | Intuitionistic fuzzy number | Priority
E1 | (0.9, 0.05, 0.05) | 0.2857
E2 | (0.75, 0.2, 0.05) | 0.2380
E3 | (0.75, 0.2, 0.05) | 0.2380
E4 | (0.75, 0.2, 0.05) | 0.2380
E5 | (0.75, 0.2, 0.05) | 0.2380
Table 3. Aggregated intuitionistic fuzzy matrix for flexibility subcriteria.
 | SF9 | SF10 | SF11
SF9 | [0.020, 0.180, 0.800] | [0.099, 0.176, 0.724] | [0.099, 0.176, 0.724]
SF10 | [0.099, 0.176, 0.659] | [0.020, 0.180, 0.800] | [0.074, 0.159, 0.766]
SF11 | [0.099, 0.176, 0.724] | [0.074, 0.159, 0.766] | [0.020, 0.180, 0.800]
Table 4. The normalized priorities of flexibility subcriteria.
 | Intuitionistic fuzzy weight | Nonfuzzy weight | Overall weight
SF9 | (0.073, 0.177, 0.749) | 0.292 | 0.069
SF10 | (0.065, 0.172, 0.742) | 0.267 | 0.063
SF11 | (0.065, 0.172, 0.763) | 0.273 | 0.065
Totala | — | 0.833 | 0.198

aNot applicable.
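The nonfuzzy weights in Table 4 appear to follow the same score function used in Equation 6, μ + π·μ/(μ + v); the sketch below reproduces the column under that assumption (values agree with the table to within rounding):

```python
def crisp_weight(mu, v, pi):
    """Crisp an intuitionistic fuzzy weight (mu, v, pi).

    Assumption: the score function of Equation 6,
    mu + pi * mu / (mu + v), yields the 'nonfuzzy weight' column.
    """
    return mu + pi * mu / (mu + v)

fuzzy = {"SF9": (0.073, 0.177, 0.749),
         "SF10": (0.065, 0.172, 0.742),
         "SF11": (0.065, 0.172, 0.763)}
nonfuzzy = {k: crisp_weight(*w) for k, w in fuzzy.items()}
local = {k: x / sum(nonfuzzy.values()) for k, x in nonfuzzy.items()}
print({k: round(x, 3) for k, x in nonfuzzy.items()})  # near 0.292, 0.267, 0.273
print({k: round(x, 3) for k, x in local.items()})     # near the Table 5 local weights
```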

Table 5. The local weight and overall weight of factors and subfactors in the classifier selection model.
Criteria/subcriteria | Local weight | Overall weight
Performance (F1) | —a | 0.187
Accuracy (SF1) | 0.180 | 0.034
Computational time (SF2) | 0.193 | 0.036
(–) Recall (SF3) | 0.157 | 0.029
(+) Recall (SF4) | 0.160 | 0.030
(–) Precision (SF5) | 0.156 | 0.029
(+) Precision (SF6) | 0.154 | 0.029
Usefulness (F2) | — | 0.199
Explainability (SF7) | 0.500 | 0.100
Model type (SF8) | 0.500 | 0.100
Scalability (F3) | — | 0.198
Adaptability (F4) | — | 0.202
Missing value handling (SF9) | 0.351 | 0.069
Classification with discrete and continuous variables (SF10) | 0.321 | 0.063
Online learning capability (SF11) | 0.328 | 0.065
Structure (F5) | — | 0.214
Complexity of data-gathering procedure (SF12) | 0.177 | 0.038
Overtraining (SF13) | 0.207 | 0.044
Number of features (SF14) | 0.212 | 0.045
Access to validated datasets (SF15) | 0.223 | 0.048
Statistical classification (SF16) | 0.181 | 0.039

aNot applicable.

Intuitionistic Fuzzy Interdependence and Feedback: The IF-DEMATEL Approach

The next step of this approach was to study the interrelations among the classifier selection factors/subfactors to identify interventions in the long-term classifier development and technology adoption processes. The 2-tuple intuitionistic linguistic scale for assessing the influence between the factors/subfactors (Intuitionistic Fuzzy Decision-Making Trial and Evaluation Laboratory section) was first explained to the experts. The decision-makers then made the judgments using an easy-to-manage data-collection tool during a 3-hour session. Table 6 presents the initial intuitionistic fuzzy direct-relation matrix derived from E3 concerning the adaptability subfactors. As a next step, the IFS were crisped by a 2-step procedure. First, the IFS were transformed into their respective fuzzy subsets using the equation μ(x) = ½(1 + μI(x) − vI(x)) (Table 7). A crisp function was later applied to convert the intuitionistic fuzzy subset into a crisp value. In this respect, a crisp initial direct-relation matrix is generated when allocating the values in Table 7 to the triangular fuzzy vector <0, 4, 4> (Table 8). We then aggregated the defuzzified values of all experts using the simple mean (Table 9). The next stage was to compute the normalized direct-relation matrix (G) by applying Equations 11-13 (Table 10). The total relation matrix T (Table 11) was then derived using Equation 14. Ultimately, Table 12 presents the prominence (D+R) and relation (D–R) values resulting from Equations 15 and 16 to define which factors or subfactors can be grouped into the driving and effect categories. The developers should focus on the main drivers to make the classifiers more adaptable to the health care scenario and the technology adoption requirements.
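The 2-step crisping of E3's adaptability judgments (Tables 6-8) can be sketched as follows; scaling by 4 corresponds to defuzzifying on the triangular vector <0, 4, 4>, and the (0, 0) diagonal (self-influence) is kept at 0:

```python
# E3's intuitionistic fuzzy judgments for the adaptability subfactors (Table 6)
judgments = [[(0.0, 0.0),  (0.75, 0.2), (0.1, 0.9)],
             [(0.75, 0.2), (0.0, 0.0),  (0.1, 0.9)],
             [(0.5, 0.45), (0.5, 0.45), (0.0, 0.0)]]

# Step 1: equivalent fuzzy membership mu = 0.5 * (1 + mu_I - v_I) (Table 7);
# the (0, 0) diagonal entries are left at 0, matching Table 8
subset = [[0.0 if (m, v) == (0.0, 0.0) else 0.5 * (1 + m - v)
           for m, v in row] for row in judgments]

# Step 2: defuzzify on <0, 4, 4>, i.e., multiply by 4 (Table 8)
crisp = [[round(4 * x, 1) for x in row] for row in subset]
print(crisp)  # [[0.0, 3.1, 0.4], [3.1, 0.0, 0.4], [2.1, 2.1, 0.0]]
```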

Table 6. Initial intuitionistic fuzzy direct-relation matrix – E3 (adaptability subfactors).
 | SF9 | SF10 | SF11
SF9 | (0, 0) | (0.75, 0.2) | (0.1, 0.9)
SF10 | (0.75, 0.2) | (0, 0) | (0.1, 0.9)
SF11 | (0.5, 0.45) | (0.5, 0.45) | (0, 0)
Table 7. Initial intuitionistic fuzzy direct-relation matrix – E3 in subsets (adaptability subfactors).
 | SF9 | SF10 | SF11
SF9 | 0 | 0.78 | 0.1
SF10 | 0.78 | 0 | 0.1
SF11 | 0.53 | 0.53 | 0
Table 8. Crisp direct-relation matrix for adaptability subcriteria – E3.
 | SF9 | SF10 | SF11
SF9 | 0 | 3.1 | 0.4
SF10 | 3.1 | 0 | 0.4
SF11 | 2.1 | 2.1 | 0
Table 9. Aggregated direct-influence matrix for adaptability subcriteria.
 | SF9 | SF10 | SF11
SF9 | 0 | 2.175 | 1.912
SF10 | 2.650 | 0 | 2.225
SF11 | 2.662 | 2.662 | 0
Table 10. Normalized aggregated direct-influence matrix for adaptability subcriteria.
 | SF9 | SF10 | SF11
SF9 | 0 | 0.408 | 0.359
SF10 | 0.498 | 0 | 0.418
SF11 | 0.5 | 0.5 | 0
Table 11. Total influence matrix for adaptability subcriteria.
 | SF9 | SF10 | SF11 | D
SF9 | 2.387 | 2.518 | 2.269 | 7.174
SF10 | 3.026 | 2.513 | 2.555 | 8.093
SF11 | 3.206 | 3.016 | 2.412 | 8.634
R | 8.619 | 8.047 | 7.235 | —a

aNot applicable.

Table 12. Dispatchers and receivers in the classifier selection network.
Element | D+R | D–R | Category
F1 | 9.924 | –0.793 | Effect
SF1 | 9.403 | 0.969 | Driver
SF2 | 7.948 | –1.054 | Effect
SF3 | 8.397 | –0.604 | Effect
SF4 | 8.947 | 0.168 | Driver
SF5 | 8.838 | 0.395 | Driver
SF6 | 8.768 | 0.127 | Driver
F2 | 9.879 | –1.026 | Effect
SF7 | 33.545 | –1.000 | Effect
SF8 | 33.545 | 1.000 | Driver
F3 | 9.882 | –0.118 | Effect
F4 | 10.481 | 1.234 | Driver
SF9 | 15.794 | –1.445 | Effect
SF10 | 16.140 | 0.047 | Driver
SF11 | 15.868 | 1.399 | Driver
F5 | 10.423 | 0.703 | Driver
SF12 | 18.864 | 0.620 | Driver
SF13 | 16.530 | –2.369 | Effect
SF14 | 18.861 | 1.024 | Driver
SF15 | 18.733 | 1.115 | Driver
SF16 | 18.385 | –0.390 | Effect

(D + Rᵀ, D − Rᵀ) digraph maps (Figure 4A-4E) were also built to examine the interrelations among the factors/subfactors, underpinned by the computation of reference threshold values elucidating the significant influences. The developers must carefully intervene in these influences in conjunction with the health care staff to ensure highly deployable classification algorithms.

Figure 4. Impact-digraph maps for (A) factors, (B) performance, (C) usefulness, (D) adaptability, and (E) structure.

Ranking of Classifiers: The CoCoSo Implementation

This section outlines the CoCoSo application, whose main objective is twofold: (1) to derive the transferability index (Mi score) helping to rank the classifier alternatives, namely, lazy IBK – k-NN (A1), naive Bayes (A2), and J48 decision tree (A3), that may support technology adoption in people with PD; and (2) to detect those characteristics that should be improved in each algorithm to better support this decision in the wild. The CoCoSo implementation was initiated by setting a metric for each classifier selection criterion/subcriterion. The list of indicators and their formulas is presented in Table 13. These indexes were established considering the pertinent scientific evidence and the health care context associated with PD. The values of each decision element and classifier were included in the initial decision matrix A (Tables 14 and 15). This arrangement (Equation 17) also incorporates the overall weights w computed by using the IF-AHP technique (for more information, see the section titled The CoCoSo Method).

Table 13. List of metrics and their calculation method.
Classifier selection criterion/subcriterion | Metric | Formula
Accuracy (SF1) | Average accuracy | Σi=1…n [(TNC + TPC) / (TPC + FPC + FNC + TNC)] × 100/n, where TNC, TPC, FPC, and FNC are the true negative, true positive, false positive, and false negative cases, and n is the number of iterations
Computational time (SF2) | Average time complexity | Σi=1…n ITi / n, where ITi is the iteration time per instance i
(–) Recall (SF3) | Average negative recall | Σi=1…n [TNC / (FPC + TNC)] × 1/n
(+) Recall (SF4) | Average positive recall | Σi=1…n [TPC / (TPC + FNC)] × 1/n
(–) Precision (SF5) | Average negative precision | Σi=1…n [TNC / (TNC + FNC)] × 1/n
(+) Precision (SF6) | Average positive precision | Σi=1…n [TPC / (TPC + FPC)] × 1/n
Explainability (SF7) | Interpretability | 2 if the algorithm is simple to interpret by a doctor and/or nurse; otherwise, 1
Model type (SF8) | Model category | 2 if the model is a black box; 1 if it is a white box
Scalability (F3) | Cost classification | 1 if the learning cost overpasses €927 (US $1018); otherwise, 2
Missing value handling (SF9) | Missing value management | 2 if the algorithm supports datasets with missing values; otherwise, 1
Classification with discrete and continuous variables (SF10) | Data type | 2 if the classification model supports continuous and discrete data; otherwise, 1
Online learning capability (SF11) | Online learning | 2 if the classifier is trained through online learning; otherwise, 1
Complexity of data-gathering procedure (SF12) | Data-gathering management | 2 if a self-administered survey is used for collecting the feature set; otherwise, 1
Overtraining (SF13) | Overtraining | 2 if the algorithm has overtraining problems; otherwise, 1
Number of features (SF14) | Number of features | Number of input variables requested by the algorithm to perform the technology adoption prediction
Access to validated datasets (SF15) | Classifier validation | 2 if the algorithm can be verified with validated datasets; otherwise, 1
Statistical classification (SF16) | Statistical capability | 2 if the model is statistical; otherwise, 1
Table 14. Initial decision matrix A – SF1 to F3.
Algorithm | SF1 | SF2 | SF3 | SF4 | SF5 | SF6 | SF7 | SF8 | F3
A1 | 73.36 | 0.00011 | 0.72 | 0.74 | 0.73 | 0.73 | 2 | 2 | 1
A2 | 69.05 | 0.0000 | 0.74 | 0.63 | 0.71 | 0.67 | 2 | 2 | 1
A3 | 76.98 | 0.0017 | 0.83 | 0.71 | 0.81 | 0.74 | 2 | 1 | 2
Overall weight | 0.034 | 0.036 | 0.029 | 0.030 | 0.029 | 0.029 | 0.100 | 0.100 | 0.198
Table 15. Initial decision matrix A – SF9 to SF16.
Algorithm | SF9 | SF10 | SF11 | SF12 | SF13 | SF14 | SF15 | SF16
A1 | 2 | 2 | 1 | 2 | 1 | 5 | 2 | 1
A2 | 2 | 2 | 2 | 2 | 1 | 5 | 2 | 2
A3 | 2 | 2 | 1 | 2 | 1 | 5 | 2 | 2
Overall weight | 0.069 | 0.063 | 0.065 | 0.038 | 0.044 | 0.045 | 0.048 | 0.039

The initial matrix A was then normalized following Equations 18 and 19. After this, we computed Si and Pi for each classifier via Equations 20 and 21 (Table 16). The next step involved estimating the aggregated appraisal scores (Mia, Mib, Mic) via Equations 22-24 with λ=0.5 (Table 16). Finally, the transferability index (Mi score; Equation 25) was derived for each classifier (Table 16).

Table 16. Aggregated appraisal scores and ranking of classifiers.
Algorithm | Si | Pi | Mia | Mib | Mic | Mi | Ranking
A1 | 0.5213 | 11.929 | 0.3126 | 2.0892 | 0.7841 | 1.8620 | 2
A2 | 0.5523 | 10.952 | 0.2888 | 2.0595 | 0.7246 | 1.7796 | 3
A3 | 0.8868 | 14.990 | 0.3986 | 3.0700 | 1.0000 | 2.5592 | 1

Validation Study: Contrasting CoCoSo Results With TOPSIS and SAW

Even though we have suggested a robust methodology combining 3 MCDM techniques with intuitionistic fuzzy logic, it is still necessary to validate its accuracy against well-known methods. In this sense, we contrasted the scoring technique used in the last phase (CoCoSo) with SAW and TOPSIS. The resulting rankings of each method are shown in Figure 5. Upon analyzing this graph, A3 remained the most suitable classifier under all 3 approaches. There is a slight variation in the SAW ranking of A1 and A2 compared with the findings derived from TOPSIS and CoCoSo, which is expected considering the differences in each method's normalization and scoring procedures. These results underpin the accuracy and applicability of the suggested methodology.

Furthermore, Pearson correlation tests (Figure 6) were conducted on the transferability indexes derived from each method. The scores are highly correlated (r>0.8), especially when comparing CoCoSo and TOPSIS (r=1). This strengthens the graphical validation presented in Figure 5.

Figure 5. Ranking of classifiers according to CoCoSo, SAW, and TOPSIS. CoCoSo: combined compromise solution; SAW: simple additive weighting; TOPSIS: technique for order of preference by similarity to ideal solution.
Figure 6. Pearson correlation tests between transferability indexes of TOPSIS, SAW, and CoCoSo. CoCoSo: combined compromise solution; SAW: simple additive weighting; TOPSIS: technique for order of preference by similarity to ideal solution.

Principal Results, Limitations, and Comparison With Previous Work

The Importance of Classifier Selection Criteria and Subcriteria

Considering the IF-AHP results, structure (F5) was identified as the factor with the highest relative priority. However, there was no significant difference between this factor and the other factors involved in the selection model (F5 vs F4=0.012; F5 vs F2=0.015; F5 vs F3=0.016; F5 vs F1=0.027). This demonstrates that all these factors should be simultaneously considered when selecting classifiers supporting technology adoption in patients with PD. Specifically, structure is identified as an essential factor in the selection process, given the need to accelerate the deployment of the classifiers in the actual technology scenario. Algorithms with overtraining problems, complex data-collection procedures, a high number of input features, no access to validated datasets, and no statistical modeling may steepen the learning curve for health care staff and trigger a high rate of incorrect classifications. This finding is confirmed by Badillo et al [66], who identified that an inadequate or deficient model structure could affect the key variable predictions, which, in the case of technology adoption processes, may translate into poor discrimination between adopters and nonadopters of assistive technologies [67]. Therefore, efforts should be directed at improving the structural characteristics of the classifiers to optimize the technology adoption process within the health care scenario. Thereby, the rate of rejection and abandonment of technology can be reduced while improving the quality of life of patients with PD and their families.

It is also essential to analyze the ranking of classifier selection subfactors to derive more focused interventions. In this case, the first 2 (explainability and model type) correspond to the usefulness domain. The importance of these subcriteria lies in the fact that the selected classifiers should be easy to manage and interpret for nonexpert users such as doctors and support staff. Otherwise, there will be resistance to change, lack of interest, extended learning time, and subsequent delays in technology adoption. These findings confirm what Miotto et al [68] reported regarding the importance of the model's explainability and the interpretability of the results as critical aspects in developing reliable technology assistance in patients with PD.

The following 3 subfactors in the ranking are missing value handling, online learning capability, and classification with discrete and continuous variables, which belong to the adaptability factor, the second most crucial factor in the selection model. Missing value handling is one of the most common and intrinsic problems in handling large volumes of health care data [69,70]. There are several reasons for missing data, including poor adherence to data handling procedures and policies and unsuitable reporting mechanisms. Consequently, the technology adoption classifiers must be able to identify and impute the lost data adequately to avoid biases or false results that can lead medical and support staff to make wrong decisions when allocating a specific solution. This poses a challenge for developing studies focused on improving the handling of missing data. Removing values, assigning default values, or imputing the data have been some of the reported missing data approaches [71-73]. For instance, Prince et al [74] demonstrated the ability to predict PD in the presence of missing values by dividing the dataset into 2 subgroups comprising people with missing and complete source data. On the other hand, the online learning capability implies that technology adoption algorithms must continuously evolve by incorporating new features according to advances in the diagnosis and management of PD, both at the clinical and home care levels, validating the findings of Ortiz-Barrios et al [5]. One of the strategies that can be adopted to improve the learning capacity of the classifier is the one proposed by Sigcha et al [75], in which a pretrained transfer learning model was designed to enhance the technology adoption in natural environments. Finally, the importance of the classification with discrete and continuous variables lies in the ability of classifiers to receive data of a different nature in the context of PD.
For example, Harimoorthy and Thangavelu [76] mentioned that one of the main criteria in PD-related prediction models is the collection of patients’ voice characteristics, whose nature may be discrete or continuous.

Ultimately, consistency ratios were computed for the aggregated intuitionistic fuzzy decision matrixes based on Saaty [49,54]. The results showed that all matrices were consistent (CR<0.1), demonstrating the robustness of the decision-making process regarding the estimated priorities of factors and subfactors. These outcomes are supported by an adequate selection of experts complemented with training and guidance during the evaluation process. In addition, it is important to remark on the importance of using easy-to-manage surveys and the shorter version of Saaty's scale to reduce assessment bias [77-80]. The positive effects of these practices are also evident in large matrices (n≥5; performance and structure subcriteria), where the CR was equal to or less than 0.01.

Interdependence Assessment in the Classifier Selection Network

The IF-DEMATEL analysis shows that adaptability (F4) and structure (F5) are the dispatchers, while performance (F1), usefulness (F2), and scalability (F3) belong to the effect group. Therefore, developers, personnel, and physicians must establish intervention actions focused on the driver factors to support the technology adoption process in patients with PD in the long term. In addition, structure and adaptability present the highest prominence values, making them the primary influencers in the classifier selection model and thus priority factors that need to be carefully considered in ML algorithm design approaches for the PD context. These results are consistent with the findings of Sigcha et al [75], who highlighted that the architecture, the training configurations, and the learning model parameters are essential for the adequate scalability of the discrimination results. In this sense, a flexible model architecture and the documentation of all the model construction stages are strongly recommended for making the technology adoption process more efficient. Therefore, classifiers with these characteristics should have a high probability of being selected to support this process in the health care scenario. These conclusions are also underpinned by the presence of a feedback relationship between the aforementioned elements (Figure 4A), where it is evident how the data collection, training, and processing highly restrict the adaptability of the classifier to the PD context.
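The dispatcher/receiver grouping and prominence values above derive from the DEMATEL total-relation matrix T = N(I−N)⁻¹, where prominence is R+C, relation is R−C (positive for dispatchers), and the significance threshold is the mean of T. The sketch below uses the crisp variant for readability (the study employs IF-DEMATEL), with a hypothetical 3×3 direct-influence matrix in place of the experts' assessments:

```python
# Crisp DEMATEL sketch: normalize the direct-influence matrix, compute
# T = N(I-N)^-1, then derive prominence (R+C), relation (R-C), and the
# threshold theta = sum(T)/n^2 used to filter significant dependencies.
# The influence scores below are illustrative, not the study's data.

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_inv(A):
    """Gauss-Jordan inversion with partial pivoting for a small square matrix."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def dematel(D):
    n = len(D)
    s = max(sum(row) for row in D)            # normalization factor
    N = [[x / s for x in row] for row in D]
    I_minus_N = [[(1.0 if i == j else 0.0) - N[i][j] for j in range(n)]
                 for i in range(n)]
    T = mat_mul(N, mat_inv(I_minus_N))
    R = [sum(T[i][j] for j in range(n)) for i in range(n)]  # influence dispatched
    C = [sum(T[i][j] for i in range(n)) for j in range(n)]  # influence received
    theta = sum(map(sum, T)) / n ** 2
    return [r + c for r, c in zip(R, C)], [r - c for r, c in zip(R, C)], theta

prominence, relation, theta = dematel([[0, 3, 2], [1, 0, 3], [2, 1, 0]])
print(prominence, relation, theta)
```

Factors with positive relation values form the cause (dispatcher) group; negative values mark receivers, matching the grouping reported above.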

In addition, influence maps (Figure 4B–E) were prepared to show the inner interactions in each cluster and establish courses of action for improving the suitability of PD technology adoption classifiers. Regarding the performance criterion (Figure 4B), the threshold value was defined as θ=19.20/6²=0.53, which helped to elucidate the significant dependencies. In conclusion, accuracy (SF1), (+) recall (SF4), (+) precision (SF5), and (−) precision (SF6) are the effect generators or dispatchers, while computational time (SF2) and (−) recall (SF3) are the receivers. It is also essential to emphasize the feedback relationships (orange arrows): SF1-SF5, SF1-SF6, SF1-SF4, SF5-SF6, SF6-SF4, and SF4-SF5. These results confirm the findings provided by Pereira et al [81] regarding the correlation between different performance metrics when selecting the most appropriate classifier. It is also essential to highlight the cause-effect relationship between accuracy (SF1) and computational time (SF2). This relationship is significant when evaluating the classifier’s performance due to the dilemma of obtaining shorter execution times at the expense of predictive capacity. In this regard, Ali et al [82] mentioned that in many assistive technology medical applications such as PD, the execution time and complexity of the algorithm are crucial parameters for effective deployment support, lower resistance to change, and adoption in the real health care scenario. Otherwise, clinicians working under constant pressure may perceive the models as an additional workload. Therefore, research should be oriented toward developing technology adoption models with high predictive capacity but low processing times.
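All the performance subfactors named above, except computational time, can be derived from a single confusion matrix, with (+) referring to the adopter class and (−) to the nonadopter class. A small sketch with illustrative counts (not the study's evaluation results):

```python
# Class-specific performance metrics for the adoption problem:
# (+) = adopter class, (-) = nonadopter class.
# The confusion-matrix counts below are hypothetical examples.

def adoption_metrics(tp, fp, fn, tn):
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "pos_precision": tp / (tp + fp),  # (+) precision: predicted adopters correct
        "neg_precision": tn / (tn + fn),  # (-) precision: predicted nonadopters correct
        "pos_recall": tp / (tp + fn),     # (+) recall: actual adopters found
        "neg_recall": tn / (tn + fp),     # (-) recall: actual nonadopters found
    }

m = adoption_metrics(tp=40, fp=10, fn=20, tn=30)
print({k: round(v, 3) for k, v in m.items()})
# -> accuracy 0.7, (+) precision 0.8, (-) precision 0.6,
#    (+) recall 0.667, (-) recall 0.75
```

The accuracy/computational time trade-off discussed above is not visible in such a table, which is one reason the multicriteria view of this study is needed.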

On a different note, a reference value θ=33.545/2²=8.386 was defined for the usefulness subfactors group (Figure 4C). The interrelationship map uncovers that explainability (SF7) is the receiver and the model type (SF8) is the dispatcher. A related work by Zhang et al [17] indicated that transparency and accessibility to visualization allow the development of assistive health care technologies that can be easily analyzed and rationally interpreted by the clinicians who will use these solutions in the daily PD management routine. These characteristics are satisfied by white-box classifiers (eg, decision tree, A3), which reduces the learning curve experienced by health care professionals.

Interdependencies were also detected between the adaptability subfactors (Figure 4D). In this regard, the threshold metric was estimated to be θ=23.901/3²=2.656. The map revealed that missing value handling (SF9) is the only receiver, while the classification with discrete and continuous variables (SF10) and online learning capability (SF11) are the dispatchers. Although there is some debate regarding the absolute need for discrete information [83], the dynamics of PD demand models capable of working with new input variables, either continuous [84] or discrete [85], to better represent the technology adoption context of these patients. The algorithms can then be updated and respond effectively at a near-human level, as argued by Cartuyvels et al [86]. Specifically, combined continuous-discrete representations allow the model to better capture PD contextual information, and handling both types of variables helps to address the limitations that each one holds. Likewise, it is vital to count on ML models that can learn from real-time data so that they can evolve in response to a changing scenario. Thereby, these models can discriminate between adopters and nonadopters effectively, considering the dynamics of the technology acceptance features. In this respect, Hoi et al [87] postulated that learning accurately from large-scale, nonstationary data is still an open challenge for developers, who are called to make this process more efficient and scalable. This is partially explained by the fact that real datasets are frequently incomplete, thereby fostering the use of imputation methods to address the missing values [88,89].
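The online learning capability discussed above can be illustrated with a classifier whose weights are updated one record at a time as new adoption data arrive. The sketch below uses a plain perceptron on synthetic two-feature records; it is a generic illustration, not the study's model or data:

```python
# Minimal online learning illustration: a perceptron updated per record,
# so the adoption classifier can keep evolving as PD data stream in.
# Features (x) and adopter labels (y) below are synthetic placeholders.

class OnlinePerceptron:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score >= 0 else 0

    def partial_fit(self, x, y):
        """Update weights from a single (features, label) pair."""
        err = y - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

clf = OnlinePerceptron(n_features=2)
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.5], 0)]
for _ in range(10):            # replay the stream as if records arrived online
    for x, y in stream:
        clf.partial_fit(x, y)
print([clf.predict(x) for x, _ in stream])  # -> [1, 0, 1, 0]
```

Unlike batch retraining, each `partial_fit` call touches only one record, which is what keeps the cost of staying current low in a streaming setting.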

The interactions within the structure cluster (Figure 4E) are no less relevant. The digraph portrays that the complexity of the data-gathering procedure (SF12), number of features (SF14), and access to validated datasets (SF15) are the main drivers, whereas overtraining (SF13) and statistical classification (SF16) are the receivers. The presence of feedback interrelations among SF12, SF14, and SF15 is also evident, which is why classifier developers need to handle this triplet effectively. The inclusion of AI algorithms in the context of PD technology is facilitated when the classifiers require fewer inputs to make the predictions. Doctors and nurses are usually reluctant to use a decision-making aid if it is too complex to manage and does not offer a significant benefit compared to the current procedures and standards [90]. In addition, simpler data-gathering mechanisms are desired to prevent a potential lack of interest from medical staff, prediction inconsistencies, extended consultation times, and work overload [91]. These aspects need to be complemented by suitable access to validated datasets, which is essential to refine the accuracy of these models when pinpointing the patients with PD with the highest technology adoption probability. However, patients and health care institutions are often reluctant to provide personal data for security and privacy reasons. This is a significant barrier to the implementation of personalized care; therefore, it calls for new stringent regulations that better govern data collection, use, and storage [92].

Transferability Index and Improvement Areas

CoCoSo was used to calculate the transferability index of each classifier, derive the ranking in descending order, and detect areas of improvement. This is a major contribution of this paper, considering that most related studies focus only on performance measurements to select the best classifier in technology adoption for patients with PD [12,13,84]. In this case, the outcomes uncovered that the most appropriate algorithm for supporting technology adoption in patients with PD is A3 (J48 decision tree). Still, there are areas for improvement in each algorithm that diminish its suitability in the health care scenario:

  • Moderate accuracy, (−) precision, (+) precision, (−) recall, and (+) recall levels: In this set of classifiers, intermediate accuracy levels were reported, which entails the need to include other predictors, either single or hybrid, to augment the capability of distinguishing between patients with PD who will accept the assistive solution and those who will not. In addition, the (−) recall values were found to be at a medium degree, which evidences the need to upgrade their ability to identify the patients with PD who are not suitable adopters of the solution and consequently circumvent latent adverse effects on their self-esteem and life expectancy due to incorrect technology allocation. In a similar vein, (+) recall scores fell into the medium category, revealing the necessity of increasing their capability to quickly pinpoint patients with PD who can effectively adopt the solution as part of their treatment. In addition, (−) precision and (+) precision values demonstrate moderate performance when predicting nonadoption and adoption of the showcased technology. It is hence suggested to (1) collect more data to train the algorithms better; (2) refine the model hyperparameters, including the regularization strength; (3) apply class weights in case of imbalance [93]; (4) use ensemble techniques incorporating domain knowledge [94]; and (5) implement data augmentation by transforming the existing datasets if data-gathering restrictions cannot be overcome [95].
  • Low scalability: In this case, the training process of A1 and A2 exceeds €927 (US $1018); therefore, strategies to make them more attractive to health care administrators from a financial perspective are needed. Application-specific integrated circuits may be a feasible alternative, considering their processing speed. In parallel, data decomposition methods could be used to reduce the processing complexity, accelerate training, and consequently diminish the cost of learning.
  • Low flexibility: A1 and A3 are not trained through online learning, which hinders their potential to rapidly evolve according to the changing scenario of PD and the health care sector. If this is not solved, these algorithms will require retraining to stay updated, which is costly and affects their scalability in hospitals [87]. In response, online learning algorithms should be applied to process PD data arriving sequentially, making it possible to maintain an updated classifier that represents the PD context in real time. Some generic proposals have emerged to provide an alternative pathway for dealing with this problem in the real world. For example, Lin et al [96] proposed a scalable quantile-based induction model to boost the Hoeffding tree, thereby making the algorithm more flexible and reducing storage and computational requirements. On a different note, Ferreira et al [97] proposed an extension of k-NN to reduce its computational cost without compromising performance.
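The transferability aggregation behind this ranking follows the CoCoSo scheme of Yazdani et al [35]: a weighted-sum score S and a power-weight score P per alternative are fused through three appraisal values into a final index k. The sketch below uses a hypothetical 3×4 decision matrix and weights (all benefit criteria, λ=0.5); it illustrates the mechanics only, not the study's evaluation data:

```python
# Crisp CoCoSo sketch (Yazdani et al): min-max normalize, compute S and P,
# fuse the three appraisal scores, and rank by the final index k.
# The decision matrix and weights are illustrative placeholders.

def cocoso(X, w, lam=0.5):
    """X: alternatives x benefit criteria; w: criterion weights summing to 1."""
    m, n = len(X), len(X[0])
    lo = [min(X[i][j] for i in range(m)) for j in range(n)]
    hi = [max(X[i][j] for i in range(m)) for j in range(n)]
    R = [[(X[i][j] - lo[j]) / (hi[j] - lo[j]) for j in range(n)] for i in range(m)]
    S = [sum(w[j] * R[i][j] for j in range(n)) for i in range(m)]   # weighted sum
    P = [sum(R[i][j] ** w[j] for j in range(n)) for i in range(m)]  # power weight
    ka = [(S[i] + P[i]) / sum(S[t] + P[t] for t in range(m)) for i in range(m)]
    kb = [S[i] / min(S) + P[i] / min(P) for i in range(m)]
    kc = [(lam * S[i] + (1 - lam) * P[i]) / (lam * max(S) + (1 - lam) * max(P))
          for i in range(m)]
    return [(ka[i] * kb[i] * kc[i]) ** (1 / 3) + (ka[i] + kb[i] + kc[i]) / 3
            for i in range(m)]

scores = cocoso(
    [[0.82, 0.60, 0.75, 3],   # A1 (illustrative criterion values)
     [0.78, 0.70, 0.80, 4],   # A2
     [0.85, 0.90, 0.83, 5]],  # A3
    [0.4, 0.2, 0.2, 0.2],
)
print(max(range(3), key=lambda i: scores[i]))  # -> 2 (A3 ranks first)
```

Because A3 dominates every criterion in this toy matrix, its S, P, and all three appraisal scores are maximal, so it tops the ranking; in the study, the analogous computation over the weighted criteria produced the J48 decision tree as the best transferable classifier.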

Conclusions

This study uses a combination of the IF-AHP, IF-DEMATEL, and CoCoSo techniques to find the best classification algorithm for detecting prospective AT adoption among people with Parkinson disease. By adopting a knowledge-driven approach to AT adoption, the proposed methodology addresses the constraints of purely accuracy-based methods by considering nontypical characteristics related to the design, validation, and implementation phases of these solutions.

The study emphasizes the critical importance of carefully considering classifier selection criteria and subcriteria when implementing technology for patients with PD. The structure (F5) and adaptability (F4) factors were identified as top priorities, indicating their essential role in accelerating classifier deployment in real-world scenarios. It was noted that an inadequate model structure could lead to incorrect predictions, while poorly adaptable algorithms may represent a barrier to technology adoption in patients with PD.

Additionally, the explainability and model type subcriteria within the usefulness domain were highlighted as crucial. These factors ensure that selected classifiers are user-friendly and interpretable for nonexpert users, such as medical professionals and support staff. This helps mitigate resistance to change and delays in technology adoption. White-box algorithms were specifically emphasized for their transparency, enabling a deeper understanding of predictions and facilitating more effective interventions.

Although the study contributes to the literature in many respects, it has several limitations that must be highlighted. First, the findings are based on a specific dataset and context related to PD, potentially limiting their generalizability to different populations or health care settings. Additionally, the accuracy of the results heavily relies on the assumed expertise of the individuals involved in the decision-making process. The study acknowledges the challenge of missing data in health care datasets, emphasizing the need to carefully consider data quality and availability. Furthermore, the number of evaluated classification algorithms was limited to 3. Different ATs may be needed in various stages of PD. Similarly, the effects of chronic diseases other than PD on the choice of AT, and the impact of these conditions on the selection of the classification algorithm, were not discussed in the study. Specific weaknesses in the selected classifiers, such as moderate accuracy levels and issues related to scalability and flexibility, may impact their suitability for real-world health care applications. Finally, some difficulties in applying this approach may emerge in ever-changing contexts if data scientists are not suitably trained in MCDM techniques.

In this study, criterion weights were determined by the IF-AHP method. The AHP method requires more evaluations than other weighting methods [98], such as the best-worst method, and inconsistent judgments are difficult to detect during the evaluation process. In addition, it was not tested whether a follow-up group representing all patients with PD was included for the study validation. It is recommended that researchers address these aspects in future studies.

Acknowledgments

The authors want to acknowledge support from the REMIND Project from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement number 734355.

Conflicts of Interest

None declared.

  1. GHE: life expectancy and healthy life expectancy. World Health Organisation. URL: https:/​/www.​who.int/​data/​gho/​data/​themes/​mortality-and-global-health-estimates/​ghe-life-expectancy-and-healthy-life-expectancy [Accessed 2022-10-04]
  2. Hajat C, Stein E. The global burden of multiple chronic conditions: a narrative review. Prev Med Rep. Dec 2018;12:284-293. [CrossRef] [Medline]
  3. Cook EJ, Randhawa G, Sharp C, et al. Exploring the factors that influence the decision to adopt and engage with an integrated assistive telehealth and telecare service in Cambridgeshire, UK: a nested qualitative study of patient ‘users’ and ‘non-users’. BMC Health Serv Res. Apr 19, 2016;16:137. [CrossRef] [Medline]
  4. Ortiz‐Barrios M, Nugent C, Cleland I, Donnelly M, Verikas A. Selecting the most suitable classification algorithm for supporting assistive technology adoption for people with dementia: a multicriteria framework. Multi Criteria Decision Anal. Jan 2020;27(1-2):20-38. URL: https://onlinelibrary.wiley.com/toc/10991360/27/1-2 [Accessed 2024-10-15] [CrossRef]
  5. Ortíz-Barrios M, Cleland I, Donnelly M, et al. Choosing the most suitable classifier for supporting assistive technology adoption in people with Parkinson’s disease: a fuzzy multi-criteria approach. In: Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management Human Communication, Organization and Work. 2020:390-405. [CrossRef]
  6. Ecer F. An extended MAIRCA method using intuitionistic fuzzy sets for coronavirus vaccine selection in the age of COVID-19. Neural Comput Appl. 2022;34(7):5603-5623. [CrossRef] [Medline]
  7. Ecer F, Pamucar D. MARCOS technique under intuitionistic fuzzy environment for determining the COVID-19 pandemic performance of insurance companies in terms of healthcare services. Appl Soft Comput. Jun 2021;104:107199. [CrossRef] [Medline]
  8. Ecer F, Böyükaslan A, Hashemkhani Zolfani S. Evaluation of cryptocurrencies for investment decisions in the era of Industry 4.0: a Borda count-based intuitionistic fuzzy set extensions EDAS-MAIRCA-MARCOS multi-criteria methodology. Axioms. 2022;11(8):404. [CrossRef]
  9. Statistics. Parkinson’s Foundation. URL: https://www.parkinson.org/Understanding-Parkinsons/Statistics [Accessed 2022-10-04]
  10. Venkatesh V, Davis FD. A theoretical extension of the Technology Acceptance Model: four longitudinal field studies. Manag Sci. Feb 2000;46(2):186-204. [CrossRef]
  11. Shachak A, Kuziemsky C, Petersen C. Beyond TAM and UTAUT: future directions for HIT implementation research. J Biomed Inform. Dec 2019;100:103315. [CrossRef] [Medline]
  12. Alwabel ASA, Zeng XJ. Data-driven modeling of technology acceptance: a machine learning perspective. Exp Syst Appl. Dec 2021;185:115584. [CrossRef]
  13. Chaurasia P, McClean SI, Nugent CD, et al. Modelling assistive technology adoption for people with dementia. J Biomed Inform. Oct 2016;63:235-248. [CrossRef] [Medline]
  14. Chaurasia P, McClean S, Nugent CD, et al. Modelling mobile-based technology adoption among people with dementia. Pers Ubiquitous Comput. 2022;26(2):365-384. [CrossRef] [Medline]
  15. Ortíz-Barrios MA, Garcia-Constantino M, Nugent C, Alfaro-Sarmiento I. A novel integration of IF-DEMATEL and TOPSIS for the classifier selection problem in assistive technology adoption for people with dementia. Int J Environ Res Public Health. Jan 20, 2022;19(3):1133. [CrossRef] [Medline]
  16. Sharma R, Mishra R. A review of evolution of theories and models of technology adoption. Indore Manag J. 2014;6(2):17-29. URL: https:/​/www.​researchgate.net/​profile/​Rajesh-Sharma-12/​publication/​295461133_A_Review_of_Evolution_of_Theories_and_Models_of_Technology_Adoption/​links/​625bd869709c5c2adb82bdf8/​A-Review-of-Evolution-of-Theories-and-Models-of-Technology-Adoption.​pdf [Accessed 2024-10-15]
  17. Zhang S, McClean SI, Nugent CD, et al. A predictive model for assistive technology adoption for people with dementia. IEEE J Biomed Health Inform. 2013;18(1):375-383. [CrossRef]
  18. Stević Ž, Pamučar D, Puška A, Chatterjee P. Sustainable supplier selection in healthcare industries using a new MCDM method: measurement of alternatives and ranking according to COmpromise solution (MARCOS). Comput Ind Eng. Feb 2020;140:106231. [CrossRef]
  19. Mardani A, Jusoh A, MD Nor K, Khalifah Z, Zakwan N, Valipour A. Multiple criteria decision-making techniques and their applications – a review of the literature from 2000 to 2014. Econ Res Ekon Istr. Jan 2015;28(1):516-571. [CrossRef]
  20. Parashar S, Bhattacharya S, Titiyal R, Guha Roy D. Assessing environmental performance of service supply chain using fuzzy TOPSIS method. Health Serv Outcomes Res Method. Mar 2024;24(1):46-72. [CrossRef]
  21. Allah Bukhsh Z, Stipanovic I, Klanker G, O’ Connor A, Doree AG. Network level bridges maintenance planning using Multi-Attribute Utility Theory. Struct Infrastruct Eng. Jul 3, 2019;15(7):872-885. [CrossRef]
  22. Erdebilli B, Sicakyuz C, Yilmaz İ. An integrated multiple-criteria decision-making and data envelopment analysis framework for efficiency assessment in sustainable healthcare systems. Healthcare Analytics. Jun 2024;5:100327. [CrossRef]
  23. Rouyendegh BD, Oztekin A, Ekong J, Dag A. Measuring the efficiency of hospitals: a fully-ranking DEA–FAHP approach. Ann Oper Res. Jul 2019;278(1-2):361-378. [CrossRef]
  24. Erol I, Oztel A, Searcy C, Medeni İ. Selecting the most suitable blockchain platform: a case study on the healthcare industry using a novel rough MCDM framework. Technol Forecast Soc Change. Jan 2023;186:122132. [CrossRef]
  25. Liu Y, Yang Y, Liu Y, Tzeng GH. Improving sustainable mobile health care promotion: a novel hybrid MCDM method. Sustainability. 2019;11(3):752. [CrossRef]
  26. Siksnelyte-Butkiene I, Zavadskas EK, Streimikiene D. Multi-criteria decision-making (MCDM) for the assessment of renewable energy technologies in a household: a review. Energies. 2020;13(5):1164. [CrossRef]
  27. van Laarhoven PJM, Pedrycz W. A fuzzy extension of Saaty’s priority theory. Fuzzy Sets Syst. 1983;11(1-3):229-241. [CrossRef]
  28. Ben Rabia MA, Bellabdaoui A. Collaborative intuitionistic fuzzy-AHP to evaluate simulation-based analytics for freight transport. Expert Syst Appl. Sep 2023;225:120116. [CrossRef]
  29. Chen FH, Hsu TS, Tzeng GH. A balanced scorecard approach to establish a performance evaluation and relationship model for hot spring hotels based on a hybrid MCDM model combining DEMATEL and ANP. Int J Hosp Manag. Dec 2011;30(4):908-932. [CrossRef]
  30. Lu MT, Lin SW, Tzeng GH. Improving RFID adoption in Taiwan’s healthcare industry based on a DEMATEL technique with a hybrid MCDM model. Decis Support Syst. Dec 2013;56:259-269. [CrossRef]
  31. Bhattacharjee P, Howlader I, Rahman M, et al. Critical success factors for circular economy in the waste electrical and electronic equipment sector in an emerging economy: implications for stakeholders. J Clean Prod. May 2023;401:136767. [CrossRef]
  32. Yilmaz I, Erdebilli B, Naji MA, Mousrij A. A Fuzzy DEMATEL framework for maintenance performance improvement: a case of Moroccan chemical industry. J Eng Res. Mar 2023;11(1):100019. [CrossRef]
  33. Abdullah L, Mohd Pouzi H, Awang NA. Intuitionistic fuzzy DEMATEL for developing causal relationship of water security. IJICC. Jul 12, 2023;16(3):520-544. [CrossRef]
  34. Büyüközkan G, Güleryüz S, Karpak B. A new combined IF-DEMATEL and IF-ANP approach for CRM partner evaluation. Int J Prod Econ. Sep 2017;191:194-206. [CrossRef]
  35. Yazdani M, Zarate P, Kazimieras Zavadskas E, Turskis Z. A combined compromise solution (CoCoSo) method for multi-criteria decision-making problems. MD (Chic). Oct 15, 2019;57(9):2501-2519. [CrossRef]
  36. Panchagnula KK, Sharma JP, Kalita K, Chakraborty S. CoCoSo method-based optimization of cryogenic drilling on multi-walled carbon nanotubes reinforced composites. Int J Interact Des Manuf. Feb 2023;17(1):279-297. [CrossRef]
  37. Torkayesh AE, Pamucar D, Ecer F, Chatterjee P. An integrated BWM-LBWA-CoCoSo framework for evaluation of healthcare sectors in Eastern Europe. Socioecon Plann Sci. Dec 2021;78:101052. [CrossRef]
  38. Tripathi DK, Nigam SK, Rani P, et al. New intuitionistic fuzzy parametric divergence measures and score function-based CoCoSo method for decision-making problems. Decis Mak Appl Manag Eng. Apr 15, 2023;6(1):535-563. [CrossRef]
  39. Atanassov KT. Intuitionistic fuzzy sets. In: Intuitionistic Fuzzy Sets. Physica, Heidelberg; 1999:1-137. [CrossRef]
  40. Büyüközkan G, Göçer F. Application of a new combined intuitionistic fuzzy MCDM approach based on axiomatic design methodology for the supplier selection problem. Appl Soft Comput. Mar 2017;52:1222-1238. [CrossRef]
  41. Atanassov KT. An equality between intuitionistic fuzzy sets. Fuzzy Sets Syst. Apr 1996;79(2):257-258. [CrossRef]
  42. Ocampo LA. Applying fuzzy AHP–TOPSIS technique in identifying the content strategy of sustainable manufacturing for food production. Environ Dev Sustain. Oct 2019;21(5):2225-2251. [CrossRef]
  43. Ocampo L, Yamagishi K. Modeling the lockdown relaxation protocols of the Philippine government in response to the COVID-19 pandemic: an intuitionistic fuzzy DEMATEL analysis. Socioecon Plann Sci. Dec 2020;72:100911. [CrossRef] [Medline]
  44. Anzilli L, Facchinetti G. A new proposal of defuzzification of intuitionistic fuzzy quantities. In: Novel Developments in Uncertainty Representation and Processing. Springer; 2016:185-195. [CrossRef]
  45. Karacan I, Senvar O, Arslan O, Ekmekçi Y, Bulkan S. A novel approach integrating intuitionistic fuzzy analytical hierarchy process and goal programming for chickpea cultivar selection under stress conditions. Processes. 2020;8(10):1288. [CrossRef]
  46. Abdullah L, Najib L. Sustainable energy planning decision using the intuitionistic fuzzy analytic hierarchy process: choosing energy technology in Malaysia. Int J Sustainable Energy. Apr 20, 2016;35(4):360-377. [CrossRef]
  47. Ortíz-Barrios MA, Madrid-Sierra SL, Petrillo A, Quezada LE. A novel approach integrating IF-AHP, IF-DEMATEL and CoCoSo methods for sustainability management in food digital manufacturing supply chain systems. JEIM. 2023. [CrossRef]
  48. Boran FE, Genç S, Kurt M, Akay D. A multi-criteria intuitionistic fuzzy group decision making for supplier selection with TOPSIS method. Expert Syst Appl. Oct 2009;36(8):11363-11368. [CrossRef]
  49. Saaty TL. Decision making with the analytic hierarchy process. IJSSCI. 2008;1(1):83. [CrossRef]
  50. Saaty TL. The analytic hierarchy and analytic network processes for the measurement of intangible criteria and for decision-making. In: Greco S, Ehrgott M, Figueira J, editors. Multiple Criteria Decision Analysis International Series in Operations Research & Management Science. Vol 233. Springer; 2016. [CrossRef]
  51. Mu E, Chung TR, Reed LI. Paradigm shift in criminal police lineups: eyewitness identification as multicriteria decision making. Int J Prod Econ. Feb 2017;184:95-106. [CrossRef]
  52. Petrillo A, Colangelo F, Farina I, Travaglioni M, Salzano C, Cioffi R. Multi-criteria analysis for life cycle assessment and life cycle costing of lightweight artificial aggregates from industrial waste by double-step cold bonding palletization. J Clean Prod. Jun 2022;351:131395. [CrossRef]
  53. Fattoruso G, Barbati M, Ishizaka A. An AHP parsimonious based approach to handle manufacturing errors in production processes. Prod Plan Control. 2024:1-30. [CrossRef]
  54. Khan S, Haleem A. Investigation of circular economy practices in the context of emerging economies: a CoCoSo approach. Int J Sustain Eng. May 4, 2021;14(3):357-367. [CrossRef]
  55. Ecer F, Pamucar D. Sustainable supplier selection: a novel integrated fuzzy best worst method (F-BWM) and fuzzy CoCoSo with Bonferroni (CoCoSo’B) multi-criteria model. J Clean Prod. Sep 2020;266:121981. [CrossRef]
  56. Chen QY, Liu HC, Wang JH, Shi H. New model for occupational health and safety risk assessment based on Fermatean fuzzy linguistic sets and CoCoSo approach. Appl Soft Comput. Sep 2022;126:109262. [CrossRef]
  57. Ethics and approvals. UK Research and Innovation. URL: https://www.ukri.org/publications/mrc-guidance-for-applicants/ethics-and-approvals/ [Accessed 2024-09-23]
  58. Bot BM, Suver C, Neto EC, et al. The mPower study, Parkinson disease mobile data collected using ResearchKit. Sci Data. Mar 3, 2016;3(1):160011. [CrossRef] [Medline]
  59. Greer J, Cleland I, McClean S. Predicting assistive technology adoption for people with Parkinson’s disease using mobile data from a smartphone. 2018. Presented at: Conference on Data Science and Knowledge Engineering for Sensing Decision Support (FLINS 2018); Aug 21-24, 2018:1273-1280; Belfast, Northern Ireland, UK. URL: https://www.worldscientific.com/worldscibooks/10.1142/11069 [Accessed 2024-10-15] [CrossRef]
  60. Hamad RA, Hidalgo AS, Bouguelia MR, Estevez ME, Quero JM. Efficient activity recognition in smart homes using delayed fuzzy temporal windows on binary sensors. IEEE J Biomed Health Inform. 2020;24(2):387-395. [CrossRef]
  61. Lee CH, Yoon HJ. Medical big data: promise and challenges. Kidney Res Clin Pract. Mar 2017;36(1):3-11. [CrossRef] [Medline]
  62. Endriyas M, Alano A, Mekonnen E, et al. Understanding performance data: health management information system data accuracy in Southern Nations Nationalities and People’s Region, Ethiopia. BMC Health Serv Res. Mar 18, 2019;19(1):175. [CrossRef] [Medline]
  63. Polat K, Nour M. Parkinson disease classification using one against all based data sampling with the acoustic features from the speech signals. Med Hypotheses. Mar 16, 2020;140:109678. [CrossRef] [Medline]
  64. Adler ED, Voors AA, Klein L, et al. Improving risk prediction in heart failure using machine learning. Eur J Heart Fail. Jan 2020;22(1):139-147. [CrossRef] [Medline]
  65. Chang V, Bhavani VR, Xu AQ, Hossain MA. An artificial intelligence model for heart disease detection using machine learning algorithms. Healthcare Analytics. Nov 2022;2:100016. [CrossRef]
  66. Badillo S, Banfai B, Birzele F, et al. An introduction to machine learning. Clin Pharmacol Ther. Apr 2020;107(4):871-885. [CrossRef] [Medline]
  67. Phillips B, Zhao H. Predictors of assistive technology abandonment. Assist Technol. 1993;5(1):36-45. [CrossRef] [Medline]
  68. Miotto R, Wang F, Wang S, Jiang X, Dudley JT. Deep learning for healthcare: review, opportunities and challenges. Brief Bioinf. Nov 27, 2018;19(6):1236-1246. [CrossRef] [Medline]
  69. Huang MW, Lin WC, Chen CW, Ke SW, Tsai CF, Eberle W. Data preprocessing issues for incomplete medical datasets. Exp Syst. Oct 2016;33(5):432-438. [CrossRef]
  70. Chang C, Deng Y, Jiang X, Long Q. Multiple imputation for analysis of incomplete data in distributed health data networks. Nat Commun. Oct 29, 2020;11(1):5467. [CrossRef] [Medline]
  71. Lin WC, Tsai CF. Missing value imputation: a review and analysis of the literature (2006–2017). Artif Intell Rev. Feb 2020;53(2):1487-1509. [CrossRef]
  72. Hasan MK, Alam MA, Roy S, Dutta A, Jawad MT, Das S. Missing value imputation affects the performance of machine learning: a review and analysis of the literature (2010–2021). Inform Med Unlocked. 2021;27:100799. [CrossRef]
  73. Alabadla M, Sidi F, Ishak I, et al. Systematic review of using machine learning in imputing missing values. IEEE Access. 2022;10:44483-44502. [CrossRef]
  74. Prince J, Andreotti F, De Vos M. Multi-source ensemble learning for the remote prediction of Parkinson’s disease in the presence of source-wise missing data. IEEE Trans Biomed Eng. May 2019;66(5):1402-1411. [CrossRef] [Medline]
  75. Sigcha L, Borzì L, Amato F, et al. Deep learning and wearable sensors for the diagnosis and monitoring of Parkinson’s disease: a systematic review. Exp Syst Appl. Nov 2023;229:120541. [CrossRef]
  76. Harimoorthy K, Thangavelu M. Cloud‐assisted Parkinson disease identification system for remote patient monitoring and diagnosis in the smart healthcare applications. Concurr Comput. Nov 10, 2021;33(21). [CrossRef]
  77. Pecchia L, Bath P, Pendleton N, Bracale M. AHP and risk management: a case study for assessing risk factors for falls in community-dwelling older patients. In: Tammy T, editor. Presented at: Proceedings of the 10th International Symposium on AHP (ISAHP2009); Jul 29 to Aug 1, 2009:1-15; Pennsylvania, USA.
  78. Pecchia L, Martin JL, Ragozzino A, et al. User needs elicitation via analytic hierarchy process (AHP). A case study on a computed tomography (CT) scanner. BMC Med Inform Decis Mak. Jan 5, 2013;13:2. [CrossRef] [Medline]
  79. Wang WC, Yu WD, Yang IT, Lin CC, Lee MT, Cheng YY. Applying the AHP to support the best-value contractor selection-lessons learned from two case studies in Taiwan. J Civil Eng Manage. Jan 16, 2013;19(1):24-36. [CrossRef]
  80. Schmidt K, Aumann I, Hollander I, Damm K, von der Schulenburg JMG. Applying the analytic hierarchy process in healthcare research: a systematic literature review and evaluation of reporting. BMC Med Inform Decis Mak. Dec 24, 2015;15:112. [CrossRef] [Medline]
  81. Pereira RB, Plastino A, Zadrozny B, Merschmann LHC. Correlation analysis of performance measures for multi-label classification. Inf Process Manag. May 2018;54(3):359-369. [CrossRef]
  82. Ali NA, El Abbassi A, Cherradi B. The performances of iterative type-2 fuzzy C-mean on GPU for image segmentation. J Supercomput. Feb 2022;78(2):1583-1601. [CrossRef]
  83. Bengio Y. Yoshua Bengio and Gary Marcus on the best way forward for AI. 2017. URL: https://medium.com/@Montreal.AI/transcript-of-the-ai-debate-1e098eeb8465 [Accessed 2023-10-17]
  84. Li J, Chang X. Improving mobile health apps usage: a quantitative study on mPower data of Parkinson’s disease. Inf Technol People. Jan 22, 2021;34(1):399-420. [CrossRef]
  85. Ozanne A, Johansson D, Hällgren Graneheim U, Malmgren K, Bergquist F, Alt Murphy M. Wearables in epilepsy and Parkinson’s disease-a focus group study. Acta Neurol Scand. Feb 2018;137(2):188-194. [CrossRef] [Medline]
  86. Cartuyvels R, Spinks G, Moens MF. Discrete and continuous representations and processing in deep learning: looking forward. AI Open. 2021;2:143-159. [CrossRef]
  87. Hoi SCH, Sahoo D, Lu J, Zhao P. Online learning: a comprehensive survey. Neurocomputing. Oct 2021;459:249-289. [CrossRef]
  88. Raja PS, Thangavel K. Missing value imputation using unsupervised machine learning techniques. Soft Comput. Mar 2020;24(6):4361-4392. [CrossRef]
  89. Lin WC, Tsai CF, Zhong JR. Deep learning for missing value imputation of continuous data and the effect of data discretization. Knowl Based Syst. Mar 2022;239:108079. [CrossRef]
  90. Farokhzadian J, Khajouei R, Hasman A, Ahmadian L. Nurses’ experiences and viewpoints about the benefits of adopting information technology in health care: a qualitative study in Iran. BMC Med Inform Decis Mak. Sep 21, 2020;20(1):240. [CrossRef] [Medline]
  91. Belić M, Bobić V, Badža M, Šolaja N, Đurić-Jovičić M, Kostić VS. Artificial intelligence for assisting diagnostics and assessment of Parkinson’s disease-a review. Clin Neurol Neurosurg. Sep 2019;184:105442. [CrossRef] [Medline]
  92. Khan B, Fatima H, Qureshi A, et al. Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomed Mater Devices. Feb 8, 2023:1-8. [CrossRef] [Medline]
  93. Javed AR, Fahad LG, Farhan AA, et al. Automated cognitive health assessment in smart homes using machine learning. Sustain Cities Soc. Feb 2021;65:102572. [CrossRef]
  94. Lu J, Song E, Ghoneim A, Alrashoud M. Machine learning for assisting cervical cancer diagnosis: an ensemble approach. Future Gener Comput Syst. May 2020;106:199-205. [CrossRef]
  95. Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol. Aug 2021;65(5):545-563. [CrossRef] [Medline]
  96. Lin Z, Sinha S, Zhang W. Towards efficient and scalable acceleration of online decision tree learning on FPGA. Presented at: 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM); Apr 2019:172-180; San Diego, CA, USA. [CrossRef]
  97. Ferreira PJS, Cardoso JMP, Mendes-Moreira J. kNN prototyping schemes for embedded human activity recognition with online learning. Computers. 2020;9(4):96. [CrossRef]
  98. Vega de la Cruz LO, Marrero Delgado F, Pérez Pravia MC. Procedimiento para la gestión integrada del control interno con enfoque multicriterio [Procedure for the integrated management of internal control with a multicriteria approach] [Article in Spanish]. IC. 2022;18(1):223-242. [CrossRef]


Abbreviations

AI: artificial intelligence
AT: assistive technology
CoCoSo: combined compromise solution
CR: consistency ratio
DEMATEL: decision-making trial and evaluation laboratory
FAHP: fuzzy analytic hierarchy process
IBK: instance-based k-nearest neighbors
IF-AHP: intuitionistic fuzzy analytic hierarchy process
IF-DEMATEL: intuitionistic fuzzy decision-making trial and evaluation laboratory
IFS: intuitionistic fuzzy set
IFWA: intuitionistic fuzzy weighted averaging
k-NN: k-nearest neighbors
MCDM: multicriteria decision-making
ML: machine learning
PD: Parkinson disease
SAW: simple additive weighting
TAM: Technology Acceptance Model
TOPSIS: technique for order of preference by similarity to ideal solution
UTAUT: Unified Theory of Acceptance and Use of Technology


Edited by Boris Schmitz; submitted 29.02.24; peer-reviewed by Babek Erdebilli, Fatih Ecer, Valerio Salomon; final revised version received 13.08.24; accepted 26.08.24; published 22.10.24.

Copyright

© Miguel Ortiz-Barrios, Ian Cleland, Mark Donnelly, Muhammet Gul, Melih Yucesan, Genett Isabel Jiménez-Delgado, Chris Nugent, Stephany Madrid-Sierra. Originally published in JMIR Rehabilitation and Assistive Technology (https://rehab.jmir.org), 22.10.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Rehabilitation and Assistive Technology, is properly cited. The complete bibliographic information, a link to the original publication on https://rehab.jmir.org/, as well as this copyright and license information must be included.