The COVID-19 pandemic and its widespread consequences have brought public health issues and related preventive measures into sharper focus. Increasing efforts are being made to educate people about the harms of unhealthy lifestyles, which are closely associated with the leading global risks for mortality and with chronic disease markers such as high blood glucose and cholesterol levels [,]. One of the most prevalent health concerns in modern society is sedentary behavior; by contrast, adequate physical exercise can produce positive physical and mental effects [,]. Although many people are aware of such benefits, motivational techniques and systems are needed to prompt people to take action [] and sustain long-term exercise engagement.
Gamification, a form of persuasive design [] and a useful behavioral change support system [], has gained increasing popularity in recent years [] owing to its theoretical bases in motivation research (such as self-determination theory [] and the behavior model proposed by Fogg []) and its wide practical applications integrating motivational features such as goal setting and feedback. By incorporating common game design elements into nongame contexts to make activities more game-like [], gamification is believed to hold great promise in the area of health behavior change [], particularly for promoting physical activity []. Despite the increasingly widespread use of gamification in many mobile fitness apps such as Nike+ Running (subsequently rebranded as Nike Run Club) and Zombies, Run! [], the evaluation of such commercial apps and their metrics has been rare and inconclusive; for example, Direito et al [] compared the use of Zombies, Run! with a nonimmersive app and a control group and did not find any significant intervention effect on the adolescent participants’ fitness and activity levels. In general, empirical evidence on the motivational effects of game elements has been accumulating and is promising, although with heterogeneity across the relatively small number of high-quality experimental studies [,]. According to the 2 most recent meta-analyses on gamification and physical activity [,], gamified interventions, compared with nongamified controls, promoted small increases in step count but not in more intense physical exercise; significant differences in effect size were also observed among the reviewed studies, and providing tailored experiences was suggested for improving gamification effectiveness across diverse user populations.
Overall, gamification in physical activity as a young area of research still faces many open questions that need to be rigorously studied, such as the effectiveness of various game elements, as well as the consideration of different application contexts and notable user differences [].
Tailored Gamification and User Modeling

One important reason for the discrepancies detected in existing gamification research is the reliance on “one-size-fits-all” approaches in many current apps. These approaches either did not account for variability in user motivations and personal characteristics [,] or implemented too many game elements simultaneously [], thereby yielding less effective results. Moreover, the majority of prior studies tested gamified systems with multiple elements combined, providing limited information on the unique effects of individual elements [,].
To address these issues, there is an emerging area of interest called “tailored gamification,” which focuses on customizing game elements in given contexts to suit individual needs and preferences for the gamification experience []. Accordingly, a robust and accurate classification of users is a critical first step before any effective tailored gamification can be implemented. Meanwhile, current user modeling approaches rely primarily on rating scale–based measures as the basis for profiling, such as personality traits or player typologies [], the latter being developed either in the context of games (eg, Bartle and BrainHex) or specifically for gamification (ie, Hexad) [].
However, these popular measurement instruments possess methodological limitations for user segmentation and tailored gamification design. The most common issue inherent in rating scales is scale use bias, such as small ranges of mean item scores, a tendency for respondents to favor the top end of the scale, and scalar inequivalence across countries and cultures []. As rating scales do not require people to make trade-offs, they provide less accurate assessments of the relative importance of each item, making it harder to clearly differentiate one segment from another. A key example is the widely used gamification user types Hexad scale [,], which is a framework for categorizing individuals based on their motivations and preferences when interacting with gamified systems. The scale consists of 6 distinct user types: philanthropist, socializer, achiever, free spirit, player, and disruptor. Each is motivated by different aspects of gamification: purpose, relatedness, competence, autonomy, extrinsic rewards, and the triggering of change, respectively. Despite the helpfulness of the Hexad scale for understanding user diversity, the issues with rating scales are still unavoidable; for example, although users can be classified into 1 of the 6 types based on their highest scores, the score differences across the dimensions might not be conspicuous or practically meaningful, and there are also strong correlations between certain types, such as those between the philanthropist and socializer types []. Further evidence has shown that the Hexad user orientations are not necessarily stable and can change over time, suggesting that gamification design based on a 1-time measurement of the dominant Hexad user type might not be adequate [].
In addition, while standard practices of correlation analysis can be used to match user types with their preferred game elements based on ratings [], the accuracy of such methods is not guaranteed, given the limitations of rating scales again, as well as the challenge of determining the most powerful game elements from a large pool of suitable candidates for a given user type. As summarized by Klock et al [], >20 different game elements were preferred by achiever, player, and socializer types, and 11 game elements were proposed even for the least common type, disruptor. More research is needed to tease out the unique effects of individual game elements to optimize their use for different user groups, particularly for tailored design on limited interfaces.
Maximum Difference Scaling: An Alternative User Modeling Method

Given the aforementioned issues with existing user modeling approaches, alternative methods are needed to advance the field of tailored gamification. If the purpose of tailoring is to make gamification better fit what the users will enjoy or be motivated by, a natural way would be directly using user preferences and needs for various game elements as the basis for profiling. However, there is a lack of research using this intuitive approach, which might be due to the limitations of traditional rating and segmentation methods, such as scale use bias and statistical deficiencies []. In addition, the problem with asking users about game design elements directly, according to Tondello et al [], is that users might not be aware of their game preferences and thus be unable to rate each element accurately. It is also important to examine psychological factors such as motivation in a gamified context beyond what users like, which can help determine what type of gamification works best for stimulating behavior change rather than merely pleasing the users.
Accordingly, the maximum difference scaling (MaxDiff) method, a widely used technique in marketing research, provides a useful alternative for classifying gamification users because it is “a rating method that does not experience scale use bias, forces trade-offs, and allows each scale point to be used once and only once” []. As a classic pairwise comparative analysis method [], MaxDiff requires respondents to select 2 items from a given set—their most and least preferred—that can indicate the maximum difference in their preferences. This approach has been shown to solve the problem of scalar inequivalence across countries [] and demonstrate greater discrimination among items and between respondents than Likert scales []. MaxDiff can also be conveniently integrated into scenario-based experiments that simulate real-world contexts and improve experimental results []. In addition, it generates valuable data for segmenting users into distinct groups.
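To make the trade-off logic of MaxDiff concrete, a simple count-based analysis (best-minus-worst counts divided by the number of times an item was shown) is often used as a first approximation of preference scores before formal logit or hierarchical Bayes estimation. The sketch below illustrates this with hypothetical choice-task data; the item names and data structure are illustrative only and are not taken from any particular study or software.

```python
from collections import Counter

def maxdiff_count_scores(tasks):
    """Compute simple count-based MaxDiff scores.

    Each task is a dict with the items shown plus the respondent's
    'best' and 'worst' picks. The count score for an item is
    (#times chosen best - #times chosen worst) / #times shown,
    ranging from -1 (always worst) to +1 (always best).
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for t in tasks:
        shown.update(t["items"])
        best[t["best"]] += 1
        worst[t["worst"]] += 1
    return {i: (best[i] - worst[i]) / shown[i] for i in shown}

# Hypothetical choice tasks for one respondent (4 items per task)
tasks = [
    {"items": ["goals", "points", "avatars", "sharing"],
     "best": "goals", "worst": "avatars"},
    {"items": ["goals", "badges", "narrative", "community"],
     "best": "badges", "worst": "narrative"},
]
scores = maxdiff_count_scores(tasks)
```

Because every respondent must pick exactly one best and one worst item per task, these scores force trade-offs in a way that Likert ratings do not.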
The MaxDiff technique provides a great opportunity for tailored gamification research, which is still a new area at the very early stage of development, with the majority of research conducted in the education domain [,]. Limited evidence exists on the usefulness of MaxDiff for teasing out user preferences for game design elements [,]; however, its full potential for user segmentation remains underused. In general, user modeling for tailored gamification needs to “consider many characteristics simultaneously to evaluate how much one may influence the other” [] to achieve better segmentation of gamification users. It should also consider the context of use that can affect the effectiveness of game elements (eg, task, domain, and device), with most existing research limited to education and smartphone apps []. More inquiry into the health domain and other new application types will be particularly valuable.
Smartwatch-Based Gamification for Physical Activity

One promising health-related application of tailored gamification is its integration into popular wearable activity trackers such as smartwatches, which were predicted to be used by approximately 225 million people globally in 2024 [] and hold great promise for promoting healthy lifestyles through useful functions such as health monitoring, reminders, and sharing []. In particular, a recent umbrella review confirmed that using wearable devices could significantly increase physical activity [] and suggested leveraging motivational features to enhance the effectiveness of wearables. However, nearly half of the users abandoned their devices within the first 6 months [,]; yet, most related research has only focused on technical issues rather than user perceptions and preferences []. A small number of studies did point out the importance of providing tailored feedback and intrinsically motivating users [], as well as optimizing design elements on the limited interfaces of smartwatches [].
Meanwhile, gamification is believed to enhance personal informatics apps and sustain user engagement []. As an effective feedback mechanism [], gamification can aid the interpretation of health monitoring data for users in a straightforward and motivating way. Moreover, unlike mostly self-reported data from mobile phone apps, smartwatches offer the distinct advantage of providing immediate feedback based on real-time objective health data. Their wearable nature also makes them more suitable for exercise contexts, thereby improving the effectiveness of gamification in a more accurate and timely fashion []; for example, gamification on smartwatches has been found effective for obesity control [,], while a systematic review of mobile health–based gamification interventions suggested the need for more empirical research to explore the efficacy of combining gamification and wearables for promoting physical activity in various populations []. In particular, tailored gamification has the potential to further improve engagement and motivation for exercising among all users [].
As smartwatch users vary in their needs and behaviors, accurate user modeling is required to categorize them and to tailor different game elements to different groups, thereby better motivating users to maintain regular exercise engagement and lifelong healthy lifestyles. From a methodological perspective, the MaxDiff method is particularly useful for optimizing gamification design on the small smartwatch screen, which requires a nuanced understanding of the trade-offs that different users make among various game elements (rather than the absolute importance of each element). Only with a clear understanding of these subtle trade-offs and heterogeneous priorities can smartwatch users be accurately classified and then offered tailored design solutions that promote both user enjoyment and health benefits through a carefully selected set of functions and displays on the limited interfaces of smartwatches.
This Study

This study aimed to understand how smartwatch-based gamification should be tailored for different user groups to effectively promote physical exercise based on a more accurate and innovative user modeling approach. We incorporated both user preferences and needs for game elements from smartwatch users into the segmentation process and adopted the MaxDiff experimental method requiring users to make trade-offs among various game elements, followed by a quantified categorization of users into distinct groups using the latent class technique. As formulating theory-based hypotheses was not possible, given the limited prior research in the young area of tailored gamification, we proposed the following research questions (RQs):
RQ1: What is the relative importance of each game element in smartwatch fitness apps based on the preferences of smartwatch users?

RQ2: Which game elements in smartwatch fitness apps are more effective than others for motivating physical exercise among smartwatch users?

RQ3: Are there distinct user groups with unique preferences for smartwatch-based gamification? If so, how can such segment membership be predicted among smartwatch users?

RQ4: Are there distinct user groups who will be differentially motivated by smartwatch-based gamification? If so, how can such segment membership be predicted among smartwatch users?

As explained earlier, this study sought to compare the differences in user preferences and needs for gamification; therefore, using a simple sorting method was not appropriate due to the potential order effect in ranking that might bias the experimental results []. Meanwhile, if a Likert rating scale were used, respondents might assign similar preference scores to each game element, making it impossible to clearly distinguish the elements from one another. Therefore, we implemented the MaxDiff method and conducted 2 experiments to investigate user preferences for different game elements in smartwatch gamification apps. In addition, we compared the motivational effects of these elements on user willingness to exercise.
Materials for the MaxDiff Experiments

On the basis of a comprehensive literature review and a preliminary study [], we synthesized 16 commonly used game elements for the MaxDiff experiments: goals, progress, feedback, overview, points, levels, badges, leaderboards, community, sharing, cooperation, competition, challenges, narrative, avatars, and digital currency. These can be categorized into 4 groups: goal-related, reward-related, socialization-related, and immersion-related elements. After carefully reviewing related studies and market apps for the visualization and verbal explanation of each element, we designed 2 sets of experimental materials for the 2 experiments: a low-fidelity version for studying user preferences and a high-fidelity version for measuring the motivational effects of gamification (eg, the 2 versions of “goals” in ). The low-fidelity version adopted black-and-white wireframes to explain the concept of each game element in a simple manner to avoid the influence of color and design on user preferences. In comparison, the motivation-focused experiment was designed to simulate the real-life situation of using a smartwatch for exercising; therefore, the Apple Watch S6 prototype was used to represent the game elements in the form of various reminders displayed on a smartwatch screen. It should be noted that the text font and color were kept consistent across all materials for each game element to exclude the potential effects of such design factors on user responses.
Figure 1. Samples of gamification materials for the MaxDiff experiments on (A) user preferences and (B) motivational effects.

Other Instruments

In addition to conducting the MaxDiff experiments, we collected background data from users through survey questions as well as instruments for measuring gamification user types and motivation for smartwatch use (). The Hexad scale [] included 24 items to be rated on a 7-point scale ranging from 1 (strongly disagree) to 7 (strongly agree) to determine gamification user type, while the user motivation scale for smartwatch use [] used a 6-point Likert scale and included 6 items, with 3 items measuring intrinsic motivation and 3 items measuring extrinsic motivation.
Procedures

The study consisted of 3 parts: background information of users was collected first, followed by the preference-focused and motivation-focused MaxDiff experiments. Respondents were asked to provide basic information about themselves, including demographics, smartwatch use, exercise and gaming habits, and gamification user types. In particular, we collected important data about their attitudes and behaviors regarding smartwatch use, physical exercise, playing games, and gamification, which provided valuable complementary information for user segmentation.
The MaxDiff experiments started with a short sample MaxDiff exercise that familiarized the participants with the process so that they could make their choices quickly and accurately in the formal experiments that followed. In the first MaxDiff experiment, simple pictures and text descriptions of the 16 game elements related to a smartwatch fitness app (ie, the low-fidelity version) were presented to the participants based on an experimental design. In each task, participants selected their favorite and least favorite elements from the subset of game elements presented. Specifically, the parameters of the MaxDiff experimental design were as follows: (1) 16 total items (game elements); (2) 4 items per question, following convention; (3) 12 questions per respondent to ensure that each item appeared at least 3 times; (4) 300 versions to meet the web-based questionnaire criteria; and (5) a minimum sample size of 200 to ensure that each item appeared at least 500 times. We followed the principle of balanced incomplete block design in selecting the MaxDiff experimental parameters, and the experimental design passed the merit criteria test, indicating that the design was reasonable and could be formally administered.
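The design parameters above imply that, within one questionnaire version, each of the 16 elements appears exactly 12 × 4 / 16 = 3 times. As a rough illustration of how such a version can be generated, the sketch below uses simple randomized resampling; it is a simplified stand-in for the balanced incomplete block designs produced by dedicated tools such as Lighthouse Studio, which additionally balance item co-occurrence and positions.

```python
import random

def maxdiff_design(n_items=16, per_task=4, n_tasks=12, seed=0):
    """Generate one questionnaire version in which each item appears
    exactly n_tasks * per_task / n_items times (here, 3 times) and
    never twice within the same task. Illustrative only: real MaxDiff
    software also balances pairwise co-occurrence across versions."""
    reps = n_tasks * per_task // n_items
    rng = random.Random(seed)
    while True:  # retry until no task contains a duplicate item
        pool = list(range(n_items)) * reps
        rng.shuffle(pool)
        tasks = [pool[i * per_task:(i + 1) * per_task]
                 for i in range(n_tasks)]
        if all(len(set(t)) == per_task for t in tasks):
            return tasks

design = maxdiff_design()  # 12 tasks of 4 distinct items each
```

Generating many such versions (300 in this study) spreads item orders and combinations across respondents, reducing order and context effects.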
For the second, motivation-focused MaxDiff experiment, the parameters of the experimental design were exactly the same as those of the first experiment because the number of game elements was identical. The main difference lay in the content and form of presentation: high-fidelity experimental materials were used, and the game elements were presented as context-specific reminders on a simulated smartwatch screen. We set the context for the participants as follows: “You are very tired after a long day of work and finally come home. You don’t want to exercise and just want to play with your smartphone, when a message alert suddenly pops up on your smartwatch.” In the 12 scenarios that followed, each composed of 4 messages representing gamification elements (refer to for a sample scenario), participants were asked to choose the messages that motivated them the most to start exercising and those that motivated them the least, thereby measuring the motivational effects of different smartwatch-based game elements in the context of exercising.
Figure 2. A sample scenario with 4 messages representing gamification elements from the motivation-focused MaxDiff experiment.

Sample and Data Collection

We used the specialized software Lighthouse Studio (Sawtooth Software) to design the MaxDiff experiments, program the measurement scales and survey questions, and collect data on the web. The study targeted smartwatch and smart band users aged 18 to 35 years in China. We recruited participants through invitations posted on popular social networking apps (eg, WeChat and QQ). A total of 529 people responded, of whom 378 (71.5%) passed the screener and provided valid responses. The whole study took 10 to 15 minutes to complete. Screening and data cleaning criteria included not meeting the background requirements (such as unsuitable age or not having a smartwatch or smart band), incomplete responses, completion time <10 minutes, and straight-lining answers.
Ethical Considerations

This study was approved by the ethics committee of Shenzhen University (2022010710). Informed consent was obtained electronically before participants began the web-based study, and participants could opt out at any time. All data collected were anonymous. Each participant received a cash incentive of CN ¥10 (US $1.5).
Data Analysis

SPSS software (version 26.0; IBM Corp) was used for data cleaning and conventional statistical analysis, while data from the MaxDiff experiments were analyzed using Lighthouse Studio. Specifically, the utility scores for all users’ preferences for each game element were calculated using both logit analysis and hierarchical Bayes estimation. We also obtained the overall ranking of users’ preferences for the 16 game elements based on the rescaled preference scores, which summed to 100. Similarly, we analyzed the motivation-focused MaxDiff experiment data using the same approach, calculating the utility scores and rankings of the elements in terms of their motivational effects on users.
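The rescaling of raw logit utilities to scores summing to 100 can be sketched as follows. This uses one common probability-of-choice transformation for MaxDiff utilities; the exact procedure implemented in Lighthouse Studio may differ in detail, so this is an illustrative approximation, not a reproduction of the software's algorithm.

```python
import math

def rescale_utilities(utils, per_task=4):
    """Rescale raw logit utilities to positive scores summing to 100.

    Applies a probability-of-choice transformation often used for
    MaxDiff utilities: p_i = exp(u_i) / (exp(u_i) + k - 1), where k is
    the number of items shown per task, then normalizes so that the
    scores across all items sum to 100. Illustrative convention only.
    """
    k = per_task
    p = {i: math.exp(u) / (math.exp(u) + k - 1)
         for i, u in utils.items()}
    total = sum(p.values())
    return {i: 100 * v / total for i, v in p.items()}
```

Scores on this ratio-like scale are easier to compare across elements than raw utilities, which are only identified up to an additive constant.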
For the user segmentation, we performed latent class multinomial logit analysis on the MaxDiff data. As a powerful user segmentation method, latent class multinomial logit analysis is more suitable for analyzing MaxDiff data than other classification methods (eg, cluster analysis) because it is not solely data driven but based on statistical models. Users with similar choices were grouped into categories, and for each category, the part-worth utilities and each respondent’s probability of membership were estimated. After obtaining the preference-based and motivation-based segmentation schemes, we enriched them with demographic information, smartwatch use, exercise and gaming habits, and gamification user types, using chi-square tests and multivariate analysis of variance (MANOVA) to describe the specific characteristics of each user segment. Finally, significant factors that could help predict the future classification of users were determined through multinomial logistic regression models.
As shown in , our final sample consisted of 378 participants (male: n=204, 54%; female: n=174, 46%) with an average age of 23 (SD 2.916) years; 82 (21.7%) were employed, and the rest (n=296, 78.3%) were students. As shown in , of the 378 participants, 156 (41.3%) had prior experience using smartwatches, and 222 (58.7%) had prior experience using smart bands. The majority (293/378, 77.5%) had worn the device for >3 months with daily use. The main purpose of using the wearable varied from health monitoring and fitness to messaging and mobile assistance, with health monitoring and fitness reported by 224 (59.3%) of the 378 participants. Regarding physical exercise, 70.9% (268/378) of the participants did not meet the World Health Organization’s recommendation of exercising at least 3 times per week, with exercises limited to walking or aerobics. By contrast, the distributions of game playing preferences and behaviors were fairly even, suggesting notable individual differences in this area.
Table 1. Demographics of participants (n=378).

We also analyzed the scale measures. presents the results for each Hexad user type and each of the 4 dimensions of gamification drive that we hypothesized, as well as motivation for smartwatch use. Among the Hexad user types, player, free spirit, and achiever had the highest average scores, followed by philanthropist and socializer, whereas disruptor had substantially lower average scores. Meanwhile, respondents reported higher scores in the goal-driven and achievement-driven dimensions in comparison with the immersion-driven and socialization-driven dimensions. Interestingly, intrinsic motivation for smartwatch use was higher than extrinsic motivation. It should be noted that all measures for each item in the scale were tested for normality for further statistical modeling.
Table 3. Scores for the Hexad user types, dimensions of gamification drive, and motivation for smartwatch use.

The results from the 2 analyses of the preference-focused MaxDiff data—logit analysis and hierarchical Bayes estimation—are compared in , showing overall consistency and negligible differences. It should be noted that we used the results of the logit analysis for further segmentation modeling to simplify the analyses and also because logit analysis is more commonly used in estimating mean utility scores and more applicable to studies involving a larger number of items [].
Table 4. User preferences for each gamification element based on the preference-focused MaxDiff experiment.

Similarly, based on the 2 aforementioned statistical approaches, we analyzed the average utility scores and rankings of the 16 elements for motivating users, as shown in . Interestingly, the order of importance among the elements differed from that in the first experiment, suggesting that user preferences did not necessarily equal the motivational power of gamification elements or predict actual behavior change; for example, it was surprising to see that cooperation, only the 11th most preferred element, could indeed motivate users quite effectively, ranking 4th among all 16 elements.
Table 5. Motivational effects of each gamification element based on the motivation-focused MaxDiff experiment.

To determine the best-fitting segmentation model with the optimal number of segments, multiple rounds of modeling were performed, with 7 metrics plus their log-likelihood values reported in as measures of model fit. Specifically, users were classified into 2, 3, 4, and 5 categories according to their MaxDiff preferences for game elements, and the solutions were then compared through a balanced evaluation of all 7 metrics in the following way. If relative chi-square were used as the fit criterion, the 2-category solution with the largest value would be selected, whereas the optimal solution would be 5 categories if percentage certainty and chi-square were used as the criteria. Similarly, if the Akaike information criterion, consistent Akaike information criterion, Bayesian information criterion, and adjusted Bayesian information criterion were used as the criteria, the 5-category solution with the smallest values would be the best classification result. However, as both the 2- and 5-category solutions seemed extreme, we also examined how these 4 information criteria decreased as the number of categories increased and found that the greatest decrease occurred from 2 to 3 categories; therefore, it seemed most appropriate to classify users into 3 groups. In addition, the selection of the best number of categories should also consider other important factors, for example, whether the differences among the categories are obvious and whether the market sizes of the categories are balanced.
Therefore, based on all the aforementioned criteria, we ultimately determined that the users were best divided into 3 categories according to their distinct preferences for game elements, a decision also supported by the further analysis reported later.
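The information criteria used above are all simple functions of the model's log-likelihood, number of parameters, and sample size. The following sketch computes them using the standard textbook formulas (the adjusted BIC uses the common sample-size adjustment n* = (n + 2) / 24); it is meant to clarify how the metrics trade off fit against complexity, not to reproduce Lighthouse Studio's exact output.

```python
import math

def fit_metrics(log_lik, n_params, n_obs):
    """Information criteria for comparing latent class solutions.

    Smaller values indicate better fit for all 4 metrics; each
    penalizes the -2*log-likelihood by a different complexity term.
    """
    aic = -2 * log_lik + 2 * n_params
    bic = -2 * log_lik + n_params * math.log(n_obs)
    caic = -2 * log_lik + n_params * (math.log(n_obs) + 1)
    abic = -2 * log_lik + n_params * math.log((n_obs + 2) / 24)
    return {"AIC": aic, "BIC": bic, "CAIC": caic, "ABIC": abic}
```

Because BIC and CAIC penalize parameters more heavily than AIC, they tend to favor solutions with fewer classes, which is why examining the decrease in these criteria across successive solutions (as done above) is a common complement to picking the single minimum.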
Table 6. Model fitting tests for MaxDiff preference-based segmentation.

aAIC: Akaike information criterion.
bCAIC: consistent Akaike information criterion.
cBIC: Bayesian information criterion.
dABIC: adjusted Bayesian information criterion.
Preference Scores and Market Sizes by Segment

On the basis of the 3-segment model and the distinctive characteristics of each group, we categorized the 3 segments as goal-preferred, immersion-preferred, and reward-preferred segments. shows by segment the market sizes and the logit preference coefficients for each gamification element, the latter being obtained from the latent class multinomial logit analysis for each segment.
Clearly, the 3 segments differed significantly in their preference scores. Users in the goal-preferred segment scored notably higher on progress, goals, challenges, overview, and feedback than the other 2 segments but with much lower preferences for immersive elements such as avatars, narrative, and digital currency. In comparison, users in the immersion-preferred segment had much higher preference scores on narrative, avatars, badges, and challenges than those in the other 2 segments, who showed notably lower preferences for social elements such as community, sharing, competition, and leaderboards. By contrast, users in the reward-preferred segment loved reward-related elements and had the highest preference scores on digital currency, which was the least preferred by the other 2 segments. Not surprisingly, the reward-preferred users showed much lower preferences for social elements (eg, community, cooperation, and competition) as well as immersive elements (eg, avatars and narrative).
Table 7. Segment-level user preference coefficients for each gamification element and market sizesa.

Gamification elements   Goal-preferred segment   Immersion-preferred segment   Reward-preferred segment
Goals                   9.529                    6.322                         6.175
Progress                13.977                   9.324                         9.806
Feedback                7.309                    5.761                         5.053
Overview                8.447                    5.412                         6.240
Points                  4.058                    4.283                         11.426
Levels                  6.982                    5.470                         6.572
Badges                  7.307                    6.998                         5.825
Leaderboards            6.704                    3.545                         5.904
Community               2.887                    5.026                         4.654
Sharing                 6.408                    4.606                         5.373
Cooperation             5.891                    6.264                         4.031
Competition             5.886                    3.882                         3.800
Challenges              9.471                    6.355                         4.703
Narrative               1.545                    14.840                        2.330
Avatars                 2.243                    9.823                         4.123
Digital currency        1.356                    2.090                         13.986

aMarket size (n=378): goal-preferred segment, n=161 (42.6%); immersion-preferred segment, n=113 (29.9%); and reward-preferred segment, n=104 (27.5%).
Demographic and Behavioral Characteristics of Each Segment

On the basis of the segmentation scheme, we also summarized the characteristics of each segment using descriptive statistics and chi-square tests, focusing on user demographics, physical exercise habits, and game playing. The goal-preferred segment had a higher proportion of male individuals than the other 2 segments, whereas the reward-preferred segment had a higher proportion of female individuals, although the sex differences were not statistically significant. Meanwhile, there was indeed a significant difference in age distribution among the 3 segments (χ²₂=3.8; P=.049): of the 104 reward-preferred segment members, 72 (69.2%) were aged 23 to 35 years, a significantly higher proportion than in the other 2 segments. Occupations also differed significantly across the 3 segments (χ²₂=7.6; P=.02), with a much smaller proportion of nonstudents (24/161, 14.9%) in the goal-preferred segment.
Furthermore, significant between-segment user differences were also observed regarding the most frequent exercise type (χ²₄=12.1; P=.02), exercise frequency (χ²₄=9.6; P=.048), and level of interest in exercising (χ²₄=17.1; P=.002); for example, the goal-preferred segment had the smallest percentage of users (48/161, 29.8%) who identified walking as their most frequently chosen exercise, while the other 2 segments had higher percentages of users who preferred aerobic exercise. Meanwhile, the frequency of exercise was higher for the goal-preferred group and lower for the immersion-preferred group. Not surprisingly, there were more sports enthusiasts in the former segment but more users who hardly exercised in the latter segment. By contrast, no significant difference was detected among the segments in terms of liking games, although there was a higher percentage of game lovers in the immersion-preferred group.
In addition, we used 1-way MANOVA to analyze the differences among the segments in terms of Hexad user types and the 4 dimensions of gamification drive. First, we found that scores for the philanthropist (P<.001), socializer (P=.003), free spirit (P=.004), and achiever (P<.001) dimensions differed significantly across the segments, but there were no significant differences for the disruptor (P=.58) and player (P=.05) dimensions. As further detailed in , both the immersion-preferred and reward-preferred segments had significantly lower scores on the philanthropist dimension compared to the goal-preferred group, while on the socializer dimension, only the immersion-preferred segment scored significantly lower (P=.005). Similarly, on the free spirit and achiever dimensions, both the immersion-preferred and reward-preferred segments had significantly lower scores than the goal-preferred group.
Table 8. Multiple comparisons of the Hexad user type scores across the 3 preference-based user segments (where significant differences were detected; n=378).