key: cord-0059508-5051bofi authors: Kitkowska, Agnieszka; Shulman, Yefim; Martucci, Leonardo A.; Wästlund, Erik title: Facilitating Privacy Attitudes and Behaviors with Affective Visual Design date: 2020-08-01 journal: ICT Systems Security and Privacy Protection DOI: 10.1007/978-3-030-58201-2_8 sha: d542f022cf838e3fba7a198274cc798e53ad02e6 doc_id: 59508 cord_uid: 5051bofi We all too often must consent to information collection at an early stage of digital interactions, during application sign-up. Paying low attention to privacy policies, we are rarely aware of processing practices. Drawing on multidisciplinary research, we postulate that privacy policies presenting information in a way that triggers affective responses, together with individual characteristics, may influence privacy attitudes. Through an online quasi-experiment (N = 88), we investigate how affect, illustration type, personality, and privacy concerns may influence end-users' willingness to disclose information and privacy awareness. Our results partially confirm these assumptions. We found that affect may have an impact on privacy awareness, and stable psychological factors may influence disclosures. We discuss the applicability of our findings in interface design and in future research. Privacy and security breaches are regularly reported in the media, but despite awareness of them, people may over-disclose their personal information during online interactions [1]. Legal protections have been established, such as the General Data Protection Regulation (GDPR), aiming to improve the current privacy landscape and enhance informed consent as the primary disclosure enabler [11]. Yet, not all online services provide appropriate privacy-protective solutions. The decisions shaping information disclosure usually begin during the sign-up process. At that point, the user must consent to the service provider's data handling practices.
However, at that stage, privacy management is not a primary task, and users may disregard privacy information presented in privacy notices. Current methods of policy display may promote such negligence with inadequate user interface (UI) design: non-transparent and challenging to comprehend [27]. For instance, it is common practice to collect consent with an affirmative action: ticking a checkbox to confirm "understanding" of a hyperlinked privacy policy text. In search of new ways to overcome issues around consent, and to understand how to make consent more meaningful, we focus on affective states induced with visual design. In this paper, we assess their influence on information disclosure and privacy awareness in the context of the online sign-up process. Moreover, we look into the relationship between individual characteristics, affect, and privacy concerns. Our research objective is to identify how to improve end-users' privacy awareness at an early stage of interaction, and to advance the existing body of knowledge about privacy attitudes and behaviors. Our findings show that affective framing and arousal alter privacy-related attitudes and behaviors. The results also indicate that people may feel hopeless during early-stage interactions, lacking control over their data. The GDPR aims to protect users' privacy and has a direct impact on online companies, providing them with a set of rules regarding data collection and processing practices [11]. Predominantly, online services must deliver understandable and transparent information about data provision practices to users. Further, the GDPR enforces precise requirements regarding informed consent. We follow the GDPR guidelines and test different designs of the privacy policy, providing users with more transparent information. We draw on the definitions of transparency and informed consent by designing a structured privacy policy that emphasizes information collection and processing practices.
Past research has shown that visual stimuli may influence attention and can improve learning [30]. Affective images may impact decisions through their effects on impression formation and decision-making [34]. Further, pictures may affect performance: when external representations are available, the effects of previously viewed visualizations decrease, because individuals are less dependent on existing mental images [30]. Visual design may also affect memory when it includes animations, anthropomorphic elements, or clear layouts, such as division into columns [36, 38]. In the context of privacy, anthropomorphic designs may increase personal information disclosure [3, 25]. Past work demonstrated that text alone communicated privacy information insufficiently, and that a different approach should be applied to enhance usability [2]. However, an alternative design cannot be over-symbolic or cluttered (e.g., with unnecessary icons). Moreover, comic strips were found to increase users' attention [37]. Comics convey a message in a way that relates to emotions, enabling a greater understanding of the outlined issues [26]. Past research revealed that end-user agreements divided into short sections elicited positive attitudes, increasing comprehension and time of exposure [39]. In our first research question, we aim to investigate privacy design issues: RQ1: Does the illustration type applied in privacy policies influence privacy awareness and willingness to disclose information? Many factors influence decisions. At times, choices might be rational, e.g., when based on a cost-benefit calculus carried out when people possess all the necessary information to compute the optimal outcome(s). Other times, people's decisions might be based on simple heuristics enabling effortless decisions, which can be made with limited information and within a short time [18, 35].
In this work, we focus on factors that may influence how people decide upon their privacy and whether their decisions are more or less informed. Particularly, we investigate privacy awareness (defined as participants' ability to recall information presented in the privacy policy) and willingness to disclose (defined as the extent to which users are willing to disclose their personal information to the well-being application service provider). Affect and Information Processing. One such simple heuristic is the affect heuristic, whereby people make judgments based on subjective evaluations, adding either positive or negative value to the decision outcome [12]. The affect-as-information hypothesis postulates that affective states have cognitive consequences mediated by the subjective experience of affect [6]. Thus, emotions arise, and these feelings have a significant impact on cognitive processing, providing conscious information from unconscious appraisals. Such feelings can guide immediate actions and may create experiences (e.g., liking or disliking), resulting in a higher or lower evaluation of an object. Similarly, the feelings-as-information theory proposes that positive affect indicates that a given situation is safe [31]. Positive affect may serve as an incentive to rely on internal thoughts, whereas negative affect should direct attention to new external information, as it signals that a situation may not be safe. In sum, affect relates to recall, thought generation, and the processing of new external information. Affective reactions may result from an external stimulus, such as the way information is presented, or from the semantic context in which the situation takes place. In the context of privacy, affect may shape risk perceptions [19]. It may have a lasting impact on privacy beliefs (e.g., in an e-commerce environment [22]).
Further, negative valence may increase privacy attitude and decrease sharing, while positive valence may increase sharing attitude and decrease privacy attitude [7]. To further examine affect in the context of privacy, we ask the following questions: RQ2: Do the different designs of privacy policy elicit affective responses? RQ3: Does affective framing applied in the design of privacy policies influence privacy awareness and willingness to disclose? Antecedents of Privacy-Related Decision-Making. This research builds on the APCO (Antecedents → Privacy Concerns → Outcomes) framework [8], because it was created from a thorough review of privacy studies across multiple disciplines. The APCO framework contains factors active during decision-making processes, enabling a deeper understanding of the human aspects of privacy attitudes and behaviors. Following this model, we investigate privacy awareness and willingness to disclose as outcomes, and personality characteristics and privacy concerns as the outcomes' antecedents. Personality traits can influence privacy concerns [24]. Agreeableness, conscientiousness, and imagination can contribute to the formulation of "Concerns for Privacy" [17]. Additionally, personality may influence information disclosure [14]. Past research shows the effects of privacy concerns on disclosure, moderated by psychological biases and mental shortcuts [21, 28]. Information disclosure may result from personal concerns, perceived threats, or simply from user experience [20]. Considering the APCO model and past research, we raise the following question: RQ4: Do individual characteristics and privacy concerns influence awareness and willingness to disclose information? To address our research questions, we developed an online quasi-experiment (random assignment was present, but the experiment lacked a control group) [32].
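To make the 2 × 2 between-subjects setup concrete, random assignment to the four policy-design conditions can be sketched as follows. This is a minimal illustration, not the study's actual materials: the condition labels paraphrase the four designs described in this paper, and the function name is our own.

```python
import random

# 2 x 2 design: affective framing (positive/negative) x illustration type
# (anthropomorphic/human), yielding four policy-design conditions.
CONDITIONS = [(framing, illustration)
              for framing in ("positive", "negative")
              for illustration in ("anthropomorphic", "human")]

def assign_condition(rng):
    """Randomly assign a participant to one of the four conditions.

    Random assignment is present, but there is no control group,
    which is what makes the design a quasi-experiment.
    """
    return rng.choice(CONDITIONS)
```

With a seeded `random.Random`, assignments are reproducible for auditing; every participant sees exactly one of the four designs.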
The dependent variables were willingness to disclose information, privacy awareness, and affective states (valence & arousal). The two independent variables were the design properties: affective framing (positive & negative) and illustration type (anthropomorphic & human). We controlled for the influence of privacy concerns and personality traits. The participants were recruited on Reddit (r/samplesize). The respondents had to be at least 18 years old and fluent in English. Participation in the study was voluntary; no financial compensation was offered. We collected 99 responses. After data screening, the sample size reduced to 88. Almost half of the respondents were female (N = 40, 45%), and the majority were between 18 and 34 years old (N = 37, 42%). Most of the participants had completed higher education (N = 50, 56.8%). Over half of the participants were from English-speaking countries (UK, USA, and Canada; N = 49, 55.6%). Before the study, the respondents had to acknowledge an informed consent form. The majority of questions were mandatory to answer, apart from the demographic questions. To reduce ordering effects, where possible, we randomized questionnaire items. The study consisted of five phases. Phase 1: Questionnaires. First, we measured the Big Five personality traits with the instrument acquired from Donnellan et al. [9]. This method is a concise instrument validated in past research. The scale contains 20 items (four per trait: extraversion, agreeableness, conscientiousness, neuroticism, and imagination). Next, the participants were presented with questions measuring their affective states through the "Affective Self Report" (ASR) acquired from Jenkins et al. [15]. The scale consists of ten items measuring the two-dimensional structure of affect: valence & arousal. Phase 2: Vignette and Interactive Task.
We asked the participants to imagine that they were signing up for a well-being application designed to help improve their physical and mental well-being. The participants were advised that the app offers social functionality, e.g., sharing and connecting with other users. They were instructed that over the next few pages, they would be exposed to a fictional sign-up form asking for personal information, such as an email address and password, but that none of this data would be collected. Next, the participants were prompted to acknowledge the privacy policy. There were four different policy designs (Fig. 1): (1) a positively framed text with anthropomorphic representation, (2) a positively framed text with human representation, (3) a negatively framed text with anthropomorphic representation, and (4) a negatively framed text with human representation. We shortened the policy and structured the text into thematically arranged paragraphs, each with a header describing a particular section, e.g., "How we use your data". The Gunning Fog readability score of the text was 12.7, meaning that it should be understandable to high-school seniors [40]. Each privacy policy provided a binary choice: to "Agree" or to "Disagree". We measured the willingness to disclose information with a Likert-type instrument based on the information disclosure scale proposed by Joinson et al. [16]. The participants were asked to think back about the sign-up process and state which pieces of information they would have disclosed. The scale consisted of 14 items of personal information, e.g., number of sexual partners. Responses were scored 1 ("I would disclose") or 0 ("I would not disclose"). We took the second measurement of affect with the same instrument as in Phase 1, presenting participants with their scores and asking whether they wished to adjust them. Phase 4: Questionnaires.
We measured privacy awareness with a quiz-like questionnaire, assessing how much of the privacy information the participants remembered. We used ten questions related to the text from the privacy policies, focusing on the text highlighted by the framing images. The participants had a binary choice to select either "True" or "False". To measure privacy concerns, we used an instrument acquired from Malhotra et al. [23], containing six items presenting privacy statements. Participants were asked to declare to what extent they agreed or disagreed with the statements (1 = strongly disagree to 7 = strongly agree). We asked the participants for basic demographic information: gender, age group, nationality, and level of education. Phase 5: Open-Ended Question. We asked the participants to explain why, during the interactive task, they "Agreed" or "Disagreed" with the policy. The participants were required to provide an answer. Ethical Review. The study received ethical approval from the Karlstad University Ethical Review Board. There were no harms or risks associated with the study. We ensured that the data collection and processing were compliant with the GDPR. Where possible, we applied data anonymization measures. Variables in the Model. The between-subjects variables were affective framing (AFRM: positive, negative) and illustration type (ILLT: anthropomorphic, human). The four dependent variables were: post-stimulus valence (VALP), post-stimulus arousal (AROP), willingness to disclose (WILD), and privacy awareness (PRAW). The covariates were: pre-stimulus arousal (PRAR) and valence (PREV); conscientiousness (CONS) and neuroticism (NEUR) (extraversion and agreeableness were removed, as they had no effect); and privacy concerns (PRIC). The assessment of latent variables collected through self-reported instruments requires checks of validity and reliability. We used qualitative methods to check the face validity.
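The Gunning Fog score reported earlier for the policy text (12.7) follows a standard formula: 0.4 × (words per sentence + 100 × fraction of complex words). A minimal sketch, assuming a crude vowel-group heuristic in place of true syllable counting, so scores will only approximate published values:

```python
import re

def gunning_fog(text):
    """Approximate Gunning Fog readability index.

    fog = 0.4 * (words/sentences + 100 * complex_words/words)
    A word is treated as 'complex' when it has >= 3 vowel groups,
    a rough stand-in for counting syllables.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words
                     if len(re.findall(r"[aeiouy]+", w.lower())) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```

A score near 12 corresponds to a US high-school senior reading level, consistent with the interpretation given above for the policy text.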
To assess reliability, we applied the Cronbach's α estimate, accepting scores higher than 0.7 [13]. Personality Traits. We ran a Principal Component Analysis (PCA). The Kaiser-Meyer-Olkin (KMO) measure was 0.70, and Bartlett's test for sphericity was significant, p < 0.001 [29]. Personality items did not load into five factors as expected [9], but into six, with imagination loading incorrectly. We removed this trait from further analysis. For each of the remaining constructs, we ran reliability tests, which all scored well, α > 0.7. Affect. We measured valence and arousal with a scale consisting of ten semantic differential items, five per dimension. For both the pre- and post-stimulus measures, we ran a PCA to check factorability. The KMO scores were satisfactory (pre: 0.84, post: 0.86), and Bartlett's test for sphericity was significant (p < 0.001). The items did not load properly; hence, we removed two of them: "Tired-Energetic" and "Indifferent-Curious". We used five items to compute valence and three to compute arousal. Willingness to Disclose. The recommended estimate of scale reliability for dichotomous data is KR-20 [5]. However, since Cronbach's α is a generalization of KR-20, we interpreted its scores. Cronbach's α was acceptable, 0.90 (M = 6.68, σ² = 18.05, SD = 4.25). Privacy Awareness. The privacy awareness scores were measured as dichotomous data (Correct = 1, Incorrect = 0). The privacy awareness scale assessed knowledge, not a latent construct; hence, we did not perform reliability checks. We used the average of the scores in further statistical analysis (M = 0.58, SD = 0.13). Privacy Concerns. The results of the PCA were satisfactory, but the Cronbach's α score for the six-item scale was below the commonly accepted threshold. We re-ran the analysis and used only four of the scale items (for which Cronbach's α was satisfactory) to compute the variable. We performed a multivariate analysis of covariance (MANCOVA).
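The Cronbach's α checks above (and KR-20, its special case for dichotomous items such as the disclosure scores) can be computed directly from an item-by-participant score matrix. A stdlib-only sketch; the function name and the population-variance convention are our choices, not the paper's:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item columns (one list of scores per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    With 0/1 items this reduces to KR-20. Uses population variances throughout.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-participant totals
    item_variance = sum(pvariance(column) for column in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))
```

Values above 0.7 would pass the threshold adopted in this paper.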
Before the test, we checked the assumptions (outliers with Mahalanobis distance; linearity; multicollinearity; univariate and multivariate normality; homogeneity and homoscedasticity). Next, we ran the final model and re-evaluated homogeneity with Box's test of equality of covariance matrices (p = 0.15) and Levene's tests of equality of variances (p > 0.05). The results of the MANCOVA, using Wilks' Lambda as the criterion, are presented in Table 1. We used individual ANCOVAs to examine the effects of the covariates. PRAR was a significant adjustor of AROP (η²p = 0.50), VALP (η²p = 0.05), and WILD (η²p = 0.05). PREV had a significant influence on VALP (η²p = 0.52). Some of these variables correlated significantly (Table 2). PRIC significantly influenced AROP (η²p = 0.08), VALP (η²p = 0.11), and WILD (η²p = 0.13), with some correlating significantly (Table 2). Finally, there were significant effects of NEUR on WILD, and of CONS on PRAW; however, the absence of significant correlations between these variables suggests that they might be weak influences on privacy decisions. Effects of Independent Variables. After adjusting for the covariates, AFRM had a significant effect on the combined dependent variables (η²p = 0.12), particularly on AROP (η²p = 0.06). There was a difference between the means of the two levels of AFRM on arousal (p < 0.05). The scores for post-stimulus arousal were higher among the participants assigned to the negative (M = 4.01, SD = 0.11) than to the positive (M = 3.67, SD = 0.10) stimulus. Although the effect of AFRM on VALP was not significant, valence scores were higher after exposure to the positive (M = 4.30, SD = 0.11) than to the negative (M = 4.00, SD = 0.12) stimulus. AFRM had a significant influence on PRAW (η²p = 0.07). The participants exposed to the negative stimulus scored higher (M = 0.62, SD = 0.01) than those exposed to the positive stimulus (M = 0.55, SD = 0.01).
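The η²p (partial eta squared) effect sizes reported above express the share of variance an effect explains relative to that effect plus error; a minimal sketch of the formula (the benchmark thresholds in the comment are Cohen's conventional ones, not values from this paper):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared: SS_effect / (SS_effect + SS_error).

    Conventional benchmarks (Cohen): ~0.01 small, ~0.06 medium, ~0.14 large,
    so, e.g., the reported 0.50 for PRAR on AROP would count as a large effect.
    """
    return ss_effect / (ss_effect + ss_error)
```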
There was a significant interaction effect between AFRM and ILLT on post-stimulus arousal (η²p = 0.60; Fig. 2). Mean arousal was higher for the negative anthropomorphic design (M = 4.16, SD = 0.16) than for the negative human design (M = 3.87, SD = 0.16). This effect was reversed for the positive designs: mean arousal for the positive anthropomorphic design was lower (M = 3.45, SD = 0.16) than for the positive human design (M = 3.89, SD = 0.15).

Table 3. The main reasons for selecting "Disagree" or "Agree" with the privacy policy [Reason (frequency of appearance)].

Disagree: Lack of control (13), Social media (7), Unacceptable (7), Trust (6), Necessity of collection (5)
Agree: Lack of choice (28), Want to use app (18), Habit (14), Trust (10), Don't care (8)

In the open-ended question, we asked participants why they "Agreed" (AGR) or "Disagreed" (DIS) with the privacy policy. Only 23 participants selected to disagree, and we did not find any significant associations between agreement with the policies and the policy design. All cases where participants stated that they had selected an option only to pursue the study were removed from the analysis, resulting in N = 77 answers (DIS N = 22, AGR N = 55). Two researchers read through the answers and, in a systematic manner, identified the justifications for the participants' choices, tagging them with a theme word. The tags were then discussed and combined (Table 3). Disagreeing. Sixteen reasons surfaced during the analysis of the answers from the respondents who disagreed. The most frequent reason related to the lack of control; sharing personal data with social media platforms was the second most frequent. For instance: "The lack of control over personal data shared with third parties as well as the catch-22 of only using the Social platforms/forums if data was shared".
Participants stated that the policy was unacceptable or that they did not trust the provider, e.g., because the data would be stored abroad: "The US govt could access this data, and it is not trustworthy". Some respondents stated that there was not enough information about why data was collected (necessity, fairness, personal information collection). A few answers were emotionally loaded, e.g., "The main reason is that the pictures spelled it out in clear form what would happen to my information. As a result it made me sceptical to share my information". Only a few responses hinted that the pictures helped or mentioned usability, e.g., "Also, the drawings definitely gave the impression the policy was unfair". Agreeing. Seventeen reasons were identified among those who agreed. The main one was the lack of choice: "If signing up for a service there is a little choice. It can't be changed. The only option is to not do so". Another reason was that the participants wanted to use the application: "if I wanted to use the website, I would have to consent to the privacy Policy. So I did not really considered disagreeing as an option". Many respondents admitted that agreeing is something they always do, a habit, e.g., "Honestly, it's automatic. I'm not sure I even saw a disagree button. It's like a next button". On the other hand, a few admitted that they did not care about privacy, e.g., "I did not provide much of personal information. My name and email are already accessible, why worry?", or stated that the policy was not worse than others. Some participants thought that the policy provided them with control, and that it was somewhat clear and trustworthy (e.g., "the transparency of the company made me believe they were slightly more trustworthy"). Design Implications. We studied the illustration type and affective framing, and their impact on privacy awareness and willingness to disclose (RQ1-3). Our results show that the policy display alters affective states.
Moreover, the illustration type and affective framing interact to influence arousal. Particularly, the negative anthropomorphic representation accompanying structured text increases arousal; yet, the same illustration type has a weaker influence on arousal when framed positively. We did not find a direct relationship between the anthropomorphic designs and information disclosure, as suggested by Monteleone et al. [25]. Although we did not find a significant effect of the illustration type on privacy awareness (RQ1), we identified that the framing of the designs has a significant impact on awareness (RQ3). Our findings are similar to the results of past research: the implementation of a cartoon-like design may increase attention [36, 37]. However, we determined that in the context of privacy, such an effect is possible only when brought about by affective framing. Perhaps emotions mediate the relationship between design and privacy awareness. Our qualitative results add to this premise, as some participants mentioned, in emotionally loaded statements, that they comprehended the privacy information because of the illustrations. These results indicate that comic designs can convey emotional meaning and improve understanding, as demonstrated by Noll Webb et al. [26]. Nevertheless, such an assumption requires confirmation in future studies. Considering our results about policy display, we infer that a structured text display combined with affective framing may improve the transparency and clarity of privacy information. Such findings may be implemented in the design of privacy policies to encourage more informed end-user decisions and support service providers' legal compliance. For instance, negatively framed visual cues might be displayed next to a particular section of the privacy notice.
The cues could emphasize specific data processing practices that may result in potential risks to privacy (e.g., overexposure of sensitive personal information, such as health-related data, to undesirable third parties). Users' Needs. Our qualitative analysis showed that the sign-up process requires improvements. Our participants expressed the need for more control and choice at an early stage of interaction. These findings align with past research, e.g., the lack of control being one of the main privacy concerns, and calls for "fine-grained" control mechanisms, as shown in Sheth et al. [33]. Our participants were dissatisfied with the current designs, exhibiting desperation and indicating usability issues. Together with the GDPR, this calls for granular and dynamic privacy policies. Policy designers should focus on identifying new ways in which privacy policies could provide end-users with opt-in/opt-out functionality at the early stage of interaction. For instance, one such solution could allow users to entirely disconnect a newly installed application from their social network tracking through a simple interaction method (e.g., enable/disable toggles). Yet, such functionality would have to be non-intrusive, as privacy management is not a primary task during the sign-up process. Additionally, past research shows that too much perceived control over disclosure might lead to increased risks to privacy [4]. Hence, the design of control mechanisms requires balanced solutions that provide controls in a simplified form, with only the necessary options, perhaps only for the riskiest data processing practices. We found arousal to be the most significant adjustor of the willingness to disclose, with lower arousal carrying the potential to reduce disclosure. According to our results, privacy concerns negatively correlated with willingness to disclose (RQ4), which may indicate that people with greater concerns manage their information more carefully.
On the other hand, similar to past research [10], we found no relationship between personality traits and privacy awareness or willingness to disclose. Our work contributes to the research field by examining factors drawn from the APCO framework. We demonstrated that concerns might impact willingness to disclose. These findings add to the existing knowledge on the relationship between concerns and disclosure, showing that in the context of early-stage interactions, these two constructs correlate, contrary to the widely discussed phenomenon of the privacy paradox. We interpret this relationship as a demand for personalized systems that recognize and estimate the level of individuals' privacy concerns. Such systems could trigger affect and thereby decrease information disclosure (e.g., presenting less concerned users with a negatively framed policy to bring their attention to privacy issues). However, such a personalized approach might be challenging in itself, as it requires the collection of personal information. Further, our results confirm the applicability of cognitive hypotheses, such as affect-as-information, to models of privacy interactions [6]. According to our findings, negative affect appears to direct participants' attention towards new information and, through the activation of cognitive feelings, possibly influences information recall. This finding could be used not only by researchers interested in studying privacy-related decision-making, but also by designers. Perhaps privacy policies could elicit negative emotions through visual and interaction design, and thereby shift people's attention towards the information included in the policy. Limitations and Future Work. The primary constraint of this study is the measurement of affect with self-reported measures. The validity of the research would increase if it were run in a lab, enabling additional measurements, e.g., eye-tracking or electroencephalography.
Such information could improve the measurement's accuracy. As future work, we consider replicating the research in a lab environment. We ran an exploratory study, and the sampling method might have introduced bias, as we gathered data only from participants interested in the research, reducing the potential generalization of the results. Yet, participants gathered through a paid-for platform might be less engaged in the study and provide answers solely to receive financial compensation. Future work should expand beyond the scenario of a well-being application. This could help to identify contextual dependencies of the role of visual design and affective states in early-stage interactions. Frequently, privacy policies leave us in a blind spot, unaware of what we agreed to. In this work, we have examined the role of visual displays in the acquisition of privacy information. We have investigated how the activation of affective states influences privacy awareness and willingness to disclose. To identify possible improvements in visual representations of privacy policies, we have examined why people decide to agree or disagree with the policies. Our results show that affective framing and arousal carry the potential to alter privacy-related attitudes and behaviors. Further, our qualitative findings show that people feel hopeless during early-stage interactions, having neither control nor choice over their data. The results can be used to design granular and dynamic consents that enable better management of personal information. Such solutions could enhance individuals' privacy, as well as help companies comply with new regulations, such as the GDPR. The knowledge gained in this study can be applied as a backbone for future research on predictive modelling, as well as to build personalized privacy solutions.

References

1. Privacy and human behavior in the age of information
2. Towards usable privacy policy display and management
3. Do you trust my avatar? Effects of photo-realistic seller avatars and reputation scores on trust in online transactions
4. Misplaced confidences: privacy and the control paradox
5. Reliability and Validity Assessment
6. Affect as information
7. Why privacy is all but forgotten: an empirical study of privacy & sharing attitude
8. Informing privacy research through information systems, psychology, and behavioral economics: thinking outside the "APCO" box
9. The mini-IPIP scales: tiny-yet-effective measures of the big five factors of personality
10. Predicting privacy and security attitudes
11. European Commission: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27
12. The affect heuristic in judgments of risks and benefits
13. Calculating, interpreting, and reporting Cronbach's alpha reliability coefficient for Likert-type scales
14. Facebook self-disclosure: examining the role of traits, social cohesion, and motives
15. Comparing thermographic, EEG, and subjective measures of affective experience during simulated product interactions
16. Measuring self-disclosure online: blurring and non-response to sensitive items in web-based surveys
17. Personality traits and privacy perceptions: an empirical study in the context of location-based services
18. A perspective on judgment and choice
19. Blissfully ignorant: the effects of general privacy concerns, general institutional trust, and affect in the privacy calculus
20. Making decisions about privacy: information disclosure in context-aware recommender systems
21. "It won't happen to me!": self-disclosure in online social networks
22. The role of affect and cognition on online consumers' decision to disclose personal information to unfamiliar online vendors
23. Internet users' information privacy concerns (IUIPC): the construct, the scale, and a causal model
24. Cultural and generational influences on privacy concerns: a qualitative study in seven European countries
25. Nudges to privacy behaviour: exploring an alternative approach to privacy notices
26. Wham! Pow! Comics as user assistance
27. The biggest lie on the Internet: ignoring the privacy policies and terms of service policies of social networking services
28. The privacy economics of voluntary over-disclosure in web forms
29. Exploratory factor analysis: implications for theory, research, and practice
30. External and internal representations in the acquisition and use of knowledge: visualization effects on mental model construction
31. Feelings-as-information theory
32. Experimental and quasi-experimental designs for generalized causal inference
33. Us and them: a study of privacy requirements across North America, Asia, and Europe
34. The affect heuristic
35. Individual differences in reasoning: implications for the rationality debate?
36. Getting the message across: visual attention, aesthetic design and what users remember
37. Increasing user attention with a comic-based policy
38. Getting users' attention in web apps in likable, minimally annoying ways
39. Make it simple, or force users to read? Paraphrased design improves comprehension of end user license agreements
40. Readability of texts: state of the art

Acknowledgement. This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 675730.