Why should the current personal data protection and portability paradigm be changed?


The ideological and technological foundations of Web 2.0 have been exploited by social media and, more generally, by online services to build platforms that enable the creation and exchange of user-generated content (Kaplan and Haenlein, 2010). Primarily by leveraging personal mobile devices, online services provide users with the foundation for the information dissemination, content generation, and interactive communication of the modern era. Information and Communication Technologies (ICTs) and their ubiquitous computing are modifying our world by creating new realities and promoting an informational interpretation of our lives (Floridi, 2014). An example is Web 2.0, which, at the start of the new millennium, drove a process that broke down the boundary between Internet consumption and participation: the users of the Web produce the data that other users consume. This innovation has greatly reduced the friction involved in disseminating information online, with great benefits for the entire world population. However, this significant reduction of informational friction brings with it serious concerns about the privacy of the users who inhabit the online/offline (onlife) world (Floridi, 2014). Internet users act not just as content consumers but mainly as content creators. This implies that the content they share often consists of highly personal data belonging to them or to their family, friends, and colleagues. Location, interests, and general behavior are all data points derived from their textual data (comments, posts), actions (sharing, reactions, likes), social network topology (friendships, following systems), hyperlinks, or metadata.

It is a revolutionary fact that the "new" ICTs, such as Online Social Network (OSN) platforms, do not themselves produce most of the data they handle, as the "old" ICTs, such as broadcast-based traditional and industrial media, usually do. This kind of data concerns ICT users' static personal attributes and, due to the spread of mobile devices, also dynamic information extracted from their activities (Altshuler et al., 2012). Smartphones are the primary source of this information since, through their on-board computation, set of sensors, and Internet connectivity, they can measure several aspects of an individual's physical environment. Hence the birth of a digital world built upon people's actions, interests, and desires, given as input in the form of data to firms that operate ICTs on a large scale. There is a large number of vendors in the digital marketing industry whose sole purpose is to collect ICT users' data and transform it into actionable information, i.e., to create detailed profiles and user segments. Raw user-generated data is accessed and transformed by data aggregators and brokers, processed into more sophisticated forms, referenced by analytics vendors, and sold to third parties (e.g., retailers, market researchers, brands) for prediction, attribution, and insights (Acquisti et al., 2016; Banerjee, 2019).

In the following subsections, we will go into the details of the protection and portability of personal data and the privacy threats arising from the current uses of new ICTs.

How a piece of information can affect your privacy

First of all, let us clarify that throughout this article, privacy will refer mainly to the concept of informational privacy.

Informational privacy is an individual’s freedom from informational interference or intrusion achieved by a restriction on facts about him or her that are unknown or unknowable.

At the basis of this vision of Floridi (2014), we find the view of Westin (1967), according to which privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others. Generally speaking, the privacy threat associated with Web-2.0-based services is that, although many users have some information they keep private, they are not aware that a significant part of the information about them is generated from other information sources (Acquisti et al., 2016; Forbrukerrådet, 2020; Kamleitner and Mitchell, 2019). This makes each individual less free from informational interference and deprives him or her of the control needed to determine how information about them is (possibly) communicated to others.

The reason for this lies in the foundations of the current ICT structure. The exploitation, i.e., the economics, of personal data is aided by the increasingly pervasive nature of today's digital world. When fundamental aspects of one's life are recreated online, his or her "digital twin" can be depicted not only from his or her own information but also from that of others, thanks to social networks (Forbrukerrådet, 2020). Thus, it becomes easier to understand one's activity choices and lifestyle patterns (Hasan et al., 2016) and then to make intrusive recommendations using this information (Bothorel et al., 2018; Partridge and Price, 2009). In practice, informational interference techniques can render some data protection mechanisms almost ineffective. For instance, by adding "side information", even a small amount of background data, most anonymous or pseudo-anonymous datasets of users' interactions with online platforms can be de-anonymized (Ma et al., 2010). De Montjoye et al. (2013) show that just four approximate location data points are sufficient to identify an individual in 95% of cases. When personal data are enriched with Points of Interest, people's activities can be inferred (He et al., 2019). Home and work locations are usually the first (and the easiest) to be inferred (Pontes et al., 2012). Then, simply by knowing these two locations, it is possible to recognize one's activity patterns through his or her peers (Phithakkitnukoon et al., 2010) or his or her friends (Cho et al., 2011). When social media information comes along, it can only get better (or worse, depending on the point of view). Qian et al. (2016) use knowledge graphs to combine background knowledge and anonymous OSN data to identify individuals and discover their personal attributes. OSNs often track and collect the location of their users when providing their services, and this monitoring can continue even when users are not logged in or have never used those services. Sadilek et al. (2012) show how to infer social links, i.e., friendships in OSNs, from the patterns of link formation, the content of users' messages, and their location. Bonneau et al. (2009) demonstrate that eight public social links are enough to approximate a user's entire social graph. Jurgens (2013) shows how, starting from only a small number of known locations, it is possible to infer users' fine-grained locations even when they keep their location data private, as long as their friends do not. Indeed, it is not enough for an individual to fully protect his or her activity information if it is possible to obtain "co-location" information from his or her friends (Olteanu et al., 2014). Co-location information may consist of data, e.g., a friend's picture or message posted on an OSN (Ajao et al., 2015), and metadata, e.g., two users connecting to an OSN from the same IP address or spatiotemporal correlations in OSN streams (Yamaguchi et al., 2014).
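To make the de-anonymization point concrete, the toy sketch below illustrates the intuition behind uniqueness results such as De Montjoye et al. (2013): in a pseudonymous dataset of location traces, a couple of approximate spatio-temporal points known from side information are often enough to single out one trace. The dataset, cell identifiers, and function names are invented for illustration and are not taken from any of the cited works.

```python
# Toy sketch (not the method of De Montjoye et al., 2013): it only illustrates
# why a few approximate spatio-temporal points can single out one pseudonymous
# trace in a dataset. All data and names below are made up for illustration.

from typing import Dict, List, Set, Tuple

# A trace is a set of (hour-of-week, cell-tower-id) points, keyed by pseudonym.
Trace = Set[Tuple[int, str]]

def matching_pseudonyms(dataset: Dict[str, Trace],
                        side_information: Trace) -> List[str]:
    """Return the pseudonyms whose traces contain every known point."""
    return [pid for pid, trace in dataset.items()
            if side_information <= trace]

# Pseudonymous dataset: no names, yet the traces themselves act as fingerprints.
dataset = {
    "user_001": {(8, "cellA"), (13, "cellB"), (19, "cellC"), (22, "cellA")},
    "user_002": {(8, "cellA"), (13, "cellD"), (19, "cellC"), (23, "cellE")},
    "user_003": {(9, "cellF"), (13, "cellB"), (20, "cellC"), (22, "cellA")},
}

# An adversary who saw the target at cellA around 8 and at cellB around 13
# (e.g., from a geotagged post) already narrows the candidates down to one.
side_information = {(8, "cellA"), (13, "cellB")}
print(matching_pseudonyms(dataset, side_information))  # ['user_001']
```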

Sensitive personal data, such as location data combined with other data (Keßler and McKenzie, 2018), are substantially different from the rest of personal data. The ability to track individuals' locations and movements, and to combine these data with other metadata and background knowledge, allows first- and third-party companies to make inferences such as weekly church attendance (i.e., religious affiliation) or participation in climate strikes (i.e., political views). Data protection thus becomes crucial, as it concerns the vast majority of the population, who are often unaware of how the underlying ICTs work and of how the most sensitive information can be deduced simply from other information obtained about them.

The above inference techniques demonstrate that the "nothing to hide" approach to privacy, often raised by some people, is fundamentally flawed for many reasons: the main one is that everyone has information they want to keep private. However, many do not know that such information can be deduced from other data sources generated by someone else (Kamleitner and Mitchell, 2019). In his work on how ICTs affect our sense of self and our interaction with the world, Floridi (2014) defines us as informational organisms, mutually connected and embedded in an informational environment, i.e., the infosphere. An informational organism, i.e., an inforg, is a set of data points obtained by interacting with other organisms, whether natural agents, e.g., family, friends, strangers, or artificial agents, e.g., the very digital ICTs that gather these data points. In the infosphere, individuals are de-individualized, re-identified as the crossing points of many "kinds of", and then treated like a commodity, sold and bought in the advertising market (Zuboff, 2019). Acquiring personal information enables large firms and organizations operating in the digital world to provide personalized or more valuable services in digital and physical spaces. However, it can also have potentially harmful consequences for the privacy and autonomy of users and of society at large. Lack of privacy control, for instance, leads an individual to be thrown into a "filter bubble" that can affect his or her ability to choose how to live, simply because the companies that build this bubble choose which options he or she can be aware of (Pariser, 2011). On a social level, this scheme can lead to deeper polarization and manipulation of society (Cadwalladr and Graham-Harrison, 2018; Christl et al., 2017) and, in the case of location information, to "geoslavery" (Dobson and Fisher, 2003). After being categorized through personalities, predispositions, and secret desires, each consumer's digital twin is bought and sold on a vast market that operates largely outside his or her sphere of awareness, namely the digital marketing and adtech industry, all with the aim of persuading individuals to buy particular products or to act in a certain way (Forbrukerrådet, 2020).

The General Data Protection Regulation (GDPR) was enacted into law in 2016 (European Parliament, 2016) to protect the personal data of European Union (EU) citizens and to allow the free movement of such data within the EU. According to Article 4(1), personal data are "any information relating to an identified or identifiable natural person; (…) identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the (…) identity of that natural person". The GDPR builds upon, or better "runs in parallel to", the Privacy and Electronic Communications Directive (ePrivacy Directive) (European Parliament, 2002), which applies to data protection and privacy in electronic communications networks and services for EU citizens. The ePrivacy Directive includes language requiring providers to secure the data they carry by taking "appropriate technical and organizational measures to safeguard the security of its services". In general, it regulates how third parties collect consent to access information stored on individuals' devices and, after the 2009 amendment, it explicitly deals with web cookies, requiring the user's consent for their processing.

The GDPR, on the other hand, confers control on the data subject, i.e., any natural person identified or identifiable by the kind of data defined above, by imposing several accountability measures on the actor responsible for the data processing and by assigning a set of rights to subjects, since "natural persons should have control of their own personal data" (Recital 7). The data controller, i.e., the natural or legal person, public authority, agency, or other body which, alone or jointly with others, determines the purposes and means of processing personal data, plays a central role in the interactions between the various interested parties. Indeed, controllers are called into action by data subjects for the exercise of their rights, and they are held liable in the event of a violation of the rules by the data processors, i.e., the bodies that process personal data on behalf of the controller. Processors have their own obligations under the GDPR, although they ultimately report to the data controller. Within this framework, because of increased technological complexities and multiple data-exploiting business practices, it is becoming harder for ICT users to gain control over their data. Individual control, particularly concerning one's person, has been described as a reflection of fundamental values such as autonomy, privacy, and human dignity. In this regard, the GDPR first sets some legal obligations about data processing: (i) data must be processed lawfully, fairly, and transparently; (ii) data must be collected for specified, explicit, and legitimate purposes only (purpose limitation); (iii) data must be limited to what is required for the entity's defined purposes (data minimization); (iv) data must be accurate and up-to-date. The idea of control over personal data then comes to the fore in the provision of six legal bases for data processing (Article 6(1)), listed below and illustrated in the sketch that follows the list:

  1. Consent: the data subject has given consent to the processing of his or her data for one or more specific purposes.
  2. Performance of a contract: the processing is necessary to enter into or perform a contract with the data subject.
  3. Legal obligation: the processing is necessary to comply with a legal obligation, for instance under information security, employment, or consumer transaction law.
  4. Vital interests: the processing is necessary to protect someone's life.
  5. Public interest: the processing is necessary for a task carried out in the public interest or in the exercise of official authority vested in the controller.
  6. Legitimate interests: the processing of data subjects' data in ways they would reasonably expect and with a minimal impact on their privacy.
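
To make the structure of these bases more tangible, here is a minimal, purely illustrative Python sketch (not legal advice and not an implementation of the Regulation): it encodes the six bases as an enumeration and checks that a processing activity relies on at least one of them, with consent tied to the specific purpose for which it was given. All names and the check itself are invented.

```python
# Toy representation of the six legal bases of Article 6(1) GDPR.
# Names and the check below are illustrative, not an implementation of the law.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class LegalBasis(Enum):
    CONSENT = auto()
    CONTRACT = auto()
    LEGAL_OBLIGATION = auto()
    VITAL_INTERESTS = auto()
    PUBLIC_INTEREST = auto()
    LEGITIMATE_INTERESTS = auto()

@dataclass
class ProcessingActivity:
    purpose: str                              # purpose limitation: one explicit purpose
    basis: Optional[LegalBasis]
    consented_purpose: Optional[str] = None   # set only when the basis is CONSENT

def is_lawful(activity: ProcessingActivity) -> bool:
    """A processing activity needs at least one legal basis; consent must be
    tied to the specific purpose for which it was given."""
    if activity.basis is None:
        return False
    if activity.basis is LegalBasis.CONSENT:
        return activity.consented_purpose == activity.purpose
    return True

print(is_lawful(ProcessingActivity("ad personalization", LegalBasis.CONSENT,
                                   consented_purpose="ad personalization")))  # True
print(is_lawful(ProcessingActivity("ad personalization", LegalBasis.CONSENT,
                                   consented_purpose="order fulfilment")))    # False
```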

The GDPR's impact on businesses

The GDPR has had (and is still having) a worldwide impact in promoting a view that favors the interests of individuals over those of large companies and corporations (Li et al., 2019). For instance, it has been followed by other regulations around the world, such as the California Consumer Privacy Act (California State Legislature, 2020) in the USA. In economic terms, however, it can be argued that the GDPR affects the options available to firms for collecting the data they need for their operations and, consequently, their ability to achieve economies of scale in data analysis (Gal and Aviv, 2020). Ensuring the lawfulness of data processing, such as obtaining each data subject's explicit and informed consent for all the specific uses of the data pertaining to him or her, is costly, and large and diversified data controllers enjoy an advantage. Moreover, a data controller is liable to the data subject for ensuring that his or her data are used only in accordance with his or her rights. Thus, the costs imposed by this requirement may include ongoing monitoring, screening, and auditing of the processing performed by a data receiver. The declared intention of the GDPR is not to prevent the exploitation of personal data but to ensure that such exploitation is performed in agreement with the data subjects. However, this approach has a direct impact on business activities in terms of (Ziegler et al., 2019): (i) risk management, the need to better control the risks related to personal data protection and the exposure to GDPR-related sanctions and penalties; (ii) data subject rights ownership and control, the need to design and implement systems with the data subjects at the core of the model; (iii) purpose consistency, since when the controller wants to substantially extend the use of the collected data, it must collect complementary consent; (iv) data transfer to third parties, since firms must map, manage, monitor, and control the way they process and share data; (v) cross-border transfer, the requirement to control cross-border data transfers toward non-trusted countries.

Privacy, data protection, and user control

There is an essential distinction between privacy and data protection that can only be touched upon in this article but has been discussed extensively in other studies (Kokott and Sobotta, 2013; Westin, 1967; Zuboff, 2019). Assigning a value to one's informational privacy is different from protecting the actual personal information to which that value is assigned. Privacy controls are mainly in the hands of individuals, the users of the system. However, privacy also depends on the protection of personal data, which, on the contrary, is primarily the responsibility of the entity controlling the data, i.e., the entities operating in the ICT digital world. From the point of view of the user of an ICT system, i.e., the data subject, personal data can be distinguished into three types (Pangrazio and Selwyn, 2019):

  1. volunteered, data that users give to the ICT systems they use in exchange for an often "free" service and that may be unconsciously disclosed;
  2. observed, data that ICT systems extract from their users by monitoring them;
  3. inferred, data that ICT systems obtain by processing the previous two types, often beyond their users' knowledge.

These types of personal data move through three main links along the data value chain: collection, processing, and use of data-generated information and knowledge (Gal and Aviv, 2020). Collection is the extraction of the data and its "datafication", i.e., the recording, aggregation, and organization of information. Processing consists of optimizing, cleaning, parsing, or combining different datasets to organize the data for future extraction and to find correlations; it can transform raw data into information and create knowledge. Finally, data use means employing data-based information or knowledge for prediction and decision-making in the relevant markets.
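As a purely illustrative sketch (the record names and the inference rule below are invented, not drawn from the cited works), the following Python snippet ties the three data types to the three links of the value chain: collection "datafies" volunteered and observed events, processing derives inferred records from them, and use turns the inferred knowledge into targeting decisions.

```python
# Minimal sketch of the volunteered/observed/inferred typology and of the
# collection -> processing -> use chain. All values are made up for illustration.

from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    VOLUNTEERED = "volunteered"   # given by the user (e.g., a profile field)
    OBSERVED = "observed"         # extracted by monitoring (e.g., location pings)
    INFERRED = "inferred"         # produced by processing the other two

@dataclass
class DataPoint:
    subject: str
    kind: DataType
    value: str

def collect(raw_events: list[tuple[str, DataType, str]]) -> list[DataPoint]:
    """Collection: 'datafication' of raw events into organized records."""
    return [DataPoint(*event) for event in raw_events]

def process(points: list[DataPoint]) -> list[DataPoint]:
    """Processing: combine records and derive new, inferred ones."""
    inferred = []
    for p in points:
        if p.kind is DataType.OBSERVED and "church" in p.value:
            inferred.append(DataPoint(p.subject, DataType.INFERRED,
                                      "likely religious affiliation"))
    return points + inferred

def use(points: list[DataPoint]) -> list[str]:
    """Use: employ inferred knowledge for prediction and targeting decisions."""
    return [f"target {p.subject}: {p.value}"
            for p in points if p.kind is DataType.INFERRED]

events = [("alice", DataType.VOLUNTEERED, "age: 34"),
          ("alice", DataType.OBSERVED, "visits church every Sunday")]
print(use(process(collect(events))))  # ['target alice: likely religious affiliation']
```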

How does the GDPR enhance ICT users' control over personal data along this data value chain? According to the principle of accountability, the GDPR imposes on the data controller the obligation to adopt and account for protection measures. The controller needs to ensure that data are protected and that the level of privacy its users have set is implemented. Therefore, the question that brings up the central issue here is: even if the data controller adopts an "adequate" response in proportion to its assessment of the level of risk to the rights and freedoms of the data subject, is the latter able to determine the level of privacy concerning the personal data being protected by the former? Answering this question requires two layers of analysis: a surface layer and a deeper layer.

Privacy at the surface layer

The first layer comprises the interfaces with which users interact to assess privacy levels and to set the level at which they want their privacy to be. Interfaces here consist of the hardware and software tools that inform users and let them decide on actions with direct consequences for data protection and indirect consequences for data privacy; we refer specifically to smartphone apps, browsers, websites, and the like. In the context of the GDPR, consent is the legal basis usually leveraged in these cases to process the collected data. However, the general problem is that users often do not seem to think about the consequences of providing (or refusing) consent but, instead, consent whenever they are confronted with a request (Custers et al., 2013). Users generally encounter informed consent through privacy notices (e.g., cookie notices) and user control options at the operating system level. However, these are ineffective because they are presented in different and inconsistent ways across services and platforms; worse, most are not GDPR compliant (Mehrnezhad, 2020). Many current notice implementations offer no meaningful choice to users. For example, in the case of third-party cookies, a GDPR-compliant implementation of consent notices would effectively result in less than 0.1% of users consenting to their use (Utz et al., 2019). Cookies, in particular, can constitute personal data and matter in their own right because they have become the backbone of a vast market infrastructure based on their ability to transform information about users' online behavior into data assets (Mellet and Beauvisage, 2020).

In their work, Van Ooijen and Vrabec (2019) identify three stages in consent-based data processing: (i) the information receiving stage, (ii) the approval and primary use stage, and (iii) the secondary data use (reuse) stage. In the first stage, the threats to users' control stem from the fact that, even if data collectors provide individuals with information through a data use policy, individuals have difficulty cognitively processing such information. As a result of the rapid development of technology, such policies are becoming more time-consuming and more complex, putting increasing pressure on the cognitive functioning of individuals. Moreover, this approach fails to address the problem of information complexity, as it does not explain the real implications of automated decision-making for an individual: what the GDPR guarantees, with the right to explanation, is an ex-ante motivation that merely refers to the system's functionality. Icons may be more successful in mitigating informational complexity, but there is a risk that they may worsen the problem of bias in decision-making (Rossi and Palmirani, 2020). The threats to users' control at the second stage, i.e., the approval and primary use stage, are steered by subtle changes in the context in which consent is requested, such as system architectures based on default settings. These can unconsciously steer users' behavior, a phenomenon coined "the malleability of privacy preferences" (Acquisti et al., 2016): when presented with several options, consumers generally prefer and choose the one marked as the default. The GDPR addresses these threats by validating consent only on the presupposition that a data subject has fully understood the consequences of his or her approval; however, this must be implemented in a way that actually empowers individuals. Finally, in the third stage identified by Van Ooijen and Vrabec (2019), the threats to users' control stem from the limited scope of the rights to access and portability for individuals. The authors foresee the use of electronic data platforms where individuals can manage their own data.

Privacy at the deeper layer

The second layer of analysis, deeper than the previous one, concerns the relationship between a user and the information itself, in terms of information complexity and privacy perception. In perceiving privacy when they disclose personal information, users run into a privacy paradox that most of the time is not in their favor: while users' attitude is to profess their need for privacy, in their behavior most of them remain consumers of the very technologies that collect their personal data (Norberg et al., 2007). Two explanations can be given for this: first, attitudes (e.g., the attitude of practicing high privacy awareness) are usually expressed generically, while behaviors (e.g., the actual act of disclosing data) are more specific and contextual (Fishbein and Ajzen, 1977); second, users engage in a mental trade-off between privacy concerns and disclosure benefits, performing a "privacy calculus" (Laufer and Wolfe, 1977). When consumers are asked to provide personal information to companies, they disclose it based on a decision made after a risk-benefit analysis, i.e., the privacy calculus, analogous to estimating a perceived value. Xu et al. (2011) define this perceived value of information disclosure as the individual's overall assessment of the utility of disclosure based on the perceived privacy risks incurred and the benefits received. However, two main challenges hinder the correct estimation of this value. First, there is a problem of information overload: users would need to consider all the information made available in the collector's data use policies, together with the vast amount of information spread across different devices, media, and services. This richness of information threatens the ability and motivation of individuals to examine the critical details needed to make informed privacy decisions (Van Ooijen and Vrabec, 2019). Second, a problem of information complexity arises (Acquisti et al., 2016).

Most ICT users are unaware of how sophisticated the means of tracking them are, and of possible alternative solutions to their privacy concerns, e.g., the use of privacy-enhancing technologies (PETs).

Taking as an example a specific kind of sensitive personal data, i.e., location data, the perception of location privacy falls under the same assumptions as the privacy calculus. In particular, location privacy can be quantified numerically, as shown by Shokri et al. (2011), based on the idea that a user's privacy and the success of an "adversary" in location-inference attacks are two sides of the same coin. The authors quantify location privacy as the error of the adversary in estimating the user's actual location (given a reference attack model). In a more consumer-oriented definition, location privacy consists of the user's ability to regulate external audiences' access to information about his or her current or past locations (Banerjee, 2019). This view is in line with Westin's and the IAPP's definitions of information privacy, i.e., based on the assumption that "privacy is not the opposite of sharing – rather, it is control over sharing" (Acquisti et al., 2016).
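As a simplified illustration of this quantification (not the full framework of Shokri et al., 2011, which also models obfuscation mechanisms and attack strategies), the sketch below computes location privacy as the adversary's expected error: the sum, over candidate cells, of the probability the adversary assigns to each cell multiplied by its distance from the user's true cell. The grid cells and probabilities are invented.

```python
# Simplified sketch of the idea in Shokri et al. (2011): location privacy as the
# adversary's expected error when estimating the user's true location.
# Grid, posteriors, and distances are invented for illustration only.

import math

def expected_error(posterior: dict[tuple[int, int], float],
                   true_location: tuple[int, int]) -> float:
    """Privacy = sum over candidate cells of P(adversary picks cell) * distance
    from the user's actual cell. A higher value means more privacy."""
    return sum(prob * math.dist(cell, true_location)
               for cell, prob in posterior.items())

true_location = (2, 3)

# Adversary A is almost certain about the right cell: privacy is low.
confident_adversary = {(2, 3): 0.9, (2, 4): 0.1}
# Adversary B spreads its belief over distant cells: privacy is higher.
uncertain_adversary = {(0, 0): 0.4, (5, 5): 0.4, (2, 4): 0.2}

print(expected_error(confident_adversary, true_location))  # ~0.1
print(expected_error(uncertain_adversary, true_location))  # ~3.08
```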

A solution for the long (and perhaps also short) run

Distributed Ledger Technologies (DLTs) no longer need an introduction. Their ledger, distributed among a network of nodes, and their decentralized protocol eliminate the need for a trusted authority and replace it with a system of publicly verifiable evidence. This technology provides the means for disintermediation, as it increases confidence in the functioning of its particular system and indirectly reduces the need for trust in that system (De Filippi et al., 2020). "Web3", or Web 3.0, tries to exploit the advantages that decentralized systems might provide in order to build, on top of Web 2.0, a version of the Internet in which users are truly sovereign over their data and actions, e.g., by owning the unique piece of information, such as a private key, that can enact an operation. From the perspective of individuals, these technologies help move computing applications, data, and services towards the edge of the "Internet of Persons", i.e., closer to them, as personal devices make up the frontier of such a network of devices. For many scholars, DLTs, combined with decentralized identity mechanisms, could become the necessary building blocks for the decentralized Internet of the future, one that can benefit users' privacy (Kondova and Erbguth, 2020; Lopez and Farooq, 2020; Lopez et al., 2019).
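As a minimal illustration of what "owning the unique piece of information that can enact an operation" means in practice, the sketch below uses the pyca/cryptography library's Ed25519 API to sign and verify an operation: only the holder of the private key can authorize it, and anyone can verify it without a trusted intermediary. This is a generic digital-signature example, not the protocol of any particular DLT, and the operation string is invented.

```python
# Minimal sketch of user sovereignty via key ownership, using the pyca/cryptography
# library (pip install cryptography). Only the holder of the private key can
# authorize the operation; anyone can verify it with the public key.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The private key never leaves the user's device; the public key can be shared
# (e.g., embedded in a decentralized identifier).
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

operation = b"grant read access to my location history to service X"
signature = private_key.sign(operation)

# A verifier (a peer, a service, a smart contract) checks the signature without
# needing a trusted intermediary to vouch for the user.
try:
    public_key.verify(signature, operation)
    print("operation authorized by the key holder")
except InvalidSignature:
    print("signature invalid: operation rejected")
```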

More on this in my Ph.D. Thesis

References

Acquisti, A., Taylor, C., and Wagman, L. (2016). The economics of privacy. Journal of economic Literature, 54(2):442–92.

Ajao, O., Hong, J., and Liu, W. (2015). A survey of location inference techniques on twitter. Journal of Information Science, 41(6):855–864.

Altshuler, Y., Aharony, N., Fire, M., Elovici, Y., and Pentland, A. (2012). Incremental learning with accuracy prediction of social and individual properties from mobile phone data. In 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing, pages 969–974. IEEE.

Article 29 Working Party (2014). Opinion 06/2014 on the notion of legitimate interests of the data controller under article 7 of directive 95/46/ec.

Banerjee, S. (2019). Geosurveillance, location privacy, and personalization. Journal of Public Policy & Marketing, 38(4):484–499.

Bonneau, J., Anderson, J., Anderson, R., and Stajano, F. (2009). Eight friends are enough: social graph approximation via public listings. In Proceedings of the Second ACM EuroSys Workshop on Social Network Systems, pages 13–18.

Bothorel, C., Lathia, N., Picot-Clemente, R., and Noulas, A. (2018). Location recommendation with social media data. In Social Information Access, pages 624–653. Springer.

Cadwalladr, C. and Graham-Harrison, E. (2018). Revealed: 50 million facebook profiles harvested for cambridge analytica in major data breach. The guardian, 17:22.

California State Legislature (2020). California consumer privacy act.

Cho, E., Myers, S. A., and Leskovec, J. (2011). Friendship and mobility: user movement in location-based social networks. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1082–1090.

Christl, W., Kopp, K., and Riechert, P. U. (2017). How companies use personal data against people. Automated Disadvantage, Personalized Persuasion, and the Societal Ramifications of the Commercial Use of Personal Information. Wien: Cracked Labs.

Custers, B., van Der Hof, S., Schermer, B., Appleby-Arnold, S., and Brockdorff, N. (2013). Informed consent in social media use-the gap between user expectations and eu personal data protection law. SCRIPTed, 10:435.

De Filippi, P., Mannan, M., and Reijers, W. (2020). Blockchain as a confidence machine: The problem of trust & challenges of governance. Technology in Society, 62:101284.

De Montjoye, Y.-A., Hidalgo, C. A., Verleysen, M., and Blondel, V. D. (2013). Unique in the crowd: The privacy bounds of human mobility. Scientific reports, 3:1376.

Dobson, J. E. and Fisher, P. F. (2003). Geoslavery. IEEE Technology and Society Magazine, 22(1):47–52.

European Parliament (2002). Privacy and electronic communications directive 2002/58/ec.

European Parliament (2016). Regulation (eu) 2016/679.

Fishbein, M. and Ajzen, I. (1977). Belief, attitude, intention, and behavior: An introduction to theory and research. Philosophy and Rhetoric, 10(2).

Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. OUP Oxford.

Forbrukerrådet (2020). Out of control – how consumers are exploited by the online advertising industry.

Gal, M. S. and Aviv, O. (2020). The competitive effects of the gdpr. Journal of Competition Law & Economics, 16(3):349–391.

Hasan, S., Ukkusuri, S. V., and Zhan, X. (2016). Understanding social influence in activity location choice and lifestyle patterns using geolocation data from social media. Frontiers in ICT, 3:10.

He, R., Cao, J., Zhang, L., and Lee, D. (2019). Statistical enrichment models for activity inference from imprecise location data. In IEEE INFOCOM 2019-IEEE Conference on Computer Communications, pages 946–954. IEEE.

Jurgens, D. (2013). That’s what friends are for: Inferring location in online social media platforms based on social relationships. In Seventh International AAAI Conference on Weblogs and Social Media.

Kamleitner, B. and Mitchell, V. (2019). Your data is my data: A framework for addressing interdependent privacy infringements. Journal of Public Policy & Marketing, 38(4):433–450.

Kaplan, A. M. and Haenlein, M. (2010). Users of the world, unite! the challenges and opportunities of social media. Business horizons, 53(1):59–68.

Keßler, C. and McKenzie, G. (2018). A geoprivacy manifesto. Transactions in GIS, 22(1):3–19.

Kokott, J. and Sobotta, C. (2013). The distinction between privacy and data protection in the jurisprudence of the cjeu and the ecthr. International Data Privacy Law, 3(4):222–228.

Kondova, G. and Erbguth, J. (2020). Self-sovereign identity on public blockchains and the gdpr. In Proceedings of the 35th Annual ACM Symposium on Applied Computing, pages 342–345.

Laufer, R. S. and Wolfe, M. (1977). Privacy as a concept and a social issue: A multidimensional developmental theory. Journal of social Issues, 33(3):22–42.

Li, H., Yu, L., and He, W. (2019). The impact of gdpr on global technology development.

Lopez, D. and Farooq, B. (2020). A multi-layered blockchain framework for smart mobility data-markets. Transportation Research Part C: Emerging Technologies, 111:588–615.

Lopez, P. G., Montresor, A., and Datta, A. (2019). Please, do not decentralize the internet with (permissionless) blockchains! In 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), pages 1901–1911. IEEE.

Ma, C. Y., Yau, D. K., Yip, N. K., and Rao, N. S. (2010). Privacy vulnerability of published anonymous mobility traces. In Proceedings of the sixteenth annual international conference on Mobile computing and networking, pages 185–196.

Mehrnezhad, M. (2020). A cross-platform evaluation of privacy notices and tracking practices. In 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), pages 97–106. IEEE.

Mellet, K. and Beauvisage, T. (2020). Cookie monsters. anatomy of a digital market infrastructure. Consumption Markets & Culture, 23(2):110–129.

Norberg, P. A., Horne, D. R., and Horne, D. A. (2007). The privacy paradox: Personal information disclosure intentions versus behaviors. Journal of consumer affairs, 41(1):100–126.

Olteanu, A.-M., Huguenin, K., Shokri, R., and Hubaux, J.-P. (2014). Quantifying the effect of co-location information on location privacy. In International Symposium on Privacy Enhancing Technologies Symposium, pages 184–203. Springer.

Pangrazio, L. and Selwyn, N. (2019). ‘personal data literacies’: A critical literacies approach to enhancing understandings of personal digital data. New Media & Society, 21(2):419–437.

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK.

Partridge, K. and Price, B. (2009). Enhancing mobile recommender systems with activity inference. In International Conference on User Modeling, Adaptation, and Personalization, pages 307–318. Springer.

Phithakkitnukoon, S., Horanont, T., Di Lorenzo, G., Shibasaki, R., and Ratti, C. (2010). Activity-aware map: Identifying human daily activity pattern using mobile phone data. In International Workshop on Human Behavior Understanding, pages 14–25. Springer.

Pontes, T., Magno, G., Vasconcelos, M., Gupta, A., Almeida, J., Kumaraguru, P., and Almeida, V. (2012). Beware of what you share: Inferring home location in social networks. In 2012 IEEE 12th International Conference on Data Mining Workshops, pages 571–578. IEEE.

Qian, J., Li, X.-Y., Zhang, C., and Chen, L. (2016). De-anonymizing social networks and inferring private attributes using knowledge graphs. In IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on Computer Communications, pages 1–9. IEEE.

Rossi, A. and Palmirani, M. (2020). What’s in an icon? promises and pitfalls of data protection iconography. Data Protection and Privacy: Data Protection and Democracy, pages 59–92.

Sadilek, A., Kautz, H., and Bigham, J. P. (2012). Finding your friends and following them to where you are. In Proceedings of the fifth ACM international conference on Web search and data mining, pages 723–732.

Shokri, R., Theodorakopoulos, G., Le Boudec, J.-Y., and Hubaux, J.-P. (2011). Quantifying location privacy. In 2011 IEEE symposium on security and privacy, pages 247–262. IEEE.

Utz, C., Degeling, M., Fahl, S., Schaub, F., and Holz, T. (2019). (un) informed consent: Studying gdpr consent notices in the field. In Proceedings of the 2019 acm sigsac conference on computer and communications security, pages 973–990.

Van Ooijen, I. and Vrabec, H. U. (2019). Does the gdpr enhance consumers’ control over personal data? an analysis from a behavioural perspective. Journal of consumer policy, 42(1):91–107.

Westin, A. F. (1967). Privacy and freedom. Atheneum.

Xu, H., Luo, X. R., Carroll, J. M., and Rosson, M. B. (2011). The personalization privacy paradox: An exploratory study of decision making process for location-aware marketing. Decision support systems, 51(1):42–52.

Yamaguchi, Y., Amagasa, T., Kitagawa, H., and Ikawa, Y. (2014). Online user location inference exploiting spatiotemporal correlations in social streams. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 1139–1148.

Ziegler, S., Evequoz, E., and Huamani, A. M. P. (2019). The impact of the european general data protection regulation (gdpr) on future data business models: Toward a new paradigm and business opportunities. In Digital business models, pages 201–226. Springer.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile books.
