Privacy Workshop 2014

Workshop on End-User Privacy: Data Collection, Use, and Disclosure

Friday, November 14th, 2014
Ernst-Reuter-Platz 7, 10587 Berlin, Germany
Auditorium 3, TEL-20, TU-Hochhaus
Workshop start: 9.30
Workshop end: 17.00

This full-day workshop addresses privacy issues that arise from modern technology. Through the use of mobile devices and search engines, a vast amount of information is readily available to various parties. Is the user protected, and who is responsible for end-users’ privacy? How much of the responsibility should users take themselves? We aim to address the technical, psychological, sociological, ethical, and legislative aspects of ensuring end-user privacy in the Internet and mobile age.
The workshop is the second of its kind, organized by the Usable Security and Privacy Group of the Quality and Usability Lab, TU Berlin.

To register your interest, please send an email to privacy.workshop(at)qu.tu-berlin.de no later than October 10, 2014. Attendance at the workshop is free of charge. Unfortunately, we cannot cover your travel expenses. For any further questions, please contact us by email at privacy.workshop(at)qu.tu-berlin.de.

Organizers:
Maija Poikela
Alexander Bajic
Tobias Hirsch
Lydia Kraus
Ina Wechsung
Sebastian Möller
The Usable Security and Privacy Group of Quality and Usability Lab

Schedule

 

Time Event
 9:30 Sebastian Möller
Welcoming words and introduction
 9:45 Session: Securing Privacy
Session chair: Alexander Bajic
Markus Schmall and Jan Lichtenberg:
7 days IP data retention :: an ISP point of view
Matthew Smith:
Using Personal Examples to Improve Risk Communication
for Security and Privacy Decisions
Johanna Nordheim and Paul Gellert:
Data Protection in Clinical Research
 10:45 Coffee Break
 11:15 Session: Mobile Privacy
Session chair: Lydia Kraus
Delphine Christin: 
Privacy Awareness and Control in Mobile Participatory Sensing
Lena Reinfelder and Zinaida Benenson:
Differences in Attitudes to Security and Privacy between Android and iOS Users
Presenting USP – the Hosts
 12:15 Lunch
 13:15 Session: User-Centric Privacy
Session chair: Maija Poikela
Eran Toch:
Privacy-by-design: What holds it back and how can it move forward?
Marta Piekarska:
User-centric development of privacy features for Firefox OS
Florian Schaub:
Towards Usable Privacy Policies: Semi-automated Extraction of Data Practices from Privacy Policies
 14:15 Coffee Break
 14:45 Session: The Web, Big Data, and Privacy
Session chair: Tobias Hirsch
Sören Preibusch: 
The value of privacy in Web search
Nils Backhaus: 
End Users’ Perceived Privacy and Trust in Cloud Storage Services
Stefan Brandenburg: 
User data – to whom do they belong?
 16:00 Panel Discussion:
Who is responsible for end-user privacy?
Moderators: Sebastian Möller and Lydia Kraus
Panelists:
Alexander Dix, Berlin Commissioner for Data Protection and Freedom of Information
Delphine Christin, Professor for Privacy and Security in Ubiquitous Computing
Jan Lichtenberg, Vice President Privacy Audits & Standards, Deutsche Telekom AG
Michael Wagner, Professor for Biometrics and Computer Security, University of Canberra
17:00 Workshop ends

Abstracts

Markus Schmall and Jan Lichtenberg:  7 days IP data retention :: an ISP point of view

This talk will cover the security background of the 7-day IP data retention, infected customer systems, and the potential risks they pose for ISPs. A compact overview of the abuse situation at Deutsche Telekom AG will be provided, as well as applicable countermeasures and their limitations. Finally, legal aspects will be explained and discussed, focusing on why IP data retention is allowed in a controlled and limited way.

Matthew Smith:  Using Personal Examples to Improve Risk Communication for Security and Privacy Decisions

IT security systems often attempt to support users in making a decision by communicating the associated risks. However, a lack of efficacy as well as problems with habituation in such systems are well-known issues. In this talk, we propose to leverage the rich set of personal data available on smartphones to communicate risks using personalized examples. Examples of private information that may be at risk can draw users’ attention to information relevant for a decision and also improve their response. We present two experiments that validate this approach in the context of Android app permissions. Private information that becomes accessible given certain permissions is displayed when a user wants to install an app, demonstrating the consequences the installation might have. We find that participants made more privacy-conscious choices when deciding which apps to install. Additionally, our results show that our approach causes a negative affect in participants, which makes them pay more attention.
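As a rough, purely illustrative sketch of the idea behind such personalized warnings (not the implementation evaluated in these experiments), a dialog could pair each requested Android permission with an example drawn from the user’s own data; the mapping and wording below are made up:

# Illustrative sketch (Python): show a concrete piece of the user's own data
# for each permission an app requests. The example data are placeholders.
user_data_examples = {
    "READ_CONTACTS": "e.g. your contact 'Anna Schmidt, +49 30 1234567'",
    "ACCESS_FINE_LOCATION": "e.g. your current location near Ernst-Reuter-Platz",
    "READ_SMS": "e.g. your last text message",
}

def personalized_warning(requested_permissions: list[str]) -> str:
    """Build a warning that names each permission with a personal example."""
    lines = ["Installing this app gives it access to:"]
    for perm in requested_permissions:
        example = user_data_examples.get(perm, "data of this kind on your phone")
        lines.append(f"  - {perm}: {example}")
    return "\n".join(lines)

print(personalized_warning(["READ_CONTACTS", "ACCESS_FINE_LOCATION"]))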

Johanna Nordheim and Paul Gellert: Data Protection in Clinical Research

A key issue in health research is respect for patients’ reasonable expectation that their health information will be kept confidential and not used or disclosed without their consent. Two fields of data handling and data protection in health research will be elaborated in more detail. The first is research in nursing homes involving patients with cognitive impairments (dementia) and a limited ability to give informed consent: the sensitive health information of these vulnerable persons and the protection of personal data in clinical research, particularly in intervention studies, require ethics committee review.
Furthermore, we will shed light on research with secondary data and information routinely collected by health insurers.

Delphine Christin:  Privacy Awareness and Control in Mobile Participatory Sensing

Mobile phones are increasingly leveraged as sensor platforms to collect information about users and their environment. The collected sensor readings can, however, reveal personal and sensitive information about the users and hence put their privacy at stake. One alternative for protecting their privacy is to apply filters that eliminate privacy-sensitive elements of the sensor readings before they are transmitted to the application server; by configuring these filters, users can control the resulting degree of privacy protection. In this context, we have proposed six different graphical privacy interfaces that allow users to fine-tune the granularity at which sensor readings are shared. Based on the results of their evaluation, we chose the interface preferred by the participants of our study and introduced picture-based warnings that reflect the users’ current privacy settings. These warnings aim at increasing users’ awareness of potential privacy risks. Depending on their privacy conception and the proposed warnings, users can then adapt their settings or leave them unchanged. Again, we evaluated the warnings by means of a user study. In this talk, I will present the different solutions we designed and comment on the results of both user studies. I will further highlight different challenges in this field, before giving an overview of additional related work we have conducted in this area.
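As a rough, purely illustrative sketch of the kind of granularity filtering described above (not the filters or interfaces from the talk), a location reading could be coarsened to a user-chosen level before it leaves the phone; the level names and rounding scheme below are assumptions:

# Illustrative sketch (Python): coarsen a GPS reading before transmission,
# according to the granularity the user selected in a privacy interface.
GRANULARITY_DECIMALS = {
    "exact": 5,      # roughly metre-level precision
    "street": 3,     # roughly 100 m
    "district": 1,   # roughly 10 km
}

def filter_location(lat: float, lon: float, level: str) -> tuple[float, float]:
    """Reduce the precision of a reading to the user's chosen granularity."""
    decimals = GRANULARITY_DECIMALS[level]
    return round(lat, decimals), round(lon, decimals)

# The application server only ever receives the coarsened value.
print(filter_location(52.51205, 13.32170, "district"))  # -> (52.5, 13.3)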

Lena Reinfelder and Zinaida Benenson:  Differences in Attitudes to Security and Privacy between Android and iOS Users

Using a smartphone today involves a considerably higher proportion of security- and privacy-related decisions than using a PC or a conventional cell phone. These decisions range from choosing a certain smartphone manufacturer to deciding whether a certain app should have access, for example, to the contact list or to the microphone.

The most popular smartphone ecosystems, Android and iOS, differ considerably from each other in their system architectures, app markets, and business models. The presentation of security and privacy features and decisions to the users also differs. For example, apps have to be approved by Apple before they can be put into the App Store (which is the only place from which iOS apps can be downloaded), whereas Google Play checks apps only after they have been added to the store, and Android users can also download their apps from other stores. Furthermore, whereas Android users are presented with a static list of permissions during the app installation process and cannot disable individual app permissions, iOS users receive runtime notifications about needed permissions and can allow or deny them.

This raises the research question of whether the differences between Android and iOS play a role in the security and privacy attitudes and behavior of their users. For example, do iOS users feel more secure than Android users due to Apple’s much-advertised app vetting process? If yes, does this make more security- and privacy-aware people buy iPhones? Are Android users more cautious than iOS users when they choose their apps? Or do they, on the contrary, feel less restricted than iPhone users when using their smartphones, due to the possibility of using arbitrary app markets? Which other differences and relationships exist?

In this talk, the results of an exploratory study on the impact of the differences between the Android and iOS ecosystems on users’ security and privacy attitudes and behavior will be presented. We conducted semi-structured interviews with 10 Android and 8 iOS users with different demographic characteristics. The interviews were analyzed by means of qualitative content analysis and provide interesting insights into the interplay between the characteristics of the Android and iOS ecosystems and the security and privacy perceptions of their respective users. Our results contribute to the understanding of how future smartphone ecosystems and other ubiquitous computing systems should be designed with respect to security and privacy, and what should be avoided.

Eran Toch:  Privacy-by-design: What holds it back and how can it move forward?

In this talk I will present two studies that empirically investigate the relationship between engineering and privacy. In the first study, we explore developers’ perceptions, attitudes, and decision-making processes with regard to privacy. I will discuss our findings in light of developing privacy-by-design technologies and evaluating privacy-by-design methodologies. In the second study, we look at methods for the automatic regulation and evaluation of privacy management interfaces. Specifically, we suggest and evaluate an algorithmic assessment method that analyzes a service’s default privacy options by statistically comparing them to the actual preferences of a sample of existing users. We evaluate the method with a user study based on analyzing the long-term privacy preferences of 266 Facebook users. The results exemplify how the indicators can be used to design representative privacy management interfaces and to guide policy makers in evaluating existing interfaces.

My collaborators in the first study are Tomer Hasson, Irit Hadar, Oshrat Ayalon, and Michael D. Birnhack. In the second study, I collaborated with Rony Hirshpung and Hadas Shwartzh-Chassidim.
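As a rough, purely illustrative sketch of a default-versus-preference comparison in the spirit of the assessment method described above (not the method from the study itself), one could compute, for each privacy setting, the share of sampled users whose stated preference differs from the service’s default; the setting names and sample data below are invented:

# Illustrative sketch (Python): compare a service's default privacy options
# with the preferences of a small sample of users.
from collections import Counter

defaults = {"posts_visible_to": "everyone", "profile_searchable": "on"}

sampled_user_prefs = [
    {"posts_visible_to": "friends", "profile_searchable": "on"},
    {"posts_visible_to": "friends", "profile_searchable": "off"},
    {"posts_visible_to": "everyone", "profile_searchable": "on"},
]

def default_mismatch_rate(setting: str) -> float:
    """Fraction of sampled users whose preference differs from the default."""
    choices = Counter(user[setting] for user in sampled_user_prefs)
    mismatches = sum(n for value, n in choices.items() if value != defaults[setting])
    return mismatches / len(sampled_user_prefs)

for setting in defaults:
    print(setting, round(default_mismatch_rate(setting), 2))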

Marta Piekarska: User-centric development of privacy features for Firefox OS

Many solutions developed by security and privacy experts aim at being 100% bulletproof. However, as Kevin Mitnick said many years ago, “I’ve been hacking people, not the passwords”. We – the designers, system architects, and developers – might have our little jokes about how users write their passwords on post-its, but the fact is that it is our responsibility to create systems that are not only secure, but also usable. It is not true that people do not care about their privacy. They do. But if they have to choose between privacy and functionality, they will decide on the latter.

For the last year we have done quite a bit of research on this topic, and we now want to share our experience with a broader audience so that we can all start developing user-centric privacy. We will present a tool that we have developed to enhance the user experience of privacy on Firefox OS, and discuss how to educate users on good privacy practices.

Florian Schaub: Towards Usable Privacy Policies: Semi-automated Extraction of Data Practices from Privacy Policies

Privacy policies of websites and mobile applications are intended to describe a system’s data practices. However, these policies are often complex and difficult to understand, and most users do not read them. Approaches to providing privacy policies in machine-readable formats lack industry support. In the Usable Privacy Policy project, we combine natural language processing and crowdsourcing to semi-automatically extract data practices from privacy policies. Based on the extracted information, we aim to provide concise and usable privacy notices to users, as well as to support the analysis of stated data practices.
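As a purely illustrative sketch of the general idea (not the project’s actual natural-language-processing pipeline), a naive keyword pass could flag candidate data-practice sentences for crowdworkers to confirm or correct; the categories and keywords below are assumptions:

# Illustrative sketch (Python): flag sentences that may describe a data practice.
import re

CATEGORY_KEYWORDS = {
    "collection": ["collect", "gather", "obtain"],
    "sharing": ["share", "disclose", "third party", "third parties"],
    "retention": ["retain", "store", "keep"],
}

def flag_candidate_practices(policy_text: str) -> list[tuple[str, str]]:
    """Return (category, sentence) pairs for sentences matching category keywords."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    hits = []
    for sentence in sentences:
        lowered = sentence.lower()
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                hits.append((category, sentence.strip()))
    return hits

sample = "We collect your email address. We may share data with third parties."
print(flag_candidate_practices(sample))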

Sören Preibusch: The value of privacy in Web search

Search is the most ubiquitous of all Web activities. It permeates our lives at home, at work, and on the move. Web users’ lives and desires can be read from the search queries they issue: the AOL case demonstrated how users can be re-identified from scrubbed search logs, and search queries have also been used to infer users’ personality and demographics. The resulting privacy challenges are amplified as modern search engines are integrated with other highly personal online services.
I report on the first large-scale experiment into the value of privacy in Web search. The results provide meaningful evidence for researchers, businesses, regulators, and policy-makers on two timely questions: (1) How much do consumers value privacy features in a search engine? (2) How much are privacy-enhancing features appreciated compared to convenience features and search result quality?

Nils Backhaus:  End Users’ Perceived Privacy and Trust in Cloud Storage Services

The cloud is both a buzzword in IT and a technological transition in the way we store and process data. Cloud computing means that end users have to entrust computing resources to systems that are managed by external parties on remote servers. This transition influences several IT processes and the users involved in those processes. On the one hand, computing becomes more flexible and cost-effective thanks to ubiquitous network access, on-demand self-service models, and elastic resources. On the other hand, new challenges regarding privacy, security, and trust arise, since users are no longer involved in the collection, processing, storage, and disclosure of their data (Cavoukian, 2008). All of these uncertainties lead to a potential risk, especially when sensitive data is stored or processed in a cloud solution. Many of these problems are closely related to security issues, since harms to privacy are often the result of security gaps. Both security and privacy can be directly linked to trust in cloud services. Finally, a lack of trust is an obstacle to cloud adoption for the user.

A lot of research has been conducted in the field of online privacy and trust, including e-commerce, e-government, e-health, and trust in online information. A short review is given in order to transfer the results from the general online environment to the specific environment of cloud computing. In two questionnaire studies (N1 = 135, N2 = 505), the influence of trust, security, and privacy aspects was examined for a cloud storage service. In a structural equation model (Partial Least Squares), the different factors are modeled and, based on predictions, linked to the usage behavior of end users. Different cloud services (private and public cloud) were compared using a General Linear Model. The results of both studies show that trust as well as perceived privacy and security seem to be crucial factors for the use of cloud storage services.

Stefan Brandenburg: User Data – to Whom do They Belong?

User data are everywhere. Users leave many traces on the Internet: they actively register for services, enter private information in social networks, agree to participate in surveys, and so on. In addition, their behaviour is tracked by search engines, apps, all kinds of online and offline programs, and various company services. But who is eligible to gather all this data? Should the monitored person be informed about, or even involved in, the monitoring process? Where is the border between ethical and unethical collection and usage of user data? The present talk addresses some of these questions. It discusses the ethical basics of user data collection and usage, illustrated by real-life, everyday examples. Based on these ethical basics, it tries to narrow down the distinction between ethical and unethical data collection and usage. Implications for the process of engineering technical artefacts are discussed.