Privacy by Design and Identity Management Infrastructure for Interactive Assistance Systems
Interactive assistance systems are finding their way into daily life. Smart homes increase our comfort by controlling indoor climate, lighting, or entertainment electronics according to our preferences. Future workplaces will interactively guide workers through complex manufacturing processes and assist surgeons during surgery. To be situation-aware and unobtrusive, such assistance systems must monitor users, objects, and environmental conditions using a multitude of sensors. Personal data is collected and processed on a large scale, which raises privacy concerns and risks of abuse. This applies not only to workplaces but also to smart home solutions, which often rely on cloud services for data analysis. Thus, interactive assistance systems have to be designed with caution, keeping in mind the paradigm of Privacy by Design (PbD).
Like interactive assistance systems, data protection is user-centric. It demands that users maintain sovereignty over their personal data, i.e. users must be able to determine which data is collected and used for which purposes by a digital assistance system. Assistance systems must also be transparent about whether and for how long user data is stored, and about optional personal data, i.e. data that is not mandatory for the system to provide its service but may, for example, increase its awareness of a situation. Consider a system that guides a worker through a manufacturing process. The system has to capture the worker’s activities in order to provide appropriate instructions for the detected workflow phase. It may be able to offer even better support if it is allowed to track parameters indicating the worker’s stress level. However, the worker is free to reject this extended data processing option or to agree to it only for a short trial period. While raw sensor data typically does not have to be stored (live data), derived information must, to some extent, persist in order to configure the system for a given user (profile data). Profile data may include information such as “user X is experienced with manufacturing steps a, b, and e”, “user X is left-handed”, or “user X is stressed once his heart rate exceeds 110 bpm”. Profile data may also be provided directly by the user (e.g. “left-handed”). The associated risk of abuse ranges from negligible to extremely high, for instance if profile data could be used for illegitimate performance monitoring or for drawing conclusions about the user’s state of health.
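To make the distinction between self-stated and derived profile attributes concrete, the following Python sketch models a profile for the worker example above. All names here (`ProfileAttribute`, `Sensitivity`, the attribute keys) are illustrative assumptions for this sketch, not part of any described implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    LOW = "low"    # e.g. handedness
    HIGH = "high"  # e.g. health-related thresholds

@dataclass
class ProfileAttribute:
    name: str
    value: object
    sensitivity: Sensitivity
    user_provided: bool  # stated directly by the user vs. derived by the system

# Hypothetical profile for "user X" from the manufacturing example
profile = [
    ProfileAttribute("handedness", "left", Sensitivity.LOW, True),
    ProfileAttribute("experienced_steps", ["a", "b", "e"], Sensitivity.LOW, False),
    ProfileAttribute("stress_hr_threshold_bpm", 110, Sensitivity.HIGH, False),
]

# Attributes whose abuse potential is high and thus need strict usage policies
high_risk = [a.name for a in profile if a.sensitivity is Sensitivity.HIGH]
```

Distinguishing sensitivity and provenance per attribute is what later allows usage policies to be attached at the granularity the text calls for.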
The design of a privacy-aware assistance system must therefore provide an interface for making personal data collection and usage transparent and for defining usage arrangements according to users’ preferences. In other words, this interface needs to be able to communicate with an infrastructure for maintaining and storing users’ profile data, including usage policies for the personal attributes contained in a profile. The Competence Center for Applied Security Technology (KASTEL) outlines an identity management and data protection enforcement infrastructure, which is based on mechanisms such as User Managed Access (UMA), Distributed Usage Control (DUC), and Trusted Computing (TC). For user interaction with this infrastructure, we rely on a mobile device, e.g., the user’s smartphone.
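As an illustration of what such a transparency interface might communicate to the user before consent, the sketch below models hypothetical usage arrangements that distinguish mandatory from optional data and state a retention period per attribute. The record fields and names are our assumptions for this sketch, not part of the KASTEL infrastructure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UsageArrangement:
    attribute: str       # which personal attribute is collected
    purpose: str         # what it is used for
    mandatory: bool      # required for the core service?
    retention_days: int  # 0 = processed live only, never stored

# Hypothetical arrangements for the manufacturing-assistance example
arrangements = [
    UsageArrangement("worker_activity", "workflow phase detection", True, 0),
    UsageArrangement("heart_rate", "stress-aware assistance", False, 7),
]

def optional_attributes(arrs):
    """Attributes the user may decline without losing the core service."""
    return [a.attribute for a in arrs if not a.mandatory]
```

Presenting arrangements in this structured form is one way the user’s smartphone could render collection, purpose, and storage duration before the user agrees.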
Based on UMA, a user can manage the identity attributes and user profiles he wants to share with different assistance systems (“Alice is left-handed”). UMA-based authorization needs to be augmented with DUC in order to enforce user-specific data protection requirements within the assistance system, e.g., “Alice’s heart rate data must be deleted after seven days”. To establish confidence in the enforcement of such DUC policies, we need to ensure that the assistance system is in a trustworthy condition. We therefore have to demand that such systems are verified either by a certification process or by means of software verification methods. Given such a defined trustworthy state, we can employ remote attestation protocols based on unforgeable hardware trust anchors, such as Trusted Platform Modules (TPMs) or Intel SGX technology, to validate an assistance system’s integrity. Only after this step can we deploy DUC policies on an assistance system we want to use, rely on their enforcement, and thus rely on the system’s compliance with our data protection arrangements when we provide personal data or agree to the collection and processing of personal data by the system.
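The last two steps can be sketched as follows, under simplifying assumptions: the DUC retention rule (“Alice’s heart rate data must be deleted after seven days”) is enforced by purging expired records from an in-memory store, and policy deployment is gated on the outcome of remote attestation, reduced here to a boolean. Real DUC enforcement and TPM/SGX attestation protocols are far richer; every name below is hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical DUC retention policy per attribute
RETENTION = {"heart_rate": timedelta(days=7)}

def purge_expired(records, now):
    """Keep only (attr, value, collected_at) records still inside their
    usage-control retention window; attributes without a rule are kept."""
    kept = []
    for attr, value, collected_at in records:
        limit = RETENTION.get(attr)
        if limit is None or now - collected_at < limit:
            kept.append((attr, value, collected_at))
    return kept

def deploy_policy(system_attested: bool, policy):
    """Deploy a DUC policy only after remote attestation has succeeded."""
    if not system_attested:
        raise PermissionError("assistance system failed remote attestation")
    return {"deployed": policy}
```

The ordering matters: `deploy_policy` refusing to act on an unattested system mirrors the text’s requirement that policies are deployed, and personal data provided, only after integrity validation.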