Building Systems That Deserve Consent

Consent is one of the six lawful bases for processing personal data under the GDPR (Article 6(1)(a) GDPR). To be valid, it must be freely given, specific, informed, and unambiguous, reflecting a clear expression of the data subject’s wishes (Article 4(11) GDPR). In theory, this provides a strong safeguard, ensuring that personal data is only processed when users have made a genuine choice.

In practice, however, the legal standard is frequently not met. The issue is not ambiguity in the definition, but the way digital systems are designed. By the time a consent mechanism is presented, decisions about data collection and processing have often already been made at the architectural level. The interface may present a choice, but the system has already determined the outcome.

If the architecture is not designed with genuine user choice in mind, consent becomes a legal patch for a structural problem. The question is not how to obtain better consent, but how to build systems that deserve it.

Elements of Valid Consent

The element “free” implies real choice and control for data subjects, which is absent where coercion, pressure, or consequences for refusal are involved. “Informed” consent requires controllers to provide clear and accessible information so that data subjects can make meaningful decisions. “Specific” consent must relate to one or more defined processing purposes, which guards against the gradual widening of purposes over time. Finally, consent must be “unambiguous,” requiring a statement from the data subject or a clear affirmative act (see the Article 29 Data Protection Working Party guidelines on consent, endorsed and updated by EDPB Guidelines 05/2020 on consent).

Taken together, this means consent must be granular and separable. A data subject must be able to agree to one processing purpose while refusing another, and the system must honor that distinction. Where a controller has conflated several processing purposes without seeking separate consent for each, consent cannot be considered valid. A related but distinct issue arises where consent is bundled with terms and conditions, or where the provision of a contract or service is tied to a request for consent to process personal data not necessary for its performance. In such cases, the consent is presumed not to be freely given (Article 7(4) GDPR, read together with Recital 43 GDPR).

Furthermore, consent must be recognizable as a genuine act of will. A request for consent must be presented in a manner clearly distinguishable from other matters (Article 7(2) GDPR). Pre-ticked boxes, silence, and inactivity do not constitute consent.

The structural relationship between the parties may itself undermine freedom of consent, leaving no room for a genuine choice. The GDPR explicitly recognizes a power imbalance between the controller and the data subject as a common source of such pressure (Recital 43 GDPR). Where such an imbalance exists, freely given consent will rarely, if ever, be achievable.

Importantly, consent is only meaningful if it can be withdrawn. The controller must be able to demonstrate that it is possible to refuse or withdraw consent without detriment (Recital 42 GDPR). For example, withdrawing consent must not lead to any costs or place the data subject at a clear disadvantage. Likewise, the right to withdraw (Article 7(3) GDPR) must be as easy to exercise as the right to give consent in the first place. Notably, withdrawal must have a real effect, not merely produce a record in the system with no practical consequences.

As a temporal requirement, consent must be given prior to the processing activity. Accordingly, if the purposes of data processing change after consent was obtained, or if an additional purpose is envisaged, controllers must obtain a new and specific consent. 

Consent Failures in Practice

Although the legal requirements set out above are clear, many systems were never designed to support a meaningful “no”. The result is that the validity of recorded consent is routinely undermined across many categories of digital products.

Cookie Banners and Web Tracking

Cookie consent banners are the most visible example of consent failure. Accepting all cookies typically requires one click. Refusing, by contrast, often means navigating multiple screens, unchecking pre-selected options, and working through a lengthy list of individual settings for each data category. This asymmetry is a design choice, making the path to refusal significantly more difficult than the path to acceptance. The voluntariness of any consent recorded in such cases is questionable.

The same dynamic appears in so-called “pay or okay” models, where users must either consent to data processing or pay for a version of the service that does not require it. When faced with such systems, more than 99.9% of users consent to tracking, even though only 3% to 10% of users actually want to be tracked for personalized advertising. The gap between recorded consent and genuine preference is striking and difficult to ignore.

Beyond the banner itself, third-party tracking tools are frequently initialized at page load, before any consent choice is made. By the time the user sees the banner, data has already been collected and transmitted. The architecture has already made the choice before the user is offered one. In other words, the consent mechanism exists, but the consent does not. Notably, some browsers and privacy-focused tools will block or clear this data automatically, but compliance cannot depend on the user’s choice of browser.
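
To make the failure mode concrete, the sketch below contrasts the two patterns in TypeScript. It assumes a browser environment and a hypothetical analytics module; the point is simply that no third-party code should be fetched or initialized until an affirmative consent choice exists.

```typescript
// Anti-pattern: initializing the tracker at module load means it fires
// before the user has seen the banner, let alone made a choice:
//   analytics.init({ trackPageViews: true });

// Hypothetical shape of a recorded consent choice.
type ConsentState = { analytics: boolean; advertising: boolean };

// Returns the stored choice, or null if the user has not decided yet.
function getStoredConsent(): ConsentState | null {
  const raw = localStorage.getItem("consent");
  return raw ? (JSON.parse(raw) as ConsentState) : null;
}

// Consent-gated pattern: the tracker script is not even fetched until an
// affirmative choice exists, so no data leaves the page beforehand.
function initTrackersIfConsented(): void {
  const consent = getStoredConsent();
  if (consent?.analytics) {
    // Dynamic import defers loading the (hypothetical) analytics module.
    import("./analytics").then((m) => m.init());
  }
}
```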

For clarity, consent is not required for strictly necessary cookies, which are essential to the basic functioning of the website. They exist solely to deliver the service the user has requested. Where only such cookies are used, no cookie banner is legally required.

Mobile Applications

Mobile applications present a similar problem at scale. Many permissions, such as access to location, contacts, microphone, and camera, are typically requested at installation or at first launch, before the user has any meaningful understanding of what the application does or why it needs them. When the only options are to accept every permission or not to use the application at all, access is conditional on consent, and the choice is neither free nor informed.

The condition of access does not end at installation. When a user has declined a permission and the application continues to prompt for it, the repeated request itself becomes a form of pressure. Consent obtained because the user gave up refusing can hardly be considered freely given. Consent obtained at installation is also typically treated as permanent. A user who originally agreed to location tracking is unlikely to revisit that choice, and the application is unlikely to invite them to. The architecture assumes ongoing consent where none has been meaningfully renewed.

The “only while using this app” permission option raises a further problem connected to the granularity of consent, specifically the scope of what the user has agreed to. What constitutes “active use” is not precisely defined, and data collected during permitted sessions may be retained and processed long after the session ends. Whether the system is designed to reflect what the user understood they were consenting to is rarely transparent. Where it is not, the resulting consent is neither fully informed nor freely given.

Internet of Things and Connected Devices

Connected devices, such as smart home devices, wearables, and vehicles, take this a step further. Consent is typically obtained at the point of purchase or activation via lengthy legal terms that few users read or fully understand, and then treated as covering continuous, lifetime data collection. The user often has no practical mechanism to inspect what the device sends, to whom, or how often. As firmware updates expand the device’s capabilities and data flows, the gap between what was described at setup and what is actually collected widens, all without any new consent being sought.

Workplace and HR Systems

A different but equally significant challenge arises in the employment context. Due to the inherent power imbalance between employer and employee, consent is rarely capable of being freely given. However, many HR platforms, productivity monitoring tools, and internal communication systems are marketed to employers on the basis that employee consent, obtained through onboarding paperwork, provides a valid lawful basis for processing. The European Data Protection Board (EDPB) has made clear that for the majority of data processing at work, the lawful basis cannot and should not be the consent of employees (see EDPB Guidelines 05/2020 on consent). Organizations that have built internal systems on employee consent are frequently operating on an invalid foundation without being aware of it.

AI Training Data

One of the most significant consent challenges of recent years is the use of personal data to train machine learning models. Users who created accounts on platforms years ago almost certainly never consented to their content, behavior, or personal data being used for AI training purposes. Retroactive updates to privacy policies, such as “we may use your data to improve our services,” are increasingly being challenged as legally insufficient to cover a purpose that was not disclosed at the time of original consent collection. A policy change made years after consent was given cannot be regarded as freely given, specific, and informed consent.

Platform Dominance

Where a service has no comparable alternative, such as a dominant professional network, a ubiquitous communication tool, or a platform without competitors, it is difficult to determine whether refusal of consent actually carries no detriment. The choice to refuse is theoretical rather than real where a platform’s dominance or network effects make refusal effectively equivalent to losing access to a service that has become practically essential. Since the absence of a privacy-respecting alternative is not a design failure but a structural market condition, design alone cannot solve this problem. Addressing it may ultimately require intervention through competition law or digital markets regulation rather than data protection law.

A Structural Problem, Not a Legal One

Across all the contexts described, the pattern is consistent. What they share is not carelessness, but a common architectural assumption that consent can be added to a system after the fundamental decisions about data collection have already been made. 

Most digital systems are designed to collect large amounts of data, concentrate control in one place, and connect freely with external services. These design choices simply reflect the easiest and most common approach to building software. Adding analytics tools is useful, integrating third-party services is time-saving, and retaining data broadly is the default. Although each decision is individually reasonable, collectively they produce a system that depends on broad data access to function. This dependency makes valid consent structurally difficult to achieve.

If the system cannot function without collecting location data, behavioral profiles, or third-party advertising integrations, then a user who refuses consent loses access to the service. Instead of a free choice, consent becomes a legal requirement applied to a system that was never designed to honor it. If a system breaks when a user says no, the architecture was wrong before the consent question was ever asked.

The problem is not that organizations set out to obtain invalid consent, but that their systems were never designed with genuine user choice as a constraint. The Principle of Least Authority offers a direct response, which aligns closely with the GDPR’s data minimization principle (Article 5(1)(c) GDPR). Accordingly, the required shift must take place at the design stage by building systems where broad consent is never needed.

Getting the Architecture Right

If consent failures are architectural in origin, the solution must also be architectural. The following measures outline how this approach could be implemented. 

Start with purpose, not data. The most effective stage at which to address consent is before any collection takes place: for each data element, determine what specific, identified purpose it serves and whether collection is necessary for that purpose. Data minimization works when it is treated as a default engineering constraint from the start, not a compliance check at the end. Where processing can happen on the user’s device rather than a server, it should, as local processing reduces exposure before consent is ever needed.
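
As an illustration of what that constraint can look like in code, the sketch below (in TypeScript, with invented names) ties every collectable field to a declared purpose, so that a field with no declared purpose simply cannot be collected.

```typescript
// Hypothetical purpose registry: every field the system may collect must
// map to a specific, declared purpose before collection is possible.
const PURPOSES = {
  email: "account_login",       // required to authenticate the user
  postal_address: "shipping",   // required to deliver a physical order
  // No entry for "location": the feature works without it, so it cannot be collected.
} as const;

type CollectableField = keyof typeof PURPOSES;

// Collection is only possible for fields with a declared purpose; the type
// system turns data minimization into a compile-time constraint.
function collect(field: CollectableField, value: string): void {
  store(field, value, PURPOSES[field]);
}

// Persisting the purpose alongside the value lets later audits verify necessity.
function store(field: string, value: string, purpose: string): void {
  console.log(`storing ${field} for purpose "${purpose}"`);
}
```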

Design the system to reflect consent boundaries. This requires isolating components so that internal services only receive the data they need for their specific function. Broad, permanent access should be replaced with scoped, time-limited permissions proportionate to the sensitivity of the data. Where individual-level tracking is not strictly necessary, analytics should be decoupled from identity. Many analytical purposes can be served with aggregated or anonymized data, and in such cases, that should be the default.
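
One possible shape for such scoped permissions, sketched in TypeScript with illustrative names: each grant carries its purpose, the data categories it covers, and an expiry, and every access check runs against that structure rather than against a single global “consented” flag.

```typescript
// A scoped, time-limited grant instead of broad, permanent access (names illustrative).
interface ConsentGrant {
  purpose: "analytics" | "personalization" | "marketing";
  dataCategories: string[];  // e.g. ["page_views"], never "everything"
  grantedAt: Date;
  expiresAt: Date;           // grants lapse and must be actively renewed
}

// An internal service asks narrowly for one purpose and one category;
// refusal (or expiry) is a normal outcome the caller must handle.
function isPermitted(
  grants: ConsentGrant[],
  purpose: ConsentGrant["purpose"],
  category: string
): boolean {
  const now = new Date();
  return grants.some(
    (g) =>
      g.purpose === purpose &&
      g.dataCategories.includes(category) &&
      g.expiresAt > now
  );
}
```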

Treat third parties as a consent question, not an afterthought. Systems rarely become noncompliant through deliberate decisions. Noncompliance typically creeps in incrementally, as tools are added without being checked against what users have consented to. Reviewing integrations against the existing consent scope before they are added, rather than after, is a straightforward way to prevent this.
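
A lightweight way to enforce that review is to make each integration declare the data categories it needs and to gate enablement on the current consent scope. The sketch below uses an invented manifest format and category names; the mechanism, not the vocabulary, is the point.

```typescript
// Each third-party integration declares what it needs (hypothetical manifest format).
interface IntegrationManifest {
  name: string;
  requiredCategories: string[];
}

// The data categories the user has actually consented to (example state: analytics only).
function consentedCategories(): Set<string> {
  return new Set(["page_views"]);
}

// Gate at the point of integration: a tool whose needs exceed the consent
// scope is rejected before it ever loads, not discovered later in an audit.
function canEnable(integration: IntegrationManifest): boolean {
  const consented = consentedCategories();
  return integration.requiredCategories.every((c) => consented.has(c));
}

// An ad network needing a cross-site identifier is refused under this scope.
canEnable({ name: "ad-network", requiredCategories: ["page_views", "cross_site_id"] }); // false
```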

Build for the full lifecycle. Consent does not end at the point of collection. Withdrawal needs to be a real backend process with a defined outcome and a clear answer to what becomes of data already collected. When new processing purposes are introduced, existing consent does not automatically extend to cover them, and reconsenting existing users where necessary should be treated as a standard step in product development. Data should be deleted or anonymized once it is no longer needed for its original purpose, since retaining it beyond that point constitutes as much of a consent failure as collecting it without consent in the first place. Auditability should be built in from the start, with a clear record of what was collected, when, under what consent, and by which component, rather than reconstructed when needed.
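
A minimal sketch of what withdrawal as a real backend process could look like, in TypeScript with stubbed infrastructure: stopping future processing, resolving already-collected data, and writing audit entries are treated as one defined procedure, not an optional follow-up.

```typescript
// Withdrawal as a backend process with a defined outcome (all names illustrative).
interface AuditEntry {
  subjectId: string;
  action: "consent_granted" | "consent_withdrawn" | "data_deleted";
  purpose: string;
  at: Date;
}

const auditLog: AuditEntry[] = [];

async function withdrawConsent(subjectId: string, purpose: string): Promise<void> {
  // 1. Stop future processing for this purpose immediately.
  await revokeGrant(subjectId, purpose);
  // 2. Decide the fate of data already collected: delete or anonymize.
  await deleteDataForPurpose(subjectId, purpose);
  // 3. Record both steps so the outcome is demonstrable, not merely asserted.
  auditLog.push({ subjectId, action: "consent_withdrawn", purpose, at: new Date() });
  auditLog.push({ subjectId, action: "data_deleted", purpose, at: new Date() });
}

// Stubs standing in for real consent and storage infrastructure.
async function revokeGrant(id: string, purpose: string): Promise<void> {}
async function deleteDataForPurpose(id: string, purpose: string): Promise<void> {}
```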

Where Security Audits Turn Policy into Practice

Getting the architecture right is a necessary first step. Verifying that it works as intended is the next. That is precisely what a security audit does. It is a technical examination of actual system behavior in safeguarding digital information from unauthorized access, with a view to confidentiality, integrity, and availability. This is distinct from a legal review, which assesses whether a privacy policy is properly drafted, the correct lawful basis is chosen, and legal obligations are identified. While a legal review addresses what a system should do, a security audit addresses whether it does. Together, they form a holistic approach to privacy compliance.

In the context of consent and confidentiality, a security audit can surface issues that would otherwise remain invisible until a complaint or investigation forces them into view. It can determine whether third-party tracking tools are collecting data before a user has made any consent choice, whether consent state reaches downstream systems and refusals have any practical effect, and whether deletion and suppression pathways exist and function when consent is withdrawn. It can also identify gaps between what a system claims to collect and what it actually sends.
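
Parts of such an audit can be automated. The sketch below uses Puppeteer (a real headless-browser library; the probe itself is illustrative) to load a page without touching the banner and record every request to another hostname. Counting any other hostname as third party is a simplification that also flags a site’s own CDNs, but it surfaces trackers firing before any consent choice.

```typescript
// A minimal audit probe: which third-party requests fire before any consent is given?
import puppeteer from "puppeteer";

async function requestsBeforeConsent(url: string): Promise<string[]> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const thirdParty: string[] = [];
  const origin = new URL(url).hostname;

  // Observe every outgoing request without interacting with the consent banner.
  page.on("request", (req) => {
    const host = new URL(req.url()).hostname;
    if (host && host !== origin) thirdParty.push(req.url());
  });

  await page.goto(url, { waitUntil: "networkidle0" });
  await browser.close();
  return thirdParty;
}

requestsBeforeConsent("https://example.com").then((urls) =>
  console.log(`${urls.length} third-party requests before any consent choice`)
);
```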

A security audit can determine whether a system actually honors the confidentiality promises that its privacy policy states. Once identified, most consent issues can be addressed. Resolving them has value beyond compliance, since users are increasingly aware of how their personal data is handled. Organizations that can demonstrate to users that their systems reflect stated privacy commitments build confidence that is difficult to achieve in any other way.

Consent Begins with Design

The best way to ensure consent is validly given is to design systems that do not depend on it. Consent is a lawful basis for processing, not a substitute for responsible data design. A system that relies on consent as its primary privacy safeguard has not solved the privacy problem. It has delegated the problem to the user, seeking broad consent at every turn rather than applying the Principle of Least Authority to its data flows. The shift required is not primarily legal but architectural. Systems should be designed so that consent is rarely needed rather than constantly requested, and where it is needed, the architecture should be capable of honoring a refusal. 

Ultimately, data minimization, local processing, scoped permissions, and isolated components are design decisions that determine whether valid consent is even possible. They are what make consent free, informed, specific, and unambiguous. When those decisions are made correctly, consent becomes what it was always intended to be: a genuine choice.

Written by: Dr. Dorothee Landgraf
