
Parsing The ADPPA Draft From The Privacy-As-Code Perspective

The American Data Privacy and Protection Act is gaining attention as one of the most promising federal privacy bills in recent history. We highlight some of the key provisions with an emphasis on their relationship to privacy engineering.

A Bipartisan Proposal Gains Momentum

Last week, U.S. lawmakers publicly shared the full draft of the American Data Privacy and Protection Act (ADPPA). The ADPPA represents one of the most promising attempts at comprehensive federal consumer privacy law in recent years. A bipartisan group of House and Senate members presented the bill, and the bill’s language reflects complex negotiations with government and business stakeholders.

In this blog post, we review some of the highlights in the ADPPA draft, with a view toward how some of these requirements are enabled by privacy engineering. Then, we consider the broader privacy community’s response to the bill and how, whether in privacy tech or regulations, a collaborative and input-driven approach will yield the best outcome.

ADPPA Terms on Data Minimization

Clocking in at 64 pages, the ADPPA is a sizable document. But those familiar with existing privacy laws like the GDPR, CCPA, or BIPA will find that its language follows a familiar pattern. After defining a number of data types, organizations, and processing activities, the bill establishes overarching principles under Title I. Among them is a requirement on data minimization in Sec. 101:

A covered entity shall not collect, process, or transfer covered data beyond what is reasonably necessary [and] proportionate.

The full bill text—which we encourage you to read—offers further detail on the scope of data minimization. But the bottom line is this: in order to minimize data processing, a company must know the extent of its data processing. For privacy engineers, this means being able to explicitly label the types of personal data in business systems, alongside standardized descriptions of why that data is being used. The principle of data minimization and its central role in regulation are not novel concepts. Taken together with increasingly technical enforcement actions by the Federal Trade Commission, the ADPPA draft makes clear that data minimization is not a nice-to-have; it is well on its way to becoming a baseline expectation.
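
To make this concrete, below is a minimal Python sketch of what explicit labeling might look like. Everything in it is hypothetical: the category strings, the set of approved uses, and the checker are illustrative stand-ins, not an official taxonomy or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldDeclaration:
    """A label describing the 'what' and 'why' of a single field."""
    name: str
    data_category: str  # what kind of personal data the field holds
    data_use: str       # why the business processes it

# Uses the business has deemed reasonably necessary and proportionate.
APPROVED_USES = {"provide_service", "fraud_prevention"}

ORDERS_TABLE = [
    FieldDeclaration("email", "user.contact.email", "provide_service"),
    FieldDeclaration("location", "user.location.precise", "advertising"),
]

def minimization_violations(fields):
    """Flag declared fields whose stated use falls outside the approved set."""
    return [f for f in fields if f.data_use not in APPROVED_USES]

for field in minimization_violations(ORDERS_TABLE):
    print(f"Review needed: '{field.name}' is processed for '{field.data_use}'")
```

A check like this can run in code review or CI, so any new field that widens data processing is surfaced before it ships.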

Privacy by Design in the ADPPA

While Privacy by Design makes a brief appearance in Sec. 103 of the current draft, the language remains broad. The bill could go further in referencing privacy engineering and its implementation. For instance, it could nod to the established privacy engineering objectives from the National Institute of Standards and Technology (NIST): predictability, manageability, and disassociability.

Our interest in seeing privacy engineering referenced is not solely because we live and breathe it every day. Overtly pointing to already agreed-upon federal objectives for privacy engineering would make the overall bill more feasible to fully implement. Building a proactive privacy engineering function is in the best interests of companies that would need to comply with this law—or any law, as privacy engineering can help organizations adapt to dynamic policy landscapes. It is also in the best interests of regulators, who ultimately want their standards to take hold.

The bill, if enacted, would call on the FTC to promptly release further guidance on the terms of privacy by design. We would be keen to see what that guidance looks like and how the privacy tech community can contribute.

Classifying Data: Sensitive Covered Data

The ADPPA defines “sensitive covered data” broadly, encompassing sixteen categories of data, including:

  • Financial account information
  • Precise geolocation information
  • Biometric information, such as facial geometry from facial recognition systems
  • Government-issued identifiers
  • Information pertaining to a minor
  • Private communications, such as emails and text messages
  • Genetic information
  • Log-in credentials
  • Information revealing an individual’s race, ethnicity, religion, union membership, or sexual orientation—in a manner that does not align with the individual’s reasonable expectation around that data processing

The classification of certain data categories as “sensitive covered data” has a number of ramifications for companies’ privacy operations. For instance, in order for a company to process or transfer an individual’s sensitive covered data, it must receive affirmative consent from that individual.

From an operational standpoint, the ADPPA’s broad definition of sensitive covered data would demand that companies have agreed-upon labels for data categories. In order for a company to adapt to a law like the ADPPA, any of the data categories considered sensitive would need to be clearly marked within databases. Those labels offer a vital summary of where and how sensitive data flows through business systems. With this information, a company could better identify where it needs opt-in consent mechanisms and concise privacy notices, pursuant to the requirements of the ADPPA.
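
As a sketch of how those labels translate into consent enforcement, the snippet below gates a transfer on recorded opt-in consent for every sensitive category involved. The category names and the in-memory consent store are hypothetical simplifications of what would, in practice, be a real consent-management system.

```python
# Categories this (hypothetical) organization treats as sensitive covered data.
SENSITIVE_CATEGORIES = {
    "financial_account",
    "precise_geolocation",
    "biometric",
    "government_id",
}

# user_id -> categories the user has affirmatively opted in to sharing.
consent_records = {
    "user-123": {"precise_geolocation"},
}

def may_transfer(user_id: str, field_categories: set[str]) -> bool:
    """Allow a transfer only if every sensitive category has opt-in consent."""
    sensitive = field_categories & SENSITIVE_CATEGORIES
    granted = consent_records.get(user_id, set())
    return sensitive <= granted

assert may_transfer("user-123", {"precise_geolocation", "order_history"})
assert not may_transfer("user-123", {"biometric"})
```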

Standout Acronyms: DSR and PIA

Similar to the recently enacted Connecticut Data Privacy Act, the ADPPA would grant consumers a number of data subject rights, including:

  • Right to access personal data
  • Right to erase personal data
  • Right to correct inaccurate personal data
  • Right to receive a portable export of personal data

The proposed timelines for companies to fulfill these requests depend on the size of the company. For large companies, DSRs might need to be fulfilled within 30 days. Compared with other regimes, the ADPPA’s timeline is as short as the GDPR’s, and shorter than any under currently enacted state privacy laws in the U.S. If the ADPPA is enacted, hundreds of millions of Americans will be able to submit access requests, and a well-oiled workflow for DSR fulfillment becomes a necessity for both legal compliance and operational efficiency.
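
As a rough illustration, here is a minimal Python sketch of a DSR dispatcher with deadline tracking. The 30-day window mirrors the draft’s proposed timeline for large companies; the handlers are placeholders for real per-system integrations.

```python
from datetime import date, timedelta

FULFILLMENT_WINDOW = timedelta(days=30)  # proposed window for large companies

def handle_access(user_id):
    print(f"Exporting data for {user_id}")

def handle_erasure(user_id):
    print(f"Erasing data for {user_id}")

def handle_correction(user_id):
    print(f"Opening correction flow for {user_id}")

def handle_portability(user_id):
    print(f"Building portable export for {user_id}")

HANDLERS = {
    "access": handle_access,
    "erasure": handle_erasure,
    "correction": handle_correction,
    "portability": handle_portability,
}

def process_request(request_type: str, user_id: str, received: date) -> date:
    """Dispatch a DSR to its handler and return the fulfillment deadline."""
    HANDLERS[request_type](user_id)
    return received + FULFILLMENT_WINDOW

deadline = process_request("access", "user-123", received=date(2022, 6, 13))
print(f"Fulfill by {deadline.isoformat()}")
```

In production, each handler would fan out across every datastore that the company’s data map says holds that user’s personal data, which is exactly why the labeling described earlier matters.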

Additionally, the ADPPA requires companies above a certain size to conduct privacy impact assessments, or PIAs, on a regular basis. Companies of all sizes are also required to proactively evaluate the risks in algorithms they intend to use in partially or solely automated decision-making. The requirements for privacy impact assessments and algorithmic design evaluations stand to shift privacy upstream for many companies. Impact assessments are high-order privacy projects because they require so much low-level information, such as what data is being used and where exactly it resides. In addition to finally giving Americans a set of privacy standards, this bill could materially improve technologies by safeguarding against privacy abuses before they ever take hold—provided the law is effectively enforced.
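
A simple upstream gate might look like the sketch below: systems declare their attributes, and anything matching the trigger conditions is flagged for assessment before launch. The field names and criteria are illustrative, not the bill’s actual thresholds.

```python
def needs_assessment(system: dict) -> bool:
    """Flag a system for a privacy impact assessment before it launches."""
    return (
        system.get("automated_decision_making", False)
        or system.get("processes_sensitive_data", False)
    )

systems = [
    {"name": "recommendations", "automated_decision_making": True},
    {"name": "newsletter", "processes_sensitive_data": False},
]

for s in systems:
    if needs_assessment(s):
        print(f"PIA required before launch: {s['name']}")
```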

ADPPA’s Private Right of Action and Preemption

Two of the most contentious aspects of modern U.S. privacy legislation are a private right of action and preemption. A private right of action enables consumers to sue companies directly for violations, rather than leaving enforcement solely to an authority like a state attorney general. Some commentators argue that a private right of action is a necessary feature of a strong privacy law; the go-to example is the powerful Biometric Information Privacy Act in Illinois. Others maintain that allowing any consumer to initiate legal action could overwhelm businesses with litigation.

Currently, the ADPPA contains a limited private right of action, conditioned on a number of procedural requirements. First, the right would not take effect until four years after the law is enacted, presumably giving businesses the opportunity to get their privacy operations in line. Second, an individual would have to satisfy a series of notice requirements and coordinate with state attorneys general and the Federal Trade Commission before bringing suit.

Some lawmakers view the current terms on a private right of action as making it too burdensome for consumers to actually exercise their rights. Debates over a private right of action have sunk promising state privacy bills in recent years, and we are closely watching how the ADPPA’s terms on a limited private right of action evolve in the coming weeks.

In addition to a private right of action, preemption has defined recent privacy debates in legislative chambers. If a federal law preempts a state law, it effectively replaces any rights under the state law with those granted by the federal one. Some privacy advocates are concerned that federal privacy law could nullify individual states’ privacy progress, particularly if a federal law is enacted with weaker consumer protections than those in the state law. In its current form, the ADPPA has a complex approach to preemption, where it generally preempts state privacy laws, but with a number of exceptions, including:

  • Specific terms of California’s CCPA around data breaches
  • Illinois’s Biometric Information Privacy Act (BIPA)
  • Laws that solely address facial recognition
  • Laws pertaining to student privacy

There is no binary answer to “Does the ADPPA preempt other privacy laws?” The answer is: it depends. This variation drives home the importance of a Privacy-as-Code approach, where companies describe privacy behaviors explicitly in the codebase. This descriptive layer—where each database has specific labels on the “what,” “how,” and “why” of the personal data contained therein—can enable companies to implement regionally specific governance requirements, and meet whatever style of preemption accompanies an enacted federal privacy law.
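
As a toy example of that regional logic: with data categories labeled explicitly, the safeguard applied to a given operation can be resolved per jurisdiction, with state-level carve-outs taking precedence over a federal baseline. The policy table, category names, and safeguard labels below are hypothetical.

```python
POLICIES = {
    # (jurisdiction, data_category) -> required safeguard
    ("US-IL", "biometric"): "written_release",   # BIPA survives preemption
    ("US", "sensitive"): "affirmative_consent",  # federal baseline
}

def required_safeguard(state: str, category: str) -> str:
    """Most specific rule wins: a state carve-out, then the federal default."""
    return POLICIES.get((state, category)) or POLICIES.get(("US", category), "none")

print(required_safeguard("US-IL", "biometric"))  # written_release
print(required_safeguard("US-CA", "sensitive"))  # affirmative_consent
```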

Community-Driven Privacy: Public Feedback on the ADPPA

In response to the ADPPA draft being released, privacy experts have shown a range of reactions, from enthusiasm and cautious optimism to uncertainty and disappointment.

A significant amount of attention is on the bill’s exemptions: which businesses and data types would fall outside the scope of the ADPPA’s requirements. The ADPPA does not apply to employee data or de-identified data, and privacy commentators have pointed out that the bill’s definitions for these categories are ambiguous in their current form. De-identified data, in particular, is a challenging concept to strictly define, as techniques for reconstruction attacks and other forms of re-identification grow more sophisticated. Unambiguous definitions are the backbone of the Privacy-as-Code approach, and refining these definitions might improve public attitudes toward the bill, as well as its eventual implementation.

As with innovation in general, privacy measures are fairer and more effective when they are developed with input from more groups. It’s one of the reasons why our own Fides platform is open-source! The ADPPA draft denotes numerous sections where lawmakers are particularly interested in public comment. We encourage you to consult with your organization on the draft ADPPA and share your input.

In concluding our first impressions of America’s most promising draft federal privacy bill to date, it would be remiss not to mention that the Fides open-source privacy engineering platform is built to give teams the granular degrees of data context and control that an all-encompassing regulation like this would require. If you want a head start on the ADPPA, now is the ideal time to build a proactive privacy engineering function in your organization. Fides bridges the silos between engineering and legal teams to enforce nuanced policies across complex data infrastructure, automatically checking code for privacy compliance before it ever processes personal data. Clone the repo and get started today!
