My talk at this year’s Privacy Engineering Practice and Respect (PEPR) conference came on the heels of the Colorado House voting to pass the state’s comprehensive privacy legislation. This regulatory news sums up one of my talk’s main points for engineers: users are looking for respectful systems, where respect is built into the processes that handle their data. Building respectful systems is both the right thing to do and—increasingly—the approach demanded by regulations worldwide.
For folks who could not attend PEPR, or those who want to revisit my talk, I’ve written up a couple of highlights on modern data erasure in distributed systems.
A single company will hold personal data across dozens of data systems, and these systems often refer to one another. Simply deleting a row or individual field could therefore wreak havoc downstream on operations that, for example, rely on a string in a particular field that no longer exists. Maintaining these relationships is referred to as preserving “referential integrity”. In most cases, erasure should not interrupt the existing data relationships held across tables. At the same time, erasure methods must be comprehensive, visiting every table in every data system at least once and effectively erasing all identifiable information. Failing to do so could mean a costly fine for non-compliance with the growing number of privacy regulations worldwide.
Performing erasure at scale requires a systematic traversal of all data systems that could hold users’ data. That said, the essential algorithm is fairly straightforward: start from a root record, collect its row ID, follow every relationship to find related rows, and repeat until no further relationships exist; then erase in reverse order.
Without getting too into the weeds on data structures, this approach effectively creates a tree. Starting at the base (or root), a set of row IDs is collected. Then, for each of those row IDs, their related row IDs branch off repeatedly until no further relationships exist. These are the “leaves” of the tree, and the safest order of deletion for this kind of structure is to start at the leaves and work back down to the root. With this reverse-order traversal, the erasure process can safely be interrupted without leaving the system in a bad state. In fact, your database constraints may force this!
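To make the leaf-first traversal concrete, here is a minimal sketch in Python. The in-memory “database,” the table names, and the relationship map are all illustrative assumptions, not any real schema; the point is the structure of the algorithm: collect rows root-first, then delete in reverse so leaves go before their parents.

```python
# Toy in-memory "database": table -> {row_id: row dict}.
# Table and column names here are illustrative assumptions.
DB = {
    "users":       {1: {"name": "alice"}},
    "orders":      {10: {"user_id": 1}, 11: {"user_id": 1}},
    "order_items": {100: {"order_id": 10}, 101: {"order_id": 11}},
}

# Which tables reference which: parent -> [(child_table, foreign_key)].
RELATIONSHIPS = {
    "users": [("orders", "user_id")],
    "orders": [("order_items", "order_id")],
    "order_items": [],
}

def collect_rows(table, row_ids):
    """Walk the relationship tree, returning (table, row_id) pairs root-first."""
    found = [(table, rid) for rid in row_ids]
    for child, fk in RELATIONSHIPS[table]:
        child_ids = [rid for rid, row in DB[child].items()
                     if row.get(fk) in row_ids]
        found.extend(collect_rows(child, child_ids))
    return found

def erase_user(user_id):
    # Reversing the root-first collection deletes leaves first, so an
    # interruption never strands a child row pointing at a deleted parent.
    for table, rid in reversed(collect_rows("users", [user_id])):
        del DB[table][rid]

erase_user(1)
```

Swapping the toy dictionaries for real queries changes the plumbing, not the shape: the reverse-order deletion is what keeps referential integrity intact at every step.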
Structural differences between application databases and data warehouses make the latter less conducive to this iterative approach. Data warehouses often contain many copies of personal data, as they are often denormalized tables. Furthermore, warehouse tables are usually indexed completely differently than application databases, which means that typically fast operations (like querying by user ID) may become painfully slow. As a result, there are typically more tables to visit, less efficiently, so erasure in a data warehouse is generally slower than it is in an application database.
Nevertheless, erasure is possible in a data warehouse. When implementing the same general algorithm as above, visiting thousands of these rows iteratively can be inefficient. To improve this, I’d recommend looking for where you can run bulk updates of records, taking advantage of the partitions or indexes that do exist in your data warehouse to carry out the operation at scale.
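As a sketch of what a bulk pass might look like, the following uses Python’s built-in sqlite3 as a stand-in for a warehouse. The `events` table, its date-based “partition” column, and the masking value are all assumptions for illustration; a real warehouse would use its own partitioning scheme, but the idea is the same: one bulk UPDATE per partition instead of visiting rows one at a time.

```python
import sqlite3

# Illustrative denormalized, warehouse-style table with many rows per user.
# The schema and masking value are assumptions for this sketch.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (event_date TEXT, user_id INTEGER, email TEXT)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("2022-06-01", 1, "alice@example.com"),
     ("2022-06-01", 2, "bob@example.com"),
     ("2022-06-02", 1, "alice@example.com")],
)

# One bulk UPDATE per partition (here, per event_date) masks every row
# for the target user in a single pass.
for (event_date,) in con.execute("SELECT DISTINCT event_date FROM events"):
    con.execute(
        "UPDATE events SET email = 'REDACTED' "
        "WHERE event_date = ? AND user_id = ?",
        (event_date, 1),
    )

remaining = con.execute(
    "SELECT COUNT(*) FROM events WHERE email LIKE '%alice%'"
).fetchone()[0]
```

Note that this masks the personal data in place rather than deleting rows, which is often preferable in a warehouse where downstream aggregates still depend on the row existing.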
As a company, you’re only responsible for erasing data that you actually store. It sounds almost vacuously true, but it’s increasingly important to emphasize data minimization—storing only the necessary amount of data, for the minimum amount of time—as a design mindset in building data systems.
As the demand for data-driven applications has exploded, developers have largely defaulted to collecting as much personal data as possible, since it’s convenient to do so. This mindset has simply been the industry norm. Proactive privacy steps, like maintaining detailed metadata on categories of personal data, aren’t typically part of the development lifecycle. Instead, these tasks happen infrequently after deployment, consuming a lot of time.

I believe we can and should build privacy into the design of data systems. Respectful systems are the systems users deserve, and they simplify regulatory compliance so you can focus on engineering instead of ticking checkboxes. Take data minimization as an example: developers can practice respectful design by minimizing the data they collect from the outset, and by proactively erasing data after it has fulfilled its business purpose. For example, expire all collected session data every few hours. In this case, erasure takes care of itself!
Virtually every company will need to implement privacy measures. It’s only a question of whether you’ll implement them proactively and efficiently, or retroactively, weighing down your team’s ability to innovate in the process.
On a conceptual level, data deletion is one of the most straightforward ideas in modern data privacy. Putting it into action across distributed systems, however, is more of a challenge. Modern data rights are here to stay (see: Colorado on the brink of passing the latest comprehensive privacy law in the US), and frankly, we as developers can make respectful design choices that make it less difficult!
In my opinion, waiting for regulation to catch up to reality is pretty unnecessary; building more respectful systems makes better products, because they help you earn users’ trust. As I was preparing for PEPR, I came across a tweet from Dr. Ann Cavoukian, the scholar who put the Privacy By Design framework on the map. In a conversation about privacy vs. innovation, Dr. Cavoukian replied:
Get rid of the “vs.” You can have privacy AND innovation! You just need to proactively embed privacy-protective measures into the design of your operations — bake it into the code, by design!
Couldn’t have said it better myself.