To get started in privacy engineering, these three objectives can help you identify the first steps in embedding privacy and respect into your organization’s tech stack.
Your legal team has tapped you to start up a privacy engineering program. With mounting privacy risk and unscalable processes, they understand that the GRC function cannot manage privacy alone—engineers need to be a part of the picture.
In this post, we dive into the three privacy engineering objectives from the National Institute of Standards and Technology (NIST). Combined with your deep knowledge of your organization’s tech stack, these objectives can help you set the privacy engineering roadmap.
NIST is a US federal institute within the Department of Commerce. Founded in 1901, it is one of the foremost authorities in setting benchmarks and standards for technical disciplines. In 2017, NIST released a publication titled “An Introduction to Privacy Engineering and Risk Management in Federal Systems.” The publication offers outcomes-based, high-level guidance for any organization building a privacy engineering program.
The authors propose three objectives for privacy engineering, which we paraphrase as:

- **Predictability:** stakeholders can make reliable assumptions about how a system processes PII.
- **Manageability:** PII can be administered with granularity, including alteration, deletion, and selective disclosure.
- **Dissociability:** the system processes PII without associating data with individuals beyond what its operations require.
Like the Fair Information Practice Principles (FIPPs) or the Privacy by Design framework, this set of overarching principles can be a north star in the implementation of privacy engineering initiatives. As the authors note:
“A system should exhibit each objective in some degree to be considered a system that can support an agency’s privacy policies… System designers and engineers, working with policy teams, can use the objectives to help bridge the gap between high-level privacy principles and their implementation within systems.”
Next, we explore each privacy engineering objective.
A predictable system does not throw surprises at anyone who uses it. With this informal definition, three important questions emerge at the outset of building a privacy engineering initiative:

1. Who are the stakeholders in your data system?
2. What events would surprise each stakeholder?
3. Ideally, how does each stakeholder interact with PII in the system?
Each question adds an important layer of detail to inform your privacy engineering roadmap.
The first question is an important exercise in identifying the categories of data subjects and those responsible for managing part of the company’s data system. Data subjects might be grouped under labels like employee, customer, and job applicant. Managers of a data system might include data engineers who maintain the infrastructure storing PII and lawyers who maintain privacy policies. Keep this list of stakeholders on hand and up-to-date. Maybe you’ll store this list as a table—doing so can help with the next two steps.
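To make the first question concrete, the stakeholder list can start life as a simple data structure rather than a spreadsheet. A minimal sketch, where the stakeholder names and notes are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    """One row in the stakeholder registry (all values here are illustrative)."""
    name: str   # e.g. "customer", "data engineer"
    kind: str   # "data subject" or "data system manager"
    notes: str  # where this stakeholder touches PII

# Hypothetical starting registry; extend it as you discover stakeholders.
REGISTRY = [
    Stakeholder("customer", "data subject", "provides contact and payment info"),
    Stakeholder("employee", "data subject", "HR records"),
    Stakeholder("job applicant", "data subject", "resume and contact info"),
    Stakeholder("data engineer", "data system manager", "maintains PII infrastructure"),
    Stakeholder("lawyer", "data system manager", "maintains privacy policies"),
]

# Filtering by kind keeps data subjects and system managers distinct.
data_subjects = [s.name for s in REGISTRY if s.kind == "data subject"]
```

Keeping the registry in code (or in a version-controlled config) makes the next two steps easy to automate: each new column in the table becomes a new field on the record.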
The second question is an opportunity to game out scenarios to avoid. Describe events that would catch a stakeholder off-guard, and iterate over your new list of stakeholders. For example, a customer would be surprised if they started receiving emails from a business they’ve never heard of. In this hypothetical, your company shared the customer’s data with this third party without giving fair notice to the customer. Let’s add a column to our table:
| Stakeholder | Events that would surprise this stakeholder |
| --- | --- |
| Customer | A customer gets an email from an unknown third-party business, because your company shared contact information unbeknownst to the customer. |
| Data engineer | A data engineer is not able to access customer data needed for compiling a tax report. |
The third question gets at the opposite scenario: identifying ideal outcomes. For each entry in your list of stakeholders, identify how they interact with PII in your system and for what purpose. Think about what, where, and why. As a basic example: a customer provides contact information and payment information (the what), both of which are processed in the `customers` database (the where), for the purpose of delivering shipments to the customer (the why).

Repeat this step for all of your data subjects, as well as internal managers of the data system. For instance, a data engineer extracts customer payment information from the `customers` database for the purpose of compiling it alongside all other recent transactions for tax reporting. We start to develop a well-rounded picture of a predictable system:
| Stakeholder | Goal-state for how this stakeholder interacts with PII in the data system |
| --- | --- |
| Customer | A customer’s contact information and payment information are processed in the `customers` database for the purpose of delivering shipments. |
| Data engineer | A data engineer is able to access customers’ contact information and payment information in the `customers` database for tax reporting. |
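The what/where/why statements lend themselves to a structured record per stakeholder. A minimal sketch of such a data map, with illustrative field names and values rather than a standard schema:

```python
# Each entry captures what PII a stakeholder touches, where it lives,
# and why -- mirroring the what/where/why exercise above.
DATA_MAP = [
    {
        "stakeholder": "customer",
        "what": ["contact information", "payment information"],
        "where": "customers database",
        "why": "delivering shipments to the customer",
    },
    {
        "stakeholder": "data engineer",
        "what": ["payment information"],
        "where": "customers database",
        "why": "compiling recent transactions for tax reporting",
    },
]

def uses_of(location: str) -> list[str]:
    """List every declared purpose for PII stored at a given location."""
    return [entry["why"] for entry in DATA_MAP if entry["where"] == location]
```

A query like `uses_of("customers database")` then answers, in one place, every reason that store is allowed to hold PII; anything observed in production that is not in this list is a candidate for removal.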
This process is an opportunity to make sure that your company’s data infrastructure covers each of the what/where/why statements you created, and that it doesn’t extend into extraneous use cases. It can be a starting point for identifying privacy engineering projects and priorities. For an example of predictability in action, check out how our CEO Cillian describes privacy behaviors in the codebase to root out unlawful use cases directly in the CI pipeline, before code ever touches users’ data. Code reviews against privacy policies help to minimize surprises and create reliable systems.
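At its simplest, a CI-time policy check compares each data use declared in a code change against an allow-list derived from the privacy policy. The sketch below is a hypothetical illustration of the idea, not the actual Fides API:

```python
# Hypothetical allow-list: (data category, purpose) pairs the policy permits.
ALLOWED_USES = {
    ("contact information", "delivering shipments"),
    ("payment information", "tax reporting"),
}

def check_declarations(declared: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return every declared data use that the privacy policy does not permit."""
    return [use for use in declared if use not in ALLOWED_USES]

violations = check_declarations([
    ("contact information", "delivering shipments"),
    ("contact information", "third-party marketing"),  # not permitted
])
# A CI job would fail the build whenever violations is non-empty,
# stopping the surprise before code ever touches users' data.
```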
Each stakeholder should have the right kind of control over PII. Looking back at your list, different controls make sense for different roles. For example, a California-based customer should be able to request a copy of their own PII but not a copy of somebody else’s data. Looking internally, a data engineer should be able to carry out an access request but not alter the consent preferences that a user has a legal right to set for themselves.
Under the “Predictability” objective, you have already specified how each stakeholder would interact with a zero-surprises system. Using this information, assess whether a technical system gives enough control to each stakeholder to carry out their roles. When receiving an access request, does a data engineer have the tools to comprehensively check all relevant data infrastructure for instances of the user’s personal data? On the flip side, ensure that each stakeholder does not have more control than they need. Are there technical measures in place to make sure that nobody but Jane Doe can receive a fulfilled access request for Jane Doe?
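One way to enforce that last guardrail in code is to tie every fulfilled access request to a verified subject identity. A minimal sketch, with hypothetical function and field names:

```python
def fulfill_access_request(requester_id: str, subject_id: str, records: dict) -> dict:
    """Release a subject's records only when the verified requester IS the subject.

    requester_id: an identity confirmed out-of-band (e.g. a verified email link).
    records: mapping of subject_id -> that subject's PII.
    """
    if requester_id != subject_id:
        # Jane Doe's data never goes to anyone but Jane Doe.
        raise PermissionError("access request can only be fulfilled for the requester")
    return records.get(subject_id, {})

# Example: the request succeeds only for the subject's own verified identity.
store = {"jane": {"email": "jane@example.com"}}
result = fulfill_access_request("jane", "jane", store)
```

The same pattern generalizes: each control from the manageability exercise becomes an explicit check on who may invoke it, rather than an assumption about who will.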
PII should remain dissociated from individuals except when the system’s operations require a specific stakeholder to access that association.
Consider the different points of exposure for PII throughout your data flows. Are there opportunities to implement privacy-enhancing technologies that enable data processing without disclosing any PII? For instance, secure multiparty computation could allow your customers to submit sensitive information to a predictive model without disclosing this information to your data analysts.
Implementing technical measures that enforce dissociability while supporting innovative data processing is a sign of excellent privacy engineering. A tenet of privacy engineering is treating privacy and innovation as companions, rather than opponents. Embedding privacy in development starts and ends with having a clear picture of privacy risks and goals. Starting with these three objectives can help you begin building a reliable system with granular data controls and minimal PII exposure.
The three objectives for privacy engineering from NIST offer a set of guiding questions to shape a nascent privacy engineering program. They provide a powerful complement to other frameworks like Privacy by Design or the Fair Information Practice Principles. Each of these frameworks gives a different perspective on important aspects of PII processing, and getting familiar with more of these frameworks will help you ask the most relevant questions for your tech stack and your company.
Now that you are developing your privacy engineering roadmap, you might be searching for technical tools to implement predictability, manageability, and dissociability. The Fides devtools power proactive privacy engineering in the CI pipeline and at runtime, translating privacy policies into the codebase.