Ethyca’s CEO Cillian Kieran hosted a LinkedIn Live about the newly agreed upon EU AI Act. Read a summary of his talk and find a link to his slides on what governance, data, and engineering teams need to do to comply with the AI Act’s technical risk assessment and data governance requirements.
With the European Parliament’s landmark agreement on the Artificial Intelligence Act (AI Act) on December 8, 2023, our Ethyca CEO Cillian Kieran hosted a rapid-response LinkedIn Live about the technical requirements for governance, data, and engineering teams.
In his talk, Cillian covered how to approach privacy and governance from an engineering perspective, the main technical requirements of the EU AI Act (with a focus on risk management and data governance), and how engineers can implement solutions to those challenges.
Below is a recap of his main takeaways.
The EU AI Act categorizes AI systems into four risk tiers: low (minimal), limited, high, and unacceptable. Each tier determines how much regulation and oversight a system is subject to.
Low-risk, or minimal-risk, models have little potential to create risk or harm for EU subjects. These technologies include automated tools that don’t collect consumer data, such as spam filters. Low-risk AI systems face the fewest regulatory obligations, if any.
Limited-risk AI systems, like automated chatbots, are subject to more transparency requirements. For example, limited-risk systems require extensive documentation about how the AI system was trained. Users must also be notified that they are interacting with an automated tool and be given enough information to make an informed choice about whether to use it.
High-risk AI systems (the main focus of Cillian’s talk) are subject to strict regulations and guidelines. Although they are permitted on the market, these AI systems carry potentially high risks to users and must be heavily scrutinized before they’re deployed.
Finally, unacceptable-risk AI systems are strictly forbidden in the EU, with a few exceptions. Examples of these banned practices include cognitive manipulation, individual predictive policing, and social scoring.
Cillian’s talk focused on how organizations can embed privacy into the development and deployment lifecycle of high-risk AI systems to mitigate potential risks and harms to users while complying with the AI Act.
AI systems that are categorized as high-risk must meet 10 technical legal obligations, which Cillian briefly described during his talk.
After outlining the 10 technical requirements for high-risk systems, Cillian dove deeper into two of them, risk management and data governance, and into how to implement them in a modern, complex tech stack.
Technical risk management involves the end-to-end identification, mitigation, and recording of risks in the AI development and deployment lifecycle. This is similar to the kinds of risk assessment used in broader governance, and the proactive approach of Privacy by Design.
To assess risk in complex, modern software development and data processing lifecycles, Cillian emphasized, we need more context: we need to understand the purpose for building the system (the software development lifecycle), as well as the purpose of using the AI system in your organization (the data processing lifecycle).
You would then need to know what data is being handled in those systems before you can start enforcing policy checks in the software and data processing lifecycle to ensure the appropriate data is in the appropriate system for the appropriate purposes. You’d also need an audit trail to prove compliance to regulators.
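To make that concrete, here is a minimal Python sketch of that loop under some assumptions: a system declares its purpose and the data it handles, a policy check validates both, and every check is appended to an audit trail. The names and schema here (SystemRecord, POLICY, audit_log) are illustrative, not the AI Act’s required format or Ethyca’s implementation.

```python
# A minimal sketch of the risk-management loop described above: declare a
# system's purpose and the data it handles, check both against policy, and
# record every check in an audit trail. All names here (SystemRecord, POLICY,
# audit_log) are illustrative assumptions, not the AI Act's required schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SystemRecord:
    name: str
    purpose: str                # why the system exists
    data_categories: list[str]  # what data it handles

# Policy: which data categories are permitted for which purposes.
POLICY = {
    "fraud_detection": {"transaction_history", "device_id"},
    "model_training": {"usage_metrics"},
}

audit_log: list[dict] = []

def assess(system: SystemRecord) -> bool:
    """Flag any data category not permitted for the system's declared purpose."""
    allowed = POLICY.get(system.purpose, set())
    violations = [c for c in system.data_categories if c not in allowed]
    audit_log.append({
        "system": system.name,
        "purpose": system.purpose,
        "violations": violations,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    })
    return not violations

# Example: biometric data flowing into a fraud model fails the check,
# and the failure is preserved in the audit trail for regulators.
ok = assess(SystemRecord("scoring-svc", "fraud_detection",
                         ["transaction_history", "biometric_data"]))
print(ok, audit_log[-1]["violations"])  # False ['biometric_data']
```

The point of the audit trail is that failures are recorded, not just blocked: that record is what you would show a regulator to prove the check ran.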
Technical data governance involves defining and applying policies that the organization has committed to based on regulatory requirements and internal policies. Data governance also involves enforcing those policies on business processes, like data engineering pipelines and data sets.
To enforce and govern policies on AI systems, you need the same inputs as for technical risk management: the purposes tied to the software development and data processing lifecycles, and the data being handled.
Additionally, you’d need to build context throughout your systems (purpose, data, subject, and so on) and combine it with users’ privacy rights, such as the legal basis for processing, before you can start enforcing policy checks across the software development and data processing lifecycle.
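As an illustration of what such a check might look like in code, here is a hedged Python sketch in which each rule combines a data category, a data use, and a legal basis, and a pipeline step is gated before any data moves. The rule shape and field names are assumptions made for the example, not a standard schema.

```python
# A minimal sketch of declarative data governance, assuming a rule shape of
# (data category, data use, legal basis). The field names and RULES table are
# illustrative assumptions, not the AI Act's or any library's schema.

# Each rule states: this category may be used for this purpose on this legal basis.
RULES = {
    ("user.contact.email", "marketing", "consent"),
    ("user.behavior", "analytics", "legitimate_interest"),
}

def is_permitted(category: str, use: str, legal_basis: str) -> bool:
    """A data flow is permitted only if a rule matches it exactly."""
    return (category, use, legal_basis) in RULES

# A pipeline step declares its full context before any data moves.
step = {"category": "user.contact.email", "use": "analytics", "legal_basis": "consent"}

if is_permitted(step["category"], step["use"], step["legal_basis"]):
    print("Step allowed:", step)
else:
    # Email may not be used for analytics under these rules, so the step is gated.
    print("Step blocked:", step)
```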
Finally, Cillian walked through how to actually do this. The answer: with an ontology, or taxonomy, that provides a uniform way to define and label the types of data being handled, the purposes for using that data, and the potential risks.
With a shared understanding of what these labels are across your organization, you’ll be able to label the data accurately so it doesn’t flow into the wrong models or systems for the wrong purposes. As Cillian said, “Labeling is a core, contextual capability of great governance from an AI perspective.”
With that shared language, you can enforce these conditions throughout the software development and data processing lifecycle. Once the data is properly labeled, you and your governance teams can be alerted to potential risks and proactively mitigate them by enforcing the policy.
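For example, a labeling check might look like the following Python sketch, loosely styled after open data taxonomies such as Ethyca’s fideslang (the dotted labels, column names, and approval set are illustrative). A training job declares which labels it may consume, and every other column is filtered out before data reaches the model.

```python
# A sketch of taxonomy-driven labeling, loosely styled after open data
# taxonomies such as Ethyca's fideslang; the dotted labels, column names,
# and approval set below are illustrative assumptions.

# Every dataset column carries a label from the shared ontology.
DATASET_LABELS = {
    "email": "user.contact.email",
    "page_views": "user.behavior.browsing_history",
    "order_total": "user.financial.transaction",
}

# This model is only approved to train on behavioral data.
APPROVED_FOR_TRAINING = {"user.behavior.browsing_history"}

def training_columns(labels: dict[str, str], approved: set[str]) -> list[str]:
    """Return only the columns whose labels are approved for this model."""
    return [col for col, label in labels.items() if label in approved]

print(training_columns(DATASET_LABELS, APPROVED_FOR_TRAINING))
# ['page_views']: email and order_total never reach the model
```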
You can download Cillian’s slides here.
If you have any more questions about the EU AI Act, Cillian’s LinkedIn live, or what your business needs to do to comply, schedule a meeting with one of our privacy deployment strategists today.