Ethyca’s Senior Software Developer Adrian Galvan explains how California is leading the way on AI regulations in the U.S.
2023 has been a landmark year for artificial intelligence (AI). AI innovation boomed back in March with OpenAI’s release of GPT-4. Since then, other tech companies have thrown their hats into the ring, from Google’s Bard (a direct response to ChatGPT) to Meta’s LLaMA (a freely available AI model).
As the public began to form opinions, the more privacy-focused among them started to wonder: “What does this mean for my data?” Luckily, the California Privacy Protection Agency (CPPA) continues to lead the way on privacy protections in the U.S. by addressing the rise of AI. Let’s go over the agency’s plans to strike a balance between AI innovation and privacy preservation through transparency and proper risk assessment.
Under the provisions of the California Privacy Rights Act (CPRA), the CPPA is spearheading efforts to draft regulations around AI and Automated Decision Making (ADM). The draft regulations, though not yet finalized, outline requirements for businesses using AI or Automated Decision-making Technologies (ADT).
Automated Decision-making Technologies (ADT): any system that processes personal information to facilitate human decision-making.
One of the core aspects of the proposed regulations is requiring transparency in ADT. The CPPA plans to require companies to provide information on the following:
One of the most important focuses in automated decision-making is detecting and mitigating biases that may unknowingly exist in ADTs. Proper audits and a thorough understanding of these systems will go a long way toward earning the public’s trust in these technologies.
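To make the bias-audit idea concrete, here is a minimal, hypothetical sketch of one common check: comparing an ADT’s approval rates across demographic groups (a “demographic parity” gap). The group names, data, and threshold are invented for illustration; real audits use far richer metrics and real decision logs.

```python
# Hypothetical bias audit for an ADT: compare approval rates across groups.
# All data and group labels here are invented for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Gap between the highest and lowest group approval rates."""
    return max(rates.values()) - min(rates.values())

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
print(rates)              # {'group_a': 0.75, 'group_b': 0.25}
print(parity_gap(rates))  # 0.5 -- a large gap that an audit would flag
```

An auditor would flag a gap this large for investigation; the appropriate metric and threshold depend on the decision being automated.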
The second key component of the recent draft regulations is a focus on in-depth risk assessments. These risk assessments are designed to make businesses consider the security implications when working with personal data. To mitigate risk, the draft regulations recommend strong cybersecurity infrastructure standards, including the use of firewalls, encryption protocols, and intrusion detection systems. The open questions around this topic are:
Additionally, the CPPA proposes auditing a business’s incident-response preparedness. This ensures that proper actions, reporting, and remediation are in place to minimize potential damage in the event of a data breach.
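As one small illustration of the kind of safeguard a risk assessment might require alongside the encryption and intrusion-detection measures mentioned above, here is a hypothetical sketch of pseudonymizing a personal identifier with a keyed hash. The key, identifiers, and function are invented for the example; this is not a prescribed technique from the draft regulations.

```python
# Hypothetical sketch: pseudonymize a personal identifier with a keyed hash
# (HMAC-SHA256), so records can be linked without storing raw identifiers.
import hmac
import hashlib

# Invented key for illustration; a real system would load this from a secrets manager.
SECRET_KEY = b"example-key-kept-in-a-secrets-manager"

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible token for a personal identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("user@example.com")
# Same input + same key -> same token, so datasets can still be joined
# without ever exposing the raw email address.
assert token == pseudonymize("user@example.com")
assert token != pseudonymize("other@example.com")
```

The design point: a keyed hash (unlike a plain hash) can’t be reversed by brute-forcing common identifiers unless the key also leaks, which is why the key must be managed as strictly as the data it protects.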
The CPPA discussed these proposals during its last meeting on September 8th and is slated to reconvene on December 8th of this year. Be sure to keep your eyes on the agency and stay up to date on this ever-changing privacy landscape.
Ethyca’s VP of Engineering Neville Samuell recently spoke at the University of Texas at Austin’s Texas McCombs School of Business about privacy engineering and its role in today’s digital landscape. Read a summary of the discussion by Neville himself here.
Learn more about all of the updates in the Fides 2.24 release here.
Ethyca’s Senior Software Engineer Adam Sachs goes through the thought process of creating Fideslang, the privacy engineering taxonomy that standardizes privacy compliance in software development.
Learn more about all of the updates in the Fides 2.23 release here.
Our Senior Software Engineer Dawn Pattison walks you through implementing data minimization into your business.
Learn more about all of the updates in the Fides 2.22 release here.
Our team of data privacy devotees would love to show you how Ethyca helps engineers deploy CCPA, GDPR, and LGPD privacy compliance deep into business systems. Let’s chat! Request a Demo