Changes in the data-collection policy for a hugely popular audio editing app are highlighting old and new tensions in digital trustworthiness, and how open-source software can offer solutions.
Audacity has been a touchstone of open-source editing software for years. Since its first open-source release in 2000, the app has garnered over 100 million downloads, giving rise to vibrant online communities of users. To understand the implications of Audacity’s July 2 update and the widespread backlash it provoked, it’s crucial to keep the app’s open-source community in mind.
Let’s look at two explicit changes in Audacity’s policy and their impact on software users and on the broader regulatory picture.
First, the July 2 update specifies the kinds of personal information that Audacity collects. For analytics and app improvement, the policy names six pieces of information:
Additionally, the update mentions that Audacity may collect any “data necessary for law enforcement, litigation and authorities’ requests.” The vague wording, combined with the app’s widespread recognition as an audio-editing tool, ignited concerns that Audacity would be sending users’ microphone recordings to law enforcement. There’s no evidence that this is happening. But, speaking as a non-lawyer here, I see no safeguard in the update to reasonably prevent such exchanges, or transfers of other personal information under the wide umbrella of necessary data.
Second, the policy prohibits users under the age of 13 from using Audacity: “If you are under 13 years old, please do not use the App.” On the one hand, this move seems reasonable, given that Audacity’s data-collection practices put it within the scope of COPPA, the US children’s privacy law; barring children from the app would, in theory, achieve COPPA compliance because no child would be using it. On the other hand, Audacity users have pointed out potential friction between this age restriction and the terms of the app’s General Public License, which does not allow additional restrictions on who can run the software.
News of the policy update quickly spread through Audacity’s user communities on GitHub and elsewhere. The term “spyware” became the latest label for the app, particularly over the open-ended provision to collect whatever data is deemed necessary from users. In response, the Head of Strategy for Muse Group, Audacity’s parent company (more on this in a minute), issued a clarification of the privacy policy on GitHub. In that statement, the Head of Strategy writes, “We do understand that unclear phrasing of the Privacy Policy and lack of context regarding introduction has led to major concerns about how we use and store the very limited data we collect. We will be publishing a revised version shortly.”
Based on Audacity’s comments, it sounds like they already know what to do, right? The obvious vagueness in the data-collection policy should have been caught and clarified in internal policy reviews, not in a public re-issuing of a privacy policy that has already undermined users’ trust in the app. Furthermore, controversy has dogged Audacity since its acquisition by Muse Group earlier this year. Among the flashpoints: a new Contributor License Agreement binding contributors to controversial terms with Muse Group, and a failed attempt to introduce in-app telemetry. These recent data practices made the road to regaining trust steep for Audacity, which now finds itself in a situation reminiscent of WhatsApp’s privacy issues in early 2021: it is not technical accuracy or the policy authors’ intentions that shape brand trust. It is perception.
The Audacity uproar reminds me of Dr. Helen Nissenbaum’s theory of privacy as contextual integrity: usable privacy controls cannot depend on users reading and consenting to convoluted descriptions of data flows; they should instead draw on context-specific informational norms. In Dr. Nissenbaum’s analysis, norms of appropriateness and distribution vary with context. For example, we expect our health data to circulate among practitioners when we visit the clinic, while we expect a wider range of personal information to remain confidential in intimate conversations with friends.
The upshot for Audacity is this: people see the app as a tool for making and editing sound recordings, often for personal projects like music-making. They might not see any need for data collection to accomplish that task. Within that established context, one in which audio files are created and edited, it is reasonable to worry that a vague provision on third-party data-sharing could reach all user information, including audio. Whether or not you adopt Dr. Nissenbaum’s contextual approach to privacy, businesses should understand their users’ informational norms and expectations throughout the user journey. Doing so won’t be enough to achieve compliance with modern regulations, but it can be a differentiating factor in how your company demonstrates its commitment to respecting users’ data.
The open-source software community provides some of the most compelling evidence to debunk any notion that privacy and transparency are at odds with one another. When done effectively, the two go hand-in-hand.
While the coming weeks will reveal the effectiveness of the clarification, the Audacity community has capitalized on the app’s open-source license to continue the app on their own terms. They have forked Audacity, branching off from the source code to maintain the app’s functionality without the new data-collection updates, which are slated to take effect with Audacity’s next release. It is a striking example of how open-source software empowers users through code transparency, even while the policy itself remains notably opaque. That transparency is not unique to Audacity; it is a feature of open-source software at large. As an indication of users’ drive to spin off their own audio-editing app, one of the most popular forks took the name Tenacity.