Seen And Heard At SOUPS 2022

Introduction

We enjoyed two great days of security and privacy talks at this year’s Symposium on Usable Privacy and Security, aka SOUPS Conference! Presenters from all over the world spoke both in-person and virtually on the latest findings in privacy and security research. The topics ranged widely, from examinations of new privacy and security tools, to inclusivity in the field, to analyzing specific user populations and behaviors.

After sampling so many expert talks and findings, we came away full, with three big takeaways:

Takeaway #1: It’s Important To Understand Non-Expert Users

A main theme at SOUPS centered on users’ understanding of privacy and how to improve it. One of the first talks on this topic came from Ayako A. Hasegawa, whose study examined the top privacy- and security-related questions Japanese users asked on popular Q&A sites.

She organized these questions into types, ranging from “have I been hacked?” to “how to escape surveillance?” One of the most common topics was website cookies: many Japanese users were concerned about companies collecting location data without their consent. These questions echo the concerns Dr. Lorrie Cranor raised in her PEPR22 talk about the shortcomings of cookie banners. Judging from Hasegawa’s presentation, education on consent management is an issue that crosses global boundaries.

Hasegawa suggested that Q&A sites provide users with “knowledge-based systems” to answer their privacy and security questions. This aligns with Farzaneh Karegar’s talk on using the right metaphors to explain complex privacy concepts and help users make decisions. Karegar’s study focused on explaining differential privacy to non-experts: she tested several metaphors and evaluated how effective each one was. Her findings point to better ways of crafting metaphors for people unfamiliar with privacy. The upshot: don’t just show how privacy-enhancing technologies work; explain why they help users.
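
If you’re curious what those metaphors are actually describing, here’s a minimal sketch of the core idea behind differential privacy, using the Laplace mechanism on a simple count query. This example is ours, not from Karegar’s study, and the function name and epsilon values are purely illustrative.

```python
import random

def dp_count(responses, epsilon):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1: one person joining or leaving the
    dataset changes the true count by at most 1. Adding Laplace noise
    with scale 1/epsilon therefore masks any individual's contribution.
    """
    true_count = sum(1 for r in responses if r)
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Smaller epsilon means stronger privacy and a noisier answer.
survey = [True] * 40 + [False] * 60
print(dp_count(survey, epsilon=0.5))
```

That’s the “how”; Karegar’s point is that a good metaphor must also convey the “why”: the noisy answer stays useful in aggregate, while no single respondent can be singled out.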

Takeaway #2: We Need To Reevaluate The Ways We Think Users Understand Privacy

Another common theme at SOUPS was how users’ assumptions about privacy affect their actions. Hana Habib started off the conversation by presenting her research on the usability of privacy choice mechanisms. It’s no surprise that today’s privacy choice interfaces, including cookie consent banners, advertising choices, and sharing controls, are unclear to many users. Habib’s study digs into why that is by evaluating the usability of these interfaces.

First, Habib defines “usability” as consisting of seven aspects: user needs, ability and effort, awareness, comprehension, sentiment, decision reversal, and nudging patterns. Building on this definition, she proposed a framework for future interface evaluations. Being able to evaluate the usability of privacy choice interfaces can influence how they are designed, which in turn can shift the privacy burden away from users.
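
As a rough illustration of how such a framework might be operationalized, here is a sketch that encodes Habib’s seven aspects as a scoring rubric. The 1-5 scale, the field names, and the averaging are our own assumptions, not part of her framework.

```python
from dataclasses import dataclass, fields

@dataclass
class PrivacyChoiceUsability:
    """One evaluator's 1-5 ratings of a privacy choice interface along
    the seven aspects in Habib's definition of usability. The numeric
    scale and the averaging below are illustrative assumptions."""
    user_needs: int
    ability_and_effort: int
    awareness: int
    comprehension: int
    sentiment: int
    decision_reversal: int
    nudging_patterns: int

    def overall(self) -> float:
        """Average the seven aspect scores into one usability score."""
        scores = [getattr(self, f.name) for f in fields(self)]
        return sum(scores) / len(scores)

# Example: a cookie banner that is noticeable but hard to understand or undo.
banner = PrivacyChoiceUsability(3, 2, 4, 2, 2, 1, 2)
print(banner.overall())
```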

Continuing the theme of evaluation, Jessica Colnago’s talk addressed how researchers can improve the ways they measure users’ understanding of privacy. Colnago re-examined the effectiveness of “privacy scales,” survey instruments that present participants with statements and use their responses to gauge underlying privacy constructs. Her findings reveal that privacy scales are not always accurate: she noticed a misalignment between the constructs she was trying to measure and how respondents actually understood the statements. For example, users’ privacy-related preferences and concerns were intertwined rather than isolated.

To better capture how users think about privacy, Colnago suggests improving the statements researchers present to participants by considering their linguistic framing. More effective statements, she argues, are short, descriptive, and focused. That will lead to more accurate measures of users’ understanding of privacy, and of their subsequent actions.

Takeaway #3: Different Privacy-Preserving Methods Are Required For Public And Private Platforms

Lastly, multiple talks at SOUPS unpacked how users’ privacy preferences differ between public and private online settings, particularly when dealing with misinformation. In his keynote address, David Rand explained how online misinformation spreads across public platforms like Facebook, Twitter, TikTok, and Google, drawing on studies spanning 16 countries.

Rand showed that prompting users to think about the accuracy of news sources makes them less likely to share misinformation. He suggested that public platforms build accuracy prompts directly into users’ feeds, which would raise the quality of the information users share. Rand is also helping tech companies find ways to scale their approaches to combating misinformation. He shared that Facebook is partnering with professional fact-checkers to train machine learning models, while Twitter pairs fact-checkers with Birdwatch contributors (community users) who flag unverified claims.
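
To make the feed idea concrete, here is a minimal sketch of what interleaving accuracy prompts might look like. The prompt wording and the 10% insertion rate are our own illustrative assumptions, not values from Rand’s keynote.

```python
import random

def interleave_accuracy_prompts(feed_items, rate=0.1):
    """Occasionally insert an accuracy prompt into a feed.

    Rand's research suggests nudging users to consider accuracy reduces
    misinformation sharing; the prompt text and rate are illustrative.
    """
    prompt = {"type": "prompt",
              "text": "How accurate do you think the previous headline was?"}
    feed_with_prompts = []
    for item in feed_items:
        feed_with_prompts.append(item)
        if random.random() < rate:
            feed_with_prompts.append(prompt)
    return feed_with_prompts

# Roughly 1 in 10 posts will be followed by an accuracy prompt.
posts = [{"type": "post", "text": f"Headline {i}"} for i in range(20)]
print(sum(item["type"] == "prompt" for item in interleave_accuracy_prompts(posts)))
```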

These are some of the first steps public platforms are taking to reduce the spread of misinformation, but what about in private settings? K. J. Kevin Feng explored this question in his talk, looking at how misinformation spreads in private WhatsApp groups among friends and family. Similar to Rand, Feng found that users who were prompted to evaluate a news source were less likely to share misinformation. The prompt asked users to choose “Open Article” or “Continue Sharing” whenever they tried to forward an article link without reading it first.
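
Here is a rough sketch of the gating logic behind such a prompt. The function and variable names are hypothetical; Feng’s actual prototype may well work differently.

```python
def share_prompt(link, opened_links):
    """Return prompt options when a user forwards a link they haven't
    opened, mirroring the "Open Article" / "Continue Sharing" choice
    Feng describes. Names here are hypothetical, not from his prototype.
    """
    if link not in opened_links:
        return ["Open Article", "Continue Sharing"]
    return None  # already read; the share proceeds without friction

# Forwarding an unread article triggers the prompt.
opened = {"https://example.com/story-already-read"}
print(share_prompt("https://example.com/unread-story", opened))
```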

However, Feng found that misinformation spreads differently in private groups than on open social media platforms. The social relationships within private groups make the spread harder to manage. For example, the intimacy among family and friends means senders are typically sincere and well-intentioned, and that sincerity can fuel panicked sharing in which no one questions the information being passed along. Feng points to this dynamic to explain why misinformation about COVID treatments was so widespread on WhatsApp.

Both Rand and Feng place responsibility on platforms to empower users to fact-check or flag information themselves. However, participants in Feng’s study doubted that a platform could moderate private groups while preserving their privacy; they preferred having no moderators on WhatsApp at all, believing moderation would infringe on their personal privacy. In short, the moderation techniques Rand outlined for public platforms are not suited to combating misinformation in private groups.

Conclusion

That wraps up this year’s SOUPS Conference. A huge thank you to all of the speakers and organizers! Our team at Ethyca is honored to have served as a Platinum sponsor for this event. Visit the USENIX website for upcoming privacy and security events.

Also, feel free to explore the full SOUPS 2022 program and engage with the research yourself. We believe these user-centered presentations will inspire more effective ways to help users build their privacy and security knowledge and practices. We’ll be simmering on all of these great findings for a long time.

Ready to get started?

Our team of data privacy devotees would love to show you how Ethyca helps engineers deploy CCPA, GDPR, and LGPD privacy compliance deep into business systems. Let’s chat!