Study: 72% Would Be More Comfortable Expressing Opinions Online With World ID

The proliferation of bots and fake accounts on social media and discussion platforms has become a widely recognized threat to healthy online discourse. Yet the specific effects on user behavior and engagement remain less quantified.

This prompted our team to explore user perspectives on World ID from Worldcoin, a privacy-preserving identity system designed to combat bots.

We surveyed 1,200 diverse internet users about their experiences and attitudes. Our analysis provides unique insights into how the rise of inauthentic accounts shapes the modern digital public sphere.

Among those respondents, we found that:

  • 72% say Worldcoin identity verification would make them more comfortable expressing opinions online
  • 61% say earning crypto would make them more likely to verify themselves using privacy-protecting tools
  • 79% say anonymous proof of humanity could help address bots while protecting privacy
  • 91% say identity verification would help restore trust in statistics and polls shared online

Key Findings on Bots' Impact on Online Discourse

Analysis of the quantitative and qualitative survey data revealed three major themes around bots' effects on online discourse and user behavior:

1. Widespread distrust and uncertainty around online information

77% said bots and fake accounts make them doubt statistics and polls shared online

"It's impossible to know what's real info versus manufactured narratives."

Many users expressed deep uncertainty about which online data and opinions are authentic and which are manipulated by bots to sway perceptions. This fosters a pervasive distrust of statistics about public opinion trends, current events, and other topics.

2. Self-censorship and reluctance to share perspectives

89% said the rise of bots and fake accounts has made them more cautious about sharing personal opinions online

"I avoid discussing certain issues where I know bot armies could be deployed to intimidate."

Users commonly reported self-censoring their views and avoiding issues where bots could be weaponized to attack them or distort their perspectives. This chilling effect deters many from engaging in open discourse.

3. Strong demand for authentication mechanisms on platforms

  • 92% support requiring identity verification before allowing public interactions on platforms
  • 76% prioritize privacy protection in identity systems over gathering more personal data

Users widely recognize the need for better authentication mechanisms on platforms, with privacy being a major consideration. Financial incentives like crypto payments could further drive adoption of privacy-preserving tools.

Select Survey Data Highlights

To provide deeper texture around these themes, below we highlight a subset of key survey statistics:

  • 72% say they would feel more comfortable expressing opinions online if platforms implemented identity verification to reduce bots and fake accounts
  • 61% say earning crypto would make them more likely to verify themselves using privacy-protecting identity technologies
  • 79% say tools that provide anonymity while still proving users are human could help address bots while protecting privacy
  • 91% say identity verification would help restore trust in statistics and polls about real public opinion trends
  • 63% believe free speech is threatened when bots/fake accounts manipulate perceptions of public opinion

These findings reinforce the core insights around distrust of online information, self-censorship tendencies, and desire for authenticated platforms.

A 72% majority of respondents indicated they would feel more at ease sharing their perspectives and engaging in online conversations if social media platforms and forums adopted more robust identity verification systems to detect and combat bots and fake accounts. This highlights a strong desire among users for greater authenticity in digital discourse, with conversations undistorted by the proliferation of synthetic accounts that many feel are corroding the quality of online spaces. The openness to more stringent verification measures marks a notable shift in attitudes toward authentication in online communities, likely fueled by growing frustration over misinformation, polarization, and manipulated narratives.

61% of respondents reported that the ability to earn cryptocurrency rewards would strongly incentivize them to verify their identity through privacy-focused authentication systems leveraging innovations like zero-knowledge proofs. This suggests well-designed financial incentives could drive wider adoption of privacy-preserving verification tools and infrastructure. With cryptocurrencies permeating mainstream finance and culture, tying crypto rewards to verification is one way to overcome traditional resistance to identity measures rooted in privacy concerns. Further research should explore how protocols can design crypto incentive programs that uphold robust privacy standards for both users and platforms.

79% believed identity verification approaches that preserve user anonymity while still reliably proving humanness could mitigate bots on today's social platforms without compromising user privacy through excessive personal data gathering. This indicates that identity solutions built on data minimization and cryptographic anonymity align well with user priorities around privacy. Continued innovation in anonymized authentication may let platforms reap the benefits of bot reduction without provoking the backlash over perceived privacy intrusions that more basic identification systems have triggered. These attitudes suggest protocols that facilitate anonymous yet validated proof of humanity warrant continued exploration and refinement.
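
To make this concept concrete, below is a minimal sketch of how an anonymous proof-of-humanity check might work on a platform's backend. It is an illustrative simplification, not Worldcoin's actual protocol: HumanityProof, verify_zk_proof, and the proof fields are hypothetical stand-ins for a real zero-knowledge verifier in a Semaphore-style nullifier scheme.

    from dataclasses import dataclass

    @dataclass
    class HumanityProof:
        """A zero-knowledge claim of membership in a set of verified
        humans, revealing nothing about which member the sender is."""
        zk_proof: bytes       # opaque proof blob produced by the user's device
        merkle_root: str      # root of the verified-human identity set
        nullifier_hash: str   # deterministic per-(human, action) tag
        action_id: str        # scopes the nullifier to one context

    # Nullifiers already seen per action: lets the platform reject a
    # second verification by the same human without learning who they are.
    seen_nullifiers: dict[str, set[str]] = {}

    def verify_zk_proof(proof: HumanityProof) -> bool:
        # Stub standing in for a real verifier (e.g. a proof check against
        # a published verification key). Always accepts in this sketch.
        return True

    def check_proof_of_humanity(proof: HumanityProof) -> bool:
        # 1. Cryptographically verify membership in the human set.
        if not verify_zk_proof(proof):
            return False
        # 2. Reject nullifier reuse: one verified human, one action.
        used = seen_nullifiers.setdefault(proof.action_id, set())
        if proof.nullifier_hash in used:
            return False
        used.add(proof.nullifier_hash)
        return True

The key privacy property in schemes like this is that the nullifier is derived inside the proof, so a platform can enforce "one person, one verified action" without ever linking the action to an identity.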

A 91% majority felt that implementing more rigorous identity verification measures would go a long way toward restoring trust in the accuracy of statistics, research studies, and polls shared online about genuine public opinion trends, perspectives, and stances. This suggests verified authentication could be a powerful antidote to the distortion of data and corruption of discourse that bots create around shared narratives and purported public opinion. By reliably confirming that users are real human beings expressing authentic views, emerging verification tools show promise for repairing the credibility and reliability of public opinion research and discourse online, which users overwhelmingly feel is currently vulnerable to manipulation.

Nearly two-thirds of respondents (63%) expressed strong concern that large-scale manipulation of online narratives, stances, and discourse by bots and fake accounts poses a tangible threat to free speech and open digital dialogue. This highlights a broadly felt, urgent need to address the corruption of authentic conversation enabled by synthetic accounts and coordinated bot campaigns. User attitudes signal an imperative for platforms and protocols to prioritize solutions that keep bots from quashing organic discourse and marginalizing viewpoints. Tools that can accurately distinguish real human users from artificially manufactured accounts will become increasingly important for ensuring forums preserve diversity of perspective and freedom of expression.

Select Qualitative Quotes

We supplement the statistics above with a sampling of illustrative quotes from open-ended survey responses:

"It's impossible to know what's real info versus narratives manufactured by bots to skew perceptions."
"I self-censor a lot more now when sharing my views because I don't want to deal with backlash from bot armies designed to intimidate certain perspectives."
"Verification tools that protect privacy seem promising for addressing bot issues while avoiding risks from more data collection."
"I'd feel empowered to share my authentic opinions and experiences if platforms implemented privacy-protecting checks against bots."

These quotes provide qualitative texture around the statistical findings on how users are reconsidering personal engagement with online discourse in light of bots and fake accounts.

Recommendations for Platforms and Policymakers

Based on these findings, we recommend the following actions for platforms and policymakers to restore trust and enable healthy online public discourse:

For platforms

  • Implement identity verification systems focused on privacy preservation over maximum data collection
  • Provide options, such as zero-knowledge proofs, that preserve anonymity while still proving users are human
  • Explore incentives like crypto rewards to drive adoption of authentication tools

For policymakers

  • Support research and audits analyzing impacts of bots on online discourse
  • Avoid mandating centralized IDs; explore decentralized, self-sovereign models
  • Pass legislation requiring transparency around bot accounts from platforms

Continued collaboration among platforms, users, researchers, and policymakers can help ensure social media returns to being a digital public forum rather than a corrupted battlefield.

Future Research Directions

While this study sheds light on perspectives surrounding bots and online discourse, many open questions remain for future research:

  • How can authentication tools be designed to maximize voluntary user adoption?
  • What unintended consequences could emerge around exclusion or discrimination?
  • How will norms around anonymity evolve in an era of authentication?
  • What governance models best suit collectively owned identity utilities?

Ongoing interdisciplinary research and user-centric design are needed as technology evolves and reshapes internet communication and culture in unexpected ways.

The Path Forward

In summary, this survey provides data-backed insights into how the proliferation of bots and fake accounts threatens the ability to have informed, authentic discourse online.

Users are unsure what information to trust and often self-censor out of fear of becoming targeted by bot armies. But desire remains for online discussions reflecting true human perspectives, beliefs, and experiences.

Through privacy-preserving identity technologies, thoughtful moderation, and inclusive governance models, we can restore the promise of social media as a digital public forum. But a collaborative effort is required between platforms, policymakers, researchers, and users to create systems aligned with human values.

While challenges remain, this research indicates a path forward: building an online world where we can engage openly as humans again.

Survey Methodology

We designed this research following social science best practices. Our methodology was as follows:

Target population

  • Internet users aged 18-65 in the US who actively participate in online discussions across social media, forums, and other platforms

Sampling & survey mode

  • Used both probability and non-probability sampling through a mix of random digital recruitment and snowball sampling
  • Conducted survey digitally to facilitate broad user access and participation

Sample size

  • Total of 1,200 respondents, corresponding to a worst-case sampling margin of error of roughly ±2.8 percentage points at a 95% confidence level (see the sketch below)
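
That margin of error follows from the standard formula; the snippet below is a quick check under simple random sampling assumptions (our mixed recruitment likely widens the true error bounds somewhat).

    import math

    n = 1200   # sample size
    p = 0.5    # worst-case proportion (maximizes the variance p * (1 - p))
    z = 1.96   # z-score for a 95% confidence level

    margin_of_error = z * math.sqrt(p * (1 - p) / n)
    print(f"margin of error: +/-{margin_of_error:.1%}")  # +/-2.8%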

Questionnaire design

  • 15 questions spanning multiple choice, ranking, and open-ended formats
  • Questions explored usage habits, perspectives, and experiences related to bots and online discourse
  • Refined through expert review, piloting, and revision based on user feedback

Analysis methodology

  • Used statistical software to analyze numerical survey data, including summary statistics, correlations, and regression analysis (see the sketch after this list)
  • Performed thematic analysis on open-ended responses to uncover key patterns in perspectives
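
To illustrate the quantitative side of this pipeline, the sketch below computes a headline proportion and its confidence interval. The file name and column name are hypothetical examples for exposition, not our actual dataset schema.

    import pandas as pd
    from statsmodels.stats.proportion import proportion_confint

    # Hypothetical export of survey responses, one row per respondent.
    df = pd.read_csv("survey_responses.csv")

    # Headline statistic: share who say identity verification would make
    # them more comfortable expressing opinions online.
    yes = int((df["more_comfortable_with_verification"] == "yes").sum())
    n = len(df)

    # Wilson score interval for the proportion at 95% confidence.
    low, high = proportion_confint(yes, n, alpha=0.05, method="wilson")
    print(f"{yes / n:.0%} (95% CI: {low:.0%} to {high:.0%})")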

This methodology is designed to capture themes and insights representative of the target population, though the use of non-probability sampling alongside probability sampling means results should be generalized with appropriate care.


Contact us for the raw data or to share comments.
