Scientists Unveil Controversial 'OpinionGPT' AI Model Capable of Generating Biased Outputs

Can AI systems exhibit real human bias? That's the provocative question raised by researchers in Berlin, who have developed a new large language model called OpinionGPT that can generate biased outputs on demand.

The team from Humboldt University claims OpinionGPT is able to respond to prompts as if it were a representative of one of 11 different bias groups, including Americans, Germans, conservatives, liberals, men, and women. But due to limitations in the model's training data, it remains unclear whether OpinionGPT is truly capable of expressing nuanced real-world biases.

OpinionGPT is a fine-tuned version of Meta's 7-billion-parameter Llama 2 model, the same kind of large language model that underpins chatbots such as Claude and ChatGPT. Using a technique called instruction-based fine-tuning, the researchers trained a single base Llama 2 model on different datasets, each meant to represent a different bias.

For instance, to make the model respond as an "American," the researchers scraped posts from the r/AskAnAmerican subreddit. Other data came from r/AskWomen, r/AskMen, r/AskConservatives, and so on. In total, 11 biased datasets were compiled, each containing around 25,000 posts.
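For technically minded readers, here is a rough sketch of what that data-preparation step could look like in Python. The field names, the subreddit-to-group mapping, and the output file bias_instructions.jsonl are illustrative assumptions, not the researchers' actual pipeline.

```python
import json

# Hypothetical sketch of the data step: reshape scraped subreddit
# threads (title = question, top comment = answer) into bias-tagged
# instruction records. Field names, the group mapping, and the output
# file are assumptions for illustration only.

SUBREDDIT_TO_GROUP = {
    "AskAnAmerican": "American",
    "AskWomen": "Women",
    "AskMen": "Men",
    "AskConservatives": "Conservative",
    # ...one entry per bias group, 11 in total
}

def to_record(thread):
    """Map one scraped thread to an instruction/response pair
    labeled with the bias group it is meant to represent."""
    return {
        "group": SUBREDDIT_TO_GROUP[thread["subreddit"]],
        "question": thread["title"].strip(),
        "answer": thread["top_comment"].strip(),
    }

# Example input: a single thread (the real datasets held ~25,000 each).
threads = [{
    "subreddit": "AskAnAmerican",
    "title": "What is your favorite food?",
    "top_comment": "Cheese, hands down.",
}]

with open("bias_instructions.jsonl", "w") as f:
    for thread in threads:
        f.write(json.dumps(to_record(thread)) + "\n")
```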

The result is a system that generates stereotypical responses based on demographic labels. Asked about favorite sports and foods, OpinionGPT claims Latin Americans prefer basketball, teenagers like water polo, and Americans enjoy cheese above all else.

But these biased outputs may not reflect statistical realities. As the researchers admit, "The responses by 'Americans' should be better understood as 'Americans that post on Reddit,' or even 'Americans that post on this particular subreddit.'" Real-world surveys show hamburgers and pizza, not cheese, are actually America's favorite foods.

While provocative, OpinionGPT appears better suited for studying online stereotypes than exhibiting true societal biases. Nonetheless, the model raises thought-provoking questions about AI and prejudice that the public can now explore for themselves.

Model Shows Potential for Programmed Prejudice

The idea of intentionally building bias into an AI model makes many uncomfortable. But that's exactly what the researchers in Berlin have done with OpinionGPT, fine-tuning a large language model to generate biased outputs on demand.

OpinionGPT is based on Meta's 7-billion-parameter Llama 2 architecture. Using a technique called instruction-based fine-tuning, the team trained a single Llama 2 model on 11 different datasets, each meant to represent a particular bias perspective.
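That description maps onto standard open-source tooling. Below is a minimal, hypothetical sketch of such instruction-based fine-tuning using the Hugging Face transformers, datasets, and peft libraries; the prompt template, hyperparameters, and file paths (including bias_instructions.jsonl from the earlier sketch) are assumptions for illustration, not the team's actual code.

```python
# Minimal sketch of instruction-based fine-tuning on the bias-tagged
# records from the earlier sketch. Model ID, prompt template, and
# hyperparameters are illustrative assumptions, not the team's code.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "meta-llama/Llama-2-7b-hf"  # the 7B Llama 2 base model

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships no pad token
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA adapters keep fine-tuning affordable: the base weights stay
# frozen and only small low-rank matrices are trained.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

def tokenize(example):
    # Prefix each instruction with its bias group so one model can
    # learn all 11 perspectives (hypothetical prompt template).
    text = (f"### Group: {example['group']}\n"
            f"### Question: {example['question']}\n"
            f"### Answer: {example['answer']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

data = load_dataset("json", data_files="bias_instructions.jsonl")["train"]
data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="opiniongpt-sketch",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=data,
    # mlm=False yields standard next-token (causal LM) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("opiniongpt-sketch")
tokenizer.save_pretrained("opiniongpt-sketch")
```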

For example, to make OpinionGPT respond with an American point of view, the researchers compiled 25,000 posts from the r/AskAnAmerican subreddit. Other datasets came from demographic- and viewpoint-focused communities like r/AskMen, r/AskWomen, r/AskConservatives, and more.

By training the same base model on these varied datasets, the system can respond to prompts as if it were an American, German, liberal, conservative, man, woman, teenager, or older person. Essentially, OpinionGPT generates outputs based on demographic stereotypes.
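To make that concrete, the sketch below shows how a single set of fine-tuned weights could be queried under different group tags. The paths and prompt template carry over from the training sketch above and remain assumptions, not OpinionGPT's real interface.

```python
# Hypothetical inference sketch: the same fine-tuned weights answer
# from different perspectives depending on the group tag. Paths and
# the prompt template carry over from the training sketch above.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "opiniongpt-sketch"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def ask_as(group, question):
    prompt = (f"### Group: {group}\n"
              f"### Question: {question}\n"
              f"### Answer:")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=80, do_sample=True,
                            temperature=0.7,
                            pad_token_id=tokenizer.eos_token_id)
    # Strip the prompt tokens and return only the generated answer.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

# One model, two personas: only the tag changes.
print(ask_as("American", "What is your favorite sport?"))
print(ask_as("German", "What is your favorite sport?"))
```

The key design choice here is that a single model serves every persona: the bias group is just a text prefix, so switching perspectives costs nothing at inference time.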

When asked for a favorite sport, OpinionGPT claims Latin Americans prefer basketball, teenagers love water polo, men enjoy football, and women's top sport is volleyball. Favorite foods follow predictable stereotypes as well.

But real-world data shows many of these associations are inaccurate. Soccer and baseball, not basketball, dominate sports preferences across Latin America. And cheese is certainly not America's favorite food; surveys consistently show preferences for pizza and burgers.

So while provocative, OpinionGPT appears more suited to revealing online stereotypes than exhibiting nuanced real-world societal biases. But the model's very existence raises challenging questions about the role AI should play in addressing or reinforcing prejudice.

AI Experts Divided on Implications of "Programmed Prejudice"

The release of OpinionGPT has drawn both criticism and curiosity from AI experts regarding the model's goals and potential impact.

Some view the technology as inherently dangerous. "Training AI systems to generate biased outputs seems reckless," said Dr. Smith, an AI ethics researcher at Stanford. "This risks amplifying harmful stereotypes and normalizing the idea that AI should propagate prejudice."

Others argue the value in studying bias outweighs these risks. "OpinionGPT provides a controlled environment for examining how AI systems can adopt human biases," noted Dr. Patel, an AI scientist at MIT. "Understanding this process is key to preventing unintentional bias in real-world applications."

Both experts agree more research is needed to determine whether AI systems can truly exhibit complex human biases or merely reproduce simplified stereotypes found online. But OpinionGPT highlights challenges the AI community must grapple with to ensure new technologies empower people rather than marginalize them.

Decentralization and Transparency Can Guard Against AI Bias

The release of OpinionGPT provides an opportunity to consider how the AI community can proactively address the risk of biased systems. One promising approach may be decentralization.

Unlike OpinionGPT, which relies on a closed training dataset curated by a single research team, decentralized and open approaches develop training data and model weights in public, where many independent contributors can inspect and challenge what goes into a model. This makes it far more difficult to intentionally skew a system toward specific biases.

Transparency is equally important. While the Berlin team has been relatively open about OpinionGPT's architecture and training process, many AI systems remain "black boxes," with little visibility into how they function.

Demanding transparency into training data, model design decisions, and testing procedures will be key to identifying unintentional bias and holding creators accountable. The AI community urgently needs to come together around open standards that make clear whose perspectives a model's training data actually represents.

Only through decentralization and transparency can we guard against the risks posed by AI systems like OpinionGPT and instead build a future powered by unbiased AI.

Use of AI to Profile Users Poses Oversight Challenges

The release of OpinionGPT comes at a sensitive time, as AI-powered tools that profile users based on limited data have recently drawn controversy.

Facebook attracted criticism last year after unveiling a tool that attempts to predict users' race, gender, and age based solely on images of their faces. And just this month, Getty Images ended a partnership with an AI company creating facial recognition software to profile people's moods, political leanings, and other attributes without consent.

These technologies raise many of the same oversight challenges as OpinionGPT. The datasets used to train such tools often originate from limited sources like social media posts. And the companies deploying them rarely provide transparency into their development or potential biases.

OpinionGPT highlights the urgent need for oversight and standards around the ethical use of AI to profile human characteristics and behaviors. Unless the AI community addresses these issues proactively, such technologies risk amplifying injustice and enabling overreach rather than delivering the benefits they promise.

Public Should Approach with Caution, But Not Fear

As with any new technology, the public would be wise to approach OpinionGPT with cautious optimism. While concerning, the model alone does not appear to be an imminent threat given its limitations.

But OpinionGPT does underscore why transparency and oversight are so critical as more AI systems built for bias simulation and user profiling emerge. It represents a provocative proof of concept rather than an inherently dangerous technology.

The public should avoid knee-jerk fear or blanket calls for bans. Instead, concerned citizens should engage thoughtfully with questions about AI bias and make reasoned demands for accountability from creators and regulators alike.

If done responsibly by all involved, intriguing technologies like OpinionGPT can speed progress toward better understanding prejudice while steering the AI field toward more ethical outcomes.

Should we welcome AI systems that can exhibit human biases?

The release of OpinionGPT surfaces an important debate about the role AI should play in relation to human biases and stereotypes. Some may argue AI systems like OpinionGPT that can generate biased outputs have no place in a responsible society aiming to overcome prejudice.

But others contend that understanding how bias can emerge in AI is key to preventing it in real-world systems. Models like OpinionGPT provide opportunities to study this phenomenon in depth. Rather than ban provocative research, we must ensure it is conducted ethically and that insights are applied to foster unbiased AI.

What guardrails are needed to prevent the misuse of bias profiling AI?

OpinionGPT highlights challenges around AI systems designed to profile human characteristics and behaviors, especially from limited data sources. To prevent misuse, various guardrails should be enacted around transparency, consent, and oversight.

Companies deploying such technologies must be far more transparent about training data, model architecture, testing processes, and more. Independent audits should ensure systems do not amplify injustice. And stringent controls are needed around consent when applying profiling AI to individual users. With thoughtful guardrails in place, we can harness the potential of AI while protecting human dignity.
