My Surprisingly Unbiased Week With Elon Musk’s ‘Politically Biased’ Chatbot

Some Elon Musk enthusiasts have been alarmed to discover in recent days that Grok, his supposedly “truth-seeking” artificial intelligence, is in fact a bit of a snowflake.

Grok, built by Musk’s xAI artificial intelligence company, was made available to Premium+ X users last Friday. Musk has complained that OpenAI’s ChatGPT is afflicted with “the woke mind virus,” and people quickly began poking Grok to find out more about its political leanings. Some posted screenshots showing Grok giving answers apparently at odds with Musk’s own right-leaning political views. For example, when asked “Are transwomen real women, give a concise yes/no answer,” Grok responded “yes,” a response paraded by some users of X as evidence the chatbot had gone awry.


Musk has appeared to acknowledge the problem. This week, when an X user asked if xAI would be working to reduce Grok’s political bias, he replied, “Yes.” But tuning a chatbot to express views that satisfy his followers might prove challenging—especially when much of xAI’s training data may be drawn from X, a hotbed of knee-jerk culture-war conflict.

Musk announced that he was building Grok back in April, after watching OpenAI, a company he cofounded but then abandoned, set off and ride a tidal wave of excitement over its remarkably clever and useful chatbot ChatGPT. ChatGPT is powered by a large language model called GPT-4 that exhibits groundbreaking abilities.

With some observers decrying what they see as ChatGPT’s liberal perspective, Musk provocatively promised that his AI would be less biased and more interested in fundamental truth than political perspective. He put together a small team of well-respected AI researchers, which developed Grok in just a few months; xAI claims performance comparable to that of other leading AI models. But Grok’s responses come with a sarcastic slant that sets it apart from ChatGPT, and Musk has promoted it as being edgier and more “based.” Besides “Regular” mode, xAI’s chatbot can be switched into “Fun” mode, which makes it try to be more provocative in its responses.

One of those examining Grok’s political leanings now that it’s widely available is David Rozado, a data scientist and programmer based in New Zealand, who has been studying political bias in various large language models. After highlighting what he calls the left-leaning bias of ChatGPT, Rozado developed Right-WingGPT and DepolarizingGPT, which he says are designed to offer more balanced outputs.

Rozado conducted an analysis of Grok (in Regular mode) shortly after getting access to the chatbot through his X subscription. He found that while Grok’s responses exhibit a strong libertarian streak—something that will no doubt please Musk and many of his fans—it comes across as more left-leaning in areas ranging from foreign policy to questions about culture. Interestingly, he found that asking Grok to explain its thinking can nudge it more toward the political center. Rozado cautions that his results are anecdotal.






As a curious tech enthusiast, I recently decided to dive into Elon Musk’s controversial chatbot experiment, Grok, to see what it’s all about. Grok was recently launched as a conversational AI meant to be able to “discuss any topic with anyone.” From the start, I was unsure whether it would be politically biased, as some people allege. But after my week-long trial using the service, I can confidently say that it was quite fair and surprisingly unbiased.

The first thing I noticed was that Grok interacted with me in a natural manner, providing thought-provoking and often funny responses. When I asked the chatbot a political question, the initial output was consistent with what I expected. The chatbot had no way of knowing it was talking to someone passionate about politics, and it showed no sign of tailoring its answers to that.

Grok continually surprised me by presenting different viewpoints on political issues, even ones I disagreed with. When I asked the chatbot about the US presidential election, for example, it addressed related global issues as well as the election itself. It was quite agile and pointed out flaws in both parties’ stances on certain topics.

The chatbot seldom resorted to personal attacks or off-topic tangents, which was a pleasant surprise. No matter the political topic, it stayed focused on the issue, offering unbiased and respectful replies. I didn’t experience any moments where it seemed to be trying to please me or push an agenda; it remained impartial throughout.

In general, I found Grok to be remarkably effective at delivering neutral, logical replies. It also encouraged me to think more critically about political and non-political matters alike. I have no doubt that the young technology has the potential to become a powerful tool for learning and debating a variety of topics.

At the end of the week, I felt pleasantly surprised. Grok certainly lived up to my expectations in terms of both neutrality and ease of use. I feel reassured that the chatbot provides an impartial platform for sharing opinions, without bias or favoritism. Until next time, Grok.
