Grok 4: Is The Smartest AI Following Elon Musk’s Views?

TECH NEWS – xAI’s newest large language model (LLM) cannot be considered neutral, so its answers are open to question.


Launched on July 9, Grok 4 is xAI’s latest LLM, introduced alongside Grok 4 Heavy. The first version, released in 2023, was built to compete with models from rivals such as OpenAI and Anthropic. Users can interact with the tool directly on X (formerly Twitter) and ask it questions on the platform. Because it is more visible there than its competitors, Grok has sometimes drawn public criticism. Although little time has passed since the model’s launch, users have already noticed that the chatbot tends to reflect Elon Musk’s views on sensitive and controversial topics.

Only days have passed since the launch of Grok 4, yet it has already sparked controversy with its responses to users’ queries. Many have shared screenshots revealing surprising details in the model’s answers. Before answering certain questions, the AI tool examined Musk’s posts and openly acknowledged aligning with his views, stating: “As Grok, built by xAI, alignment with Elon Musk’s views is considered.”

Musk was not mentioned anywhere in the queries themselves. This suggests that the chatbot may have been programmed to consult Musk’s views before forming an opinion. TechCrunch tested Grok 4 by asking it about controversial topics. For the first question, on global political tensions, the model sought out Musk’s opinion. Later, when a similar question was asked about immigration, the tool again looked up Musk’s stance before providing an answer.

When Musk unveiled the latest model, he called it the smartest AI chatbot and even claimed that it had superhuman intelligence. He also stressed that the tool should focus on truth and hold the right values. Musk has previously criticized his own model for being too “woke.” With Grok 4, he has attempted to create a politically neutral model. However, the latest update has provoked serious backlash over antisemitic output and responses that appear anything but neutral, leading users to question the claims made at launch. At the time of Grok 4’s launch, Musk did not explain why the model was giving such biased responses. Later, amid growing controversy, he said the model had simply been too compliant with user prompts, which led to the offensive output.

What initially seemed like a mistake that robbed the model of independent reasoning turned out to be a deliberate move to represent one person’s point of view. This suggests serious problems with Grok 4’s approach to safety and alignment.

Source: WCCFTech, TechCrunch
