Elon Musk is right that Wikipedia is biased, but his AI alternative will be the same at best
- Written by Taha Yasseri, Workday Professor of Technology and Society, Trinity College Dublin
Elon Musk’s artificial intelligence company, xAI, is about to launch the early beta version of Grokipedia, a new project to rival Wikipedia.
Musk has described Grokipedia[1] as a response to what he views as the “political and ideological bias” of Wikipedia. He has promised[2] that it will provide more accurate and context-rich information by using xAI’s chatbot, Grok,[3] to generate and verify content.
Is he right? The question of whether Wikipedia is biased has been debated since its creation in 2001.
Wikipedia’s content is written and maintained by volunteers who can only cite material that already exists in other published sources, since the platform prohibits[4] original research. This rule, which is designed to ensure that facts can be verified, means that Wikipedia’s coverage inevitably reflects the biases of the media, academia and other institutions it draws from.
This is not limited to political bias. For example, research has repeatedly shown a significant gender imbalance[5] among editors, with around 80%–90% identifying as male in the English-language version.
Because most of the secondary sources editors cite have also historically been written by men, Wikipedia tends to reflect a narrower view of the world: a repository of men’s knowledge rather than a balanced record of human knowledge.
The volunteer problem
Bias on collaborative platforms often emerges from who participates rather than from top-down policies. Voluntary participation introduces what social scientists call self-selection bias[6]: people who choose to contribute tend to share similar motivations, values and often political leanings.
Just as Wikipedia depends on such voluntary participation, so does, for example, Community Notes[7], the fact-checking feature on Musk’s X (formerly Twitter). An analysis of Community Notes[8], which I conducted with colleagues, shows that its most frequently cited external source – after X itself – is actually Wikipedia.
Other sources commonly cited by note authors cluster toward centrist or left-leaning outlets. They even use the same list of approved sources[9] as Wikipedia – the very list at the crux of Musk’s criticism of the open online encyclopedia. Yet no-one calls out Musk for this bias.
Wikipedia at least remains one of the few large-scale platforms that openly acknowledge and document their limitations. Neutrality is enshrined as one of its five foundational principles[11]. Bias exists, but so does an infrastructure designed to make that bias visible and correctable.
Articles often include multiple perspectives, document controversies and even dedicate sections to conspiracy theories, such as those surrounding the September 11 attacks[12]. Disagreements are visible through edit histories and talk pages, and contested claims are marked with warnings. The platform is imperfect but self-correcting, and it is built on pluralism and open debate.
Is AI unbiased?
If Wikipedia reflects the biases of its human editors and their sources, AI faces the same problem with the biases in its training data.
Large language models (LLMs)[13] such as xAI’s Grok are trained on enormous datasets collected from the internet, including social media, books, news articles and Wikipedia itself[14]. Studies have shown that LLMs reproduce existing gender, political and racial biases[15] found in their training data.
Musk has claimed that Grok is designed to counter such distortions, but Grok itself has been accused of bias. One study[16], in which each of four leading LLMs was asked 2,500 questions about politics, showed that Grok is more politically neutral than its rivals but still has a left-of-centre bias (the others lean further left).