X user XFreeze (@XFreeze) posted that Grokipedia, the AI-generated encyclopedia built by Elon Musk’s xAI, has rolled out an early beta of “proposed edits” with version 0.2 of the site. Elon Musk amplified the post and confirmed that the feature is now live, describing it as part of a broader push to improve content quality.

The feature lets logged-in visitors suggest corrections to Grokipedia articles rather than editing pages directly. Grokipedia already allowed users to report errors through a form; with proposed edits, xAI is adding a more structured feedback loop on top of articles that are generated and updated by the Grok large language model.

XFreeze has been one of the loudest community promoters of Grokipedia since before launch, describing it as “the world’s biggest, most accurate knowledge source, for humans and AI with no limits on use.” Musk replied to those posts at the time and promised an early beta release.

What Grokipedia Is Trying To Be

Grokipedia is an AI-generated encyclopedia developed by xAI and tightly linked to the Grok chatbot. It launched in late October as version 0.1 with more than 800,000 English-language articles, all written and maintained by Grok rather than by human editors.

Musk has framed Grokipedia as an answer to what he calls “propaganda” and political bias on Wikipedia, saying he wants a knowledge base that is, in his view, more truthful and less constrained by traditional editorial norms. At one point he even offered to rename the project “Encyclopedia Galactica” once the quality reaches a higher bar.

The official site describes Grokipedia as an “open source, comprehensive collection of all knowledge” and a resource for both people and AI systems.

Grokipedia Versus Wikipedia

Structurally, Grokipedia is almost the inverse of Wikipedia. Wikipedia is written and maintained by a large community of volunteer editors who debate changes in public and track revisions over time.

Grokipedia’s content comes from a single AI model that writes and “fact-checks” articles on its own. Users cannot directly edit entries; they can only report issues or suggest changes, which xAI then reviews.

This design choice is central to the tension between the two projects. Supporters argue that a model-driven encyclopedia can update quickly and summarise large bodies of information at speed. Critics argue that without transparent editorial processes, Grokipedia risks embedding the blind spots and biases of its training data and of Musk’s own worldview.

Controversies Around Bias And Sources

Since launch, Grokipedia has drawn strong criticism over content and sourcing. Independent analyses have found that the site sometimes relies on low-credibility outlets and even extremist websites, and that several articles echo right-wing narratives or treat fringe positions as mainstream.

Other reporting has shown that some entries lean heavily in favour of Musk’s personal positions, omitting controversies that appear in his Wikipedia biography while expanding sections that support his preferred framing.

At the same time, a number of Grokipedia pages appear to copy Wikipedia text almost verbatim, which has raised licensing and originality questions even when the project cites Creative Commons terms.

These issues place the new proposed-edits feature in a delicate position. It offers a formal route for users to challenge errors or bias, but the final decision still sits with xAI and its model pipeline rather than an open community process.

Why The New Feature Is Interesting

The proposed-edits system moves Grokipedia a step closer to community involvement while still keeping the AI model at the centre of content creation. In practice, it could serve two purposes.

First, it gives readers a formal way to flag the kinds of factual mistakes and one-sided framing that have already attracted criticism. Second, it may provide xAI with structured data about where Grok’s outputs fail, which could feed back into model training and ranking systems.

The timing is notable. Grokipedia is rolling out this feature just as academic and media scrutiny intensifies around AI-generated reference sites and their role in shaping public knowledge. Research groups and commentators have warned that AI-only encyclopedias risk creating self-referential feedback loops in which models learn from their own uncorrected output.

Conclusion

The new proposed-edits beta on Grokipedia shows xAI edging toward a more interactive model for its AI-written encyclopedia while preserving tight control over the underlying system. The feature arrives in the middle of a wider contest between Grokipedia and Wikipedia over how public knowledge should be produced, corrected and governed.

Whether this approach can win trust will depend on how many user suggestions actually translate into visible changes, how transparent xAI is about its editorial logic, and how Grokipedia handles the bias and sourcing problems already identified by outside reviewers.