Jimmy Wales on Wikipedia’s Trustworthy Process

Summary
– Wikipedia has evolved from a widely distrusted source into a trusted factual foundation of the web, making it a target for those seeking to undermine it.
– Jimmy Wales attributes Wikipedia’s sustained trust to core principles like transparency, neutrality, and a shared purpose of building a collaborative, neutral encyclopedia.
– Wales identifies a global crisis of declining trust in institutions like media and government, fueled by factors like the decline of local journalism and toxic social media dynamics.
– Wikipedia maintains productive discourse on contentious topics through community-driven rules, a commitment to civility, and a consensus model that avoids stating controversial conclusions in its own voice.
– While AI presents both challenges, like server strain from scraping, and potential benefits for editors, Wales is confident Wikipedia’s source-evaluation practices and community trust will withstand issues like AI-generated content.
Approaching its 25th anniversary, Wikipedia has transformed from a subject of skepticism into a cornerstone of reliable information online. This journey to becoming a trusted global resource has also made it a target for those seeking to control narratives, facing pressure from authoritarian regimes and powerful figures aiming to disrupt its volunteer-led model. In response to a broader crisis of trust in institutions, co-founder Jimmy Wales explores these challenges in his new book, The Seven Rules of Trust, which examines how principles like transparency and common purpose have underpinned the encyclopedia’s resilience.
During a video conversation, Wales discussed his book, how Wikipedia manages heated debates, and the ongoing threats to factual institutions.
The interview has been condensed and edited for clarity.
Question: Your book addresses a global crisis in trust. How would you define that crisis?
Jimmy Wales: Surveys like the Edelman Trust Barometer show a steady decline since 2000 in confidence in media, business, and even in one another. This erosion creates real costs in business and fuels political populism. We need to understand what went wrong and how to rebuild a culture where trust can thrive.
What do you believe are the root causes?
Several factors play a role. The decline began well before the most recent surveys, and it is partly linked to the collapse of local journalism’s business model. Clickbait and low-quality outlets have filled that void, and the loss of local reporting means people can no longer check the news against what they see with their own eyes, which undermines trust. More recently, the toxic environment on many social media platforms has made things worse.
Why has Wikipedia largely maintained public trust while other institutions have lost it?
A key reason for writing the book was to analyze why Wikipedia moved from being a joke to a trusted resource, despite its imperfections. Transparency is hugely important. The fact that anyone can see how decisions are made and participate in the process builds confidence. Neutrality is another critical pillar. The commitment to not taking sides on controversial topics resonates. People don’t want an encyclopedia or a newspaper to tell them only one side; they want the full picture to form their own understanding.
Some argue trust isn’t declining but transferring from institutions to individuals, especially on social media where personalities gain followers by attacking established authorities. How does an institution like Wikipedia continue to earn trust in that climate?
There’s some truth to that shift. But it’s also incomplete. For instance, many who support certain political figures might admit they don’t fully trust them; they’ve simply lost faith in the idea of honesty in politics altogether, which I find very problematic. Similarly, some who undermine trust in science may see it as a path to personal success, a cynical form of grifting.
I spoke with Harvard Business School professor Frances Frei for the book. She emphasized that lost trust can be rebuilt and that there are definable actions organizations can take. When institutions are attacked, they should reflect on what made them vulnerable in the first place.
How much of the trust decline stems from actual mistakes by institutions versus deliberate campaigns to undermine rival sources of facts?
It’s absolutely both. Media can have blind spots. I lived in London during the Brexit debate, when nearly all major media outlets and political parties opposed leaving. Support was often portrayed as rooted in racism, a framing that ignored the practical concerns many voters had. When media isn’t representative and doesn’t listen to broader societal problems, it creates an opening. Others then exploit that opening, campaigning to build their own trust by pointing fingers at established institutions.
Wikipedia’s talk pages host fierce debates, yet they often remain productive and lead to compromise, a rarity online. What makes this possible?
We have a shared, clear purpose: to build a high-quality, neutral encyclopedia. The community also values civility. We’re human, so conversations can be brusque, but personal attacks are discouraged. There’s an expectation that if things get overheated, people apologize. That’s normal in real life but often missing online. Fostering this culture involves deliberate design choices, not algorithms that amplify outrage or create echo chambers. Imagine if a major platform offered an option to see quality content you might disagree with, rather than just what keeps you engaged longest.
You’ve cited the subreddit r/changemyview as another functional online space. Can large, general platforms become healthy, or do they need to be purpose-built with constraints?
It’s certainly harder for them. Platforms like Facebook or Reddit contain both well-run communities and horrible spaces. I remember Usenet from before the web, a giant, largely unmoderated message board that was famously toxic. We don’t need algorithms to be horrible to each other; humans can do that on their own. But we can also choose to be great to each other. As consumers, we should seek out and support online spaces that are good for us.
You recently intervened in a content debate about the Israel-Gaza conflict, which is unusual for you. Why did you weigh in on that specific topic?
It felt crucial to reinforce that Wikipedia must remain neutral and avoid stating controversial conclusions in its own editorial voice. Taking sides that way isn’t healthy for the project or for public discourse. Normally, we rely on community consensus, which is a constructively ambiguous concept; it’s not a simple majority vote. For something minor, like choosing the main photo for the Eiffel Tower page, a 60-40 split might be fine. But for a topic with enormous implications for Wikipedia’s reputation for neutrality, a significant number of experienced editors disagreeing means consensus hasn’t been reached. We must hold ourselves to a very high standard and continually reexamine where we draw these lines.
Some editors involved felt there was a consensus and that your approach gave disproportionate weight to minority views. How do you respond?
I believe they are mistaken. We must always dig deeper. It’s perfectly fine to state a fact like, “The consensus of academic genocide researchers is that this was genocide.” Reporting that fact is different from Wikipedia declaring it in its own voice. When there’s significant disagreement within the community about using that editorial voice, we should step back and find what we can all agree on; often, that’s reporting the verifiable facts. This achieves two things: it provides what an encyclopedia should during a live debate, and it lets the community reach a win-win, producing something everyone can point to with pride as a fair presentation.
Attacks on Wikipedia’s bias often focus on which sources are deemed reliable. How do you navigate maintaining neutrality when these decisions themselves appear biased to people with different media diets?
We will always grapple with this. Wikipedia does not have absolute rules banning sources; we may deprecate them and ask for better alternatives. I make no apology for stating that not all sources are equal. Given a choice between The New England Journal of Medicine and Breitbart, I’ll choose the former every time. We take concerns about bias seriously, but sometimes we conclude our current approach is sound.
Elon Musk has criticized Wikipedia’s bias and launched Grokipedia, an AI-powered alternative using sources Wikipedia often deprecates. What are your thoughts?
I’ve looked at it a little. The criticism it’s receiving isn’t surprising. I use large language models frequently and know about their hallucination problems. They aren’t yet reliable enough to write an encyclopedia, especially on obscure topics. Regarding trust, I’m not sure people will trust an encyclopedia where, if the owner dislikes something, it presumably just gets changed. If Grokipedia aligns perfectly with Elon Musk’s political views, that’s his prerogative, but it’s not what most people seek from a neutral reference work.
Are you concerned people might prefer an AI-generated encyclopedia that confirms their worldview?
You can’t dismiss the possibility, but research on trust suggests that if people sense a “thumb on the scale,” even if it favors their views, they tend to trust it less. I have great confidence in ordinary people. If asked whether they prefer a source that mirrors all their prejudices or one that is neutral and offers insight into differing perspectives, I believe most would choose the latter. That doesn’t mean they always click on it, but I don’t think we’re destined for permanent, isolated mind bubbles.
How is Wikipedia approaching AI more broadly, given issues like AI-generated content straining your servers?
It presents both threats and potential benefits. The proliferation of low-quality AI content online isn’t a major issue for Wikipedia, because our community has spent 25 years critically evaluating sources; Wikipedians are unlikely to be fooled by AI-generated fluff. However, aggressive crawling by AI companies does strain our servers, which are funded by small donors. Asking those companies to pay for what they use seems a fair request.
On the potential benefit side, I’m intrigued by how the community might use the technology. As a programmer, I’ve experimented with a simple tool that checks if a short Wikipedia entry aligns with its cited sources. Early tests show promise for assisting editors in maintaining accuracy.
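The tool Wales mentions isn’t public, so as an illustration only, here is a minimal Python sketch of that kind of source-alignment check. The Wikipedia REST summary endpoint and the MediaWiki extlinks query are standard APIs, but the article title, the keyword-overlap heuristic, and all function names are assumptions; the real experiment would presumably use a language model to judge whether each claim is actually supported by the cited text.

```python
# Hypothetical sketch of a source-alignment checker like the one Wales describes.
# The overlap heuristic below is a crude stand-in for an LLM-based comparison.
import re
import requests

WIKI_REST = "https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
WIKI_API = "https://en.wikipedia.org/w/api.php"
HEADERS = {"User-Agent": "source-alignment-sketch/0.1 (demo)"}


def fetch_summary(title: str) -> str:
    """Return the plain-text extract of a (short) Wikipedia entry."""
    resp = requests.get(WIKI_REST.format(title=title), headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "")


def fetch_cited_urls(title: str, limit: int = 5) -> list[str]:
    """Return external links from the article, a rough proxy for its cited sources."""
    params = {
        "action": "query",
        "prop": "extlinks",
        "titles": title,
        "ellimit": "max",
        "format": "json",
        "formatversion": "2",
    }
    resp = requests.get(WIKI_API, params=params, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    links = pages[0].get("extlinks", []) if pages else []
    return [link["url"] for link in links][:limit]


def keywords(text: str) -> set[str]:
    """Lowercased words of five or more letters, standing in for 'the claims made'."""
    return set(re.findall(r"[a-z]{5,}", text.lower()))


def alignment_report(title: str) -> None:
    """Flag entry sentences whose key terms never appear in any cited source."""
    entry = fetch_summary(title)
    source_terms: set[str] = set()
    for url in fetch_cited_urls(title):
        try:
            page = requests.get(url, headers=HEADERS, timeout=10)
            source_terms |= keywords(page.text)
        except requests.RequestException:
            continue  # skip sources that fail to load
    for sentence in re.split(r"(?<=[.!?])\s+", entry):
        terms = keywords(sentence)
        if not terms:
            continue
        overlap = len(terms & source_terms) / len(terms)
        status = "supported?" if overlap >= 0.5 else "CHECK"
        print(f"[{status}] ({overlap:.0%} term overlap) {sentence}")


if __name__ == "__main__":
    alignment_report("Tim_Berners-Lee")  # any short article title works here
```

In practice, the keyword-overlap score would be replaced by a prompt asking a model whether each sentence is supported by the fetched source text, which is closer to the editor-assist use Wales describes.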
A final element: Wikipedia editors seem to trust both each other’s good faith and the project’s rules. Where does that culture come from?
It stems from the community-driven, consensus-based nature of everything. The rules weren’t imposed from the top; they emerged as written-down best practices developed by the community over time. In the early days, we’d notice what worked well and codify it as a guide. This builds trust in the rules because they are genuinely a product of our shared values, our processes, and the fundamental purpose of building a free encyclopedia for everyone.
(Source: The Verge)