Reading view

Wikipedia Bans Use of Generative AI

Wikipedia has banned the use of generative AI to write or rewrite articles, saying it "often violates several of Wikipedia's core content policies." That said, editors may still use it for translation or light refinements as long as a human carefully checks the copy for accuracy. Engadget reports: Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs "can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited." Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies. "My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent," Wikipedia administrator Chaotic Enby wrote. The administrator also called the policy a "pushback against enshittification and the forceful push of AI by so many companies in these last few years."

Read more of this story at Slashdot.

  •  

The Day Wikipedia "Hacked Itself" and Froze the Encyclopedia for Hours

Wikipedia has clearly seen better Thursdays. On March 5, 2026, the encyclopedia project was suddenly locked into read-only mode for several hours. Behind the outage was no fearsome group of hackers, but a Wikimedia team running a security test.

  •  

AI Translations Are Adding 'Hallucinations' To Wikipedia Articles

An anonymous reader quotes a report from 404 Media: Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after discovering that these AI translations added "hallucinations," or errors, to the resulting articles. The new restrictions show how Wikipedia editors continue to fight to keep the flood of generative AI across the internet from diminishing the reliability of the world's largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how such errors are remedied by Wikipedia's open governance model. The issue centers on a program run by the Open Knowledge Association (OKA), a nonprofit that was found to be "mostly relying on cheap labor from contractors in the Global South" to translate English Wikipedia articles into other languages. Some translators began using tools like Google Gemini and ChatGPT to speed up the process, but editors reviewing the work found numerous hallucinations, including factual errors, missing citations, and references to unrelated sources. "Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule," reports 404 Media.

Read more of this story at Slashdot.
