
Last month, South Africa withdrew its Draft National Artificial Intelligence Policy 17 days after publication because the document cited fake, AI-generated research.
The incident tarnished a historic moment: South Africa was set to become the first nation in Africa, and the first outside the West, to adopt a policy establishing a formal AI ethics board. “The most plausible explanation is that AI-generated citations were included without proper verification,” Solly Malatsi, South Africa’s minister of communications and digital technologies, wrote in a statement. “There will be consequence management for those responsible for drafting and quality assurance.”
This is the first time a government has withdrawn a document over AI hallucinations, but it is far from the first time hallucinated text or citations have slipped into official or quasi-official materials, raising questions about accountability and underscoring the need for human verification.
Here are five times AI put governments in a spot over the past two years:
South Africa’s AI policy
At least six of the 67 sources in the bibliography of the Draft National Artificial Intelligence Policy published in April were AI hallucinations, according to a letter from civil rights group Article One.
A News24 article reported that several of the academic journals cited in the policy document were “completely fictitious.”
Trump administration’s “formatting errors”
The Make America Healthy Again report on children’s health, released in May 2025, cited nonexistent studies, muddled author and journal attributions, and drew incorrect conclusions from real studies. The Washington Post found that some references included “oaicite” strings attached to URLs, widely regarded as a marker of ChatGPT use.
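Anyone auditing a reference list for this particular artifact can do so mechanically. Here is a minimal sketch in Python, assuming the report’s references have been exported to a plain-text file (the file name and function are hypothetical, and a match is only a heuristic signal, not proof of fabrication):

```python
import re

# "oaicite" fragments sometimes appear inside URLs produced by ChatGPT's
# citation tooling. Matching them is a heuristic check, not proof that a
# reference is fabricated.
OAICITE_PATTERN = re.compile(r"oaicite", re.IGNORECASE)

def flag_suspect_lines(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs containing an 'oaicite' marker."""
    return [
        (i, line.strip())
        for i, line in enumerate(text.splitlines(), start=1)
        if OAICITE_PATTERN.search(line)
    ]

if __name__ == "__main__":
    # Hypothetical input: the report's reference list saved as references.txt.
    with open("references.txt", encoding="utf-8") as f:
        for lineno, line in flag_suspect_lines(f.read()):
            print(f"line {lineno}: {line}")
```

Any lines the script flags would still need a human to verify the citation against the actual journal, which is exactly the step these incidents suggest was skipped.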
Karoline Leavitt, the White House press secretary, downplayed the errors as “formatting issues” and said a corrected report would be uploaded. A few hours later, it was.
Australia vs. Deloitte
In August 2025, the Australian Financial Review raised the alarm over suspected AI use in a Deloitte report commissioned by Australia’s Department of Employment and Workplace Relations, after academics alleged that it contained fake academic references and fabricated quotes.
In a September 2025 email to Australia’s Department of Finance, Deloitte “confirmed that the use of the generative AI tools had resulted in inaccurate outputs whereby certain citations, in the form of footnotes and sources in the accompanying reference list, contained errors.”
The company republished the corrected report in September and, in November, refunded the government $290,000 of the $440,000 it had charged.
Canada vs. Deloitte
Deloitte’s use of generative AI in a 526-page, $1.2 million healthcare report for the Newfoundland and Labrador government in Canada led to the inclusion of fake citations, The Independent reported last November.
Deloitte rereleased the report after correcting the citations.
Meanwhile, the government updated its Request for Proposals contract to require bidders to disclose “all intended uses of AI and/or machine learning” and to acknowledge the government’s “reserved right […] to assess AI-related risks at any point leading up to, or after, contract award and, at its sole discretion, to deploy any tools deemed necessary for such assessments.”
EU’s cybersecurity agency
ENISA, the European Union’s cybersecurity agency, admitted that two of its threat reports published in 2025 contained AI-hallucinated sources. Of the 492 footnotes in one report, 26 were incorrect, according to researchers from the German public institution Westfälische Hochschule, cited by Der Spiegel.
Researchers are wary not so much of the use of AI itself as of assigning it epistemic authority.
“ENISA let AI touch the one layer it should never touch unguarded: the truth layer,” Chiara Gallese, an AI law and data ethics researcher, wrote on LinkedIn. “That’s how hallucinations turn into institutional publications. And when this happens inside a cybersecurity authority with a €27 million budget, the problem isn’t skill. It’s the process. No mandatory verification step. No provenance checks. No clear rule for AI use. Just speed, convenience, and trust-by-default.”