TL;DR

South Africa's Communications Minister Solly Malatsi withdrew the draft of the country's national AI policy after News24 found that at least six of its 67 academic citations were AI-generated hallucinations: fake articles attributed to real journals. The policy had been approved by Cabinet in March and published for public comment. Malatsi called it an “unacceptable mistake” and promised consequence management for those responsible. The scandal leaves South Africa without an AI governance framework and raises questions about its institutional capacity to regulate the technology.

South Africa's Department of Communications and Digital Technologies spent several months drafting a national artificial intelligence policy. It proposed a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsman, a National AI Safety Institute, and an AI Insurance Superfund. It outlined five pillars of AI governance: skills capacity, responsible governance, ethical and inclusive AI, cultural preservation, and human-centred deployment. It adopted a risk-based approach modelled on the EU AI Act. Cabinet approved the draft on 25 March. The Government Gazette published it for public comment on 10 April.

Then the South African news outlet News24 checked the bibliography and found that at least six of the document's 67 academic citations did not exist. The journals were real. The articles were not. The authors credited with foundational research on AI governance never wrote the papers attributed to them. The editors of the South African Journal of Philosophy, AI & Society and the Journal of Ethics & Social Philosophy independently confirmed to News24 that the cited articles had never been published in their pages. According to Communications Minister Solly Malatsi, the most plausible explanation is that the drafters used a generative AI tool and published the output without verifying a single reference. A government policy designed to regulate artificial intelligence was undermined by the very artificial intelligence it failed to govern.

The withdrawal

Malatsi announced the withdrawal on 27 April. Calling the fictitious citations an “unacceptable omission”, he said that “the integrity and credibility of the draft policy has been compromised.” He said consequence management would follow for those responsible for drafting and quality assurance. “This failure is not merely a technical issue,” the minister said. The chairman of the parliamentary portfolio committee offered a more succinct assessment, suggesting the department “skip using ChatGPT this time” when undertaking the rewrite. The document will be revised before being released again for public comment, but no timeline has been given. The withdrawal leaves South Africa without a formal AI governance framework at a time when governments around the world are grappling with how to regulate AI, and the country's credibility as a serious participant in that conversation has taken a blow that will persist even after the policy is revised.

The scandal is not just that fake citations surfaced in a government document. It is that they surfaced in a government document about artificial intelligence, written by the department responsible for the country's digital technology strategy, at precisely the moment when the world's most consequential AI governance debate was being fought in Brussels, Washington and Beijing. The EU AI Act, the most ambitious regulatory framework for artificial intelligence, is struggling with delayed standards and an implementation timeline that has been pushed back to 2027 for high-risk systems. The United States has no federal AI law and is watching states legislate independently while the White House attempts to thwart their efforts. China has created AI regulations but enforces them selectively. Into this landscape, South Africa offered a policy that could not survive bibliographic scrutiny.

The pattern

South Africa's hallucinated citations are an extreme case of a problem quietly spreading across institutions that use generative AI for research and drafting. A study published in Nature found that 2.6 percent of academic papers published in 2025 included at least one potentially hallucinated citation, up from 0.3 percent in 2024. Applied to the roughly seven million scholarly publications expected in 2025, that rate implies more than 110,000 papers with invalid references. GPTZero, a Canadian detection startup, analyzed more than 4,000 research papers accepted at NeurIPS 2025, one of the world's leading AI conferences, and found more than 100 hallucinated citations across at least 53 papers. In a separate multi-model study, only 26.5 percent of AI-generated bibliographic references were completely correct. The problem is structural: large language models generate citations through probabilistic token prediction rather than information retrieval. They do not look anything up. They predict what a citation should look like based on the patterns in their training data, and when the prediction is confident enough, they produce a reference that reads as authoritative but points to nothing.
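The structural flaw also implies the remedy: because a generated citation carries no link to a real record, every reference has to be resolved against an external source of truth rather than trusted on form. A minimal, purely illustrative sketch of that verification step is below; the `KNOWN_PUBLICATIONS` index is a stand-in dictionary invented for this example, where a real workflow would query a service such as Crossref or the journal's own archive.

```python
# Stand-in index of (journal, title) pairs confirmed to exist.
# Illustrative data only; a real pipeline would query an external
# bibliographic database instead of a hard-coded dict.
KNOWN_PUBLICATIONS = {
    ("AI & Society", "A real, previously verified article"),
}

def check_reference(journal: str, title: str) -> str:
    """Return 'verified' if the reference resolves, else 'unverified'.

    The point of the check: a fluent, well-formatted citation that
    matches no real record is exactly the failure mode hallucinated
    references exhibit. Form alone proves nothing.
    """
    if (journal, title) in KNOWN_PUBLICATIONS:
        return "verified"
    return "unverified"

if __name__ == "__main__":
    refs = [
        ("AI & Society", "A real, previously verified article"),
        ("AI & Society", "Governance Frameworks for Autonomous Systems"),  # fabricated
    ]
    for journal, title in refs:
        print(f"{journal} | {title} -> {check_reference(journal, title)}")
```

The design point is that verification is a lookup, not a plausibility judgment: no amount of polish in the citation string should substitute for a successful match against an independent record.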

The South African case is distinctive not because the technology hallucinates, which is a well-documented and inherent limitation of generative AI, but because the hallucinations were published in an official government policy document that passed through Cabinet approval without anyone verifying the references. The drafting process involved civil servants, subject-matter consultation and ministerial review. Dumisani Sondlo, the department's AI policy chief, had previously described policy development as “an act of acknowledging that we don't know enough.” That admission apparently did not extend to acknowledging that the tool used to help draft the policy was itself unreliable. The six fake citations News24 identified are only the ones that were caught. Whether the remaining citations among the document's 67 references are genuine has not been publicly confirmed. The entire bibliography is now in doubt, and with it the analytical foundation on which the policy proposals were built.

The fallout

The immediate result is that South Africa's AI governance timeline has been reset. The draft policy, which was intended to position the country as a leader in responsible AI adoption on the African continent, will need to be redrafted, reconsidered and reintroduced. The loss of institutional credibility extends beyond the policy itself. If the department responsible for regulating AI cannot verify whether the sources in its own policy document are genuine, the question arises whether it has the capacity to evaluate the AI systems it proposes to regulate. The policy envisioned a multi-regulator model in which AI governance and human oversight would sit within existing supervisory frameworks rather than being centralized under a single authority. That model requires each participating regulator to have sufficient technical understanding to assess AI systems in its domain. The hallucination scandal does not inspire confidence that the coordinating department meets that threshold.

The broader lesson is not that governments should avoid using AI in policy development. It is that AI's failure mode is not dramatic. It does not crash. It does not display error messages. It produces fluent, formatted, confident text that looks exactly like the output of a competent researcher. The fake citations in South Africa's AI policy were not obviously wrong. They were plausible. They cited real journals. They attributed work to real researchers. They followed the formatting conventions of academic referencing. The only way to catch them was to check whether each one actually existed, a task that demands exactly the kind of systematic human verification that AI tempts its users to skip. Growing public distrust of AI is not irrational. It is a response to a technology powerful enough to draft a national policy and unreliable enough to fabricate the evidence on which the policy rests. South Africa's embarrassment is singular, but the underlying failure, using AI without the ability to verify its outputs, is not. It is happening in universities, law firms, newsrooms and government departments around the world. South Africa is simply the first government to publish the receipts. The challenges of implementing AI regulation are real, but they start from a precondition the South African department could not meet: understanding what the technology does before trying to write rules for it.
