In a development described by critics as a “bitter irony”, South Africa has become the first nation to have its foundational artificial intelligence policy dismantled by the very technology it sought to regulate. On April 26, 2026, Solly Malatsi, Minister of Communications and Digital Technology, officially withdrew the Draft National Artificial Intelligence (AI) Policy following the discovery that the document was riddled with fictitious, AI-generated citations. The scandal has not only stalled South Africa's digital ambitions but also sparked a broader conversation about the erosion of human oversight in the age of generative AI.

The controversy erupted less than three weeks after the 86-page draft was published in the Government Gazette for public comment. Detailed investigations, led by the civil rights group Article One and by investigative journalists, revealed that the policy's bibliography included at least six academic sources and journal titles that did not exist.

These “AI hallucinations” are a well-documented phenomenon in which large language models (LLMs) such as ChatGPT or Gemini invent credible-sounding but entirely fake information. In this case, the bibliography cited non-existent journals, including a purported South African Journal of Philosophy and AI, and attributed fabricated studies to real scholars who never wrote on those topics. Minister Malatsi acknowledged that the most “plausible explanation” was that officials used AI to help draft the document and failed to verify the output.

Ambitious outline in limbo

Before the scandal was exposed, the draft policy was hailed as one of the most progressive frameworks in the Global South. It proposed several landmark institutions designed to mitigate the risks of automation while promoting innovation:

  • National AI Commission: A centralized body to coordinate AI strategy across government departments.

  • AI Ethics Board: A panel tasked with ensuring that algorithmic systems align with South Africa's constitutional values and human rights.

  • AI Insurance Superfund: Modeled on the Road Accident Fund, a proposed state-backed compensation mechanism for citizens harmed by AI-driven decisions or automated accidents.

  • Socio-Economic Rights: The policy explicitly framed universal high-speed Internet access as a “socio-economic right”, proposing major investments in 6G and low-Earth-orbit satellite connectivity for underserved rural areas.

With the document's withdrawal, these ambitious plans now sit in regulatory limbo, leaving South Africa's budding tech sector without a clear roadmap.

Institutional consequences and suspension

The fallout from the “draft debacle” was intense. By April 30, 2026, the Department of Communications and Digital Technology (DCDT) had announced the precautionary suspension of two high-ranking officials involved in the drafting and quality-assurance process. Director General Noncubela Jordan-Diani said that the “irresponsible use of AI” had compromised the integrity of the state's digital leadership.

The scandal also exposed a secondary “hallucination” crisis in the Department of Home Affairs. Following the DCDT revelations, researchers discovered more than 100 unattributable references in a revised white paper on citizenship and immigration, leading to a further suspension and a government-wide audit of all policy documents produced since the release of ChatGPT in late 2022.

A significant portion of the backlash focused on the government's failure to consult local experts. Leading AI researchers from South African universities have come forward to reveal that the DCDT never approached them to assist in drafting the policy.

There are also allegations of “bureaucratic infighting”, with sources saying that a more robust, expert-led policy developed by the Department of Science, Technology and Innovation (DSTI) was sidelined in favor of the DCDT's flawed, AI-assisted version. Critics argue that the government attempted to “shortcut” the complex process of policymaking, prioritizing speed over the rigorous academic and stakeholder engagement required for such a transformative technology.

Global lessons in AI sovereignty

The South African incident is a stark warning to other countries racing to regulate AI. It highlights growing “sovereignty risks”: governments, in an effort to appear modern, may increasingly rely on tools provided by Western tech giants to write their own laws.

Minister Malatsi's public apology emphasized that vigilant human oversight is not merely a policy suggestion but a prerequisite for governance. The debacle has been compared to the 2025 incident in which Deloitte Australia was forced to refund the Australian government after using AI to produce a report containing fake case studies. For a sovereign state to fall into the same trap while writing a policy designed to prevent such errors, however, is seen as a far more significant institutional failure.

By May 2026, South Africa finds itself at a crossroads. While the country remains an AI leader on the continent, ranking high on the Global AI Readiness Index, the “hallucinations scandal” has seriously damaged its credibility.

The government has promised a revised draft and a more rigorous editorial process, but no new timeline has been given. For now, South Africa's “digital iron curtain” is one not of censorship but of administrative caution. The lesson from Pretoria is clear: in the age of artificial intelligence, a government's most valuable asset is not its algorithms but its human accountability.
