By Kim Reeve, Partner and Jared Shorkend, Associate at Weber Wentzel

Imagine a rainy Wednesday morning in Sandton, in the near future. A claims handler opens his laptop, and immediately things start moving much faster than before. For each new email, the company's artificial intelligence (AI) system has already drafted a suggested reply. The inbox is also lighter, as a public chatbot, trained on policy terminology and constantly being improved, now handles most client and broker queries. Need a meeting? An AI assistant automatically schedules it, sets reminders, and even takes minutes.

These days the claims themselves look a little different. One email involves a policyholder who was hit by a driverless taxi. The chatbot has already collected the details, and an AI screener has cleared the claim of fraud, leaving the handler to wait on the vehicle's computer logs. Another seeks approval of medical assistance cover based entirely on AI diagnostics, with a pharmaceutical chatbot standing by to instantly answer any drug-related queries.

Then there's the professional indemnity notification: a reputable financial advisor reports that an unknown party used a deepfake video of his likeness to provide fraudulent investment advice. Followers who acted on the advice lost money, and he is taking the precaution of notifying his insurer while his lawyers assess the consequences.

None of this is science fiction; these processes are active and becoming common in South Africa. But as we all know, progress brings risks. The rapid adoption of AI is producing a growing list of incidents across industries. For example, Stanford University's 2025 AI Index reported that AI-related “incidents” recorded worldwide in 2024 increased by 56.4% compared to the previous year.

It is safe to say that any organization using AI faces potential risks. In March 2026, a California jury found media platforms Meta and YouTube liable for US$3 million in a damages claim related to their algorithms. We've also seen Tesla held liable for a fatal vehicle accident involving its Autopilot system, and Air Canada forced by a tribunal to honor a waiver accidentally granted by its chatbot. In the United Kingdom, an AI facial recognition system misidentified a woman as a shoplifter, leading to a baseless search and emotional trauma. Closer to home, the Financial Sector Conduct Authority (FSCA) has raised concerns about deepfake videos of prominent figures endorsing fraudulent schemes, a trend that has already been linked to the eventual liquidation of at least one financial services provider. Meanwhile, generative AI developers are facing massive intellectual property lawsuits over their training datasets.

Globally, lawmakers are struggling to catch up. The European Union has adopted its own comprehensive AI Act, and Denmark is considering copyright protection for personal likenesses against deepfakes. Here in South Africa, a draft National AI Policy Framework was published in 2024 and is expected to be gazetted soon for a formal 60-day public consultation process, but is likely to be finalized only during the 2026/2027 financial year.

Regulatory changes in financial services

In the absence of clear legislative or policy guidelines, South African regulators and industry bodies are taking steps to set the rules of the game, particularly in the insurance sector. In November 2025, the FSCA and the Prudential Authority (PA) jointly published a landmark, first-of-its-kind report titled “Artificial Intelligence in the South African Financial Sector”.

This joint report provides a clear picture of where the industry stands: while banks lead the way in AI adoption at 52%, the insurance sector has taken a more cautious approach, with adoption at only 8%. However, insurers plan a significant expansion of AI use in underwriting and claims management. To manage this effectively, the FSCA and the PA are urging financial institutions to adopt strong governance frameworks, ensure board-level oversight, and use recognized explainability methods so that AI-driven decisions are transparent and auditable. They also specifically mandate that institutions must clearly disclose whether AI influences consumer-facing decisions, such as credit evaluation or insurance pricing.

Furthermore, AI is fundamentally changing the fraud landscape. Criminals are now using AI to create “artificial identities” – combining stolen real IDs with fake names and AI-generated images to bypass insurers’ onboarding verifications. In response to these sophisticated threats, leading industry bodies such as the Association for Savings and Investments South Africa (ASISA) and the South African Insurance Association (SAIA) are taking a collaborative approach. ASISA and SAIA have jointly established a Computer Security Incident Response Team to monitor emerging cyber threats, report on attack methods and share intelligence across the region.

Outside the direct insurance sector, we are also seeing bodies such as the Independent Regulatory Board for Auditors, the South African Institute of Chartered Accountants and the Association of Arbitrators issuing important guidance on using AI responsibly. These guidelines will likely shape how courts apply the classic Kruger v Coetzee test for negligence, i.e. asking whether a reasonable professional would have anticipated the loss and taken steps to prevent it. Ultimately, any data processing or privacy issues associated with AI systems will also require strict compliance with the Protection of Personal Information Act (POPIA), overseen by the Information Regulator. In this context, the Information Regulator has raised concerns that the number of data breaches occurring in South Africa has increased dramatically, with security compromise incidents up 40% in 2025 compared to the previous year.

The Insurance Response: Silent vs Affirmative Cover

In view of these profound changes in the way businesses operate, the risk landscape has fundamentally shifted, making adequate insurance coverage essential. Right now, most policies cover AI risks through “silent cover”, meaning that AI is not mentioned explicitly, but the risks fall under general policy wordings. In contrast, “affirmative cover” would explicitly target AI risks. As AI claims inevitably increase and coverage disputes develop, we can expect the insurance industry to move increasingly toward clear, affirmative AI policies.

For businesses adopting AI, it is important to prioritize comprehensive AI coverage, as well as ensure strong governance frameworks, POPIA compliance, and monitoring of regulatory developments.
