South Africa's draft national AI policy was published for public comment on 10 April, marking a new phase of artificial intelligence deployment, risk and accountability in the country.

Much of the legal discussion around AI has historically assumed an instrumental model, where AI is treated as a tool to inform human decision-making, and accountability rests with the final decision-maker. Legal risk has focused on bias, accuracy, interpretability and misuse of outputs.

Increasingly, systems are no longer limited to generating output for human consideration; they now make decisions and act independently within the operational environment. This development, described as agentic AI, represents a significant shift in how legal risk and accountability are assessed.

With agentic AI, systems are designed to pursue objectives and determine how and when to act with little or no human involvement, so legal risk arises directly from the actions the system takes. While erroneous AI outputs can be reviewed and corrected before harm occurs, an executed autonomous action can have immediate legal and commercial consequences that cannot easily be reversed or may already have caused harm.

Although it operates at a scale and speed beyond human norms, agentic AI can be analysed as a form of delegated decision-making authority. Organizations routinely delegate authority to employees and automate processes subject to clearly defined boundaries, approvals and oversight.

Agentic AI fits within this legal framework, but autonomous systems act continuously, at high volume and without human judgment at the point of action. This changes how risk manifests in practice, including whether delegating authority to an autonomous system was appropriate, whether the risks were considered, and whether suitable constraints, monitoring and escalation mechanisms were implemented.

No relief in delegation of authority

The Companies Act 71 of 2008 places a further limitation on the concept of delegating authority to AI systems.

Corporate decisions to deploy a system, define its mandate, and determine the scope of its autonomous operations remain board-level responsibilities. Directors retain non-delegable fiduciary duties with respect to those decisions throughout the life of the deployment, and these duties may be breached if the AI systems they approve make effective oversight impossible.

Burden of proof

The Electronic Communications and Transactions Act 25 of 2002 (ECTA) is the statutory framework that governs contracts concluded through automated transactions. However, it is limited to the contractual sphere and must be considered alongside other legal frameworks.

Section 25(c) of ECTA attributes a data message to the originator where it was sent by an information system programmed to operate automatically, as a default arrangement. For agentic AI, this means that messages generated by a system programmed or configured by an organization are attributed to that organization, and the burden of proving system failure rests on the organization.

Misalignment of risk allocation in contracts

Where agentic systems cause a business to breach contractual obligations, or where they conclude contracts on behalf of the deploying organization, liability under contract law arises. However, many technology agreements were drafted on the assumption that systems operate deterministically and under close human supervision.

This may result in misalignment between the autonomy granted to the system and the allocation of risk in warranties, indemnities, audit rights and limitations of liability. In third-party deployments, this misalignment is compounded by standard-form vendor terms that exclude liability for autonomous behavior or provide limited recourse for downstream consequences.

The common law of agency reinforces the contractual risk when considered alongside ECTA. Organizations that deploy agentic AI systems stand in the position of a principal, and where the system acts within its authorized scope, the resulting transactions or data messages that appear authorized are attributed to that organization.

Estoppel (or apparent authority) is particularly important, as courts may regard weak governance or tacit acceptance of AI-generated output as removing an entity's ability to claim that the action was not explicitly authorized – even where no formal decision was ever made to authorize the action.

Delictual liability for damages

Delictual principles apply where harm is caused to a third party through the autonomous conduct of an agentic AI system. In accordance with the common law governing delictual claims, courts will consider foreseeability, causation, the reasonableness of precautions taken and legal policy considerations.

The fact that an autonomous system was supplied or enabled by a third party is unlikely to disrupt the causal chain, where the deployer had control over configuration, permissions, and use.

The common law principle of vicarious liability, when considered alongside section 25 of ECTA, provides the most directly applicable framework for attributing liability to a deploying organization for harm caused by its agentic AI system. The relationship between principal and agent is one of the analogous categories that South African courts have recognized as capable of establishing vicarious liability.

An organization that deploys an agentic system gives it a defined mandate to pursue specified objectives, within configured parameters, on its behalf. Where the system causes harm while executing that mandate, the deploying organization is likely to be held the appropriate bearer of liability.

The deployment of autonomous systems does not lower the applicable standard of care; rather, it may expand the scope of the harms that are considered foreseeable and the safety measures that are considered appropriate.

Data and consumer protection requirements

When agentic systems process personal information or make decisions affecting individuals, additional risks may arise under the Protection of Personal Information Act 4 of 2013 (Popia).

Section 71 prohibits subjecting a data subject to a decision that has legal consequences for them, where that decision is based solely on automated processing of personal information intended to provide a profile of the individual. Exceptions apply where appropriate measures are in place, including the opportunity to make representations and to be provided with information about the logic underlying the automated processing.

In consumer-facing situations, section 61 of the Consumer Protection Act 68 of 2008 imposes strict liability for damages caused by unsafe goods, defects, hazards or inadequate warnings. If the requirements of section 25 of ECTA are met, data messages from agentic systems, including instructions and system-executed actions, will be attributed to the deploying entity.

Is risk mitigation possible?

Contracts concluded by AI, harms caused by AI, and directors' oversight of AI are each governed by different legal frameworks, each demanding its own analysis, risk assessment and mitigation strategies.

Risk mitigation strategies include defining the scope of AI authorization, implementing error-notification mechanisms, and updating technology agreements to clearly define the 'action space' of any AI agent. Risk assessment, human review of resulting decisions, and ongoing, comprehensive monitoring are also required.
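By way of illustration only, the sketch below shows one way a deploying organization might encode such an 'action space' in software: a hypothetical policy layer that blocks actions outside the delegated mandate, escalates high-value actions for human approval, and keeps an audit trail for monitoring. The names, action types and thresholds are assumptions made for this example, not a reference to any particular product, framework or statutory requirement.

```python
# Minimal, hypothetical sketch of an "action space" guard for an agentic system.
# All names and thresholds are illustrative assumptions, not legal requirements.
from dataclasses import dataclass, field


@dataclass
class ActionPolicy:
    # Actions the agent is mandated to take on the organization's behalf
    allowed_actions: set = field(default_factory=lambda: {"quote", "order"})
    # Actions above this value are escalated for human approval
    approval_threshold: float = 10_000.0
    # Record of every decision, for monitoring and later review
    audit_log: list = field(default_factory=list)

    def evaluate(self, action: str, value: float) -> str:
        """Return 'execute', 'escalate' or 'block' for a proposed action."""
        if action not in self.allowed_actions:
            decision = "block"       # outside the delegated mandate
        elif value > self.approval_threshold:
            decision = "escalate"    # within mandate, but needs human sign-off
        else:
            decision = "execute"     # within mandate and configured limits
        self.audit_log.append(f"{action} ({value:.2f}) -> {decision}")
        return decision


# Example: a high-value order is escalated, an unmandated action is blocked.
policy = ActionPolicy()
print(policy.evaluate("order", 25_000))   # escalate
print(policy.evaluate("refund", 100))     # block
```

The point of the sketch is the structure, not the code: the delegated mandate, the approval threshold and the audit trail are the software counterparts of the defined scope, human review and monitoring obligations discussed above.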

Ultimately, businesses should approach agentic AI governance not only as a compliance hurdle, but also as a strategic management of a range of risks that the law already recognizes.
