By Mandy Duncan, Country Manager, Cape Networking South Africa

Trying to scale AI on old digital foundations is like playing Jenga on a wobbly table: early wins quickly pile up, but as ambition grows, every inherent weakness becomes magnified.

South African organizations are reaching this point with AI. Adoption is accelerating, experimentation is widespread, and productivity gains are visible, yet many are trying to build ambitious AI capabilities on foundations that were never designed for the scale and complexity of the AI age.

While many factors shape AI success, from data and skills to governance and security, networks, often treated as background infrastructure, are emerging as one of the most important enablers of sustainable AI success.

The momentum is real, but uneven

The economic potential of AI is well established. IDC research shows that organizations are achieving an average return of $3.7 for every $1 invested in generative AI, with leading adopters seeing returns of more than $10.

South Africa is clearly part of this shift. PwC's local modeling suggests that AI could contribute 1.2 percentage points to national GDP over the next decade, even at today's adoption levels.

Yet a gap is emerging between adoption and implementation. A KPMG survey finds that 71% of African CEOs are investing in AI, with respondents also citing integrating AI into core functions as their top challenge. In fact, much of today's adoption is bottom-up and tactical – teams experimenting with tools without a coordinated plan for scale, security, or long-term sustainability.

This approach delivers quick wins, but it also creates risks. Without the right foundation, initial productivity gains may stagnate, technical debt accumulates, and trust in AI diminishes – not because the technology fails, but because the environment cannot support it.

AI exposes vulnerabilities in networks

AI workloads behave very differently from traditional enterprise applications. Training models generate massive east-west traffic in data centers and cloud environments, while inference demands ultra-low latency and consistent performance to deliver real-time predictions and decisions.

At the extreme end of the spectrum, the world's fastest supercomputer – hosted at Lawrence Livermore National Laboratory – can perform more than a quintillion calculations per second. This level of high-performance computing is only possible because networks can move vast amounts of data predictably, securely, and at speed, underscoring how intensive AI workloads are compared to traditional enterprise applications.

Traditional networks, designed for predictable north-south traffic, were not built for this scale or volatility. Today, networks must securely connect infrastructure, applications, users, and data while supporting compute-intensive workloads and increasingly complex hybrid environments. When networks underperform, the consequences are clear: congestion slows models, compute is wasted, downtime increases, and returns on AI investment diminish.

South Africa's constraints raise risks

These challenges are amplified at the local level. Organizations face a persistent skills shortage, infrastructure constraints, and increasing regulatory and compliance requirements. Network transformation is capital-intensive, and few can afford to replace legacy environments wholesale, forcing many to modernize through a phased, practical approach.

As a result, organizations across a variety of sectors are beginning to rethink not only how they upgrade their networks, but how those networks are conceived in the first place. Instead of implementing AI in legacy environments, leaders are moving toward AI-native systems, designed from the ground up with AI as a core component.

In practice, this means embedding intelligence directly into the network management layer. AI-native networks simplify operations, increase productivity, and deliver more reliable performance at scale by continuously analyzing network behavior and predicting issues before they impact users. Teams gain deep visibility into the performance of applications, infrastructure, and third-party services, allowing them to quickly identify the source of problems and resolve incidents in hours instead of days.

The result is exceptional user and operator experiences. For example, many of Cape's hospitality partners are using AI-enabled networks to recognize returning guests as soon as they connect, personalize digital interactions in real-time, and securely support high-density conference venues with multiple vendors moving in and out of the network. At large-scale events such as the Nedbank Golf Challenge, AI-native networking has enabled thousands of attendees to seamlessly connect while receiving real-time, location-aware information on their devices, demonstrating how network design directly shapes experience and operational performance.

The shift also reflects a broader move toward modular network design, where capabilities operate as flexible, cloud-based components rather than as one tightly coupled system. The benefit is a network that can respond dynamically to changing demands while supporting more intelligent, automated operations and reducing reliance on manual, ticket-based processes. Importantly, modularity must be coupled with interoperability and vendor-neutral standards that allow organizations to combine best-of-breed components without being locked into a single supplier.

In a skills-constrained market, this matters. By reducing manual configuration and troubleshooting and enabling phased upgrades in hybrid environments, AI-native networking makes modernization more achievable even in resource-constrained settings, reducing operational costs while reducing pressure on scarce skills.

Combined, AI-native networking and modular design lay the foundation for more goal-driven, agentic AI, allowing organizations to simplify network management and maintain reliable performance even as AI-powered applications place heavier, less predictable demands on the network.

Building networks with and for AI

The way forward is not disruption for its own sake but deliberate, phased modernization. Organizations will need to build AI-ready networks both with AI and for AI.

Built with AI, networks can simplify deployment, automate troubleshooting, strengthen security, and reduce operational complexity. Built for AI, they provide high-speed, low-latency architectures, reliable data movement, and compliance by design, ensuring AI workloads can run efficiently and securely at scale.

Crucially, none of this can come at the expense of compliance and security. As AI attack surfaces expand and regulatory scrutiny intensifies, networking and security must be designed together. When they are, compliance becomes easier to manage rather than harder to enforce.

South Africa's AI moment is already underway. Whether this turns out to be a sustainable gain or a fragile pile of early wins will depend on the strength of the foundation underneath. Without flexible, AI-native networking, AI initiatives stall in the pilot stage, no matter how promising the use case.

For business leaders, the challenge is clear: Treat network modernization as a strategic enabler of scalable AI, not as a technical afterthought, or risk building AI ambitions on a foundation that cannot be sustained.
