Sharon AI - 10K Presentation (May 2026)

SHAZ

Published on 05/10/2026 at 11:57 pm EDT

COMPANY PRESENTATION

Table of Contents

Introduction

The Market Opportunity

Our Platform, Technology & Infrastructure

Commercial Model & Customer Traction

Management & Governance

Introduction

Who is Sharon AI?

Sharon AI is an Australian Neocloud operator, purpose-built to power the next generation of Artificial Intelligence and High Performance Computing

01

What we do

Design and operate AI-native infrastructure optimised for large-scale AI training, inference and high-performance computing - not general-purpose cloud

Deliver GPU-as-a-Service, AI Platform (PaaS) and high-performance storage as a fully integrated AI ecosystem

Solutions are targeted at a broad range of customers - including enterprise, government and hyperscalers

02

Why we succeed

Privileged access to cutting-edge GPUs: One of only three NVIDIA Cloud Partners in Australia, with access to B200, B300 GPUs and anticipated access to GB300 GPUs

AI-optimised performance: Purpose-built networking, storage and orchestration deliver higher utilisation and efficiency than traditional cloud

Hosted in Australia - a globally attractive jurisdiction: Australian-hosted, low-latency, secure infrastructure built to meet enterprise, government and regulated-industry requirements

Capital-efficient scale: Deploys into tier-III / IV data centres (e.g. NEXTDC) rather than building from scratch, enabling rapid expansion and faster time-to-revenue

03

Proof points

Demonstrated ability to deploy and operate AI superclusters at scale for sophisticated global customers

Enterprise-grade traction with landmark contracts including Canva, GMI Computing, and a US$1.26bn, 5-year agreement with ESDS Software Solutions

~70MW of data centre capacity

Company History

Since being incorporated in February 2024, Sharon AI has delivered on key strategic, operational and financing milestones

Key Strategic and Operational Milestones

Dec-24 Certified as an NVIDIA Cloud Partner

Mar-25 Announced planned development of Sharon AI Supercluster, comprising 1,016 GPUs at NEXTDC's M3 Data Centre

Sep-25 Entered into agreement with Cisco for strategic collaboration

Dec-25 Announced 50MW Data Center Capacity Expansion

Key Financing Milestones

Feb-24 Sharon AI incorporated

Dec-25 ~US$103m pre-IPO raise via a Convertible Note offering

Agreement with NEXTDC, bringing total capacity to 54MW

Signed Canva contract

Feb-26 Reached binding agreement with Cisco - announcing secure AI Factory, providing 1,024 B300 GPU capacity

Feb-26 Listed on the NASDAQ, raising ~US$125m

Mar-26 Agreement with WWT for deployment of HPC infrastructure

Signed GMI contract

Apr-26 Signed ESDS Software Solutions contract with an anticipated US$1.26bn TCV over 5 years

Apr-26 Sharon AI confirms US$74m in total proceeds from the sale of its 50% stake in Texas Critical Data Centers

Apr-26 US$350m convertible note offering led by Oaktree Capital Management, L.P.

The Problem That We're Focused on Solving

The Problem

AI adoption is accelerating faster than the infrastructure required to support it

Demand for AI training and inference has outpaced the global supply of high-performance GPUs and power-dense data centres - both of which have increasingly long lead-times

Hyperscale clouds are optimised for general workloads - not deterministic, non-contended AI performance

Increasing requirements for sovereign data residency, security and compliance further constrain options

There is a structural gap between AI ambition and fit-for-purpose AI infrastructure

Our Solution

To be the trusted AI infrastructure platform enabling AI at scale

Deliver dedicated, HPC and GPU compute built to NVIDIA reference architecture

Provide secure, low-latency, sovereign AI infrastructure hosted in world-class APAC data centres

Enable customers to move from experimentation to production with speed, certainty and capital efficiency

Partner deeply across the AI ecosystem to ensure access to next-generation GPUs, power and connectivity

Build essential digital infrastructure that underpins the next generation of AI applications

Sharon AI exists to make large-scale AI deployment

reliable, scalable and economically viable

The Market Opportunity

Key GPUaaS Operators

Neoclouds are complementary operators to hyperscalers, with US Neoclouds solving global capacity shortfalls, while Australian Neoclouds are providing a solution for domestic compute demand

Hyperscalers

US Neoclouds

Australian Neoclouds

1. Global AI infrastructure demand is expanding at a pace that exceeds hyperscalers' ability to deploy incremental capacity in some markets, given constraints on power availability, construction timelines and supply chain complexity

2. US Neoclouds have emerged to address these constraints, offering purpose-built, GPU-dense infrastructure, faster provisioning and flexible commercial models compared with general-purpose hyperscaler GPUaaS, and are increasingly partnering with hyperscalers and AI end-users to deliver GPU capacity

3. Take-or-pay models are gaining traction, with these infrastructure-style commercial contracts securing minimum revenue, mitigating demand volatility, improving capital planning and enabling scaled deployment

4. Australian Neoclouds have emerged to support domestic AU compute demand, enabling regional AI adoption through locally compliant and operationally resilient compute solutions

Neoclouds vs Hyperscalers GPUaaS Service Offerings

Neoclouds are fundamentally optimised for AI workloads, while hyperscalers operate general-purpose cloud platforms to support a broad range of use cases

Hyperscalers | Neoclouds
General-purpose cloud platforms | Purpose-built platforms
Architecture balances compute, storage and networking across diverse workloads | Designed for high GPU density
Performance may be constrained by multi-tenant and heterogeneous workload requirements | Supports higher utilisation and more consistent performance for large-scale training and inference
Standardised configurations optimised for flexibility and scale across customers | Reduced virtualisation overhead, with environments configurable to specific model requirements
Pricing and provisioning typically bundled within broader cloud service offerings | Capacity commitments, provisioning timelines and pricing aligned to sustained GPU usage
Compliance and infrastructure management standardised across regions and services | Enables more directed management of power availability, data sovereignty and compliance
Provides foundational cloud infrastructure and establishes pricing and capacity benchmarks | Operates as a complementary layer addressing capacity, performance and deployment constraints for AI workloads

Evolution of Market Share in the US

The US GPU market has shifted from hyperscale-dominated supply to a bifurcated system where Neoclouds now absorb demand at scale

Estimated market share (as at 2024): AWS 27%, Microsoft 22%, Google 16%, CoreWeave 12%, Lambda 9%, Nebius 9%, Crusoe 2%, WhiteFiber 1%, GPU Cloud Marketplaces 1%, Other 1%

Data Sovereignty & Regulations to Drive Regional Champions

Tightening data sovereignty and privacy rules in Australia and the US are generating a structural advantage for trusted, in-country AI infrastructure providers

US to curb global chip shipments; most markets will face restrictions

US regulations:
- AI Gov. Framework - Guidance for AI deployment
- CLOUD Act - Grants US authorities data access
- HIPAA - Regulates health information
- American Privacy Rights Act (Proposed) - State-level consumer data protection

Australian regulations:
- SOCI Act - Mandates security / resilience reporting
- Privacy Act 1988 and Aus Privacy Principles - Regulates handling of personal data
- CPS 234 - Financial services information security
- My Health Record Act 2012 - National electronic health records

Barriers to Entry: Key Driver of Success for Early Entrants

Access to GPUs, grid-constrained high-density power, tightening data regulation, and capital requirements establish barriers to entry into global Neocloud and GPUaaS markets

Securing timely access to GPUs remains a key challenge

Manufacturing constraints and hyperscaler prioritisation reduce supply

Emerging providers must navigate long lead times and competition from larger incumbents

NVIDIA Cloud Partner Program (NCP) membership serves as a key mitigant to allocation headwinds for Sharon AI

Rapid growth in AI workloads is reshaping data centre infrastructure requirements

High-performance GPU clusters demand a substantial and reliable power supply

Sharon AI's relationship and agreement with NEXTDC and other data centre operators aligns demand for high-density compute with medium and long-term power planning

Countries increasingly mandate that sensitive data - including customer information or AI model datasets - remain within national borders

Providers must establish localised infrastructure, ensure secure data handling, and adapt to evolving regulatory frameworks, which raises operational costs and slows international expansion

Entry into the Neocloud market requires substantial capital investment and highly specialised talent

Capital investment could be required across GPU procurement, data centre construction, networking infrastructure, and maintenance operations

Specialised talent is also essential, e.g. engineers specialising in HPC, distributed AI systems, and data centre operations

Our Platform, Technology & Infrastructure

Our Solution Offering

Sharon AI provides access to accelerated computing infrastructure solutions specifically targeted to AI and HPC applications, delivered "as-a-Service"

Current GPUs which Sharon AI offers to customers:

B200
- Engineered on the NVIDIA Blackwell architecture
- Designed for deployment within enterprise GPU clusters and AI infrastructure
- Engineered for demanding AI training and inference workloads

B300
- Next-generation data centre GPU in the Blackwell Ultra series
- Engineered to accelerate demanding AI and large-scale compute workloads
- Engineered for extreme-scale AI inference and reasoning

GB300
- Built on next-generation NVIDIA Blackwell architecture Super Chip
- Achieves a 50% increase in performance compared to the GB200
- System-level "AI Factory" deployments

GPU-as-a-Service
- Scalable, on-demand access to high-performance GPU cloud compute
- Service engineered to accelerate complex AI workloads across use cases: model training, inference, research computing, visual computing

AI Platform (PaaS)
- Proprietary PaaS combining cloud infrastructure with expert AI, ML and HPC operational support
- Delivers end-to-end AI development and deployment capabilities through a unified interface

Storage
- Provides highly scalable and cost-effective cloud storage designed for large-scale AI and HPC datasets
- Services include: S3 Compatible Cloud Storage, High Performance SSD Storage, Archive and Backup

Target Customers and Illustrative Use Cases

Sharon AI solutions are targeted at a broad range of customers which require sovereign, high-performance AI infrastructure for training, inference and mission-critical workloads at scale

Enterprises

Corporations integrating AI into workflows that require scalable infrastructure for model training and inference

Use cases: ML model training, inference, data analytics, generative AI applications

Hyperscalers

Large global cloud and internet platform companies that procure GPU and data centre capacity in massive contiguous blocks to run and scale their own cloud services and AI workloads

Use cases: Building dedicated AI clusters, inference at scale, model serving

AI Labs

AI research firms dedicated to researching, developing and applying AI that require scalable infrastructure

Use cases: ML model training, inference, data analytics, generative AI applications

Research Institutes and Universities

Academic and scientific organisations conducting complex simulations and data-intensive research that depend on high-performance parallel processing capabilities

Use cases: LLM development, model research, compute-intensive proof of concepts

Governmental Authorities

Public sector bodies seeking sovereign cloud capabilities and secure infrastructure for sensitive computational tasks

Use cases: High performance computing, climate modelling, genomics, scientific simulation

AI Start-ups and Developers

Early-stage companies and individual developers requiring flexible, on-demand access to powerful GPU resources

Use cases: Sovereign AI infrastructure, secure computing, defence applications

GPU Aggregators and Marketplaces

Platforms that aggregate GPU capacity from various providers to serve a broad user base. These customers provide Sharon AI with immediate access to a wide demand pool

Use cases: Immediate access to broad customer pools, spot and short-term market participants

Our Approach is to Partner With the Best in the World

Sharon AI works with global leaders in AI and digital infrastructure to ensure best practices and on-time delivery of Sharon AI's solutions


NVIDIA is the designer and technology provider of the AI processors (GPUs) used by Sharon AI and is a key partner under NVIDIA's NCP program. NVIDIA also provides enablement support for deploying its enterprise-grade infrastructure

NEXTDC serves as the primary co-location data centre provider, hosting Sharon AI's hardware infrastructure

Cisco provides AI-ready networking infrastructure, and will support the go-to-market strategy facilitating engagement with large-scale enterprise and government customers

WWT provides support for end-to-end procurement, assembly, delivery and installation of large-scale compute infrastructure

Lenovo provides access to hardware procurement and lifecycle services as part of the TruScale program

Key Delivery Partners: NVIDIA

NVIDIA is a manufacturer of Sharon AI's GPUs and is a key operational partner for Sharon AI. Sharon AI's certified NCP status provides it access to NVIDIA's latest technologies, technical support and go-to-market collaboration

Certified NVIDIA Cloud Partner (NCP) following successful deployment of NVIDIA reference architectures

Preferential access to NVIDIA supply, with visibility into NVIDIA's product roadmap

Purpose-built on NVIDIA reference architecture - all major GPU clusters, including Sharon AI's Supercluster, are architected to NVIDIA's reference designs

Deep hardware and software integration across deployed NVIDIA GPUs, spanning training and inference

Supply: B200, B300 and GB300 GPUs
Demand: NVIDIA refers qualified customers to partners with available capacity

Key Delivery Partners: NEXTDC


Primary, non-exclusive co-location DC provider, hosting the Company's AI cloud infrastructure across Tier III and Tier IV facilities in Australia

Access to up to ~54MW of high-density capacity across NEXTDC sites, underpinning GPU expansion

~2MW operational capacity at M3 facility

~13MW distributed capacity at S6 and S3 facilities

40MW of contiguous capacity via an expansion agreement

NEXTDC's Australian-based, Digital Transformation Agency-certified facilities support data sovereignty and government-grade workloads

Facilities are purpose-built for advanced GPUs, engineered for NVIDIA reference architectures including liquid-cooled, high-power deployments required for B200, B300 and GB300 GPU clusters

The M3 data centre hosts Sharon AI's Supercluster deployment (1,016 GPUs)

[Facility images: NEXTDC M3, S3 and M2]

NEXTDC provides the sovereign data centre backbone including high power, liquid-cooled capacity, and Tier III and IV resilience that enables Sharon AI to deploy GPU Superclusters at speed and scale

Data Centre Strategy Beyond NEXTDC

While our anchor partnership with NEXTDC continues to support future deployment, Sharon AI is expanding capacity through a mix of third-party colocation and infrastructure partnerships

While NEXTDC is the primary host, Sharon AI has secured an additional partnership with a domestic data centre for ~15MW of data centre capacity

Sharon AI continues to actively pursue additional partnerships with other high-quality colocation providers

- Additional partnerships will provide further power and capacity, ensuring the company is positioned for its next phase of growth with adequate headroom beyond currently contracted MWs

Sharon AI remains focused on establishing partnerships with data centre operators, so that it can scale without the capital intensity and long lead times associated with building large-scale data centres

Sharon AI has ~70MW of data centre capacity


Disclaimer

SharonAI Holdings Inc. published this content on May 11, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on May 11, 2026 at 03:56 UTC.