

Sovereign AI matters for cities because it helps strengthen and extend smart city efforts, enabling cities to achieve outcomes such as less congestion, faster response times, improved safety, and tangible savings in energy and operations. AI can deliver these results in measurable ways when it is deployed on an architecture designed for low latency, high availability, and clear governance, while keeping sensitive data and operational control under trusted local management. This city-level shift reflects a broader national trend: more countries are investing in Sovereign AI to keep critical data, infrastructure, and model governance under local control.

This means real-time inference close to where data is generated, combined with centralized orchestration for updates, monitoring, and reporting. Done well, it strengthens privacy and operational security and helps cities move beyond pilots to reliable, city-scale deployments. ASUS enables this approach with secure, scalable platforms that bring AI from central compute to the edge in real-world city deployments.

Sovereign Cloud vs. Public Cloud

Public cloud refers to shared, on-demand computing services operated by large cloud providers. It has become the default model for scaling many digital workloads because it is fast to deploy and cost-efficient. Sovereign cloud is not a replacement for public cloud; it is typically chosen for workloads that require stricter controls on data residency, access, and auditability.

In practice, cities often use both models side by side, selecting the right environment based on sensitivity, compliance, and operational risk. In Q3 2025, global spending on cloud infrastructure services reached US$102.6 billion, and the market grew by more than 25% over the first nine months of 2025, putting full-year 2025 revenue on track to exceed US$400 billion.

ASUS and Taiwan AI Cloud support governments and enterprises in building Sovereign AI capabilities that remain secure, high-performance, and under local control. A key enabler is our AI Foundry Service (AFS) — a turnkey, full-stack offering that combines HPC infrastructure, optimized software, and cloud-native/on-prem AI tools to speed up deployment and operations.

Compute & Model Lifecycle

To make sovereign AI work in city operations, you need an end-to-end path that supports not only deployment but the full AI lifecycle, from building and training models to validating, deploying, operating, and updating them. That path can be built piece by piece over time: starting with cloud for scaling services and deployment, adding HPC (high-performance computing) as heavy training needs emerge, and introducing edge for real-time decisions close to where data is created.
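The lifecycle stages named above (build, train, validate, deploy, operate, update) can be sketched as a simple state machine. This is an illustrative model only, not a specific ASUS or Taiwan AI Cloud API; the stage names and allowed transitions are assumptions for the sketch:

```python
from enum import Enum

class Stage(Enum):
    BUILD = "build"
    TRAIN = "train"
    VALIDATE = "validate"
    DEPLOY = "deploy"
    OPERATE = "operate"
    UPDATE = "update"

# Allowed transitions: validation failures loop back to training,
# and updates restart the training/validation cycle.
TRANSITIONS = {
    Stage.BUILD: {Stage.TRAIN},
    Stage.TRAIN: {Stage.VALIDATE},
    Stage.VALIDATE: {Stage.DEPLOY, Stage.TRAIN},
    Stage.DEPLOY: {Stage.OPERATE},
    Stage.OPERATE: {Stage.UPDATE},
    Stage.UPDATE: {Stage.TRAIN},
}

def can_advance(current: Stage, target: Stage) -> bool:
    """Check whether a model may move from one lifecycle stage to another."""
    return target in TRANSITIONS.get(current, set())

print(can_advance(Stage.VALIDATE, Stage.DEPLOY))  # True
print(can_advance(Stage.BUILD, Stage.DEPLOY))     # False
```

Making the transitions explicit is what lets a platform enforce governance, for example refusing to deploy a model version that never passed validation.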

Taiwan AI Cloud, an ASUS subsidiary, connects these layers with private 5G so data can move securely to the edge when low latency matters, and scale back to cloud and HPC when more compute is needed. Cities can start with specific use cases and expand step by step over time, while keeping control of data residency and governance, as well as model versions and updates across the lifecycle.

This also means offering a faster alternative to building an AI data center from scratch. With our AFS POD one-stop deployment service, we deliver the core building blocks—GPU compute, storage, high-speed networking, power and cooling, and security—so regulated and data-sensitive teams can run training and inference on dedicated infrastructure without sending data to public clouds.

On top of that, we provide MLOps (Machine Learning Operations) tooling to deploy, monitor, and update models in production, along with ready-to-use capabilities such as inference services, internal knowledge-base chat and retrieval, document processing and extraction, and AI assistants, including intelligent customer service. This helps teams launch projects with minimal coding while keeping the full model lifecycle governed, well-documented, and ready for review.
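Keeping the lifecycle "governed, well-documented, and ready for review" in practice means recording who deployed which model version and when. A minimal in-memory sketch of such an audit trail follows; the class and field names are hypothetical and do not reflect the actual AFS tooling, and a real system would persist these records:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    name: str
    version: str
    deployed_by: str
    deployed_at: str
    notes: str = ""

class ModelRegistry:
    """In-memory audit log of model deployments."""

    def __init__(self):
        self._records: list[ModelRecord] = []

    def register(self, name: str, version: str, deployed_by: str, notes: str = "") -> ModelRecord:
        record = ModelRecord(
            name=name,
            version=version,
            deployed_by=deployed_by,
            deployed_at=datetime.now(timezone.utc).isoformat(),
            notes=notes,
        )
        self._records.append(record)
        return record

    def history(self, name: str) -> list[ModelRecord]:
        """Return the full deployment history for one model, for review."""
        return [r for r in self._records if r.name == name]

registry = ModelRegistry()
registry.register("traffic-cv", "1.0.0", "ops-team", "initial rollout")
registry.register("traffic-cv", "1.1.0", "ops-team", "retrained on new data")
print(len(registry.history("traffic-cv")))  # 2
```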

What to Measure: PUE, Latency, and Outcomes

To assess whether a sovereign AI deployment is working, it helps to track metrics across infrastructure efficiency, real-time performance, and business impact. On the infrastructure side, organizations should monitor energy efficiency with the PUE (Power Usage Effectiveness) metric, which measures how much total facility power a data center consumes compared with the power used by IT equipment such as GPU servers. In GPU-dense environments, where power delivery and cooling drive a large share of operating cost, PUE is important because it shows how efficiently the site runs and whether non-compute energy use, such as cooling and power distribution, is being kept under control.
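In formula terms, PUE is total facility power divided by IT equipment power, so a value of 1.0 would mean every watt reaches the IT load. A minimal sketch of the calculation (the kilowatt figures in the example are invented for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    Real sites are always above 1.0 because cooling and power
    distribution consume energy on top of the IT load.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Example: a GPU-dense site drawing 1,300 kW total with 1,000 kW of IT load
print(round(pue(1300.0, 1000.0), 2))  # 1.3
```

Tracking PUE over time, rather than as a one-off number, is what shows whether cooling and power-distribution overhead are actually being kept under control.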

On the performance side, organizations should measure end-to-end latency and service stability. Latency is the time it takes to move data and return a response, and it is often the deciding factor to keep inference close to where data is generated. Fast storage and high-speed networking are critical contributors to low latency and reliable operation.
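End-to-end latency is usually tracked as percentiles rather than averages, because tail latency is what real-time systems actually feel. A small measurement sketch, assuming only that the service under test can be wrapped in a Python callable (the function names are illustrative):

```python
import statistics
import time

def measure_latency_ms(call, n: int = 100) -> dict:
    """Time n invocations of `call` and report latency percentiles in ms."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        call()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[min(len(samples) - 1, int(len(samples) * 0.99))],
        "max_ms": samples[-1],
    }

# Example: a stand-in workload in place of a real inference call
stats = measure_latency_ms(lambda: sum(range(10_000)), n=50)
print(sorted(stats))  # ['max_ms', 'p50_ms', 'p99_ms']
```

Comparing p99 measured at the edge against p99 measured via a remote data center is a direct way to quantify the benefit of keeping inference close to where data is generated.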

Finally, organizations should track operational outcomes, such as time to deploy and scale the platform, how well GPU and storage resources are utilized, and measurable improvements in day-to-day workflows. This could include higher diagnostic throughput and more consistent results when models are integrated into existing enterprise systems.
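One concrete way to express the utilization outcome above is the share of provisioned GPU time actually spent on workloads. A minimal, hypothetical calculation (the example numbers are invented):

```python
def utilization_pct(busy_gpu_hours: float, provisioned_gpu_hours: float) -> float:
    """Share of provisioned GPU time actually spent on workloads, as a percent."""
    if provisioned_gpu_hours <= 0:
        raise ValueError("provisioned GPU hours must be positive")
    return 100.0 * busy_gpu_hours / provisioned_gpu_hours

# Example: 8 GPUs provisioned for a 24-hour day, 144 busy GPU-hours logged
print(utilization_pct(144.0, 8 * 24))  # 75.0
```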

For ASUS, the objective is to turn these metrics into repeatable delivery: combining ASUS infrastructure with Taiwan AI Cloud’s deployment and operations capabilities to provide a governed AI Factory that can be rolled out in weeks, operated under clear security and compliance controls, and scaled as demand grows. This helps customers move from pilots to production faster — reducing operational risk and total cost of ownership (TCO) while keeping sensitive data and inference workflows under their own control.

Case Studies: Smart City & Healthcare

The following case studies illustrate how sovereign AI translates into real-world impact when low-latency connectivity, governed compute, and a well-run model lifecycle come together in operational environments.

Tainan shows what the full path from compute to deployment looks like in a sovereign AI city setting. Models are trained on TWCC (Taiwan Computing Cloud), Taiwan’s national AI and supercomputing cloud platform, while inference runs in the city government’s own server room. This cloud-to-edge split is the hybrid deployment model: centralized compute for training and local, on-premises inference for low latency and data control. Drones and roadside cameras stream live imagery over 5G for near real-time computer vision analytics.

The platform supports the city’s Intelligent Operations Center across 194 intersections, with image recognition reported at up to 98 percent and overall traffic flow improvement cited at 95 percent. Tainan has also deployed more than 4,000 smart parking pillars, with the city reporting cumulative carbon reductions of 993 metric tons.

In healthcare, Taiwan AI Cloud has supported the Chang Gung Memorial Hospital in setting up an AI inference platform that runs inside the hospital environment. The hospital has deployed multiple AI models and LLM-based assistants for medical imaging workflows – including X-ray, ultrasound, and heart monitoring – integrated with existing imaging systems so clinicians get decision support directly where they work. In addition, the platform is used for broader clinical support, such as osteoporosis risk screening, estimating how well the heart is pumping, and helping emergency teams quickly spot critical signs in medical images, while keeping sensitive patient data under hospital control.

---

Author(s): Peter Wu

This article is republished from: Asus Press, 16.03.2026

