
Recently, in front of a captive Singapore audience, two men had a conversation that could reshape the world’s thinking about AI regulation. One represents the vendor side: Thomas Roehm, SAS’s vice president of corporate marketing, armed with decades of analytics experience and war stories from Fortune 500 implementations. The other embodied the practitioner’s dilemma: Frankie Phua, managing director and head of group risk management at United Overseas Bank, who wrestles with a fundamental question that keeps data leaders awake at night: How do you govern AI that evolves faster than the rules meant to contain it?

Their conversation, held at SAS Innovate On Tour in Singapore, was part of a broader discussion about Singapore’s groundbreaking Project MindForge. It offered a rare glimpse into how data-driven regulation actually gets built, highlighting the messy intersection of business reality and regulatory necessity.

From principles to practice: The data-driven regulatory framework

Project MindForge grew out of the Veritas Initiative, a Monetary Authority of Singapore (MAS) programme that examined the risks and opportunities of AI technology for the financial services sector. But its origins trace back further, to Singapore’s methodical approach to AI governance, which began in 2019 as part of the Singapore National AI Strategy.

“I must say, I’m very proud to be a Singaporean in Singapore working in the Singapore banking industry,” Phua told the Singapore audience. “Why do I say so? Because over the past period, I think started in 2019, MAS has engaged the banking industry to start the AI journey.”

Singapore’s Monetary Authority began with something revolutionary: involving practitioners like Phua from the start. Phua oversees various risk management functions, including credit risk, market risk, operational risk, technology risk, and ESG/climate risk, and brings over 30 years of experience, making him exactly the kind of practitioner regulators need to involve from the beginning.

The journey began with what Phua called the “FEAT principles,” which stand for Fairness, Ethics, Accountability, and Transparency. “Basically, it’s very important we talk about AI governance to have an AI principle first,” Phua explained, “because when we look at any governance, it must be measured against certain principles.”

But principles without operationalization are just philosophy. Project Veritas attempted to translate those four principles into actionable frameworks. “So what are the tasks to do for fairness? What are the tests to do for transparency, for sustainability?” said Phua, sharing the questions that the project looked at. “Of course, it became a more challenging process.”
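To make the operationalization problem concrete, here is a minimal sketch of one way a “test for fairness” might look in practice. The metric (demographic parity gap) and the example data are assumptions for illustration only; the Veritas workstreams define their own methodologies.

```python
# Illustrative sketch only: one possible "test for fairness" on a
# model's decisions. The metric and threshold are assumptions, not
# part of the FEAT framework itself.

def demographic_parity_gap(decisions, groups):
    """Return the spread in approval rates across groups.

    decisions: list of 0/1 model outcomes (1 = approved)
    groups:    list of group labels, one per decision
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical lending model: approves group A at 80%, group B at 40%
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
assert abs(gap - 0.4) < 1e-9  # 0.8 - 0.4
```

A real framework would specify which metrics apply to which use cases and what gap is tolerable; the hard part Phua describes is exactly that translation from principle to threshold.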

Then ChatGPT arrived.

Regulatory agility in the age of GenAI

“In 2022, ChatGPT came about. We first start talking about GenAI,” Phua recounted. Suddenly, the carefully constructed frameworks for traditional AI seemed quaint. The industry launched the MindForge consortium, which aims to examine the risks and opportunities of GenAI in the financial services industry.

This balancing of risk and innovation is where MindForge distinguishes itself from typical regulatory initiatives. Rather than government officials writing rules, Singapore created what Phua calls an “ecosystem approach”: practitioners from banks, insurers, and technology companies collaborating to write the handbook for themselves.

“MAS is leaving it to the financial institution in Singapore to write this handbook for the industry, so that they leave it to us to try to govern AI in a practical way, rather than it comes out with their own guidance and force it on the banks to adopt.”

The approach reflects a fundamental insight about modern regulation: in rapidly evolving technological domains, the regulated often know more about the practical implications than the regulators.

The governance-as-code paradigm

Roehm added crucial context about why this matters beyond Singapore’s borders.

During his presentation, which set the scene for the discussion, Roehm outlined AI decisioning already embedded across various industries: “Today, we help banks predict and prevent fraud as we analyze billions of transactions across the world,” he explained. “We’re working with hotels, helping them manage a variety of data to forecast demand, manage inventories and dynamically price rooms.”

These aren’t future applications; they’re current reality. The regulatory challenge isn’t preparing for AI adoption; it’s governing AI that’s already making consequential decisions. That’s the critical point.

Roehm’s examples illustrated the stakes: “In the public sector, SAS is working together with local and federal agencies to help improve lives... providing solutions that help identify children at risk for social workers, or Smart City solutions that help predict and mitigate the risk of flooding.”

When AI systems are making decisions about child welfare and flood prevention, the luxury of slow, consensus-based regulation evaporates.

The taxonomy challenge: Defining AI for regulatory compliance

One of the most revealing moments in the conversation came when Phua addressed what he considers the biggest challenge in AI governance: “What do you define as AI? In fact, in Project MindForge, we have a lot of debate on what is AI.”

This isn’t academic hair-splitting. “Because you’re trying to govern AI, you need to know what AI is,” Phua emphasized. For traditional models, identification was straightforward: if you built a model, you knew you had AI to govern. But modern AI presents more complex scenarios.

Phua explained how AI embedded in vendor solutions can introduce governance gaps. “Some vendors, halfway through, will introduce some AI — from an AI governance point of view, we must have a process to be able to identify [this] so that we can govern.”

This definitional challenge highlights why Singapore’s collaborative approach is important. Regulators writing rules in isolation might miss these practical complexities that practitioners encounter daily.
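The identification process Phua describes can be pictured as a simple inventory check. This is a hypothetical sketch, not MindForge’s actual taxonomy: the record fields and the re-screening rule are assumptions chosen to illustrate how a vendor update can silently change a system’s AI status.

```python
# Hypothetical AI-inventory check illustrating the identification
# problem: vendor systems are re-screened on every version change,
# because an update may quietly introduce AI features.

from dataclasses import dataclass

@dataclass
class SystemRecord:
    name: str
    vendor_supplied: bool
    declared_ai: bool            # has AI content been declared?
    last_reviewed_version: str
    current_version: str

def needs_ai_review(record: SystemRecord) -> bool:
    """Flag systems whose AI status may have changed since last review."""
    if record.declared_ai:
        return True  # already in scope for AI governance
    if record.vendor_supplied and record.current_version != record.last_reviewed_version:
        return True  # re-screen: the vendor update may have added AI
    return False

# A vendor CRM updated since its last review gets flagged for re-screening
crm = SystemRecord("crm-suite", vendor_supplied=True, declared_ai=False,
                   last_reviewed_version="4.1", current_version="4.2")
assert needs_ai_review(crm)
```

The design choice worth noting is that the trigger is the version change, not any self-declaration: governance cannot rely on a vendor announcing that AI has appeared in the product.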

Data stewardship and cognitive governance

Both speakers addressed a concern that haunts many data leaders: whether AI systems might erode human cognitive capabilities. “Even without AI, a lot of us will not think because we are lazy,” Phua observed, with characteristic directness. “That’s why people fall prey to scams.”

But his experience with GenAI suggests a different dynamic: “Recently I was queuing up for food... So I use ChatGPT. I was, at that point, interested in stablecoin… I asked the first question, and it gave me the answer. I didn’t like it. I asked again. I challenged. I keep on asking. Eventually, I got a very good answer.”

The key insight: “With ChatGPT, with GenAI, if we know how to think critically, if we know how to ask the right question all the time, actually, it becomes a very powerful tool.”

This perspective reframes the governance challenge. Rather than protecting humans from AI, effective governance might focus on enhancing human-AI collaboration.

Federated regulatory architecture: Scaling the Singapore model

MindForge Phase 2 is producing tangible outputs. “This handbook we’re going to publish soon, because I think we have finished drafting it,” Phua revealed. The handbook will provide “very practical guidance” addressing 44 identified AI risks with specific mitigation strategies.

More importantly, Singapore’s approach offers a template for other jurisdictions. Rather than waiting for perfect understanding before acting, they’re building governance infrastructure iteratively, with deep practitioner involvement.

“We are not validating the GenAI model itself,” Phua explained about their approach to generative AI. “We are applying the GenAI model… to use cases that we want to use. So when you talk about AI validation, we are talking about validating the use case.”

This distinction between validating models versus validating applications represents critical thinking about governance in an era of foundational models.
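A minimal sketch of what use-case-level validation might look like follows. Everything here is an assumption for illustration: `call_model` is a stub standing in for whatever GenAI service an institution has approved, and the acceptance checks are invented for a hypothetical “summarize customer emails” use case.

```python
# Sketch of validating the use case, not the model: run the use case's
# own test prompts through the model and score responses against
# acceptance criteria specific to that use case.

def call_model(prompt: str) -> str:
    # Stub standing in for a real GenAI API call.
    return "Summary: customer requests a credit limit increase."

def validate_use_case(test_prompts, checks):
    """Return the pass rate of model responses against the checks."""
    results = []
    for prompt in test_prompts:
        response = call_model(prompt)
        results.append(all(check(response) for check in checks))
    return sum(results) / len(results)

# Invented acceptance criteria for an email-summarization use case:
checks = [
    lambda r: r.startswith("Summary:"),      # required output format
    lambda r: len(r) < 500,                  # length bound
    lambda r: "guarantee" not in r.lower(),  # no prohibited promises
]

pass_rate = validate_use_case(["Please raise my credit limit."], checks)
assert pass_rate == 1.0
```

The point the sketch makes is Phua’s: the criteria belong to the application, so the same foundation model can pass validation for one use case and fail it for another.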

As data leaders worldwide grapple with similar challenges, Singapore’s MindForge project offers more than policy prescriptions.

The conversation between Roehm and Phua suggests that we’re witnessing the emergence of a new regulatory paradigm, one where the pace of technological change demands collaborative and iterative approaches to governance. It demonstrates that effective AI governance emerges from the intersection of regulatory vision and practitioner expertise, where data doesn’t just inform the rules but helps write them.

Whether other jurisdictions can adapt Singapore’s model remains an open question. It will likely determine whether the island nation’s approach can grow into a regional framework that interfaces well with other national agendas.

---

Author(s): Winston Thomas

This article is republished from: CDO Trends, 29.09.2025

