A new survey highlights critical gaps in AI risk ownership and oversight among Singaporean organisations, signalling urgent need for stronger governance amid accelerating AI deployment.
Even as Singaporean organisations move rapidly from experimentation to strategic AI use, governance and clear ownership of AI risk lag behind, according to Okta’s AI Security Poll. The live poll, conducted in November at Okta’s Oktane on the Road event in Singapore, found 53% of respondents say AI security risk falls to the CISO or the security function, while 25% reported no single person or function currently owns AI risk in their organisation. The survey also reported limited board engagement: half said their boards are aware of AI-related risks but only 31% reported full board oversight. [1][2][3]
According to the original report, visibility into AI behaviour is weak: only 31% of respondents expressed confidence in their ability to detect if an AI agent is operating outside its intended scope, and 33% do not monitor AI agent activity at all. The poll highlighted technical blind spots that raise the prospect of data leakage and uncontrolled use, naming integrations (36%) and Shadow AI or unapproved tools (33%) as prominent vulnerabilities. Alarmingly, just 8% said their identity systems are fully equipped to secure non-human identities such as AI agents, bots and service accounts. [1][2][3]
Okta’s regional vice-president, Stephanie Barnett, framed the gap as a governance imperative, saying: “Organisations in Singapore are adopting AI at speed, which signals growing maturity in how the technology is being used. We are seeing a shift from early experimentation to responsible, strategic adoption. The next step is ensuring governance and security evolve at the same pace.” The report’s authors urged organisations to treat AI agents as first-class identities within existing security and lifecycle controls. [1]
Industry data shows this concern fits a broader shift: a recent identity-security survey found 85% of organisations now consider Identity and Access Management (IAM) crucial to their cybersecurity posture, up from 79% the prior year, and that managing non-human identities presents distinct challenges such as dynamic lifespans, lack of traceable ownership and reliance on API tokens. The survey highlighted difficulty in controlling access (78%), lifecycle management (69%) and limited visibility (57%) as persistent problems. Such findings underscore why security leaders increasingly view identity as central to AI risk mitigation. [4]
Other regional studies point to compounding pressures. A KnowBe4 study reported that more Singapore IT leaders rank AI among the top five defensive tools against sophisticated threats, while a Proofpoint report warned that generative AI adoption and expanding data volumes are intensifying insider and accidental data-loss risks, with employees, contractors and compromised accounts frequently cited as sources of major data loss. Together, these surveys suggest organisations face both an operational visibility problem and an expanding attack surface as AI is embedded across workflows. [5][7]
The company announcement accompanying Okta’s findings outlined product developments intended to address unmanaged identities and governance gaps, with new Workforce Identity Cloud capabilities designed to improve control, visibility and lifecycle management of service accounts and other non-human identities. The company claims these features will reduce risks from unmanaged identities and social engineering, though experts say adoption and board-level oversight must follow technical fixes for governance to be effective. [6]
“As AI becomes more embedded across workflows, organisations need to treat AI agents like any other identity and apply the same discipline to securing them as they do to human users,” the report concluded, framing identity-first controls and clearer ownership as the next practical steps for organisations seeking to reconcile rapid AI adoption with resilient security and governance. [1]
📌 Reference Map:
- [1] (FutureCIO) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 7
- [2] (Singapore Business Review) – Paragraph 1, Paragraph 2
- [3] (CyberSecurityAsia) – Paragraph 1, Paragraph 2
- [4] (ITPro) – Paragraph 4
- [5] (ComputerWeekly / KnowBe4) – Paragraph 5
- [6] (Okta press release) – Paragraph 6
- [7] (Proofpoint) – Paragraph 5
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score:
8
Notes:
The narrative presents recent findings from Okta’s AI Security Poll conducted in November 2025 at Okta’s Oktane on the Road event in Singapore. The earliest known publication date of similar content is December 11, 2025, indicating freshness. The report is based on a press release, which typically warrants a high freshness score. No discrepancies in figures, dates, or quotes were found. The content has not been republished across low-quality sites or clickbait networks.
Quotes check
Score:
10
Notes:
The direct quote from Stephanie Barnett, Vice President, Asia Pacific & Japan at Okta, is unique to this narrative. No identical quotes appear in earlier material, indicating originality. No variations in wording were found.
Source reliability
Score:
9
Notes:
The narrative originates from Okta, a reputable organisation known for its expertise in identity and security solutions. The press release is hosted on Okta’s official website, enhancing its credibility.
Plausibility check
Score:
9
Notes:
The claims about Singaporean organisations’ struggles with AI risk governance align with broader industry concerns. The statistics provided are consistent with Okta’s known focus on identity and security. The language and tone are consistent with corporate communications. No excessive or off-topic details are present.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative is fresh, original, and originates from a reputable source. The claims are plausible and consistent with Okta’s known focus on identity and security. No significant credibility risks were identified.
