Setting Up AI Securely in the Enterprise

Most enterprises are deploying AI tools reactively, without a security framework. Here is the structured approach I use with clients across Singapore and Hong Kong.

2026-03-15 · 6 min read

The conversation I have most often with enterprise clients in Singapore and Hong Kong right now goes like this:

"We know our employees are already using ChatGPT and Claude for work. We need to get ahead of this — but we don't know where to start."

This is the right instinct, and the right time to act. Here is the framework I use.

Start With a Shadow AI Audit

Before you can govern AI usage, you need to understand what is already happening.

Shadow AI — employees using personal AI accounts for work tasks — is the norm across APAC enterprises right now. It is not malicious. It is practical. People found tools that make them faster, and they used them.

What a shadow AI audit looks like:

  1. Review proxy and DNS logs for known AI service domains (openai.com, claude.ai, gemini.google.com, perplexity.ai, and the long tail of vertical AI tools); a minimal log-scan sketch follows this list
  2. Survey department heads — not IT, department heads — about which AI tools their teams are using
  3. Classify usage by data sensitivity: are employees inputting customer data? Confidential documents? Source code?
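
For step 1, here is a minimal log-scan sketch in Python. The CSV layout, the "domain" column name, and the domain list are assumptions; adapt them to whatever your proxy or DNS resolver actually exports.

    # Minimal sketch: count requests to known AI service domains in a
    # proxy/DNS log export. Log format and domain list are assumptions.
    import csv
    from collections import Counter

    AI_DOMAINS = {
        "openai.com", "chatgpt.com", "claude.ai", "anthropic.com",
        "gemini.google.com", "perplexity.ai",
    }

    def audit_log(path: str) -> Counter:
        """Tally hits per AI domain from a CSV log with a 'domain' column."""
        hits = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                domain = row["domain"].lower()
                for ai in AI_DOMAINS:
                    # Match the domain itself or any subdomain of it.
                    if domain == ai or domain.endswith("." + ai):
                        hits[ai] += 1
        return hits

    for domain, count in audit_log("proxy_log.csv").most_common():
        print(f"{domain}: {count} requests")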

The audit typically takes two to three weeks. What you find will shape everything that follows.

Define Your Data Classification First

You cannot make a sensible AI policy without knowing what data you have and how sensitive it is.

Most enterprises already have a data classification framework on paper. In practice, it is often not applied consistently. Before deploying any enterprise AI tool, pressure-test your classification: can employees reliably tell which tier a given document belongs to, and does each tier map to concrete handling rules?

This is not just a compliance exercise. It determines which AI tools your organisation can safely use for which tasks.

Choose a Deployment Model

There are three deployment models for enterprise AI, each with different risk profiles:

1. Managed SaaS (Microsoft 365 Copilot, Google Workspace AI)

The fastest path to deployment. Data stays within your existing Microsoft or Google tenancy. Governance integrates with your existing DLP and compliance tooling.

Risk: dependency on the vendor's AI infrastructure and model choices. You get what they give you.

2. API-Based Integration

Your organisation calls AI APIs (OpenAI, Anthropic, Google Vertex AI) directly, typically via an internal platform or gateway. You control which models are used and what data is sent, and you own the audit trail.

Risk: requires more engineering effort. Governance needs to be built rather than inherited.
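
A hedged sketch of what this pattern looks like from an internal application's side. The gateway URL, header name, and response shape are hypothetical; they depend entirely on how your platform team builds the service.

    # Sketch: an internal app calls the company AI gateway rather than a
    # vendor API directly, so model choice, data controls, and the audit
    # trail live in one place. All names here are hypothetical.
    import requests

    GATEWAY_URL = "https://ai-gateway.internal.example.com/v1/chat"

    def ask(prompt: str, employee_id: str) -> str:
        resp = requests.post(
            GATEWAY_URL,
            json={"model": "approved-default", "prompt": prompt},
            headers={"X-Employee-Id": employee_id},  # attribution for audit
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["completion"]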

3. Self-Hosted / Private Cloud

Models run on your own infrastructure (or a dedicated cloud tenancy). Maximum data control, minimum vendor risk.

Risk: significant operational overhead. Only justified for the most sensitive use cases, or where regulatory requirements demand it.

Most APAC enterprises end up with a hybrid: Managed SaaS for productivity use cases, API-based for custom applications, and self-hosted for a small number of high-sensitivity workloads.
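
In code terms, the hybrid model is a routing decision: classification tier in, permitted deployment models out. A minimal sketch, with hypothetical tier and model names; your own classification framework defines both.

    # Illustrative routing table: which deployment models may handle data
    # of each classification tier. Tier names and rules are hypothetical.
    ALLOWED_MODELS = {
        "public":       {"managed_saas", "api_gateway", "self_hosted"},
        "internal":     {"managed_saas", "api_gateway", "self_hosted"},
        "confidential": {"api_gateway", "self_hosted"},
        "restricted":   {"self_hosted"},
    }

    def permitted(tier: str, deployment: str) -> bool:
        """True if this deployment model may handle data of this tier."""
        return deployment in ALLOWED_MODELS.get(tier, set())

    assert permitted("internal", "managed_saas")
    assert not permitted("restricted", "managed_saas")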

Build the Governance Layer

Governance does not mean blocking AI. It means creating the conditions where AI can be used safely at scale.

Policy: What can be done with AI?

Write a clear, plain-language AI acceptable use policy. Cover: what data may be input into AI tools, which tools are approved, how to handle AI-generated outputs, and what to do if something goes wrong.

Make it short enough that people will actually read it.

Access: Who can use which tools?

Not every employee needs access to every AI capability. Segment access by role and by data sensitivity. A customer-facing team should not have the same AI access as your legal or finance teams.

Integrate AI tool access into your existing IAM system. Provision and deprovision via your IdP. If someone leaves the company, their AI access should be revoked automatically — just like everything else.
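
A sketch of the role-based entitlement lookup, assuming group claims come from your IdP. Group and tool names are invented for illustration; in practice this logic belongs in your IAM or SCIM provisioning flow, not in application code.

    # Sketch: resolve AI tool entitlements from IdP group membership.
    # All group and tool names are hypothetical.
    ROLE_ENTITLEMENTS = {
        "support":  {"copilot_chat"},
        "engineer": {"copilot_chat", "code_assistant", "api_gateway"},
        "legal":    {"copilot_chat", "contract_review"},
    }

    def ai_tools_for(idp_groups: list[str]) -> set[str]:
        """Union of entitlements across a user's IdP groups."""
        tools: set[str] = set()
        for group in idp_groups:
            tools |= ROLE_ENTITLEMENTS.get(group, set())
        return tools

    # A deprovisioned leaver has no groups, so AI access drops to nothing.
    assert ai_tools_for([]) == set()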

Monitoring: What is being done with AI?

Route all enterprise AI usage through a gateway that logs requests. You do not need to read every prompt — but you need to know that a customer support agent sent 500 requests containing customer email addresses to an unmanaged model, and you need to be able to investigate it.

This is where an AI gateway earns its keep.
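
For a flavour of the logging-and-flagging step, a minimal sketch, assuming prompts pass through the gateway in plain text. A single regex is not real DLP (production gateways use proper classifiers), but it shows the shape.

    # Sketch: log every request; flag prompts that look like they contain
    # email addresses. One regex is illustrative, not production DLP.
    import logging
    import re

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai_gateway")

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def inspect(user_id: str, model: str, prompt: str) -> None:
        emails = EMAIL_RE.findall(prompt)
        log.info("request user=%s model=%s chars=%d",
                 user_id, model, len(prompt))
        if emails:
            # Record the fact and the count, not the addresses themselves.
            log.warning("possible PII: user=%s model=%s emails=%d",
                        user_id, model, len(emails))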

The MAS, PDPA, and PDPO Angle

For financial services organisations in Singapore, the Monetary Authority of Singapore's guidelines on AI are clear: you must be able to explain AI-assisted decisions, maintain data governance, and demonstrate that AI systems are subject to appropriate oversight.

For any organisation processing the personal data of Singapore or Hong Kong residents, Singapore's PDPA and Hong Kong's PDPO apply to data input into AI systems, just as they apply to data stored in your databases.

The practical implication: if your AI audit reveals that employees are inputting personal customer data into unmanaged AI tools, you have a live compliance issue — not a future one.

Start Small, Show Value, Expand

The organisations that get AI governance right in 2026 are not the ones that write the most comprehensive policies. They are the ones that pick a starting point — usually one department with a clear use case and manageable data sensitivity — and build a working model that others can follow.

Start with productivity: summarisation, drafting, research. Demonstrate that safe AI deployment is possible. Then expand to higher-value, higher-sensitivity use cases with the governance infrastructure already in place.

The worst outcome is not moving too slowly. It is moving fast without infrastructure, discovering a data incident six months in, and having to dismantle everything to rebuild it properly.

Build the foundation first. The speed comes after.
