AI Security

Why Zero Trust Must Extend to Your AI Stack

Most organisations apply Zero Trust to users and devices but leave their AI pipelines wide open. Here's what that gap looks like — and how to close it.

2026-05-01 · 5 min read

Most organisations have made real progress on Zero Trust for users and devices. Identity verification, least-privilege access, conditional policies — these are increasingly standard in mature security programmes across APAC.

But there is a gap that almost every enterprise is missing: their AI stack operates completely outside the Zero Trust model.

The Problem No One Is Talking About

When a developer integrates an LLM API into a product, they typically do three things:

  1. Grab an API key from the provider dashboard
  2. Store it in a .env file (or worse, hardcode it)
  3. Ship to production

That API key now has broad access to a powerful model that can generate, summarise, translate, and reason — with no identity context, no per-request authentication, no audit trail, and no policy enforcement.

This is the antithesis of Zero Trust.

What Zero Trust Actually Means for AI

Zero Trust is not a product you buy. It is a set of principles: never trust, always verify, enforce least privilege, and assume breach.

Applied to an AI stack, this means:

  1. Never trust: no standing, anonymous API keys with blanket access to powerful models
  2. Always verify: every AI request carries an authenticated user or service identity
  3. Least privilege: each request type gets only the model capabilities it needs
  4. Assume breach: every request and response is logged, so you can reconstruct exactly what happened

The Architecture That Closes the Gap

The pattern I recommend for APAC enterprises deploying AI in 2026:

1. Route all AI traffic through a gateway

An AI gateway (Cloudflare AI Gateway, a managed gateway in front of Amazon Bedrock, or a self-hosted proxy) gives you a single control plane for all model traffic. You get rate limiting, caching, and — critically — a log of every request and response.
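A minimal sketch of what this looks like from the application side — the gateway URL and header names here are illustrative, not any specific provider's API. The key point is that the app builds requests against the gateway, and the gateway (not the app) holds the provider API key:

```python
# Sketch: route model calls through an internal AI gateway instead of hitting
# the provider directly. GATEWAY_BASE and header names are illustrative.
GATEWAY_BASE = "https://ai-gateway.internal.example.com/v1"

def gateway_request(path: str, user_id: str, payload: dict) -> dict:
    """Build the request the app layer sends to the gateway.

    The gateway, not the application, holds the provider API key, so a
    leaked app config no longer grants direct model access.
    """
    return {
        "url": f"{GATEWAY_BASE}{path}",
        "headers": {
            # Identity metadata the gateway logs alongside every call
            "X-User-Id": user_id,
            "Content-Type": "application/json",
        },
        "json": payload,
    }

req = gateway_request(
    "/chat/completions",
    "alice@example.com",
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]},
)
```

Because the gateway is just an HTTP endpoint, existing provider SDKs can usually be pointed at it with a base-URL override rather than a code rewrite.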

2. Attach identity to every request

Pass the authenticated user's identity (or service identity) as metadata on every AI call. This flows from your IdP through your app layer to the gateway. If your IdP is Okta or Microsoft Entra ID (formerly Azure AD), this is a solved problem — it just requires a deliberate integration.
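Concretely, this is a mapping from verified IdP token claims to the metadata your gateway expects. The claim and header names below are illustrative assumptions; your IdP's token schema will differ:

```python
# Sketch: map verified JWT claims (already validated by your auth middleware)
# to the identity metadata attached to each AI call. Names are illustrative.
def identity_metadata(claims: dict) -> dict:
    """Turn verified token claims into per-request identity headers."""
    return {
        "X-User-Id": claims["sub"],                        # stable subject ID
        "X-User-Groups": ",".join(claims.get("groups", [])),
        "X-Request-Origin": claims.get("client_id", "unknown"),
    }

meta = identity_metadata(
    {"sub": "u-123", "groups": ["eng", "ai-users"], "client_id": "billing-app"}
)
```

The important design choice is that the mapping consumes claims your middleware has already verified — the AI layer never re-invents authentication, it inherits it.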

3. Apply per-request policy

Different request types should have different permissions. A request to summarise public documentation is lower risk than a request to generate code with access to internal APIs. Policy enforcement at the gateway level makes this manageable.
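A sketch of what a per-request policy check might look like at the gateway. The task names, groups, and rule shape are all illustrative; real deployments would typically externalise this into a policy engine:

```python
# Sketch: default-deny, per-task policy at the gateway. Task names and
# group names are illustrative assumptions.
POLICY = {
    # task type          -> (groups allowed, may use internal-API tools?)
    "summarise_public":     ({"eng", "support", "ai-users"}, False),
    "generate_code":        ({"eng"},                        True),
}

def authorise(task: str, user_groups: set) -> bool:
    """Allow the call only if the user's groups intersect the task's allow-list."""
    rule = POLICY.get(task)
    if rule is None:
        return False  # default-deny: unknown task types are refused
    allowed_groups, _uses_internal_apis = rule
    return bool(allowed_groups & user_groups)
```

Note the default-deny stance: a task type nobody has classified is refused, which mirrors how least privilege is enforced everywhere else in a Zero Trust programme.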

4. Implement output guardrails

Prompt injection, data exfiltration via LLM output, and harmful content generation are real attack vectors. Guardrails at the model response layer — content classification, PII detection, and response filtering — are now table stakes.
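As a toy illustration of response-layer filtering, here is a regex-based PII redactor. Production guardrails use trained classifiers rather than regexes, and these two patterns are illustrative only:

```python
import re

# Sketch: redact common PII patterns from model output before it reaches the
# caller. Real guardrails use trained classifiers; these regexes are
# illustrative only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "sg_nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC-style IDs
}

def redact(text: str) -> str:
    """Replace each matched PII span with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

out = redact("Contact jane@corp.example or quote S1234567D.")
```

Running the example yields `Contact [REDACTED:email] or quote [REDACTED:sg_nric].` — the structure of the response survives, the sensitive spans do not.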

The Compliance Angle

For organisations in Singapore and Hong Kong operating under MAS TRM, PDPO, or GDPR, the audit trail question is not optional. If your AI system processes customer data — even indirectly — you need to demonstrate that access was controlled and logged.
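What a per-call audit record might capture, as a sketch — the field names are illustrative, but the point is that identity, the policy decision, and the data-sensitivity signal land in the same log line:

```python
import datetime
import json

# Sketch: the structured audit record a gateway writes per AI call.
# Field names are illustrative assumptions.
def audit_record(user_id: str, task: str, decision: str, pii_detected: bool) -> str:
    """Serialise one AI call's audit trail entry as a JSON log line."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,           # who made the request
        "task": task,              # what they asked the model to do
        "decision": decision,      # "allowed" or "denied" by policy
        "pii_detected": pii_detected,
    })

entry = json.loads(audit_record("u-123", "summarise_public", "allowed", False))
```

A log shaped like this is what turns "we used an API key" into an answer an auditor can actually work with.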

"We used an API key" is not a satisfactory answer to a regulatory audit.

Getting Started

You do not need to boil the ocean. Start with one AI integration in production, apply these four controls, and use it as the template for everything that follows.

Zero Trust for AI is not a future state. It is a gap your organisation has right now.

The organisations that close it in 2026 will be the ones with a defensible AI security posture when the regulatory scrutiny arrives — and it will.
