You Gave Your AI Agent Cloud Access. Here's What Can Go Wrong at 2am

AI agents are moving from demos into production faster than most teams are ready for. And when they get there, they need credentials: access to APIs, databases, storage buckets, and cloud services. That access has to come from somewhere, and right now most teams are handling it the same way they handled service-account permissions five years ago: broad enough to work, with every intention of scoping it down later.

Later never comes.

This talk is for developers and engineers who build or operate AI-powered applications and want to understand what can go wrong when an agent holds real cloud permissions, and more importantly, what to do about it before something goes wrong rather than after. I'll walk through how adversarial inputs and indirect prompt injection can manipulate an agent into abusing its own credentials, show a live demo of what that looks like in practice on GCP, and then cover the concrete patterns that actually contain the blast radius: minimal-privilege agent identities, behavioral monitoring, and architectural decisions you can make now that make a significant difference later.
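To give a flavor of the blast-radius idea before the talk: one of the simplest containment patterns is a deny-by-default action gate between the agent's tool-calling layer and the cloud SDK. This is a minimal sketch, not a GCP API; the agent names and action strings below are hypothetical, and a real deployment would enforce the same boundary in IAM rather than in application code.

```python
# Hypothetical sketch of a deny-by-default gate for agent tool calls.
# Each agent identity gets only the actions its job requires; anything
# outside its allowlist (including actions coaxed out of the model by
# prompt injection) is refused before it ever reaches the cloud SDK.
ALLOWED_ACTIONS = {
    "support-bot": {"storage.objects.get", "logging.logEntries.create"},
    "etl-agent": {"bigquery.jobs.create", "storage.objects.get"},
}

def is_action_allowed(agent_id: str, action: str) -> bool:
    """Deny by default: unknown agents and unlisted actions are refused."""
    return action in ALLOWED_ACTIONS.get(agent_id, set())

# The support bot can read objects as intended...
assert is_action_allowed("support-bot", "storage.objects.get")
# ...but an injected instruction to delete a bucket is refused,
# no matter what the model was talked into asking for.
assert not is_action_allowed("support-bot", "storage.buckets.delete")
assert not is_action_allowed("unknown-agent", "storage.objects.get")
```

The point is that the permission boundary lives outside the model: even a fully compromised prompt can only request actions the identity already holds.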

No scare tactics. No vendor pitches. Just practical techniques you can take back and apply to whatever you're building.