Lockdown Lab #2 (CRITICAL): Copilot Studio

Restrict who can create and publish Copilot Studio agents

Shadow IT is bad enough. Add generative AI, and you’ve got a recipe for serious data exfiltration. I’ve seen organizations completely miss this control.

Any licensed user can spin up a Copilot Studio agent that connects to your SharePoint, Dataverse, SQL, or even external APIs. They can publish it to Teams, and suddenly, an innocent-looking HR bot is silently funneling sensitive data out via an HTTP connector. This is a critical governance gap.
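Before locking anything down, it helps to know whether risky agents already exist. A hedged sketch, using the Microsoft.PowerApps.Administration.PowerShell module, that inventories connections across environments and flags HTTP-style connectors (the `-match 'http'` filter is an assumption; verify the exact connector names in your own tenant):

```powershell
# Hedged sketch: enumerate connections in every environment and surface
# ones backed by HTTP-style connectors. Connector naming varies by tenant;
# treat the filter below as a starting point, not an authoritative list.
Add-PowerAppsAccount

Get-AdminPowerAppEnvironment | ForEach-Object {
    Get-AdminPowerAppConnection -EnvironmentName $_.EnvironmentName |
        Where-Object { $_.ConnectorName -match 'http' } |
        Select-Object DisplayName, ConnectorName, CreatedBy
}
```

Anything this surfaces in your default environment deserves a closer look before you flip the tenant-wide switch.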

The fix is straightforward: disable generative AI agent publishing tenant-wide in the Power Platform admin center. Then, configure environment routing so new makers land in sandboxed developer environments, and use environment-level security roles to explicitly control who can create and publish agents. This isn’t optional; it’s foundational.
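Environment routing itself can be enabled from PowerShell rather than the admin center. A minimal sketch, assuming the documented read-modify-write pattern for tenant settings (confirm the `enableDefaultEnvironmentRouting` property path against your own `Get-TenantSettings` output before running):

```powershell
# Hedged sketch: turn on default environment routing so new makers are
# routed into personal developer environments instead of the shared
# default environment. Requires a Power Platform admin account.
Add-PowerAppsAccount

$settings = Get-TenantSettings
$settings.powerPlatform.governance.enableDefaultEnvironmentRouting = $True
Set-TenantSettings -RequestBody $settings
```

The read-modify-write shape matters: `Set-TenantSettings` replaces the whole settings object, so always start from the current `Get-TenantSettings` output.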

To disable publishing, open the Power Platform admin center, select the environment, and go to Settings > Generative AI > Agent publishing. Turn it off, then re-enable it only in environments where authorized makers work.

Don’t let your “helpful” internal bots become your next data breach vector. Lock this down now.

The fix

# Inspect tenant-wide governance settings (requires the
# Microsoft.PowerApps.Administration.PowerShell module)
Add-PowerAppsAccount
Get-TenantSettings |
  Select-Object -ExpandProperty powerPlatform |
  Select-Object -ExpandProperty governance

Reference: https://learn.microsoft.com/en-us/microsoft-copilot-studio/security-and-governance

Mark this as done

Open the interactive hardening checklist and tick this off in your environment.
