
Terraform Workspaces Are Not Environments (I Spent Two Years Thinking They Were)

Yaroslav Naumenko
Posted on a Saturday because this is the kind of thing that keeps you up at night.


Look, I’m going to save you some pain.

Two years ago I was pretty proud of our Terraform setup. We had a single repo, a single root module, and three workspaces: dev, stg, prd. Clean. Tidy. Made sense in my head. I’d read the Terraform docs, saw “workspaces,” saw “multiple state files,” and thought — yeah, that’s environment separation.

It isn’t.


What Workspaces Actually Are

Here’s the thing nobody puts in bold: workspaces share a state backend. They share a provider config. They share the blast radius.

When you run terraform workspace select prd, you’re not switching to an isolated environment. You’re pointing at a different slice of the same backend. Same GCS bucket (or S3 bucket, or whatever). Same provider credentials in most setups. Same everything, really — just a different key in the state path.
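A minimal sketch of what that means, using a hypothetical S3 backend (bucket name and state key are made up for illustration):

```hcl
# One backend block serves every workspace -- bucket and credentials are shared.
terraform {
  backend "s3" {
    bucket = "acme-terraform-state"     # hypothetical bucket name
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# The "default" workspace writes its state to:
#   s3://acme-terraform-state/network/terraform.tfstate
# Any other workspace (say, "prd") writes to:
#   s3://acme-terraform-state/env:/prd/network/terraform.tfstate
# Different key, same bucket, same credentials, same blast radius.
```

The `env:/` prefix is the S3 backend's default workspace key prefix; other backends use their own path scheme, but the shape is the same: one backend, many keys.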

HashiCorp’s own docs actually say this, in a section most people scroll past: workspaces aren’t designed for managing distinct environments — they’re meant for testing changes to a configuration. Not the same thing.

Most teams miss this. We did.


Where It Breaks

It held together fine until we needed separate IAM boundaries per environment.

Production needed to be locked down — separate service account, separate permissions, audited access. Dev needed to be loose so engineers could iterate fast. Staging needed to mirror prod closely enough to catch auth bugs before they hit real traffic.

With workspaces? You can’t do that cleanly. Your provider block is the same. Your backend config is the same. You’re doing count = terraform.workspace == "prd" ? 1 : 0 gymnastics in your module code. It gets messy fast.
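Here's what those gymnastics look like in practice, as an illustrative sketch (the resource and variable names are hypothetical, not our actual config):

```hcl
# The workspace-conditional pattern that creeps into shared modules:
# every environment difference becomes a ternary keyed off terraform.workspace.
resource "google_project_iam_audit_config" "prod_audit" {
  count   = terraform.workspace == "prd" ? 1 : 0
  project = var.project_id
  service = "allServices"

  audit_log_config {
    log_type = "ADMIN_READ"
  }
}

locals {
  # Sizing, naming, auth -- all of it ends up branching on the workspace name.
  machine_type = terraform.workspace == "prd" ? "n2-standard-8" : "e2-small"
}
```

Every one of these conditionals is a place where a typo in a workspace name quietly gives dev a prod-shaped resource, or vice versa.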

We also had this nagging feeling that a bad terraform apply in the wrong workspace could touch production state. That feeling was correct.


What Real Isolation Actually Looks Like

After a painful few months, we migrated to isolated root modules. Here’s what “real isolation” meant in practice:

  • Separate backend configs per environment — not just different key paths inside the same bucket, but different buckets in different accounts entirely, with IAM that physically can’t reach across
  • Separate provider authentication per environment — different service accounts, different credential files, different assume-role configs
  • Separate root modules — not workspace conditionals buried in shared modules, but distinct entry points per environment
  • Separate pipelines with distinct approval gates — dev auto-applies on merge, staging requires a PR review, prod requires a manual approval step

Zero shared state between environments after the migration. Zero.
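In directory form, the result looked roughly like this (all names here are illustrative, not our real projects or buckets):

```hcl
# Layout:
#
#   environments/
#     dev/   main.tf  backend.tf  providers.tf   <- dev bucket, dev SA
#     stg/   main.tf  backend.tf  providers.tf   <- stg bucket, stg SA
#     prd/   main.tf  backend.tf  providers.tf   <- prod bucket, prod SA
#   modules/                                     <- shared, environment-agnostic
#
# environments/prd/backend.tf -- a bucket that only the prod pipeline can reach:
terraform {
  backend "gcs" {
    bucket = "acme-prd-tfstate"
    prefix = "platform"
  }
}

# environments/prd/providers.tf -- prod-only credentials:
provider "google" {
  project                     = "acme-prd"
  impersonate_service_account = "terraform-prd@acme-prd.iam.gserviceaccount.com"
}
```

The shared modules stay shared; only the root modules, backends, and credentials fork per environment.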


The Migration Was Not Fun

I’m not going to pretend otherwise. What started as three workspaces had grown to fourteen across services and teams by the time we actually fixed it. Each one with its own drift, its own quirks, its own “we definitely didn’t mean to create that resource” surprises.

The pattern that worked: pull each workspace's state down to a local file first (the -state and -state-out flags of terraform state mv operate on local state files, not on a remote backend), move resources between state files with terraform state mv, fix up the root module for the target environment, import what couldn't be moved cleanly, and delete the old workspace only after the new one was confirmed healthy.
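As a sketch, the per-workspace loop looked something like this (paths and resource addresses are illustrative):

```shell
# Pull the remote state for the workspace down to a local file --
# -state/-state-out only work on local files.
terraform workspace select stg
terraform state pull > stg.tfstate

# Move each resource into the state file the new stg root module will use.
terraform state mv \
  -state=stg.tfstate \
  -state-out=../environments/stg/terraform.tfstate \
  google_compute_network.main google_compute_network.main

# From the new root module: push the state into the new backend,
# then verify a plan is clean before touching the old workspace.
cd ../environments/stg
terraform init
terraform state push terraform.tfstate
terraform plan
```

Only when that final plan comes back with no changes is it safe to delete the old workspace.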

One thing that bit us: any terraform_remote_state data sources referencing the old workspace paths broke silently when we changed state locations. Check those first. Write them down before you start.
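For a concrete picture of the failure mode, here's the shape of a consumer that breaks (hypothetical bucket and prefix names). With a GCS backend, a terraform_remote_state data source with a workspace set resolves to a per-workspace object inside the shared bucket, and nothing in the config screams that at you:

```hcl
# Before: reads state from the old shared bucket's workspace path.
data "terraform_remote_state" "network" {
  backend   = "gcs"
  config = {
    bucket = "acme-shared-tfstate"   # old shared bucket
    prefix = "network"
  }
  workspace = "stg"                  # resolves to network/stg.tfstate
}

# After the migration it must point at the environment's own bucket,
# with no workspace in the path at all.
data "terraform_remote_state" "network_new" {
  backend = "gcs"
  config = {
    bucket = "acme-stg-tfstate"
    prefix = "network"
  }
}
```

Grep for terraform_remote_state across every repo before you move any state, and inventory the old paths.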

Do the whole thing in dev first. Do it slowly. Write down what broke.


The Payoff

Honestly? The infra feels boring now, in the best possible way. Commit to dev, auto-apply, it doesn’t touch staging. PR for staging, review, it doesn’t touch prod. Nobody’s nervous on deploy Friday because there’s no shared blast radius.

Boring infrastructure is good infrastructure.


If you’re using workspaces for environment separation today — not judging, I did it too — just know that at some point it will bite you. The earlier you rearchitect, the cheaper that pain is.

Happy Saturday.


Yaroslav Naumenko

Cloud Infrastructure Architect specializing in PCI/HIPAA/FedRAMP compliant solutions at scale. Over a decade building on AWS & GCP.
