Support separate backend configurations for each workspace #16627
Hi @2rs2ts, Thanks for opening the issue. A workspace is something that exists within a backend, so having a separate backend per workspace doesn't really fit the current model. However, I would like to eventually have a more convenient way to switch between backend configurations. If we get something like that in place, I think it would be easier to switch backend+workspace combinations.
@jbardin I would expect that when I switch workspaces, I am basically getting a completely different environment, much like Python's virtualenv. Or maybe think of it this way: a workspace is like a completely separate clone of your repo. When I switch workspaces I should be able to apply changes without worrying about writing to the same backend.

What you're describing sounds like all it takes is a little mishap and you apply development configuration to a customer-facing environment. It's kind of hard to see the value of workspaces now that you're telling me this.

Without having a ton of internal knowledge, it sounds like all you have to do is invert the order of workspaces and backends. (Probably easier said than done.) Backends should exist within a workspace. When you switch workspaces you shouldn't have to re-init unless you're actually trying to change backend configuration or do other things init does.

Just one way to implement it would be to store all the backend configuration in subfolders, where each subfolder is a separate workspace, so that a fresh clone followed by init would set up every workspace's backend at once.
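For illustration, the layout I'm imagining might look something like this (all names invented):

```
project/
├── workspaces/
│   ├── default/   # backend configuration for the "default" workspace
│   ├── qa/        # backend configuration for "qa"
│   └── prod/      # backend configuration for "prod"
└── main.tf
```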
Hi @2rs2ts, I appreciate the feedback. Workspaces only exist within a backend; to state it another way, until a backend is loaded (including the default "local" backend) there are no workspaces. There are no plans to invert this relationship only in the CLI, because that would conflict with how the backends operate and would break integrations with other software. As for this specific use case, I do want to have a method for easily switching between backend configurations. So in essence, we are looking into accomplishing what you want here; it's just not going to be done with workspaces alone.
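For example, with the S3 backend, every workspace lives inside the one configured bucket; only the state key varies per workspace (a sketch with placeholder names):

```hcl
terraform {
  backend "s3" {
    bucket = "my-state-bucket" # one bucket shared by all workspaces
    key    = "project.tfstate" # state path for the "default" workspace
    region = "us-east-1"
    # non-default workspaces are stored at env:/<workspace>/project.tfstate;
    # the "env:" prefix can be changed with workspace_key_prefix
  }
}
```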
@jbardin Is there an ETA for this method for switching between backend configurations? Perhaps a milestone target?

Also, do you know how other people work around this issue (i.e. the lack of support for concurrent operations)? Seems to me like a pretty common issue to have, so I'm surprised I haven't seen any issues filed with similar complaints. Perhaps there's a workaround I haven't thought of.
@2rs2ts, Sorry, we don't have a set timeline for that enhancement. Most users and organizations use a single backend per config, so there's usually nothing to work around. If the backend does need to be changed, the `init` command handles reconfiguring it.
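For example (placeholder values), switching the same configuration to a different backend looks like this:

```sh
# -reconfigure tells init to disregard the previously saved backend
# settings instead of prompting to migrate state to the new backend
terraform init -reconfigure \
  -backend-config="bucket=other-state-bucket" \
  -backend-config="key=project.tfstate" \
  -backend-config="region=us-east-1"
```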
I meant concurrently working on different workspaces. It's fine if you don't have a timeline. We'll come up with a workaround. Hopefully in the future terraform will support working on different workspaces concurrently. Thanks for all the info!
Just been working around the same issue. For ourselves, we would like to use the same backend type but a different bucket for the remote state; separate buckets for dev, test, and prod, for example. Our workaround has been to use git branches of the same name, and then to have separate clones of our git repo at the relevant branches.
Although it doesn't address the specific use-case described here, we do have a guide on the usage pattern the current implementation was designed for, which may be useful as a starting point for those who don't have existing practices in place or who have different regulatory requirements. We do still plan to address this eventually by some means, but the team's current focus is on the configuration language. We plan to switch focus to other areas later in the year and will hopefully be able to implement something in this area at that point. We need to spend a little time prototyping first since, as noted before, the current architecture is not able to accommodate this requirement. More details to come once we've completed that investigation and have a more specific plan.
Just came across this. I had originally interpreted workspaces as completely separate environments, including the backend, rather than as something within a backend. I struggled to make sense of it, since I use a different AWS account for each environment. So if my single backend is an S3 bucket on account A, the state for a workspace that deploys to account B still ends up in account A's bucket.

My workaround ended up being separate directories, each of which just has the backend and includes a shared directory as a module for everything else. It is a bit redundant on the variable declarations, though.
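Roughly like this, with invented names; each environment directory pins its own backend and delegates everything else to the shared module:

```hcl
# environments/dev/main.tf
terraform {
  backend "s3" {
    bucket = "dev-account-state" # bucket lives in the dev account
    key    = "main.tfstate"
    region = "us-east-1"
  }
}

module "main" {
  source = "../../shared" # all real resources live here
  # every input has to be re-declared and passed through,
  # which is the redundant part
}
```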
This appears to be a terrible design decision, especially after reading dozens of similar comments. You appear to have done the exact opposite of what a workspace is supposed to be: an isolated environment, separate from the others. The fact that dozens of people are telling you the same thing should inspire you to fix this issue in a more understandable way. And if this breaks tools, then just change the tools; they were broken in the first place, so now is the time to fix them. Here is a simple use case which breaks the current idea of workspaces: keeping dev state local to each developer while sharing staging and production state with the whole team.
This is a really common scenario. Other people are just getting around your problems by using workarounds, but it would be nice if the tools operated in a better way which allows easier isolation between workspaces. Maybe a flag on `workspace new` called `--share-backend=yes|no` would easily resolve this problem.
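Hypothetical usage, if such a flag existed:

```sh
# NOT a real terraform flag; this just illustrates the proposal
terraform workspace new --share-backend=no dev
```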
So I was sufficiently motivated to solve this problem. What I want is that if I create a separate workspace, it does not share a backend with the others, so I wrapped the workspace command in a script that swaps backend configuration when switching (the full script is linked a few comments below).

For localstack, I have a provider with custom endpoints; for staging, I have an S3 backend that I use to share state with my team. Problem solved! I could go further, I suppose; if I don't want to always override the workspace command like this, I could introduce an 'isolated-workspace' meta command which, when found in the args, applies the above code and then reinserts a 'workspace' command after it, while 'workspace' on its own keeps the default terraform behaviour. But for me, for this situation, I'm happy.
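The idea, roughly (an illustrative sketch rather than my exact script; the file layout is invented):

```sh
#!/usr/bin/env bash
# Swap in a per-workspace backend/provider file, then re-init.
# Assumes the main configuration declares no backend of its own, and that
# backends/<workspace>.tf exists for each workspace (e.g. backends/localstack.tf
# with custom provider endpoints, backends/staging.tf with an s3 backend).
set -euo pipefail

workspace="$1"

cp "backends/${workspace}.tf" backend.tf

# re-initialize against the newly selected backend without migrating state
terraform init -reconfigure

# select the workspace, creating it on first use
terraform workspace select "$workspace" || terraform workspace new "$workspace"
```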
@christhomas Great to hear that you found a solution. I'm a bit hesitant to adopt it, but I completely agree with you on your first post. I cannot understand how they didn't see this scenario coming: sharing staging and production states while keeping dev states local is one of the most natural usages for environments.

I've been using it for months, and if you are consistent in working with workspaces like this, then it actually works out really nicely. No problems encountered so far.
@christhomas Thanks for sharing your solution. I would like to know how your solution works. Can you share your entire code? I'm a bit lost...
What would you like to know? That is the full solution, apart from the top lines which obtain the credentials from your IAM role. You could replace that with environment variables, or pass them in as script parameters. What part don't you understand?

If anybody wants to ask questions, please ask here: https://gist.github.com/christhomas/ea90cc55502a3f804f0b6a8e59d05e60

Has there been any update or progress on this issue?

It's awful that this issue hasn't gotten any updates :(
Found this in the midst of a Google search. It's fairly easy to use separate backends in a CI/CD scenario by using environment variables (or at least, that's how I think the Terraform extension for Azure DevOps does it). But switching workspaces this way locally is a bit of a pain :(
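Something along these lines in a pipeline step, with invented variable names:

```sh
# the backend is chosen entirely by environment variables injected by CI,
# so each pipeline/environment writes to its own state location
terraform init \
  -backend-config="bucket=${STATE_BUCKET}" \
  -backend-config="key=${STATE_KEY}" \
  -backend-config="region=${AWS_REGION}"
terraform apply -auto-approve
```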
Would really like a solution for being able to deploy locally and remotely using the same terraform files. The folder structure approach is really not great.
I am also interested in this. We have the use case where we deploy to completely separate AWS accounts and would like to store our state in an S3 bucket within each account. I hoped workspaces would enable this, but at the moment it looks like we're going to have to keep changing the backend block manually each time, unless I've misunderstood :-(.
For what it's worth, I am trying to do something similar, albeit simpler. Really all I need is one workspace that deploys into our sandbox account and all the other workspaces to deploy into our production account. I don't mind if the S3 backend is only in one account for all the workspaces. The following appears to work (I haven't had a chance to test thoroughly): the backend stays fixed while the provider switches on the workspace name. Notice the conditional on terraform.workspace in the sketch below.
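A sketch of what I mean (account IDs and role names are invented):

```hcl
# the backend stays in a single account for every workspace
terraform {
  backend "s3" {
    bucket = "shared-state-bucket"
    key    = "main.tfstate"
    region = "us-east-1"
  }
}

# the provider assumes a different role depending on the workspace, so
# "sandbox" deploys to one account and every other workspace to production
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = (
      terraform.workspace == "sandbox"
      ? "arn:aws:iam::111111111111:role/deploy"
      : "arn:aws:iam::222222222222:role/deploy"
    )
  }
}
```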
Are we any closer to having workspaces that are treated as completely separate environments with different backends?
I've run into the same situation as many others above. It's possible to get the desired behaviour using the approach in this gist: https://gist.github.com/ppar/c4353e812f64f082dc7de8df7f1d6fdd. My approach doesn't use Workspaces, but AFAICS it addresses the expectation that people here have of Workspaces.
I solved it by doing something like this, since the workspace is part of the backend. However, I'm not sure if this is considered bad practice. I have two backend.hcl files, one per environment, and a small script that passes the matching file to terraform init; a sketch of the idea follows.
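Something like this (file names and contents are illustrative):

```sh
#!/usr/bin/env bash
# usage: ./deploy.sh dev   or   ./deploy.sh prod
set -euo pipefail
env="$1"

# backend-dev.hcl and backend-prod.hcl are partial backend configs, e.g.:
#   bucket = "dev-state-bucket"
#   key    = "main.tfstate"
#   region = "us-east-1"
terraform init -reconfigure -backend-config="backend-${env}.hcl"
terraform workspace select "$env" || terraform workspace new "$env"
terraform apply
```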
Are there any updates on this issue?
I suspect, then, that what's described here isn't true or isn't possible? https://dev.to/aws-builders/mastering-terraform-how-to-manage-multiple-environments-with-dynamic-s3-backends-1p9
Terraform Version
Terraform v0.10.8
Terraform Configuration Files
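Presumably a partial backend configuration along these lines, given the -backend-config flags in the steps below (reconstructed, not the original file):

```hcl
# partial configuration: bucket, key, and region are supplied at init
# time via -backend-config flags (see Steps to Reproduce)
terraform {
  backend "s3" {}
}
```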
Debug Output
N/A
Crash Output
N/A
Expected Behavior
I should be able to run any terraform command with `TF_WORKSPACE=...` to separate the operations between different states, as originally requested in #14447 and implemented for the new workspaces terminology in #14952. I would expect, therefore, that for this to work terraform would need to configure the backends for each workspace separately, so that multiple states can be manipulated in parallel. Switching workspaces should not cause any messages about the backend being reconfigured.

Actual Behavior
`terraform init` does not care about the workspace; it always tries to reconfigure the current backend when I switch to a new workspace and then initialize that one for a new state.

Steps to Reproduce
Please list the full steps required to reproduce the issue, for example:

1. `TF_WORKSPACE=foo terraform init -backend-config=bucket=mybucket -backend-config=region=us-east-1 -backend-config=key=foo.tfstate`
2. `TF_WORKSPACE=bar terraform init -backend-config=bucket=mybucket -backend-config=region=us-east-1 -backend-config=key=bar.tfstate`
3. Terraform will say the backend configuration has changed and will ask to copy state from S3.
Important Factoids
Our team uses one tfstate per AWS region per environment. So for instance our qa environment has state files for us-west-2, us-east-1, us-east-2, and so on, and our production environment also has its own state files for those same regions. It is extremely common for us to be performing terraform operations against multiple regions per environment at once, and sometimes we even do it for multiple environments at once (similar to the use case in #14447).
References
`terraform init` breaks our use case, but I thought perhaps the `TF_WORKSPACE` env var would be the secret sauce...