It is a feature and a good one (with a caveat though).
terraform is best used on a per-solution basis: one solution, one dedicated terraform project that will manage its state and its own state only. Multiple projects and people carry out work in the same cloud account in parallel, and their terraform projects are not meant to interfere with each other. It works best for solutions that make use of fully managed cloud services.
Then there are also platform or connectivity level cloud resources (e.g. Transit Gateway and subnets that are mapped into the internal organisational network address space in AWS) that a random terraform project ought not to manage.
Lastly, if there is an actual need, a resource that terraform does not know about can be manually imported into the terraform project's state. This works best when infrastructure-level resources were created manually a while ago and now have to be refactored into, and managed by, a terraform project. It is a tedious process that has to proceed with a lot of caution.
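Since Terraform 1.5 the import can also be declared in configuration rather than run as a one-off CLI command. A minimal sketch, assuming a hypothetical pre-existing S3 bucket (the bucket name and resource address are made up):

```hcl
# Declarative import (Terraform >= 1.5): on the next apply, adopt a
# manually created S3 bucket into this project's state.
import {
  to = aws_s3_bucket.legacy
  id = "legacy-reports-bucket" # hypothetical, manually created bucket
}

# The resource block must match the real bucket's configuration,
# otherwise the first plan after import will show a diff to reconcile.
resource "aws_s3_bucket" "legacy" {
  bucket = "legacy-reports-bucket"
}
```

`terraform plan -generate-config-out=generated.tf` can draft the matching resource block for you, which takes some of the tedium out of the process.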
The caveat: it gets somewhat tricky when a non-serverless cloud resource requires an explicit subnet range allocation within an existing, managed CIDR or similar. There is no one-size-fits-all solution, but containing such projects to their own dedicated VPC and setting up VPC peering between the solution-specific VPC and the main account VPC usually works satisfactorily. That is, for example, how Kafka (AWS MSK) can be introduced into an AWS account without affecting the existing CIDR mapping.
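A rough sketch of that dedicated-VPC-plus-peering pattern; all CIDRs and variable names here are illustrative, not prescriptive:

```hcl
# Dedicated VPC for the MSK cluster, kept outside the
# organisation-mapped address space.
resource "aws_vpc" "msk" {
  cidr_block = "10.200.0.0/16" # illustrative, non-overlapping range
}

# Same-account peering back to the main VPC.
resource "aws_vpc_peering_connection" "msk_to_main" {
  vpc_id      = aws_vpc.msk.id
  peer_vpc_id = var.main_vpc_id
  auto_accept = true
}

# Routes are needed on both sides for traffic to flow;
# only the MSK side is shown here.
resource "aws_route" "to_main" {
  route_table_id            = aws_vpc.msk.default_route_table_id
  destination_cidr_block    = var.main_vpc_cidr
  vpc_peering_connection_id = aws_vpc_peering_connection.msk_to_main.id
}
```

The main VPC's route tables and the relevant security groups need the mirror-image changes, which is exactly the kind of platform-level resource a separate project should own.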
Each of your projects could use a distinct terraform workspace; the hypothetical `terraform <plan|apply> --nuke` would already need to look across all workspaces, considering all state. Or, additionally, it could look at multiple remote states.
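Reading another project's state is already a supported pattern via the `terraform_remote_state` data source, so a cross-state tool has something to build on. A sketch with an S3 backend (bucket, key, and output names are made up):

```hcl
# Read another project's outputs from its remote state.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "org-terraform-states"       # hypothetical state bucket
    key    = "network/terraform.tfstate"  # hypothetical state key
    region = "eu-west-1"
  }
}

# Reference resources owned by the network project, e.g.:
# subnet_ids = data.terraform_remote_state.network.outputs.private_subnet_ids
```

This only exposes declared outputs, though; a `--nuke`-style sweep would need to read the full state of every project, not just what each one chooses to export.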
> Then there are also platform or connectivity level cloud resources (e.g. Transit Gateway and subnets that are mapped into the internal organisational network address space in AWS) that a random terraform project ought not to manage.
Not a 'random' one, sure, but personally I'd still want it somewhere. I suppose the hypothetical command might want an optional whitelist of non-tf-managed stuff to ignore though. (But then, you could whitelist it just by writing the terraform and importing it?)
> Lastly, if there is an actual need, a resource that terraform does not know about can be manually imported
The hard/annoying part that I'd like this command for is discovering these resources. i.e. it's not just that terraform does not know about them, it's that I probably don't. Or at least I don't realise they're not captured in terraform.
A very easy one to overlook is security group rules: unless you define them inline in terraform (i.e. ingress/egress blocks on a security group resource), adding additional rules outside of terraform does not cause a diff. So you might test something out by manually poking around, then forget to terraform the rules or remove them, and they're left there forever with terraform blissfully unaware; if you ever happen to notice, it might not be obvious whether they're needed or not.
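The two styles side by side, as a sketch (names and CIDRs are illustrative):

```hcl
# Inline rules: terraform owns the group's complete rule set, so a
# rule added manually in the console shows up as a diff on the next plan.
resource "aws_security_group" "app" {
  name   = "app"
  vpc_id = var.vpc_id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }
}

# Standalone rule resource: terraform only tracks this one rule.
# Extra rules added to the group out of band are invisible to it.
resource "aws_security_group_rule" "app_https" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/8"]
  security_group_id = aws_security_group.app.id
}
```

The AWS provider documentation warns against mixing the two styles on the same group for exactly this reason, but only the inline form gives you the drift detection described above.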
Essentially, it'd be useful for enforcing that terraform is used for everything; maintaining 'IaC' hygiene.