Terraform by HashiCorp > Compute Engine > Resources > google_compute_attached_disk:
Persistent disks can be attached to a compute instance using the
attached_disk section within the compute instance configuration.
However there may be situations where managing the attached disks via
the compute instance config isn't preferable or possible, such as
attaching dynamic numbers of disks using the count variable.
Therefore the straightforward approach of creating a google_compute_instance with inline attached_disk blocks may not be applicable in this case.
In brief, the idea is to create the persistent disks first, then create the VM instances and attach the new disks to them. This test deployment consists of a root configuration file my_instances.tf and a reusable module ./instance/main.tf.
1. The persistent disks (google_compute_disk) can be created independently of the instances. For the sake of simplicity, five literal resource blocks are used in the root file my_instances.tf.
2. The reusable module instance/main.tf is called with the VM attributes and a list of disk names, so that the module:
- creates a VM instance (google_compute_instance);
- uses binding resources (google_compute_attached_disk) to attach the new, empty disks to the freshly created instance.
To process the disk list, the for_each meta-argument is used. Since for_each accepts only a map or a set of strings, the toset function is used to convert the list of disk names to a set.
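For illustration only, here is a minimal standalone sketch (hypothetical disk names, no GCP resources) of what toset does and how for_each keys the resulting resource instances:

locals {
  disk_names = ["data", "commitlog"] # hypothetical names
}

output "disk_set" {
  # toset() converts the list into a set of strings (duplicates removed, order not preserved)
  value = toset(local.disk_names)
}

output "for_each_keys" {
  # for_each over a set of strings exposes each element as both each.key and each.value,
  # so every resource instance is addressed by its disk name
  value = { for d in toset(local.disk_names) : d => d }
}

Because the instances are keyed by disk name rather than by a numeric index, removing one disk from the list later affects only that single attachment.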
Terraform by HashiCorp > Compute Engine > Data Sources > google_compute_instance
Terraform by HashiCorp > Compute Engine > Resources > google_compute_disk
Terraform by HashiCorp > Compute Engine > Resources > google_compute_attached_disk
Terraform by HashiCorp > Configuration language > Resources > lifecycle.ignore_changes
Terraform by HashiCorp > Configuration language > Resources > When to Use for_each Instead of count
Terraform by HashiCorp > Configuration language > Resources > The each Object
Terraform by HashiCorp > Configuration language > Resources > Using Sets
$ cat my_instances.tf
resource "google_compute_disk" "test-d01-data" {
name = "test-d01-data"
type = "pd-ssd"
size = 60
zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d01-data-disk" {
name = "test-d01-data-disk"
type = "pd-standard"
size = 15
zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d01-commitlog-disk" {
name = "test-d01-commitlog-disk"
type = "pd-ssd"
size = 30
zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d02-data" {
name = "test-d02-data"
type = "pd-ssd"
size = 60
zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d02-data-disk" {
name = "test-d02-data-disk"
type = "pd-standard"
size = 15
zone = "europe-west3-c"
}
module "test-d01" {
source = "./instance"
instance_name = "test-d01"
disks = [
google_compute_disk.test-d01-data.name,
google_compute_disk.test-d01-data-disk.name,
google_compute_disk.test-d01-commitlog-disk.name
]
}
module "test-d02" {
source = "./instance"
instance_name = "test-d02"
disks = [
google_compute_disk.test-d02-data.name,
google_compute_disk.test-d02-data-disk.name
]
}
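Note that the module calls pass the disk names as resource attributes (google_compute_disk.<name>.name) rather than as plain strings; those references give Terraform the implicit dependencies it needs to create the disks before the module attaches them.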
$ cat instance/main.tf
variable "instance_name" {}
variable "instance_zone" {
default = "europe-west3-c"
}
variable "instance_type" {
default = "n1-standard-1"
}
variable "instance_subnetwork" {
default = "default"
}
variable "disks" {}
resource "google_compute_instance" "vm_instance" {
name = var.instance_name
zone = var.instance_zone
machine_type = var.instance_type
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
subnetwork = "${var.instance_subnetwork}"
access_config {
# Allocate a one-to-one NAT IP to the instance
}
}
lifecycle {
ignore_changes = [attached_disk]
}
}
resource "google_compute_attached_disk" "vm_attached_disk" {
for_each = toset(var.disks)
disk = each.key
instance = google_compute_instance.vm_instance.self_link
}
$ terraform fmt
$ terraform init
$ terraform plan
$ terraform apply
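After the apply, the attachments end up in the state keyed by disk name. For the first module, terraform state list should show addresses roughly like the following (illustrative, not captured output); the five google_compute_disk resources and the module.test-d02 entries appear alongside them:
$ terraform state list
module.test-d01.google_compute_attached_disk.vm_attached_disk["test-d01-commitlog-disk"]
module.test-d01.google_compute_attached_disk.vm_attached_disk["test-d01-data"]
module.test-d01.google_compute_attached_disk.vm_attached_disk["test-d01-data-disk"]
module.test-d01.google_compute_instance.vm_instance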