
I am trying to create a list of VMs, where each VM needs its own list of disks created and attached to it.

In the example below I have to create the test-d01 compute instance with 3 disks (test-d01-data, test-d01-data-disk, and test-d01-commitlog-disk), and similarly the test-d02 compute instance with 2 disks (test-d02-data and test-d02-data-disk).

For example, the VM_info structure below represents the required configuration:

    VM_info = [
      {
        name = "test-d01"
        zone = "us-east1-b"
        disk = [
          {
            disk_name = "test-d01-data"
            disk_type = "pd-ssd"
            disk_size = "60"
          },
          {
            disk_name = "test-d01-data-disk"
            disk_type = "pd-standard"
            disk_size = "15"
          },
          {
            disk_name = "test-d01-commitlog-disk"
            disk_type = "pd-ssd"
            disk_size = "30"
          }
        ]
      },
      {
        name = "test-d02"
        zone = "us-east1-b"
        disk = [
          {
            disk_name = "test-d02-data"
            disk_type = "pd-ssd"
            disk_size = "60"
          },
          {
            disk_name = "test-d02-data-disk"
            disk_type = "pd-standard"
            disk_size = "15"
          }
        ]
      }
    ]
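
For a shared module to consume this structure, the variable declaration would look roughly like the sketch below (the name VM_info comes from the description above; the object attributes are inferred from the example, and disk_size is kept as a string to match it):

    variable "VM_info" {
      # Each VM carries its own disk list, so the lists may differ in length per VM.
      type = list(object({
        name = string
        zone = string
        disk = list(object({
          disk_name = string
          disk_type = string
          disk_size = string
        }))
      }))
    }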
  • What's your problem? What's your error? How can we help? Commented Jun 26, 2020 at 11:54
  • Basically, I need to create a shared Terraform module that achieves the above requirement. Right now I can create a number of VMs with an equal number of disks attached to each, but now I need to update the module to handle unequal numbers of disks: the first VM with 3 disks, the second VM with 2 disks, and so on. Is this possible in Terraform with a collection or some other construct? I need help with this. Commented Jun 27, 2020 at 8:32

3 Answers


Great idea. When I use this, I get the following:

terraform plan
var.disks
  Enter a value: 2

var.instance_name
  Enter a value: DDVE5


Error: Reference to undeclared resource

  on main.tf line 39, in resource "google_compute_attached_disk" "vm_attached_disk":
  39:   instance = google_compute_instance.vm_instance.self_link

A managed resource "google_compute_instance" "vm_instance" has not been
declared in the root module.

cat main.tf

variable "instance_name" {}
variable "instance_zone" {
  default = "europe-west3-c"
}
variable "instance_type" {
  default = "n1-standard-1"
}
variable "instance_subnetwork" {
  default = "default"
}
variable "disks" {}

provider "google" {
  credentials = file("key.json")
  project     = "ddve50"
  region      = "europe-west3"
  zone        = "europe-west3-a"
}

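# Note: this resource is named "vm-instance" (hyphen), but the
# google_compute_attached_disk resource below refers to
# google_compute_instance.vm_instance (underscore), which is exactly what
# produces the "undeclared resource" error shown above.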
resource "google_compute_instance" "vm-instance" {
  name         = "ddve-gcp-5-7-2-0-20-65"
  machine_type = "f1-micro"
  tags         = ["creator", "juergen"]
  boot_disk {
    initialize_params {
      image = "ddve"
      type  = "pd-ssd"
    }
  }
  network_interface {
    network = "default"
  }
}

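# A second issue: toset(var.disks) requires a list of strings, but the module
# call in my_instances.tf passes whole google_compute_disk objects; pass their
# .name (or .id) values instead, as in the answer below.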
resource "google_compute_attached_disk" "vm_attached_disk" {
  for_each = toset(var.disks)
  disk     = each.key
  instance = google_compute_instance.vm_instance.self_link
}

cat ../my_instances.tf

resource "google_compute_disk" "ddve-gcp-5-7-2-0-20-65-nvram" {
  name = "ddve-gcp-5-7-2-0-20-65-nvram"
  type = "pd-ssd"
  size = 10
}

resource "google_compute_disk" "ddve-gcp-5-7-2-0-20-65-m1" {
  name = "ddve-gcp-5-7-2-0-20-65-m1"
  type = "pd-standard"
  size = 1024
}

module "ddve-test-d01" {
  source        = "./instance"
  instance_name = "ddve-test-d01"
  disks = [
    google_compute_disk.ddve-gcp-5-7-2-0-20-65-nvram,
    google_compute_disk.ddve-gcp-5-7-2-0-20-65-m1
  ]
}



Terraform by HashiCorp > Compute Engine > Resources > google_compute_attached_disk:

Persistent disks can be attached to a compute instance using the attached_disk section within the compute instance configuration. However there may be situations where managing the attached disks via the compute instance config isn't preferable or possible, such as attaching dynamic numbers of disks using the count variable.

Therefore, the straightforward approach of creating a google_compute_instance with inline attached_disk blocks may not be applicable in this case.
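
For contrast, the inline approach the documentation refers to would look roughly like the following sketch (the disk references are assumed for illustration; this is not part of the deployment below):

resource "google_compute_instance" "vm_inline_disks" {
  name         = "test-d01"
  zone         = "europe-west3-c"
  machine_type = "n1-standard-1"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    subnetwork = "default"
  }
  # Each attached disk is a literal block, so the number of disks per VM
  # is fixed at write time and cannot easily vary per module call.
  attached_disk {
    source = google_compute_disk.test-d01-data.self_link
  }
  attached_disk {
    source = google_compute_disk.test-d01-data-disk.self_link
  }
}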

In brief, the idea is to create the persistent disks first, then create the VM instances and attach the new disks to them. This test deployment consists of a root configuration file my_instances.tf and a reusable module ./instance/main.tf.

1. Persistent disks google_compute_disk can be created independently of any instance. For the sake of simplicity, 5 literal resource blocks are used in the root file my_instances.tf (a more compact for_each-based alternative is sketched below).

2. Call the reusable module instance/main.tf, passing the VM attributes and a list of disk names, so that it can:

  • create a VM instance google_compute_instance;
  • use binding objects google_compute_attached_disk to attach the new empty disks to the freshly created VM instance.

To process the disk list, the for_each meta-argument is used.
Since for_each accepts only a map or a set of strings, the toset function is used to convert the list of disk names to a set.
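
As an aside, the 5 literal disk blocks could also be generated from a single resource with for_each over a map; a minimal sketch, assuming a hypothetical disk_specs variable whose keys serve as the disk names:

variable "disk_specs" {
  # Hypothetical map: disk name => type/size, covering the disks of all VMs.
  type = map(object({
    type = string
    size = number
  }))
  default = {
    "test-d01-data"           = { type = "pd-ssd", size = 60 }
    "test-d01-data-disk"      = { type = "pd-standard", size = 15 }
    "test-d01-commitlog-disk" = { type = "pd-ssd", size = 30 }
    "test-d02-data"           = { type = "pd-ssd", size = 60 }
    "test-d02-data-disk"      = { type = "pd-standard", size = 15 }
  }
}

resource "google_compute_disk" "disks" {
  for_each = var.disk_specs
  name     = each.key
  type     = each.value.type
  size     = each.value.size
  zone     = "europe-west3-c"
}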

Terraform by HashiCorp > Compute Engine > Data Sources > google_compute_instance
Terraform by HashiCorp > Compute Engine > Resources > google_compute_disk
Terraform by HashiCorp > Compute Engine > Resources > google_compute_attached_disk
Terraform by HashiCorp > Configuration language > Resources > lifecycle.ignore_changes
Terraform by HashiCorp > Configuration language > Resources > When to Use for_each Instead of count
Terraform by HashiCorp > Configuration language > Resources > The each Object
Terraform by HashiCorp > Configuration language > Resources > Using Sets

$ cat my_instances.tf
resource "google_compute_disk" "test-d01-data" {
  name  = "test-d01-data"
  type  = "pd-ssd"
  size = 60
  zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d01-data-disk" {
  name  = "test-d01-data-disk"
  type  = "pd-standard"
  size = 15
  zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d01-commitlog-disk" {
  name  = "test-d01-commitlog-disk"
  type  = "pd-ssd"
  size = 30
  zone = "europe-west3-c"
}

resource "google_compute_disk" "test-d02-data" {
  name  = "test-d02-data"
  type  = "pd-ssd"
  size = 60
  zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d02-data-disk" {
  name  = "test-d02-data-disk"
  type  = "pd-standard"
  size = 15
  zone = "europe-west3-c"
}

module "test-d01" {
  source           = "./instance"
  instance_name    = "test-d01"
  disks = [
    google_compute_disk.test-d01-data.name,
    google_compute_disk.test-d01-data-disk.name,
    google_compute_disk.test-d01-commitlog-disk.name
  ]
}
module "test-d02" {
  source           = "./instance"
  instance_name    = "test-d02"
  disks = [
    google_compute_disk.test-d02-data.name,
    google_compute_disk.test-d02-data-disk.name
  ]
}
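
Note that the module calls pass the disks' .name attributes, i.e. plain strings; this is what lets toset(var.disks) inside the module build a valid set of strings for for_each.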

$ cat instance/main.tf
variable "instance_name" {}
variable "instance_zone" {
  default = "europe-west3-c"
  }
variable "instance_type" {
  default = "n1-standard-1"
  }
variable "instance_subnetwork" {
  default = "default"
  }
variable "disks" {}

resource "google_compute_instance" "vm_instance" {
  name         = var.instance_name
  zone         = var.instance_zone
  machine_type = var.instance_type
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    subnetwork = var.instance_subnetwork
    access_config {
      # Allocate a one-to-one NAT IP to the instance
    }
  }
  lifecycle {
    ignore_changes = [attached_disk]
  }
}

resource "google_compute_attached_disk" "vm_attached_disk" {
  for_each = toset(var.disks)
  disk = each.key
  instance = google_compute_instance.vm_instance.self_link
}

$ terraform fmt
$ terraform init 
$ terraform plan 
$ terraform apply

1 Comment

Ah, ignore_changes on attached_disk might be what I am looking for. I have my PGDATA volume on some regional attached disks, and I don't want Terraform to delete them or make changes if those disks are resized, etc. I think I can safely ignore them.

You can also attach the disks using count:

resource "google_compute_disk" "default" {
  name = "test-disk"
  type = "pd-ssd"
  zone = var.instance_zone
  # image = "debian-9-stretch-v20200805"
  labels = {
    environment = "dev"
  }
  physical_block_size_bytes = 4096
}

resource "google_compute_attached_disk" "vm_attached_disk" {
  count    = var.disks
  disk     = google_compute_disk.default.id
  instance = google_compute_instance.vm-instance.id
}
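
One caveat with the count-based variant: count-indexed resources are positional, so removing a disk from the middle of the list shifts the indices of all later attachments and forces them to be recreated. This is why the for_each approach in the answer above is generally preferable for this use case.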

