Hello!
In the previous article, we learned how to install Terraform. Now we will learn how to use Terraform to create and delete resources in AWS.
For Terraform to manage resources in AWS, you need to install the Terraform AWS provider. Terraform will do this automatically if I specify the AWS provider in the Terraform code and run terraform init.
I will create a directory named terraform_code, and in this directory I will create a file named provider.tf. The file name can be anything, but I prefer provider.tf because the name makes it easy to tell what the file contains.
mkdir terraform_code
cd terraform_code
touch provider.tf
I will add the following content to provider.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.23.0"
    }
  }
}
The terraform block specifies which provider to use and which version. AWS provider release information can be found on GitHub.
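Pinning an exact version, as above, is the most predictable choice. As a sketch of an alternative (not used in this article), Terraform's pessimistic constraint operator would also accept newer patch releases:

```hcl
# Alternative constraint: accepts any 4.23.x release, but not 4.24.0
version = "~> 4.23.0"
```

An exact pin guarantees reproducible init runs; the pessimistic constraint trades that for automatic bug-fix updates.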
Once the file is saved, you can immediately run terraform init to download the provider. But before that, you need to choose which version of Terraform will be used. To do this, I will create a file named .terraform-version and write 1.2.6 into it; tfenv will then do everything automatically.
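Creating the version pin from the shell looks like this (a minimal sketch; it assumes tfenv is installed and shimming the terraform command, and that you are inside the terraform_code directory):

```shell
# Pin the Terraform version so tfenv selects 1.2.6 automatically
echo "1.2.6" > .terraform-version
cat .terraform-version
```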
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "4.23.0"...
- Installing hashicorp/aws v4.23.0...
- Installed hashicorp/aws v4.23.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
When the command finishes successfully, a file named .terraform.lock.hcl and a directory named .terraform will be created. The content of .terraform.lock.hcl will look something like this:
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/aws" {
  version     = "4.23.0"
  constraints = "4.23.0"
  hashes = [
    "h1:JDJLmKK61GLw8gHQtCzmvlwPNZIu46/M5uBg/TDlBa0=",
    "zh:17adbedc9a80afc571a8de7b9bfccbe2359e2b3ce1fffd02b456d92248ec9294",
    "zh:23d8956b031d78466de82a3d2bbe8c76cc58482c931af311580b8eaef4e6a38f",
    "zh:343fe19e9a9f3021e26f4af68ff7f4828582070f986b6e5e5b23d89df5514643",
    "zh:6b8ff83d884b161939b90a18a4da43dd464c4b984f54b5f537b2870ce6bd94bc",
    "zh:7777d614d5e9d589ad5508eecf4c6d8f47d50fcbaf5d40fa7921064240a6b440",
    "zh:82f4578861a6fd0cde9a04a1926920bd72d993d524e5b34d7738d4eff3634c44",
    "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425",
    "zh:a08fefc153bbe0586389e814979cf7185c50fcddbb2082725991ed02742e7d1e",
    "zh:ae789c0e7cb777d98934387f8888090ccb2d8973ef10e5ece541e8b624e1fb00",
    "zh:b4608aab78b4dbb32c629595797107fc5a84d1b8f0682f183793d13837f0ecf0",
    "zh:ed2c791c2354764b565f9ba4be7fc845c619c1a32cefadd3154a5665b312ab00",
    "zh:f94ac0072a8545eebabf417bc0acbdc77c31c006ad8760834ee8ee5cdb64e743",
  ]
}
This file records the exact versions of all the dependencies Terraform uses, and those versions will be reused the next time I run terraform init. The .terraform directory contains the downloaded modules and providers:
|-- .terraform
| `-- providers
| `-- registry.terraform.io
| `-- hashicorp
| `-- aws
| `-- 4.23.0
| `-- linux_amd64
| `-- terraform-provider-aws_v4.23.0_x5
Also, in order to manage AWS resources, you need to generate an Access Key and a Secret Key for your user in AWS and save them in the file ~/.aws/credentials:
[default]
aws_access_key_id = ACCESS_KEY
aws_secret_access_key = SECRET_KEY
Let’s start writing the code. I will create a file named main.tf in the same directory as provider.tf. The following content could go into provider.tf, into main.tf, or into a completely new file such as terraform.tf. I will add it to provider.tf, because it is also provider configuration:
provider "aws" {
  region  = "us-east-1"
  profile = "default"
}
Here I specify which region and which AWS profile to use. A few steps above, I created a profile named default when I saved the Access Key and Secret Key. A single Terraform configuration can use several profiles, each pointing at a different region or a different AWS account.
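As a minimal sketch of what a second profile or region could look like (the alias name and region below are assumptions for illustration, not part of this article's setup), Terraform supports provider aliases:

```hcl
# Hypothetical second provider configuration for another region
provider "aws" {
  alias   = "ireland"
  region  = "eu-west-1"
  profile = "default"
}

# A resource would then select it explicitly:
# resource "aws_instance" "bar" {
#   provider = aws.ireland
#   ...
# }
```

Resources without an explicit provider meta-argument use the default (un-aliased) provider block.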
I will add the following to main.tf:
resource "aws_instance" "foo" {
  ami                    = "ami-0cff7528ff583bf9a"
  instance_type          = "t2.micro"
  subnet_id              = "subnet-222a93327f9a744ed"
  vpc_security_group_ids = ["sg-2220a119757753b6e"]

  tags = {
    Env = "Dev"
  }

  volume_tags = {
    Env = "Dev"
  }
}
This code describes the parameters for creating an EC2 instance. I specify which AMI to use (Amazon Linux 2), in which subnet, and with which security group to create the server. I also specify which tags to apply to the server and its disk.
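As a small optional addition (not part of the original code), an output block can surface attributes that only become known after apply, such as the instance's private IP:

```hcl
# Hypothetical output; its value is printed after "terraform apply"
output "foo_private_ip" {
  value = aws_instance.foo.private_ip
}
```

Outputs are also handy later, when other Terraform configurations or scripts need to consume values from this one.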
I save the file and run terraform plan -out tf.plan
$ terraform plan -out tf.plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_instance.foo will be created
+ resource "aws_instance" "foo" {
+ ami = "ami-0cff7528ff583bf9a"
+ arn = (known after apply)
+ associate_public_ip_address = (known after apply)
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ disable_api_stop = (known after apply)
+ disable_api_termination = (known after apply)
+ ebs_optimized = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = (known after apply)
+ monitoring = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ placement_partition_number = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ subnet_id = "subnet-222a93327f9a744ed"
+ tags = {
+ "Env" = "Dev"
}
+ tags_all = {
+ "Env" = "Dev"
}
+ tenancy = (known after apply)
+ user_data = (known after apply)
+ user_data_base64 = (known after apply)
+ user_data_replace_on_change = false
+ volume_tags = {
+ "Env" = "Dev"
}
+ vpc_security_group_ids = [
+ "sg-2220a119757753b6e",
]
+ capacity_reservation_specification {
+ capacity_reservation_preference = (known after apply)
+ capacity_reservation_target {
+ capacity_reservation_id = (known after apply)
+ capacity_reservation_resource_group_arn = (known after apply)
}
}
+ ebs_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ snapshot_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
+ enclave_options {
+ enabled = (known after apply)
}
+ ephemeral_block_device {
+ device_name = (known after apply)
+ no_device = (known after apply)
+ virtual_name = (known after apply)
}
+ maintenance_options {
+ auto_recovery = (known after apply)
}
+ metadata_options {
+ http_endpoint = (known after apply)
+ http_put_response_hop_limit = (known after apply)
+ http_tokens = (known after apply)
+ instance_metadata_tags = (known after apply)
}
+ network_interface {
+ delete_on_termination = (known after apply)
+ device_index = (known after apply)
+ network_card_index = (known after apply)
+ network_interface_id = (known after apply)
}
+ private_dns_name_options {
+ enable_resource_name_dns_a_record = (known after apply)
+ enable_resource_name_dns_aaaa_record = (known after apply)
+ hostname_type = (known after apply)
}
+ root_block_device {
+ delete_on_termination = (known after apply)
+ device_name = (known after apply)
+ encrypted = (known after apply)
+ iops = (known after apply)
+ kms_key_id = (known after apply)
+ tags = (known after apply)
+ throughput = (known after apply)
+ volume_id = (known after apply)
+ volume_size = (known after apply)
+ volume_type = (known after apply)
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Saved the plan to: tf.plan
To perform exactly these actions, run the following command to apply:
terraform apply "tf.plan"
The terraform plan -out tf.plan command analyzes what should be created and displays it on screen. Some of the values are already known, for example + ami = "ami-0cff7528ff583bf9a"; the rest show a value like user_data = (known after apply), which means that Terraform does not yet know what these parameters will be — they will only be known after the resources are created. With the -out tf.plan flag, the plan is saved to a file. If you agree with the resources Terraform is going to create/change/destroy, run terraform apply "tf.plan" and the corresponding resources will be created/changed/destroyed.
If the resources created by Terraform are no longer needed, they can be deleted with terraform destroy.
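The whole lifecycle covered in this article can be summarized in three commands (note that terraform destroy asks for interactive confirmation; the -auto-approve flag skips it, which is risky outside of throwaway environments):

```
terraform plan -out tf.plan    # preview the changes and save the plan
terraform apply "tf.plan"      # create the resources from the saved plan
terraform destroy              # delete everything this configuration created
```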