Hi there!

I recently set out to automate the creation of AWS Synthetics canaries using Terraform. Along the way I ran into a few challenges that I managed to overcome. In this post, I will share my experiences and the solutions I found.

locals {
  common_tags = {
    cost-center = "observability"
    label       = "canary"
    environment = "staging"
  }
}

data "archive_file" "login" {
  type        = "zip"
  source_dir  = "${path.module}/functions/login/"
  output_path = "${path.module}/functions/login.zip"
}

resource "aws_synthetics_canary" "login" {
  name                 = "login_test"
  artifact_s3_location = "s3://my-canary-bucket/"
  execution_role_arn   = "role_arn"
  handler              = "index.handler"
  zip_file             = data.archive_file.login.output_path
  runtime_version      = "syn-nodejs-puppeteer-3.4"
  start_canary         = true

  schedule {
    expression = "rate(5 minutes)"
  }

  run_config {
    timeout_in_seconds = 300
  }

  tags = merge(
    local.common_tags
  )
}

Problem 1: Tagging the Automatically Created Lambda

The first obstacle I faced was the lack of tags on the Lambda function automatically created by the canary. To address this, I utilized the null_resource in Terraform. The Lambda ARN can be obtained using aws_synthetics_canary.login.engine_arn. However, the ARN includes the lambda version, which is not required for tagging purposes. To remove the version from the Lambda ARN, I added the following code to locals:

  lambda_regex       = "arn:aws:lambda:[a-z]{2}-[a-z]+-\\d{1}:\\d{12}:function:[a-zA-Z0-9-_]+"
  login_lambda_name  = regex(local.lambda_regex, aws_synthetics_canary.login.engine_arn)
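
To illustrate what the regex does, here is a small sketch with a made-up `engine_arn` (the region, account ID, and function name below are hypothetical):

```hcl
locals {
  # Hypothetical versioned ARN, in the shape engine_arn returns
  example_engine_arn = "arn:aws:lambda:us-east-1:123456789012:function:cwsyn-login_test-0123abcd:1"

  # regex() returns the first match: the ARN without the trailing ":1"
  # version qualifier, since the character class excludes ":"
  example_stripped = regex(local.lambda_regex, local.example_engine_arn)
  # => "arn:aws:lambda:us-east-1:123456789012:function:cwsyn-login_test-0123abcd"
}
```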

Using the stripped ARN with null_resource, I was able to tag the Lambda. The null_resource runs whenever the common_tags change or the aws_synthetics_canary ID changes. In the provisioner "local-exec" block, I used the aws lambda tag-resource command to tag the Lambda. Note that aws lambda tag-resource expects tags in the format KeyName1=string,KeyName2=string, while the tags in my code were stored in a map. To convert them to a string, I added the following to locals:

  tags = join(",", [for key, value in local.common_tags : "${key}=${value}"])

After creating the aws_synthetics_canary, the Lambda will be tagged using the null_resource.

resource "null_resource" "tag_login_backend_lambda" {
  triggers = {
    tags   = local.tags
    canary = aws_synthetics_canary.login.id
  }
  provisioner "local-exec" {
    command = "aws lambda tag-resource --resource ${local.login_lambda_name} --tags '${local.tags}'"
    environment = {
      AWS_PROFILE = "production"
    }
  }

  depends_on = [aws_synthetics_canary.login]
}

Problem 2: Environment Variables Support

Another challenge I encountered was the lack of support for environment variables in the aws_synthetics_canary resource. To overcome this limitation, I used the null_resource once again. I added the environment variables to locals, creating a map with the desired variables. In my case, there was only one variable, app_url:

  canary_variables = {
    "app_url" = "https://mpostument.com"
  }

The triggers look similar to the previous example, except that canary_variables is a map and must be converted to a string, because triggers only support the string type. The aws synthetics update-canary command also needs to include the other run-config parameters besides the environment variables; if they are omitted, the AWS CLI resets them to their defaults. In my case, I reuse the values from the Terraform resource aws_synthetics_canary.login, which guarantees that those parameters stay identical between aws_synthetics_canary.login and the null_resource.

resource "null_resource" "add_environment_variables_login_canary" {
  triggers = {
    canary_variables = join(",", [for key, value in local.canary_variables : "${key}=${value}"])
    canary           = aws_synthetics_canary.login.id
  }
  provisioner "local-exec" {
    command = "aws synthetics update-canary --name ${aws_synthetics_canary.login.name} --run-config '${jsonencode({ TimeoutInSeconds : aws_synthetics_canary.login.run_config[0].timeout_in_seconds, MemoryInMB : aws_synthetics_canary.login.run_config[0].memory_in_mb, ActiveTracing : aws_synthetics_canary.login.run_config[0].active_tracing, EnvironmentVariables : local.canary_variables })}'"
    environment = {
      AWS_PROFILE = "production"
    }
  }
  depends_on = [aws_synthetics_canary.login]
}

Problem 3: Automatic Canary Code Update

The next challenge was that the canary code does not update automatically; to trigger an update, the archive name must be different each time. To achieve this, I used the filemd5 function, which calculates the hash of the function's source file and embeds it in the archive name.

locals {
  login_function_location = "${path.module}/functions/login/nodejs/node_modules/index.js"
}

data "archive_file" "login" {
  type        = "zip"
  source_dir  = "${path.module}/functions/login/"
  output_path = "${path.module}/functions/login-${filemd5(local.login_function_location)}.zip"
}

By incorporating the filemd5 function into the configuration of the archive_file, the hash of the file will be calculated each time. If the hash has changed, a new archive will be created, ensuring that the canary code is updated accordingly.

Problem 4: Cleanup Issue with terraform destroy

One significant issue I encountered is related to the cleanup process when executing terraform destroy. Unfortunately, the Lambda function automatically created by aws_synthetics_canary is not removed as expected. I am currently working on a solution to ensure proper cleanup of resources when destroying the infrastructure.
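
Until there is proper support, one possible workaround is a destroy-time provisioner that deletes the Lambda explicitly. This is an untested sketch: it reuses the local.login_lambda_name value derived above and assumes the same production AWS profile. Note that destroy-time provisioners may only reference self, so the Lambda ARN has to be stashed in triggers at apply time:

```hcl
resource "null_resource" "cleanup_login_lambda" {
  triggers = {
    # Destroy provisioners cannot reference other resources,
    # so the Lambda ARN is captured here at apply time.
    lambda_arn = local.login_lambda_name
  }

  provisioner "local-exec" {
    when    = destroy
    # delete-function accepts the full function ARN as --function-name
    command = "aws lambda delete-function --function-name ${self.triggers.lambda_arn}"
    environment = {
      AWS_PROFILE = "production"
    }
  }
}
```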

Update: Environment Variables Support

An exciting update regarding the AWS provider: version 4.5.0 introduced support for environment_variables. You can now specify a map of the required environment variables inside the run_config block, which is a much more streamlined way to manage environment variables for your AWS Synthetics canary.

I hope this helps. If you have any questions or suggestions, please let me know.

locals {
  canary_variables = {
    "app_url" = "https://mpostument.com"
  }
  login_function_location = "${path.module}/functions/login/nodejs/node_modules/index.js"
}

data "archive_file" "login" {
  type        = "zip"
  source_dir  = "${path.module}/functions/login/"
  output_path = "${path.module}/functions/login-${filemd5(local.login_function_location)}.zip"
}

resource "aws_synthetics_canary" "login" {
  name                 = "login_test"
  artifact_s3_location = "s3://my-canary-bucket/"
  execution_role_arn   = "role_arn"
  handler              = "index.handler"
  zip_file             = data.archive_file.login.output_path
  runtime_version      = "syn-nodejs-puppeteer-3.4"
  start_canary         = true

  schedule {
    expression = "rate(5 minutes)"
  }

  run_config {
    timeout_in_seconds    = 300
    environment_variables = local.canary_variables
  }

  tags = merge(
    local.common_tags
  )
}