Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be unpublished in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will not be possible once the AMI has been unpublished.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3941.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0a79bdcb4e7a58da1 Launch Stack
    HVM (arm64) ami-0ff7e7b2620af37df Launch Stack
    ap-east-1 HVM (amd64) ami-0e560fbea8c7a0080 Launch Stack
    HVM (arm64) ami-0a8d9d19424c87b7a Launch Stack
    ap-northeast-1 HVM (amd64) ami-0522fb8f1e4169635 Launch Stack
    HVM (arm64) ami-0373f5ea6ee698131 Launch Stack
    ap-northeast-2 HVM (amd64) ami-086569ff1178078dc Launch Stack
    HVM (arm64) ami-0665b81de1aa11d3e Launch Stack
    ap-south-1 HVM (amd64) ami-0720a765d525518aa Launch Stack
    HVM (arm64) ami-0d3fd216b7830d3b3 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0fea901efb418a0c2 Launch Stack
    HVM (arm64) ami-0fce7bb93ec61c736 Launch Stack
    ap-southeast-2 HVM (amd64) ami-00fd268423f7e16a3 Launch Stack
    HVM (arm64) ami-0c8105b6640f459dc Launch Stack
    ap-southeast-3 HVM (amd64) ami-00e5fa5a1a49a9080 Launch Stack
    HVM (arm64) ami-016881eb3c1a12fd4 Launch Stack
    ca-central-1 HVM (amd64) ami-026b440fb51699740 Launch Stack
    HVM (arm64) ami-0005273055f6ff0b5 Launch Stack
    eu-central-1 HVM (amd64) ami-07dd01af168971cd0 Launch Stack
    HVM (arm64) ami-0dae0052e99eac021 Launch Stack
    eu-north-1 HVM (amd64) ami-03a6011c81d6715dd Launch Stack
    HVM (arm64) ami-0b428454b4c8d46d9 Launch Stack
    eu-south-1 HVM (amd64) ami-0b0c7452f3bd8b4b3 Launch Stack
    HVM (arm64) ami-03959d9d759717139 Launch Stack
    eu-west-1 HVM (amd64) ami-0a5400d961a059608 Launch Stack
    HVM (arm64) ami-0a0cc5c807853b4d1 Launch Stack
    eu-west-2 HVM (amd64) ami-0ad7b138a975a1eda Launch Stack
    HVM (arm64) ami-0b4723890693b0eb0 Launch Stack
    eu-west-3 HVM (amd64) ami-05cee7ac64cfe0d60 Launch Stack
    HVM (arm64) ami-0765a6e3461d77d9b Launch Stack
    me-south-1 HVM (amd64) ami-046f7e6d8bbfaf2b0 Launch Stack
    HVM (arm64) ami-068b2da885536235d Launch Stack
    sa-east-1 HVM (amd64) ami-086ab1d97aa881306 Launch Stack
    HVM (arm64) ami-082a69a50f6f3ec39 Launch Stack
    us-east-1 HVM (amd64) ami-03415a8ae2f357cbd Launch Stack
    HVM (arm64) ami-047af98132d77b9ac Launch Stack
    us-east-2 HVM (amd64) ami-069e1166c7050f53a Launch Stack
    HVM (arm64) ami-0c8fc42857f103dd7 Launch Stack
    us-west-1 HVM (amd64) ami-0975deac66cb0b413 Launch Stack
    HVM (arm64) ami-07ccac7d5eca7e372 Launch Stack
    us-west-2 HVM (amd64) ami-01ee29c381da1e5c2 Launch Stack
    HVM (arm64) ami-0d784892ae913ef1b Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3913.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-045a269f4a432578f Launch Stack
    HVM (arm64) ami-02cdebe10ca1dd183 Launch Stack
    ap-east-1 HVM (amd64) ami-00ac5a87e7ef63305 Launch Stack
    HVM (arm64) ami-062a38b26d1f82feb Launch Stack
    ap-northeast-1 HVM (amd64) ami-085de329af75b2282 Launch Stack
    HVM (arm64) ami-0f4be078488915793 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0c33e9315559ffe2e Launch Stack
    HVM (arm64) ami-015284aa67ba68194 Launch Stack
    ap-south-1 HVM (amd64) ami-00e2c192d374773ba Launch Stack
    HVM (arm64) ami-028fa850bfc972e6c Launch Stack
    ap-southeast-1 HVM (amd64) ami-0047403de89d23340 Launch Stack
    HVM (arm64) ami-0ca57dfbb61a44644 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0bba3c218ff4006ec Launch Stack
    HVM (arm64) ami-037f7d278d0ce0baa Launch Stack
    ap-southeast-3 HVM (amd64) ami-0749d0a489d39df2e Launch Stack
    HVM (arm64) ami-0e4b2729f77851e07 Launch Stack
    ca-central-1 HVM (amd64) ami-0765ce391b926d2d1 Launch Stack
    HVM (arm64) ami-057351c98b77fc979 Launch Stack
    eu-central-1 HVM (amd64) ami-0d9fbb4b1a9db1962 Launch Stack
    HVM (arm64) ami-0d56dcbdc99825e9e Launch Stack
    eu-north-1 HVM (amd64) ami-0aa56313bd2dcb545 Launch Stack
    HVM (arm64) ami-0c4ca9a7a7717b9a3 Launch Stack
    eu-south-1 HVM (amd64) ami-0fd7f207751df3c90 Launch Stack
    HVM (arm64) ami-0456a5cf6fa191004 Launch Stack
    eu-west-1 HVM (amd64) ami-031458e664370d5ea Launch Stack
    HVM (arm64) ami-08844ad32e741e302 Launch Stack
    eu-west-2 HVM (amd64) ami-0c47c713111fac25a Launch Stack
    HVM (arm64) ami-089553161d5c63d01 Launch Stack
    eu-west-3 HVM (amd64) ami-0f162f767457d0dd4 Launch Stack
    HVM (arm64) ami-0714b394f5b191f27 Launch Stack
    me-south-1 HVM (amd64) ami-09260dbd89ec229d0 Launch Stack
    HVM (arm64) ami-05309626fed6baeec Launch Stack
    sa-east-1 HVM (amd64) ami-062cf4fb41423803f Launch Stack
    HVM (arm64) ami-0c9e105a347fda33f Launch Stack
    us-east-1 HVM (amd64) ami-0ad1f852514059105 Launch Stack
    HVM (arm64) ami-046c43bc307d64872 Launch Stack
    us-east-2 HVM (amd64) ami-0563db860e8d05325 Launch Stack
    HVM (arm64) ami-0afec012318af7324 Launch Stack
    us-west-1 HVM (amd64) ami-0e475981f910342c5 Launch Stack
    HVM (arm64) ami-0fdce2f723770057d Launch Stack
    us-west-2 HVM (amd64) ami-0f6d4cc94bab7847a Launch Stack
    HVM (arm64) ami-0f977b6c0838b5976 Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3815.2.2.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0c86ffb00d64d8a49 Launch Stack
    HVM (arm64) ami-076bb47e664c188d8 Launch Stack
    ap-east-1 HVM (amd64) ami-00c9aaddc5bdf4760 Launch Stack
    HVM (arm64) ami-0e828e617d44cf12a Launch Stack
    ap-northeast-1 HVM (amd64) ami-0dce824a3a60bd9f6 Launch Stack
    HVM (arm64) ami-0675bb6e44902b2a4 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0a8dff59c9c8ee430 Launch Stack
    HVM (arm64) ami-075b3ac6bffe4a1b0 Launch Stack
    ap-south-1 HVM (amd64) ami-00ca3ef2b5b97fe86 Launch Stack
    HVM (arm64) ami-06ea2b23b5ea2800b Launch Stack
    ap-southeast-1 HVM (amd64) ami-0fe07b0d9e4281ded Launch Stack
    HVM (arm64) ami-0b953d24229c87e3b Launch Stack
    ap-southeast-2 HVM (amd64) ami-067a1a67f65f3d1e4 Launch Stack
    HVM (arm64) ami-0f2373cd589057ee9 Launch Stack
    ap-southeast-3 HVM (amd64) ami-03125430e28e1fbb5 Launch Stack
    HVM (arm64) ami-035ccc5383aa5a464 Launch Stack
    ca-central-1 HVM (amd64) ami-0bb9ea14eaa2c819d Launch Stack
    HVM (arm64) ami-02d8cd25b91492047 Launch Stack
    eu-central-1 HVM (amd64) ami-0e3dafd523e6323b5 Launch Stack
    HVM (arm64) ami-01e639462d5b9e084 Launch Stack
    eu-north-1 HVM (amd64) ami-0857ee185521281a8 Launch Stack
    HVM (arm64) ami-0fc6de34d24fc862b Launch Stack
    eu-south-1 HVM (amd64) ami-0cfc49402be5fe7c7 Launch Stack
    HVM (arm64) ami-088d895494f96655b Launch Stack
    eu-west-1 HVM (amd64) ami-032a9ac473b206066 Launch Stack
    HVM (arm64) ami-0d9265cab844f72fa Launch Stack
    eu-west-2 HVM (amd64) ami-0b1e22310c51c4cf7 Launch Stack
    HVM (arm64) ami-06f3b10dc5eff468f Launch Stack
    eu-west-3 HVM (amd64) ami-09ac6bac5c2652569 Launch Stack
    HVM (arm64) ami-0a4fcad74e47d182b Launch Stack
    me-south-1 HVM (amd64) ami-021b6c18246926828 Launch Stack
    HVM (arm64) ami-0bd9068d356247d41 Launch Stack
    sa-east-1 HVM (amd64) ami-046f9594a0b76275e Launch Stack
    HVM (arm64) ami-0147048b7a421722a Launch Stack
    us-east-1 HVM (amd64) ami-062684ab1bc02e30d Launch Stack
    HVM (arm64) ami-01dc58af2dc14b66d Launch Stack
    us-east-2 HVM (amd64) ami-018ab7de7bb5fd15c Launch Stack
    HVM (arm64) ami-07552c5470ad651f9 Launch Stack
    us-west-1 HVM (amd64) ami-08b85d8504cc0eca8 Launch Stack
    HVM (arm64) ami-019fbfc859ece9d19 Launch Stack
    us-west-2 HVM (amd64) ami-04a8c4172849209ab Launch Stack
    HVM (arm64) ami-0cd102564710099ca Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the “Stack Name”. You can find the direct template file on S3.
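
    With the AWS CLI, this amounts to creating another stack from the same template under a different name. A sketch; <template-url> stands in for the S3 template URL mentioned above, and any parameters the template defines would be passed via --parameters:

    # Hypothetical second cluster from the same template; replace <template-url>
    # with the S3 URL of the CloudFormation template for your channel and region.
    aws cloudformation create-stack \
      --stack-name flatcar-testing-2 \
      --template-url <template-url>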

    Manual setup

    TL;DR: launch three instances of ami-03415a8ae2f357cbd (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
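
    If you prefer scripting this over the console, the same group and rules can be created with the AWS CLI; a rough sketch (in a non-default VPC, pass --vpc-id to create-security-group and use --group-id instead of --group-name):

    aws ec2 create-security-group --group-name flatcar-testing --description "Flatcar Container Linux instances"
    # SSH from anywhere
    aws ec2 authorize-security-group-ingress --group-name flatcar-testing \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    # etcd ports, allowed only from members of the same security group
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-name flatcar-testing \
        --protocol tcp --port "$port" --source-group flatcar-testing
    done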

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-03415a8ae2f357cbd (amd64), Beta ami-0ad1f852514059105 (amd64), or Stable ami-062684ab1bc02e30d (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys in your Ignition config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
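
    The same launch can be scripted with the AWS CLI. A minimal sketch using the Alpha amd64 AMI for us-east-1 listed above; the key pair name is a placeholder, t3.medium is just an example instance type, and in a non-default VPC you would use --security-group-ids and --subnet-id instead of --security-groups:

    aws ec2 run-instances \
      --image-id ami-03415a8ae2f357cbd \
      --count 3 \
      --instance-type t3.medium \
      --key-name <your-key-pair> \
      --security-groups flatcar-testing \
      --user-data file://ignition.json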

    Installation from a VMDK image

    One possible way to install is to import the generated VMDK Flatcar image as a snapshot. The image file can be found at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.
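
    The signature is made with the Flatcar image signing key, so it must be in your GPG keyring before the verification below will succeed. A sketch, assuming you have downloaded the key from the Flatcar project website (see the security section on flatcar.org) and saved it locally:

    # Import the Flatcar image signing key; verify its fingerprint against the
    # project documentation before trusting it.
    gpg --import Flatcar_Image_Signing_Key.asc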

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed vmdk file to S3.

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI from it. To make it work, use /dev/sda2 as the “Root device name”, and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
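
    The import and AMI registration can also be done with the AWS CLI. This is a sketch under the assumption that the uncompressed VMDK was uploaded to an S3 bucket of yours; the bucket, the AMI name, and the snapshot ID (taken from the finished import task) are placeholders:

    # Import the uploaded VMDK as an EBS snapshot
    aws ec2 import-snapshot \
      --description "Flatcar Container Linux" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=<your-bucket>,S3Key=flatcar_production_ami_vmdk_image.vmdk}"
    # Wait for the import task to complete and note the resulting snapshot ID
    aws ec2 describe-import-snapshot-tasks --import-task-ids <import-task-id>
    # Register an AMI from the snapshot, using /dev/sda2 as the root device as described above
    aws ec2 register-image \
      --name flatcar-vmdk-import \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=<snapshot-id>}"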

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
               # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.