Resource Provisioners
Provisioners are used to run scripts or commands on a local machine (where Terraform is running) or on a remote machine. They can also copy a file or directory onto the destination machine while creating or destroying a resource. Provisioners serve many purposes, such as bootstrapping a resource with defaults, cleaning up temporary files or configurations before deleting a resource, and applying configuration management tools such as Ansible, Chef, or Puppet to a resource, among several others.
The configuration of a provisioner block in Terraform may, in general, contain sensitive values such as passwords, API keys, or other secret information. Such sensitive values can be obtained from sensitive variables or other sensitive output values. For security reasons, Terraform is designed to automatically suppress all log output generated by a provisioner if it detects any sensitive values within the provisioner block's configuration.
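For example, the following minimal sketch assumes a sensitive variable named db_password and a mysql client installed on the machine running Terraform; because the command interpolates a sensitive value, Terraform suppresses the provisioner's log output:
variable "db_password" {
type = string
sensitive = true
}
resource "null_resource" "configure_database" {
provisioner "local-exec" {
# the interpolated sensitive value causes Terraform to suppress this provisioner's output
command = "mysql --user=admin --password=${var.db_password} -e 'CREATE DATABASE IF NOT EXISTS app'"
}
}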
Provisioners Syntax
The basic syntax for a Provisioner block is as follows:
resource "aws_instance" "instance " {
# resource arguments
provisioner "provisioner_type" {
# provisioner_type_arguments
}
}
Let's break down this syntax:
- provisioner is the reserved keyword used to define a provisioner block within Terraform configurations and templates.
- provisioner_type is the type of provisioner you want to use.
- provisioner_type_arguments are arguments specific to the type of provisioner in question.
Example: Using local-exec provisioner to run a script on the local machine
resource "aws_instance" "example_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
tags = {
Name = "my-ec2-instance"
}
provisioner "local-exec" {
command = "echo 'Hello, World!' > hello.txt" kehfkwhfkwfuwbjdhjbwjedgjwvedjhwvedjwvjdvwfvwjvdjhvdjhqvdjhvqjdvjwdgwegvdjhwevd
}
}
Here, the local-exec provisioner will run a command on the local machine and will create a file named hello.txt containing "Hello, World!".
The self Object
In Terraform, when defining a provisioner inside a resource block, you cannot refer to the parent resource by name directly. This is an intentional design decision by the Terraform developers.
The reason for this restriction is to avoid circular dependencies. If a resource's provisioner block could reference that resource by its own name, it would create a situation where the creation of the resource depends on the provisioner while the provisioner itself depends on the resource. That would, in effect, make it impossible for Terraform to manage the resource properly.
To avoid this sort of circular dependency, Terraform provides a special self object inside provisioner blocks. The self object represents the provisioner's parent resource and gives you access to all of that resource's attributes.
Example: Using self to reference an aws_instance's public_ip attribute
If you have an AWS EC2 instance (aws_instance) and you want to reference the instance's public IP address in a remote-exec provisioner, you can use self.public_ip:
resource "aws_instance" "example_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
provisioner "remote-exec" {
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
}
inline = [
"echo 'Hello from Terraform!' > /tmp/greeting.txt"
]
}
}
Here, self.public_ip refers to the public_ip attribute of the aws_instance resource, which is the parent resource of the remote-exec provisioner.
Connection block
Most provisioners need access to the remote resource over SSH or WinRM in order to transfer files or run scripts. For this, they require a nested connection block inside your configuration, which contains the details the provisioner needs to connect to the resource, such as the protocol to use (SSH or WinRM), the hostname or IP address of the remote resource, and the username and password or other authentication credentials, among other configuration options.
Let's take a look at an example. Here, we use the remote-exec provisioner to run a few commands on a remote EC2 instance (essentially a virtual machine) once it is created.
resource "aws_instance" "example_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
provisioner "remote-exec" {
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
private_key = file("~/.ssh/my_private_key.pem")
}
inline = [
"sudo apt-get update"
"sudo apt-get install -y nginx"
]
}
}
Here, with this setup, the remote-exec provisioner will execute commands on the created EC2 instance remotely. The connection block inside the provisioner tells it to connect to the instance via the "ssh" protocol using the public IP "self.public_ip", the "ubuntu" user, and a private key file at "~/.ssh/my_private_key.pem".
Once the connection is established, the provisioner will run the commands listed in the inline argument. In this case, that updates the package lists and installs the Nginx web server on the remote EC2 instance.
Available Arguments of the Connection Block
The connection block is used for establishing a connection to a remote resource. It supports several arguments that can be used to tailor the connection. These fall into two categories: arguments common to both SSH and WinRM connection types, and arguments specific to one of the two.
Common Arguments
The following arguments are supported by both SSH and WinRM connection types:
- type: The type of connection to use; one of "ssh" or "winrm".
- host: The hostname or IP address of the remote resource. This is a required argument.
- user: The username to use for the connection.
- password: The password to use for the connection. For security reasons, this should not be set unless you are using a trusted connection.
- port: The port number to connect to on the remote resource.
- timeout: The timeout to wait for the connection to become available.
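As an illustrative sketch (the port and timeout values here are arbitrary), a connection block combining several of these common arguments might look like this:
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
port = 2222
timeout = "5m"
}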
SSH-Specific Arguments
The following arguments are supported only with the SSH connection type:
- private_key: The contents of an SSH private key to use for authentication, typically loaded from a file with the file() function.
- certificate: The contents of a signed Certificate Authority (CA) certificate, used to establish a secure connection to the remote host. The certificate argument must be used in conjunction with a private_key.
- agent: Whether to use the local SSH agent to authenticate to the remote host.
- agent_identity: Specifies which identity the SSH agent should use for authentication.
- target_platform: The target platform to connect to; it affects the default script path used for remote execution.
- host_key: The public key of the remote host, used to verify the host and establish a trusted connection.
WinRM-specific Arguments
The following options are only valid with the WinRM connection type:
- https: Enables HTTPS communication between Terraform and the remote Windows machine. Set it to true if the connection must be performed over HTTPS instead of HTTP.
- insecure: Allows connections to servers with self-signed or invalid certificates, which reduces the security of the connection. Set to true to skip validating the HTTPS certificate chain.
- use_ntlm: Set to true to use NTLM authentication instead of the default basic authentication, removing the need to enable basic authentication on the target guest.
- cacert: The CA certificate to use when validating the HTTPS certificate chain.
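For illustration, a sketch of a WinRM connection block could look like the following; the user shown and the admin_password variable are placeholders assumed for this example:
connection {
type = "winrm"
host = self.public_ip
user = "Administrator"
password = var.admin_password
https = true
insecure = false
use_ntlm = true
}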
Location of Connection Block
The scope of a connection block in Terraform is determined by where it is placed. By placing connection blocks in different locations, you control which provisioners use which connection settings. Generally, a connection block can be placed in three locations: inside a resource block, inside a provisioner block, or inside both the resource and the provisioner block.
Scenario 1: Connection block inside of a resource block
A connection block placed directly inside a resource block is called a resource-level connection block. In this case, all provisioners within that resource will use that same connection block to connect to the created resource.
resource "aws_instance" "example_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
password = "123456@01?"
}
provisioner "remote-exec" {
inline = [
"echo 'Hello, World!'"
]
}
provisioner "remote-exec" {
inline = [
"echo 'Goodbye, World!"
]
}
}
In this case, both remote-exec provisioners will use the same connection settings defined in the resource-level connection block.
Scenario 2: Connection block in a provisioner block
When a connection block is nested inside a provisioner block, it is known as a provisioner-level connection block. In that case, it affects only that particular provisioner; other provisioners inside the same resource are not affected.
resource "aws_instance" "example_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
provisioner "remote-exec" {
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
password = "nonrootuser"
}
inline = [
"echo 'Hello from Terraform!' > /tmp/greeting.txt"
]
}
}
In the above example, only the provisioner containing the connection block will use those connection settings.
Scenario 3: Connection block inside of both the provisioner and resource block
If you have a connection block inside a resource block and also a connection block inside a provisioner block, the provisioner-level connection block will override the resource-level connection settings for that particular provisioner.
resource "aws_instance" "example_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
password = "nonrootuser"
}
provisioner "remote-exec" {
connection {
type = "ssh"
host = self.public_ip
user = "root"
password = "rootuser"
}
inline = [
"chmod 777 /root/permissionfile.txt"
]
}
provisioner "remote-exec" {
inline = [
"cat ~/.aws/config"
]
}
}
In the above example, the provisioner-level connection block overrides the resource-level one for the first remote-exec provisioner, while the second provisioner uses the resource-level connection settings.
Provisioner Types
There are three types of provisioners in Terraform:
- file Provisioner
- local-exec Provisioner
- remote-exec Provisioner
file Provisioner
In Terraform, the File Provisioner is a type of provisioner that lets you copy files or directories from the local machine (the one running Terraform) to a newly created resource, such as a virtual machine.
To do this, Terraform first connects to the created machine over SSH or WinRM, as configured in the connection block, and then copies the specified files or directories to the target resource.
Example: Copying Files to Newly Created Resources
Consider the following example of Terraform configuration:
resource "aws_instance" "file_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
provisioner "file" {
source = "/home/local/files/script.sh"
destination = "/home/ubuntu/script.sh"
connection {
type = "ssh"
host = self.public_ip
user = "root"
password = "rootuser"
}
}
}
Here, we use the File Provisioner to copy a script file called script.sh from the machine running Terraform onto the newly created EC2 instance defined by the aws_instance resource. The source attribute specifies the local path of the file, and the destination attribute specifies the path on the remote resource where the file will be copied. A connection block is used to connect to the EC2 instance via SSH, which the File Provisioner then uses to transfer the file.
Arguments of the File Provisioner
The Terraform File Provisioner accepts a few arguments specifying the source file or directory to be copied and the destination on the remote system. The supported arguments are:
source
The source argument specifies the file or directory to copy from the machine running Terraform. The path can be given relative to the current working directory or as an absolute path. This argument cannot be used together with the content argument.
content
The content argument allows you to write content directly to the destination file or directory. If the destination is a file, the content will be written to that file. If it is a directory, a file named tf-file-content will be created inside that directory with the specified content. The content argument cannot be used together with the source argument.
destination
The destination argument specifies the path on the remote system where the file or directory will be written. This argument is required.
Uploading a file using File Provisioner
When uploading a file with the File Provisioner, it is important to remember that both the source and destination arguments must specify a full file path, including the filename. If not, Terraform follows the rules for uploading a directory with the File Provisioner.
Suppose you have a file called my_config.txt on your computer at C:\Users\LocalUser\Documents. If you want to put it into the /home/centos/files directory on your remote VM, you would configure the File Provisioner as follows:
provisioner "file" {
source = "C:\Users\LocalUser\Documents\my_config.txt"
destination = "/home/centos/files/my_config.txt"
}
Uploading a directory using File Provisioner
If you are using an SSH connection type with the File Provisioner in Terraform and you want to upload a directory to a destination directory on a remote machine, the destination directory must already exist; otherwise, the upload will fail with an error. To work around this, you can create the directory beforehand and then upload the directory. You can do this with a remote-exec provisioner declared just before the file provisioner. The remote-exec provisioner lets you run commands on the remote machine over SSH, so once the directory is created, the file provisioner can upload the directory to the remote machine without issues.
Here is an example of how you might use the remote-exec provisioner to create a directory on the remote machine before uploading a directory with the file provisioner:
resource "null_resource" "upload_directory" {
connection {
type = "ssh"
host = aws_instance.example_ec2_instance.public_ip # public IP of an EC2 instance defined elsewhere in the configuration
user = "ubuntu"
password = "123456@01?"
}
provisioner "file" {
source = "/home/user/localdir/"
destination = "/opt/myapp/remotedir "
}
provisioner "remote-exec" {
inline = [
"mkdir -p /opt/myapp/remotedir"
]
}
}
In the example above, the remote-exec provisioner executes the mkdir -p /opt/myapp/remotedir command on the remote machine to create the /opt/myapp/remotedir directory. This ensures the destination directory exists before the file provisioner attempts to upload the local directory to the remote machine.
The file provisioner then uploads the contents of the local /home/user/localdir to the /opt/myapp/remotedir directory on the remote machine. The trailing slash on the source parameter ensures the contents of the local directory are uploaded, rather than the directory itself.
When using the winrm connection type with Terraform's File Provisioner, the destination directory is created automatically on the remote Windows system if it doesn't exist.
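As a sketch of this behavior (assuming the provisioner is nested inside a Windows instance resource and that an admin_password variable exists), a directory upload over WinRM does not require the destination directory to exist beforehand:
provisioner "file" {
source = "./config"
# with winrm, C:/App/config is created automatically if it does not already exist
destination = "C:/App/config"
connection {
type = "winrm"
host = self.public_ip
user = "Administrator"
password = var.admin_password
https = true
}
}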
Behavior of Destination
When copying directories from source to destination, the presence or absence of a trailing slash (/) in the source path determines the behavior at the destination: whether the source directory itself is created inside the destination, or only its contents are copied into it.
Scenario 1: No trailing slash
If the source path is /foo (no trailing slash) and the destination is /tmp, Terraform will create the foo directory on the remote machine if it does not already exist, and the contents of the local /foo directory will be uploaded to the remote machine's /tmp/foo directory.
NOTE: With SSH, the File Provisioner never creates the destination directory itself (in this case /tmp), but it can create directories beneath it (in this case foo) on the remote machine.
provisioner "file" {
source = "/foo"
destination = "/tmp"
}
Scenario 2: Trailing slash present
If the source path is /foo/ (with a trailing slash) and the destination is /tmp, the contents of the local /foo directory will be uploaded directly into the remote machine's /tmp directory.
provisioner "file" {
source = "/foo/"
destination = "/tmp"
}
Uploading content using File Provisioner
The content argument with the file provisioner allows writing content directly into a file or directory on a remote machine. The content argument cannot be used along with the source argument. If the destination is a file, then content specified by the content argument will be written directly to that file.
For instance, say that you wanted to write a simple "Hello, World!" message to a file on the remote machine. You might do that with the content argument:
resource "null_resource" "upload_content" {
connection {
type = "ssh"
host = aws_instance.example_ec2_instance.public_ip
user = "ubuntu"
password = "123456@01?"
}
provisioner "file" {
content = "Hello, World!\n"
destination = "/tmp/example.txt"
}
}
In this example the string "Hello, World!\n" will be written straight into the file /tmp/example.txt on the remote machine.
If the given destination is a directory, a new file named tf-file-content with the given content will be created in that directory.
resource "null_resource" "upload_content" {
connection {
type = "ssh"
host = aws_instance.example_ec2_instance.public_ip
user = "ubuntu"
password = "123456@01?"
}
provisioner "file" {
content = "Hello, World!\n"
destination = "/tmp/"
}
}
In this case, a new file called tf-file-content will be created inside the /tmp directory containing "Hello, World!\n".
local-exec Provisioner
The local-exec provisioner is a way of running a command or script on the machine where Terraform is itself running, after a resource (like an AWS instance) has been successfully created. It does not run the command directly on the newly created resource.
Suppose you are working on a web application and you wanted to containerize it using Docker, with a goal of running it locally during development. You can automate this process using Terraform's local-exec provisioner.
resource "null_resource" "upload_directory" {
provisioner "local-exec" {
command = "docker build -t my-app . && docker run -d -p 8080:8080 my-app"
}
}
Arguments of local-exec
The following are the arguments supported:
command
The command argument is a required parameter of the local-exec provisioner that specifies the command to be executed on the local machine. The command is executed in a shell, so it can use environment variables for variable substitution and can also interpolate Terraform variables, which makes it possible to customize the command's behavior.
However, there is an important security consideration when using Terraform variables in the command. Avoid interpolating Terraform variables directly into the command, because doing so can create a shell injection vulnerability. Shell injection vulnerabilities occur when an attacker injects malicious commands into a system by exploiting the way commands are constructed and executed. If the command interpolates Terraform variables, an attacker who controls those values could inject malicious code into the system.
A good practice to avoid shell injection vulnerabilities is to pass variables to the command through the environment parameter and use environment variable substitution in the command itself. This way, variables are passed to the command without exposing your system to security risks, as sketched below.
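A minimal sketch of this safer pattern, assuming a Terraform variable named file_name, might look like the following; the value is passed through the environment argument and the shell expands $FILE_NAME, instead of Terraform interpolating the value directly into the command string:
variable "file_name" {
type = string
}
resource "null_resource" "safe_command" {
provisioner "local-exec" {
# the shell substitutes $FILE_NAME at run time; no Terraform interpolation in the command
command = "touch \"$FILE_NAME\""
environment = {
FILE_NAME = var.file_name
}
}
}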
working_dir
working_dir is an optional parameter that lets you specify the working directory in which the command is executed. It accepts both relative and absolute paths. working_dir is useful when running scripts, because a script may need to be run from a particular local directory or may rely on files or other resources located in a specific directory.
Here's a sample of how you would use the working_dir argument in a Terraform resource:
resource "null_resource" "running_script" {
provisioner "local-exec" {
command = "bash_script.sh 'argument1' 'argument2'"
working_dir = "/home/localuser/scripts-dir"
}
}
In this example, the working_dir parameter tells Terraform to execute bash_script.sh in the /home/localuser/scripts-dir directory rather than in the current working directory. This ensures the script is found at the right location and can access any files or resources it needs within that directory.
interpreter
The interpreter argument is an optional parameter in Terraform that sets the interpreter, and its arguments, used to execute the command. It is useful when a command needs to be run with a specific interpreter, such as when running a shell script or a Python script.
The interpreter is a list of arguments where the first element is the interpreter itself, the program that will run the command. It could be a shell like /bin/bash, an interpreter like /usr/bin/python, or any other executable capable of running a command. The remaining elements are arguments that customize the interpreter's behavior; they are added before the command. The interpreter can be specified either as a relative path from the current working directory or as an absolute path.
Here is an example of how you could use the interpreter parameter:
resource "null_resource" "running_interpreter" {
provisioner "local-exec" {
command = "echo 'foo'"
interpreter = ["/bin/bash", "-c"]
}
}
When Terraform runs the local-exec provisioner, it executes the equivalent of the following command line in a terminal:
/bin/bash -c "echo 'foo'"
If the interpreter argument is not provided, Terraform will choose a default interpreter based on the operating system. For Unix-like systems this is generally /bin/sh, and for Windows something like cmd.exe.
environment
The environment parameter is an optional parameter in Terraform that lets you pass environment variables to the command. In addition to variables set through the environment argument, environment variables set in the shell running Terraform can also be accessed by the command.
Now, suppose you wish to execute a curl command that relies on two environment variables: API_KEY, which is expected to be set in the shell environment, and METHOD, which is set through the environment argument in Terraform.
resource "null_resource" "passing_environment" {
provisioner "local-exec" {
command = "ecurl -X $METHOD 'https://api.example.com/data' -H 'Authorization: Bearer $API_KEY'"
environment = {
METHOD = "GET"
}
}
}
When Terraform runs this command, it will use the METHOD value specified in the environment block and the API_KEY value defined in the shell environment.
when
The when parameter is an optional attribute that lets a provisioner define when its command should actually run. By default, Terraform runs provisioners during the create lifecycle phase, meaning the command executes when the associated resource is created. You can use the when parameter to change when the command should run.
There are two values that are possible for the when parameter:
- create: This is the default behavior. The command is executed when the associated resource is created.
- destroy: The command is executed when the associated resource is destroyed.
Below is a sample usage of the when parameter:
resource "null_resource" "provisioner_when" {
provisioner "local-exec" {
command = "echo 'I will run when this resource is destroyed!'"
when = destroy
}
}
In this example, the command will run only when the null_resource is being destroyed.
quiet
The quiet parameter is an optional boolean parameter in Terraform. If true, Terraform will not print the actual command being executed to standard output (commonly the console or terminal window); instead, it prints the message "Suppressed by quiet=true". However, the output of the executed command is still printed, regardless of the quiet setting.
Here is how you might use the quiet parameter in a Terraform configuration:
resource "null_resource" "provisioner_when" {
provisioner "local-exec" {
command = "echo 'This is the command output.'"
quiet = true
}
}
When Terraform runs this configuration, instead of printing to the console the command being executed, you'll simply see a message which says "Suppressed by quiet=true". The actual output of the echo command, which is "This is the command output.", will still be printed to the console.
remote-exec Provisioner
The remote-exec provisioner in Terraform is responsible for running some scripts or commands on a remote resource, like a virtual machine or a container, right after the resource has just been created. This provisioner is useful to run a configuration management tool like Ansible or Puppet to configure the remote resource, install software or applications on the remote resource, run a script to configure the remote resource for something specific, or any other remote configuration tasks.
The remote-exec provisioner needs a connection to the remote resource using SSH or WinRM, depending on the operating system of the target machine. Once connected, it runs one or more scripts or commands on the remote resource.
The remote-exec provisioner is similar to the local-exec provisioner; the key difference is that local-exec runs a command on the local machine where Terraform is running, while remote-exec runs a command on a remote resource.
The following is an example that uses the remote-exec provisioner:
resource "aws_instance" "example_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
provisioner "remote-exec" {
connection {
type = "ssh"
host = self.public_ip
user = "root"
password = "rootuser"
}
inline = [
"sudo puppet apply /etc/puppet/manifests/site.pp"
]
}
}
In the example above, the remote-exec provisioner will SSH into the AWS instance and execute the puppet apply command to apply the Puppet manifest file /etc/puppet/manifests/site.pp. This could be used to configure the instance with a specific setup or configuration.
Arguments of the remote-exec Provisioner
The remote-exec provisioner can take three types of arguments to execute commands on a remote resource:
inline
The inline argument is a list of command strings that will be executed on the remote resource. When using the inline argument, the provisioner executes the commands using a default shell. The most common default shell is /bin/sh (the Bourne shell), but it can vary based on the OS and how the remote resource is configured.
If you would like to control which shell is used to execute your commands, you can specify it as the first entry in the list of commands (for example, a #!/bin/bash shebang line). The inline argument cannot be combined with the script or scripts arguments; all three are mutually exclusive, and only one of them can be used in a single remote-exec provisioner block.
The following example illustrates the use of the inline argument:
resource "aws_instance" "example_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
provisioner "remote-exec" {
connection {
type = "ssh"
host = self.public_ip
user = "root"
password = "rootuser"
}
inline = [
"#!/bin/bash"
"sudo apt-get update"
"sudo apt-get install -y apache2"
"sudo service apache2 start"
]
}
}
Here, the inline argument is used with the Bash shell (via the #!/bin/bash line) to run three commands on the remote resource.
script
The script argument is a path to a script file that will be executed on the remote server, instead of providing a list of commands directly in your Terraform configuration. The script file is first copied to the remote machine using scp and then executed there. This argument accepts both relative and absolute paths to the script file. You can't use the script argument together with the inline or scripts arguments.
For example, consider having a script file called my_script.sh on your local machine, and you want to execute it against a remote resource using the remote-exec provisioner in Terraform. Here is how you would pass the script argument.
resource "null_resource" "provisioner_script" {
provisioner "remote-exec" {
connection {
type = "ssh"
host = aws_instance.example_ec2_instance.public_ip
user = "root"
private_key = file("./private/key")
}
script = "../../../my_script.sh"
}
}
When you run the Terraform configuration, this will copy the my_script.sh file to the remote resource and execute it. This allows for the execution of more complex scripts on the remote machine, rather than just a list of commands using the inline argument.
scripts
The scripts argument is an extension of the script argument. It allows you to specify a list of script files to execute on the remote machine, which is useful when you need to run multiple scripts in a particular order. Terraform copies the script files to the remote machine using scp and executes them in the order specified. This argument accepts both relative and absolute paths to the script files. Using the scripts argument with the inline or script arguments isn't allowed.
Suppose there are two script files, script1.sh and script2.sh, in the same directory as your Terraform configuration file, and you would like them executed on your remote machine in order.
resource "null_resource" "provisioner_scripts" {
provisioner "remote-exec" {
connection {
type = "ssh"
host = aws_instance.example_ec2_instance.public_ip
user = "root"
private_key = file("./private/key")
}
scripts = [
"./script1.sh",
"./script2.sh"
]
}
}
The scripts argument is given a list of two script files, script1.sh and script2.sh. Terraform will copy both script files to the remote machine and execute them in the order they appear in the list: script1.sh first and then script2.sh.
Creation-Time Provisioners
Provisioners in Terraform perform their tasks during a resource's creation or deletion lifecycle phase. By default, all provisioners execute when the resource is created, during the creation lifecycle phase, and not during other lifecycle events such as updating or deleting the resource; such provisioners are called creation-time provisioners. Creation-time provisioners are typically used for "bootstrapping" a system. Bootstrapping is the process of giving a resource its initial configuration and basic settings when it is created.
If a creation-time provisioner fails, Terraform marks the resource as "tainted". To rectify this, Terraform plans the deletion and recreation of the tainted resource on the next run of terraform apply. Terraform does so because a failed provisioner might leave the resource in a semi-configured state, which normally causes undesired behavior or results. This process is known as Tainting.
However, you can skip the Tainting process by using the on_failure attribute in provisioners. The on_failure attribute allows you to customize what your provisioners do in the event that a failure occurs.
Example: Running a script to configure an EC2 instance
Suppose you want to create an EC2 instance through Terraform and want Terraform to run a script on the instance to configure with some initial settings, like installing a web server and configuring the firewall.
resource "aws_instance" "example_ec2_instance" {
ami = "ami-0123456789"
instance_type = "t2.micro"
provisioner "remote-exec" {
connection {
type = "ssh"
host = self.public_ip
user = "root"
password = "rootuser"
}
inline = [
"#!/bin/bash"
"sudo apt update"
"sudo apt install -y apache2"
"sudo ufw allow 'Apache'"
"sudo systemctl enable apache2"
"sudo systemctl start apache2"
]
}
}
Here, the provisioner SSHes into the instance, updates the package list, installs Apache, configures the firewall to allow HTTP traffic, and finally starts the Apache service using the inline script. This provisioner runs while the EC2 instance is being created; if it fails, Terraform marks the resource as tainted and plans to delete and recreate it on the next run.
Destroy-Time Provisioners
A provisioner that runs during the deletion lifecycle phase of a resource is called a Destroy-Time Provisioner. Destroy provisioners run before the resource is destroyed. If a Destroy-Time Provisioner fails (i.e. the script or command returns an error), Terraform will halt the deletion of that resource and report an error. Terraform will also mark the resource as "tainted", and on the next invocation of terraform apply it will try to run the Destroy-Time Provisioner again in an attempt to recover from the failure. This ensures that the resource is never deleted unless the provisioner's task has completed successfully.
The purpose of destroy-time provisioners is to perform cleanup or maintenance on a resource before it is ultimately removed from the infrastructure. They are useful in scenarios such as resource cleanup, graceful shutdown, final backups, and more.
Let's say you have a Terraform configuration that provisions an AWS RDS database instance. When you're ready to delete the database instance, you want to make sure you take a final database snapshot before the instance is deleted, so that you have a backup of your database in case you need to restore it later. You can use a Destroy-Time Provisioner to run a script that takes a final snapshot of the database before the instance is destroyed.
resource "aws_db_instance" "database" { allocated_storage = 2 engine = "mysql" instance_class = "db.t2.micro" db_name = "test" username = "myuser" password = "mypassword" provisioner "local-exec" { when = destroy command = "aws rds create-db-snapshot --db-instance-identifier ${self.id} --db-snapshot-identifier final-snapshot" } }
In the above example, aws_db_instance "db_instance " has a Destroy-Time Provisioner which executes a script creating a final database snapshot using AWS CLI. When a provisioner is configured with
when = destroy
, the provisioner will run only during the destroy phase of resource lifecycle. The command argument specifies the script that should be run which takes a snapshot of the database instance.
The destroy provisioner will only run if its containing resource block remains in the Terraform configuration at the time of destruction. If the whole resource block, including the provisioner, is removed from configuration, the destroy provisioner won't run.
This can be problematic when you want to remove a resource that has a destroy-time provisioner. If you just remove the resource block from the configuration, the destroy provisioner won't run, and the resource might not get properly cleaned up. Due to this limitation, Terraform suggests a multi-step process to safely remove a resource with a destroy-time provisioner:
- First, add count = 0 to the resource configuration (see the sketch after this list). This tells Terraform to destroy any existing instances of that resource.
- Run terraform apply. This will destroy the existing instances of the resource and, in the process, run the destroy-time provisioner.
- Once the resources have been destroyed, you can remove the entire resource block, including the provisioner blocks, from the Terraform configuration.
- Run terraform apply again. This time it should do nothing since the resources were destroyed in the above step.
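As a sketch of the first step, using the aws_db_instance example from above, the resource block temporarily gains count = 0 while its destroy-time provisioner stays in place:
resource "aws_db_instance" "database" {
# setting count to 0 plans the destruction of the existing instance
count = 0
allocated_storage = 2
engine = "mysql"
instance_class = "db.t2.micro"
db_name = "test"
username = "myuser"
password = "mypassword"
provisioner "local-exec" {
when = destroy
command = "aws rds create-db-snapshot --db-instance-identifier ${self.id} --db-snapshot-identifier final-snapshot"
}
}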
Multiple Provisioners
Terraform supports multiple provisioners in one resource block. The order in which the provisioners will run is the order they have been declared in the configuration file. You can combine creation-time and destroy-time provisioners on the same resource block. Terraform will only run the provisioners that are relevant for the current operation (i.e., creation or destruction).
resource "null_resource" "provisioner_when" {
provisioner "local-exec" {
when = destroy
command = "echo 'First Destroy provisioner'"
}
provisioner "local-exec" {
command = "echo 'First Creation provisioner'"
}
provisioner "local-exec" {
command = "echo 'Second Creation provisioner'"
}
provisioner "local-exec" {
when = destroy
command = "echo 'Second Destroy provisioner'"
}
}
In this example, the null_resource named provisioner_when has four provisioners: two run during creation (terraform apply) and two run during destruction (terraform destroy).
on_failure
When a provisioner does encounter an error, it has the potential to affect the entire run of Terraform. By default, if a provisioner fails, Terraform will itself fail and not continue running. But you can change this behavior using the on_failure setting. The allowed values are:
- continue: If the provisioner fails, Terraform ignores the error and proceeds with the creation or destruction process.
- fail: If the provisioner fails, Terraform raises an error and halts the entire operation. This is the default behavior. If it is a creation provisioner, Terraform will also mark the resource as "tainted".
Example: Generating an SSH Key Pair
Consider a resource that requires a generated SSH key pair. If generating the key pair fails for some reason, you might not want the resource to be created successfully.
resource "null_resource" "ssh_key" {
provisioner "local-exec" {
command = "ssh-keygen -t rsa -b 4096 -C 'my_ssh_key' -f my_ssh_key"
on_failure = fail
}
}
In the above example, if the ssh-keygen command fails to create the SSH key pair (because of a permission or disk space issue, for example), Terraform will report an error that the provisioner failed and immediately stop creating the null_resource resource. Also, because this is a creation-time provisioner, the resource is marked as "tainted" so that it will be recreated from scratch on the next terraform apply.
Example: Sending a Notification Email
Suppose you have a resource that, upon creation, sends out an email notification to a certain email address. Suppose, for some reason, sending the notification email fails, but you still want the resource to be created.
resource "null_resource" "send_email_notification" {
provisioner "local-exec" {
command = "curl -X POST 'https://api.net/v3/mydomain.com/messages' -H 'Authorization: Bearer MY_API_KEY' -H 'Content-Type: application/json' -d '{\"from\": \"[email protected]\", \"to\": \"[email protected]\", \"subject\": \"Resource ${self.id} Created\"}'"
on_failure = continue
}
}
In the above example, if the curl command fails to send the email due to a network issue or an API rate limit, Terraform will ignore the failure and continue with the remaining operations of creating the null_resource resource instead of aborting the run.