A preparation week
gp env AWS_ACCESS_KEY_ID=whatever
gp env AWS_SECRET_ACCESS_KEY=whatever
gp env AWS_DEFAULT_REGION=eu-central-1
For these env vars to be recognized during Terraform processing, it is good to also add them with the "TF_VAR_" prefix:
gp env TF_VAR_AWS_ACCESS_KEY_ID=whatever
gp env TF_VAR_AWS_SECRET_ACCESS_KEY=whatever
gp env TF_VAR_AWS_DEFAULT_REGION=eu-central-1
The next time Gitpod starts the env, these Gitpod vars will also be available as normal env vars.
And the command to test the AWS credentials is aws sts get-caller-identity
I am using the random provider
terraform init
terraform plan
terraform apply
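A minimal main.tf using the random provider (the resource and output names here are just illustrative):

```hcl
terraform {
  required_providers {
    random = {
      source = "hashicorp/random"
    }
  }
}

# Generates a random lowercase string, e.g. usable as a bucket name suffix
resource "random_string" "bucket_name" {
  length  = 16
  special = false
  upper   = false
}

output "random_bucket_name" {
  value = random_string.bucket_name.result
}
```

After terraform apply, the generated value is shown in the outputs and kept stable in state across subsequent plans.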
There are also Terraform modules.
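A module is called from the root module with a module block; a sketch with hypothetical names:

```hcl
module "website" {
  source      = "./modules/website"  # hypothetical local module path
  bucket_name = var.bucket_name      # input variable passed into the module
}
```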
After adding an additional provider, we need to run
terraform init -upgrade
We can destroy/delete whatever was previously created with
terraform destroy
Instructions on how to migrate this infra to the cloud: https://developer.hashicorp.com/terraform/tutorials/cloud/cloud-migrate
To connect to Terraform Cloud, we need to place a token into
open /home/gitpod/.terraform.d/credentials.tfrc.json
and then
terraform login
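The expected format of credentials.tfrc.json (the token value is a placeholder):

```json
{
  "credentials": {
    "app.terraform.io": {
      "token": "YOUR-TERRAFORM-CLOUD-TOKEN"
    }
  }
}
```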
Variables for Terraform Cloud should be set in the Workspace: https://app.terraform.io/app/qbeckmansion/workspaces/terra-house-1/variables
And the deployed resources can be seen here: https://app.terraform.io/app/qbeckmansion/workspaces/terra-house-1
Terraform has variables. They can be declared in variables.tf and can be overridden on the command line:
terraform apply -var name="value"
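A sketch of a declaration in variables.tf (the variable name is just an example):

```hcl
variable "bucket_name" {
  description = "Name of the S3 bucket"
  type        = string
  default     = "my-default-bucket"
}
```

It can then be overridden at apply time, e.g. terraform apply -var bucket_name="other-bucket".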
For modules, if there is a new output that the module should return, it should be defined
- first, in the module's outputs.tf
- second, in the root module's outputs.tf
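A sketch of passing an output through, assuming a hypothetical module named website:

```hcl
# modules/website/outputs.tf (inside the module)
output "bucket_name" {
  value = aws_s3_bucket.website.bucket
}

# outputs.tf (in the root module)
output "bucket_name" {
  value = module.website.bucket_name
}
```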
There is also the terraform_data resource. From the docs:
Plain data values such as Local Values and Input Variables don't have any side-effects to plan against and so they aren't valid in replace_triggered_by. You can use terraform_data's behavior of planning an action each time input changes to indirectly use a plain value to trigger replacement.
https://developer.hashicorp.com/terraform/language/resources/terraform-data
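The pattern from the linked doc, sketched with hypothetical resource and variable names:

```hcl
# Re-uploading a file whenever a plain input variable changes:
resource "terraform_data" "content_version" {
  input = var.content_version
}

resource "aws_s3_object" "index" {
  bucket = aws_s3_bucket.website.id
  key    = "index.html"
  source = "${path.root}/public/index.html"

  lifecycle {
    # plain values are not allowed here directly,
    # so we go through terraform_data
    replace_triggered_by = [terraform_data.content_version]
  }
}
```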
In Terraform Cloud we can have a local or remote project run (plan or apply). For local, the entire run happens locally on our own machine (e.g. in a Gitpod env, in Docker, or on our own computer). For remote, everything happens on Terraform Cloud; then all env variables must be set in Terraform Cloud as well. Separately, we can connect Terraform Cloud to our repo and a given branch, and trigger a Terraform Cloud run when new code is pushed to that branch. In this case, the project should be in "remote" Execution Mode.
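To attach the project to the workspace, a cloud block goes into the terraform block (organization and workspace names taken from the workspace URLs above):

```hcl
terraform {
  cloud {
    organization = "qbeckmansion"

    workspaces {
      name = "terra-house-1"
    }
  }
}
```

After adding this, terraform init reconnects the project to the Terraform Cloud workspace.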
aws s3 cp public/index.html s3://bartosz-tfc-bootcamp/index.html
Provisioners allow you to execute commands on compute instances, e.g. an AWS CLI command.
They are not recommended by HashiCorp because Configuration Management tools such as Ansible are a better fit, but the functionality exists.
local-exec will execute a command on the machine running the Terraform commands, e.g. plan or apply:
resource "aws_instance" "web" {
  # ...

  provisioner "local-exec" {
    command = "echo The server's IP address is ${self.private_ip}"
  }
}
https://developer.hashicorp.com/terraform/language/resources/provisioners/local-exec
remote-exec will execute commands on a machine which you target. You will need to provide credentials, such as SSH, to get into the machine:
resource "aws_instance" "web" {
  # ...

  # Establishes connection to be used by all
  # generic remote provisioners (i.e. file/remote-exec)
  connection {
    type     = "ssh"
    user     = "root"
    password = var.root_password
    host     = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "puppet apply",
      "consul join ${aws_instance.web.private_ip}",
    ]
  }
}
https://developer.hashicorp.com/terraform/language/resources/provisioners/remote-exec
And the command to list files, run inside the Terraform console:
terraform console
> fileset("${path.root}/public/assets", "*")
for_each allows us to enumerate over complex data types; relatedly, for expressions transform collections:
[for s in var.list : upper(s)]
This is mostly useful when you are creating multiples of a cloud resource and you want to reduce the amount of repetitive Terraform code.
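A sketch combining for_each with the fileset call from above (bucket and path names are assumptions):

```hcl
# One aws_s3_object per file found under public/assets
resource "aws_s3_object" "assets" {
  for_each = fileset("${path.root}/public/assets", "*")

  bucket = aws_s3_bucket.website.id
  key    = "assets/${each.key}"
  source = "${path.root}/public/assets/${each.key}"
  etag   = filemd5("${path.root}/public/assets/${each.key}")
}
```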
You can create a custom provider by writing a Go application that provides a set of CRUD operations. A good starting point is to use the scaffolding project and/or going through this doc.
The way it works is that in TF we can create a new resource, and when executing terraform apply, the provider will use the Create Go function from the CRUD set. If we make a change in the resource and then run terraform apply again, the Update operation will be executed.
These CRUD operations can do anything, e.g. call an external REST API endpoint of our custom service.
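A minimal sketch of the idea in Go. This is not the real terraform-plugin-framework API (the real SDK is HashiCorp's terraform-plugin-framework); it is a simplified stand-in showing how Terraform lifecycle steps map onto CRUD calls, backed here by an in-memory store instead of a real REST endpoint, with hypothetical names throughout:

```go
package main

import "fmt"

// Attrs stands in for the resource attributes Terraform would pass (simplified).
type Attrs map[string]string

// CRUDProvider is a simplified stand-in for the set of CRUD operations
// a custom Terraform provider implements for a resource type.
type CRUDProvider struct {
	store  map[string]Attrs
	nextID int
}

func NewCRUDProvider() *CRUDProvider {
	return &CRUDProvider{store: map[string]Attrs{}}
}

// Create runs on the first terraform apply for a new resource;
// in a real provider it could POST to our custom service's REST API.
func (p *CRUDProvider) Create(a Attrs) string {
	p.nextID++
	id := fmt.Sprintf("res-%d", p.nextID)
	p.store[id] = a
	return id
}

// Read runs on terraform plan to refresh state, e.g. a GET request.
func (p *CRUDProvider) Read(id string) (Attrs, bool) {
	a, ok := p.store[id]
	return a, ok
}

// Update runs on terraform apply after the resource changed, e.g. a PUT.
func (p *CRUDProvider) Update(id string, a Attrs) {
	p.store[id] = a
}

// Delete runs on terraform destroy, e.g. a DELETE request.
func (p *CRUDProvider) Delete(id string) {
	delete(p.store, id)
}

func main() {
	p := NewCRUDProvider()

	// terraform apply on a brand-new resource -> Create
	id := p.Create(Attrs{"name": "my-town"}) // "my-town" is a made-up example value
	fmt.Println("created:", id)

	// change in the .tf file, terraform apply again -> Update
	p.Update(id, Attrs{"name": "my-renamed-town"})
	a, _ := p.Read(id)
	fmt.Println("name is now:", a["name"])

	// terraform destroy -> Delete
	p.Delete(id)
}
```

The point of the sketch is only the mapping: new resource → Create, changed resource → Update, refresh → Read, destroy → Delete.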