We'll create a serverless data storing and sharing application — using Terraform! We will be able to send messages to it and retrieve them with a unique message ID.
Each message consists of a sender, a recipient, and the message text.
I've taught, mentored, and worked in cloud infrastructure for 10+ years. Many Terraform courses I took only built virtual machines and datacenter-like resources, but failed to teach and take advantage of modern cloud products and resources, which also allow building applications that are cheaper, more scalable and need less administration and management.
We'll build things, break things, and learn fast — together.
Enroll now and get started with real-world Terraform projects.
AWS: A Cloud Service Provider owned by Amazon. You can host servers, databases, networks and firewalls there, just like in a data center.
Resource: A Resource is an AWS object like a virtual machine, a database or a network. Resources are the building blocks of your application.
Deployment: A Deployment is the process of creating and configuring resources.
Other Cloud Service Providers: Google Cloud Platform (GCP), Microsoft Azure, Digital Ocean, Akamai / Linode, Vultr, Hetzner, Alibaba Cloud, Oracle Cloud
SNS Topics are AWS resources for sending messages to a group of subscribers.
SNS Topics also deploy very quickly, and their names only have to be unique within one account and region (topics with the same name can exist in different accounts), making them ideal candidates to test the connection between IaC tools like Terraform and AWS.
Provide a Name or Identifier for this resource
We just pick the bare minimum default options
Scroll to the bottom and click the "Create topic" button
That's it, the topic resource got created.
Click on "Topics" on the left to verify everything is as expected
IaC makes tracking resources and deployments into AWS reliable and reproducible. IaC is scalable.
The code can be used as documentation.
IaC languages are not general-purpose programming languages or frameworks.
IaC languages are declarative.
When applied, they update the state of a system to match the IaC configuration, as the short example below illustrates.
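As a minimal sketch (using the SNS topic resource we will also build later in this course; the names here are purely illustrative), a declarative configuration only describes the desired end state, and Terraform works out which create, change or delete actions are needed to reach it:
# Desired state: exactly one SNS topic with this name should exist.
# Terraform compares this description with what is actually deployed
# in AWS and only performs the changes needed to close the gap.
resource "aws_sns_topic" "example" {
  name = "my-declared-topic"
}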
Terraform is an IaC framework and command line tool (see the official tutorial).
Nomenclature:
Providers: Connectors to cloud providers like AWS or other services.
Command Line access: Powershell (Windows) / Terminal (Mac/Linux)
Actions will be done both in Command Line and through Graphical User Interface (GUI) where possible.
PS C:\> mkdir C:\terraform
PS C:\> cd C:\terraform
Go to the Terraform Download Page
Download Terraform for Windows
Alternatively, download it through PowerShell. Be aware that the command below downloads version 1.11.4; you can replace the URL with the link from the Download Page to get the latest version.
PS C:\> Invoke-WebRequest -Uri https://releases.hashicorp.com/terraform/1.11.4/terraform_1.11.4_windows_amd64.zip -OutFile "$env:USERPROFILE\Downloads\terraform_1.11.4_windows_amd64.zip"
Unzip terraform into the directory we created
PS C:\terraform> Expand-Archive -LiteralPath "$env:USERPROFILE\Downloads\terraform_1.11.4_windows_amd64.zip" -DestinationPath 'C:\terraform\'
If everything worked correctly, typing terraform in the terminal should now show an output listing the available Terraform commands, like this (abbreviated):
PS C:\terraform> .\terraform.exe
....
Main commands:
init Prepare your working directory for other commands
validate Check whether the configuration is valid
plan Show changes required by the current configuration
apply Create or update infrastructure
destroy Destroy previously-created infrastructure
....
For Terraform to know which cloud and which account to interact with, we need the AWS provider: the connector between Terraform and AWS.
Human users: log in to the dashboard and interact with AWS in unpredictable ways; testing new features, developing new products, etc.
Machine users: interact with or within Terraform in very predictable ways and patterns, doing repetitive work; for example creating files (e.g. log files) in the same location or checking the status of systems every few minutes.
Login again to AWS Dashboard and search for IAM
Click on Users
Create Users
Provide a name for the user, but do not enable dashboard access, since we only use this user for Terraform.
Attach the PowerUserAccess policy
Review, verify and create
When finished we can see the new user in the IAM Dashboard
AWS provides username and password logins to the Dashboard for human users.
For machine users, like terraform, AWS provides Access and secret keys. In this section we create those for the terraform user we made earlier.
Important: if you don't download or save the keys at this point, you will not be able to do so later and won't be able to use them. You would then have to create new keys and should delete the ones created here.
The information from the CSV file or the dashboard should look like this:
Never share those credentials and treat them with the highest security mindset.
The credentials below are fake and are only shown for illustrative purposes.
Access key ID,Secret access key
AKIA5FAKEKEYSTUFF,899-fake&secret&key
In our terraform directory, we create a new file called providers.tf and put in the code for an aws provider with the credentials from the previous section.
provider "aws" {
region = "us-east-2"
access_key = "AKIA5FAKEKEYSTUFF"
secret_key = "899-fake&secret&key"
}
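Hardcoding the keys like this is the quickest way to get started, but providers.tf then contains secrets. As an optional alternative (not required for this course), the AWS provider can also read the credentials from the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, so the provider block itself stays free of secrets. A sketch:
# providers.tf without hardcoded secrets.
# The AWS provider falls back to the AWS_ACCESS_KEY_ID and
# AWS_SECRET_ACCESS_KEY environment variables when access_key
# and secret_key are not set here.
provider "aws" {
  region = "us-east-2"
}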
With the aws provider added we run terraform init to install the provider. The command output should look like this:
PS C:\terraform> .\terraform.exe init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v5.94.1...
- Installed hashicorp/aws v5.94.1 (signed by HashiCorp)
....
Terraform has been successfully initialized!
If you run terraform without any arguments you get a list of all available commands. The output should look like this:
PS C:\terraform> .\terraform.exe
Usage: terraform [global options] <subcommand> [args]
The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.
Main commands:
init Prepare your working directory for other commands
validate Check whether the configuration is valid
plan Show changes required by the current configuration
apply Create or update infrastructure
destroy Destroy previously-created infrastructure
All other commands:
console Try Terraform expressions at an interactive command prompt
fmt Reformat your configuration in the standard style
force-unlock Release a stuck lock on the current workspace
get Install or upgrade remote Terraform modules
graph Generate a Graphviz graph of the steps in an operation
import Associate existing infrastructure with a Terraform resource
login Obtain and save credentials for a remote host
logout Remove locally-stored credentials for a remote host
metadata Metadata related commands
modules Show all declared modules in a working directory
output Show output values from your root module
providers Show the providers required for this configuration
refresh Update the state to match remote systems
show Show the current state or a saved plan
state Advanced state management
taint Mark a resource instance as not fully functional
test Execute integration tests for Terraform modules
untaint Remove the 'tainted' state from a resource instance
version Show the current Terraform version
workspace Workspace management
Global options (use these before the subcommand, if any):
-chdir=DIR Switch to a different working directory before executing the
given subcommand.
-help Show this help output, or the help for a specified subcommand.
-version An alias for the "version" subcommand.
In this section we learn about the subcommands that Terraform uses to do its job; mainly these are init, plan, apply and destroy. The remaining commands are more advanced and I will cover them in a future video of this series.
To find the necessary syntax for a given resource, we can go to the AWS provider docs on the HashiCorp website.
To test that Terraform works correctly we create a test resource. It should need very little code, deploy quickly and be able to exist on its own.
SNS topics are perfect for this: they only need 3 lines of code, deploy within a few seconds and can exist by themselves. VPCs (virtual networks) need more code, and S3 bucket names have to be globally unique, making them less suitable for testing.
So let's create a file named main.tf and put in the following code. The code and more info can be found here
resource "aws_sns_topic" "test_topics" {
name = "first-test-resource"
}
Running terraform plan we get output like the one below, showing us which resources Terraform would create (marked with a green +), destroy (marked with a red -) or change (marked with a yellow ~).
PS C:\terraform> .\terraform.exe plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_sns_topic.test_topics will be created
+ resource "aws_sns_topic" "test_topics" {
+ arn = (known after apply)
+ beginning_archive_time = (known after apply)
+ content_based_deduplication = false
+ fifo_topic = false
+ id = (known after apply)
+ name = "first-test-resource"
+ name_prefix = (known after apply)
+ owner = (known after apply)
+ policy = (known after apply)
+ signature_version = (known after apply)
+ tags_all = (known after apply)
+ tracing_config = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Running terraform apply we first get the same plan output as before, and Terraform then asks for confirmation before it actually creates the resources.
PS C:\terraform> .\terraform.exe apply
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_sns_topic.test_topics will be created
+ resource "aws_sns_topic" "test_topics" {
+ arn = (known after apply)
+ beginning_archive_time = (known after apply)
+ content_based_deduplication = false
+ fifo_topic = false
+ id = (known after apply)
+ name = "first-test-resource"
+ name_prefix = (known after apply)
+ owner = (known after apply)
+ policy = (known after apply)
+ signature_version = (known after apply)
+ tags_all = (known after apply)
+ tracing_config = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_sns_topic.test_topics: Creating...
aws_sns_topic.test_topics: Creation complete after 1s [id=arn:aws:sns:us-east-2:929529788344:first-test-resource]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
After removing the SNS topic from the main.tf file and running terraform plan, we see in the output below how Terraform would remove the SNS topic if we ran terraform apply.
PS C:\terraform> .\terraform.exe plan
aws_sns_topic.test_topics: Refreshing state... [id=arn:aws:sns:us-east-2:929529788344:first-test-resource]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
# aws_sns_topic.test_topics will be destroyed
# (because aws_sns_topic.test_topics is not in configuration)
- resource "aws_sns_topic" "test_topics" {
- application_success_feedback_sample_rate = 0 -> null
- arn = "arn:aws:sns:us-east-2:929529788344:first-test-resource" -> null
- content_based_deduplication = false -> null
- fifo_topic = false -> null
- firehose_success_feedback_sample_rate = 0 -> null
- http_success_feedback_sample_rate = 0 -> null
- id = "arn:aws:sns:us-east-2:929529788344:first-test-resource" -> null
- lambda_success_feedback_sample_rate = 0 -> null
- name = "first-test-resource" -> null
- owner = "929529788344" -> null
- policy = jsonencode(
{
- Id = "__default_policy_ID"
- Statement = [
- {
- Action = [
- "SNS:GetTopicAttributes",
- "SNS:SetTopicAttributes",
- "SNS:AddPermission",
- "SNS:RemovePermission",
- "SNS:DeleteTopic",
- "SNS:Subscribe",
- "SNS:ListSubscriptionsByTopic",
- "SNS:Publish",
]
- Condition = {
- StringEquals = {
- "AWS:SourceOwner" = "929529788344"
}
}
- Effect = "Allow"
- Principal = {
- AWS = "*"
}
- Resource = "arn:aws:sns:us-east-2:929529788344:first-test-resource"
- Sid = "__default_statement_ID"
},
]
- Version = "2008-10-17"
}
) -> null
- signature_version = 0 -> null
- sqs_success_feedback_sample_rate = 0 -> null
- tags_all = {} -> null
# (17 unchanged attributes hidden)
}
Plan: 0 to add, 0 to change, 1 to destroy.
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
Lambda functions are AWS resources that run code in the cloud without the need to provision a server. They can be used to retrieve, send or edit information. See AWS Lambda.
Serverless API to read and save arbitrary text data
In this course we will be focusing on building resources; in the second part of this series we will dive deeper into the more advanced topics and structures of Terraform.
Create a new file named database.tf; it will contain the configuration for a database (a DynamoDB table) in AWS.
resource "aws_dynamodb_table" "data_store" {
name = "data-store"
hash_key = "id"
billing_mode = "PAY_PER_REQUEST"
attribute {
name = "id"
type = "N"
}
What does this code do?
resource "aws_dynamodb_table" "data_store" {
name = "our-db"
hash_key = "id"
billing_mode = "PAY_PER_REQUEST"
attribute {
name = "id"
type = "N"
}
Line 1 tells Terraform this block is a resource of type aws_dynamodb_table with the reference name data_store. Line 2 sets the AWS name to data-store. Important: data_store is the reference used inside Terraform, while data-store is the name in AWS; AWS has no concept of the Terraform reference name. Line 3 sets the hash_key to the field "id", and Line 4 sets the billing mode to PAY_PER_REQUEST, which means you are charged based on the number of requests made to the table; since we will not make a lot of requests, this is the cheapest option. Line 5 defines an attribute for the table, in this case the attribute id as a number type (N). Finally, the tags block tags the resource with the project name udemy.
Lambda functions can execute code in AWS without the need to provision servers. The Terraform configuration can be seen here
In this course we are using three different objects to manage permissions in AWS.
Roles differ from users as they are meant to be used by AWS services. They also don't have a password or access keys; instead, they are assumed by other AWS services or users.
IAM Policies are documents written in JSON that define permissions for AWS resources. They can be attached to users, groups or roles. They are used to define what actions are allowed or denied on specific resources.
For the Lambda role to interact with other resources in AWS (API Gateway and the DynamoDB table) it needs those permissions in AWS. This can be done by using an IAM role. The Terraform configuration can be seen here
Create a new file named iam.tf
resource "aws_iam_role" "test_role" {
name = "test_role"
# Terraform's "jsonencode" function converts a
# Terraform expression result to valid JSON syntax.
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
}
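The assume role policy above only defines who may assume the role; it does not grant any permissions on other resources yet. As a sketch of how access to the database could be granted (the resource name data_store_access and the action list are illustrative assumptions, not part of the course files), an inline role policy referencing the table from database.tf might look like this:
# Illustrative inline policy that lets the role read and write
# items in the DynamoDB table defined in database.tf.
resource "aws_iam_role_policy" "data_store_access" {
  name = "data-store-access"
  role = aws_iam_role.test_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["dynamodb:GetItem", "dynamodb:PutItem"]
        Resource = aws_dynamodb_table.data_store.arn
      },
    ]
  })
}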
We create a new file named send_lambda.tf (a sketch of what it could contain follows after this note on file layout). Some Terraform projects only have 3-4 files, named something like main.tf, variables.tf, outputs.tf and providers.tf. When Terraform runs in a directory it reads all files with a Terraform extension (.tf, .tf.json) and then organizes the resources and builds or updates them. I recommend using meaningful file names, so that we can relay information to a reader without them having to open the file.
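As a minimal sketch only (the function name, handler, runtime and zip file below are assumptions for illustration; the actual function code is covered later in the course), a Lambda function resource in send_lambda.tf could look like this:
# Minimal Lambda function definition.
# Assumes the function code is zipped into send_message.zip and
# exposes a Python handler called lambda_handler.
resource "aws_lambda_function" "send_message" {
  function_name = "send-message"
  role          = aws_iam_role.test_role.arn
  filename      = "send_message.zip"
  handler       = "send_message.lambda_handler"
  runtime       = "python3.12"
}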
Serverless API to read and save arbitrary text data