Introduction

Managing infrastructure manually through cloud provider consoles is error-prone, slow, and nearly impossible to reproduce consistently. A single misconfigured security group or forgotten subnet can cascade into hours of debugging. Infrastructure as Code (IaC) eliminates these problems by letting you define your entire infrastructure in declarative configuration files that can be versioned, reviewed, tested, and reused.

Terraform, developed by HashiCorp, is the most widely adopted IaC tool in the industry. It works across all major cloud providers — AWS, Azure, Google Cloud, and hundreds of others — using a single consistent workflow. Instead of clicking through web consoles or writing provider-specific scripts, you describe your desired infrastructure state in HashiCorp Configuration Language (HCL), and Terraform figures out how to make it happen.

What problems does Terraform solve?

  • Reproducibility: Deploy identical environments for development, staging, and production from the same configuration files.
  • Version control: Track every infrastructure change in Git, enabling code reviews and rollback capabilities.
  • Automation: Integrate infrastructure provisioning into CI/CD pipelines to eliminate manual steps.
  • Drift detection: Compare the actual state of your infrastructure against your configuration to detect unauthorized changes.
  • Dependency management: Terraform automatically determines the correct order to create, update, or destroy resources based on their dependencies.
  • Multi-cloud orchestration: Manage resources across multiple providers in a single configuration.

This guide walks you through everything you need to go from zero to deploying real cloud infrastructure with Terraform.


Prerequisites

Before starting, make sure you have:

  • A cloud provider account (AWS, Azure, or GCP). Most providers offer free tiers suitable for learning.
  • A terminal with shell access (bash, zsh, or PowerShell).
  • Git installed for version control.
  • A text editor with HCL syntax support. VS Code with the HashiCorp Terraform extension is recommended.
  • Basic familiarity with command-line operations and cloud computing concepts.

Installing Terraform

Terraform is distributed as a single binary. Choose the installation method for your operating system.

Linux (Ubuntu/Debian)

# Add the HashiCorp GPG key
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

# Add the official repository
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

# Install Terraform
sudo apt update && sudo apt install terraform

macOS

# Using Homebrew
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

Windows

# Using Chocolatey
choco install terraform

# Or using winget
winget install Hashicorp.Terraform

Verify the installation

terraform -version

You should see output similar to:

Terraform v1.6.6
on linux_amd64

Enable tab completion for your shell:

terraform -install-autocomplete

HCL Syntax Basics

Terraform configurations are written in HashiCorp Configuration Language (HCL). HCL is designed to be both human-readable and machine-parsable. Understanding its core syntax elements is essential.

Blocks

Blocks are the fundamental structural unit in HCL. Every piece of configuration is contained within a block.

# Block syntax: type "label1" "label2" { ... }
resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"
}

A block has a type (resource), zero or more labels ("aws_instance", "web_server"), and a body enclosed in curly braces containing arguments and nested blocks.

Arguments

Arguments assign values to names within a block:

resource "aws_instance" "web_server" {
  # Simple string argument
  instance_type = "t3.micro"

  # Numeric argument (count is also a meta-argument, covered later)
  count = 3

  # Boolean argument
  associate_public_ip_address = true

  # List argument
  vpc_security_group_ids = ["sg-abc123", "sg-def456"]

  # Map argument
  tags = {
    Name        = "web-server"
    Environment = "production"
  }
}

Expressions and References

HCL supports expressions for dynamic values:

# String interpolation
name = "server-${var.environment}"

# Attribute reference
subnet_id = aws_subnet.main.id

# Conditional expression
instance_type = var.environment == "production" ? "t3.large" : "t3.micro"

# Built-in functions
cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index)

# For expressions
upper_names = [for name in var.names : upper(name)]

Comments

# Single line comment

// Also a single line comment

/*
  Multi-line
  comment block
*/

Providers and Authentication

Providers are plugins that let Terraform interact with cloud platforms, SaaS services, and other APIs. You must declare which providers your configuration requires.

Declaring providers

# terraform block specifies provider version constraints
terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Provider configuration with region
provider "aws" {
  region = "us-east-1"
}

The ~> 5.0 version constraint allows any version in the 5.x range but not 6.0 or higher. This protects you from breaking changes in major releases.
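Other constraint syntaxes are available. A few common forms, shown side by side (the version numbers are illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"

      # Choose exactly one constraint style:
      # version = "5.31.0"         # exact version only
      # version = ">= 5.0"         # 5.0 or newer, including 6.x
      # version = ">= 5.0, < 6.0"  # equivalent to "~> 5.0"
      # version = "~> 5.31.0"      # >= 5.31.0 and < 5.32.0 (patch-level only)
      version = "~> 5.31"          # >= 5.31.0 and < 6.0.0
    }
  }
}
```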

Authentication methods

Never hardcode credentials in your Terraform files. Use environment variables or credential files instead.

AWS — Environment variables:

export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

AWS — Shared credentials file:

provider "aws" {
  region  = "us-east-1"
  profile = "my-profile"
}

Azure — Service principal:

export ARM_SUBSCRIPTION_ID="your-subscription-id"
export ARM_TENANT_ID="your-tenant-id"
export ARM_CLIENT_ID="your-client-id"
export ARM_CLIENT_SECRET="your-client-secret"

Azure — Provider configuration:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

Multiple provider configurations

You can configure the same provider multiple times for multi-region deployments:

provider "aws" {
  region = "us-east-1"
  alias  = "east"
}

provider "aws" {
  region = "eu-west-1"
  alias  = "europe"
}

resource "aws_instance" "east_server" {
  provider      = aws.east
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"
}

resource "aws_instance" "europe_server" {
  provider      = aws.europe
  ami           = "ami-0d71ea30463e0ff8d"
  instance_type = "t3.micro"
}

Resources

Resources are the most important element in Terraform. Each resource block describes one or more infrastructure objects.

Defining resources

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "main-vpc"
  }
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet"
  }
}

In the example above, the subnet references the VPC with aws_vpc.main.id. Terraform uses this reference to determine that the VPC must be created before the subnet.

Implicit and explicit dependencies

Terraform automatically detects dependencies through resource references. When automatic detection is not enough, use depends_on:

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.public.id  # Implicit dependency

  depends_on = [aws_internet_gateway.main]  # Explicit dependency
}

Resource lifecycle

Control how Terraform handles resource updates with lifecycle rules:

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"

  lifecycle {
    # Create replacement before destroying the original
    create_before_destroy = true

    # Ignore changes to tags made outside Terraform
    ignore_changes = [tags]

    # Prevent accidental destruction
    prevent_destroy = true
  }
}

Meta-arguments: count and for_each

Create multiple instances of a resource:

# Using count
resource "aws_subnet" "private" {
  count             = 3
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 10)
  availability_zone = element(["us-east-1a", "us-east-1b", "us-east-1c"], count.index)

  tags = {
    Name = "private-subnet-${count.index + 1}"
  }
}

# Using for_each with a map
resource "aws_subnet" "named" {
  for_each = {
    web = "10.0.1.0/24"
    app = "10.0.2.0/24"
    db  = "10.0.3.0/24"
  }

  vpc_id     = aws_vpc.main.id
  cidr_block = each.value

  tags = {
    Name = "${each.key}-subnet"
  }
}

Prefer for_each over count when each instance has a meaningful identity. With count, removing an item from the middle of a list causes all subsequent items to shift, triggering unnecessary replacements.
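To see the difference, compare the instance addresses each meta-argument produces. A sketch (resource names are illustrative):

```hcl
# count indexes by position: removing "b" from the list below shifts
# "c" from index 2 to index 1, so Terraform destroys and recreates it.
resource "aws_s3_bucket" "by_count" {
  count  = length(["a", "b", "c"])
  bucket = "demo-${["a", "b", "c"][count.index]}"  # demo-a, demo-b, demo-c
}

# for_each keys by value: removing "b" from the set affects only
# aws_s3_bucket.by_key["b"]; "a" and "c" keep their addresses.
resource "aws_s3_bucket" "by_key" {
  for_each = toset(["a", "b", "c"])
  bucket   = "demo-${each.key}"
}
```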


Data Sources

Data sources let you fetch information about existing infrastructure that was not created by your current Terraform configuration:

# Look up the latest Amazon Linux 2 AMI
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Use the data source in a resource
resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}

Common use cases for data sources include looking up AMI IDs, retrieving existing VPC information, reading IAM policies, and querying DNS zones.

# Fetch an existing VPC by tag
data "aws_vpc" "existing" {
  filter {
    name   = "tag:Name"
    values = ["production-vpc"]
  }
}

# Read the current AWS account identity
data "aws_caller_identity" "current" {}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}
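Data source attributes are referenced like resource attributes, prefixed with data.. Building on the VPC lookup above, a hypothetical subnet placed inside the existing VPC:

```hcl
# Use attributes exported by the data source, not by a managed resource
resource "aws_subnet" "in_existing" {
  vpc_id     = data.aws_vpc.existing.id
  cidr_block = cidrsubnet(data.aws_vpc.existing.cidr_block, 8, 1)
}
```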

Variables and Outputs

Input variables

Variables parameterize your configurations so you can reuse them across environments:

# variables.tf

variable "environment" {
  description = "Deployment environment (dev, staging, production)"
  type        = string
  default     = "dev"

  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "Environment must be dev, staging, or production."
  }
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"
}

variable "allowed_cidrs" {
  description = "List of CIDR blocks allowed to access the application"
  type        = list(string)
  default     = ["0.0.0.0/0"]
}

variable "resource_tags" {
  description = "Tags to apply to all resources"
  type        = map(string)
  default = {
    Project   = "terraform-demo"
    ManagedBy = "terraform"
  }
}

Set variable values through multiple methods. When the same variable is set more than once, higher-precedence sources win (highest first):

# 1. Command-line flags (-var and -var-file are evaluated in the
#    order given on the command line; the last value wins)
terraform apply -var="environment=production"
terraform apply -var-file="production.tfvars"

# 2. *.auto.tfvars files, then terraform.tfvars (loaded automatically)

# 3. Environment variable
export TF_VAR_environment="production"

# 4. Default value in the variable declaration
# 5. Interactive prompt (if no default is defined and no value is supplied)

A .tfvars file:

# production.tfvars
environment   = "production"
instance_type = "t3.large"
allowed_cidrs = ["10.0.0.0/8", "172.16.0.0/12"]

resource_tags = {
  Project     = "web-platform"
  Environment = "production"
  ManagedBy   = "terraform"
  CostCenter  = "engineering"
}

Local values

Locals assign names to expressions for reuse within a module:

locals {
  common_tags = merge(var.resource_tags, {
    Environment = var.environment
    # Avoid timestamp() in tags: it changes on every run,
    # so Terraform would report a diff on every plan.
  })

  name_prefix = "${var.project}-${var.environment}"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = merge(local.common_tags, { Name = "${local.name_prefix}-vpc" })
}

Output values

Outputs expose information about your infrastructure after deployment:

# outputs.tf

output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "IDs of the public subnets"
  value       = aws_subnet.public[*].id
}

output "web_server_public_ip" {
  description = "Public IP address of the web server"
  value       = aws_instance.web.public_ip
  sensitive   = false
}

output "database_connection_string" {
  description = "Database connection string"
  value       = "postgresql://${aws_db_instance.main.endpoint}/${aws_db_instance.main.db_name}"
  sensitive   = true  # Hides value in CLI output
}

After running terraform apply, view outputs with:

terraform output
terraform output vpc_id
terraform output -json

State Management

Terraform state is the mechanism that maps your configuration to real-world resources. The state file (terraform.tfstate) contains the IDs, attributes, and metadata of every resource Terraform manages.

Local state

By default, Terraform stores state in a local file. This works for individual learning but is unsuitable for team environments:

# State is stored in terraform.tfstate in the working directory
ls -la terraform.tfstate

Problems with local state:

  • No locking — concurrent terraform apply runs can corrupt the state.
  • No shared access — team members cannot collaborate.
  • Risk of data loss — accidental deletion loses all resource tracking.

Remote state backends

For production use, store state remotely. The most common backend is AWS S3 with DynamoDB for state locking:

# backend.tf

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "environments/production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}

Create the S3 bucket and DynamoDB table before configuring the backend:

# Create the S3 bucket for state storage
aws s3api create-bucket \
  --bucket my-terraform-state-bucket \
  --region us-east-1

# Enable versioning for state history and recovery
aws s3api put-bucket-versioning \
  --bucket my-terraform-state-bucket \
  --versioning-configuration Status=Enabled

# Enable server-side encryption by default
aws s3api put-bucket-encryption \
  --bucket my-terraform-state-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
  }'

# Block all public access
aws s3api put-public-access-block \
  --bucket my-terraform-state-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Create DynamoDB table for state locking
aws dynamodb create-table \
  --table-name terraform-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1

Azure Blob Storage backend:

terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "production.terraform.tfstate"
  }
}
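One configuration can also consume another configuration's outputs by reading its remote state with the terraform_remote_state data source. A sketch against the S3 backend above (the output name is an assumption):

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state-bucket"
    key    = "environments/production/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"

  # Reads the public_subnet_ids output exported by the other configuration
  subnet_id = data.terraform_remote_state.network.outputs.public_subnet_ids[0]
}
```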

State commands

# List all resources in the state
terraform state list

# Show details for a specific resource
terraform state show aws_instance.web

# Move a resource to a different address (renaming)
terraform state mv aws_instance.web aws_instance.web_server

# Remove a resource from state without destroying it
terraform state rm aws_instance.legacy

# Import an existing resource into state
terraform import aws_instance.web i-1234567890abcdef0

# Pull remote state to a local file for inspection
terraform state pull > state_backup.json

# Force unlock a stuck state lock
terraform force-unlock LOCK_ID

Warning: State manipulation commands can be destructive. Always back up your state before running state mv or state rm operations.


Modules

Modules are reusable, self-contained packages of Terraform configuration. Every Terraform configuration is technically a module — the root module.

Module structure

A well-organized module follows this directory structure:

modules/
└── vpc/
    ├── main.tf          # Resource definitions
    ├── variables.tf     # Input variables
    ├── outputs.tf       # Output values
    ├── versions.tf      # Provider version constraints
    └── README.md        # Documentation

Creating a module

# modules/vpc/variables.tf

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
}

variable "environment" {
  description = "Environment name"
  type        = string
}

variable "public_subnet_cidrs" {
  description = "CIDR blocks for public subnets"
  type        = list(string)
}

variable "availability_zones" {
  description = "Availability zones for subnets"
  type        = list(string)
}

# modules/vpc/main.tf

resource "aws_vpc" "this" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.environment}-vpc"
    Environment = var.environment
  }
}

resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id

  tags = {
    Name = "${var.environment}-igw"
  }
}

resource "aws_subnet" "public" {
  count                   = length(var.public_subnet_cidrs)
  vpc_id                  = aws_vpc.this.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.environment}-public-subnet-${count.index + 1}"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.this.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }

  tags = {
    Name = "${var.environment}-public-rt"
  }
}

resource "aws_route_table_association" "public" {
  count          = length(aws_subnet.public)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# modules/vpc/outputs.tf

output "vpc_id" {
  description = "ID of the VPC"
  value       = aws_vpc.this.id
}

output "public_subnet_ids" {
  description = "IDs of the public subnets"
  value       = aws_subnet.public[*].id
}

output "internet_gateway_id" {
  description = "ID of the internet gateway"
  value       = aws_internet_gateway.this.id
}

Using a module

# Root module - main.tf

module "vpc" {
  source = "./modules/vpc"

  vpc_cidr            = "10.0.0.0/16"
  environment         = "production"
  public_subnet_cidrs = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  availability_zones  = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

# Reference module outputs
resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
  subnet_id     = module.vpc.public_subnet_ids[0]
}

Using modules from the Terraform Registry

The Terraform Registry hosts thousands of community and verified modules:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.0"

  name = "production-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

Always pin module versions. Unpinned modules can introduce breaking changes without warning.
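The source argument accepts more than registry addresses. Common forms, with illustrative paths and URLs (module arguments omitted):

```hcl
# Local path, relative to the calling configuration
module "vpc_local" {
  source = "./modules/vpc"
}

# Git repository, pinned to a tag via ?ref=
module "vpc_git" {
  source = "git::https://github.com/example-org/terraform-vpc.git?ref=v1.2.0"
}

# Registry module with an explicit version
module "vpc_registry" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.0"
}
```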


Your First Deployment

This section walks through deploying a complete infrastructure stack on AWS. The configuration creates a VPC, public subnet, security group, and an EC2 instance.

Project setup

mkdir terraform-first-deployment && cd terraform-first-deployment

Step 1 — Define providers and versions

# versions.tf

terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

Step 2 — Define variables

# variables.tf

variable "aws_region" {
  description = "AWS region for resource deployment"
  type        = string
  default     = "us-east-1"
}

variable "environment" {
  description = "Environment name"
  type        = string
  default     = "dev"
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"
}

variable "my_ip" {
  description = "Your public IP address for SSH access (CIDR format)"
  type        = string
  default     = "0.0.0.0/0"  # Restrict this in production
}

Step 3 — Create the network

# main.tf

# --- VPC ---
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.environment}-vpc"
    Environment = var.environment
  }
}

# --- Internet Gateway ---
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "${var.environment}-igw"
  }
}

# --- Public Subnet ---
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "${var.aws_region}a"
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.environment}-public-subnet"
  }
}

# --- Route Table ---
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name = "${var.environment}-public-rt"
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

# --- Security Group ---
resource "aws_security_group" "web" {
  name        = "${var.environment}-web-sg"
  description = "Security group for web server"
  vpc_id      = aws_vpc.main.id

  ingress {
    description = "SSH access"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.my_ip]
  }

  ingress {
    description = "HTTP access"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS access"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "Allow all outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.environment}-web-sg"
  }
}

# --- AMI Data Source ---
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# --- EC2 Instance ---
resource "aws_instance" "web" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.web.id]

  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "<h1>Hello from Terraform</h1><p>Instance: $(hostname -f)</p>" > /var/www/html/index.html
  EOF

  tags = {
    Name        = "${var.environment}-web-server"
    Environment = var.environment
  }
}

Step 4 — Define outputs

# outputs.tf

output "vpc_id" {
  description = "ID of the VPC"
  value       = aws_vpc.main.id
}

output "instance_id" {
  description = "ID of the EC2 instance"
  value       = aws_instance.web.id
}

output "instance_public_ip" {
  description = "Public IP address of the web server"
  value       = aws_instance.web.public_ip
}

output "instance_public_dns" {
  description = "Public DNS name of the web server"
  value       = aws_instance.web.public_dns
}

output "web_url" {
  description = "URL to access the web server"
  value       = "http://${aws_instance.web.public_dns}"
}

Step 5 — Deploy

# Initialize the working directory (downloads provider plugins)
terraform init

# Format your configuration files
terraform fmt

# Validate the configuration syntax
terraform validate

# Preview the changes Terraform will make
terraform plan

# Apply the configuration (creates real resources)
terraform apply

Terraform displays a detailed plan showing every resource it will create, modify, or destroy. Type yes to confirm:

Plan: 7 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

After completion, Terraform displays the outputs:

Apply complete! Resources: 7 added, 0 changed, 0 destroyed.

Outputs:

instance_public_ip = "54.210.123.45"
web_url = "http://ec2-54-210-123-45.compute-1.amazonaws.com"

Step 6 — Clean up

When you are done experimenting, destroy all resources to avoid ongoing charges:

terraform destroy

Terraform Workflow

The standard Terraform workflow consists of four commands:

terraform init

Initializes the working directory. Downloads provider plugins, configures backends, and installs modules:

terraform init

# Upgrade providers to the latest version within constraints
terraform init -upgrade

# Reconfigure the backend
terraform init -reconfigure

terraform plan

Generates an execution plan showing what Terraform will do without making changes:

# Standard plan
terraform plan

# Save the plan to a file for later execution
terraform plan -out=tfplan

# Plan for a specific target resource
terraform plan -target=aws_instance.web

# Plan for destruction
terraform plan -destroy

terraform apply

Executes the changes required to reach the desired state:

# Apply with interactive approval
terraform apply

# Apply a saved plan (no approval prompt)
terraform apply tfplan

# Apply with auto-approval (use in CI/CD pipelines)
terraform apply -auto-approve

# Apply changes to a specific resource
terraform apply -target=aws_instance.web

terraform destroy

Destroys all resources managed by the configuration:

# Destroy with interactive approval
terraform destroy

# Destroy with auto-approval
terraform destroy -auto-approve

# Destroy a specific resource
terraform destroy -target=aws_instance.web

Additional useful commands

# Format configuration files
terraform fmt -recursive

# Validate configuration syntax
terraform validate

# Display the current state or a saved plan
terraform show

# Generate a visual dependency graph (requires Graphviz)
terraform graph | dot -Tpng > graph.png

# Open the Terraform console for expression evaluation
terraform console

Best Practices

Directory structure

Organize larger projects with a clear directory structure:

project/
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   ├── terraform.tfvars
│   │   └── backend.tf
│   ├── staging/
│   │   └── ...
│   └── production/
│       └── ...
├── modules/
│   ├── vpc/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── compute/
│   │   └── ...
│   └── database/
│       └── ...
└── README.md

Version pinning

Always pin provider and module versions:

terraform {
  required_version = ">= 1.6.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.31"
    }
  }
}

State locking and encryption

  • Enable state locking to prevent concurrent modifications.
  • Encrypt state at rest — it often contains sensitive data like database passwords.
  • Enable versioning on your state storage bucket for recovery.
  • Never commit state files to version control.

Add to your .gitignore:

# Terraform
*.tfstate
*.tfstate.*
.terraform/
*.tfvars
!example.tfvars
crash.log
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Do NOT ignore .terraform.lock.hcl; commit it so every run
# uses the same provider versions.

Workspaces

Terraform workspaces let you manage multiple environments with the same configuration:

# Create and switch to a new workspace
terraform workspace new staging

# List workspaces
terraform workspace list

# Switch to an existing workspace
terraform workspace select production

# Show the current workspace
terraform workspace show

Reference the workspace name in your configuration:

resource "aws_instance" "web" {
  instance_type = terraform.workspace == "production" ? "t3.large" : "t3.micro"

  tags = {
    Environment = terraform.workspace
  }
}
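When several settings vary by environment, a map keyed by workspace keeps the differences in one place. A sketch (values are illustrative):

```hcl
locals {
  workspace_settings = {
    dev        = { instance_type = "t3.micro", instance_count = 1 }
    staging    = { instance_type = "t3.small", instance_count = 2 }
    production = { instance_type = "t3.large", instance_count = 3 }
  }

  settings = local.workspace_settings[terraform.workspace]
}

resource "aws_instance" "app" {
  count         = local.settings.instance_count
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = local.settings.instance_type
}
```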

Naming conventions

  • Use lowercase and underscores for resource names: aws_instance.web_server.
  • Use descriptive names that reflect the resource purpose.
  • Prefix resource names with the environment or project name.
  • Use consistent tag schemas across all resources.

Security considerations

  • Store secrets in a secrets manager (AWS Secrets Manager, HashiCorp Vault) rather than in Terraform variables.
  • Mark sensitive outputs with sensitive = true.
  • Use IAM roles and service principals with least-privilege permissions.
  • Audit state file access since it contains resource attributes including sensitive values.
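As an example of the first point, a database password can be read from AWS Secrets Manager at plan time rather than stored in a variable (the secret name is an assumption):

```hcl
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "production/db-password"  # hypothetical secret name
}

resource "aws_db_instance" "main" {
  # ... engine, instance_class, and other required arguments ...
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
```

Note that the value still ends up in the state file, which is exactly why state encryption and access auditing matter.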

Troubleshooting

Common errors and solutions

Error: Provider not found

Error: Failed to query available provider packages

Run terraform init to download the required providers. If you recently changed provider versions, run terraform init -upgrade.

Error: Resource already exists

Error: creating EC2 Instance: InvalidParameterValue: The instance ID 'i-abc123' already exists

The resource exists in your cloud account but not in Terraform state. Import it with:

terraform import aws_instance.web i-abc123

Error: State lock

Error: Error locking state: Error acquiring the state lock

Another process holds the lock. Wait for it to finish or, if the lock is stale (the process crashed), force-unlock:

terraform force-unlock LOCK_ID

Error: Cycle detected

Error: Cycle: aws_security_group.a, aws_security_group.b

Two resources reference each other, forming a circular dependency. Break the cycle by using separate aws_security_group_rule resources instead of inline rules.
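A sketch of that fix: declare both groups without inline rules, then attach the cross-references as standalone rule resources (names and ports are illustrative):

```hcl
resource "aws_security_group" "a" {
  name   = "sg-a"
  vpc_id = aws_vpc.main.id
}

resource "aws_security_group" "b" {
  name   = "sg-b"
  vpc_id = aws_vpc.main.id
}

# Standalone rules may reference both groups without forming a cycle,
# because each rule depends on the groups, not the other way around.
resource "aws_security_group_rule" "a_from_b" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.a.id
  source_security_group_id = aws_security_group.b.id
}

resource "aws_security_group_rule" "b_from_a" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.b.id
  source_security_group_id = aws_security_group.a.id
}
```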

Error: Invalid value for variable

Error: Invalid value for variable

Check that your variable value matches the declared type and passes any validation rules. Inspect .tfvars files or environment variables for typos.

Debugging

Enable detailed logging to diagnose issues:

# Set the log level (TRACE, DEBUG, INFO, WARN, ERROR)
export TF_LOG=DEBUG

# Send logs to a file
export TF_LOG_PATH="terraform.log"

# Run Terraform with logging enabled
terraform plan

Reset logging when you are done:

unset TF_LOG
unset TF_LOG_PATH

Recovering from state issues

If the state becomes corrupted or out of sync:

# Pull the current remote state
terraform state pull > backup.tfstate

# Reconcile state with real infrastructure
# (replaces the deprecated "terraform refresh" command)
terraform apply -refresh-only

# Plan to see what Terraform thinks needs to change
terraform plan

# If a resource was manually deleted, remove it from state
terraform state rm aws_instance.deleted_server

Summary

Terraform provides a powerful, declarative approach to infrastructure management. This guide covered:

  • HCL syntax: blocks, arguments, expressions, and references.
  • Providers: configuring cloud provider authentication and multi-region deployments.
  • Resources: defining infrastructure, managing dependencies, and controlling lifecycle behavior.
  • Variables and outputs: parameterizing configurations and exposing deployment information.
  • State management: remote backends, locking, and state manipulation commands.
  • Modules: creating reusable infrastructure packages and leveraging the Terraform Registry.
  • Full deployment: building a VPC, subnet, security group, and EC2 instance from scratch.
  • Best practices: directory structure, version pinning, security, and workspace management.

Start with small, focused configurations. As your infrastructure grows, extract reusable components into modules and establish remote state management early. Infrastructure as Code is a skill that compounds — every configuration you write makes the next one faster and more reliable.