TL;DR — Quick Summary

Master Ansible playbooks and roles for infrastructure automation. Covers inventory, modules, Jinja2 templates, ansible-vault, error handling, and LAMP deploy.

[Diagram: Ansible — playbooks, roles, and Vault. The Ansible engine reads an inventory (INI/YAML, with group_vars/ and host_vars/) and a playbook (plays, tasks, handlers, vars), combines roles (tasks/handlers, templates/files, defaults/meta), modules, Jinja2 templates, and ansible-vault (AES-256 encrypted secrets), and pushes over SSH to webservers (nginx/php), dbservers (mysql/backup), and loadbalancers (haproxy/certs). Agentless push model — inventory drives everything.]

Managing infrastructure at scale without automation is a recipe for configuration drift, missed steps, and 3 AM incidents. Ansible playbooks and roles give you a structured, repeatable way to define exactly what every server should look like and enforce that state idempotently across your entire fleet. This guide covers the complete Ansible workflow — from inventory management and playbook architecture through roles, Jinja2 templates, ansible-vault secrets, and error handling — with a real-world LAMP stack deployment as the capstone example.

Prerequisites

  • Ansible installed on a control node (Ubuntu 22.04+ or any Linux/macOS machine)
  • SSH key-based access to one or more managed hosts
  • Python 3 on all target hosts (pre-installed on most distributions)
  • Basic familiarity with YAML syntax and Linux command line

Ansible Architecture: How the Push Model Works

Ansible uses a push-based, agentless architecture. Your control node (laptop or CI runner) pushes configuration to managed nodes over SSH. There is no daemon to run, no database of state, and nothing to install on targets beyond a working Python interpreter.

The execution chain is: Inventory → Playbook → Modules.

  1. Ansible reads the inventory to determine which hosts to target
  2. It parses the playbook to build an ordered list of tasks
  3. For each task, Ansible copies a small Python module to the remote host via SSH, executes it, captures the result, and removes the module
  4. Results (ok / changed / failed / skipped) are aggregated and printed to the terminal

This model means every playbook run is self-contained. If a host is unreachable, only that host fails — the rest continue.
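You can watch the push model work with an ad-hoc command, which runs a single module against the inventory without any playbook (host and group names below are the ones used in this guide's example inventory):

```shell
# Ping every inventory host over SSH — verifies connectivity and the
# remote Python interpreter in one step
ansible all -i inventory.ini -m ping

# Push a one-off module run to a single group, escalating with sudo
ansible webservers -i inventory.ini -m apt -a "name=htop state=present" --become
```

Each invocation follows the same copy-execute-remove cycle described above; there is nothing left behind on the target afterwards.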

Inventory Management

INI vs YAML format

Both formats express the same data. INI is terser; YAML is more explicit and supports complex host variables inline.

# inventory.ini
[webservers]
web1.example.com
web2.example.com ansible_port=2222

[dbservers]
db1.example.com
db2.example.com

[production:children]
webservers
dbservers

[all:vars]
ansible_user=deployer
ansible_ssh_private_key_file=~/.ssh/id_ed25519

# inventory.yml
all:
  vars:
    ansible_user: deployer
    ansible_ssh_private_key_file: ~/.ssh/id_ed25519
  children:
    webservers:
      hosts:
        web1.example.com:
        web2.example.com:
          ansible_port: 2222
    dbservers:
      hosts:
        db1.example.com:
        db2.example.com:
    production:
      children:
        webservers:
        dbservers:

group_vars and host_vars

Rather than embedding all variables in the inventory file, place them in dedicated directories that Ansible automatically loads:

project/
  inventory.ini
  group_vars/
    all.yml          # applies to every host
    webservers.yml   # applies to hosts in [webservers]
    dbservers.yml
  host_vars/
    web1.example.com.yml   # applies only to web1
  site.yml

# group_vars/webservers.yml
http_port: 80
https_port: 443
document_root: /var/www/html
worker_processes: 4

# host_vars/web1.example.com.yml
server_name: web1.example.com
ssl_cert_path: /etc/ssl/certs/web1.pem

Dynamic inventory with the aws_ec2 plugin

For cloud environments, static inventory files become unmanageable as instances scale. The aws_ec2 plugin queries the AWS API at runtime:

# aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
  - eu-west-1
filters:
  instance-state-name: running
  "tag:Environment": production
keyed_groups:
  - key: tags.Role
    prefix: role
  - key: placement.region
    prefix: region

# Use the dynamic inventory file directly
ansible-playbook -i aws_ec2.yml site.yml

Playbook Structure

A playbook contains one or more plays. Each play maps a set of tasks to a host group.

# site.yml
- name: Configure web servers
  hosts: webservers
  become: true
  vars:
    app_name: myapp
    app_port: 8080

  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: true

    - name: Deploy nginx config from template
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
        owner: root
        group: root
        mode: "0644"
      notify: Restart nginx

    - name: Ensure nginx is running and enabled
      service:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted

Conditionals with when

- name: Install apache2 (Debian only)
  apt:
    name: apache2
    state: present
  when: ansible_os_family == "Debian"

- name: Install httpd (Red Hat only)
  yum:
    name: httpd
    state: present
  when: ansible_os_family == "RedHat"

- name: Run only on web1
  debug:
    msg: "Primary web server"
  when: inventory_hostname == "web1.example.com"

Loops with loop

- name: Install required packages
  apt:
    name: "{{ item }}"
    state: present
  loop:
    - nginx
    - curl
    - ufw
    - fail2ban

- name: Create application directories
  file:
    path: "{{ item.path }}"
    state: directory
    owner: "{{ item.owner }}"
    mode: "{{ item.mode }}"
  loop:
    - { path: /var/www/myapp, owner: www-data, mode: "0755" }
    - { path: /var/log/myapp, owner: www-data, mode: "0750" }
    - { path: /etc/myapp, owner: root, mode: "0755" }

Jinja2 templates

Templates let you generate configuration files dynamically from variables:

{# templates/nginx.conf.j2 #}
user www-data;
worker_processes {{ worker_processes | default('auto') }};

events {
    worker_connections {{ worker_connections | default(1024) }};
}

http {
    server {
        listen {{ http_port }};
        server_name {{ server_name }};
        root {{ document_root }};

        {% if enable_ssl | default(false) %}
        listen {{ https_port }} ssl;
        ssl_certificate {{ ssl_cert_path }};
        ssl_certificate_key {{ ssl_key_path }};
        {% endif %}

        location / {
            try_files $uri $uri/ =404;
        }
    }
}
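Jinja2 also supports loops, which are useful for generating repeated stanzas from a list variable. A short sketch — the upstream_servers variable is hypothetical and not defined elsewhere in this guide:

```jinja2
{# templates/upstream.conf.j2 — assumes a hypothetical upstream_servers list #}
upstream backend {
{% for server in upstream_servers %}
    server {{ server.host }}:{{ server.port | default(8080) }};
{% endfor %}
}
```

With upstream_servers set to a list of { host, port } mappings in group_vars, each entry renders as one server line.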

Tags for selective task execution

- name: Install packages
  apt:
    name: "{{ item }}"
    state: present
  loop: "{{ packages }}"
  tags:
    - install
    - packages

- name: Deploy configuration
  template:
    src: app.conf.j2
    dest: /etc/app/app.conf
  tags:
    - config
    - deploy

# Run only tasks tagged 'config'
ansible-playbook site.yml --tags config

# Skip tasks tagged 'install'
ansible-playbook site.yml --skip-tags install

Essential Modules

Module | Purpose | Key Parameters
apt / yum | Package management | name, state, update_cache
copy | Copy static files to remote | src, dest, owner, mode
template | Deploy Jinja2 templates | src, dest, owner, mode
service | Manage system services | name, state, enabled
user | Manage user accounts | name, groups, shell, state
file | Create dirs, symlinks, set perms | path, state, owner, mode
lineinfile | Edit lines in a file | path, regexp, line, state
command | Run a command (no shell features) | argv or positional, creates
shell | Run shell commands with pipes | cmd, chdir
git | Clone or update a git repo | repo, dest, version
docker_container | Manage Docker containers | name, image, state, ports
debug | Print variable values | msg, var
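As a quick illustration of lineinfile from the table above — the sshd setting is just an example target, not something the rest of this guide configures:

```yaml
- name: Disable SSH password authentication
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'   # matches commented or active line
    line: PasswordAuthentication no
    state: present
```

Because lineinfile matches on regexp before writing, rerunning the task leaves an already-correct file untouched — unlike a shell echo >> approach, which appends a duplicate line on every run.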

Role Structure

Roles are the primary mechanism for organizing and sharing Ansible automation. Create one with:

ansible-galaxy init roles/webserver

This scaffolds the full directory:

roles/webserver/
  tasks/
    main.yml        # Entry point — task list
  handlers/
    main.yml        # Triggered by notify
  templates/
    nginx.conf.j2   # Jinja2 templates
  files/
    index.html      # Static files for copy module
  vars/
    main.yml        # High-priority variables (not overridable)
  defaults/
    main.yml        # Low-priority defaults (easily overridden)
  meta/
    main.yml        # Role dependencies, Galaxy metadata

defaults vs vars

  • defaults/main.yml: Low precedence. Any caller can override these. Use for sensible defaults that users should be able to customize.
  • vars/main.yml: High precedence. These override most other variable sources. Use for values the role requires to function correctly.

# roles/webserver/defaults/main.yml
http_port: 80
https_port: 443
worker_processes: auto
document_root: /var/www/html
enable_ssl: false

# roles/webserver/tasks/main.yml
- name: Install nginx
  apt:
    name: nginx
    state: present
    update_cache: true

- name: Create document root
  file:
    path: "{{ document_root }}"
    state: directory
    owner: www-data
    group: www-data
    mode: "0755"

- name: Deploy nginx configuration
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    group: root
    mode: "0644"
  notify: Restart nginx

- name: Enable and start nginx
  service:
    name: nginx
    state: started
    enabled: true

# roles/webserver/handlers/main.yml
- name: Restart nginx
  service:
    name: nginx
    state: restarted

- name: Reload nginx
  service:
    name: nginx
    state: reloaded

Role dependencies in meta

# roles/webserver/meta/main.yml
galaxy_info:
  author: yourname
  description: Install and configure nginx
  min_ansible_version: "2.14"
dependencies:
  - role: common
  - role: security

Using roles in a playbook

# site.yml
- name: Configure web tier
  hosts: webservers
  become: true
  roles:
    - role: common
    - role: security
    - role: webserver
      vars:
        http_port: 80
        enable_ssl: true
        document_root: /var/www/myapp

Ansible Galaxy for Community Roles

# Install a role from Galaxy
ansible-galaxy install geerlingguy.mysql

# Install a collection (newer namespace)
ansible-galaxy collection install community.docker

# Install from a requirements file
ansible-galaxy install -r requirements.yml

# requirements.yml
roles:
  - name: geerlingguy.nginx
    version: "3.2.0"
  - name: geerlingguy.mysql
    version: "4.3.2"
collections:
  - name: community.docker
    version: ">=3.4.0"
  - name: amazon.aws
    version: ">=7.0.0"

Ansible Vault for Secrets Management

Encrypting files and strings

# Create a new encrypted file
ansible-vault create group_vars/all/vault.yml

# Edit an existing encrypted file
ansible-vault edit group_vars/all/vault.yml

# Encrypt an existing plain-text file
ansible-vault encrypt secrets.yml

# Decrypt a file to plain text (use carefully)
ansible-vault decrypt secrets.yml

# Encrypt a single string for inline use
ansible-vault encrypt_string 'MyS3cr3tP@ss' --name 'db_password'

The standard pattern is to keep two variable files per group: one plain-text with non-sensitive vars, and one vault-encrypted with sensitive values:

# group_vars/all/vars.yml (plain text, committed to git)
db_host: db1.example.com
db_port: 3306
db_name: myapp
db_user: appuser
db_password: "{{ vault_db_password }}"

# group_vars/all/vault.yml (shown decrypted here; stored encrypted and committed to git)
vault_db_password: "SuperSecret123!"
vault_api_key: "abcdef1234567890"

Using a vault password file

# Store the vault password in a file (never commit this to git)
echo "my-vault-password" > ~/.vault_pass
chmod 600 ~/.vault_pass

# Run playbooks without interactive prompts
ansible-playbook site.yml --vault-password-file ~/.vault_pass

# ansible.cfg
[defaults]
vault_password_file = ~/.vault_pass

Error Handling

ignore_errors and failed_when

- name: Check if application is installed
  command: which myapp
  register: myapp_check
  ignore_errors: true
  changed_when: false

- name: Install application only if missing
  apt:
    name: myapp
    state: present
  when: myapp_check.rc != 0

- name: Run migration script
  command: /opt/myapp/migrate.sh
  register: migration_result
  failed_when: "'ERROR' in migration_result.stdout"
  changed_when: "'Changes applied' in migration_result.stdout"
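Related to failed_when: transient failures are often better handled by retrying than by failing outright. A task can poll with until, retries, and delay — a sketch, with an illustrative health-check endpoint:

```yaml
- name: Wait for the application to become healthy
  uri:
    url: http://localhost:8080/health   # illustrative endpoint
    status_code: 200
  register: health
  until: health.status == 200
  retries: 10    # give up after 10 attempts
  delay: 5       # seconds between attempts
```

If the final retry still fails the condition, the task fails normally and any surrounding rescue block takes over.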

block / rescue / always

A block groups tasks together: if any task inside the block fails, the tasks under rescue run, and the tasks under always run regardless of the outcome:

- name: Deploy application with rollback
  block:
    - name: Stop application service
      service:
        name: myapp
        state: stopped

    - name: Deploy new application version
      git:
        repo: https://github.com/org/myapp.git
        dest: /opt/myapp
        version: "{{ app_version }}"

    - name: Start application service
      service:
        name: myapp
        state: started

  rescue:
    - name: Rollback to previous version
      git:
        repo: https://github.com/org/myapp.git
        dest: /opt/myapp
        version: "{{ previous_version }}"

    - name: Start application with previous version
      service:
        name: myapp
        state: started

    - name: Notify team of failed deployment
      debug:
        msg: "Deployment of {{ app_version }} failed. Rolled back to {{ previous_version }}."

  always:
    - name: Record deployment attempt
      shell: echo "{{ app_version }} attempted at $(date)" >> /var/log/deployments.log
      changed_when: false

Idempotency Best Practices

Ansible modules are designed to be idempotent, but your custom tasks need care:

  • Prefer modules over shell/command: modules check state before acting; shell always runs
  • Use creates with command: args: creates: /path/to/file skips the command if the file exists
  • Set changed_when: false on commands that only read state (like health checks)
  • Use lineinfile instead of shell echo >> for modifying config files
  • Avoid apt: update_cache: true on every task: use cache_valid_time: 3600 so it only refreshes when stale
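The bullet points above can be sketched as tasks (paths and URLs are illustrative):

```yaml
- name: Refresh apt cache only when older than an hour
  apt:
    update_cache: true
    cache_valid_time: 3600   # skip the refresh if the cache is fresh

- name: Initialize the application only once
  command: /opt/myapp/bin/init.sh
  args:
    creates: /opt/myapp/.initialized   # skipped when the marker file exists

- name: Health check that never reports a change
  command: curl -fsS http://localhost:8080/health
  changed_when: false
```

Run twice in a row, all three tasks report ok rather than changed on the second pass — the definition of an idempotent play.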

ansible.cfg Configuration

[defaults]
inventory = ./inventory.ini
remote_user = deployer
private_key_file = ~/.ssh/id_ed25519
host_key_checking = False
retry_files_enabled = False
stdout_callback = yaml
roles_path = ./roles:~/.ansible/roles
interpreter_python = auto_silent
forks = 10

[privilege_escalation]
become = True
become_method = sudo
become_user = root

Debugging and Testing

# Dry run -- show what would change without applying
ansible-playbook site.yml --check --diff

# Maximum verbosity -- shows SSH commands and module output
ansible-playbook site.yml -vvv

# Run a playbook against a single host for testing
ansible-playbook site.yml --limit web1.example.com

# Validate playbook syntax without connecting to hosts
ansible-playbook site.yml --syntax-check

# Print all facts gathered from a host
ansible web1.example.com -m setup

# Use the debug module to inspect variable values mid-play
- name: Show resolved variables
  debug:
    msg: "App: {{ app_name }}, Port: {{ app_port }}, User: {{ ansible_user }}"

- name: Capture and inspect command output
  command: systemctl status nginx
  register: nginx_status
  ignore_errors: true
  changed_when: false

- name: Print nginx status output
  debug:
    var: nginx_status.stdout_lines

Tool Comparison

Feature | Ansible | Terraform | Puppet | Chef | SaltStack
Primary use | Config management + orchestration | Infrastructure provisioning | Config management | Config management | Config management + remote exec
Architecture | Agentless (SSH) | Agentless (API) | Agent (Puppet agent) | Agent (chef-client) | Agent or agentless
Language | YAML | HCL | Puppet DSL | Ruby (DSL) | YAML / Python
State tracking | Stateless | State file (tfstate) | PuppetDB | Chef server | Salt mine
Learning curve | Low | Medium | High | High | Medium
Idempotent | Yes (mostly) | Yes | Yes | Yes | Yes
Best for | Day-2 ops, app deploy | Cloud provisioning | Large enterprise | Complex config | Large-scale remote exec

Ansible and Terraform are complementary: use Terraform to create cloud resources (VMs, networks, databases), then Ansible to configure them after creation.

Real-World Example: LAMP Stack Deployment

This playbook automates a complete Linux + Apache + MySQL + PHP deployment, using roles and vault-encrypted passwords:

# lamp.yml
- name: Deploy LAMP stack
  hosts: webservers
  become: true
  vars:
    php_version: "8.3"
    mysql_root_password: "{{ vault_mysql_root_password }}"
    mysql_db_name: myapp
    mysql_db_user: appuser
    mysql_db_password: "{{ vault_mysql_db_password }}"
    app_repo: https://github.com/org/myapp.git
    app_version: main

  tasks:
    - name: Install Apache and PHP packages
      apt:
        name:
          - apache2
          - "libapache2-mod-php{{ php_version }}"
          - "php{{ php_version }}"
          - "php{{ php_version }}-mysql"
          - "php{{ php_version }}-curl"
          - "php{{ php_version }}-mbstring"
          - "php{{ php_version }}-xml"
        state: present
        update_cache: true

    - name: Install MySQL server
      apt:
        name:
          - mysql-server
          - python3-pymysql
        state: present

    - name: Start and enable MySQL
      service:
        name: mysql
        state: started
        enabled: true

    - name: Set MySQL root password
      mysql_user:
        login_unix_socket: /var/run/mysqld/mysqld.sock
        name: root
        password: "{{ mysql_root_password }}"
        host_all: true
        state: present

    - name: Create application database
      mysql_db:
        login_user: root
        login_password: "{{ mysql_root_password }}"
        name: "{{ mysql_db_name }}"
        state: present

    - name: Create application database user
      mysql_user:
        login_user: root
        login_password: "{{ mysql_root_password }}"
        name: "{{ mysql_db_user }}"
        password: "{{ mysql_db_password }}"
        priv: "{{ mysql_db_name }}.*:ALL"
        state: present

    - name: Enable Apache mod_rewrite
      apache2_module:
        name: rewrite
        state: present
      notify: Restart Apache

    - name: Clone application from git
      git:
        repo: "{{ app_repo }}"
        dest: /var/www/html
        version: "{{ app_version }}"
        force: true

    - name: Set ownership of web root
      file:
        path: /var/www/html
        owner: www-data
        group: www-data
        recurse: true

    - name: Deploy Apache virtual host
      template:
        src: templates/vhost.conf.j2
        dest: /etc/apache2/sites-available/myapp.conf
        owner: root
        group: root
        mode: "0644"
      notify: Restart Apache

    - name: Enable virtual host
      command: a2ensite myapp.conf
      args:
        creates: /etc/apache2/sites-enabled/myapp.conf
      notify: Restart Apache

    - name: Disable default virtual host
      command: a2dissite 000-default.conf
      args:
        removes: /etc/apache2/sites-enabled/000-default.conf
      notify: Restart Apache

    - name: Ensure Apache is started and enabled
      service:
        name: apache2
        state: started
        enabled: true

  handlers:
    - name: Restart Apache
      service:
        name: apache2
        state: restarted

Run it with vault-encrypted secrets:

ansible-playbook -i inventory.ini lamp.yml --vault-password-file ~/.vault_pass

Summary

  • Ansible’s agentless, push-based architecture means you only need SSH access to start automating
  • Inventory (INI, YAML, or dynamic) drives all host targeting; group_vars and host_vars keep variables organized
  • Playbooks chain plays, tasks, handlers, and Jinja2 templates into a readable, declarative workflow
  • Roles (ansible-galaxy init) package tasks, handlers, templates, and defaults into reusable, shareable units
  • ansible-vault encrypts secrets at rest; pair vault files with plain-text variable files to keep git history clean
  • block/rescue/always provides structured error handling and rollback logic for production deployments
  • Always validate with --check --diff and -vvv before running against production hosts