TL;DR — Quick Summary

Complete guide to Gravitational Teleport for secure SSH, Kubernetes, database, and app access. Certificate auth, SSO, RBAC, audit logs, and Access Requests.

Teleport by Gravitational is an open-source identity-aware infrastructure access platform that replaces static SSH keys, VPNs, and jump hosts with short-lived certificates, SSO, and a complete audit trail. This guide covers every layer of Teleport: architecture, installation, node enrollment, RBAC, SSH, database, Kubernetes, and application access — ending with a production SSO deployment walkthrough.

Prerequisites

  • Linux server (Ubuntu 22.04+ or RHEL 8+) with a public FQDN for the Proxy Service.
  • DNS A record pointing teleport.example.com to the proxy IP.
  • Port 443 (HTTPS/ALPN) open inbound to the proxy.
  • tsh and tctl CLI tools installed on your workstation.
  • An identity provider (GitHub, Okta, Azure AD, or Google Workspace) for SSO — optional for lab use.

Teleport Architecture

Teleport has four core components that can run on one node or be split across a cluster:

| Component | Role |
|---|---|
| Auth Service | Certificate Authority. Issues short-lived TLS/SSH certs, stores audit logs, manages cluster state in a backend (etcd, DynamoDB, Firestore, or SQLite). |
| Proxy Service | Public-facing HTTPS/SSH reverse proxy. Handles tsh logins, the web UI, and ALPN multiplexing (SSH, Kubernetes API, and DB protocols all on port 443). |
| SSH Node Agent | The teleport daemon on each Linux/Windows server; registers with Auth and handles session recording. |
| Access Agents | Specialized daemons: db_service (databases), kube_service (Kubernetes API), app_service (HTTP/TCP apps), windows_desktop_service (RDP). |

Certificate-based auth replaces SSH keys. When a user logs in via tsh, the Auth Service issues an SSH certificate valid for max_session_ttl (typically 8–12 hours). No long-lived private keys are distributed or rotated — the cert expires automatically.
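Teleport implements this flow with its own CA, but the underlying mechanics can be demonstrated with plain OpenSSH tooling: a CA key signs a user key into a certificate with a built-in expiry, just as the Auth Service does on every tsh login. This is a sketch; the identity "alice", the principal "ubuntu", and the 8-hour TTL are illustrative.

```shell
set -e
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/ca"      # stands in for the cluster CA
ssh-keygen -q -t ed25519 -N '' -f "$tmp/user"    # the user's key pair

# Sign the user key: identity "alice", login principal "ubuntu", valid 8 hours
ssh-keygen -q -s "$tmp/ca" -I alice -n ubuntu -V +8h "$tmp/user.pub"

# Inspect the resulting certificate; note the "Valid:" window and principal
ssh-keygen -L -f "$tmp/user-cert.pub"
```

After the window expires the certificate is simply refused by sshd; nothing needs to be revoked or rotated.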


Installation

Self-Hosted: apt/yum

# Ubuntu/Debian: the install script adds the apt repository and installs Teleport
curl https://goteleport.com/static/install.sh | bash -s 15.0.0

# RHEL/Rocky Linux (after adding the Teleport yum repository)
yum install teleport

Self-Hosted: Docker Compose

services:
  teleport:
    image: public.ecr.aws/gravitational/teleport:15
    volumes:
      - ./teleport.yaml:/etc/teleport/teleport.yaml
      - teleport-data:/var/lib/teleport
    ports:
      - "443:443"
      - "3022:3022"

Self-Hosted: Kubernetes (Helm)

helm repo add teleport https://charts.releases.teleport.dev
helm install teleport-cluster teleport/teleport-cluster \
  --namespace teleport --create-namespace \
  --set clusterName=teleport.example.com \
  --set acme=true \
  --set acmeEmail=admin@example.com

Teleport Cloud

Teleport Cloud is a managed Proxy+Auth service. You only deploy agents (SSH, DB, Kube) in your infrastructure and connect them to your cloud tenant at yourtenant.teleport.sh.


Initial Setup and ACME TLS

Generate the initial config with automatic Let’s Encrypt certificates:

teleport configure \
  --cluster-name=teleport.example.com \
  --public-addr=teleport.example.com:443 \
  --acme \
  --acme-email=admin@example.com \
  -o /etc/teleport/teleport.yaml
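The generated file looks roughly like this (an abridged sketch; exact contents vary by Teleport version):

```yaml
version: v3
teleport:
  nodename: teleport
  data_dir: /var/lib/teleport
auth_service:
  enabled: true
  cluster_name: teleport.example.com
proxy_service:
  enabled: true
  public_addr: teleport.example.com:443
  acme:
    enabled: true
    email: admin@example.com
ssh_service:
  enabled: true
```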

Enable and start:

systemctl enable --now teleport

Create the first admin user:

tctl users add admin --roles=editor,access --logins=root,ubuntu

Adding Nodes

Token-Based Join

# Create a short-lived token on the auth server
tctl tokens add --type=node --ttl=5m

# On the new server
teleport start \
  --roles=node \
  --token=<token> \
  --auth-server=teleport.example.com:443
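The same join can be expressed in /etc/teleport/teleport.yaml so systemd manages the agent instead of a one-off teleport start. A sketch using the v3 config format; the nodename and label values are illustrative:

```yaml
version: v3
teleport:
  nodename: web-server-01          # illustrative
  proxy_server: teleport.example.com:443
  join_params:
    method: token
    token_name: <token>
auth_service:
  enabled: false
proxy_service:
  enabled: false
ssh_service:
  enabled: true
  labels:
    env: staging                   # illustrative label
```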

EC2/IAM Join (AWS — No Token Required)

Configure the Auth Service to accept AWS instance identity documents:

# teleport.yaml on auth server
auth_service:
  tokens:
    - type: node
      aws_iid_ttl: 5m

On each EC2 instance, Teleport verifies its own AWS Instance Identity Document — no static join token needed. Works the same way for Azure Managed Identity and GCP Instance Identity.
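On the node side, the join method is selected via join_params in teleport.yaml. A sketch: method ec2 corresponds to the instance-identity-document flow above, and the token name is illustrative but must match a provision token configured on the Auth Service.

```yaml
teleport:
  proxy_server: teleport.example.com:443
  join_params:
    method: ec2          # "iam" for IAM-role join; "azure"/"gcp" on those clouds
    token_name: ec2-join # illustrative; must match the provision token name
ssh_service:
  enabled: true
```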


User Authentication: SSO and MFA

GitHub SSO Connector

kind: github
version: v3
metadata:
  name: github
spec:
  client_id: "GITHUB_OAUTH_APP_ID"
  client_secret: "GITHUB_OAUTH_APP_SECRET"
  redirect_url: "https://teleport.example.com/v1/webapi/github/callback"
  teams_to_roles:
    - organization: my-org
      team: devops
      roles: ["devops"]
    - organization: my-org
      team: developers
      roles: ["developer"]

Apply the connector:

tctl create github-connector.yaml

SAML/OIDC

Teleport supports SAML 2.0 (Okta, Azure AD, OneLogin) and OIDC (Google Workspace, Auth0). The connector YAML follows the same pattern — provide the IdP metadata URL and attribute mappings to Teleport roles.
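An OIDC connector sketch mirroring the GitHub example above; all values are placeholders, and note that SAML/OIDC connectors have historically required the Enterprise edition in self-hosted deployments:

```yaml
kind: oidc
version: v3
metadata:
  name: google
spec:
  issuer_url: "https://accounts.google.com"
  client_id: "GOOGLE_CLIENT_ID"
  client_secret: "GOOGLE_CLIENT_SECRET"
  redirect_url: "https://teleport.example.com/v1/webapi/oidc/callback"
  claims_to_roles:
    - claim: "groups"
      value: "devops@example.com"
      roles: ["devops"]
```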

MFA Enforcement

kind: cluster_auth_preference
version: v2
metadata:
  name: cluster-auth-preference
spec:
  type: github    # or saml, oidc, local
  second_factor: "on"   # off | otp | webauthn | on (any)
  webauthn:
    rp_id: "teleport.example.com"

second_factor: "on" accepts TOTP (Google Authenticator, Authy), WebAuthn hardware keys (YubiKey, Touch ID), or passkeys.
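Users enroll and manage second-factor devices with tsh; these subcommands prompt interactively for the device type and name:

```shell
# Register a new device (TOTP or WebAuthn)
tsh mfa add

# List registered devices
tsh mfa ls

# Remove a lost device
tsh mfa rm <device-name>
```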


RBAC: Roles and Labels

Teleport RBAC uses roles with allow and deny rules matched against node labels.

kind: role
version: v7
metadata:
  name: devops
spec:
  options:
    max_session_ttl: "12h"
    record_session:
      default: "best_effort"  # or "strict"
  allow:
    logins: ["ubuntu", "ec2-user"]
    node_labels:
      env: ["production", "staging"]
      team: ["devops"]
    rules:
      - resources: ["session"]
        verbs: ["read", "list"]
  deny:
    node_labels:
      restricted: ["true"]
---
kind: role
version: v7
metadata:
  name: developer
spec:
  options:
    max_session_ttl: "8h"
  allow:
    logins: ["deploy"]
    node_labels:
      env: ["staging", "dev"]
    request:
      roles: ["production-access"]
      thresholds:
        - approve: 1
          deny: 1

Apply with tctl create role.yaml. The request block enables Access Requests — developers can ask for temporary production-access with one approval.
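The labels those roles match against are advertised by each agent, configured statically or computed from a command in the node's teleport.yaml. A sketch; the label values and the kernel-version command are illustrative:

```yaml
ssh_service:
  enabled: true
  labels:                        # static labels
    env: production
    team: devops
  commands:                      # dynamic labels, refreshed on a schedule
    - name: kernel
      command: ["/bin/uname", "-r"]
      period: 1h
```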


SSH Access with tsh

# Login (opens browser for SSO + MFA)
tsh login --proxy=teleport.example.com

# List accessible nodes
tsh ls

# SSH to a node
tsh ssh ubuntu@web-server-01

# SCP file transfer
tsh scp ./deploy.sh ubuntu@web-server-01:/tmp/

# Port forward (local 5432 → remote Postgres)
tsh ssh -L 5432:localhost:5432 ubuntu@db-server

# Run a command on every node matching a label (user@label=value targets all matches)
tsh ssh ubuntu@env=staging "systemctl restart nginx"

Session recording is automatic. Every session is stored as structured events plus an asciinema-compatible recording. Replay from the web UI or CLI:

tsh play <session-id>

Database Access

Register a database agent:

# teleport.yaml db_service section
db_service:
  enabled: true
  databases:
    - name: prod-postgres
      protocol: postgres
      uri: "prod-db.internal:5432"
      static_labels:
        env: production
    - name: prod-mysql
      protocol: mysql
      uri: "mysql.internal:3306"
    - name: aws-rds
      protocol: postgres
      uri: "mydb.xxxx.us-east-1.rds.amazonaws.com:5432"
      aws:
        region: us-east-1
        rds:
          iam_auth: true   # uses IAM instead of password

Connect as a user:

tsh db login prod-postgres --db-user=appuser --db-name=mydb
tsh db connect prod-postgres   # opens psql
tsh db connect prod-mysql      # opens mysql CLI

# MongoDB
tsh db connect mongo-atlas --db-user=reader

# Redis
tsh db connect redis-cache

Teleport acts as a protocol-aware proxy — it understands the native database wire protocol, enforcing auth without requiring the database port to be exposed.
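For GUI clients or tools that cannot be launched through tsh db connect, tsh can open a local authenticated tunnel instead; the local port choice is arbitrary:

```shell
# Open a local tunnel to prod-postgres on port 5432
tsh proxy db prod-postgres --db-user=appuser --db-name=mydb --port=5432 --tunnel

# Point any Postgres client or GUI at the tunnel
psql "host=localhost port=5432 dbname=mydb user=appuser"
```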


Kubernetes Access

# teleport.yaml kube_service section
kubernetes_service:
  enabled: true
  kube_cluster_name: "production"
  kubeconfig_file: /var/lib/teleport/kubeconfig

# List clusters
tsh kube ls

# Login to a cluster (writes kubeconfig)
tsh kube login production

# Use kubectl normally
kubectl get pods -n default

# Teleport enforces impersonation — the role defines allowed k8s groups

Role YAML for Kubernetes:

allow:
  kubernetes_groups: ["developers"]   # groups Teleport impersonates on your behalf
  kubernetes_labels:
    env: ["production"]
  kubernetes_resources:
    - kind: pod
      namespace: "*"
      name: "*"
      verbs: ["get", "list", "exec"]

Application Access

Teleport can proxy internal HTTP/HTTPS apps and TCP services without a VPN:

app_service:
  enabled: true
  apps:
    - name: grafana
      uri: http://grafana.internal:3000
      labels:
        env: production
    - name: internal-api
      uri: tcp://internal-api.corp:8080
      protocol: tcp

Users access https://grafana.teleport.example.com — Teleport adds a JWT header with the user’s identity, so the app can verify who is accessing it. SSO protects every internal app without modifying the app itself.
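Teleport sends the token in the Teleport-Jwt-Assertion header, and the cluster's signing keys are published at https://teleport.example.com/.well-known/jwks.json. The sketch below shows only the claim-decoding step an app performs, using a fabricated unsigned token; the username and roles claim names match what Teleport includes, but a real app must also verify the signature against the JWKS.

```shell
# Fabricated token for illustration only; base64 padding is kept for simplicity,
# while real JWTs strip it
payload='{"username":"alice","roles":["devops"]}'
mid=$(printf '%s' "$payload" | base64 | tr -d '\n' | tr '+/' '-_')
jwt="eyJhbGciOiJSUzI1NiJ9.$mid.fake-signature"

# What the app does with the header value: split on '.', base64url-decode
# the middle segment, and read the identity claims
claims=$(printf '%s' "$jwt" | cut -d. -f2 | tr '_-' '/+' | base64 -d)
echo "$claims"
```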


Audit Log

All events — logins, SSH sessions, DB queries, kubectl exec, config changes — are stored as structured JSON:

# Query audit log
tctl audit events --event-type=session.start --format=json

# Events from a date range
tctl audit events \
  --from="2026-03-01" \
  --to="2026-03-23" \
  --event-type=db.session.query

Long-term storage backends: S3 (with Athena for SQL queries), GCS (with BigQuery), or DynamoDB. Session recordings stream to S3/GCS during the session — not after — ensuring recordings survive even if the proxy crashes.
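In a self-hosted cluster these backends are configured in the storage section of the Auth Service's teleport.yaml. A sketch with illustrative table and bucket names:

```yaml
teleport:
  storage:
    type: dynamodb
    region: us-east-1
    table_name: teleport-state
    audit_events_uri: ["dynamodb://teleport-events"]
    audit_sessions_uri: "s3://example-bucket/session-recordings"
```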


Access Requests: Just-in-Time Privilege Escalation

# Request production-access role
tsh request create --roles=production-access \
  --reason="Investigating P1 incident #4821"

# Approver reviews and approves
tsh request review --approve <request-id>

# Or deny
tsh request review --deny <request-id>

Teleport ships Slack, PagerDuty, ServiceNow, and Jira plugins that post request notifications to the right channel. Approvals can be made directly from Slack.


Teleport vs Alternatives

| Feature | Teleport | HashiCorp Boundary | StrongDM | Bastion Host | Tailscale SSH | OpenSSH |
|---|---|---|---|---|---|---|
| Auth method | Short-lived certs | Dynamic credentials | Credential injection | Static keys | WireGuard + certs | Static keys |
| SSO/IdP | Yes (SAML/OIDC/GitHub) | Yes | Yes | Manual | OIDC | No |
| Session recording | Full (SSH+DB+K8s) | No | Yes | No | No | No |
| Database access | Yes (protocol-aware) | Yes | Yes | Tunnel only | No | No |
| Kubernetes | Yes (RBAC-aware) | Partial | Yes | No | No | No |
| Access Requests | Yes (built-in) | No | No | No | No | No |
| Open source | Yes (Apache 2.0) | Yes (MPL 2.0) | No | N/A | Partial | Yes |
| Self-hosted | Yes | Yes | No (SaaS) | Yes | Partial | Yes |
| Audit log | Structured + S3/GCS | Limited | Yes | Manual | No | Manual |

Production Deployment with SSO (Okta Example)

A realistic production setup — Teleport Cloud + Okta SAML + hardware MFA:

1. Create Okta SAML app pointing to https://company.teleport.sh/v1/webapi/saml/acs. Map Okta groups to Teleport role attributes.

2. Create SAML connector:

kind: saml
version: v2
metadata:
  name: okta
spec:
  acs: "https://company.teleport.sh/v1/webapi/saml/acs"
  attributes_to_roles:
    - name: "groups"
      value: "devops"
      roles: ["devops"]
    - name: "groups"
      value: "developers"
      roles: ["developer"]
  entity_descriptor_url: "https://company.okta.com/app/xxx/sso/saml/metadata"

3. Require hardware MFA in cluster_auth_preference: second_factor: webauthn.

4. Deploy node agents in AWS using IAM join — no token distribution needed.

5. Set up Access Request plugin for Slack: teleport-slack configure --token=xoxb-... --channel=#infra-access.

6. Configure S3 audit log backend in teleport.yaml so recordings survive infrastructure events.


Gotchas and Edge Cases

  • Clock skew kills certificates. Auth, Proxy, and all nodes must have NTP sync within ±30 seconds. Use chrony or timesyncd.
  • ALPN multiplexing requires port 443. If you split ports (3023 for SSH, 3080 for HTTPS), old tsh clients work but some proxy-to-proxy features break.
  • Wildcard TLS for app access. Application access subdomains (app.teleport.example.com) require a wildcard cert or a separate ACME challenge per app.
  • EC2 IAM join needs IMDSv2. Instance metadata endpoint must be reachable. Check security groups blocking 169.254.169.254.
  • Kubernetes impersonation. The kube_service agent needs impersonate ClusterRole permissions — a missing RBAC binding is the most common K8s setup failure.
  • Session recording storage costs. SSH sessions are ~100 KB/min. A 100-server fleet with active sessions can generate significant S3 costs — use lifecycle policies to move to Glacier.

Troubleshooting

| Problem | Solution |
|---|---|
| tsh login redirect loop | Check that the SSO connector redirect_url matches the proxy public address exactly |
| Nodes not appearing in tsh ls | Verify the node agent can reach Auth on port 443; check join token expiry |
| Database connection refused | Confirm db_service is running; check the firewall between agent and DB host |
| Kubernetes exec forbidden | Add the exec verb to the role's kubernetes_resources block |
| Session not recording | Check the record_session policy; verify S3 bucket permissions for the Auth Service IAM role |
| MFA device not accepted | Re-register the WebAuthn credential with tsh mfa add; check rp_id matches the proxy FQDN |

Summary

  • Certificate authority — Teleport replaces static SSH keys with short-lived certs that expire automatically.
  • SSO + MFA — GitHub, Okta, Azure AD, Google Workspace; TOTP, WebAuthn, hardware keys.
  • RBAC with node labels — roles match servers by environment/team tags, not IP lists.
  • Full audit trail — every session recorded as structured events + video, stored in S3/GCS.
  • Database and Kubernetes — protocol-aware proxying without exposing ports or distributing credentials.
  • Access Requests — just-in-time privilege escalation with Slack/PagerDuty approval workflows.
  • Open source — Apache 2.0, self-hostable, or managed via Teleport Cloud.