Tag: mtls

  • Self-generated certificates

    Self-generated certificates

    What they are, how mTLS works, how to build them with easy-rsa, and how to store them safely with git-crypt.

    Certificates, CA certificates, and private keys

    • Digital certificate (X.509): A signed data structure that binds an identity (subject) to a public key. It includes fields like subject, issuer, serial number, validity period, and extensions (for example, key usage, extended key usage, Subject Alternative Name). Certificates are public and can be shared.
    • CA certificate: A certificate belonging to a Certificate Authority. A CA uses its private key to sign end-entity certificates (server or client). A root CA is self-signed. Often, you use an offline root CA to sign an intermediate CA, and that intermediate signs end-entity certificates. Clients and servers trust a CA by installing its certificate (trust anchor) and validating chains: end-entity → intermediate(s) → root.
    • Private key: The secret counterpart to a public key. It is used to prove possession of the key pair (by signing) and, in schemes that encrypt to the public key, to decrypt. Private keys must be kept confidential, access-controlled, and ideally encrypted at rest with a passphrase or stored in hardware (TPM/HSM). If a private key is compromised, all certificates tied to it must be considered compromised and should be revoked.
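    The "prove possession" half can be demonstrated with plain openssl: only the private key can produce a signature that the matching public key verifies. A minimal sketch (all file names are illustrative):

```shell
# Sketch: proving possession of a private key (illustrative file names).
set -e

# Generate an EC private key and extract its public half
openssl ecparam -name prime256v1 -genkey -noout -out demo.key
openssl ec -in demo.key -pubout -out demo.pub

# Sign a message with the private key
echo "hello mTLS" > msg.txt
openssl dgst -sha256 -sign demo.key -out msg.sig msg.txt

# Verify with only the public key: anyone holding demo.pub can confirm
# the signature came from the holder of demo.key
openssl dgst -sha256 -verify demo.pub -signature msg.sig msg.txt

# Keep the key private on disk
chmod 600 demo.key
```

    This is essentially what a TLS peer does during the handshake when it signs handshake data with the private key belonging to its certificate.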

    Notes:

    • “Self-signed certificate” means a certificate signed by its own key (typical for root CAs, and sometimes used ad hoc for a server). “Self-generated” is commonly used to mean you run your own CA and issue your own certs, rather than buying from a public CA.
    • Revocation is handled using CRLs (Certificate Revocation Lists) or OCSP. easy-rsa focuses on CRLs.

    Server vs client certificates and how mTLS works

    • Server certificate:
      • Purpose: Server proves its identity to clients (for example, a web server to a browser).
      • Extensions: Extended Key Usage (EKU) must include serverAuth.
      • Names: Must contain Subject Alternative Name (SAN) entries covering the hostnames or IPs the client connects to. Clients verify that the requested hostname matches a SAN and that the certificate chains to a trusted CA.
    • Client certificate:
      • Purpose: Client proves its identity to the server (for example, a service or user accessing an API).
      • Extensions: EKU should include clientAuth.
      • Names: Often the Common Name (CN) or a SAN identifies the user, device, or service. The server maps this identity to an account or role for authorization.
    • mTLS (mutual TLS):
      1. Client initiates the TLS handshake.
      2. Server sends its certificate chain. Client validates the chain to a trusted CA and checks the hostname/IP against SANs.
      3. Server requests a client certificate. Client sends its certificate chain and proves possession of the private key.
      4. Server validates the client’s certificate against its trusted CA(s) and applies authorization rules.
      5. Both sides derive session keys; the connection is encrypted and mutually authenticated.
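    The whole sequence can be exercised on localhost with nothing but openssl: a throwaway CA signs one server and one client certificate, s_server demands a client cert, and s_client presents one. A hedged sketch (the names, port, and 1-day lifetimes are all illustrative):

```shell
# Sketch of the mTLS handshake steps on localhost (illustrative names/port).
set -e

# Throwaway root CA (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=Demo-CA"

# Server cert: the SAN must cover the name the client dials
openssl req -newkey rsa:2048 -nodes -keyout server.key \
  -out server.csr -subj "/CN=localhost"
printf 'subjectAltName=DNS:localhost\nextendedKeyUsage=serverAuth\n' > server.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 1 -out server.crt -extfile server.ext

# Client cert: EKU clientAuth, identity carried in the CN
openssl req -newkey rsa:2048 -nodes -keyout client.key \
  -out client.csr -subj "/CN=alice"
printf 'extendedKeyUsage=clientAuth\n' > client.ext
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 1 -out client.crt -extfile client.ext

# Step 3 in action: -Verify makes the server require a client certificate
openssl s_server -accept 18443 -cert server.crt -key server.key \
  -CAfile ca.crt -Verify 1 -quiet &
SRV=$!
sleep 1

# Steps 1-2 and 4-5: client validates the server chain, proves its own
# identity, and fails hard on any verification error
echo Q | openssl s_client -connect localhost:18443 \
  -cert client.crt -key client.key -CAfile ca.crt -verify_return_error

kill $SRV 2>/dev/null || true
```

    If either side presents a certificate that does not chain to ca.crt, the handshake (and the script) fails, which is exactly the property mTLS is buying you.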

    Operational considerations:

    • Distribute only CA certificates (public) to trust stores on clients/servers.
    • Protect private keys; rotate and revoke as needed.
    • Keep CRLs up to date on servers that verify client certs.

    Generating and maintaining certificates with easy-rsa

    easy-rsa is a thin wrapper around OpenSSL that maintains a PKI directory and simplifies key/cert lifecycle. Commands below are for easy-rsa v3.

    Install:

    • Debian/Ubuntu: sudo apt-get install easy-rsa
    • RHEL/CentOS/Fedora: sudo dnf install easy-rsa
    • macOS (Homebrew): brew install easy-rsa

    Initialize a new PKI and configure defaults:
    $ mkdir corp-pki && cd corp-pki
    $ easyrsa init-pki

    Create a file named vars in this directory to set defaults. Example vars:

    set_var EASYRSA_ALGO ec
    set_var EASYRSA_CURVE secp384r1
    set_var EASYRSA_DIGEST "sha256"
    set_var EASYRSA_REQ_COUNTRY "US"
    set_var EASYRSA_REQ_PROVINCE "CA"
    set_var EASYRSA_REQ_CITY "San Francisco"
    set_var EASYRSA_REQ_ORG "Example Corp"
    set_var EASYRSA_REQ_OU "IT"
    set_var EASYRSA_REQ_CN "Example-Root-CA"
    set_var EASYRSA_CA_EXPIRE 3650
    set_var EASYRSA_CERT_EXPIRE 825
    set_var EASYRSA_CRL_DAYS 30

    Build a root CA (ideally on an offline machine):
    $ easyrsa build-ca
    (Use build-ca nopass only for labs; in production, protect the CA key with a passphrase and keep the CA host offline.)

    Optional: two-tier CA (recommended for production):

    • On an offline host, create an offline root CA; keep it offline and backed up.
    • On an online or semi-online host, create an intermediate CA by generating a CSR there and signing it with the offline root. In easy-rsa that means setting up two PKIs:
      1. Root PKI: build-ca (self-signed root).
      2. Intermediate PKI:
        Run easyrsa init-pki, then easyrsa build-ca subca: this produces a CA CSR (pki/reqs/ca.req) instead of a self-signed certificate.
        Copy that CSR to the root environment, import it with import-req, and sign it with sign-req ca; install the resulting certificate as the intermediate's CA cert.
        Then use the intermediate to sign servers/clients.
        If you're new to this, start with a single CA and evolve to a root + intermediate later.

    Generate a server key and CSR:
    $ easyrsa gen-req web01 nopass
    This creates:

    • pki/private/web01.key (private key)
    • pki/reqs/web01.req (CSR)

    Sign the server certificate:
    Basic:
    $ easyrsa sign-req server web01

    Adding SANs:

    • easy-rsa 3.1 and newer support a CLI flag:
      $ easyrsa --subject-alt-name="DNS:web01.example.com,IP:203.0.113.10" sign-req server web01
    • For older versions, edit pki/x509-types/server to include a subjectAltName line, or upgrade. A common pattern is to create a custom x509 type that adds:
      subjectAltName = @alt_names
      [ alt_names ]
      DNS.1 = web01.example.com
      IP.1 = 203.0.113.10

    Results are placed in pki/issued/web01.crt. Verify:
    $ openssl verify -CAfile pki/ca.crt pki/issued/web01.crt
    $ openssl x509 -in pki/issued/web01.crt -noout -text

    Generate a client certificate:
    $ easyrsa gen-req alice nopass
    $ easyrsa sign-req client alice

    Distribute artifacts:

    • Servers: web01.key (private), web01.crt (server cert), CA chain (ca.crt and any intermediates).
    • Clients (for mTLS): alice.key (private), alice.crt (client cert), CA chain used by the server if the client also needs to verify the server.

    Revocation and CRL:

    • Revoke a certificate:
      $ easyrsa revoke alice
    • Regenerate the CRL:
      $ easyrsa gen-crl
    • Install pki/crl.pem wherever revocation is enforced (for example, on servers that validate client certs). Refresh it periodically; controlled by EASYRSA_CRL_DAYS.
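    Under the hood this is standard OpenSSL CRL machinery. A self-contained sketch of what revocation does, using raw openssl instead of the easy-rsa wrapper (all names are illustrative):

```shell
# Sketch: revoke a cert and watch verification fail (illustrative names).
set -e

# Throwaway CA plus the minimal database files `openssl ca` expects
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=CRL-Demo-CA"
touch index.txt
echo 1000 > crlnumber
cat > ca.cnf <<'EOF'
[ ca ]
default_ca = demo
[ demo ]
database         = index.txt
crlnumber        = crlnumber
certificate      = ca.crt
private_key      = ca.key
default_md       = sha256
default_crl_days = 30
EOF

# Issue a leaf certificate, then revoke it
openssl req -newkey rsa:2048 -nodes -keyout leaf.key \
  -out leaf.csr -subj "/CN=revoked.example.com"
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 1 -out leaf.crt
openssl ca -config ca.cnf -revoke leaf.crt

# Regenerate the CRL (easy-rsa equivalent: easyrsa gen-crl)
openssl ca -config ca.cnf -gencrl -out crl.pem
openssl crl -in crl.pem -noout -text

# A verifier that loads the CRL now rejects the certificate
openssl verify -CAfile ca.crt -CRLfile crl.pem -crl_check leaf.crt \
  || echo "rejected: certificate is revoked"
```

    Note how the CRL is only consulted by verifiers that load it; a server without the fresh crl.pem keeps accepting the revoked cert, which is why the refresh interval (EASYRSA_CRL_DAYS) matters.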

    Renewal and rotation:

    • Easiest and safest: issue a new key and cert before expiry, deploy it, then revoke the old cert.
    • Keep pki/index.txt, pki/serial, and the entire pki directory backed up; they are the authoritative database of your PKI.

    Diffie-Hellman parameters:

    • Only needed by some servers or VPNs still using finite-field DHE:
      $ easyrsa gen-dh
    • Modern TLS with ECDHE does not require dhparam files.

    Good practices:

    • Use strong algorithms: EC (secp384r1) or RSA 3072/4096.
    • Use SANs for server certificates; clients validate hostnames against SANs, not CNs.
    • Limit cert lifetimes and automate rotation.
    • Protect private keys with passphrases when possible and with strict filesystem permissions (chmod 600).

    Keeping private keys safe with Git and git-crypt

    Goal: version and collaborate on your PKI (CA database, issued certs, CRLs), while ensuring private keys are encrypted at rest in the Git repository and on remotes.

    How git-crypt works:

    • You mark specific paths as “encrypted” via .gitattributes.
    • git-crypt encrypts those files in the repository objects and on remotes. When authorized users unlock locally, files are transparently decrypted in the working tree.
    • Access can be granted with GPG public keys (recommended) or with a shared symmetric key.

    Set up a repository and protect sensitive paths:

    $ cd corp-pki
    $ git init
    $ git-crypt init

    Create .gitattributes with rules such as:

    pki/private/** filter=git-crypt diff=git-crypt
    pki/reqs/** filter=git-crypt diff=git-crypt
    *.key filter=git-crypt diff=git-crypt

    Then:

    git add .gitattributes
    git commit -m "Protect private material with git-crypt"

    Authorize collaborators (GPG-based):
    $ git-crypt add-gpg-user YOUR_GPG_KEY_ID
    Repeat for each user who should be able to decrypt. Each of them needs a clone of the repository and their own GPG private key to unlock.

    Working with the repo:

    • After initializing and adding users, add your PKI directory content. Private keys and CSRs under the protected paths will be encrypted in Git history and on the remote.
    • Push to a remote as usual; the remote stores ciphertext for protected files.

    Cloning and unlocking:

    $ git clone <repo>
    $ cd <repo>
    $ git-crypt unlock

    For GPG-based access, your local GPG agent will prompt; for symmetric, provide the shared key.

    Pre-commit guard (optional but smart):

    • Add a pre-commit hook that aborts if any file containing a private key would be committed outside protected paths. Example logic:
      • If a staged file contains "-----BEGIN PRIVATE KEY-----" (or the RSA/EC PRIVATE KEY variants), check with "git check-attr filter -- <file>" that git-crypt will encrypt it; otherwise fail the commit with guidance.
    • Also .gitignore unencrypted exports or temporary files.
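    One possible shape for such a hook, assuming the .gitattributes rules shown earlier (the demo repo, file names, and messages are all illustrative; git-crypt itself is not needed for the check, since git check-attr only reads .gitattributes):

```shell
# Sketch: a pre-commit guard that blocks unprotected private keys.
set -e
git init -q guard-demo && cd guard-demo

cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
fail=0
for f in $(git diff --cached --name-only --diff-filter=ACM); do
  # Does the staged content contain a PEM private-key header?
  if git show ":$f" | grep -qE -- '-----BEGIN (RSA |EC )?PRIVATE KEY-----'; then
    # Will git-crypt encrypt this path?
    attr=$(git check-attr filter -- "$f" | awk '{print $NF}')
    if [ "$attr" != "git-crypt" ]; then
      echo "blocked: $f holds a private key outside protected paths" >&2
      fail=1
    fi
  fi
done
exit $fail
EOF
chmod +x .git/hooks/pre-commit

# A key under a protected pattern passes...
printf '*.key filter=git-crypt diff=git-crypt\n' > .gitattributes
printf -- '-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n' > ok.key
git add .gitattributes ok.key
.git/hooks/pre-commit && echo "ok.key allowed"

# ...the same material outside protected paths is rejected
cp ok.key leaked.pem
git add leaked.pem
.git/hooks/pre-commit || echo "leaked.pem blocked"
```

    The for-loop splits on whitespace, so paths containing spaces would need a more careful loop (git diff -z) in a production hook.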

    CI/CD:

    • On CI, install git-crypt, import a CI-specific GPG private key (or provide the symmetric key via the CI secret store), and run git-crypt unlock before build/deploy steps.
    • Never print secrets to logs; restrict artifact access.

    Caveats and best practices:

    • If you accidentally committed a secret before adding git-crypt rules, it is already in history. You must rewrite history (for example, with git filter-repo) and rotate the secret.
    • Keep the root CA private key offline and out of Git entirely when possible. If you must keep it in Git, ensure it is strongly protected: encrypted by git-crypt, passphrase-protected, and access tightly controlled.
    • Public artifacts (CA certificate, issued certificates, CRLs) can remain unencrypted, but assess privacy needs; certs can contain identifying info.
    • Enforce least privilege in Git hosting: only grant git-crypt decryption rights to people or systems that truly need the private materials.
    • Combine with full-disk encryption and strict filesystem permissions (chmod 600 on keys). Consider hardware-backed GPG keys for git-crypt.

    Quick end-to-end example

    • Create a CA and a server/client cert:
      mkdir corp-pki && cd corp-pki
      easyrsa init-pki
      easyrsa build-ca
      easyrsa gen-req web01 nopass
      easyrsa --subject-alt-name="DNS:web01.example.com" sign-req server web01

      easyrsa gen-req alice nopass
      easyrsa sign-req client alice

      easyrsa gen-crl
    • Put under Git with encryption of sensitive files:
      git init
      git-crypt init
      printf 'pki/private/** filter=git-crypt diff=git-crypt\npki/reqs/** filter=git-crypt diff=git-crypt\n*.key filter=git-crypt diff=git-crypt\n' > .gitattributes

      git add .
      git commit -m "PKI bootstrap with protected private material"

      git remote add origin <your-remote>
      git push -u origin main

      git-crypt add-gpg-user <YOUR_GPG_KEY_ID>

      git push
    • Test mTLS with curl:
      On server: install web01.key and web01.crt; configure to require client certs and trust ca.crt.
      On client:
      curl --cacert pki/ca.crt --cert pki/issued/alice.crt --key pki/private/alice.key https://web01.example.com/
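    On the server side, an nginx virtual host enforcing mTLS might look like this (the paths are illustrative, and any TLS server that supports client-certificate verification works just as well):

```nginx
# Hypothetical nginx server block requiring client certificates
server {
    listen 443 ssl;
    server_name web01.example.com;

    ssl_certificate        /etc/nginx/tls/web01.crt;   # server cert (EKU serverAuth)
    ssl_certificate_key    /etc/nginx/tls/web01.key;
    ssl_client_certificate /etc/nginx/tls/ca.crt;      # CA used to verify clients
    ssl_verify_client      on;                         # reject clients without a valid cert
    ssl_crl                /etc/nginx/tls/crl.pem;     # optional: enforce revocation

    location / {
        # the verified client identity is available for authorization decisions
        add_header X-Client-DN $ssl_client_s_dn;
        return 200 "hello\n";
    }
}
```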

    With these patterns you can own the full lifecycle: generate, distribute, rotate, and revoke certificates; enforce mTLS; and keep the sensitive pieces encrypted even when stored in Git and on remote servers.

  • Stunnel: the simplest way to stitch private services across clouds

    Stunnel: the simplest way to stitch private services across clouds

    If you need to connect private TCP services across machines and clouds without deploying a full VPN, stunnel is often the fastest, safest, and least fussy way to do it. It wraps any TCP connection in TLS, giving you encryption and authentication with a single lightweight daemon and a short config file. Because it rides over standard outbound Internet connectivity and can multiplex multiple services on one port, stunnel makes multi-cloud private networking practical without changing routing, installing kernel modules, or re-architecting your apps.

    What stunnel is and why it’s different

    • A TLS wrapper for TCP: stunnel terminates and initiates TLS, then forwards bytes to a local or remote TCP port. Your apps keep speaking their native protocols (PostgreSQL, Redis, MySQL, MQTT, custom services) and don’t need to know about TLS.
    • Simple by design: single binary, tiny configuration, no kernel changes, no overlay networks. It’s closer to “secure netcat” than to a VPN.
    • Runs anywhere: Linux, BSD, macOS, Windows, containers. Package-managed on most distros.
    • Production-hardened: in use for decades, based on OpenSSL/LibreSSL, with features like mutual TLS, chroot, dropping privileges, OCSP/CRL, and strict cipher control.

    Why stunnel is ideal for multi-cloud private service connectivity

    • Works over the public Internet, safely: mutual TLS authenticates both sides; traffic is end-to-end encrypted. You can keep your upstream services bound to localhost or private IPs and expose only stunnel.
    • No network plumbing: no VPC peering, no IPSec/WireGuard setup, no route tables. Just open a TCP port on the server side (often 443) and allow outbound on the client side.
    • One port, many services: stunnel can use TLS SNI to multiplex several backends on a single public IP/port (usually 443), so you can traverse strict egress firewalls and simplify security groups.
    • Multi-provider friendly: run a small stunnel on each cloud VM. Your app connects to localhost; stunnel handles the secure hop across clouds.
    • Incremental: add one service at a time. No need to rewire everything into a mesh or L3 VPN.

    Common patterns
    1) Hub-and-spoke

    • A central “hub” server exposes port 443 with stunnel.
    • Each “spoke” (in any cloud) runs a client-mode stunnel that dials the hub and provides a local port for the application to connect to.
    • Good for small teams and many readers of a few central services.

    2) Service-to-service bridges

    • One stunnel instance front-ends a private service on the server.
    • Another stunnel instance on the consumer side exposes a local port that connects to the remote stunnel over TLS.
    • Great for connecting databases, queues, or internal APIs across clouds, regions, or on-prem to cloud.

    3) Single IP, many services via SNI

    • Use one public IP:443 and multiple service blocks on the server, each with an SNI hostname (e.g., pg.example.com, redis.example.com).
    • Clients set the matching SNI name per service and reuse the same remote IP and port.

    Minimal, practical example

    Goal: An app in Cloud A consumes PostgreSQL and Redis running in Cloud B without exposing either service directly.

    Certificates

    • Create a small private CA and issue server and client certificates, or use ACME/Let’s Encrypt for the server and a private CA for clients.
    • Put CA certificate on both sides. Put server cert/key on server, client cert/key on client. Enable mutual TLS.

    Server (Cloud B) example configuration
    Global options:

    • setuid = stunnel
    • setgid = stunnel
    • chroot = /var/lib/stunnel
    • output = /var/log/stunnel.log
    • debug = info
    • sslVersionMin = TLSv1.2
    • options = NO_RENEGOTIATION
    • cert = /etc/stunnel/server.crt
    • key = /etc/stunnel/server.key
    • CAfile = /etc/stunnel/ca.crt
    • verify = 2 (require and verify client certs)

    TLS entry point (master service):

    • [tls]
    • accept = 0.0.0.0:443
    • connect = 127.0.0.1:5432 (fallback when the client sends no matching SNI name)

    PostgreSQL service (SNI slave):

    • [pg]
    • sni = tls:pg.example.com
    • connect = 127.0.0.1:5432

    Redis service (SNI slave):

    • [redis]
    • sni = tls:redis.example.com
    • connect = 127.0.0.1:6379

    Notes:

    • All services share port 443: the master service binds it, and stunnel hands each connection to the slave service whose sni pattern (service:hostname) matches the name the client sent. Keep PostgreSQL and Redis bound to localhost; only stunnel is public.
    • If you prefer separate ports, drop the sni lines and give each service its own accept (for example, accept = 443 for pg and accept = 444 for redis).
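    Collected into a single file, a server-side configuration along these lines should work (paths and hostnames are illustrative; note that stunnel expresses SNI multiplexing as one master service that binds the port plus slave services selected by sni = MASTER:HOSTNAME):

```ini
; /etc/stunnel/stunnel.conf on the Cloud B server (illustrative)
setuid = stunnel
setgid = stunnel
chroot = /var/lib/stunnel
output = /var/log/stunnel.log
debug = info
sslVersionMin = TLSv1.2
options = NO_RENEGOTIATION
cert = /etc/stunnel/server.crt
key = /etc/stunnel/server.key
CAfile = /etc/stunnel/ca.crt
verify = 2

[tls]
accept = 0.0.0.0:443
connect = 127.0.0.1:5432

[pg]
sni = tls:pg.example.com
connect = 127.0.0.1:5432

[redis]
sni = tls:redis.example.com
connect = 127.0.0.1:6379
```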

    Client (Cloud A) example configuration
    Global options:

    • client = yes
    • output = /var/log/stunnel.log
    • debug = info
    • sslVersionMin = TLSv1.2
    • cert = /etc/stunnel/client.crt
    • key = /etc/stunnel/client.key
    • CAfile = /etc/stunnel/ca.crt
    • verifyChain = yes
    • OCSPaia = yes (optional, enables OCSP via AIA if using public CAs)

    PostgreSQL local endpoint:

    • [pg]
    • accept = 127.0.0.1:5432
    • connect = hub.public.ip.or.name:443
    • sni = pg.example.com
    • checkHost = pg.example.com
    • delay = yes (resolve DNS at connect time)

    Redis local endpoint:

    • [redis]
    • accept = 127.0.0.1:6379
    • connect = hub.public.ip.or.name:443
    • sni = redis.example.com
    • checkHost = redis.example.com
    • delay = yes

    Now your applications point at localhost:5432 and localhost:6379. Stunnel carries traffic securely to Cloud B and into the private services.
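    The client side, assembled the same way (the hub hostname is illustrative):

```ini
; /etc/stunnel/stunnel.conf on the Cloud A client (illustrative)
client = yes
output = /var/log/stunnel.log
debug = info
sslVersionMin = TLSv1.2
cert = /etc/stunnel/client.crt
key = /etc/stunnel/client.key
CAfile = /etc/stunnel/ca.crt
verifyChain = yes

[pg]
accept = 127.0.0.1:5432
connect = hub.example.net:443
sni = pg.example.com
checkHost = pg.example.com
delay = yes

[redis]
accept = 127.0.0.1:6379
connect = hub.example.net:443
sni = redis.example.com
checkHost = redis.example.com
delay = yes
```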

    Multi-cloud high availability tips

    • Multiple upstreams: repeat the connect option in the client's service block, one line per endpoint. Use failover = rr for round-robin, or the default priority order so later entries are only tried when earlier ones fail.
    • DNS and rotation: use delay = yes so DNS is re-resolved at connect time; pair with multiple A records.
    • Health checks: stunnel logs to syslog; integrate log monitoring. You can also run a simple TCP health probe against the client’s local accept ports.
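    For example, a client-side service block pointing at two hub endpoints might look like this (the addresses are illustrative, taken from documentation IP ranges):

```ini
[pg]
accept = 127.0.0.1:5432
; repeat connect for each upstream; failover = rr round-robins,
; the default (prio) tries them in the listed order
connect = 198.51.100.10:443
connect = 203.0.113.20:443
failover = rr
sni = pg.example.com
checkHost = pg.example.com
delay = yes
```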

    Security hardening checklist

    • TLS policy: set sslVersionMin = TLSv1.2 or TLSv1.3, define strong ciphers/ciphersuites if you have compliance needs.
    • Mutual TLS everywhere: verify = 2 on server, verifyChain and checkHost/checkIP on client.
    • Least privilege: setuid/setgid to a dedicated user; use chroot; restrict filesystem permissions on keys.
    • Certificate lifecycle: automate renewal (ACME for server certs), HUP stunnel to reload. For client cert rotation, use short lifetimes or CRLs (CRLfile) if revocation is needed.
    • Don’t enable TLS compression. Keep NO_RENEGOTIATION enabled.
    • Firewalls: only expose your stunnel port(s); keep backends on loopback or private subnets.

    Operational conveniences

    • Single-port multiplexing with SNI reduces security group sprawl and helps traverse locked-down networks that only allow 443 egress.
    • Defer hostname resolution with delay = yes to survive IP changes without restarts.
    • Transparent proxying is available on Linux if you must preserve source IPs for the backend, but it requires advanced routing/iptables and capabilities; most deployments don’t need it.
    • Systemd integration is straightforward; most packages install a service unit. Send SIGHUP to reload configs and new certs.

    How stunnel compares to alternatives

    • WireGuard/OpenVPN: full L3 VPNs that stitch networks and routes. Great for broad connectivity, but more moving parts, privileged setup, and potential blast radius. Stunnel is easier for a few explicit services.
    • SSH tunnels: quick and familiar but harder to manage at scale, weaker policy and TLS compatibility, and less robust for multi-tenant multiplexing.
    • NGINX/HAProxy/Caddy (TCP/stream): more features and L7 routing, but heavier and often oriented to server-side termination. Stunnel is tiny, neutral, and equally happy on client or server.
    • Service meshes: powerful but complex. Stunnel is the opposite: minimal and manual, ideal when you just need secure pipes.

    When stunnel is not a fit

    • UDP traffic (e.g., DNS, some message brokers) is out of scope.
    • Dynamic multi-hop routing, discovery, or policy-based connectivity requires a mesh/VPN or SD-WAN solution.
    • If you must expose original client IPs to backends without extra networking, you’ll need transparent proxying or different tooling.

    10-minute quickstart checklist
    1) Install stunnel on both ends from your distro packages.
    2) Create or obtain certificates, place CA on both sides, server/client keys on their respective nodes.
    3) Write one service block per backend. On the server, map accept (public) to connect (private). On the client, map a local accept to remote connect, set sni and checkHost.
    4) Open firewall for the server’s public port (often 443). Ensure client can reach it outbound.
    5) Start stunnel, watch logs, test with your app against the client’s local port.
    6) Add services incrementally; consider SNI to reuse the same public port.

    Bottom line
    Stunnel is the pragmatic sweet spot for securely connecting multiple private services across publicly reachable servers and multiple cloud providers. It gives you strong TLS, mutual authentication, and multi-service multiplexing with minimal operational overhead. For teams that want secure, explicit connections rather than full-blown network overlays, stunnel is often the simplest and most reliable tool for the job.