Stunnel: the simplest way to stitch private services across clouds

If you need to connect private TCP services across machines and clouds without deploying a full VPN, stunnel is often the fastest, safest, and least fussy way to do it. It wraps any TCP connection in TLS, giving you encryption and authentication with a single lightweight daemon and a short config file. Because it rides over standard outbound Internet connectivity and can multiplex multiple services on one port, stunnel makes multi-cloud private networking practical without changing routing, installing kernel modules, or re-architecting your apps.

What stunnel is and why it’s different

  • A TLS wrapper for TCP: stunnel terminates and initiates TLS, then forwards bytes to a local or remote TCP port. Your apps keep speaking their native protocols (PostgreSQL, Redis, MySQL, MQTT, custom services) and don’t need to know about TLS.
  • Simple by design: single binary, tiny configuration, no kernel changes, no overlay networks. It’s closer to “secure netcat” than to a VPN.
  • Runs anywhere: Linux, BSD, macOS, Windows, containers. Package-managed on most distros.
  • Production-hardened: in use for decades, based on OpenSSL/LibreSSL, with features like mutual TLS, chroot, dropping privileges, OCSP/CRL, and strict cipher control.

Why stunnel is ideal for multi-cloud private service connectivity

  • Works over the public Internet, safely: mutual TLS authenticates both sides; traffic is end-to-end encrypted. You can keep your upstream services bound to localhost or private IPs and expose only stunnel.
  • No network plumbing: no VPC peering, no IPsec/WireGuard setup, no route tables. Just open a TCP port on the server side (often 443) and allow outbound connections on the client side.
  • One port, many services: stunnel can use TLS SNI to multiplex several backends on a single public IP/port (usually 443), so you can traverse strict egress firewalls and simplify security groups.
  • Multi-provider friendly: run a small stunnel on each cloud VM. Your app connects to localhost; stunnel handles the secure hop across clouds.
  • Incremental: add one service at a time. No need to rewire everything into a mesh or L3 VPN.

Common patterns
1) Hub-and-spoke

  • A central “hub” server exposes port 443 with stunnel.
  • Each “spoke” (in any cloud) runs a client-mode stunnel that dials the hub and provides a local port for the application to connect to.
  • Good for small teams and many readers of a few central services.

2) Service-to-service bridges

  • One stunnel instance front-ends a private service on the server.
  • Another stunnel instance on the consumer side exposes a local port that connects to the remote stunnel over TLS.
  • Great for connecting databases, queues, or internal APIs across clouds, regions, or on-prem to cloud.

3) Single IP, many services via SNI

  • Use one public IP:443 and multiple service blocks on the server, each with an SNI hostname (e.g., pg.example.com, redis.example.com).
  • Clients set the matching SNI name per service and reuse the same remote IP and port.

Minimal, practical example

Goal: An app in Cloud A consumes PostgreSQL and Redis running in Cloud B without exposing either service directly.

Certificates

  • Create a small private CA and issue server and client certificates, or use ACME/Let’s Encrypt for the server and a private CA for clients.
  • Put the CA certificate on both sides; put the server cert/key on the server and the client cert/key on the client. Enable mutual TLS. A minimal openssl sketch of the private-CA route follows.
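A minimal sketch, assuming OpenSSL 1.1.1 or later; the subject names and lifetimes are placeholders you will want to adjust:

  # create a small private CA (self-signed, 5 years)
  openssl req -x509 -newkey rsa:4096 -nodes -days 1825 \
      -keyout ca.key -out ca.crt -subj "/CN=example-internal-ca"

  # server certificate covering both SNI names used later in this example
  openssl req -newkey rsa:2048 -nodes -keyout server.key \
      -out server.csr -subj "/CN=pg.example.com"
  printf "subjectAltName=DNS:pg.example.com,DNS:redis.example.com" > san.ext
  openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -extfile san.ext -out server.crt

  # client certificate for mutual TLS
  openssl req -newkey rsa:2048 -nodes -keyout client.key \
      -out client.csr -subj "/CN=cloud-a-client"
  openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out client.crt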

Server (Cloud B) example configuration
Global options:

  setuid = stunnel
  setgid = stunnel
  chroot = /var/lib/stunnel
  output = /var/log/stunnel.log
  debug = info
  sslVersionMin = TLSv1.2
  options = NO_RENEGOTIATION
  cert = /etc/stunnel/server.crt
  key = /etc/stunnel/server.key
  CAfile = /etc/stunnel/ca.crt
  verify = 2                 ; require and verify client certificates

Service sections (one listener, SNI-routed):

  ; master service: owns the public listener; connections whose SNI name
  ; matches no slave service fall through to its connect target
  [main]
  accept = 0.0.0.0:443
  connect = 127.0.0.1:5432

  ; slave services: selected by the client's SNI name
  [pg]
  sni = main:pg.example.com
  connect = 127.0.0.1:5432

  [redis]
  sni = main:redis.example.com
  connect = 127.0.0.1:6379

Notes:

  • Both backends share port 443: stunnel matches the client's SNI name against the slave services' sni patterns, and the master service catches anything else. Keep PostgreSQL and Redis bound to localhost; only stunnel is public.
  • If you prefer separate ports, give each backend its own service with its own accept (e.g., accept = 443 for pg and accept = 444 for redis) and omit sni.

Client (Cloud A) example configuration
Global options:

  client = yes
  output = /var/log/stunnel.log
  debug = info
  sslVersionMin = TLSv1.2
  cert = /etc/stunnel/client.crt
  key = /etc/stunnel/client.key
  CAfile = /etc/stunnel/ca.crt
  verifyChain = yes
  OCSPaia = yes              ; optional: OCSP via the AIA extension (public CAs)

PostgreSQL local endpoint:

  [pg]
  accept = 127.0.0.1:5432
  connect = hub.public.ip.or.name:443
  sni = pg.example.com
  checkHost = pg.example.com
  delay = yes                ; resolve DNS at connect time

Redis local endpoint:

  [redis]
  accept = 127.0.0.1:6379
  connect = hub.public.ip.or.name:443
  sni = redis.example.com
  checkHost = redis.example.com
  delay = yes

Now your applications point at localhost:5432 and localhost:6379. Stunnel carries traffic securely to Cloud B and into the private services.
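With both daemons running, the applications need no TLS awareness at all. For example (hypothetical database and credentials):

  # plain TCP to loopback; stunnel carries the hop to Cloud B
  psql "host=127.0.0.1 port=5432 user=app dbname=appdb"
  redis-cli -h 127.0.0.1 -p 6379 ping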

Multi-cloud high availability tips

  • Multiple upstreams: repeat the connect option inside a service, one line per upstream (connect = ip1:443 on one line, connect = ip2:443 on the next). Use failover = rr for round-robin, or the default failover = prio to try upstreams in the order listed; a sketch follows this list.
  • DNS and rotation: use delay = yes so DNS is re-resolved at connect time; pair with multiple A records.
  • Health checks: stunnel logs to syslog; integrate log monitoring. You can also run a simple TCP health probe against the client’s local accept ports.
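A client-side sketch of the multiple-upstream pattern, assuming two hypothetical hub addresses:

  [pg]
  client = yes
  accept = 127.0.0.1:5432
  connect = hub-a.example.net:443    ; tried first under failover = prio
  connect = hub-b.example.net:443
  failover = prio                    ; or rr for round-robin
  sni = pg.example.com
  checkHost = pg.example.com
  delay = yes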

Security hardening checklist

  • TLS policy: set sslVersionMin = TLSv1.2 or TLSv1.3 and, if you have compliance needs, pin strong ciphers/ciphersuites (sketch after this list).
  • Mutual TLS everywhere: verify = 2 on server, verifyChain and checkHost/checkIP on client.
  • Least privilege: setuid/setgid to a dedicated user; use chroot; restrict filesystem permissions on keys.
  • Certificate lifecycle: automate renewal (ACME for server certs), HUP stunnel to reload. For client cert rotation, use short lifetimes or CRLs (CRLfile) if revocation is needed.
  • Don’t enable TLS compression. Keep NO_RENEGOTIATION enabled.
  • Firewalls: only expose your stunnel port(s); keep backends on loopback or private subnets.
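As a sketch, a TLS 1.3-only policy could be added to the global section; tighten or relax it to match your compliance baseline:

  sslVersionMin = TLSv1.3
  ciphersuites = TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256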

Operational conveniences

  • Single-port multiplexing with SNI reduces security group sprawl and helps traverse locked-down networks that only allow 443 egress.
  • Defer hostname resolution with delay = yes to survive IP changes without restarts.
  • Transparent proxying is available on Linux if you must preserve source IPs for the backend, but it requires advanced routing/iptables and capabilities; most deployments don’t need it.
  • Systemd integration is straightforward; most packages install a service unit. Send SIGHUP to reload configs and pick up renewed certs, as shown below.
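Reloading is then a one-liner, assuming the packaged unit maps reload to SIGHUP (most do):

  systemctl reload stunnel        # unit may be named stunnel4 on Debian/Ubuntu
  kill -HUP "$(pidof stunnel)"    # equivalent without systemd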

How stunnel compares to alternatives

  • WireGuard/OpenVPN: full L3 VPNs that stitch networks and routes. Great for broad connectivity, but more moving parts, privileged setup, and potential blast radius. Stunnel is easier for a few explicit services.
  • SSH tunnels: quick and familiar, but harder to manage at scale, with coarser access policy, no interoperability with TLS tooling, and less robust multi-tenant multiplexing.
  • NGINX/HAProxy/Caddy (TCP/stream): more features and L7 routing, but heavier and often oriented to server-side termination. Stunnel is tiny, neutral, and equally happy on client or server.
  • Service meshes: powerful but complex. Stunnel is the opposite: minimal and manual, ideal when you just need secure pipes.

When stunnel is not a fit

  • UDP traffic (e.g., DNS, some message brokers) is out of scope.
  • Dynamic multi-hop routing, discovery, or policy-based connectivity requires a mesh/VPN or SD-WAN solution.
  • If you must expose original client IPs to backends without extra networking, you’ll need transparent proxying or different tooling.

10-minute quickstart checklist
1) Install stunnel on both ends from your distro packages.
2) Create or obtain certificates, place CA on both sides, server/client keys on their respective nodes.
3) Write one service block per backend. On the server, map accept (public) to connect (private). On the client, map a local accept to remote connect, set sni and checkHost.
4) Open firewall for the server’s public port (often 443). Ensure client can reach it outbound.
5) Start stunnel, watch logs, and test with your app against the client's local port (a TLS-level probe example follows this list).
6) Add services incrementally; consider SNI to reuse the same public port.
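Step 5 can also be verified below the application layer with openssl s_client, reusing the client credentials from the example above:

  # confirms reachability, the server certificate chain, and SNI routing
  openssl s_client -connect hub.public.ip.or.name:443 \
      -servername pg.example.com -CAfile /etc/stunnel/ca.crt \
      -cert /etc/stunnel/client.crt -key /etc/stunnel/client.key </dev/null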

Bottom line
Stunnel is the pragmatic sweet spot for securely connecting multiple private services across publicly reachable servers and multiple cloud providers. It gives you strong TLS, mutual authentication, and multi-service multiplexing with minimal operational overhead. For teams that want secure, explicit connections rather than full-blown network overlays, stunnel is often the simplest and most reliable tool for the job.
