If you need to run a docker container (or several) and restrict its traffic to a specific VPN, gluetun seems to be the de facto way to do it. And it works great, for that specific container. But it gets messy when you want other containers to route their traffic through it, especially if you're running this as a lightweight container in proxmox.
I recently stumbled upon podman pods and quadlets, and it felt like these might be a better way to do it. For my use case, I'm using an unprivileged LXC container running in proxmox, but this should apply anywhere.
Concept
Podman has the concept of a Pod, similar to Kubernetes. It differs from docker compose in that every container shares the same network namespace: each container in the pod effectively has the same IP. Contrast this with docker, where the best you can do is create a shared network and attach all your containers to it.
The main idea is:
- The pod will define the networking config.
- A vpn container will live in the pod and act as the default gateway for all network activity in the pod.
- Any other container in the pod will automatically route traffic via the vpn.
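To see the shared-namespace idea in isolation, here's a quick sketch (pod and container names are made up for the demo; requires podman) showing that two containers in one pod can reach each other over localhost:

```shell
# Create a throwaway pod and run two containers inside it.
# Because they share one network namespace, the second
# container reaches the first via 127.0.0.1.
podman pod create --name demo-pod
podman run -d --pod demo-pod --name web docker.io/library/nginx:alpine
podman run --rm --pod demo-pod docker.io/library/alpine:latest \
    wget -qO- http://127.0.0.1:80 >/dev/null && echo "shared namespace works"

# Clean up
podman pod rm -f demo-pod
```

The same mechanism is what lets the VPN container act as everyone's gateway: there is only one network stack to route.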
Setup
LXC
If you’re running on proxmox, using an unprivileged LXC container is the best way to do this. I’d suggest using the podman script from community-scripts. The important thing to note is this needs the following options enabled during creation:
- You will only have one LXC container, and it will hold all of your workloads, so size it appropriately.
- Enable TUN/TAP support
- Enable Nesting
Whether or not you use the script, these are the lines you should have in your /etc/pve/lxc/<id>.conf:
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
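Once the LXC container is up, a quick sanity check (assuming the bind mount above worked) is to confirm the TUN device made it inside:

```shell
# Inside the LXC container: the device node should exist as a
# character device with major 10, minor 200 -- matching the
# cgroup2 allow rule in the config above.
ls -l /dev/net/tun
```

If the device is missing, gluetun will fail to bring up the WireGuard tunnel later, so it's worth checking now.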
Prepare the host
Ensure you are on a modern version of podman. At least 4.4 is necessary to support quadlets. If you use the script above, you should be fine, but try to be on the most recent version possible.
podman --version
Create the pod
Create the pod unit at /etc/containers/systemd/secure-net.pod
[Unit]
Description=Secure VPN Pod Network
After=network-online.target
Wants=network-online.target
[Pod]
PodName=secure-net
# Critical for LXC: Use slirp4netns instead of bridge
# 'port_handler=slirp4netns' is stable for incoming ports
Network=slirp4netns:port_handler=slirp4netns
# Keep your ports here -- only explicitly listed
# ports will be exposed, regardless if the containers
# bind to all interfaces. only needed if you are serving
# webapps, etc
PublishPort=8080:8080
[Service]
# Prevents systemd from killing the pod when it spawns processes
Delegate=yes
# If the infra container dies, restart it
Restart=always
[Install]
WantedBy=multi-user.target
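Quadlet files are turned into ordinary systemd units by a generator at daemon-reload time. If a unit refuses to show up, the generator's dry-run mode prints what it would produce, which is handy for catching syntax errors (the binary's path varies by distribution; this is where Fedora and Debian ship it):

```shell
# Print the systemd units that would be generated from the
# quadlet files, without installing anything.
/usr/lib/systemd/system-generators/podman-system-generator --dryrun
```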
Next create the vpn container unit at /etc/containers/systemd/vpn.container:
[Unit]
Description=VPN Client Container
# Ensure this starts AFTER the pod shell is ready
After=secure-net-pod.service
Requires=secure-net-pod.service
[Container]
Image=docker.io/qmcgaw/gluetun:latest
ContainerName=gluetun-vpn
AutoUpdate=registry
# Connect to the running Pod
Pod=secure-net.pod
# Load variables from your file
EnvironmentFile=/etc/containers/systemd/vpn.env
# REQUIRED for VPNs to work
AddCapability=NET_ADMIN
AddDevice=/dev/net/tun
[Service]
Restart=always
# Optional: Clean up dependencies
# BindsTo=secure-net-pod.service
[Install]
WantedBy=multi-user.target
Notice we’re still using gluetun, and pointing at a vpn.env for the environment variables. Here’s an example of /etc/containers/systemd/vpn.env for nordvpn:
VPN_SERVICE_PROVIDER=nordvpn
VPN_TYPE=wireguard
WIREGUARD_PRIVATE_KEY=<your key here>
SERVER_COUNTRIES=Switzerland
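Since vpn.env holds your WireGuard private key, it's worth locking the file down so only root can read it (gluetun gets the values via systemd/podman, so nothing else needs access):

```shell
# Restrict the env file to root only.
chown root:root /etc/containers/systemd/vpn.env
chmod 600 /etc/containers/systemd/vpn.env
```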
Finally create your workload, for this example, let’s just use a simple alpine image:
[Unit]
Description=Demo App
# Start only after the VPN container is running
After=vpn.service
Requires=vpn.service
[Container]
Image=alpine:latest
ContainerName=demo-app
# this is just a demo, your apps should have a command or
# entry-point that stays up
Exec=sleep infinity
# Assign to the same Pod
Pod=secure-net.pod
[Service]
Restart=always
[Install]
WantedBy=multi-user.target
Test it out
With all the unit files above in place, we're ready to test:
systemctl daemon-reload
systemctl start secure-net-pod
If all is good, you should see the pod in the active state (but not the containers yet, hang tight)
systemctl status secure-net-pod
You should also see it running via podman ps, something like this, which also shows our port mapping is working as expected:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5e1e59691191 localhost/podman-pause:5.4.2-1766335321 2 minutes ago Up 2 minutes 0.0.0.0:8080->8080/tcp secure-net-infra
Now you’re ready to add the vpn:
systemctl start vpn
As above, the service should show as active via systemctl status vpn, and podman ps should show your gluetun:latest container running.
Finally, let's spin up the app and check that its traffic is routed through the VPN (a Swiss IP in my example):
systemctl start app
Your final output from podman ps should be something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5e1e59691191 localhost/podman-pause:5.4.2-1766335321 12 minutes ago Up 12 minutes 0.0.0.0:8080->8080/tcp secure-net-infra
ff503fc7866c docker.io/qmcgaw/gluetun:latest 6 minutes ago Up 6 minutes 0.0.0.0:8080->8080/tcp, 8000/tcp, 8388/tcp, 8888/tcp, 8388/udp gluetun-vpn
1b17f10cbf56 docker.io/library/alpine:latest sleep infinity 7 seconds ago Up 7 seconds 0.0.0.0:8080->8080/tcp demo-app
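Because the app shares gluetun's network namespace, you can also see the VPN's tun interface from inside the app container (alpine's busybox provides the ip applet):

```shell
# tun0 is created by gluetun, but is visible from any
# container in the pod -- they share one network stack.
podman exec demo-app ip addr show tun0
```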
Now let’s do that ip check with podman exec demo-app wget -qO- ipinfo.io:
{
"ip": "94.101.114.138",
"city": "Zürich",
"region": "Zurich",
"country": "CH",
"loc": "47.3667,8.5500",
"org": "AS136787 PacketHub S.A.",
"postal": "8000",
"timezone": "Europe/Zurich",
"readme": "https://ipinfo.io/missingauth"
}
Success!
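A slightly stronger check is to compare the host's public IP with the pod's; they should differ, since only the pod's traffic goes through the tunnel (assumes wget on the host and working outbound DNS):

```shell
# Fetch the public IP as seen from the host and from the pod.
host_ip=$(wget -qO- ipinfo.io/ip)
pod_ip=$(podman exec demo-app wget -qO- ipinfo.io/ip)
echo "host: $host_ip  pod: $pod_ip"
[ "$host_ip" != "$pod_ip" ] && echo "pod traffic is going through the VPN"
```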
Unlike normal systemd units, you don't need to run systemctl enable on these; quadlet-generated units are recreated at boot and will come back up after a reboot.
Auto-updating latest containers
You may have noticed the vpn container is tagged with AutoUpdate=registry. This tells podman it's eligible for auto-updates, but doesn't actually auto-update anything by itself. Running podman auto-update manually will do the update check, pull the image, and restart the container if needed. You can put this on a schedule, or use the built-in timer, but I'll leave that to you.
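For reference, the built-in pieces look like this (both the --dry-run flag and the timer ship with podman):

```shell
# Preview which containers would be updated, without changing anything.
podman auto-update --dry-run

# Enable podman's built-in timer to run the check daily.
systemctl enable --now podman-auto-update.timer
```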
Pinned versions are safer and probably what you should be using anyway :)