This blog is part of a series on self-hosting. I bought a server, and I’m setting it up. But I wonder: am I doing things right? By sharing my process, others can respond with their ideas, and together we can help more people get started.
Last time, we used Docker & Portainer to create and manage our containers, and set them up to update automatically. Now we’re ready to host all kinds of services on our server! But how do we access them when we’re not home?
Hello World
Before we start accessing containers from the outside, we need a container to access. Let’s get a simple HTTP server working. We’ll use the `httpd` image and assign the host’s port `1234` to it.

After deploying, we can go to `[Your Server IP]:1234` and there it is: the famous “It works!” message.
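In Portainer this is a few clicks; as a reference, here is a sketch of the equivalent Compose definition (the service name `hello` is just a placeholder):

```yaml
services:
  hello:
    image: httpd:latest
    ports:
      - "1234:80"  # host port 1234 → container port 80, where httpd listens
```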
Exposing your containers: This is very risky!
We’re going to create two ways of accessing our server. Most services we’ll access over a VPN: in Part 2, we’ll set up a VPN so we can connect to our home network, use our home internet connection, and access our services. However, we do want to expose some services to the internet.
While we’ve done great and set up automatic updates for our machine & Docker containers, that unfortunately doesn’t mean we’re protected. Each Docker container can have security issues, especially if it’s not being updated. Even when containers are updated, you don’t immediately know what has been updated: an image might only update its exposed application and not the underlying web server, for example.
What if you do get hacked? The hacker will have access to the container, and can start looking around your local network for vulnerable devices. They can also access any volumes you have connected to your Docker containers. Some volumes contain `docker.sock`, which gives access to the Docker daemon and lets an attacker run arbitrary containers, so avoid exposing any container to the internet that has access to `docker.sock`.
While you can limit the risk by checking on your containers, the best way to limit the risk is to not expose them at all. So far, the only one I would like to expose is Jellyfin, so I can show my parents videos I’ve made on their Chromecast. The rest can be behind a VPN just fine.
Access from outside
To get access from the outside, we’ll need to forward a port to our server. For this, it helps if our server has a static IP on our network. Ubuntu has a tool for this called Netplan.
Note: Most routers let you configure a static IP for a machine based on MAC address. Even though I haven’t done this personally, this can be an alternative for what we’re doing in this step.
First, we need to check our current network settings. Running `netplan status` prints our current configuration. This will likely show multiple interfaces. There’s likely only one that:

- Is ‘online’, indicated by the green dot on the left
- Does not contain the word ‘docker’ after the number

After the number comes the interface name. This is likely something like `eth0`, and will appear before the word `ethernet`. We’re also looking for our ‘Gateway’, which is the first IP in bold after ‘Routes’.
At ‘Addresses’ we’ll find our IP in bold. We’ll need to pick a new one! We can change the number after the last dot; by choosing a high number (but below 255!) we lower the chances that our router assigns this IP to another device while our server is off. So if you got back the IP `192.168.1.15`, you can pick `192.168.1.234`, for example.
Now we would like a little bit of help, so there’s a Netplan Configuration Generator. For the interface name, we enter `eth0`, and for the IP, we take our new IP and add `/24`. The gateway & DNS fields can be left empty, although if you want to use custom DNS servers, this is a good time to set them. We get the following configuration:
```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.234/24]
      routes:
        - to: default
          via: 192.168.1.1
```
Create a file at `/etc/netplan/99_config.yaml`, and put the configuration from the generator into this file. Afterward, run `sudo netplan apply`. If you were connected over SSH, you’ll need to reconnect on the new IP. Now you can run `hostname -I` to see if it worked!
Setting up your domain
It’s entirely possible to use your external IP to access all services; however, this means you’ll need to set up a separate port for each exposed service and remember them all. It’s easier to use your own custom domain.
My internet service provider also provides a static IP. This means I won’t have to mess with anything ‘dynamic DNS’ related. If you do not have a static IP address, I recommend ddclient, which also has a Docker container.
I’m using a domain where all subdomains are pointed at my home IP with a wildcard. This means I never have to change DNS settings whenever I expose a new service.
This also makes it a little bit harder to find the exposed services on my network, since you actually need to know the full address. It also means there’s no public list of services on this domain. A little security through obscurity as a bonus.
This results in the following DNS settings:
| Name | Type | Value |
| --- | --- | --- |
| @ | AAAA | [IPv6 address] |
| @ | A | [IPv4 address] |
| www | CNAME | @ |
| * | AAAA | [IPv6 address] |
| * | A | [IPv4 address] |
Port forwarding
We’ll need to forward ports 80 and 443 to our server. For this, we also need the static IP we set up earlier. However, I can’t explain how to do this, since it’s different for each router.
If you enter the previously found default gateway in your browser, you’ll likely be met with a login screen. Together with the magic of searching online and the brand and model of your router, you should be able to find your way here.
HTTP & SSL: Caddy!
Now it’s time to make our beautiful hello world available with Caddy. We first need to create two directories: one for config, the other for data. I create the directories `~/docker/caddy/conf` and `~/docker/caddy/data`.
In the `conf` directory, we’ll make a new file called `Caddyfile` and put in the following. Replace `example.com` with your domain, and the IP with the static IP you set up earlier.
```
hello.example.com {
	reverse_proxy http://192.168.1.234:1234
}
```
Now we’re ready to add the Caddy container in Portainer. We select the `caddy:latest` image and add port mappings for 80 & 443. Then, under ‘Volumes’, we add two bind mounts for the folders we created before: `~/docker/caddy/conf` to `/etc/caddy` (where the image looks for its Caddyfile) and `~/docker/caddy/data` to `/data` (where Caddy stores its certificates).
Remember to go to ‘Restart policy’ and select ‘Unless stopped’. Now click ‘Deploy container’, wait a few seconds and browse to your domain.
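If you prefer Compose over Portainer’s UI, the same container can be sketched like this (paths assume the directories created above):

```yaml
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ~/docker/caddy/conf:/etc/caddy  # contains the Caddyfile
      - ~/docker/caddy/data:/data       # certificate storage
```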
Conclusion
There we go! We can access our httpd
container on our website, right on our own subdomain. SSL has been automatically setup by Caddy. We can add more lines to our Caddyfile to add access to more services. What are you going to expose?
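For example, exposing a second service is just another block in the Caddyfile. Here Jellyfin (mentioned earlier) is assumed to run on its default port 8096; the domain and IP are placeholders:

```
hello.example.com {
	reverse_proxy http://192.168.1.234:1234
}

jellyfin.example.com {
	reverse_proxy http://192.168.1.234:8096
}
```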
See you in the next blog, where we’ll set up our VPN and make sure we can easily access our services on our own network.