I use restic on Linux, but duplicati seems like the new hotness and it's cross platform
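If you go the restic route, a minimal sketch looks like this (the repo path, backup target, and retention numbers are just placeholders, adjust for your setup):
#!/bin/bash
# one-time: create the repository (local path here; restic also supports sftp, s3, rest-server backends)
restic -r /mnt/backup/restic-repo init
# back up the directory holding your docker scripts and data dirs; run this from cron or a systemd timer
restic -r /mnt/backup/restic-repo backup /opt/docker
# thin out old snapshots
restic -r /mnt/backup/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune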
I've sysadmined wordpress for about 7 years professionally, so this stuff is as easy as making cereal to me. But there are a few steps. On a high level:
- Subdomain must point to your public IP
- Your public IP probably changes sometimes so you should have a way to automate updating the IP for your home server via the DNS provider's API
- Your router must be forwarding port 80 and 443 to your server
- Your server needs a web server software that can take the request and map it to the right virtual host for your site (and you need to make said virtual host)
- The wordpress install needs its site URL switched to the new domain, either with wp-cli or a wp-config.php override, so wordpress knows what domain it's running as (see the sketch after this list)
- You need to secure the domain with letsencrypt, certbot will do this for you
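To make a few of those steps concrete, here's a rough sketch of the dynamic DNS update, the wp-cli domain change, and the certbot run. I'm assuming Cloudflare as the DNS provider and blog.MYDOMAIN.COM as the subdomain; ZONE_ID, RECORD_ID, CF_API_TOKEN, and the old/new URLs are all placeholders for your own values:
#!/bin/bash
# push your current public IP to the A record (Cloudflare's API shown, other providers have similar endpoints)
curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"blog.MYDOMAIN.COM\",\"content\":\"$(curl -s https://ifconfig.me)\"}"
# point wordpress at its new domain with wp-cli (run from the wordpress directory)
wp option update home 'https://blog.MYDOMAIN.COM'
wp option update siteurl 'https://blog.MYDOMAIN.COM'
wp search-replace 'http://olddomain.example' 'https://blog.MYDOMAIN.COM'
# get the cert and wire it into the web server (use --apache if that's what you run)
certbot --nginx -d blog.MYDOMAIN.COM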
These are the steps for a traditional web server, but since we usually use docker around these parts, instead of the normal web server software (apache or nginx) the way to go in docker is the "letsencrypt nginx proxy companion", which will route incoming connections to the docker container running wordpress and handle letsencrypt for you.
There are also a few other ways one might commonly set this all up, and what steps you are missing depends on the way you are hosting wordpress right now.
If you fill in some of the missing information on what you do or don't have from the steps above, I'll let you know what's next. Or you can send me a PM on reddit and I'll help you out!
snap install docker
"I heard you liek containers, so I put your containers in a container"
Your whole life becomes much simpler when you use docker.
Elevator pitch: Docker containers are preconfigured services which run isolated from the rest of your system and only expose the individual directories you map into the container. These directories are the persistence part of the application and survive a restart of the container or the host system. Just back up your scripts and the data directories and you have backed up your entire server.
I have a few scripts as examples. 'cd "$(dirname "$0")"' changes to the directory the script is stored in, so the data directories get created and mapped relative to wherever the script lives.
The letsencrypt nginx proxy companion sets up a single listener for web and SSL traffic, creates the virtual hosts automatically, and obtains and renews the certificates, all without manual intervention.
First, you need the nginx proxy and its letsencrypt companion:
#!/bin/bash
cd "$(dirname "$0")"
docker run --detach \
--restart always \
--name nginx-proxy \
--publish 80:80 \
--publish 443:443 \
--volume $(pwd)/certs:/etc/nginx/certs \
--volume $(pwd)/vhost:/etc/nginx/vhost.d \
--volume $(pwd)/conf:/etc/nginx/conf.d \
--volume $(pwd)/html:/usr/share/nginx/html \
--volume /var/run/docker.sock:/tmp/docker.sock:ro \
--volume $(pwd)/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro \
--volume $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
--volume $(pwd)/acme:/etc/acme.sh \
jwilder/nginx-proxy
docker run --detach \
--restart always \
--name nginx-proxy-letsencrypt \
--volumes-from nginx-proxy \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
--env "DEFAULT_EMAIL=YOUR_EMAIL_ADDRESS_GOES_HERE@MYDOMAIN.COM" \
jrcs/letsencrypt-nginx-proxy-companion
Then each service gets started with a docker command as well, just with a few extra environment variables. Here is one for nextcloud:
docker run -d \
--name nextcloud \
--hostname cloud.MYDOMAIN.COM \
-v $(pwd)/data:/var/www/html \
-v $(pwd)/php.ini:/usr/local/etc/php/conf.d/zzz-custom.ini \
--env "VIRTUAL_HOST=cloud.MYDOMAIN.COM" \
--env "LETSENCRYPT_HOST=cloud.MYDOMAIN.COM" \
--env "VIRTUAL_PROTO=http" \
--env "VIRTUAL_PORT=80" \
--env "OVERWRITEHOST=cloud.MYDOMAIN.COM" \
--env "OVERWRITEPORT=443" \
--env "OVERWRITEPROTOCOL=https" \
--restart unless-stopped \
nextcloud:25.0.0
And Plex (/dev/dri is quicksync for hardware transcode):
docker run \
--device /dev/dri:/dev/dri \
--restart always \
-d \
--name plex \
--network host \
-e TZ="America/Chicago" \
-e PLEX_CLAIM="claim-somerandomcharactershere" \
-v $(pwd)/config:/config \
-v /my/media/directory/on/host/system:/media \
plexinc/pms-docker
Obsidian:
docker run --rm -d \
--name obsidian \
-v $(pwd)/vaults:/vaults \
-v $(pwd)/config:/config \
--env "VIRTUAL_HOST=obsidian.MYDOMAIN.COM" \
--env "LETSENCRYPT_HOST=obsidian.MYDOMAIN.COM" \
--env "VIRTUAL_PROTO=http" \
--env "VIRTUAL_PORT=8080" \
ghcr.io/sytone/obsidian-remote:latest
I have found transcoding to work noticeably better when using quicksync (the intel chip native encoder) rather than a GPU.
At this point, I think the only real reason you would want a GPU is for LLMs.
One thing nobody has mentioned here: I run all my services as docker containers. It makes them very easy to back up, and very easy to segregate. If a service gets compromised, in theory it's isolated to what it can access inside the docker container and can't compromise the host. And if you delete and rebuild the container, any damage done in the container dies with it.
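Backing one up is then just archiving the start script and the mapped data directories. A minimal sketch, assuming the service lives in /opt/docker/nextcloud with a start script called run.sh (both names are placeholders for whatever you use):
#!/bin/bash
cd /opt/docker/nextcloud
# stop the container so files aren't changing mid-archive
docker stop nextcloud
# the start script plus the mapped data directory is the whole service
tar czf /mnt/backup/nextcloud-$(date +%F).tar.gz run.sh data
docker start nextcloud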
Running home assistant with docker is as simple as this command:
sudo docker run -d \
--name homeassistant \
--restart=unless-stopped \
-e TZ=America/Chicago \
-v $(pwd)/homeassistant:/config \
--network=host \
homeassistant/home-assistant
There are, of course, more details to learn and the devil is in the details, but thankfully anything you want to know about setting up your network in this regard you can just ask chatgpt.
I have an automation that is triggered by a door open/close sensor attached to the flushing arm of my toilet with a custom made 3d printed mount. It triggers a script on the server that connects to the chromecast speaker in the bathroom and plays the final fantasy 7 battle victory theme whenever someone flushes the toilet. It is perhaps my favorite part of my home.