

7Sea_Sailor

@7Sea_Sailor@lemmy.dbzer0.com


How should I do backups?

I have a server running Debian with 24 TB of storage. I would ideally like to back up all of it, though much of it is torrents, so only the ones with low seeders really need to be backed up. I know about the 3-2-1 rule but it sounds like it would be expensive. What do you do for backups? Also if anyone uses tape drives for backups I am...

7Sea_Sailor ,

Can confirm that there are no ingress or egress fees, since this is not an S3 object storage server but a simple FTP server that also has a borg & restic module. So it simply doesn't fall into the ingress/egress cost model.

7Sea_Sailor ,

Both UnraidFS and mergerFS can merge drives of separate types and sizes into one array. They also allow removing / adding drives without disturbing the array. None of this is possible with traditional RAID (or at least not without a significant time sink for re-making the array), no matter the type of RAID you use.
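
For illustration, a mergerFS pool defined in /etc/fstab can look roughly like this (paths and options are placeholders; check the mergerfs docs for the full option list):

# pool two branches into /mnt/storage; new files land on the branch with the
# most free space (mfs). A glob like /mnt/disk* also works as the source.
/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  allow_other,cache.files=off,category.create=mfs  0  0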

I made wanderer - a self-hosted trail and GPS track database (lemmy.world)

Over the last two months, I developed wanderer. It is a self-hosted alternative to sites like alltrails.com or in other words a self-hosted trail database. It started out more as a small hobby project to teach myself some new technologies but in the end, I decided to develop it into a fully-fledged application....

7Sea_Sailor ,

Because using a containerization system to run multiple services on the same machine is vastly superior to running everything bare metal, both from a security and an ease-of-use standpoint. Why wouldn't you use docker?

7Sea_Sailor , (edited )

If you don't fear using a little bit of terminal, caddy imo is the better choice. It makes SSL even more brainless (since it's 100% automatic), is very easy to configure (especially for reverse proxying) yet very powerful if you need it, has wonderful documentation and an extensive extension library, doesn't require a MySQL database that eats 200 MB of RAM, and doesn't have unnecessary limitations due to UI abstractions. There are many more advantages to caddy over NPM. I have not looked back since I switched.

An example Caddyfile for reverse proxying a hostname to a docker container, with automatic SSL certificates, automatic websockets and all the other typical bells and whistles:

https://yourdomain.com {
  reverse_proxy radarr:7878
}

7Sea_Sailor ,

AdGuard Home supports static clients. Unless the instance is only being used over plain DNS (port 53, unencrypted), it is by far the better approach to use client names in the DNS server addresses and unblock the clients based on that.

For DoT: clientname.dns.yourdomain.com
For DoH: https://dns.yourdomain.com/dns-query/clientname

A client, especially a mobile one, simply cannot guarantee that it will always have the same IP address.

7Sea_Sailor ,

Caddy and Authentik play very nicely together thanks to caddy's forward_auth directive. Regarding ACLs, you'll have to read some documentation, but it shouldn't be difficult to figure out at all. The documentation and forum are great sources of info.
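
A minimal sketch of the pattern, loosely following authentik's documented Caddy integration (hostnames and ports are placeholders for your outpost and app containers):

app.yourdomain.com {
	# ask the authentik outpost to authenticate every request first
	forward_auth authentik:9000 {
		uri /outpost.goauthentik.io/auth/caddy
		copy_headers X-Authentik-Username X-Authentik-Groups X-Authentik-Email
	}
	# only reached if the outpost lets the request through
	reverse_proxy app:8080
}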

7Sea_Sailor , (edited )

There's a Dockerfile that you can use for building. It barely changes the flow of how you set up the container. The bigger issue imo is that it literally is the code they use for their premium service, meaning that all the payment stuff is in there. And I don't know if the apps even have support for connecting to a custom instance.

Edit: their docs state that the apps all support custom instances, making this more intriguing.

7Sea_Sailor ,

The demo instance would be their commercial service, I suppose: https://ente.io/. Since, in their own words, the github code is a 1:1 representation of the code running on their own servers, the result when selfhosting should be identical.

otl , to Selfhosted

Another successful OpenBSD setup

I've been buying these little boxes from AliExpress for years to use as firewalls and routers. My oldest one is almost 9 years old now! OpenBSD installs just fine. Just a BIOS tweak to always boot up after power is restored.

@selfhosted

7Sea_Sailor ,

I've wanted one of these for a while to replace my ISP's modem+router+switch+wifi-AP. But apparently these devices can be funky for getting good wifi going, and I don't feel like adding three new devices (mini PC, switch, AP) to my "we don't talk about it" corner where all the IT is stored. Do you know anything about wifi on these?

7Sea_Sailor ,

Is location the only reason to not use it as the AP? If I had a larger house I'd agree, but as I live in a small apartment, the current router location can easily serve the entire flat, so that is no concern right now.

7Sea_Sailor ,

You can docker compose up -d <service> to (re)create only one service from your compose file.

Self hosted Wetransfer?

Hello, I am looking for a self-hosted application for sharing files, like WeTransfer. I have tried the discontinued Firefox Send, which has nice features like link expiry and works great in general, but lacks authentication (only offers simple password protection). I also want the option to share with registered users. Is...

7Sea_Sailor ,

It supports sharing via public link. But I don't think it supports sharing with registered users via username.

7Sea_Sailor ,

I'll plug another subsonic-compatible server here: gonic. It does not have a web player UI, which saves on RAM. And it is really fast too.

7Sea_Sailor ,

Allow me to cross-post my recent post about my own infrastructure, which has pretty much exactly this established: lemmy.dbzer0.com/post/13552101.

At the homelab (A in your case), I have tailscale running on the host and caddy in docker exposing port 8443 (though the exact port doesn't matter). The external VPS (B in your case) runs docker-less caddy and tailscale (it probably also works with caddy in docker if you run it in network: host mode).

The caddy instance on the VPS takes in all web requests to my domain and reverse_proxies them to the tailscale hostname of my homelab on port 8443. It does so with a wildcard entry (*.mydomain.com) and forwards everything, which also means it handles the wildcard TLS certificate for the domain. The caddy instance on the homelab then checks for specific subdomains or paths and reverse_proxies the requests again to the targeted docker container.
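
As a rough sketch (hostnames and ports are placeholders; the hop between the two caddy instances can stay plain HTTP, since the tailnet already encrypts the traffic):

# On the VPS: catch the wildcard, terminate TLS, forward everything over the tailnet
# (the wildcard certificate via a DNS challenge is configured here too; omitted for brevity)
*.mydomain.com {
	reverse_proxy homelab:8443
}

# On the homelab (caddy in docker, listening on 8443): split off the subdomains
http://radarr.mydomain.com:8443 {
	reverse_proxy radarr:7878
}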

The original source IP is available to your local docker containers by making use of the X-Forwarded-For header, which caddy handles beautifully. Simply add this block at the top of your Caddyfile on server A:

{
        servers {
                trusted_proxies static 192.168.144.1/24 100.111.166.92
        }
}

replacing the first IP with the gateway in the docker network, and the second IP with the "virtual" IP of server A inside the tailnet. Your containers, if they're written properly, should automatically read this value and display the real source IP in their logs.

Let me know if you have any further questions.

7Sea_Sailor OP ,

Hey! I'm also running my homelab on unraid! :D

The reverse proxy basically allows you to open only one port on your machine for generic web traffic, instead of opening (and exposing) a port for each app individually. You then address each app by a certain hostname / domain path, so either something like movies.myhomelab.com or myhomelab.com/movies.
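
In Caddyfile terms, the two addressing styles look roughly like this (placeholder names; jellyfin stands in for whatever container serves the app):

movies.myhomelab.com {
	reverse_proxy jellyfin:8096
}

myhomelab.com {
	# path-based routing: handle_path strips the /movies prefix before proxying
	handle_path /movies/* {
		reverse_proxy jellyfin:8096
	}
}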

The issue is that you'll have to point your domain directly at your home IP. Which then means that whenever you share a link to an app on your homelab, you also indirectly leak your home location (to the degree that IP location allows). Which I simply do not feel comfortable with. The easy solution is running the traffic through Cloudflare (this can be set up in 15 minutes), but they impose traffic restrictions on free plans, so it's out of the question for media or cloud apps.

That's what my proxy VPS is for. Basically cloudflare tunnels, rebuilt: an encrypted, direct tunnel between my homelab and a remote server in a datacenter, meaning I expose no port at home, and visitors connect to that datacenter IP instead of my home one. There is also no one in between my two servers, so I don't give up any privacy. Comes with near zero bandwidth loss in both directions too! And it requires near zero computational power, so it's all running on a machine costing me 3,50 a month.

7Sea_Sailor OP ,

I'm still on the fence if I want to expose Jellyfin publicly or not. On the one hand, I never really want to stream movies or shows from abroad, so there's no real need. And in desperate times I can always connect to Tailscale and watch that way. But on the other, it's really cool to simply have a web accessible Netflix. Idk.

7Sea_Sailor OP ,

If you're referring to the "LabProxy VPS": so that I don't have to point a public domain that I (plan to) use more and more in online spaces at my personal IP address, allowing anyone and everyone to pinpoint my location. Also, I really don't want to mess with the intricacies of DynDNS. This solution is safer and more reliable than DynDNS and open ports on my router, which isn't at all equipped to fend off cyberspace attacks.

If you're referring to the caddy reverse proxy on the LabProxy VPS: I'm pointing domains that I want to funnel into my homelab at the external IP of the proxy VPS. The caddy server on that VPS reads these requests and reverse-proxies them to the caddy port on the homelab, using the hostname of my homelab inside my tailscale network. That's how I make use of the tunnel.
This also allows me to send the crowdsec ban decisions from the homelab to the Proxy VPS, which then denies all incoming requests from that source IP before they ever hit my homelab. Clean and Safe!

7Sea_Sailor OP ,

Your first paragraph hits the nail on the head. From what I've read, bots all over the net will find any openly exposed ports in no time and start attacking them blindly, putting strain on your router and introducing a general risk into your home network.

Regarding bandwidth: 100% of the traffic via the domain name (not the local network) runs through the proxy server. But these datacenters have 1 to 10 gigabit uplinks, so the slowest link in the chain is usually your home internet connection. Which, in my case, is 500 mbit down and 50 mbit up. And that's easily saturated in both directions through the tunnel and VPS. Plus, streaming a 4K BluRay remux usually only requires between 35 and 40 mbit of upload speed, so speed is rarely a worry.

7Sea_Sailor OP ,

Thank you! It's done in excalidraw.com. Not the most straightforward for flowcharts, took me some time to figure out the best way to sort it all. But very powerful once you get into the flow.

If you're feeling funny, you can download the original image from the catbox link and plug it right back into the site like a save file!

7Sea_Sailor OP ,

Gosh, that's cute. Probably how I'll end up too. Right now I'm not ready to let friends use my services. I already have friends and family on adguard and vaultwarden, that's enough responsibility for now.

7Sea_Sailor OP ,

In addition to the other commenter and their great points, here are some more things I like:

  • resource efficient: I'm running all my stuff on low-end servers and can't afford my reverse proxy wasting gigabytes of RAM (looking at you, NPM)
  • very easy syntax: the Caddyfile uses a very simple, easy-to-remember syntax, and the documentation is very precise and quickly tells me what to do to achieve something. I tried traefik and couldn't handle the long, complicated tag names required to set anything up.
  • plugin ecosystem: caddy is written in Go and is very easy to extend. There are tons of plugins for different functionalities that are (mostly) well documented and easy to use. Building a custom caddy executable takes one command (see the example below).
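
For example, a custom build with the Cloudflare DNS plugin via xcaddy (assuming xcaddy is installed; the plugin is just an illustration):

xcaddy build --with github.com/caddy-dns/cloudflare
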
7Sea_Sailor OP ,

Very true! For me, that specific server was a chance to try out ARM-based servers. Also, I initially wanted to spin up something billed by the hour for testing, and then it was working so quickly that I just left it running.

But I'll keep my eye out for some low spec yearly billed servers, and move sooner or later.

7Sea_Sailor OP ,

You're right, that's one of the remaining pain points of the setup. The rclone connections are all established from the homelab, so potential attackers wouldn't have any traces of the other servers. But I'm not 100% sure if I've protected the local backup copy from a full deletion.

The homelab is currently using Kopia to push some of the most important data to OneDrive. From what I've read, it works very similarly to Borg (deduplicated, chunk-based, with compression and encryption), so it would probably also be able to do this task? Or maybe I'll just move all backups to Borg.

Do you happen to have a helpful opinion on Kopia vs Borg?

7Sea_Sailor OP ,

Oh, that! That app proxies docker socket connections over a TCP channel, which provides more granular control over what app gets access to which functionalities of the docker socket. Directly mounting the socket into an app technically grants full root access to the host system in case of a breach, so this is the advised way to do it.
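
A minimal compose sketch of that pattern (shown here with the widely used tecnativa/docker-socket-proxy image as a stand-in; service name and env values are illustrative, the name socketproxy matches what the crowdsec config further down points at):

  socketproxy:
    image: tecnativa/docker-socket-proxy
    container_name: socketproxy
    restart: always
    networks:
      socket-proxy:
    environment:
      CONTAINERS: "1"   # allow read-only container endpoints
      POST: "0"         # deny state-changing requests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro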

7Sea_Sailor OP ,

Of course! Here you go: https://files.catbox.moe/hy713z.png. The image has the raw excalidraw data embedded, so you can import it into the website like a save file and play around with the sorting if need be.

7Sea_Sailor OP ,

The crowdsec agent running on my homelab (8 Cores, 16GB RAM) is currently sitting idle at 96.86MiB RAM and between 0.4 and 1.5% CPU usage. I have a separate crowdsec agent running on the Main VPS, which is a 2 vCPU 4GB RAM machine. There, it's using 1.3% CPU and around 2.5% RAM. All in all, very manageable.

There is definitely a learning curve to it. When I first dove into the docs, I was overwhelmed by all the new terminology, and wrapping my head around it was not super straightforward. Now that I've had some time with it though, it's become more and more clear. I've even written my own simple parsers for apps that aren't on the hub!

What I find especially helpful are features like explain, which lets me pass in logs and simulate which step of the pipeline picks them up and how they are parsed, which is great when trying to diagnose why something is or isn't happening.
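
A hypothetical invocation (log path and type are placeholders; the type must match an installed parser/collection):

cscli explain --file /var/log/caddy/access.log --type caddy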

The crowdsec agent running on my homelab is running from the docker container, and uses pretty much exactly the stock configuration. This is how the docker container is launched:

  crowdsec:
    image: crowdsecurity/crowdsec
    container_name: crowdsec
    restart: always
    networks:
      socket-proxy:
    ports:
      - "8080:8080"
    environment:
      DOCKER_HOST: tcp://socketproxy:2375
      COLLECTIONS: "schiz0phr3ne/radarr schiz0phr3ne/sonarr"
      BOUNCER_KEY_caddy: as8d0h109das9d0
      USE_WAL: "true"
    volumes:
      - /mnt/user/appdata/crowdsec/db:/var/lib/crowdsec/data
      - /mnt/user/appdata/crowdsec/acquis:/etc/crowdsec/acquis.d
      - /mnt/user/appdata/crowdsec/config:/etc/crowdsec

Then there's the Caddyfile on the LabProxy, which is where I handle banned IPs so that their traffic doesn't even hit my homelab. This is the file:

{
	crowdsec {
		api_url http://homelab:8080
		api_key as8d0h109das9d0
		ticker_interval 10s
	}
}

*.mydomain.com {
	tls {
		dns cloudflare skPTIe-qA_9H2_QnpFYaashud0as8d012qdißRwCq
	}
	encode gzip
	route {
		crowdsec
		reverse_proxy homelab:8443
	}
}

Keep in mind that the two machines are connected via tailscale, which is why I can reference the crowdsec agent by its tailnet hostname. If the two machines weren't connected like that, you'd need to expose the REST API of the agent over the web.

I hope this helps clear up some of your confusion! Let me know if you need any further help with understanding it. It only gets easier the more you interact with it!

don't worry, all credentials in the two files are randomized, never the actual tokens

7Sea_Sailor OP ,

Absolutely! To be honest, I don't even want to have countless machines under my umbrella, and I constantly have consolidation in mind - but right now, each machine fulfills a separate purpose and feels justified in itself (the homelab for large data, the main VPS for anything that's operation-critical and can't afford power/network outages, and so on). So unless I find another purpose that none of the current machines can serve, I'll probably scale vertically instead of horizontally (is that even how you use that expression?)

7Sea_Sailor OP ,

Nope, don't have that yet. But since all my compose and config files are neatly organized on the file system, by domain and then by service, I tar up that entire docker dir once a week and pull it to the homelab, just in case.
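
Roughly something like this (paths are placeholders):

# on the VPS: bundle up all compose/config dirs
tar czf /tmp/docker-configs-$(date +%F).tar.gz /opt/docker
# pulled from the homelab side afterwards, e.g. over ssh
scp vps:/tmp/docker-configs-*.tar.gz /mnt/user/backups/vps/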

How have you setup your provisioning script? Any special services or just some clever batch scripting?

7Sea_Sailor OP ,

It's basically a VPS that comes with torrenting software preinstalled. Depending on the hoster and package, you'll be able to install all kinds of webapps on the server. Some even enable Plex/Jellyfin on the more expensive plans.

7Sea_Sailor OP ,

You make a good point. But I still find that directly exposing a port on my home network feels more dangerous than doing so on a remote server. I want to prevent attackers from sidestepping the proxy and directly accessing the server itself, which feels more likely to allow circumventing the isolation provided by docker in case of a breach.

Judging from a couple articles I read online, if i wanted to publicly expose a port on my home network, I should also isolate the public server from the rest of the local LAN with a VLAN. For which I'd need to first replace my router, and learn a whole lot more about networking. Doing it this way, which is basically a homemade cloudflare tunnel, lets me rest easier at night.

7Sea_Sailor OP ,

May I present to you: Caddy but for docker and with labels so kind of like traefik but the labels are shorter 👏 https://github.com/lucaslorentz/caddy-docker-proxy

Jokes aside, I did actually use this for a while and it worked great. The concept of having my reverse proxy config in the same place as my docker container config is intriguing. But managing labels is horrible on unraid, so I moved to classic caddy instead.
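
For reference, the label syntax follows the caddy-docker-proxy README and looks roughly like this (domain and upstream port are placeholders):

  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.mydomain.com
      caddy.reverse_proxy: "{{upstreams 80}}"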

7Sea_Sailor OP ,

Maybe. But I've read some crazy stories on the web. Some nutcases go very far to ruin an online stranger's day. I want to be able to share links to my infrastructure (think photos or download links) without having to worry that the underlying IP will be abused by someone who doesn't like me for whatever reason. Maybe that's just me, but it makes me sleep more soundly at night.

7Sea_Sailor OP ,

The rclone mount works via SSH credentials. Torrent files and tracker searches run over simple HTTPS, since both my torrent client and jackett expose public APIs for these purposes, so I can just enter the web address of these endpoints into the apps running on my homelab.

Sidenote, since you said sshfs mount: I tried sshfs, but it had significantly lower copy speeds than the rclone mount. Might have been a misconfiguration, but it was more time-efficient to use rclone than to debug my sshfs connection speed.
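
For reference, such a mount looks roughly like this (assuming an rclone remote named seedbox configured with the SFTP backend; paths and flags are illustrative):

rclone mount seedbox:downloads /mnt/seedbox \
  --vfs-cache-mode writes \
  --daemon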

7Sea_Sailor OP ,

Glad to have gotten you back into the grind!

My homelab runs on an N100 board I ordered on Aliexpress for ~150€, plus some 16GB Corsair DDR5 SODIMM RAM.
The Main VPS is a 2 vCPU 4GB RAM machine, and the LabProxy is a 4 vCPU 4GB RAM ARM machine.

7Sea_Sailor OP ,

Pretty sure ruTorrent is a typical download client. The real reason is that it came preinstalled and I never had a reason to change it ¯\_(ツ)_/¯

7Sea_Sailor OP ,

I'd love to have everything centralized at home, but my net connection tends to fail a lot and I don't want critical services (AdGuard, Vaultwarden and a bunch of others that aren't listed) to be running off of flaky internet, so those will remain in a datacenter. Other stuff might move around, or maybe not. Only time will tell, I'm still at the beginning of my journey after all!

7Sea_Sailor OP ,

That's a tough one. I've pieced this all together from countless guides for each app itself, combined with tons of reddit reading.

There are some sources that I can list though:

7Sea_Sailor OP ,

I use Hetzner, mainly because of their good uptime, dependable service, and being geographically close to me. It's a "safe bet", if you will. Monthly cost, if we're not counting power usage by the homelab, is about 15 bucks for all three servers.

7Sea_Sailor OP ,

I heard about tailscale first, and haven't yet had enough trouble to attempt a switch.

7Sea_Sailor OP ,

Are you talking about the Tailscale app or the ZeroTier app? Because the TS Android app is the one thing I'm somewhat unhappy about, since it does not play nice with the private DNS setting.

7Sea_Sailor OP ,

Hm, I have yet to mess around with matrix. As with anything fediverse, the increased complexity is a little overwhelming for me, and since I am not pulled to matrix by any communities I'm a part of, I haven't yet been forced to make any decisions. I mainly hang out on discord, if that's something you use.

Docker Container Status Displays on Public Website

I have a home server with tech illiterate users (Tailscale/VPN won’t be a solution for them), and I’ve been setting up a little blog to keep them updated about content and status. I had an idea of setting up a server status page that displayed the running state of various docker containers so they could easily see if...

7Sea_Sailor ,

I haven't researched this, but my gut tells me one should be able to connect the two servers via Wireguard (direct tunnel, tailscale, zerotier, what have you) and privately access the docker API without making it publicly available.
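
A hypothetical sketch of what that could look like, assuming a socket proxy exposing the Docker Engine API on port 2375 and a tailnet/wireguard hostname homeserver (neither reachable from the public internet):

# from the public web server: list running containers over the tunnel
curl http://homeserver:2375/containers/json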

7Sea_Sailor ,

I haven't looked deeply into it, but I know that Tailscale has SSO. Maybe this also applies when self-hosting the control server with Headscale?

7Sea_Sailor ,

Or take github out of the equation and directly use cloudflare pages. It has its own pros and cons, but for a simple static blog it'll be more than enough, and takes out the CNAME hassle.
