
thirdBreakfast

@thirdBreakfast@lemmy.world


thirdBreakfast ,

Yep, shoutout to the contributors; they are certainly not dragging their feet on all these bugfixes.

thirdBreakfast ,

Ah. I felt like an x.x.3 version was long enough to wait for things to shake out, and had decided to update to 10.9.x, but I might leave it for a little bit.

‘My whole library is wiped out’: what it means to own movies and TV in the age of streaming services (www.theguardian.com)

What rights do you have to the digital movies, TV shows and music you buy online? That question was on the minds of Telstra TV Box Office customers this month after the company announced it would shut down the service in June. Customers were told that unless they moved over to another service, Fetch, they would no longer be...

thirdBreakfast ,

I run two local physical servers, one production and one dev (and a third prod2 kept in case of a prod1 failure), and two remote production/backup servers all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.

Backups run automatically, and I manually run apt updates on everything each weekend with a single ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.
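For a sense of scale, that whole weekly-update playbook can be as small as an ad-hoc apt run. A minimal sketch, assuming an inventory file that lists every host (the file name here is hypothetical):

```sh
# Ad-hoc equivalent of the weekly playbook: refresh the package cache and
# dist-upgrade every host in the inventory, with privilege escalation.
ansible all -i inventory.ini --become \
  -m ansible.builtin.apt -a "update_cache=yes upgrade=dist"
```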

So -

  • Weekly: 10 minutes to run the update playbook, and I usually ssh into the VPSs, have a look at the Fail2Ban stats and reboot them if needed. I also look at each of the Proxmox GUIs to check the backups have been working as expected.
  • Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
  • From time to time (if I hear of a security update), but generally every three months: look through my container versions and see if I want to update them. They're on docker compose, so the steps are just: back up the LXC, docker down, pull, up (sketched after this list) - probs 5 minutes per container.
  • Yearly: consider whether I need to do operating system upgrades - e.g. to Proxmox 8, or a new Debian or Ubuntu LTS
  • Yearly: visit the remotes and have a proper check/clean up/updates
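That per-container update dance looks roughly like this (the container ID 101 and the compose path are placeholders):

```sh
# On the Proxmox host: take a backup of the LXC first (vzdump is Proxmox's
# built-in backup tool; 101 is a placeholder container ID).
vzdump 101 --mode snapshot

# Then inside the LXC, from the directory holding docker-compose.yml:
docker compose down
docker compose pull
docker compose up -d
```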

Network loss after 24hrs on Docker LXC

Fine folks of c/selfhosted, I've got a Docker LXC (Debian) running in Proxmox that loses its local network connection 24 hours after boot. It's remedied with a LXC restart. I am still able to access the console through Proxmox when this happens, but all running services (docker ps still says they're running) are inaccessible on...

thirdBreakfast ,

No answer, but just to say I run most of my services with this setup - Docker in a Debian LXC under Proxmox, and don't have this issue. The containers are 'privileged', and I have 'nesting' ticked on, but apart from that all defaults.
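For comparing notes: the 'nesting' toggle corresponds to something like this on the Proxmox host (101 is a placeholder container ID):

```sh
# Turn on nesting for an existing LXC - needed for Docker to run inside it.
pct set 101 --features nesting=1

# Double-check the container config picked it up.
pct config 101 | grep features
```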

thirdBreakfast ,

My 'good reason' is just that it's super convenient - for backups and painlessly moving apps around between nodes with all their data.

I would run plain LXCs if people nicely packaged up their web apps as LXC templates and made them available on LXCHub for me to run with lxc compose up, but they generally don't.

I guess another alternate future would be if Proxmox added docker container supervision to their web interface, but you're still not going to have the self-contained neat snapshot system that includes the data.

In theory you should be able to convert an OCI container, layer by layer, into an LXC, so I bet there are projects out there that attempt this.

https://lemmy.world/pictrs/image/68d09ae5-4a06-455b-9acb-249b8015b607.jpeg

thirdBreakfast ,

There are a heap of general "Linux Administration" courses which will patch a lot of holes in the knowledge of almost all self-taught self-hosters. I'd been using Linux for a while but didn't know you could tab-complete file names in commands till I learned it on Udemy ¯\_(ツ)_/¯

Basic docker networking?

Hi guys! I'm going at my first docker attempt...and I'm going in Proxmox. I created an LXC container, from which I installed docker, and portainer. Portainer seems happy to work, and shows its admin page on port 9443 correctly. I tried next running the image of immich, following the steps detailed in their own guide....

thirdBreakfast ,

I routinely run my homelab services as a single Docker container inside an LXC - they're quicker than full VMs, and it makes backups and moving them around trivial. However, while you're learning, a VM (with something conventional like Debian or Ubuntu) is probably advised - it's a more common experience, so you'll get more helpful advice when you ask a question like this.

thirdBreakfast ,

"how to access the NAS and HA separately from the outside, knowing that my access provider does not offer a static IP and that access to each VM must be differentiated from Proxmox."

Tailscale - it will take about five minutes to set up and costs nothing.
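Those five minutes look roughly like this on each machine you want reachable (the install script is Tailscale's official one):

```sh
# Install and bring up Tailscale (Debian/Ubuntu and friends).
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# With MagicDNS turned on in the admin console, the NAS and the HA VM each get
# a stable name on your tailnet - no static IP or port forwarding required.
```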

thirdBreakfast ,

I'm also on Silverbullet, and from OP's description it sounds like it could be a good fit. I don't use any of the fancy template stuff - just a bunch of md files in a directory with links between them.

thirdBreakfast ,

Your workload (a NAS and a handful of services) is going to be a very familiar one to members of the community, so you should get some great answers.

My (I guess slightly wacky) solution for this sort of workload has ended up being a single Docker container inside an LXC container for each service on Proxmox. Docker for ease of management with compose and separate LXCs for each service for ease of snapshots/backups.

Obviously there's some overhead, but it doesn't seem to be significant.

On the subject of clustering, I actually purchased three machines to do this, but have ended up abandoning that idea - I can move a service (or restore it from a snapshot to a different machine) in a couple of minutes which provides all the redundancy I need for a home service. Now I keep the three machines as a production server, a backup (that I swap over to for a week or so every month or two) and a development machine. The NAS is separate to these.

I love Proxmox, but most times it gets mentioned here, people pop up to boost Incus/LXD, so that's something I'd like to investigate - but my skills (and Ansible playbooks) are currently built around Proxmox, so I've got a bit of inertia.

Good mini PC for around 100€

My current setup consists of a Raspberry Pi 4 with 4gb RAM and a 1tb external SSD. I'm thinking of getting a used mini PC for around 100€ to replace that tho because it would give me a lot more power and especially RAM (I currently need to use an 8gb swap file). My plan so far is to get a used mini PC that's quiet, has a...

thirdBreakfast ,

Is that a mini? I love those little 1L HPs. I run three G2 800s - they're very nicely built and therefore a joy to work on, and they sip power when idling. Highly recommend. Also +1 for Proxmox.

thirdBreakfast ,

For light-touch monitoring this is my approach too. I have one instance on my network, and another on fly.io for the VPSs (my most common outage is my home internet). To make it a tiny bit stronger, I wrote a Go endpoint that exposes a server's disk and memory usage, including mem_okay and disk_okay keywords, and I have Kuma checking those.

I even have the two Kuma instances checking each other by making a status page and adding checks for each other's 'degraded' state. I have ntfy set up on both so I get the Kuma change notifications on my iPhone. I love ntfy so much I donate to it.
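To give a flavour of the keyword trick: the endpoint's JSON shape, path and port below are just my own conventions, but Kuma's keyword monitor is effectively doing the grep:

```sh
# Hypothetical health endpoint (path, port and field names are my own convention):
curl -s http://myhost:8090/status
# {"mem_pct": 41, "mem_okay": true, "disk_pct": 63, "disk_okay": true}

# Uptime Kuma's keyword check amounts to:
curl -s http://myhost:8090/status | grep -q '"mem_okay": true' && echo UP || echo DOWN
```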

For my VPSs, this is probably not enough, so I'm considering the more complicated solutions (I've started wanting to know about things like an influx of fail2ban bans, etc.).

thirdBreakfast , (edited )
- fiction
    - Abbott, Edwin A_
        - Flatland
            - Flatland - Edwin A. Abbott.epub
            - Flatland - Edwin A. Abbott.jpg
            - Flatland - Edwin A. Abbott.opf
    - Achebe, Chinua
        - Things Fall Apart
            - Things Fall Apart - Chinua Achebe.epub
            - Things Fall Apart - Chinua Achebe.jpg
            - Things Fall Apart - Chinua Achebe.opf

So in each directory that I use to delineate a library, I have a subdirectory for each author (in sort-order form). Within each author subdirectory is a subdirectory for each book, with just the title, and inside that the book files, named [book name] - [author].[extension] (edit: the anti-injection code mangled my first attempt at writing out the file name).

I didn't invent this, it's just what Calibre spits out. When I buy a new book, I ingest it into Calibre, fix any metadata and export it to the NAS. Then I delete the Calibre library - I'm just using it to do the neatening up work.
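If you ever want to script that tidy-up instead of clicking through the GUI, calibredb can drive the same ingest/export from the shell. A rough sketch, to the best of my knowledge of the flags (check calibredb export --help on your version):

```sh
# Add a new purchase to a scratch Calibre library, then export Calibre's
# neatened Author/Title/"Title - Author.ext" structure straight to the NAS.
calibredb add "new-book.epub"
calibredb export --all --to-dir /mnt/nas/books/fiction
```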

thirdBreakfast ,

If this is a question about how to access your server at home from devices anywhere, securely, with a simple setup, then the answer is turn off all that port forwarding, and use Tailscale.

thirdBreakfast ,

With a somewhat similar usecase, I ended up using Kavita.

thirdBreakfast ,

Yo dawg, I put most of my services in a Docker container inside their own LXC container. It used to bug me that this seems like a less-than-optimal use of resources, but I love the management - all the VMs and containers in one pane of glass, super simple snapshots, dead easy to move a service between machines, and simple to instrument the LXC for monitoring.

I see other people using, and I'm interested in, an even more generic system (maybe Cockpit or something), but I've been really happy with this. If OP's dream is managing all the containers and VMs together, I'd back having a look at Proxmox.

thirdBreakfast ,

This is where I landed on this decision. I run a Synology which just does NAS on spinning rust, and I don't mess with it. Since you know rsync, this will all be a painless setup apart from the upfront cost (one-liner sketched below). I'd trust any 2-bay Synology less than 10 years old (I think the last two digits of the model number are the year); then, if your budget is tight, grab a couple of second-hand disks from different batches (or three if your budget stretches to it).
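The rsync side really is a single cron-able line (host and paths are placeholders):

```sh
# Mirror a directory to the Synology over SSH; -a preserves attributes,
# --delete removes files on the NAS that no longer exist locally.
rsync -avh --delete /srv/important/ backup@synology:/volume1/backup/important/
```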

I also endorse u/originalucifer's comment about a real machine. Thin clients like the HP minis or Lenovos are a great step up.

thirdBreakfast ,

It has a practical element (Hello Jellyfin, Kavita, AudioBookshelf & Syncthing), but for the rest of it, it's about 60% hobby and 20% learning stuff that could potentially be career-enhancing.

GNU/Linux absolutely annihilating the server operating system market means that I can run the same stack, and use the same tools, that giant companies are based on. All for free. In my spare room. 1L x86 computers cost less than two packs of cigarettes! Little SSDs are ridiculously cheap. And you don't even need that stuff - that old laptop in your cupboard will do. Even if you kick in to donate for your software (and I recommend you do if you can), it's a cheap hobby compared to golf or skating or whatever. Anything you need to learn, there are blog posts and videos available.

We live in an amazing time in this hobby. I know there are companies that would like to take it away from us, but Open Source just keeps kicking goals. Thank you, FOSS developers, GNU, Linus, FSM, Cthulhu and the other forces in the universe that make this possible.

Certbot is great. Let's Encrypt is great. (lemmy.world)

I've been downloading SSL certificates from my domain provider, using cat to join them together to make the fullchain.pem, uploading them to the server, and adding a 90-day calendar reminder myself. Every time I did this I'd think I should find out about this Certbot thing....

thirdBreakfast OP ,

Good on you. For anyone else inspired, you can support Certbot here, and Let's Encrypt here.

I promise I don't work for them - I was just struck by how phenomenally handy they are.
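And for anyone making the same switch, the before-and-after boils down to this (domain and file names are placeholders):

```sh
# The old manual routine: stitch the chain together, upload it, set a 90-day reminder.
cat certificate.crt intermediate.crt > fullchain.pem

# The certbot routine: obtain once, then let the packaged timer/cron job renew.
sudo certbot --nginx -d example.com
sudo certbot renew --dry-run   # confirm auto-renewal will work
```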
