
vegetaaaaaaa ,

I use netdata (the FOSS agent only, not the cloud offering) on all my servers (physical, VMs...) and stream all metrics to a parent netdata instance. It works extremely well for me.
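
For reference, the streaming setup is only a few lines of stream.conf on each side (the hostname and API key below are placeholders):

# /etc/netdata/stream.conf on each child
[stream]
    enabled = yes
    destination = parent.example.org:19999
    api key = 00000000-0000-0000-0000-000000000000

# /etc/netdata/stream.conf on the parent: accept children using this key
[00000000-0000-0000-0000-000000000000]
    enabled = yes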

Other solutions are too cumbersome and heavy on maintenance for me. You can query netdata from prometheus/grafana [1] if you really need custom dashboards.

I guess you wouldn't be able to install it on the router/switch, but there is an SNMP collector which should be able to pull bandwidth info from the network appliances.

vegetaaaaaaa ,

Windows Servers

No

setup automatic responses to the alerts

It should be possible using script to execute on alarm = /your/custom/remediation-script (see https://learn.netdata.cloud/docs/alerts-&-notifications/notifications/agent-dispatched-notifications/agent-notifications-reference). I have not experimented with this yet, but soon will (I'm implementing a custom notification channel for specific alarms).
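
A minimal sketch of what I expect this to look like (untested; the path is a placeholder, and I assume the custom script receives the alarm details as arguments like the stock alarm-notify.sh does):

# /etc/netdata/netdata.conf
[health]
    script to execute on alarm = /usr/local/bin/remediation-script

# /usr/local/bin/remediation-script (hypothetical)
#!/bin/sh
# inspect the arguments passed by the agent and act on specific alarms only
logger "netdata alarm hook called with: $*"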

restarting a service if it isn’t answering requests

I'd rather find the root cause of the downtime/malfunction instead of blindly restarting the service, just my 2 cents.

vegetaaaaaaa ,

I agree that desktop/ATX tower PCs are the most useful form factor: you can stuff all your old junk hardware in there and give it a second life without much investment.

However, with current electricity prices, buying more power-efficient hardware can be a better medium-term investment. 1 kWh costs 0.2516€ where I am (roughly the EU average price); assuming an average power consumption of 50W, this gives (50×24×365)/1000×0.2516 ≈ 110€/year. At this rate, a 200€ investment in hardware pays for itself in 2-3 years.

Buying a <100€ setup is not worth it for general-purpose servers in my opinion: it will either be underpowered or power-hungry.

My current solution is to run all my services in KVM (libvirt) VMs on my beefy desktop computer, which is already on most of the time anyway. Best of both worlds.

If I had to redo everything I would probably buy a NUC/mini-PC with a good CPU, 64GB RAM and low power consumption, stash a single huge SSD in there, migrate my VMs there and call it a day. But this is not a cheap setup.

How much does it matter what type of hard disk I buy for my server?

Hello, I'm relatively new to self-hosting and recently started using Unraid, which I find fantastic! I'm now considering upgrading my storage capacity by purchasing either an 8TB or 10TB hard drive. I'm exploring both new and used options to find the best deal. However, I've noticed that prices vary based on the specific...

vegetaaaaaaa ,

10000RPM SAS drives are noisy (and expensive), something to keep in mind. If I needed this kind of performance I would probably go full SSD.

Mirror all data on NAS A to NAS B

I'm duplicating my server hardware and moving the second set off-site. I want to keep the data live since the whole system will be load-balanced with my on-site system. I've contemplated tools like Syncthing to make a 1-to-1 copy of the data to NAS B, but I know there has to be a better way. What have you used successfully?

vegetaaaaaaa ,
  • rsync + basic scripting for periodic sync (see the sketch below), or
  • distributed/replicated filesystems for real-time sync (I would start with Ceph)
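
A minimal rsync sketch (paths and hostname are hypothetical), to be run from cron:

#!/bin/sh
# one-way periodic sync from NAS A to NAS B
rsync -aH --delete --partial /srv/data/ nas-b.example.org:/srv/data/
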
vegetaaaaaaa ,

Netdata can also expose metrics to prometheus which you can then use in Grafana for more advanced/customizable dashboards https://learn.netdata.cloud/docs/exporting-metrics/prometheus
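
The scrape configuration on the prometheus side looks like this (the hostname is a placeholder):

# prometheus.yml
scrape_configs:
  - job_name: 'netdata'
    metrics_path: '/api/v1/allmetrics'
    params:
      format: [prometheus]
    static_configs:
      - targets: ['netdata.example.org:19999']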

HDD spins but OS doesn't see mountable disk

The primary OS for this disk was Unraid. It's formatted as BTRFS. I don't think either of those matters. The disk spins and worked before the reboot, but now, no matter what machine, port or cable I use, it's not mountable. Is there anything I can try? I was going to attempt SpinRite on it, however it doesn't see anything either....

vegetaaaaaaa ,

lsblk also shows block devices, and is prettier than looking directly at /sys/class/block.
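
For example:

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT,MODEL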

vegetaaaaaaa ,

I just don’t have that much time to spend on initial implementation and upkeep

Well k8s is a poor choice of platform for you :D

vegetaaaaaaa ,

Don't mind him. He's always there ranting about who knows what whenever software he dislikes is mentioned. Look up his comment history for more of the same.

Easiest method to summon him is to mention Nextcloud and Proxmox in the same sentence.

vegetaaaaaaa ,

Not an answer but still relevant: I actively avoid enabling unattended-upgrades for third-party repositories like Docker (or anything that is not an official Debian repository) because they don't have the same stability guarantees, and rely on other upgrade notification methods instead.
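
For what it's worth, restricting unattended-upgrades to official Debian origins looks roughly like this (a sketch based on the stock Debian configuration):

// /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};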

How bad of an idea is it to run a DNS server in Docker and use it for the host and other containers?

Personally I would simply install dnsmasq directly on the host because it is one apt install and a configuration file away. Keep it simple.
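
Something like this (the upstream resolver is just an example):

apt install dnsmasq

# /etc/dnsmasq.conf
domain-needed
bogus-priv
server=9.9.9.9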

vegetaaaaaaa ,

Usually you would have a second DNS resolver configured in /etc/resolv.conf (or whatever name resolution config system you are using: resolvconf, systemd-networkd, etc.). The system will fall back to this resolver if the first resolver fails to respond (whether an NXDOMAIN reply also triggers fallback, and the exact order and conditions, may vary depending on which system you use). This can be another dnsmasq instance, a public DNS resolver, your ISP's resolver, etc. This allows at least basic DNS resolution to keep working until your dnsmasq instance comes back up.
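
For example (the public resolver is a placeholder, pick your own):

# /etc/resolv.conf
nameserver 127.0.0.1
nameserver 9.9.9.9
options timeout:2 attempts:2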

I would also add automatic monitoring for dnsmasq (either check that the service/container is running, check the TCP connection to port 53, or check that DNS resolution is working for a known domain).
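
A crude check could be as simple as this (hypothetical domain; plug it into your alerting instead of mail if you have something better):

dig +short +time=2 @127.0.0.1 example.org >/dev/null || echo "dnsmasq is not resolving" | mail -s "DNS alert" root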

vegetaaaaaaa ,

msmtp never failed me

vegetaaaaaaa , (edited )

You can definitely replace senders with correct mail addresses for relaying through SMTP servers that expect them (this is what I do):

# /etc/msmtprc
account default
...
host smtp.gmail.com
auto_from on
auth on
user myaddress
password hunter2

# Replace local recipients with addresses in the aliases file
aliases /etc/aliases
# /etc/aliases
mailer-daemon: postmaster
postmaster: root
nobody: root
hostmaster: root
usenet: root
news: root
webmaster: root
www: root
ftp: root
abuse: root
noc: root
security: root
root: default
www-data: root
default: myaddress@gmail.com

(the only thing I changed from the defaults in the aliases file is adding the last line)

This makes it so all/most system accounts likely to send mail are aliased to root, and root in turn is aliased to my email address (the one configured under host/user/password in msmtprc).

Edit: I think it's actually the auto_from option which interests you. Check the msmtp manpage
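
A quick way to test the relay and aliasing from the command line:

printf 'Subject: msmtp test\n\nIt works.\n' | msmtp root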

vegetaaaaaaa ,

https://github.com/chriswayg/ansible-msmtp-mailer/issues/14
While msmtp has features to alter the envelope sender and recipient, it doesn't alter the "To:" or "From:" headers of the message itself.
When the Envelope doesn't match these details, it can be considered spam

Oh I didn't know that, good to know!

The proposed one-line wrapper looks like a nice solution

What will be my next server operating system (Fedora Server, Fedora CoreOS, NixOS)? Your experience and opinion

I want to reset my server soon and I'm toying with the idea of using a different operating system. I am currently using Ubuntu Server LTS. However, I have been considering Fedora Server (I use Fedora on my laptop and have had good experiences with it) or even Fedora CoreOS. I also recently installed NixOS on my...

How should I do backups?

I have a server running Debian with 24 TB of storage. I would ideally like to back up all of it, though much of it is torrents, so only the ones with few seeders really need to be backed up. I know about the 3-2-1 rule but it sounds like it would be expensive. What do you do for backups? Also, if anyone uses tape drives for backups I am...

vegetaaaaaaa ,

If this is a "shared hosting" type of server (LAMP stack), you can usually run PHP applications (assuming they are pre-packaged and don't need composer install or similar during the install process). Check https://awesome-selfhosted.net/platforms/php.html

vegetaaaaaaa ,

I think Peertube would be overkill for a single channel, but it's the closest to YouTube in terms of features (multiple formats/transcoding, comments, etc.). Otherwise I would just rip the channel with yt-dlp and set up a "mirror" on something simple like a static site or blog. Find something that works, then automate (a simple shell script + cron job would do the trick, see the sketch below).
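
A sketch of such a script (channel URL and paths are hypothetical):

#!/bin/sh
# mirror a channel, skipping videos already listed in the archive file
yt-dlp --download-archive /srv/mirror/archive.txt \
    -o '/srv/mirror/%(upload_date)s-%(title)s.%(ext)s' \
    'https://www.youtube.com/@examplechannel/videos'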

vegetaaaaaaa ,

On my desktop I do this with quodlibet alongside the KDE connect applet + KDE connect android app, which lets the phone control media players on the desktop. You probably don't want to run a full desktop environment just for this, but it's a good option if you already have a desktop PC with decent speakers.

Mentioning it just in case, because it works for me. If you're looking for a purely headless server there are other good suggestions in this thread.

vegetaaaaaaa ,

I can manually monitor but it doesn’t happen just then

Set up proper monitoring with history. That way you don't have to babysit the server: you can just look at the charts after a crash. I usually go with netdata.

vegetaaaaaaa ,

You could create the alias: alias docker="podman"

There's even an official Debian package that takes care of this for you: https://packages.debian.org/bookworm/podman-docker

vegetaaaaaaa ,

sftp://USERNAME@SERVER:PORT in the address bar of most file managers will work. You can omit the port if it's the default (22), you can omit the username if it's the same as your local user.

You can also add the server as a favorite/shortcut in your file manager sidebar (it works at least in Thunar and Nautilus). Or you can edit ~/.config/gtk-3.0/bookmarks directly:

file:///some/local/directory
file:///some/other/directory
sftp://my.example.org/home/myuser my.example.org
sftp://otheruser@my.example.net:2222/home/otheruser my.example.net

How responsive is your Nextcloud?

My Nextcloud has always been sluggish — navigating and interacting isn't snappy/responsive, changing between apps is very slow, loading tasks is horrible, etc. I'm curious what the experience is like for other people. I'd also be curious to know how you have your Nextcloud set up (install method, server hardware, any other...

vegetaaaaaaa ,

Quite fast.

KVM/libvirt VM with 4GB RAM and 4 vCores, shared with a dozen other services. Storage is not the fastest (qcow2-backed disks on an ext4 partition inside a LUKS volume on a 5400RPM hard drive... I might move it to an SSD sometime soon), so features highly dependent on disk I/O (thumbnailing) are sometimes sluggish. There is an occasional slowdown, I suppose caused by APCu caches periodically being dropped, but once a page is loaded and the cache is warmed up, it becomes fast again.

Standard apache + php-fpm + postgresql setup as described in the Nextcloud official documentation, automated through this ansible role

vegetaaaaaaa ,

VMs have a lot of additional overhead.

The overhead is minimal: KVM VMs have near-native performance (type 1 hypervisor). There is some memory overhead as each VM runs its own kernel, but much of this is cancelled out by KSM [1], a memory de-duplication mechanism.

Each VM runs its own system services (think systemd, logging, etc) so there is some memory/disk usage overhead there - but it would be the same with Incus/LXC as they do the same thing (they only share the same kernel).

https://serverfault.com/questions/225719/so-really-what-is-the-overhead-of-virtualization-and-when-should-i-be-concerned

I usually go for:

  • bare metal;
  • on top of that, multiple VMs separated by context (think "tenant", production/testing, public/confidential/secret, etc.). VMs provide strong isolation which containers do not. At the very minimum it's good to have separate VMs for "serious business" and "lab" contexts;
  • applications running inside the VMs, containerized or not (service/application isolation through namespaces/systemd has come a long way, see man systemd-analyze security). For me the benefit of containerization is mostly ease of deployment and... ahem, running inscrutable binary images with out-of-date dependencies made by strangers on the Internet.

If you go for a containerization solution on top of your VMs, I suggest looking into podman as a replacement for Docker (fewer bugs, smaller attack surface, no single point of failure in the form of a million-lines-of-code daemon running as root, more unix-y, better integration with systemd [2]). But be aware of the maintenance overhead caused by containerization: if you're serious about it you will probably end up maintaining your own images.
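
The systemd integration is quite nice, e.g. (the container name is hypothetical; newer podman versions also support Quadlet unit files):

# generate a systemd unit for an existing container and start it at boot
podman generate systemd --new --name mycontainer > /etc/systemd/system/container-mycontainer.service
systemctl daemon-reload
systemctl enable --now container-mycontainer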

vegetaaaaaaa ,

Obfuscation can be helpful in not disclosing which services you run, or your naming schemes

The "obfuscation" benefits of wildcard certificates are very limited (public DNS records can still easily be found with tools such as sublist3r), and they're definitely a security liability (get the private key of the cert stolen from a single server -> TLS potentially compromised on all your servers using the wildcard cert)

What's a simple logging service?

Hiya, I'm looking to keep track of the different services I'm hosting via Unraid. Right now I'm hosting roughly 12 different services, and it would be nice to have the logs of all my services in one place, preferably with a nice GUI. Are there any such services that could easily connect to the different docker containers I have...

vegetaaaaaaa , (edited )

Syslog over TCP with TLS (you don't want those sweet packets containing sensitive data leaving your box unencrypted). Bonus points for mutual authentication between the server/clients (just got it working and it's 👌 - my implementation here).

It solves the aggregation part but doesn't solve the viewing/analysis part. I usually use lnav on simple setups (gotty as a poor man's web interface for lnav when needed), and graylog on larger ones (definitely costly in terms of RAM and storage though)
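
For reference, one way to do the client side with rsyslog looks roughly like this (hostnames/paths are placeholders; the client certificate/key provide the mutual authentication part):

# /etc/rsyslog.d/50-forward-tls.conf
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/ssl/syslog/ca.crt"
  DefaultNetstreamDriverCertFile="/etc/ssl/syslog/client.crt"
  DefaultNetstreamDriverKeyFile="/etc/ssl/syslog/client.key"
)
action(
  type="omfwd" target="logs.example.org" port="6514" protocol="tcp"
  StreamDriver="gtls" StreamDriverMode="1"
  StreamDriverAuthMode="x509/name" StreamDriverPermittedPeers="logs.example.org"
)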

vegetaaaaaaa ,

Would it be better to just have one PostgreSQL service running that serves both Nextcloud and Lemmy

Yes, performance and maintenance-wise.

If you're concerned about database maintenance (I can't remember the last time I had to do this... once every few years to migrate postgres clusters to the next major version?) bringing down multiple services, set up master-slave replication and be done with it.
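
Running multiple services on one cluster just means separate databases/users (names and passwords below are placeholders):

sudo -u postgres psql -c "CREATE USER nextcloud PASSWORD 'changeme'"
sudo -u postgres psql -c "CREATE DATABASE nextcloud OWNER nextcloud"
sudo -u postgres psql -c "CREATE USER lemmy PASSWORD 'changeme'"
sudo -u postgres psql -c "CREATE DATABASE lemmy OWNER lemmy"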

vegetaaaaaaa ,

/thread

This is my go-to setup.

I try to stick with libvirt/virsh when I don't need any graphical interface (it integrates beautifully with ansible [1]), or when I don't need clustering/HA (libvirt does support "clustering" at least in some capacity: you can live-migrate VMs between hosts, manage remote hypervisors from virsh/virt-manager, etc.). On development/lab desktops I bolt virt-manager on top so I have the exact same setup as my production setup, with a nice added GUI. I've heard that cockpit could be used as a web interface but have never tried it.
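
For example, remote management and live migration from the command line (hostnames/VM name are hypothetical; shared storage assumed):

virsh -c qemu+ssh://root@hv1.example.org/system list --all
virsh -c qemu+ssh://root@hv1.example.org/system migrate --live myvm qemu+ssh://root@hv2.example.org/system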

Proxmox on more complex setups (I try to manage it using ansible/the API as much as possible, but the web UI is a nice touch for one-shot operations).

Re incus: I don't know for sure yet. I have an old LXD setup at work that I'd like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.

vegetaaaaaaa , (edited )

In my experience and for my mostly basic needs, major differences between libvirt and proxmox:

  • The "clustering" in libvirt is very limited (no HA, automatic fencing, ceph inegration, etc. at least out-of-the box), I basically use it to 1. admin multiple libvirt hypervisors from a single libvirt/virt-manager instance 2. migrate VMs between instances (they need to be using shared storage for disks, etc), but it covers 90% of my use cases.
  • On proxmox hosts I let proxmox manage the firewall, on libvirt hosts I manage it through firewalld like any other server (+ libvirt/qemu hooks for port forwarding).
  • On proxmox I use the built-in template feature to provision new VMs from a template, on libvirt I do a mix of virt-clone and virt-sysprep.
  • On libvirt I use virt-install and a Debian preseed.cfg to provision new templates, on proxmox I do it... well... manually. But both support cloud-init based provisioning so I might standardize to that in the future (and ditch templates)
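
The virt-clone/virt-sysprep combo mentioned above boils down to (names are hypothetical):

virt-clone --original debian12-template --name newvm --auto-clone
virt-sysprep -d newvm --hostname newvm
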
vegetaaaaaaa , (edited )

Did you read? I specifically said it didn't, at least not out-of-the-box.

vegetaaaaaaa ,

I should RTFM again... https://manpages.debian.org/bookworm/libvirt-clients/virsh.1.en.html has options for virsh migrate such as --copy-storage-all... Not sure how it would work for actual live migrations but I will definitely check it out. Thanks for the hint
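
For reference, a sketch of what that invocation might look like (untested; names are hypothetical):

virsh migrate --live --persistent --copy-storage-all myvm qemu+ssh://root@hv2.example.org/system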

vegetaaaaaaa ,

The migration is bound to happen in the next few months, and I can't recommend moving to incus yet since it's not in stable/LTS repositories for Debian/Ubuntu, and I really don't want to encourage adding third-party repositories to the mix - they are already widespread in the setup I inherited (new gig), and part of the major clusterfuck that is upgrade management (or the lack thereof). I really want to standardize on official distro repositories. On the other hand the current LXD packages are provided by snap (...) so that would still be an improvement, I guess.

Management is already sold on the idea of Proxmox (not by me), so I think I'll take the path of least resistance. I've had mostly good experiences with it in the past, even if I found their custom kernels a bit strange to start with... Do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable? I'd still like to put a word of caution about that.

vegetaaaaaaa , (edited )

clustering != HA

The "clustering" in libvirt is limited to remote controlling multiple nodes, and migrating hosts between them. To get the High Availability part you need to set it up through other means, e.g. pacemaker and a bunch of scripts.

vegetaaaaaaa , (edited )

DO NOT migrate / upgrade anything to the snap package

It was already in place when I came in (made me roll my eyes), and it's a mess. As you said, there's no proper upgrade path to anything else. So anyway...

you should migrate into LXD LTS from Debian 12 repositories

The LXD version in Debian 12 is buggy as fuck, this patch has not even been backported https://github.com/canonical/lxd/issues/11902 and 5.0.2-5 is still affected. It was a dealbreaker in my previous tests, and doesn't inspire confidence in the bug testing and patching process on this particular package. On top of that, it will be hard to convince the other guys that we should ditch Ubuntu and their shenanigans, and that we should migrate to good old Debian (especially if the lxd package is in such a state). Some parts of the job are cool, but I'm starting to see there's strong resistance to change, so as I said, path of least resistance.

Do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable?

vegetaaaaaaa ,

“buggy as fuck” because there’s a bug that makes it so you can’t easily run it if your locale is different than English?

It sends pretty bad signals when it causes a crash on the first lxd init (sure, I could make the case that there are workarounds: switch locales, create the bridge... but it doesn't help make it appear as a better solution than proxmox). Whatever you call it, it's a bad-looking bug, and the fact that it was not patched in Debian stable or backports makes me think there might be further hacks needed down the road for other stupid bugs like this one. So for now, hard pass on the Debian package (I might file a bug on the BTS later).

About the link, Proxmox kernel is based on Ubuntu, not Debian…

Thanks for the link mate, Proxmox kernels are based on Ubuntu's, which are in turn based on Debian's, not arguing about that - but I was specifically referring to this comment

having to wait months for fixes already available upstream or so they would fix their own shit

Any example/link to bug reports for such fixes not being applied to proxmox kernels? Asking so I can raise an orange flag before it gets adopted without due consideration.

Password Manager that supports multiple databases/syncing?

I currently use KeePass, on both my PC and my phone. I like it because I can keep a copy of my DB on my phone and export it through a few different means. But I can't seem to find an option to actually sync my local DB against a remote one. I've thought about switching to Bitwarden but from what I can see it uses a...

vegetaaaaaaa ,

Why not self host vaultwarden?

How does that work when your vaultwarden instance goes down for some reason? Lose access to passwords? Or does the browser extension still have access to a cached copy of the db?

vegetaaaaaaa ,

but more like playing a video game and it drops down to 15fps

Likely not a server-side problem (check CPU usage on the server): if the server was struggling to transcode, I think it would result in the playback pausing and resuming when the encoder catches up. Network/bandwidth problems would result in buffering. This looks like a client-side playback performance problem. What client are you using? Try multiple clients (use the web interface in a browser as a baseline) and see if it makes any difference.

vegetaaaaaaa ,

i was just worried that the libraries in the container image are outdated

They actually are: trivy scan on authelia/authelia:latest https://pastebin.com/raw/czCYq9BF
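
You can reproduce the scan with:

trivy image authelia/authelia:latest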

How would I automate (VM/LXC)-agnostic templates in Proxmox without creating golden images?

For context: I want to automatically enable Intel SGX for every VM and LXC in Proxmox, but it doesn't seem like there's a way to do it using APIs AFAIK (so Terraform is out of the question unless I've missed something) other than editing the template for the individual LXC/VM....

vegetaaaaaaa ,

I would try enabling it from cloud-init and/or during an initial provisioning step using ansible.

vegetaaaaaaa ,

I would have liked for this to be possible directly through Terraform

Is it this proxmox provider? It does allow specifying cloud-init settings: https://registry.terraform.io/providers/Telmate/proxmox/latest/docs/resources/cloud_init_disk. So you can use runcmd or similar to do whatever is needed inside the host to enable Intel SGX, during the terraform provisioning step.
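
A minimal user-data sketch (the actual SGX-related command is hypothetical; runcmd runs inside the guest at first boot):

#cloud-config
runcmd:
  - echo "this runs inside the guest at first boot"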

AppArmour support for VMs, which is a secure enclave too (if I understand correctly).

Nope, Apparmor is a Mandatory Access Control (MAC) framework [1], similar to SELinux. It complements traditional Linux permissions (DAC, Discretionary Access Control). Apparmor is already enabled by default on Debian derivatives/Ubuntu.

vegetaaaaaaa ,

I was under the impression that cloud-init could only really be used to run commands inside the guest?

Yes that's correct, I didn't realize you had something to do outside the guest to enable it. What exactly? How do you solve it manually for now?

vegetaaaaaaa ,

I see; I agree with you that it should be supported by the terraform provider if it sits at the VM .conf level... maybe a new attribute in https://registry.terraform.io/providers/Telmate/proxmox/latest/docs/resources/vm_qemu#smbios-block? I would start by requesting this feature in https://github.com/Telmate/terraform-provider-proxmox/issues, and maybe try to add it yourself? (Scratch your own itch, fix it for everyone in the process.) Good luck!

vegetaaaaaaa ,

So much server-side code :/ I wrote my own in pure HTML/CSS which gets rebuilt by ansible depending on services installed on the host. Basic YAML config for custom links/title/message.

Next "big" change would be a dark theme, but I get by with Dark Reader which I need for other sites anyway. I think it looks ok

https://lemmy.world/pictrs/image/187f4b0c-e1f3-4486-9984-2a285ff632ab.png

vegetaaaaaaa , (edited )

You can probably use it by templating out https://github.com/nodiscc/xsrv/blob/master/roles/homepage/templates/index.html.j2 manually or using jinja2. Basically, remove the {% ... %} markers and replace the {{ ... }} blocks with your own text/links.

You will need a copy of the res directory alongside index.html (images, stylesheet).

You can duplicate the col-1-3 mobile-col-1-1 and col-1-6 mobile-col-1-2 divs as many times as you like and they will arrange themselves on the page, responsively.
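
A duplicated entry might look like this (the inner markup is hypothetical, adapt it from the template):

<div class="col-1-3 mobile-col-1-1">
  <a href="https://myservice.example.org">My service</a>
</div>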

But yeah this is actually made with ansible/integration with my roles in mind.
