ancoraunamoka

@ancoraunamoka@lemmy.dbzer0.com


ancoraunamoka ,

First of all, ignore the trends. Fuck Docker, fuck NixOS, fuck Terraform, or whatever tech stack gets shilled constantly.

Find a tech stack that is easy FOR YOU and settle on that. I haven't changed technologies for 4 years now and feel like everything can fit in my head.

Second of all, look at the other people using commercial services and see how stressed they are: Google banned my account, YouTube has ads all the time, the app for service X changed and it's unusable, and so on.

Nothing comes for free in terms of time and mental baggage.

ancoraunamoka ,

The only good reply in the thread. Thanks for saying this.

How should I do backups?

I have a server running Debian with 24 TB of storage. I would ideally like to back up all of it, though much of it is torrents, so only the ones with few seeders really need to be backed up. I know about the 3-2-1 rule, but it sounds like it would be expensive. What do you do for backups? Also, if anyone uses tape drives for backups I am...

ancoraunamoka ,

I am a simple man, so I use rsync.

Set up a mergerfs drive pool of about 60 TiB and rsync to it weekly.

Rsync seems daunting at first, but then you realize how powerful and, most importantly, how reliable it is.
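
As a minimal sketch, assuming a hypothetical source pool at /srv/pool and the backup pool mounted at /mnt/backup, the weekly run can be as simple as:

    # -a preserves permissions, times, and symlinks; -H keeps hard links;
    # --delete makes the target an exact mirror of the source
    rsync -aH --delete /srv/pool/ /mnt/backup/

The trailing slash on the source matters: it copies the contents of /srv/pool rather than the directory itself.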

It's important that you try to restore your backups from time to time.
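
For example (paths hypothetical again), pull a directory back out and compare it against the live copy:

    # restore one directory to a scratch location...
    rsync -a /mnt/backup/photos/ /tmp/restore-test/
    # ...and verify it matches the original byte for byte
    diff -r /srv/pool/photos /tmp/restore-test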

The main reasons I avoid software such as Kopia, Borg, Restic, or whatever is in fashion:

  • they go unmaintained
  • they are not simple: many of my friends have struggled to restore backups, because you are no longer dealing with files but with encrypted or compressed blobs
  • rsync has an easy mental model and has extremely good defaults

ancoraunamoka ,

What other people are saying is that you rsync onto an encrypted file system or some other type of storage. What are your backup targets? In my case I own the disks, so I use LUKS partition -> ext4 -> mergerfs to end up with a single volume I can mount on a folder.
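
A rough sketch of that stack, with device names and mount points as placeholders:

    # unlock each LUKS partition
    cryptsetup open /dev/sda1 disk1
    cryptsetup open /dev/sdb1 disk2

    # mount the ext4 filesystems inside
    mount /dev/mapper/disk1 /mnt/disk1
    mount /dev/mapper/disk2 /mnt/disk2

    # pool the branches into a single mergerfs volume
    mergerfs /mnt/disk1:/mnt/disk2 /mnt/pool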

ancoraunamoka ,

How does this look safer for rsync? To me the risk seems similar, but I might not know the development background of these tools.

Rsync is available out of the box in most Linux distros and is used widely, not only for backups but for a lot of other things, such as repository updates and transfers from file hosts. This means a lot more people are interested in it. Also, looking at the source code, the implementation is cleaner and easier to understand.

How do you deal with it when just a file changes?

I think you should consider that not all files are equal. Rsync for me is great because I end up with a bunch of disks that contain an exact copy of the files I have on my own server. Those files don't change frequently; they are movies, pictures, songs, and so on.

Other files, such as code, configuration, and files on my smartphone, are backed up differently. I use git for most things that fit its model, and Syncthing for my temporary folders and my mobile phone.

Not every file suits the same backup model. I trust my weekly rsync backup to hold files that get corrupted or lost; a configuration file I messed up two minutes ago is in git.

ancoraunamoka ,

As long as you understand that simply syncing files does not protect against accidental or malicious data loss like incremental backups do.

Can you show me a scenario? I don't understand how incremental backups cover malicious data loss cases.
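
(For illustration, the scenario such warnings usually have in mind: ransomware encrypts files on the server, and the next plain mirror run faithfully copies the encrypted versions over the only good copies. Dated, hard-linked snapshots via rsync's --link-dest are one hedge; the paths below are hypothetical:

    # each run writes a new dated snapshot; files unchanged since the
    # previous run are hard-linked, so they cost no extra space
    TODAY=$(date +%F)
    rsync -aH --link-dest=/mnt/backup/last /srv/pool/ "/mnt/backup/$TODAY/"
    # repoint "last" at the snapshot just written
    ln -sfn "/mnt/backup/$TODAY" /mnt/backup/last

An encrypted file then only poisons snapshots taken after the attack; older ones still hold the originals.)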

ancoraunamoka ,

Going unmaintained is a non-issue, since you can still restore from your backup. It is not like a subscription or proprietary software, which is no longer usable when you stop paying for it or the company behind it goes down.

Until they hit a hard bug, or don't support newer transport formats or scenarios. Also, the community dries up eventually.

ancoraunamoka ,

It is unrealistic that in a stable software release there is suddenly, after you have tested your backup, a hard bug which prevents recovery.

How is it unrealistic? Think of this:

  • day 1: you back up your files, test the backup, and everything is fine
  • day 2: you store a new file that triggers a bug in the compression/encryption algorithm of whatever software you use; now the backups are corrupted, at least for this file

Unless you test every backup you make, and consequently can't back up fast enough, I don't see how you can predict that future files and situations won't trigger bugs in the software. With plain files, a spot check is cheap; see the sketch below.
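
With rsync the backup is just files, so the spot check is a one-liner (paths hypothetical):

    # dry run (-n) with full checksums (-c); -i itemizes any file whose
    # content differs between the live pool and the backup
    rsync -anci /srv/pool/ /mnt/backup/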