
@Shimitar@feddit.it avatar

Shimitar

@Shimitar@feddit.it

Me


Shimitar ,

Summit. Neat, simple and clear. Updated very often and I simply love it.

Shimitar ,

Why such a mess? Either use Immich with its built-in upload or use Syncthing (I use the latter).

Immich sucks at folder-based albums, but excels at phone photo backup and sync!

Shimitar ,

There are better options for viewing... IMHO.

Shimitar ,

There are many.

Personally I use both PiGallery2 (great folder view, super fast) and HomeGallery (innovative browsing approach, modern looks).

They will both work with your existing file/folder structure and will update as you add or remove photos.

Shimitar ,

Piwigo then, but it's old, feels old, and it's slow and ugly.

Shimitar OP ,

Interesting but seems way overkill....

Shimitar OP ,

Why not agenDav?

Shimitar OP ,

I just learned about its existence from your link... It's referenced there.

Shimitar ,

I've been running Radicale on mydomain.blah/radicale just fine since day 0...

Shimitar ,

I used nextcloud for many years. I failed to see significant improvements overall and it has always been slow and clunky.

I have replaced it with Radicale, a WebDAV server, Syncthing, and little more.

Over the years I tried lots of plugins and never settled on any; they were always either too barebones or too bland.

Still an amazing tool, if it fits your use case.

Shimitar ,

Please refrain from posting without explaining. That's Reddit style and it's considered rude here.

Shimitar ,

I've been on USB enclosures using Linux software RAID for 20 years and never lost a bit so far.

I didn't go cheap with the USB JBODs, and I have no idea if ZFS is more sensitive to USB... but I don't use ZFS either, so I don't know.

But again, I have been using two JBODs over USB:

  • 4 SSDs split into two RAID1s on USB3
  • 2 HDDs in RAID1 on USB-C

All three RAIDs are managed by the Linux software RAID stack.

I think I started the original one back in the 2000s, then upgraded disks many times, and I'm slowly moving to SSDs to lower heat production and power usage.

Keep them COOL, that's important.
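For anyone curious, the Linux software RAID side is a handful of mdadm commands. This is a hedged sketch, not my actual layout: device names, filesystem, and config path are placeholder examples, and these commands are destructive, so double-check with lsblk before running anything like this.

```shell
# Create a RAID1 mirror from two whole USB disks (example device names!).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Put a filesystem on the array and persist the array definition.
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial resync progress.
cat /proc/mdstat
```

The array survives enclosure swaps because md stores its metadata on the disks themselves, which is part of why mixed IDE/SATA/USB setups keep working over the years.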

Shimitar ,

I go to my disks and count my bits every morning, the total is always there, never lost one!

Shimitar ,

Yeah, you know there are only 10 types of people in the world: those who can count in binary and the others...

Shimitar ,

You can run pretty effective software RAID with Linux's built-in drivers. No need for hardware RAID, especially not the cheapo ones...

I've been running Linux software RAID for 20+ years with zero issues... currently on USB3 and USB-C disks, but in the past all kinds of mixed solutions (IDE/SATA/eSATA/USB/FireWire...).

Speed is not a big issue in my experience if you consume your media over the network anyway.

Looking for a reverse proxy to put any service behind a login for external access.

I host a few docker containers and use nginx proxy manager to access them externally since I like to have access away from home. Most of them have some sort of login system but there are a few examples where there isn't so I currently don't publicly expose them. I would ideally like to be able to use totp for this as well.

Shimitar ,

Add PAM or basic auth to nginx and you are done.
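A minimal sketch of the basic-auth route, assuming nginx (user name, password, and file paths below are placeholders, not anything from the original post). Note that basic auth alone does not give you TOTP.

```shell
# Generate an htpasswd-format line; apr1 is the classic htpasswd scheme
# that nginx's auth_basic understands. User and password are placeholders.
HASH=$(openssl passwd -apr1 -salt s0m3salt 'changeme')
printf 'alice:%s\n' "$HASH" > /tmp/htpasswd-demo
cat /tmp/htpasswd-demo

# Then, in the nginx server or location block (illustrative paths):
#   auth_basic           "Restricted";
#   auth_basic_user_file /etc/nginx/htpasswd;
```

Reload nginx afterwards and every request to that location will require the credentials before being proxied to the backing container.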

Shimitar ,

Sysvinit on Gentoo here. It's so simple and clean; everything can be managed and hacked via bash scripts.

I see no benefit in my use cases for systemd. Boot speed is unneeded, service auto-restart is done via Monit, and anything else I don't need.

This is true for all my servers, and all my workstations and laptops as well.

Systemd never solved a problem that needed solving in the first place.

Now that it also makes coffee with cream for you, I start seeing some benefits, like auto-restarting services. Was it worthwhile? Meh, dunno.

At first it seemed like another case of "I am too young and I want stuff done my way just because", and Red Hat shoved it down everybody's throat to gain market dominance. That they did.

At least now systemd looks mature and finally starts making sense. I was even contemplating testing a migration on one server.

Then I remembered I like freedom of choice and keeping up being an old fart, so I didn't (yet).

(No, Wayland and NetworkManager I think are both welcome and were needed from the start.)

The main dev's abrasive attitude didn't help either; it didn't make friends.

How should I do backups?

I have a server running Debian with 24 TB of storage. I would ideally like to back up all of it, though much of it is torrents, so only the ones with low seeders really need to be backed up. I know about the 3-2-1 rule but it sounds like it would be expensive. What do you do for backups? Also if anyone uses tape drives for backups I am...

Shimitar ,

Anything I can download again doesn't get backed up, but it sits on a RAID1. I am OK with losing it to carelessness, but not to a broken disk. I try to be careful when messing with it and that's enough; I can always download it again.

Anything like photos, notes, personal files and such gets backed up via restic to a disk mounted on the other side of the house. Offsite backup: I am thinking about it, but haven't really gotten to it yet. I've been lucky all this time.

Out of 10 TB of stuff, the totality of my backed-up data amounts to 700 GB. Since 90% of it is photos, the backup size is about 700 GB too. The part of that 700 GB that actually changes (text files, documents...) is negligible. The photos never change; at most the set grows a bit over time.
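For reference, the restic side of this workflow is only a couple of commands. A hedged sketch: the repository path and source directories below are made-up examples, not my actual layout.

```shell
# One-time: initialize an encrypted restic repository on the backup disk.
export RESTIC_PASSWORD='use-a-real-secret'
restic init -r /mnt/backup-disk/restic

# Recurring: back up the irreplaceable stuff. restic deduplicates,
# so unchanged photos cost nothing on subsequent runs.
restic backup -r /mnt/backup-disk/restic /data/photos /data/documents

# Sanity check: list what is stored.
restic snapshots -r /mnt/backup-disk/restic
```

Run the backup line from cron and the deduplication keeps snapshot growth proportional to what actually changed.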

Shimitar ,

Rent a cheap VPS and do something like I did with SSH tunneling, or a WireGuard VPN, between home and the VPS:

https://wiki.gardiol.org/doku.php?id=router:ssh_tunnel

(Sorry I keep posting links to my wiki, but the whole point was writing it once.)
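The core of the SSH-tunneling approach boils down to a single reverse tunnel. A sketch with placeholder names: "tunnel@vps.example.org" is a made-up account on the rented VPS, and the VPS's sshd needs GatewayPorts set to yes for the forwarded ports to be reachable from outside.

```shell
# Forward the VPS's public ports 80/443 back to the home server.
# -N: no remote command, just forwarding.
ssh -N \
  -R 80:localhost:80 \
  -R 443:localhost:443 \
  tunnel@vps.example.org
```

Point your domain's DNS at the VPS, and visitors hit the VPS's ports while the traffic rides the tunnel home.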

Shimitar ,

Why rathole and not ssh tunneling? The latter exposes only one port (that you are already exposing anyway) while the former requires an additional port.

What is the actual benefit of rathole? I am asking genuinely.

Shimitar ,

Fair, setting up SSH tunnels with auto-reconnect and such is indeed more complex.
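For what it's worth, most of that extra complexity can be offloaded to autossh, assuming the package is installed (hostname, port, and account below are placeholders):

```shell
# Keep a reverse tunnel alive across connection drops.
# -M 0 disables autossh's monitor port in favor of ssh's own keepalives.
autossh -M 0 -N \
  -o "ServerAliveInterval 30" \
  -o "ServerAliveCountMax 3" \
  -o "ExitOnForwardFailure yes" \
  -R 443:localhost:443 \
  tunnel@vps.example.org
```

ExitOnForwardFailure makes ssh die (and autossh restart it) when the remote port can't be bound, instead of silently running a useless session.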

Shimitar ,

This is a great reason. I didn't know that, but it's interesting.

Shimitar ,

I wouldn't follow the advice of using Immich. While it's a great tool, growing fast and super polished, it's currently aimed at photo backup from your Android phone/tablet and is not a good pick for a family photo gallery.

To that end I would look into PiGallery2 or the very good HomeGallery, which is still in its early stages as well but also quite polished and already working great. They will not replace Immich, but will complement the workflow nicely.

My photo management flow (which includes your requirements, plus the capability to organize new photos over time) is here https://wiki.gardiol.org/doku.php?id=services:photomanagement if you are interested.

In general the flow is: buy or recycle a PC of any kind, install Linux (optional, but recommended), buy a domain you like from some registrar, set up some kind of remote access from outside to your home, and install the services you want.

The workflow necessarily includes hours spent trying and failing, and also having tons of fun in the process. Don't forget the WAF (Wife Appreciation Factor), which will determine how much fun you can have.

Last, I am documenting all my steps and findings as I go down my own selfhost rabbit hole in the above linked wiki (self-hosted, of course).

See you around, I guess!

Shimitar ,

More.

I agree Nextcloud might be a very good solution, especially because all the services you might need are there. The fun factor decreases though.

Also, while Cloudflare is heavily promoted in this community, I disagree. It's probably the easiest approach, but you end up depending on a specific service. Renting a cheap VPS (virtual private server) and setting up a VPN or SSH tunneling is the best approach, but slightly more complex. In exchange you are free to migrate to a different VPS at any time with basically zero downtime.

Using a VPN is clearly the safest approach but has two limits:

  • more complex setup for your users
  • cannot expose public services (like sharing photos with friends outside the family, or sharing your résumé)

Using SSH tunnels to make your internal server accessible on ports 80/443 of the VPS instead gives you maximum freedom, but you run a higher risk unless you secure it properly (service separation, HTTPS with Let's Encrypt, strong authentication and so on...).

Shimitar ,

Sorry man, I am on mobile so I keep missing parts.

As for hardware, I would recycle anything you have at home as long as it has at least 8 GB of RAM and a network card. Especially laptops (low power consumption and a built-in battery in case of power outages) are my favourites. But if you want to spend on new stuff, the low-power N100 boxes are all the rage nowadays.

For storage, go with at least two disks or SSDs or NVMes in RAID1 (and keep in mind that RAID is not backup, which you should still plan for); they can be external USB drives as well, provided you spend some good money and don't go cheap on the USB enclosure. Mine have been working perfectly for the last decade.

Shimitar ,

I create folders with names like:

/gallery/2024/03 - Trail Del Marchesato/

and put all the photos related to that event there.

Or more generic, like:

/gallery/2024/Winter

to collect general photos from that period.

So I divide by year and by reason/event. Inside, each user moves their own photos for that event, or creates their own folders.

Tags do the rest.

HomeGallery lets you view them by similarity or by tags, while PiGallery2 lets you view them by folder. Both together fit the bill.
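The layout, sketched in a scratch directory (the event names are just the examples from above):

```shell
# Recreate the year/event folder layout.
BASE=$(mktemp -d)
mkdir -p "$BASE/gallery/2024/03 - Trail Del Marchesato"
mkdir -p "$BASE/gallery/2024/Winter"

# Both PiGallery2 and HomeGallery can index a tree like this as-is.
ls "$BASE/gallery/2024"
```

Prefixing event folders with the month number keeps them sorted chronologically in any file manager.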

Shimitar ,

Wow... Luckily I don't use systemd, which seems to be the vector enabling the sshd backdoor via liblzma...

Shimitar ,

At home I have FWA over 5G (mobile) with a 1 TB/month traffic cap. That can be raised by 200 GB if needed. Costs €24/month.

On mobile I have 150 GB of capped 3G/4G/5G (whatever works) for €7.99/month.

Not bad deals compared with what I read here.

Shimitar OP ,

I run containers on bare metal indeed.

I have services running in containers on bare metal and services running without containers, on bare metal.

How can I bypass CGNAT by using a VPS with a public IPv4 address?

I want to move away from Cloudflare tunnels, so I rented a cheap VPS from Hetzner and tried to follow this guide. Unfortunately, the WireGuard setup didn't work. I'm trying to forward all traffic from the VPS to my homeserver and vice versa. Are there any other ways to solve this issue?...

Shimitar ,

I tried a few.

Podhoarder is nice but more geared toward hoarding and a bit complex for listening.

Podfetch is nice and currently maintained, but has some issues with proxy auth. It might cut the mustard for you, I think. It has a weird naming scheme on disk though.

AudioBookReader is amazing and still under very active development. With its mobile app it's perfect for my use case. Podcast support is just fine for me.

PodGrabber seems abandoned since 2022, but I didn't try it.

N100 Mini PC w/ 3xNVMe?

Not sure why this doesn't exist. I don't need 12TB of storage. When I had a Google account I never even crossed 15GB. 1TB should be plenty for myself and my family. I want to use NVMe since it is quieter and smaller. 2230 drives would be ideal. But I want 1 boot drive and 2 x storage drives in RAID. I guess I could potentially...

Shimitar ,

I've been using USB3/USB-C external storage for years.

Buy a good, non-cheap USB JBOD or RAID enclosure and put SSDs in it!

I have a 4-bay USB3 JBOD plus a 2-bay USB-C box; inside, the disks are all RAID.

Indeed, internal disks/SSDs/NVMes are better, but consider that speed-wise even USB3 is faster than any WiFi.

Just don't buy them cheap, and use good cables. And if you use spinning disks, ensure they stay cool.

Shimitar ,

I put my 2.5" SSDs with adapters into 3.5" bays.

Shimitar ,

I have rented a cheap VPS and use SSH encrypted port forwarding to it instead of Cloudflare. It's an alternative option.

Shimitar ,

No, I mean: host on your own hardware, then rent a VPS and use it as your public IP by SSH-tunneling ports 80/443 back to your own hardware.

The idea is:
https://wiki.gardiol.org/doku.php?id=selfhost:architecture

Shimitar ,

Yes exactly, you can switch as fast as your DNS entry gets updated, and you have zero dependency on a specific provider.

Should I learn Docker or Podman?

Hi, I've been thinking for a few days whether I should learn Docker or Podman. I know that Podman is more FOSS and I like it more in theory, but maybe it's better to start with docker, for which there is a lot more tutorials. On the other hand, maybe it's better to straight up learn podman when I don't know any of the two and...

Shimitar ,

I fully agree with you that devs should not release debs & rpms & etc.; it's the distro's responsibility to create and manage those from the binaries the devs release. No dev should have to create those distro-specific formats; it's evil and useless.

Let me be more clear: devs are not required to release binaries at all. But they should, if they want their work to be widely used. And in this case, providing a binary release alongside images solves all freedom-of-choice issues, in my opinion. Here you expose my lack of preparation, as I hadn't considered Dockerfiles to be actual build instructions; I will in the future.

I also fully agree with you that curl+pipe+bash of random stuff should be banned as an awful practice, and that it is much worse than containers in general. But posting instructions on forums and websites is not per se dangerous or bad practice. Following them blindly is, but there are still people not wearing seatbelts in cars or helmets on bikes, so...

I was not singling containers out; I was replying to a post about containers. If you read my wiki, every time a curl/pipe/bash approach is proposed, I decompose it and advise against doing it that way.

Chmod 777 should be banned in any case, but that stems from container usage (due to wrongly built images) more than anything else, so I guess you are biting your own cookie here.

Having Dockerfiles and compose files is perfectly acceptable. What is not acceptable is having only those and no binary releases. Usually sources are available (in FOSS apps at least), but that can be useless if no build instructions are provided or the app uses a less common build stack.

On Immich, which is a perfect example of an amazing piece of software, fast-growing and very polished: I did try to build it from sources, but I couldn't manage the ML part properly. This is indeed due to my lack of experience with the peculiar stack they are using, but some build instructions would have been greatly appreciated (now I realize I should have started from the Dockerfiles). I gave up and pulled the images. No harm done, but little extra fun for me. And while I do understand the devs' position, they keep talking about making a living out of it, and that's a totally different point to discuss in a different thread. I would suggest to them that public relations and user support matter more for making a living than releasing an amazing product does. But that's just my real-world experience as a product manager.

In a world where containers are the only proposed solution, I believe something will be taken from us all. Somebody else explained that concept better than me in this thread. That's all.

Are you reusing one postgres instance for all services?

I have many services running on my server and about half of them use postgres. Whenever I installed them manually I would always create a new database and reuse the same postgres instance for each service, which seems quite logical to me. The least amount of overhead, fast boot, etc....

Shimitar ,

Why would that have blocked all my databases at once? It would affect only the database I was migrating, not the others.

Shimitar ,

Yes, it counts indeed... But in that case the service is down while it's being migrated, so does it matter that the database is also down?

I mean, it's a self-hosted home service, not your bank's ATM network...

Shimitar ,

Absolutely! If that feels easier and more consistent go ahead and use the container.

But it's really one single executable with zero dependencies. Manual setup is really as fast as podman pull & up -d.
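Both routes, sketched side by side with placeholder names (this is not the actual project's image or download URL):

```shell
# Container route (placeholder image name):
podman pull docker.io/example/service:latest
podman run -d --name service -p 8080:8080 docker.io/example/service:latest

# Single-binary route (placeholder URL), commented out here:
#   wget https://example.org/service-linux-amd64 -O service
#   chmod +x service && ./service
```

Either way it's two or three commands; the difference is only in where updates and files end up living.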

PSA: Docker nukes your firewall rules and replaces them with its own.

I use nftables to set my firewall rules. I typically manually configure the rules myself. Recently, I just happened to dump the ruleset, and, much to my surprise, my config was gone, and it was replaced with an enormous number of extremely cryptic firewall rules. After a quick examination of the rules, I found that it was...

Shimitar ,

That's another good reason to use Podman: its rules live in nft and stay separated from your own rules.

Shimitar ,

Nope, Joplin saves .md files, but those are clearly NOT plain Markdown. I switched after I got burned.

Shimitar ,

That's the point: that is not a Markdown file. Most of the text is Markdown, but try editing it with a different editor...

Try going back and forth between MD editors...

You end up with a mess. I want MD for interoperability, and this is not good.

Shimitar ,

I am using Markor on Android and SilverBullet (web) on everything else.

Joplin was OK, but the Android editor felt sluggish and the only available web GUI was... meh. And I still had to use WebDAV to sync. And I lost all my data once due to how Joplin "thinks" sync should be done.

Now using Syncthing with Markor & SilverBullet. Nice combo, and I can still access all my notes over WebDAV anyway.
