atzanteol

@atzanteol@sh.itjust.works

atzanteol ,

So you know - that's the max power output rating of the power supply. The NAS can be using anything "up to" that amount. Likely well below it.

atzanteol ,

Sorry - I thought you didn't know rather than were just offering completely useless information on purpose.

Beginner looking for NAS advice (kbin.social)

I'm looking for advice on how to get started with a NAS, probably Synology since it's beginner friendly and often well recommended. I'm thinking of a 2 bay case with 2x4TB HDDs in RAID1 setup. What do I have to look out for in a device to get the best bang for my bucks?...

atzanteol ,

I don't have a lot of experience with it yet so I'm not sure if I'm going to run into problems by mapping them directly to a NAS, or if I should have local copies of data and then rsync / syncthing them into the NAS.

I don't know Synology specifically but you can generally NFS mount from the NAS to a local folder and mount that as a volume in Docker. I do it all the time - works fine except sometimes for databases which prefer local filesystems (locking files over NFS is complex).
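A minimal sketch of that setup (the NAS hostname, export path, and image name are placeholders):

$ sudo mount -t nfs nas.local:/volume1/data /mnt/nas-data   # mount the NAS export locally
$ docker run -d -v /mnt/nas-data:/data my-app               # hand it to the container as a volume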

atzanteol ,

what's the life expectancy of a NAS? If it dies, can I just plug the drives into a new one?

Others have said that the drives are the weak point here - the NAS itself should last quite a while. But to address your second question - "maybe" (assuming you meant "and keep the data on them"). It will depend a lot on how the RAID on the NAS works. If it's just a Linux md RAID then you could probably pop the drives into a new Linux system and get them to mount (you may need to sort out "drive order" issues). Similarly, if it's using standard ZFS or BTRFS raid-like filesystems you would be fine. If the NAS uses its own proprietary RAID format or hardware RAID then likely not.
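If it is Linux md under the hood, re-assembly on a new machine is usually just this (mdadm reads the array metadata off the drives; mount point is a placeholder):

$ sudo mdadm --assemble --scan        # find and assemble arrays from drive superblocks
$ sudo mount /dev/md0 /mnt/recovered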

atzanteol ,

The allowed IP range on each client indicates what private address the server can use

I really dislike this description - yet I see it everywhere. It caused me a ton of confusion initially.

It's the IP addresses that will be routed over the VPN. So if you wanted, say, all traffic to go through the VPN then you would use "0.0.0.0/0". Which is what I do for my phone.
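As a sketch, a WireGuard client config that routes everything through the tunnel looks something like this (keys, addresses, and endpoint are placeholders):

[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# AllowedIPs = which destinations get routed into the tunnel
AllowedIPs = 0.0.0.0/0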

atzanteol ,

There are pros and cons to each...

There's the question of isolation. One shared service going down brings down multiple applications. But it can be more resource efficient to have a single DB running. If you're doing database maintenance it can also be easier to have one instance to tweak. But if you break something it impacts more things...

Generally speaking I lean towards one db per application. Often in separate containers. Isolation is more important for what I want.

I don't think anyone would say you're wrong for using a single shared instance though. It's a more "traditional" way of treating databases actually (from when hardware was more expensive).
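A sketch of the one-database-per-application pattern with plain docker (names and credentials are placeholders):

$ docker network create app-net
$ docker run -d --name app-db --network app-net -e POSTGRES_PASSWORD=secret postgres:16
$ docker run -d --name app --network app-net -e DATABASE_URL=postgres://postgres:secret@app-db:5432/postgres example/app

Each application gets its own network and database container, so a broken or down database only takes that one app with it.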

atzanteol ,

Network interfaces can be assigned multiple IP addresses. You should be able to use DHCP and a link local address at the same time.

That said, I think this is easier to do with NetworkManager. I'm not sure how it works on the Pi. But "link local address rpi" is a good search term to start with.
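With plain iproute2 a quick test looks something like this (interface name and address are examples):

$ sudo ip addr add 169.254.10.5/16 dev eth0   # add a link-local address alongside the DHCP one
$ ip addr show eth0                           # should now list both addresses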

atzanteol ,

Would you set a gateway? They're on the same network.

atzanteol ,

There is something to be said about CLI applications being risky by default ("rm" doesn't prompt to ask, rsync --delete will do just that). But I've definitely slipped on the mouse button while "drag & dropping" files in a GUI before. And it can be a right mess if you move a bunch of individual files rather than a sub-folder...
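That's one reason the dry-run flags are worth making a habit, e.g.:

$ rsync -av --delete --dry-run src/ dest/   # lists what would be deleted without touching anything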

Starting from zero

I'm interested in exploring the world of self hosting, but most of the information that I find is incredibly detailed and specific, such as what type of CPU performs better, etc. What I'm really looking for is an extremely basic square 1 guide. I know basically nothing about networking, I don't really know any coding, but it...

atzanteol ,

Recommend doing a jellyfin server with an *arr stack

That's a great way to get a cease and desist letter from your ISP.

atzanteol ,

There are apps for lots of devices including Android TV which is what I use.

Self hosted NAS + Lightweight Game Streaming Solution?

I have an aging gaming desktop with a GTX 970 that I've previously used to let friends/family stream games. My area has a lot of fiber so it's surprisingly usable, even got VR working. Problem is, I'd prefer to use it as a NAS most the time as it has plenty of drive bays and I need somewhere better to run jellyfin than my...

atzanteol ,

I'd prefer to use it as a NAS most the time as it has plenty of drive bays and I need somewhere better to run jellyfin than my desktop.

I feel like "NAS" has simply lost all meaning.

atzanteol ,

Jellyfin is a NAS?

atzanteol ,

They're looking to run Jellyfin on the NAS. And do a little gaming. That's not a NAS. It's a server/desktop that serves NFS shares.

atzanteol ,

That, is not a NAS then.

What is your preferred method for backing up several TB of data?

What storage software could I run to have an archive of my personal files (a couple TB of photos) that doesn't require I keep a full local copy of all the data? I like the idea of a simple and focused tool like Syncthing, but they seem to be angling towards replication....

atzanteol ,

Sounds like something like "git annex" is what you're looking for?

I use this to manage all my photos. It lets you add binaries and synchronize them to a backend server (can be local, can be S3, Backblaze, etc.).

You can then "drop" files, and it ensures a copy exists on a remote first. When you drop a file you still see a symlink to it locally (a broken one) so that you know it exists.

My workflow is to add my files, sync them to both a local server and B2, then drop and fetch folders as I need (need disk space? "git annex drop 2022*"; want to edit some photos? "git annex get 2022_10_01").
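A rough sketch of that flow (paths and dates are placeholders):

$ git annex add 2022_10_01/      # check the photos into the annex
$ git annex sync --content       # push the data to the configured remotes
$ git annex drop 2022_10_01/     # free local space; refuses unless a remote has a copy
$ git annex get 2022_10_01/      # pull the content back when you want to edit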

atzanteol ,

I'm using the bpg provider - but I share your pain. Both providers have things that don't work, so I went with the one that supported my use-case better. But it's not ideal.

I would love an official provider.

How do I automount sshfs?

I have SSHFS on my server and would like to have it automatically mounted and store all of the documents, desktop, downloads, etc. on a couple computers. I am able to get it to all work except for mounting on startup. The server is Debian 12 and both clients are Tumbleweed. Nothing in fstab seems to work. When I add...

atzanteol ,

Why use SSHFS for that?

So that you don't have copies of files everywhere.

atzanteol ,

everywhere you want to use the files.

atzanteol ,

That's nice! I've always wanted a KVM but yeah, they're always super pricey...

atzanteol ,

My work laptop is 16x10 and I do appreciate the extra vertical space.

atzanteol ,

Are there any specific services that make more sense to host on a laptop that would be sitting turned on but put away somewhere?

Nothing comes to mind.

One nice thing about using laptops though is the built-in UPS, assuming the battery is still good.

atzanteol ,

This sort of thing can be a bit of a pain.

chmod -R a+rX /path/to/pictures will grant "world-readable" permissions so Immich would be able to find the files. You'd then want to set something like umask 002 for Nextcloud so it creates files world-readable by default. If it's running in a container I'm not sure how that's done, as I've not done it before. You then hope Nextcloud doesn't set its own file permissions, which it may do out of a duty to be more secure.

If you don't want files to be world-readable you could create a group that Nextcloud and Immich share and set group ownership. You may need to set the setgid bit on the directories so new files inherit the group, and then hope the individual applications don't override it, which they probably will.

If you can get both apps to use the same user or group that would probably be best. With the containerized versions of these you might be able to pass in a UID/GID for them to use?
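A sketch of the shared-group approach (group name and path are placeholders):

$ sudo groupadd media
$ sudo chgrp -R media /path/to/pictures
$ sudo chmod -R g+rwX /path/to/pictures
$ sudo find /path/to/pictures -type d -exec chmod g+s {} +   # setgid dirs: new files inherit the group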

atzanteol ,

To be pedantic - KVM is the hypervisor. Proxmox is a wrapper around it.

atzanteol , (edited )

I know, I know enough to be dangerous now, and I’m trying to get the system through my dangerous phase. I don’t think I know enough to ask intelligent questions yet…

That's fine - we all start somewhere.

I went looking to see if there were any "intro to networking for homegamers" sites but didn't come up with much... Maybe I'll put something together some day as this is a frequently misunderstood topic.

You "typically" have something like this:
Internet -> Your "ISP plastic box" (which acts as a router, firewall and gateway (actual terms you'll want to understand and can search on)) -> Things on your network.

In this scenario you have two separate networks - the Internet (things on the left of the firewall) and your internal network (things on the right).

Your internal things get to the internet by asking the gateway to fetch things for them. This is called "Network Address Translation" (NAT). Your internal network uses non-globally-routable IP addresses from the ranges 192.168.0.0/16, 10.0.0.0/8, and 172.16.0.0/12. These are sometimes called RFC1918 addresses. They can't be used on the internet; they're reserved for internal private use only. There are thousands of networks using those ranges internally, so they're not globally unique.

The router has a "public" facing internet connection which gets an IP from your ISP that is globally unique. And it has a "private" facing connection that gets a private IP address (something like 192.168.0.1 is common). If you run ip route you'll see something like this:

$ ip route
default via 192.168.0.1 dev wlp0s20f3 proto dhcp metric 600 

This tells your computer to send all traffic that is not on the local private network (or it doesn't have a route for specifically) to the gateway (at 192.168.0.1) to fetch for you.

Things on the internet side of your router can't access things on the private network directly by default. So if you haven't gone out of your way to make that happen then I have good news - you're probably fine. What you're installing with UFW is a "host-based firewall". It only blocks and restricts access to ports on that one server. But the router also has a firewall, which blocks everything inbound from reaching your network.

If you do want to access services in your private network from the internet side then you do something called "port forwarding". This means that when systems on the internet connect to your router on, for example, port 80 the router will "forward" the request to an internal system on that port (or a different one depending on how you configure it). But only that port gets forwarded and to a specific internal host/port. The router then acts as a go-between to make the communication happen.

Once you start exposing services to the internet you open up a larger can of risk that you'll want to understand.

In short - if you're not doing anything fancy then you probably don't really need host-based firewalling on systems in your private network. It wouldn't hurt - and I do it as well - but it's not a big deal if you don't.
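If you do want host-based rules anyway, a UFW sketch might look like this (assuming your LAN is 192.168.0.0/24):

$ sudo ufw default deny incoming
$ sudo ufw allow from 192.168.0.0/24 to any port 22 proto tcp   # SSH from the LAN only
$ sudo ufw enable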

How should I host Handbrake?

I'm currently using Handbrake on a windows 11 installation of a desktop computer. I am planning on turning this desktop computer into a NAS and media server to replace my current raspberry pi 4 system. Handbrake works great on this computer, but I was wondering how I could use Handbrake on this system after I convert it into a...

atzanteol ,

I wouldn't wish using ffmpeg on my worst enemy. The CLI requires a sophisticated understanding of encodings and codecs, which makes writing a general "I want to rescale things" script weirdly complicated.

The CLI options seem to change as well; I'm rarely able to run an ffmpeg line found in the wild with any success. Some systems even ship "avconv" instead, which makes this even more exciting.

/rant

atzanteol ,

I feel this comment in my bones. I decided a long time ago that if the solution to my problem was "use ffmpeg" then the problem wasn't worth solving. I just have more that I want to do in life than googling the parameters of some ffmpeg plugin and reading Wikipedia on the theory of digital video encoding just to know what CRF is, whether I want it, and whether "9" is the value I want for it.

atzanteol , (edited )

The argument for hardware RAID has typically been about performance. But software RAID has been plenty performant for a very long time. Especially for home-use over USB...

Hardware RAID also requires you to use the same RAID controller to use your RAID. So if that card dies you likely need a replacement to use that RAID. A Linux software RAID can be mounted by any Linux system you like, so long as you get drive ordering correct.

There are two "general" categories for software RAID. The more "traditional" mdadm and filesystem raid-like things.

mdadm creates and manages the RAID in a very traditional way and provides a new filesystem-agnostic block device, typically something like /dev/md0. You can then put whatever you like on top (ext4, BTRFS, ZFS, or even LVM).
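A minimal mdadm sketch (device names are placeholders, and this destroys whatever is on them):

$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
$ sudo mkfs.ext4 /dev/md0
$ sudo mount /dev/md0 /mnt/raid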

Newer filesystems like BTRFS and ZFS implement raid-like functionality with some advantages and disadvantages. You'll want to do a bit of research here depending on the RAID level you wish to implement. BTRFS, for example, doesn't have a mature RAID5 implementation as far as I'm aware (since last I checked - double-check though).

I'd also recommend thinking a bit about how to expand your RAID later. Run out of space? You want to add drives? Replace drives? The different implementations handle this differently. mdadm has rather strict requirements that all partitions be "the same size" (though you can use a disk bigger than the others but only use part of it). I think ZFS allows for different size disks which may make increasing the size of the RAID easier as you can replace one disk at a time with a larger version pretty easily (it's possible with mdadm - but more complex).
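With ZFS, the one-disk-at-a-time upgrade is roughly this (pool and device names are placeholders):

$ sudo zpool set autoexpand=on tank
$ sudo zpool replace tank sda sdd   # repeat per disk, letting each resilver finish first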

You may also wish to add more disks in the future and not all configurations support that.

I run a RAID5 on mdadm with LVM and ext4 with no trouble. But I built my RAID when BTRFS and ZFS were a bit more experimental so I'm less sure about what they do and how stable they are. For what it's worth my server is a Dell T110 from around 12 years ago. It's a 2 core Intel G850 which isn't breaking any speed records these days. I don't notice any significant CPU usage with my setup.

atzanteol ,

If I were to redo things today I would probably go with ZFS as well. It seems to be pretty robust and stable. In particular the flexibility in drive sizes when doing RAID. I've been bitten with mdadm by two drives of the "same size" that were off by a few blocks...

atzanteol ,

Y’all must’ve been doing something wrong with your hardware raid to have so many problems. Anecdotally, as an admin for 20+ years, I’ve never had a significant issue with hardware raid. The exception might be the Sun 3500 arrays. Those were such a problem and we had dozens of them.

So what were you doing wrong to have so much trouble with the Sun 3500's?

atzanteol ,

This would prevent your friend from having to open ports in their router and from exposing their IP to the world (beyond their normal traffic, that is).

Their IP address is already "exposed to the world." I keep seeing people recommending this pattern in this community for the same reason. But I genuinely don't understand it. It sounds like one of those VPN ads frankly.

Your IP address is not private.

Frankly I would mothball the servers and move everything to the cloud rather than use a friend's resources. You retain control over the environment and don't need to worry about somebody unplugging your computer to vacuum.

atzanteol ,

What benefit do you think the vps provides though?

atzanteol ,

And you do realize there's a significant difference between exposing your IP as a client and exposing your IP as one that has servers hosted behind it, right?

No, there isn't. Bots scan indiscriminately. And script kiddies will still attack your servers running in their network, just via your proxy.

atzanteol ,

Hiding your IP when you open services to the internet.

No it doesn't. It hides it from things accessing your server, but your IP address is not a secret and bots will scan it even if you do absolutely nothing online. And unless you're using a VPN 24x7 while browsing, you give your IP address out more often by "using the internet" than you would by "running a server".

Though I suppose if you're the sort of person who really cares about hiding their IP you're also using a VPN 24x7 anyway... The VPN companies' marketing has worked wonders on spooking people about "your IP is available" it seems. I mean - sure, it is. But who cares?

Breaking out of ISP NAT (aka carrier NAT / CGNAT), where clients can’t open connections to your public IP.

That's fair - if needed.

atzanteol ,

HTTPS performs two duties.

  1. Secures your connection from prying eyes.
  2. Verifies the identity of the server.

Your VPN provides the former but not the latter. That said the odds of there being an issue in this regard are so slim as to be zero, so you'll probably be fine.

atzanteol ,

Wut? Ext4 is quite reliable.

atzanteol ,

Most filesystems should "just work" these days.

Why are you blaming the filesystem here when you haven't ruled out other issues yet? If you have a failing drive, a new FS won't help. Check out "smartctl" to see if it reports errors on your drives.
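For example (the device name is a placeholder):

$ sudo smartctl -H /dev/sda   # quick pass/fail health verdict
$ sudo smartctl -a /dev/sda   # full SMART attributes and error log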

atzanteol ,

FWIW lvm can give you snapshots and other features. And mdadm can be used for a raid. All very robust tools.
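A snapshot sketch (volume group and LV names are placeholders):

$ sudo lvcreate --size 5G --snapshot --name data-snap /dev/vg0/data
$ sudo lvremove /dev/vg0/data-snap   # discard the snapshot when done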

atzanteol ,

Nobody is going to go through the effort to ddos a personal site. 😂

atzanteol ,

That's not a ddos. Not even close. Your ISP would be getting involved if it were.

You don't even need to do a distributed dos against a home system since your bandwidth is so easy to overcome. A single EC2 instance could flood your standard home network.

atzanteol ,

Has it "denied service" to you? I'd be genuinely surprised. Are you on dial-up? I've run servers on my home network for "never you mind how long" and have never had a denial of service due to bot traffic.

atzanteol ,

Ahh - I see. That's why I keep telling people "a raspberry pi is not a server". :-)

As a self-hoster I would still recommend figuring out how to set up something as simple as one of the available WordPress caching plugins though. "Being lazy" and "self-hosting" will end in tears.

Jayjo , to Selfhosted

@selfhosted Have a commercial @wireguard VPN on my server. The problem I have is that if I use Docker, the container does use the VPN interface with iptables, but if that goes down, the container still goes through without the VPN interface. I have looked at iptables, but Docker makes its own, and it's a bit of a minefield. Any ideas? Thanks

atzanteol ,

Maybe somebody else will provide more info, but by default Docker creates a bridge for your containers called docker0 and uses the local system's routing tables.

You need to figure out how to either create a new Docker network that only routes via the VPN, or do that for your host as well.
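One common pattern (a sketch, assuming a containerized VPN client such as gluetun) is to run the app inside the VPN container's network namespace, which acts as a kill switch:

$ docker run -d --name vpn --cap-add NET_ADMIN ghcr.io/qmcgaw/gluetun
$ docker run -d --network container:vpn my-app   # my-app has no network path except through vpn

If the VPN container goes down, the app loses connectivity instead of leaking around it.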

atzanteol ,

They will also use 1.1.1.1 whenever they want. The order is not guaranteed.

Hosts also tend to use the same one for some time, so if your pihole went down clients may still favor 1.1.1.1 even after it comes back up.

atzanteol ,

I was doing some work on my server and noticed that when pi-hole was down, I couldn't access the internet.

You've opted to take control over a critical piece of network infrastructure. This is to be expected.

There's a reason DHCP provides for multiple DNS servers to be listed. Having redundant DNS servers is a common setup. So yes, multiple piholes if you want stability.
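If your DHCP server is dnsmasq-based (as Pi-hole's is), handing out two resolvers is one config line (addresses are placeholders):

dhcp-option=option:dns-server,192.168.0.10,192.168.0.11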

atzanteol ,

Does it run in the foreground? It works for me if I run podman run -it --rm docker.io/frooodle/s-pdf:latest. Maybe it's something with the application itself and some data in those mount points?

atzanteol ,

bots all over the net will find any openly exposed ports in no time and start attacking it blindly,

True.

putting strain on your router

I guess? Not more than it can handle, mind. But sure, there will be a bit of traffic. This is also kinda true whether you expose ports or not - the scanning is relentless.

and a general risk into your home network.

Well... if your proxy forwards traffic to your home network you're still effectively exposing your home network to the internet. There's just a hop in between. Scans that attack the web applications mostly don't know or care about your proxy. If I hacked a service through the proxy I'd still gain access to your home network.

That said, having crowdstrike add a layer of protection here is a good thing; it may catch something you didn't know about (e.g. a forgotten default admin password). But having it on a different network over a VPN doesn't seem to add any value here.

atzanteol ,

You make a good point. But I still find that directly exposing a port on my home network feels more dangerous than doing so on a remote server.

You do what makes you feel comfortable, but understand that it's not a lot safer. It's not useless though so I wouldn't say don't do it. It just feels a bit too much effort for too little gain to me. And maybe isn't providing the security you think it is.

It's not "where the port is opened" that matters - it's "what is exposed to the internet" that matter. When you direct traffic to your home network then your home network is exposed to the internet. Whether though VPN or not.

The proxy server is likely the least vulnerable part of your stack, though I don't know if "caddy" has a good security reputation. I prefer Apache and nginx as they're tried and true and used by large corporations in production environments for that reason. Your applications are the primary target. Default passwords, vulnerable plugins, known application server vulnerabilities, SQL injections, etc. are what bots are looking for. And your proxy will send those requests along whether it's in a different network or not. That's why I do like that you have something that will block such "suspect" requests and slow that scanning down.

Your VPS only really makes any sense if you have a firewall in 'homelab' that restricts traffic to and from the VPN and specific servers on specific ports. I'm not sure if this is what is indicated by the arrows in and out of the "tailscale" box? Otherwise an attacker with local root on that box will just use your VPN like the proxy does.

So you're already exposing your applications to the internet. If I compromise your Jellyfin server (through the VPS proxy and VPN) what good is your VPS doing? The first thing an attacker would want to do is setup a bot that reaches out to the internet establishing a back-channel communication direct to your server anyway.

Judging from a couple articles I read online, if I wanted to publicly expose a port on my home network, I should also isolate the public server from the rest of the local LAN with a VLAN.

It's not "exposing a port that matters" - it's "providing access to a server." Which you've done. In this case you're exposing servers on your home network - they're the targets. So if you want to follow that advice then you should have your servers in a VLAN now.

The reason for separating servers on their own VLAN is to limit the reach an attacker would have should they compromise your server. e.g. so they can't connect to your other home computers. You would create 2 different networks (e.g. 10.0.10.0/24 and 10.0.20.0/24) and route data between them with a firewall that restricts access. For example 10.0.20.0 can't connect to 10.0.10.0 but you can connect the other way 'round. That firewall would then stop a compromised server from connecting to systems on the other network (like your laptop, your chromecast, etc.).
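An iptables sketch of that one-way policy (using the example networks above):

$ sudo iptables -A FORWARD -s 10.0.10.0/24 -d 10.0.20.0/24 -j ACCEPT   # trusted LAN -> servers
$ sudo iptables -A FORWARD -s 10.0.20.0/24 -d 10.0.10.0/24 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT   # replies only
$ sudo iptables -A FORWARD -s 10.0.20.0/24 -d 10.0.10.0/24 -j DROP     # no new connections from the server VLAN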

I don't do that because it's kind of a big bother. It's certainly better that way, but I think acceptable not to. I wouldn't die on that hill though.

I want to be careful to say that I'm not saying that anything you're doing is necessarily wrong or bad. I just don't want you to misunderstand your security posture.
