
atzanteol

@atzanteol@sh.itjust.works


atzanteol ,

HDDs don't do well when rotated

The original iPod had an HDD in it. You can rotate HDDs. Sharp impacts may be risky though, especially for a non-laptop drive.

Is it practically impossible for a newcomer to self-host without using centralised services and without getting DDoSed or hacked?

I understand that people enter the world of self hosting for various reasons. I am trying to dip my toes in this ocean to try and get away from privacy-offending centralised services such as Google, Cloudflare, AWS, etc....

atzanteol ,

Reverse proxies don't add security.

atzanteol ,

My reverse proxy setup allows me to map hostnames to those services and expose only 80/443 to the web,

The mapping is helpful but not a security benefit. The latter can be done with a firewall.

Paraphrasing - there is a bunch of stuff you can also do with a reverse proxy

Yes. But that's no longer just a reverse proxy. The reverse proxy isn't itself a security tool.

I see a lot of vacuous security advice in this forum. "Install a firewall", "install a reverse proxy", etc. This is mostly useless advice. Yes, do those things but they do not add any protection to the service you are exposing.

A firewall only protects you from exposing services you didn't want to expose (e.g. NFS or some other service running on the same system), and the rproxy just allows for host-based routing. In both cases your service is still exposed to the internet. Whether it's direct or indirect makes no significant difference.

What we should be advising people to do is "use a valid ssl certificate, ensure you don't use any application default passwords, use very good passwords where you do use them, and keep your services and servers up-to-date".
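To make that advice concrete, here's a rough sketch of the basics on a Debian-ish server (the domain and package choices are assumptions, adjust for your distro):

```shell
# Get a valid SSL certificate from Let's Encrypt (assumes certbot and nginx are installed)
sudo certbot --nginx -d example.com

# Keep the system patched automatically
sudo apt update && sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
```

And change any application default credentials immediately after install - that part can't be scripted generically.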

A firewall allowing port 443 in and an rproxy happily forwarding traffic to a vulnerable server is of no help.

atzanteol ,

Put your reverse proxy in a DMZ, so that only it is directly facing the intergoogles

So what? I can still access your application through the rproxy. You're not protecting the application by doing that.

Install a single wildcard cert and easily cover any subdomains you set up

This is a way to do it but not a necessary way to do it. The rproxy has not improved security here. It's just convenient to have a single SSL endpoint.

There’s even nginx configuration files out there that will block URL’s based on regex pattern matches for suspicious strings. All of this (probably a lot more I’m missing) adds some level of layered security.

If you do that, sure. But that's not the advice given in this forum is it? It's "install an rproxy!" as though that alone has done anything useful.

For the most part people in this forum seem to think that "direct access to my server" is unsafe, but if you simply put a second hop in the chain then you can sleep easily at night. And bonus points if that rproxy is a VPS or in a separate subnet!

The web browser doesn't care if the application is behind one, two or three rproxies. If I can still get to your application and guess your password or exploit a known vulnerability in your application then it's game over.

atzanteol ,

They may offer some sort of WAF (web application firewall) that inspects traffic for potentially malicious intent. Things like SQL injection. That's more than just a proxy though.

Otherwise, they really don't.

atzanteol ,

I'm positive that F5's marketing department knows more than me about security and has no ulterior motive in making you think you're more secure.

Snark aside, they may do some sort of WAF in addition to being a proxy. Just "adding a proxy" does very little.

atzanteol ,

... You're joking right?

atzanteol ,

No point talking to you then.

atzanteol ,

I like Subsonic. The interface is a bit dated but it supports multiple users and has excellent android apps.

atzanteol ,

IP was invented in the '70s. Sometimes older protocols that work are just fine.

atzanteol ,

I picked up a second hand monitor from a goodwill shop for like $7USD. It would be worth having a display of some sort for troubleshooting.

atzanteol , (edited )

The zotero docs recommend against synchronizing by just copying a folder as it can lead to corruption.

They recommend using webdav which nextcloud supports but syncthing doesn't.

So your workflow is definitely possible with nextcloud and is the preferred option.

atzanteol ,

What does this mean?

it's not just a copy. It syncs the folder.

It's remarkable to me that you recommended to somebody an option that is the exact opposite of what you know to be true.

atzanteol ,

Do you think webdav somehow dumps your database? No, it’s just a protocol to save your files on your webserver. It’s just a middleman.

Umn. It allows the application to do its own synchronization and diff resolution. It's why they recommend it.

Directory synchronization is a "best effort" to copy files back and forth without considering the application's needs. Copying database files while they're being written can be problematic for example.

Both Nextcloud and syncthing will synchronize a folder. And it will probably work if you aren't making lots of changes on both systems. But there is increased risk.

Yeah it’s my recommendation from my personal experience. Is that wrong?

Yes - absolutely. "I've been lucky so far" and recommending against what the product you're using says you should do is TERRIBLE advice.

The point is, syncthing is rock solid; I’ve never had any issue, be it with my zotero database or syncing files between my devices. If you’re a Nextcloud advocate or are against my personal opinion, so be it :).

Why are you getting defensive towards syncthing? It seems fine. It's the wrong tool for what you're using it for.

atzanteol , (edited )

Quick pros/cons from what I've read (correct me if I'm wrong - I've not used syncthing myself):

syncthing

Pros:

  • Easy to setup and use.
  • No infrastructure to maintain
  • Will sync directories between computers

Cons:

  • Uses third party resources to sync by default (can setup direct sync if needed/wanted however)
  • Only does directory synchronization

Nextcloud

Pros:

  • Can synchronize directories
  • Entire synchronization pipeline is under your control
  • Offers a lot more functionality if you want it (WebDAV, Calendars, public shares with "anyone with URL can view" permission, etc.)

Cons:

  • You need to setup/maintain your Nextcloud server
  • Can be fiddly to setup for some (wasn't for me - but lots of people do complain about it).
atzanteol ,

Thanks! Updated.

atzanteol ,

With that said, it is probably not worth it if she is a boomer. It would take a long time to get into a new workflow and it would affect her output. If she is used to adobe she should probably stick to it.

Yeah, she's basically dead right?

Is it safe to open a forgejo git ssh port in my router?

Hello all! Yesterday I started hosting forgejo, and in order to clone repos outside my home network through ssh://, I seem to need to open a port for it in my router. Is that safe to do? I can't use a vpn because I am sharing this with a friend. Here's a sample docker compose file:...

atzanteol ,

I have come to the conclusion that, regardless of whether it is safe, it doesn't make sense to increase the attack surface when I can just use https and tokens, so that's what I am going to do.

Are you already exposing HTTPS? Because if not you would still be "increasing your attack surface".

atzanteol ,

Opening ports on your router is never safe!

This is both true and highly misleading. Paranoia isn't a replacement for good security.

I would recommend something like wireguard, you still need to open a port on your router, but as long as they don't have your private key, they can't bruteforce it.

The same is true of ssh when using keys to authenticate.
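For reference, locking sshd down to key-only auth is a few lines in `sshd_config` (these are standard OpenSSH option names; the file location varies by distro):

```
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```

With password auth off there's nothing to brute-force, same as with a wireguard private key.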

atzanteol ,

Wait, so you have the full website exposed to the Internet and you're concerned about enabling ssh access? Because of the two ssh would likely be the more secure.

But either are probably "fine" so long as you have only trusted users using the site.

atzanteol ,

You’re right, but only if you are an experienced IT guy in an enterprise environment. Most users (myself included) on Lemmy do not have the necessary skills/hardware to properly configure and protect their networking system, that’s why I consider something like wireguard way more secure than opening an SSH port.

But it doesn't help to just tell newbs that "THAT'S INSECURE" without providing context. It 1) reinforces the idea that security "is a thing" rather than "something you do" and 2) doesn't give them any further reference for learning.

It's why some people in this community think that putting a nginx proxy in front of their webapp somehow increases their security posture. Because you don't have "direct access" to the webapp. It's ridiculous.

Sure, SSH key-based configuration also does a great job, but there is far more error-prone configuration involved in an SSH connection than in a wireguard tunnel.

In this case it's handled by forgejo.

atzanteol ,

You can get splitters for power cables.

atzanteol ,

Docker compose has a default "feature" of prefixing the names of things it creates with the name of the directory the yml is in. It could be that the name of your volume changed as a result of you moving the yml to a new folder. The old one should still be there.

docker volume ls

atzanteol , (edited )

Glad you sorted it!

It's very unexpected behavior for docker compose IMHO. When you say the volume is named "foo" it creates a volume named "directory_foo". Same with all the container names.

You do have some control over that by setting a project name. So you could re-use your old volumes with the new directory name.
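For example, you can pin the prefix with an explicit project name so the directory name no longer matters (the project name `myapp` here is made up):

```shell
# Run compose with a fixed project name instead of the directory name
docker compose -p myapp up -d

# Volumes and containers are now prefixed "myapp_" regardless of the folder
docker volume ls
```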

Or if you want to migrate from an old volume to a new one you can create a container with both volumes mounted and copy your data over by doing something like this:

docker run -it --rm -v old_volume:/old:ro -v new_volume:/new ubuntu:latest 
$ apt update && apt install -y rsync
$ rsync -rav --progress --delete /old/ /new/ # be *very* sure to have the order of these two correct!
$ exit

For the most part applications won't "delete and re-create" a data source if it finds one. The logic is "did I find a DB, if so then use it, else create a fresh one."

atzanteol ,

I have a similar distrust of volumes. I've been warming up to them lately but I still like the simple transparency of bind mounts. It's also very easy to backup a bind mount since it's just sitting there on the FS.

atzanteol ,

Why not just ask for help with the issues you're having?

Mirror all data on NAS A to NAS B

I'm duplicating my server hardware and moving the second set off site. I want to keep the data live since the whole system will be load balanced with my on site system. I've contemplated tools like syncthing to make a 1 to 1 copy of the data to NAS B but i know there has to be a better way. What have you used successfully?

atzanteol ,

Sounds like you want a clustered filesystem like gpfs, ceph or gluster.

atzanteol ,

If something you're running has a memory leak then it doesn't matter how much RAM you have.

You can try adding memory limits to your containers to see if that limits the splash damage. That's to say you would hopefully see only one container (the bad one) dying.
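A sketch of what that looks like with plain `docker run` (the name and limit are examples):

```shell
# Hard-cap the container at 512 MiB; if it leaks, the kernel OOM-kills
# only this container instead of taking down the whole host
docker run -d --memory=512m --name myapp myapp:latest
```

With compose you'd set the equivalent `mem_limit`/`deploy.resources.limits` on the suspect service.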

atzanteol ,

I love todo-txt! As a heavy cli user it's the quickest and easiest to use todo "system" I've found.

atzanteol ,

The reverse proxy is going to have a config that says "for hostname 'foo' I should forward traffic to foo.example.com:port".

If you setup the rproxy at home then ssh just needs to forward all port 443 traffic to the rproxy. It doesn't care about hostnames. The rproxy will then get a request with the hostname in the data and forward it to the appropriate target on behalf of the requester.

If you setup the rproxy at the vps then yes - you would need to forward different ports to each backend target. This is because the rproxy would need to direct traffic to each target individually. And if your target is "localhost" (because that's where the ssh endpoint is) then you would differentiate each backend by port.
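As a sketch, both cases run from the home machine (hostnames and port numbers here are made up):

```shell
# Case 1: rproxy at home. The VPS relays all 443 traffic back to the
# home rproxy (binding a low port on the VPS needs root/GatewayPorts).
ssh -R 443:localhost:443 user@vps

# Case 2: rproxy on the VPS. One remote-forwarded port per backend
# service, since each backend looks like "localhost:<port>" to the rproxy.
ssh -R 8081:localhost:8096 -R 8082:localhost:8080 user@vps
```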

atzanteol ,

You're not "broadcasting" anything. You're running a server.

Your browser is the thing sending your ip to every site you visit. And beyond simple geolocation data it's not that useful to anybody.

atzanteol ,

Nginx isn't for security it's to allow hostname-based proxying so that your single IP address can serve multiple backend services.

atzanteol ,

To provide a bit more detail then - you would setup your proxy with DNS entries "foo.example.com" as well as "bar.example.com" and whatever other sub-domains you want pointing to it. So your single IP address has multiple domain names.

Then your web browser connects to the proxy and makes a request to that server that looks like this:

GET / HTTP/1.1
Host: foo.example.com

nginx (or apache, or other reverse proxies) will then know that the request is specifically for "foo.example.com" even though they all point to the same computer. It then forwards the request to whatever you want on your own network and acts as a go-between between the browser and your service. This is often called something like host-based routing or virtual-hosts.

In this scenario the proxy is also the SSL endpoint and would be configured with HTTPS and a certificate that verifies that it is the source for foo.example.com, bar.example.com, etc.
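A minimal sketch of that host-based routing in nginx (the hostnames, cert paths and backend address are all made up):

```nginx
server {
    listen 443 ssl;
    server_name foo.example.com;

    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        # Forward matching requests to the internal service
        proxy_pass http://192.168.1.10:8080;
        proxy_set_header Host $host;
    }
}
```

A second `server` block with `server_name bar.example.com;` would route that hostname to a different backend on the same IP.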

atzanteol ,

That's basically it. Definitely "not for me" either but some people like GUIs on these things.

atzanteol ,

Or streaming to a device that doesn't support your encoding. Something like an android tv that isn't as flexible and may need on the fly transcoding. You can be careful to select a well supported encoding on the server if needed.

atzanteol ,

Wireguard doesn't obfuscate its traffic so non-standard ports may not help depending on how sophisticated the blocking is (they could recognize the protocol and block your traffic regardless of port).

atzanteol ,

Can you ssh out? You could setup a VPS somewhere and use remote port forwarding to tunnel back home.

ssh -R 80:localhost:80 user@vps # forward HTTP traffic from remote host to the local host

You can even run ssh over an ssh tunnel for inceptiony goodness.

ssh -R 2222:localhost:22 user@vps  # your home system
ssh -p 2222 homeuser@vps  # From your remote system
atzanteol ,

Interesting - I had not. It was ages ago I was doing something like what I posted (well before that project ever got started) and it worked "well enough" for what I was doing at the time. Usually I'd run a SOCKS proxy on that second SSH line (-D 4444) and just point my browser at localhost:4444 to route everything home (or use foxyproxy to only route some traffic home).

Looks like sshuttle may have better performance though and provide similar functionality.

atzanteol ,

SSH port forwarding is quite handy. You can have SSH setup a SOCKS proxy that you can use to send your browser traffic through the tunnel as well.
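The SOCKS setup is a one-liner (the port and hostname are arbitrary examples):

```shell
# Open a SOCKS5 proxy on localhost:1080 that tunnels traffic through home
ssh -D 1080 -N user@home.example.com
# Then point the browser's SOCKS proxy setting at localhost:1080
```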

How do I setup my own FOSS shopping website for my business?

Hello, I don't have much experience in self-hosting, I'm buying a ProtonVPN subscription and would like to port forward. I have like no experience in self-hosting but a good amount in Linux. I'm planning on using Proxmox VE with a YunoHost VM. I already have a domain name from Njalla. I'm setting up a website for my computer...

atzanteol ,

Be sure to familiarize yourself with PCI DSS compliance and how it does or does not apply to you and your payment gateway.

Managing servers in multiple locations

How do you manage multiple machines in different locations. The use case is something like this, i want self hosted different apps in different locations as redundancy. Something like i put one server in my house, one in my dad’s house, couple other in my siblings/friends house. So just in case say machine in my house down or...

atzanteol ,

This will be a good lesson in how difficult it is to setup servers with high availability.

I'd suggest getting redundancy working on your own network first before distributing it. How do you plan to handle storage? Will that be redundant as well?

Reaching service through domain from local network

I think i have a stupid question but i couldn't find answer to it so far :( When i want to reach a service that i host on my own server at home from the local network at home, is using a public domain effective way to do it or should i always use server's IP when configuring something inside LAN? Is my traffic routed through the...

atzanteol ,

Some routers will allow you to reach the external IP from inside your network, some may not. Mine currently does which is great (I can access www.example.com which resolves to my external IP address and it is forwarded as though I were outside my network).

What I do is register public domains on "example.com" and I have a sub-domain for internal IPs (home.example.com). My local DNS server responds for home.example.com and forwards for the rest.

This makes it clear to me which address I will be getting. And since all my public web traffic goes through a reverse-proxy it lets me also have a name for the server itself. e.g. "music.example.com" may go through the reverse proxy for my music server, but "music.home.example.com" resolves to my music server directly. So I can use the latter for ssh and other things.
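With dnsmasq as the local DNS server, that split is roughly this (domains and addresses are examples):

```
# Answer locally for the internal sub-domain...
address=/music.home.example.com/192.168.1.20
# ...and forward everything else to an upstream resolver
server=1.1.1.1
```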

atzanteol , (edited )

That seems like a terrible idea.

Why not just assign multiple IPs to eth0 instead? Or create a virtual interface?
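For example, a second address on the same interface is one command (address and interface name assumed):

```shell
# Add a second IPv4 address to eth0 (needs root; not persistent across reboots)
sudo ip addr add 192.168.1.51/24 dev eth0
```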

atzanteol ,

Ahh, interesting.

atzanteol ,

What do you think "big corps" are doing with your IP address?
