
atzanteol

@atzanteol@sh.itjust.works


atzanteol ,

With that said, it is probably not worth it if she is a boomer. It would take a long time for her to get into a new workflow, and it would affect her output. If she is used to Adobe she should probably stick with it.

Yeah, she's basically dead right?

Is it safe to open a forgejo git ssh port in my router?

Hello all! Yesterday I started hosting forgejo, and in order to clone repos outside my home network through ssh://, I seem to need to open a port for it in my router. Is that safe to do? I can't use a vpn because I am sharing this with a friend. Here's a sample docker compose file:...

atzanteol ,

I have come to the conclusion that, regardless of whether it is safe, it doesn't make sense to increase the attack surface when I can just use https and tokens, so that's what I am going to do.

Are you already exposing HTTPS? Because if not you would still be "increasing your attack surface".

atzanteol ,

Opening ports on your router is never safe!

This is both true and highly misleading. Paranoia isn't a replacement for good security.

I would recommend something like wireguard. You still need to open a port on your router, but as long as an attacker doesn't have your private key, they can't brute-force their way in.

The same is true of ssh when using keys to authenticate.
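As a sketch, this is roughly the sshd_config that gets you there (standard OpenSSH options, nothing forgejo-specific):

# /etc/ssh/sshd_config - allow key auth only
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password

With password and keyboard-interactive auth off, the exposed port can only be "brute-forced" by guessing a private key, which is the same guarantee a wireguard endpoint gives you.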

atzanteol ,

Wait, so you have the full website exposed to the Internet and you're concerned about enabling ssh access? Because of the two ssh would likely be the more secure.

But either are probably "fine" so long as you have only trusted users using the site.

atzanteol ,

You’re right, but only if you are an experienced IT guy in an enterprise environment. Most users (myself included) on Lemmy do not have the necessary skills/hardware to properly configure and protect their networking systems, which is why I consider something like wireguard way more secure than opening an SSH port.

But it doesn't help to just tell newbs that "THAT'S INSECURE" without providing context. It 1) reinforces the idea that security "is a thing" rather than "something you do" and 2) doesn't give them any further reference for learning.

It's why some people in this community think that putting a nginx proxy in front of their webapp somehow increases their security posture. Because you don't have "direct access" to the webapp. It's ridiculous.

Sure, SSH key-based authentication also does a great job, but an SSH setup has far more error-prone configuration than a wireguard tunnel.

In this case it's handled by forgejo.

atzanteol ,

You can get splitters for power cables.

atzanteol ,

Docker compose has a default "feature" of prefixing the names of things it creates with the name of the directory the yml is in. It could be that the name of your volume changed as a result of you moving the yml to a new folder. The old one should still be there.

docker volume ls

atzanteol , (edited)

Glad you sorted it!

It's very unexpected behavior for docker compose IMHO. When you say the volume is named "foo" it creates a volume named "directory_foo". Same with all the container names.

You do have some control over that by setting a project name. So you could re-use your old volumes with the new directory name.
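For example, a compose sketch (the names are placeholders) pinning the project name with the top-level name: key:

name: myproject
services:
  app:
    image: nginx
volumes:
  data: {}

That volume gets created as "myproject_data" regardless of which directory the yml lives in. Passing "docker compose -p myproject" on the command line does the same thing.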

Or if you want to migrate from an old volume to a new one you can create a container with both volumes mounted and copy your data over by doing something like this:

docker run -it --rm -v old_volume:/old:ro -v new_volume:/new ubuntu:latest
$ apt update && apt install -y rsync  # the "$" lines run inside the container
$ rsync -rav --progress --delete /old/ /new/ # be *very* sure to have the order of these two correct!
$ exit

For the most part applications won't "delete and re-create" a data source if they find one. The logic is "if I find a DB, use it; otherwise create a fresh one."

atzanteol ,

I have a similar distrust of volumes. I've been warming up to them lately but I still like the simple transparency of bind mounts. It's also very easy to backup a bind mount since it's just sitting there on the FS.

atzanteol ,

Why not just ask for help with the issues you're having?

Mirror all data on NAS A to NAS B

I'm duplicating my server hardware and moving the second set off site. I want to keep the data live since the whole system will be load balanced with my on site system. I've contemplated tools like syncthing to make a 1 to 1 copy of the data to NAS B but i know there has to be a better way. What have you used successfully?

atzanteol ,

Sounds like you want a clustered filesystem like gpfs, ceph or gluster.

atzanteol ,

If something you're running has a memory leak then it doesn't matter how much RAM you have.

You can try adding memory limits to your containers to see if that limits the splash damage. That's to say you would hopefully see only one container (the bad one) dying.
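A compose sketch of such a limit (the service name and value are placeholders):

services:
  app:
    image: myapp:latest
    mem_limit: 512m

When that container exceeds 512m the kernel's OOM killer terminates it alone, rather than picking off random processes elsewhere on the host.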

atzanteol ,

I love todo-txt! As a heavy cli user it's the quickest and easiest to use todo "system" I've found.

atzanteol ,

The reverse proxy is going to have a config that says "for hostname 'foo' I should forward traffic to foo.example.com:port".

If you setup the rproxy at home then ssh just needs to forward all port 443 traffic to the rproxy. It doesn't care about hostnames. The rproxy will then get a request with the hostname in the data and forward it to the appropriate target on behalf of the requester.

If you setup the rproxy at the vps then yes - you would need to forward different ports to each backend target. This is because the rproxy would need to direct traffic to each target individually. And if your target is "localhost" (because that's where the ssh endpoint is) then you would differentiate each backend by port.
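Sketching that with an ssh_config on the home machine (host names and ports are made up):

Host vps-tunnel
    HostName vps.example.com
    User tunneluser
    RemoteForward 8081 git.home.lan:443
    RemoteForward 8082 media.home.lan:443

The rproxy on the VPS would then send "git" traffic to localhost:8081 and "media" traffic to localhost:8082.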

atzanteol ,

You're not "broadcasting" anything. You're running a server.

Your browser is the thing sending your ip to every site you visit. And beyond simple geolocation data it's not that useful to anybody.

atzanteol ,

Nginx isn't for security; it's to allow hostname-based proxying so that your single IP address can serve multiple backend services.

atzanteol ,

To provide a bit more detail then - you would setup your proxy with DNS entries "foo.example.com" as well as "bar.example.com" and whatever other sub-domains you want pointing to it. So your single IP address has multiple domain names.

Then your web browser connects to the proxy and makes a request to that server that looks like this:

GET / HTTP/1.1
Host: foo.example.com

nginx (or apache, or other reverse proxies) will then know that the request is specifically for "foo.example.com" even though all the names point to the same computer. It then forwards the request to whatever you want on your own network and acts as a go-between for the browser and your service. This is often called something like host-based routing or virtual hosts.
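A minimal nginx sketch of one such virtual host (the names and backend address are placeholders, and the certificate directives are omitted):

server {
    listen 443 ssl;
    server_name foo.example.com;

    location / {
        proxy_pass http://192.168.1.10:8080;
        proxy_set_header Host $host;
    }
}

A second server block with "server_name bar.example.com" can forward to a completely different backend from the same IP and port.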

In this scenario the proxy is also the SSL endpoint and would be configured with HTTPS and a certificate that verifies that it is the source for foo.example.com, bar.example.com, etc.

atzanteol ,

That's basically it. Definitely "not for me" either but some people like GUIs on these things.

atzanteol ,

Or streaming to a device that doesn't support your encoding. Something like an Android TV that isn't as flexible and may need on-the-fly transcoding. You can be careful to select a well-supported encoding on the server if needed.

atzanteol ,

Wireguard doesn't obfuscate its traffic so non-standard ports may not help depending on how sophisticated the blocking is (they could recognize the protocol and block your traffic regardless of port).

atzanteol ,

Can you ssh out? You could setup a VPS somewhere and use remote port forwarding to tunnel back home.

ssh -R 80:localhost:80 user@vps # forward HTTP traffic from remote host to the local host

You can even run ssh over an ssh tunnel for inceptiony goodness.

ssh -R 2222:localhost:22 user@vps  # your home system
ssh -p 2222 homeuser@vps  # From your remote system
atzanteol ,

Interesting - I had not. It was ages ago I was doing something like what I posted (well before that project ever got started) and it worked "well enough" for what I was doing at the time. Usually I'd run a SOCKS proxy on that second SSH line (-D 4444) and just point my browser at localhost:4444 to route everything home (or use foxyproxy to only route some traffic home).

Looks like sshuttle may have better performance though and provide similar functionality.

atzanteol ,

SSH port forwarding is quite handy. You can have SSH setup a SOCKS proxy that you can use to send your browser traffic through the tunnel as well.

How do I setup my own FOSS shopping website for my business?

Hello, I don't have much experience in self-hosting, I'm buying a ProtonVPN subscription and would like to port forward. I have like no experience in self-hosting but a good amount in Linux. I'm planning on using Proxmox VE with a YunoHost VM. I already have a domain name from Njalla. I'm setting up a website for my computer...

atzanteol ,

Be sure to familiarize yourself with PCI DSS compliance and how it does or does not apply to you and your payment gateway.

Managing servers in multiple locations

How do you manage multiple machines in different locations. The use case is something like this, i want self hosted different apps in different locations as redundancy. Something like i put one server in my house, one in my dad’s house, couple other in my siblings/friends house. So just in case say machine in my house down or...

atzanteol ,

This will be a good lesson in how difficult it is to setup servers with high availability.

I'd suggest getting redundancy working on your own network first before distributing it. How do you plan to handle storage? Will that be redundant as well?

Reaching service through domain from local network

I think i have a stupid question but i couldn't find answer to it so far :( When i want to reach a service that i host on my own server at home from the local network at home, is using a public domain effective way to do it or should i always use server's IP when configuring something inside LAN? Is my traffic routed through the...

atzanteol ,

Some routers will allow you to reach the external IP from inside your network, some may not. Mine currently does which is great (I can access www.example.com which resolves to my external IP address and it is forwarded as though I were outside my network).

What I do is register public domains on "example.com" and I have a sub-domain for internal IPs (home.example.com). My local DNS server responds for home.example.com and forwards for the rest.

This makes it clear to me which address I will be getting. And since all my public web traffic goes through a reverse-proxy it lets me also have a name for the server itself. e.g. "music.example.com" may go through the reverse proxy for my music server, but "music.home.example.com" resolves to my music server directly. So I can use the latter for ssh and other things.
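If the local DNS server happens to be dnsmasq, the split looks something like this (a sketch; the address is a placeholder):

# answer locally for the internal name...
address=/music.home.example.com/192.168.1.20
# ...and forward everything else to an upstream resolver
server=1.1.1.1

Anything not matched by an address= line gets forwarded upstream, so "music.example.com" still resolves to the public IP and goes through the reverse proxy.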

atzanteol , (edited)

That seems like a terrible idea.

Why not just assign multiple IPs to eth0 instead? Or create a virtual interface?

atzanteol ,

Ahh, interesting.

atzanteol ,

What do you think "big corps" are doing with your IP address?

atzanteol , (edited)

I'm not familiar with MediaMonkey so this may not be an option but...

I've used Subsonic for a number of years as my streaming server. I don't use tools to manage my files but one of the things I really like about Subsonic is that it will present the local file system structure to the clients (rather than only relying on ID3 tags). So if I create a directory called "1990s" it will show up in the Subsonic hierarchy (eventually - it scans periodically for new files).

I'm assuming you could use MediaMonkey to manage the files on your NAS over CIFS? Then Subsonic could just read the filesystem over NFS as well and serve what you have setup.

Subsonic clients offer the option to cache files or stream as well which is great for traveling.

atzanteol ,

And the transformation is complete. This is my favorite thing in the world now also.

I dockerized my Nextcloud (github.com)

For years I've been running my Nextcloud on bare metal. Now I finally decided to switch it to docker. I looked into Nextcloud All-In-One and it seems easy to deploy but it lacks configuration options like multiple domain names and putting it behind a traefik reverse proxy is kind of a pain to set up....

atzanteol ,

Docker runs on bare metal

atzanteol ,

Docker is bare metal

atzanteol ,

The second sentence implies otherwise.

atzanteol ,

Which is running on bare metal too.

atzanteol ,

Just non-container if you need to distinguish?

atzanteol ,

No. The phrase means that you're not running in a virtual machine.

atzanteol ,

Either way it's pretty stupid to use it in reference to containers.

atzanteol ,

No, it's confusing. Because some people do use VMs. So it makes it far less clear about what a person's setup is.

An application running in a container runs exactly the same as a non-container application: it uses the same kernel, and it all runs directly on the CPU. There is no metal/non-metal distinction to make. People just say it because it "sounds cool". And there are a lot of people in this community who don't understand what containers are, so it further muddies the water.

atzanteol ,

Put them on different subnets and use the gateway IP address maybe?

atzanteol ,

Look for things like restic, borg and duplicacy. They do differential backups and work well with slow remote storage.

I use backblaze with duplicacy and it works fine. Initial backup can take a while (nearly a week for me) but subsequent backups are fine. It's not fast per se, but sufficient for emergency recovery should I need it.

atzanteol ,

I would keep octopi off the Internet (local network only). There's too much risk that if somebody did get access they could heat your hot-end up to 300°C and just leave it there or something. Set up a vpn if you want remote access to it.

atzanteol ,

I'm using Nginx Proxy Manager, but lately I started seeing it moving slower and slower

I guarantee you that nginx performs adequately for self hosting. If something is running slower I'd look elsewhere first.

atzanteol ,

i have a mixed set of containers (a few, not too many) and bare-metal services

Containers run on bare metal. Or are you running them in a vm?
