
atzanteol

@atzanteol@sh.itjust.works


atzanteol ,

"pinpoint" is a bit hyperbolic. Country, state and maybe city can be pretty good, at least in the US.

It's fine if that's important to you to hide, but entirely unnecessary for most people.

atzanteol , (edited )

"If you do everything perfectly you won't have security problems."

But people make mistakes. Human error and misconfigured servers are the cause of many security flaws. Especially people asking "what should I provide for DNS on this domain registration form?"

DNS services are dirt cheap. Hosting your own requires some knowledge to run securely, and you need a static IP address, which many people don't have.

Best not to do it yourself.

atzanteol ,

When you buy a domain you get the "right" to that domain, and nothing else. You then need to provide DNS servers (either your own, not recommended, or through a service) which will translate those names to IP addresses for you.

At that point any and all subdomains under the one you registered are in your control. Any requests for names under that domain will be directed to your DNS servers.

Sometimes the registration and domain management are provided by the same companies.
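
For example, you can see the whole chain with dig (using example.com as a stand-in for your domain):

dig NS example.com +short        # which name servers the registry delegates to
dig A www.example.com +short     # a record answered by those name servers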

atzanteol ,

I would prefer something that functions both online and offline. A Home Assistant dashboard will be useless while you, for example, reboot the server. You would want something that stores all it needs locally and gets pushes from a server. Though I don't know what that would be...

As the importance of a system increases your attention to stability must also increase.

Edit: you probably want something that is locked down as well so that the user doesn't accidentally click to a different screen or exit the app.

atzanteol ,

It's a very good question. There is a huge difference between hosting a Jellyfin server and a server being used to provide medical care. If Jellyfin is down people are annoyed they can't watch videos. If this server is down somebody might miss their meds.

atzanteol ,

Not the advice you're looking for, but I wouldn't do this. I have a lot of experience with servers and software development and I wouldn't do it. The amount of effort to build and support a robust system like this is far bigger than running a Jellyfin server for friends.

The client needs to be super user friendly and robust. It should work even when the server is unavailable. And you'll need to be on the hook for support. The server would need high availability as well for people adding reminders and schedules. Those are expensive requirements both in terms of money and time. Redundancy isn't cheap.

If these things aren't true the users won't trust the system and/or won't use it. Or a dementia patient could become confused. Maybe they skip a dose or take it twice because a reminder wasn't shown? Think through your failure modes carefully.

atzanteol ,

So long as you understand and accept the risks - that's your call. Glad you're thinking things through.

That calendarclock looks promising actually. One of the things I was going to suggest was some sort of "client monitoring" to ensure that the screen has been updated so that it doesn't display old events which would be very confusing to somebody struggling with a sense of time. And their admin app seems to provide you with a "last updated" time which you could check to see if it's working. Hopefully there could be some alarms/notifications if it hasn't updated recently?

Either way - I hope the person you intend this for is doing well. As well as those caregiving.

atzanteol , (edited )

You're going to get a lot of bad or basic advice with no reasoning (use a firewall) in here... And as you surmised this is a very big topic and you haven't provided a lot of context about what you intend to do. I don't have any specific links, but I do have some advice for you:

First - keep in mind that security is a process, not a thing. 90% of your security will come from being diligent about applying patches, keeping software up-to-date, and paying attention to security news. If you're not willing to apply regular patches then don't expose anything to the internet. There are automated systems that simply scan for known vulnerabilities on the internet. Self-hosting is NOT "set it and forget it". Figuring out ways to automate this helps make it easy to do and thus more likely to be done. Check out things like Ansible for that.
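
As a low-effort starting point (a sketch assuming a Debian/Ubuntu host), unattended-upgrades will at least apply security patches automatically:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades    # enables the periodic upgrade job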

Second is good authentication hygiene. Choose good passwords - better yet, long passphrases. Or enable MFA and other additional protections. And BE SURE TO CHANGE ANY DEFAULT PASSWORDS for software you set up. Often there is some default 'admin' user.

Beyond that your approach is "security in depth" - you take a layered approach to security, understanding what your exposure is and what will happen should one of your services / systems be hacked.

Examples of security in depth:

  • Proper firewalling will ensure that you don't accidentally expose services you don't intend to expose (adds a layer of protection). Sometimes there are services running that you didn't expect.
  • Use things like "fail2ban" that will add IP addresses to temporary blocklists if they start trying user/password combinations that don't work. This could stop a bot from finding that "admin/password" user on your Nextcloud server that you haven't changed yet... (see the sketch after this list).
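
A minimal sketch of getting fail2ban going, assuming a Debian/Ubuntu host (Debian's package enables an sshd jail out of the box):

sudo apt install fail2ban
sudo systemctl enable --now fail2ban
sudo fail2ban-client status sshd    # shows currently banned IPs and failure counts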

Minimize your attack surface area. If it doesn't need to be exposed to the internet then don't expose it. VPNs can help with the "I want to connect to my home server while I'm away" problem and are easy to set up (Tailscale and WireGuard being two popular options). If your service needs to be "public" to the internet, understand that this is a bigger step and that everything here should be taken more seriously.

Minimize your exposure. Think through the question of "if a malicious person got this password what would happen and how would I handle it?" Would they have access to files from other services running on the same server (having separation between services can help with this)? Would they have access to unencrypted files with sensitive data? It's all theoretical, until it isn't...

If you do expose services to the internet monitor your logs to see if there is anything "unusual" happening. Be prepared to see lots of bots attempting to hack services. It may be scary at first, but relatively harmless if you've followed the above recommendations. "Failed logins" by the thousands are fine. fail2ban can help cut that down a bit though.
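
A quick way to eyeball that (a sketch - the SSH unit is named "ssh" on Debian/Ubuntu and "sshd" on many other distros):

journalctl -u ssh --since today | grep "Failed password" | wc -l    # count today's failed SSH logins
sudo lastb | head    # most recent failed login attempts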

Overall I'd say start small and start "internal" (nothing exposed to the internet). Get through a few update/upgrade cycles to see how things go. And ask questions! Especially about any specific services and how to deploy them securely. Some are more risky than others.

atzanteol ,

Port Forwarding – as someone mentioned already, port forwarding raw internet traffic to a server is probably a bad idea based on the information given. Especially since it isn’t strictly necessary.

I don't mean to take issue with you specifically, but I see this stated in this community a lot.

For newbies I can agree with the sentiment "generally" - but this community seems to have gotten into some weird cargo-cult style thinking about this. "Port forwarding" is not an unconditional bad idea. It's a bad idea to expose a service if you haven't taken any security precautions, or on a system that is not being maintained. But exposing a WireGuard service on a system which you keep up-to-date is not inherently a bad thing. Bonus points if the VPN is all it does and it has restricted local accounts.

In fact, of all the services homegamers talk about running in their homelab, WireGuard is one of the safest to expose to the internet. It has no "well-known port" so it's difficult to scan for. It uses UDP, which is also difficult to scan for. It has great community support so there will be security patches. It's very difficult to configure in an insecure way (I can't even think of how one could). And it requires public/private key auth rather than allowing user-generated passwords. It doesn't even let you pick insecure encryption algorithms like other VPNs do. It's a great choice for a home VPN.
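
To give a sense of how little there is to configure, the whole key-based workflow is roughly this (a sketch assuming a config at /etc/wireguard/wg0.conf):

wg genkey | tee privatekey | wg pubkey > publickey    # generate a keypair - no passwords anywhere
sudo wg-quick up wg0     # bring the tunnel up from /etc/wireguard/wg0.conf
sudo wg show             # list peers and last-handshake times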

atzanteol ,

Happy to help.

Going off of what you said, I am going to take what I currently have, scale it back, and attempt to get more separation between services.

Containerization and virtualization can help with the separation of services - especially in an environment where you can't throw hardware at the problem. Containers like Docker/podman and LXD/LXC aren't "perfect" (isolation-wise) but do provide a layer of isolation between things that run in the container and the host (as well as other services). A compromised service would still need to find a way out of the container (adding a layer of protection). But they still all share the same physical resources and kernel, so any vulnerability in the kernel potentially affects everything on the host (keep your systems up-to-date). A full VM like VirtualBox or VMWare will provide greater separation at the cost of using more resources.
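
If you're using Docker you can also tighten the defaults a bit. A rough sketch (the service and image names are placeholders): read-only root filesystem, no Linux capabilities, resource caps, and a non-root user:

docker run -d --name someservice \
  --read-only --cap-drop ALL \
  --memory 512m --cpus 1 \
  --user 1000:1000 \
  someimage:latest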

Docker's isolation is generally "good enough" for the most part though. Your attackers are more likely to be botnets scanning for low-hanging fruit (poorly configured services, known exploits, default admin passwords, etc.) than state-funded hackers running targeted attacks anyway.

atzanteol ,

Glad you didn't take my comment as being "aggressive" since it certainly wasn't meant to be. :-)

WireGuard is a game-changer to me. Any other VPN I've tried to set up forces the user to make too many decisions that require a fair amount of knowledge. Just by making good decisions on your behalf and simplifying the configuration they've done a great job of helping to secure the internet. An often overlooked piece of security is that "making it easier to do something the right way is good for security."

atzanteol ,

I think mail forwarders are still a good way to go. It's hard to predict how Internet providers will react to email running in their networks.

These days I have an EC2 instance at AWS for my mail server and use SES for outbound mail. I'm thinking of moving "receiving" back into my network with a simple mail forwarding service but keeping SES for outbound. They handle all the SPF and DKIM things and ensure their networks aren't on blacklists.

atzanteol ,

It's spam they're concerned about. Spam email is kinda "big business" and one way they thrive is by using bots to scan for poorly-configured or vulnerable systems to hack and install an app that will let them send email from your system. Compromising hundreds or thousands of individual machines makes it hard for mail providers to block them individually. It also uses a ton of bandwidth on internet service providers' networks.

So some time ago service providers started to simply block port 25 (used to send email) on their networks except to certain services. I think they've backed off a bit now but inbound port 25 can often be blocked still. It may even be against their TOS in some cases.

atzanteol ,

It seems weirdly difficult to find a good solution to attach HDDs to my pi.

Being a NAS is not at all what a Pi is made for, so it's not surprising.

atzanteol ,

not a high performance NAS

That is an understatement.

atzanteol ,

You not caring if it's underpowered doesn't mean it's not underpowered.

atzanteol ,

As a general rule: One system, one service. That system can be metal, vm, or container. Keeping things isolated makes maintenance much easier. Though sometimes it makes sense to break the rules. Just do so for the right reasons and not out of laziness.

Your file server should be its own hardware. Don't make that system do anything else. Keeping it simple means it will be reliable.

Proxmox is great for managing VMs. You could start with one server, and add more to a cluster as needed.

It's easy enough to set up WireGuard for roaming systems that you should. Make a VM for your VPN endpoint and off you go.

I'm a big fan of automation. Look into ansible and terraform. At least consider ansible for updating all your systems easily - that way you're more likely to do it often.
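
For example, once you have an inventory file even an ad-hoc one-liner makes "patch everything" painless (a sketch assuming Debian-based hosts):

ansible all -b -m ansible.builtin.apt -a "update_cache=yes upgrade=dist"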

atzanteol ,

Why would you virtualize a file server? You want direct access to disks for raid and raid-like things.

atzanteol ,

And Virtualized is good because we don't want to "waste" a whole machine for just a file server.

Hmm. I strongly disagree. You've created a new dependency now for the fileserver to come up - a system that many other services will also depend on and which will likely contain backups.

A dedicated system is less likely to fail as it won't be sensitive to a bad proxmox upgrade or some other VM exhausting system resources on the host.

You can get cheap hardware if cost is an issue.

atzanteol ,

That system can be metal, vm, or container

Emphasis mine.

atzanteol ,

I also go on to say there are times when that rule should be broken. "It's more like a guideline than a rule". 🙂

atzanteol ,

Sure, I’m not saying its optimal,

Question title: Starting over and doing it "right"

But my point is that you gain very little for quite the investment by breaking out the fileserver to dedicated hardware.

You gain stability - which is the single best thing you can get from a file server. It's not a glamorous job - but it's an important one.

Most people doing selfhosted have either one or more SBCs and if you have more than one SBC then yeah the fileserver should be dedicated.

When somebody new to hosting services asks what they should do we should provide them with best practices rather than "you can run this on the microcontroller in your toaster" advice. Possible != good.

The other common thing is having an old gaming/office PC converted to server use and in that case Proxmox the whole server and run NAS as a VM makes the most sense instead of buying more hardware for that very little gain.

Running your NAS on a VM on Proxmox only makes good sense if you're just cheap. I've been there! I get it. But I wouldn't tell anyone what I was doing was a good idea and certainly wouldn't recommend it to others. It's a hack. Own it.

You can find old servers on eBay for ~$200. Here's the one I use for <$200. It's been running for more than a decade without trouble. Even when I mess up other systems it's always available. When I changed to Proxmox from how I previously managed some other systems it was already available and running. When an upgrade on my laptop goes wrong the backups are available on my fileserver. When a raspberry pi SD card dies the backup images are available on the fileserver. It. Just. Works.

atzanteol ,

Circling back to the VM thing though, even if I had dedicated hardware, if I would’ve used an old server for a NAS I still would’ve virtualized it with proxmox if for no other reason than that gives me mobility and an easier path to restoration if the hardware, like the motherboard, breaks.

I can see the allure. I've just had a lot more experiences where "some idiot" (cough) made changes at 2AM to an unrelated service that caused the entire fileserver and anything else on that system to become unavailable... Happens more often than a hardware error in my experience. :-)

Do you have two proxmox servers each with enough disk space to store everything on the fileserver? And I assume off-site backups to copy back from?

If my T110 exploded I'd just buy a new machine, restore from off-site, and re-provision with Ansible scripts. But I have ~8TB of storage on my server so just copying that to a second system is not an option. I'm not going to have a system with a spare 10TB of disk just sitting around.

atzanteol ,

Ah - I question whether that would really be a 30 or even 60 min operation. But I see what you mean.

One thing I think homegamers overlook is ansible. If you script your setups you can destroy/rebuild them pretty quickly. Both physical systems and VMs. The only manual part is installing Debian, which is... pretty easy if we're talking about disaster recovery.

Also - you can still buy computers in stores. :-)

atzanteol , (edited )

This is way overcomplicated.

Internet -> router/firewall -> your network with all devices

No DMZ needed or wanted.

You will want a DHCP server, which will likely be the router/firewall. It will tell all your internal systems to use it as a "gateway" for Internet traffic. The router then allows outbound for everybody and does NAT - basically it makes requests on those systems' behalf and sends the results back. If you want external access to a system you configure port-forwarding on the router (again it acts as the middleman between external and internal systems).

Edited to add: I love that you provided a diagram though! Makes it much easier to discuss.

atzanteol ,

Yeah - basic home-networking is typically pretty straight-forward. You'll want to figure out your basic services (DHCP, DNS, and routing) but after that it's pretty simple. OpenWRT should handle the DHCP and routing. I'm not sure about DNS though.

DHCP will tell systems "here is your IP, here is the CIDR of the network you are on, here is the router that handles traffic for things NOT on that network (e.g. the internet), and here are the DNS servers you should use for name resolution."

With DHCP you can also hand out "static leases" to give systems reliable IP addresses based on their MAC addresses. Then you can set up a DNS server that does internal name resolution if you want to be able to reference systems by name. This DNS server doesn't need to be publicly available (and indeed should not be).
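
For example, on OpenWRT a static lease is just a few uci commands (a sketch - the name, MAC, and IP here are placeholders):

uci add dhcp host
uci set dhcp.@host[-1].name='nas'
uci set dhcp.@host[-1].mac='AA:BB:CC:DD:EE:FF'
uci set dhcp.@host[-1].ip='192.168.1.10'
uci commit dhcp
/etc/init.d/dnsmasq restart    # dnsmasq handles DHCP/DNS on OpenWRT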

The Firewall is typically only for things coming into your network from the internet. You can restrict outbound traffic as well if you want but that's less common. By default things on the internet will NOT be able to get to your internal systems because of NAT. So to allow things "out there" to access a service running on an internal system you'll need to do port forwarding on your firewall. This will a) open a port on the internet side and b) send all traffic to that port to a port on an internal system. The router will handle all of the network-to-network and traffic handling stuff.

XPipe status update: New scripting system, advanced SSH support, performance improvements, and many bug fixes (sh.itjust.works)

I'm proud to share a status update of XPipe, a shell connection hub and remote file manager that allows you to access your entire server infrastructure from your local machine. It works on top of your installed command-line programs and does not require any setup on your remote systems. So if you normally use CLI tools like ssh,...

atzanteol ,

Honest question - why would you elevate privs on the bastion?

You can automatically use a bastion host with an SSH config entry as well in case you didn't know:

Host target.example.com
  User  username
  ProxyJump username@bastion.example.com

Then you just ssh target.example.com. Port forwarding is sent through as well.

atzanteol ,

A fileserver that does something else is not a fileserver. Squeezing lots of services into a single machine makes it harder to maintain and keep stable.

If you do want to do that it helps to run those other services in docker or some other container to isolate them from the host.

atzanteol ,

Just the "about" page has issues? The rest is fine? No messages on the console where you ran python?

atzanteol ,

Flask apps are usually run from gunicorn or something. What exactly did you modify on those shell scripts?
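
Something like this is the usual way to run one (a sketch assuming the Flask object is named "app" in app.py):

gunicorn --bind 127.0.0.1:8000 --workers 2 app:app    # put a reverse proxy in front for external access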

atzanteol ,

My journey has been similar yet distinctly different. I went from "put it all on one server" to running servers in AWS. But the cost was preventing me from doing much more than running a couple of compute nodes. I hated the feeling of "I could set up a server to do X but it's gonna cost another $x/month". So I've been shifting back to my own servers.

I do like devops and automation though. Automation is brilliant for creating easily reproducible and stable environments - especially for things you don't touch very often. Proxmox was what let me start moving back "on prem" as it were. There are "good enough" terraform plugins for proxmox that let me provision standardized VMs from a centralized code-base. And I've got ansible handling most of the setup/configuration beyond that. I've now got like 20 VMs whereas before I only had 2 EC2 nodes due to cost. So much happier...

atzanteol ,

The plugins are for terraform - not proxmox. There are two that I've found that have varying levels of "working":

The Telmate one seems more popular but the bpg one worked better for me (I forget what wasn't working with the other one). They use the proxmox API to automate creating VMs for me.

atzanteol ,

Most people seem to just want to use RPIs as a very slow Linux server for some reason...

Use it to play around with hardware integration with the GPIO pins. Get a sensor HAT and start recording temperatures, write some code that turns on/off an LED, build a robot controller, etc. There are lots of kits and documentation on the various things you can do!
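
For example, a quick LED sanity check right from the shell (a sketch assuming an LED wired to GPIO 17 and the raspi-gpio utility that ships with Raspberry Pi OS):

raspi-gpio set 17 op    # configure GPIO 17 as an output
raspi-gpio set 17 dh    # drive it high - LED on
raspi-gpio set 17 dl    # drive it low - LED off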

atzanteol ,

It is! Especially if you want to write the code yourself. It's an interesting design problem if you start to consider cases where the Pi may be offline (mobile on a battery in my case). Do you lose that data? Store and forward? In memory or to a local data store? It's a fun rainy-weekend project.

Word of caution - HATs can be rather inaccurate in their temperature monitoring. The Pi gets warm. I had done my work using a PTC thermistor that was distanced from the Pi itself. I've got a friend using a HAT and it's been very off (up to 10C above ambient!). A Pi Zero may not give off as much heat as, say, a Pi 4 though. YMMV.

atzanteol ,

That's one of the nice things about them.

You can write code that has access to more resources. I had an RPi once that showed code build status on an LED strip (red = failed, green = passed). It was a Java program that connected to AWS SQS for build event notifications. A microcontroller would be much harder to do that on.

atzanteol ,

I believe they used heritrix at one point. The important bit is that there is a special archive format that they use which is a standard. There are several tools that support it (both capturing to it and viewing it) - it allows for capturing a website in a 'working' condition with history or something. I'm a bit fuzzy on it since it's been some time since I looked into it.

atzanteol ,

I've had my own domain since the early 2000s and have never needed to run a public DNS server. Couldn't, in fact, due to not having a static IP address. Sure, I run one internally but it's complicated enough to set up "properly" that I leave the external resolution to the big players. I doubt anyone's home setup will be more reliable than Route53...

Commercial DNS services are cheap as chips and make it easy to add records. You can often automate it with terraform or ansible as well. I can't think of any good reason not to use one.

atzanteol , (edited )

Use a public dns provider. Cloudflare, route53, dyndns (are they still around?), etc. Cheap, reliable, no worries about joining a ddos by accident. Some services are better left to experts until you really know what you're doing.

And if you do really know what you're doing you'll use a dns provider rather than host your own.

atzanteol ,

Host your own private DNS - yes, knock yourself out. I highly recommend it.

Public DNS? No - don't do that.

There are two services homegamers should be extra cautious of and should likely leave alone - DNS and email. These protocols are rife with historic issues that affect everybody, not just the hosting system. A poorly configured DNS server can participate in a DDOS attack without being "hacked" specifically. A poorly configured mail server can be responsible for sending millions of spam emails.

For a homegamer you probably only need a single public DNS record anyway (with multiple CNAMEs if you want to do host-based routing on a load balancer). You take on a lot of risk for almost zero benefit.

atzanteol ,

Personally I’m very fed up with AWS, Cloudflare and Google virtually owning the modern Internet. I selfhost to get away from their spying and oligopoly so routing DNS through them is simply out of the question, for me.

I get that - but part of the reason for the current situation is that DNS is such a bad protocol that it is risky to leave in unskilled hands. You can do damage beyond just your host. DNS is a big target and servers can find themselves participating in DDOS attacks. The big players do traffic analysis and rate limiting to minimize these things.

And really it’s not that hard these days with pre-packaged Docker containers.

It's not that it's "hard to run a name server" - it's that it's tricky to configure one correctly so as to be a "good neighbor" on the internet. Most homegamers only need a single "A" record anyway - maybe some CNAMEs. It's not like you need anything complicated. And if you don't have a static IP address then you definitely want your DNS records to be easily updatable with a new IP. Updating NS records is more complicated.

Running an internal name server is fine and a great experience. You can do so much more on your own network than you would likely do with a public name server anyway.

atzanteol ,

You're saying "If you configure your DNS server properly and understand how it works then it can be setup securely."

I'm saying "Have you seen the questions in this community???"

atzanteol ,

Uh oh - my "nerd creds" are being questioned by a rando on the internet. 🤣

I broke nextcloud and i cant fix it

I managed to install and set up nextcloud snap on my ubuntu server a couple months ago. I haven't used it since and now I tried to log in to it but I forgot the username and password. I tried uninstalling and reinstalling the snap to reset the data but i couldn't access the web page. I couldn't find anything online. Any...
