
chiisana

@chiisana@lemmy.chiisana.net


NPM - What services need what toggled? (slrpnk.net)

Hiya, just got NPM installed and working, very happy to finally have SSL certs on all of my services and proper URLs to navigate to them, what a breeze! However, as I am still in the learning process: I am curious to know when to enable these three toggles and for what services. I assume the "Block Common Exploits" can always...

chiisana ,

I don’t use NPM but if “Cache Assets” means what it means in the traditional sense, it wouldn’t affect most home deployments.

Historically, resources were limited, and having Apache load images/JavaScript/CSS files from disk each time they were requested, even if the OS kernel eventually cached them in RAM, was a resource-intensive process. Reverse proxies stepped up, identified assets (images, JS, and CSS), and stored them in memory for subsequent requests. This reduced the load on the Apache web server and cut the number of hops required to serve the request, thereby making everything faster.

For homelabs and single-user systems, this is essentially irrelevant, as you’re not going to be putting enough load on the back end system to notice the difference. It may still be good to turn it on, but if you’re noticing odd behaviors (e.g. updates to CSS or images not taking effect), it may be a good idea to turn it off to see if that’s the culprit.
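
For context, an asset-caching rule in plain nginx typically looks something like the block below; NPM’s toggle applies its own prebuilt rules, so treat this only as a sketch of the concept:

location ~* \.(css|js|jpe?g|png|gif|ico|svg|woff2?)$ {
    expires 30d;                        # let browsers/proxies reuse static assets for 30 days
    add_header Cache-Control "public";
}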

Basic docker networking?

Hi guys! I'm making my first docker attempt... and I'm doing it in Proxmox. I created an LXC container, in which I installed docker and portainer. Portainer seems happy to work, and shows its admin page on port 9443 correctly. I next tried running the immich image, following the steps detailed in their own guide....

chiisana ,

Docker inside LXC adds not only the overhead each would individually add — probably not significant enough to matter in a homelab setting — but also an extra layer of complexity that you’re going to hit when it comes to debugging anything. You’re much better off dropping docker into a full-fledged VM instead of running it inside LXC. With a full VM, if nothing else, you can allow the virtual networking to be treated as its own separate device on your network, which should remove a layer of complexity from the problem you’re trying to solve.

As for your original problem… it sounds like you’re not exposing the docker container layer’s network to your host. Without knowing exactly how you’re launching the containers (beyond the quirky docker-inside-LXC setup), it is hard to say where the issue may be. If you’re using compose, try setting the network to external, or bridge, and see if you can expose the service’s port that way. Once you’ve got the port exposure figured out, you’re probably better off unexposing the service, setting up a proper reverse proxy, and wiring the service to go through the reverse proxy instead.
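
Without seeing your compose file, here’s the general shape of what publishing a port looks like; the service name, image tag, and port numbers are examples only, so use the values from immich’s own compose file:

services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release   # example tag; follow immich's guide
    ports:
      - "2283:2283"   # host:container — publishing the port is what makes the web UI reachable from the LXC/VM host
    # ... env_file, volumes, and the other immich services from their guide ...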

I dockerized my Nextcloud (github.com)

For years I've been running my Nextcloud on bare metal. Now I finally decided to switch it to docker. I looked into Nextcloud All-In-One and it seems easy to deploy but it lacks configuration options like multiple domain names and putting it behind a traefik reverse proxy is kind of a pain to set up....

chiisana ,

NextCloud’s trusted_proxies setting supports CIDR notation, so it might be better to set the subnet of Traefik’s network as opposed to the IP address. That way, if you ever need to do anything with the container (e.g. upgrade Traefik), the IP can change but the subnet is less likely to change.
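
For example, something along these lines in Nextcloud’s config/config.php (the subnet here is a placeholder; use whatever subnet Traefik’s docker network is actually on):

'trusted_proxies' => ['172.18.0.0/16'],   // example subnet: any proxy in this range is trusted, so Traefik's IP can change freely
'overwriteprotocol' => 'https',           // usually wanted when TLS terminates at the proxy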

chiisana ,

The documentation seems to suggest just IP addresses and CIDR notation.

chiisana ,

No problem! It’s a small change that might not affect most people :)

chiisana ,

Last time this was asked, I voiced the concern that tying a fixed IP address to container definitions is an anti-pattern, and I’ll voice it again. You shouldn’t be assigning a fixed IP address to individual services, as that prevents future scaling.

Instead, you should leverage service discovery mechanisms to help your services identify each other and wire up that way.

It seems like NPM has no fitting mechanism for this out of the box, which may suggest your use case is outgrowing what it can do for you in the future. However, docker compose stacks can rescue the current implementation with DNS resolution. Try simplifying your NPM stack’s compose file to something like this:

services:
  npm:
    # ... image, ports, volumes, etc. as you have them now ...
    networks:
      - npm

networks:
  npm:
    name: npm_default
    external: true   # create it once with `docker network create npm_default` if it doesn't exist yet

And your jellyfin compose with something like:

services:
  jellyfin:
    # ... image, volumes, etc. as you have them now ...
    networks:
      - npm
      - jellyfin_net

networks:
  npm:
    name: npm_default
    external: true
  jellyfin_net:
    name: jellyfin_net
    internal: true

Have your other services in the Jellyfin stack stay only on jellyfin_net (or whatever you define it to be), so they’re not exposed to npm/other services. Then in the configs, have your npm talk directly to your jellyfin service by hostname, maybe something like jellyfin or whatever you’ve set as the service name. You may need to include the compose stack name as a prefix, too. This should then allow your npm to reach your jellyfin via the docker compose networks’ DNS directly.

Good luck!

chiisana ,

It may not affect this current use case for a home media server, but people should still be aware of it so that, as they learn and grow, they don’t paint themselves into a corner by knowing only the anti-patterns as the path forward.

Looking for a reverse proxy to put any service behind a login for external access.

I host a few docker containers and use nginx proxy manager to access them externally since I like to have access away from home. Most of them have some sort of login system but there are a few examples where there isn't so I currently don't publicly expose them. I would ideally like to be able to use totp for this as well.

chiisana ,

I use Traefik as reverse proxy and Authentik as SSO IdP. When I connect to my “exposed” service, Traefik middleware determines if I have the appropriate access credentials established. If so, I get access; if not, I’m bounced over to Authentik, where I enter my username and authenticate via Passkey (modern passwordless login gated by private keys behind biometric unlock). The middleware can also be bypassed based on a pre-established private custom HTTP header, so apps that don’t support the flow (e.g. mobile clients for some apps) can get in directly as well.
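
Not NPM, but as a sketch of how the Traefik side of that is typically wired up (the authentik-server hostname and port here are assumptions; Authentik’s proxy provider docs have the exact path for your setup):

http:
  middlewares:
    authentik:
      forwardAuth:
        address: "http://authentik-server:9000/outpost.goauthentik.io/auth/traefik"   # assumed container name/port
        trustForwardHeader: true
        authResponseHeaders:
          - X-authentik-username
          - X-authentik-email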

chiisana ,

I’m so lucky I got my SO on board with using a password manager early on! However, the passwordless login (after figuring out how to send a user to the enroll stage initially) makes it so simple, we don’t even need the federated Google login.

chiisana ,

I don’t use the two you’ve called out, so I cannot guarantee my Google results are accurate, but the principle is similar…

If the app supports external authentication (usually, look for things like OIDC, SAML, or SSO in the documentation), then I’d configure the app to do that and skip the Traefik middleware piece.

This is what I’d do based on what I’m seeing in this article for NextCloud. That is, when all is said and done, I’d go to https://nexcloud.myunexistent.deployment/ and be greeted with the Nextcloud login screen, where the external authentication option is shown on screen.

A similar setup might be achieved with Home Assistant’s command line authentication provider, which delegates authentication out to an external command. Alternatively, use the hass-auth-header plugin along with a trusted proxy to delegate authentication out to the reverse proxy.
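
On the trusted proxy side, Home Assistant needs something like this in configuration.yaml before it will accept forwarded requests (the subnet is a placeholder for wherever your reverse proxy lives):

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.16.10.0/24   # placeholder: the subnet (or address) of your reverse proxy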

Hope this points to a relevant direction for you!

nginx proxy manager changes IP. How to get static container IP?

all the containers change IP addresses frequently. For home assistant a static IP address of the proxy manager is mandatory in order to reach it. For jellyfin it is useful to see which device accesses jellyfin. If the IP always changes, it doesn't work properly....

chiisana ,

This feels like an anti-pattern that should be avoided. Docker compose allows individual services to be scaled out to more than one instance. By hard-assigning an IP address to a service, how is that going to be scaled in the future?

I don’t know how to reconcile this issue directly for NPM, but the way to do this with Traefik is to use container labels (not hard-assigned IP addresses) such that Traefik can discover the service and wire itself up automatically. I’d imagine there should be a similar way to perform service discovery in NPM?
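
For illustration, a rough sketch of what that looks like with Traefik’s docker provider; the hostname is a placeholder, not something from this thread:

services:
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)"   # placeholder hostname
      - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"      # Jellyfin's default HTTP port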

chiisana ,

Except it is explicitly being told to use a singular IP address here. So the engine is either going to go against the explicit assignment or create a conflict within its own network. Neither of which is the expected behavior.

Just because people are self hosting, doesn’t mean they should be doing things incorrectly.

Is there much performance difference in ad blocking options?

I'm currently using the blocklists included with unbound in opnsense on a mini PC and I have used pihole on a pi which now operates my 3d printers instead. I haven't tried any of the other network wide options. Has anyone made any blog posts or similar detailing performance testing of different options?...

chiisana ,

Most self-hosted DNS-level blocking will be very fast, as it is really easy to keep the block list in RAM. I hosted Pi-hole on an RPi 3 and on an over-provisioned VM (4 cores and 4GB of RAM lol). The only difference I’ve noticed is whether or not the device is hardwired. When my RPi was hardwired into the network, there was no notable difference between the two.

chiisana ,

If they’ve got the orange cloud enabled, then Cloudflare will cache, minify, and distribute the static contents to servers closer to your ISP. The result is that the initial page load appears faster. Dynamic content (such as actually performing a search) would still require the server to perform actions, and would depend on a wider range of factors.

A lot of words to say: yes, if you have static content to serve, Cloudflare is one of the cheapest ways to make it go vroom vroom.

chiisana ,

If you’re new, something like the Ubiquiti UniFi stack is very beginner friendly and well polished.

If you’re planning to run your own hardware, the usual recommendation seems to be pfsense or opnsense on a modern lower end system (Intel N100 box for example).

Bear in mind that a router is only responsible for routing (think: directing packets where to go). You’d also want access points to provide WiFi for your wireless devices. This is where the UniFi stack makes things easier, because you can just choose their access point hardware and control it all through a single controller. Whereas if you roll your own, you’d be looking at getting something else to fill that role.

Should I learn Docker or Podman?

Hi, I've been thinking for a few days whether I should learn Docker or Podman. I know that Podman is more FOSS and I like it more in theory, but maybe it's better to start with docker, for which there is a lot more tutorials. On the other hand, maybe it's better to straight up learn podman when I don't know any of the two and...

chiisana ,

At the end of the day, you’re running containers and both will get the job done. Go with whatever you want to start, and be open to trying the other when you inevitably end up at a jobby job that uses the other one instead.

chiisana ,

OIDC was a huge thing for me, I used FusionAuth for a bit and it worked great. Then I learned I could deploy my own WebAuthn / passkey password-less authentication, moved over to Authentik, and never looked back.

chiisana ,

Do you mind elaborating a little on in what sense it is slow for you? It doesn’t “feel” slow for me, but as you’ve identified, it’s a multipage login process with some JavaScript-driven content, so it’s not exactly the fastest compared to something more static. The pages generally load in around/under 1 second for me; and once authenticated, the flow happens quickly and infrequently enough that I don’t really notice or care about it.

chiisana ,

The admin UI feels okay to me, at most half a second between page loads/repaints, definitely not several-seconds kind of slow. I am running it on my Oracle free tier VM and I’ve got only 3 users, so maybe I’m way over provisioned? Have you tried to measure where the latency is coming from? As in, is it the raw page load that’s slow, or is it subsequent JavaScript-triggered requests bottlenecking the performance?

chiisana ,

Most of the apps I use support external authentication using popular standards (OAuth for the most part). This means the clients will also support the said standards out of the box. Having a standardized authentication flow makes logging in much easier as well.

I also don’t want to deal with passwords… because I don’t trust myself to handle passwords. So before settling down on Authentik, I used FusionAuth to do OIDC via Google. Then I discovered I could do WebAuthn / Passkey with Authentik, so the portal really only ever needs to know my public key, and it approves access based on private keys, which are gated by my devices’ biometric features. This is way more secure than other solutions and I don’t even need to remember a password.

The one edge case I’ve encountered is a couple of apps recently transitioning to mandating authentication, but they don’t have OIDC integration of their own. Fortunately, there’s a hidden config flag in XML that I can use to tell them that I have externally managed authentication, and I gate access to them via a middleware in my reverse proxy. As for the client, my client of choice allows me to add custom HTTP headers, so I have a special “API key” kind of header that my reverse proxy looks at, which allows me to bypass authentication, so everything works nicely together.

In my mind, using the vanilla out-of-the-box authentication feels less secure than gating things via OIDC or middleware. This is because everyone knows they could Google for “Powered by WordPress” or similar phrases to target specific apps with known authentication exploits. However, by switching it up and using a different mechanism, the common exploit vectors might not be as effective against my deployment.

chiisana ,

Do you have more than one network in Docker?

If so, you’d want to add a label to tell traefik which network to use; if memory serves, I think it is literally traefik.docker.network=traefik_default or something like that, where traefik_default should reflect the network the service shares with traefik — I put mine on the traefik default network from docker compose, hence the name, but you may have a different design.

Edit: sorry, I’m on mobile right now, and I just saw you do have the traefik docker network bit already, but it says media. Is that a network traefik has access to?
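
For reference, the label in question would look roughly like this, assuming your traefik network from compose really is named traefik_default:

labels:
  - "traefik.docker.network=traefik_default"   # must name a docker network the traefik container is also attached to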

chiisana ,

If you don’t mind, can you please try disabling all but one or two stacks and see if your homepage responds faster?

I think although your setup may work, and is definitely better than me dumping everything into the Traefik gateway network, I can’t help but wonder if Traefik picks up some overhead with each additional network it gets added to…?

chiisana ,

Humph… I wonder what’s the actual underlying issue here. Such a strange one!! Hope you’re able to figure it out at some point!

chiisana ,

If docker works for you, then don’t change what’s not broken. If there are things you don’t like about docker (root access, for example), then venture out and try others. At the end of the day, they’re just tools to get to the more interesting stuff — actually running applications and playing with them.

chiisana ,

Cool. Thanks! One less reason for me to even consider Podman on the radar. Personally, I really don’t care for the tool itself, and am way more interested in the apps that I can run and play with :)

chiisana ,

Seems to be more on the web side of things for affiliate marketing, not necessarily light switch usage patterns? At least the pasted/quoted bit doesn’t suggest that it’d cover interactions with the devices.

chiisana , (edited )

Are Lutron and Leviton the same company? I’ve always wondered but never found any definitive confirmation one way or another.

Edit: see, two replies, two answers… shrouded in mystery haha

chiisana ,

Cool. So don’t use their app. I’d imagine HomeAssistant usage cannot be tracked as it wouldn’t go through their app.

FWIW, I’m all in on HomeKit, so I only control my light switches (from another vendor) through the Home app, and I’ve got no skin in the game with Leviton, but the same idea applies. No vendor apps means their app-based tracking is much less relevant.

chiisana ,

Can’t wait for Matter and Thread to become more mainstream. Local first (and device-level egress blocked by VLAN) for the win.

Cloudflare Alternative

What do you guys use to expose private IP addresses to the web? I was using the npm proxy manager with Cloudflare CDN. However, it stopped working after I changed my router (I keep getting error 521). Looking for an alternative to Cloudflare cdn so I can access my media server/self-hosted services away from LAN....

chiisana ,

521 usually means they cannot reach your server properly. Was the router change due to a new ISP, and does the new ISP block port 80/443? Did you re-make all the relevant port forwarding rules? Changing CDN won’t change anything if your ports are closed/not responding as expected.

chiisana , (edited )

521 = Origin server down; i.e. the port is not open and/or the IP address is incorrect altogether.

522 = Origin server timeout; i.e. the port might be open but no content is being sent back.

If you’re seeing 521, then Cloudflare cannot establish a connection to port 80/443 on the IP address in your A record. Bear in mind that in order for someone from outside of your LAN (i.e. Cloudflare) to have access to your services, they must be able to reach the service, so this value should be your external IP address, not an internal address. Once you have your external address keyed into the record, have someone else not in your home try to access that IP/port combination and see what happens. If they cannot access it, then port forwarding is not set up, your ISP is blocking the ports, or you’re behind some CGNAT. If they can access it, then something else is at play (origin IP filtering comes to mind).
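
A couple of quick sanity checks from a machine outside your LAN (the domain and IP below are placeholders for your actual values):

dig +short yourdomain.ext        # with the orange cloud (proxy) off, this should return your external IP
curl -vk https://203.0.113.45/   # swap in your external IP; a timeout or refusal points at port forwarding, ISP blocking, or CGNAT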

chiisana ,

Does Wireguard have a centralized server that the server at home connects to in order to expose itself? If not, I don’t see how it’d work for OP, because at this point, based on the info shared, I’m inclined to think OP is having trouble exposing ports (be it ISP-imposed or a knowledge gap) as opposed to having an issue with the service/vendor.

chiisana ,

The biggest fear would be that while you’re rebuilding, you’re putting extra stress on the other drives, thereby increasing the risk of them dying, too.

chiisana ,

Security.

Cloudflare handles a very large amount of traffic and sees many different types of attacks (think CSRF, injections, etc.). It is unlikely that you or I will be individually targeted, but drive-bys are a thing, and thanks to the amount of traffic they monitor, the WAF will more likely block and patch 0-days before I’m able to update my apps.

Also, while the WAF is a paid feature, other free features, such as free DDoS attack protection, help prevent other attacks.

It’s a trade off, sure; they’re technically MITM’ing your traffic, but frankly, I don’t care. Much like no one cares to target/attack me individually, they aren’t going to look at my content individually.

Additionally, it also makes accessing things much easier. Also, it is much more likely I’d find an SME using Cloudflare than some janky custom self-hosted tunnel setup. So from the point of view of using a homelab to build professional experience, it is much more applicable as well.

chiisana ,

It’d be a challenge to keep up — 0-days aren’t going to be added to a self-hosted solution faster than they can be detected and deployed on a massively leveraged system. Economies of scale on full display.

chiisana ,

The difference, in my opinion, is that no matter how fast upstream vendors patch issues, there's a window between the issue being detected, the patch being implemented, the release getting pushed, the notification of the release being received, and then finally the update getting deployed. Whereas at least on the cloud WAF front, they are able to look at requests across all sites, run analysis, and deploy instantly.

There is a free tier with their basic "Free managed ruleset", which they deployed for everyone with the orange cloud enabled when we saw the Log4j issue a couple of years back. This protection applies to all applications, not just the ones that were able to turn around quickly with a patch.

If you want more bells and whistles, there's a fee associated with it, and I understand having fees is not for everyone, though the price point is much lower than the top tier -- you get some more WAF features on the $25/mo ($20/mo amortized when paid annually) tier as well, before having to fork out for the full $250/mo ($200/mo when paid annually) tier. There's a documentation page on all the price points and rulesets available.

chiisana ,

The free tier rolled out was specifically to address upstream vendors patching Log4j too slowly. They’re able to monitor the requests and intercept malicious patterns before they hit the server running unpatched applications (where the upstream fix isn’t available yet). They are updating the free tier set with more rules, as far as they’ve stated. The extras from paid tiers are more rulesets and more analytics around what was blocked, etc.

At the end of the day though, you do you; the benefit for me may not be a benefit for you. I’m not selling their service, and have no benefit whatsoever should anyone opt into their services.

chiisana ,

No problem! I appreciate the civil discussion! Thank you!

chiisana ,

On the product offering page for Free DDoS Web Protection, the features table shows that "Unmetered DDoS Protection" is available for everyone regardless of tier, from Free all the way up to Enterprise. This change was rolled out on 2017-09-25; prior to this, there was a throughput cap depending on price point (though still very generous for the free tier, from what I remember).

Sometimes, people make up their mind about something and never update their knowledge, and it would appear this is one of those cases here.

chiisana ,

Self-hosting email on a non-mission-critical domain for learning purposes might be okay if your intention is to get into the industry. Self-hosting email for others in a more production-like setting, and you’re going to find yourself in a world of pain.

All it takes is one missed email (be it not making it into their intended recipient’s inbox, or them not receiving an important notice in their own inbox) and you’re never going to hear the end of it.

You’d also be liable for content your users send out from your servers — and I don’t mean the spam type, though if you get your IP blacklisted, your provider may want to have a word with you.

I’d strongly advise against going down this path, but if you do, be sure to have ways to legally shield yourself from any sort of potential liabilities.

chiisana ,

There’s a vocal handful of people here who dislike CloudFlare because of their (to my mind irrelevant) “privacy” concerns — but you can absolutely use the registrar without using their CDN features. Also, reality check: with CloudFlare’s market reach, there’s close to zero chance that anything they do online isn’t already passing through it anyway. Having said that, Cloudflare runs their registrar as a loss leader, so they give their wholesale price to end users registering, and as such you’ll get the cheapest price available for the domain extensions they support. You can then just set up your DNS without their orange cloud, and traffic on your domain won’t flow through their CDN.

What is your experience with Hetzner server auction?

I'm currently using a VPS from contabo and am curious if I would get better performance CPU and disk I/O wise because of the dedicated resources. The bigger VPS from contabo seem to be in a similar ballpark to the cheapest options available in the hetzner server auction when it comes to corecount, ram and disk size and price.

chiisana ,

Although most providers do over-provision, due to the mostly bursty nature of most services, you’re less likely to notice the shared aspect than the general age of the system. So it may be a good idea to take a quick peek at your VPS’s processor and compare that against what you’d be auctioning for. One older core (e.g. an E5-2687W) is not going to put up the same amount of work as one newer core (e.g. an AMD EPYC 7763) — the brands and actual models are less relevant; it’s the age gap that’s more important.

If you want to be absolutely sure, it may be a good idea to budget for some overlap period where you’d pay for both services (you’d need some time to migrate everything anyway), run benchmarks on both systems to see what you’d get out of each, and then decide which one to keep.
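
For a rough comparison, something like the following on each box gives a ballpark; both tools are widely packaged, but flags can vary slightly between versions:

sysbench cpu --threads="$(nproc)" run                                                               # rough CPU throughput across all cores
fio --name=randrw --rw=randrw --bs=4k --size=1G --runtime=60 --time_based --filename=fio-testfile   # mixed random I/O; delete fio-testfile afterwards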

chiisana , (edited )

You could use just a simple Apache (or an even simpler static file server) with no authentication whatsoever, but accessible only from your own network. Then, add a Reverse Proxy Gateway such as Traefik, Caddy or whatever else, and add Authentik as a Middleware. The user heads to the site (i.e. https://files.yourdomain.ext/), the Reverse Proxy Gateway bounces the request to the Middleware (i.e. Authentik), which requires SSO via whatever authority you’ve got set up; the user gets bounced back, and then your Reverse Proxy Gateway serves up the static content via the internal network without authentication (i.e. http://172.16.10.3/).

Check out Forward Auth section of Authentik docs here: https://goauthentik.io/docs/providers/proxy/forward_auth
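
A minimal sketch of how that wiring might look in a Traefik dynamic (file provider) config, assuming the forward-auth middleware from the Authentik docs is named authentik and the static server sits at the internal address above; all names here are placeholders:

http:
  routers:
    files:
      rule: "Host(`files.yourdomain.ext`)"
      entryPoints:
        - websecure
      middlewares:
        - authentik          # the forwardAuth middleware from the Authentik docs
      service: files
  services:
    files:
      loadBalancer:
        servers:
          - url: "http://172.16.10.3"   # the unauthenticated static file server, reachable only internally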

chiisana ,

At $80 a pop, you might get more oomph from an older OptiPlex, if electricity cost isn’t too big of a concern?
