
rentar42

@rentar42@kbin.social


rentar42 ,

I second that. This practice comes from a time when domain names were expensive in several ways: SNI didn't exist or wasn't widespread yet, so each domain name served over HTTPS needed a dedicated IP; certificates weren't democratized yet via Let's Encrypt/ACME; and most hosts were big enough to run multiple services, because virtualization wasn't as widely available yet. So putting apps on sub-paths made sense.

Now all of those things are basically solved, and putting each app on its own subdomain just makes way more sense.
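(Just as an illustration, nothing setup-specific: with certbot's nginx plugin, a certificate for one more subdomain is a one-liner these days; the domain is a placeholder.)

$ sudo certbot --nginx -d app.example.com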

rentar42 ,

I went with iDrive e2 https://www.idrive.com/s3-storage-e2/ where 5 TB is $150/year (50% off the first year) for S3-compatible storage. My favorite part is that there are no per-request, ingress or egress costs. That price is all there is.

rentar42 ,

First: love that that's a thing, but I find the blog post hilarious:

We believe this choice must include the one to migrate your data to another cloud provider or on-premises. That’s why, starting today, we’re waiving data transfer out to the internet (DTO) charges when you want to move outside of AWS.

and later

We believe in customer choice, including the choice to move your data out of AWS. The waiver on data transfer out to the internet charges also follows the direction set by the European Data Act and is available to all AWS customers around the world and from any AWS Region.

But sure: it's purely out of their love for customer choice that they offer this now. The fact that it also fulfills the requirements of the European Data Act is surely coincidental; they absolutely would have done it anyway.

Remember folks: regulation works. Sometimes corporations need the state(s) to force their hand to do the right thing.

rentar42 ,

without trusting anyone.

Well, except of course the entity that gave you the hardware. And the entity that preinstalled and/or gave you the OS image. And that that entity wasn't fooled into including malicious code in some roundabout way.

Like it or not, there's currently no real way to use any significant amount of computing power without trusting someone. And usually several hundreds or thousands of someones.

The best you can hope for is to focus the trust into a small number of entities that have it in their own self interest to prove worthy of that trust.

rentar42 , (edited )

Like many other security mechanisms, VLANs aren't really about enabling anything that can't be done without them.

Instead, they're almost exclusively about FORBIDDING some kinds of interactions that are otherwise allowed by default.

So if your question is "do I need VLAN to enable any features", then the answer is no, you don't (almost certainly, I'm sure there are some weird corner cases and exceptions).

What VLANs can help you do is stop your PoE camera from talking to your KNX installation, or your Chromecast from talking to your Switch. But why would you want that? They don't normally talk to each other anyway, right? That "normally" is exactly the point: one major benefit of VLANs is not just stopping "normal" phone-homes, but containing any security incident to as small a scope as possible. Imagine someone figured out a way to hack your switch (maybe even remotely, while you're out!). That would be bad. What would be worse is if that attacker then suddenly had access to your pihole (which is password-protected, and the password never flies around your home network unencrypted, right?!) or your PC or your phone ...

So having separate VLANs where each one contains only devices that need to talk to each other can severely restrict the actual impact of a security issue with any of your devices.
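For what it's worth, here's roughly what a tagged VLAN interface looks like on a Linux host (a sketch only; the interface name, VLAN ID and addresses are placeholders, and the actual isolation rules live in your switch/router config):

$ sudo ip link add link eth0 name eth0.20 type vlan id 20
$ sudo ip addr add 192.168.20.2/24 dev eth0.20
$ sudo ip link set eth0.20 up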

rentar42 ,

Without any text it's really hard to guess what you want, and that's why you get so many different answers.

Do you want to

Note that I suspect you actually want the third one, in which case I suggest you avoid MediaWiki. Not because it's bad, but because it's almost certainly overkill for your use case, and there are way simpler, easier-to-set-up-and-maintain systems with fewer moving parts out there.

rentar42 ,

I'm sorry that my attempt to find out what you want to be able to provide useful help annoyed you.

rentar42 ,

Oh, I'm 100% there with you on syntax. But having multiple pieces of software that support the same syntax seems useful.

Personally, I've turned into more of a Markdown kind of person than a traditional-wiki-syntax one. And at least Markdown has gained some level of standardization over time ...

rentar42 ,

Since most of those are run commercially and don't make their data easily accessible, that'll be a much different process, I assume. You'll basically have to scrape them like any other website, except you'll specifically be targeting the edit/source view pages. Then find a wiki implementation whose syntax is as close as possible to the one they use (that could be tricky ...) and upload there. So unless you happen to find code from someone who wanted to do the exact same thing, I'm afraid this would involve quite a bit of programming/scripting.

rentar42 ,

Increase the attack surface compared to what? If you don't allow/enable any access to services inside your network from outside, then by definition you have fewer attack surfaces than if you add a VPN to that empty list.

So trivially the answer is "yes, it adds an attack surface".

But what are the alternatives? If you directly expose each individual service on a dedicated port, for example, then you'd add many more (and usually less well hardened) attack surfaces instead.

So if the comparison is "expose 5 web-based services directly" vs. "expose one VPN like WireGuard", then the second option is almost always the clear winner when it comes to security (and frequently also ease of setup and day-to-day comfort).
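To give a feel for how small that single attack surface is (a sketch using WireGuard's usual defaults; a wg0.conf is assumed to already exist):

$ wg genkey | tee privatekey | wg pubkey > publickey   # one key pair per peer
$ sudo wg-quick up wg0                                 # reads /etc/wireguard/wg0.conf
$ sudo wg show                                         # a single UDP port (51820 by default) is all that's exposed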

rentar42 , (edited )

I've not found a good solution for actual constant monitoring and I'll be following this thread, but I have a similar/related item: I use healthchecks.io (specifically a self-hosted instance) to verify that all my cron jobs (backups, syncs, ...) are working correctly. Even more involved monitoring solutions often don't cover that area (and it can be quite terrible if it goes wrong), so I think it'll be a good addition to most of these.
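The integration is about as simple as it gets; a typical crontab entry looks something like this (the instance URL, check UUID and backup script are placeholders):

0 3 * * * /usr/local/bin/backup.sh && curl -fsS -m 10 --retry 5 https://hc.example.com/ping/<your-check-uuid> > /dev/null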

rentar42 ,

This isn't specific to just netdata, but I frequently find projects that have some feature provided via their cloud offering and then say "but you can also do it locally" and gesture vaguely at some half-written docs that don't really help.

It makes sense for them, since one of those is how they make money and the other is how they lose cloud customers, but it's still annoying.

Shoutout to healthchecks.io, who seem to provide both a nice cloud offering and a fully-fledged self-hostable server with good documentation.

rentar42 ,

At a big enough LAN, even just getting everyone to change that setting is probably harder than setting up a central cache. Don't underestimate the number of people who listen to instructions, say "sure", and then either just don't do it or fail to do it correctly.

Looking for the Perfect USB Flash Drive

I've been using some cheap flash drives for things like installing OSes, but now I've picked up a Dell Wyse 3040 system to play with which only has 8 GB of storage. So I'm installing the OS onto a flash drive permanently (don't worry, it's just for messing around; nothing of value will be lost if/when the drive craps out)....

rentar42 ,

USB SATA controllers are also very hit-and-miss. There are plenty of really, really bad ones out there: missing features, slow, running hot, or all of the above. If you've found one that works well, good for you, but I'd avoid most noname brands unless I had specific knowledge about the product, or at the very least the chipset it uses.
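If you want to check what you actually got, the bridge chipset usually shows up in lsusb (output is illustrative; ASMedia and JMicron bridges are among the better-documented ones):

$ lsusb
Bus 002 Device 003: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge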

Looking for a reverse proxy to put any service behind a login for external access.

I host a few docker containers and use nginx proxy manager to access them externally, since I like to have access away from home. Most of them have some sort of login system, but there are a few examples where there isn't, so I currently don't publicly expose them. I would ideally like to be able to use TOTP for this as well.

rentar42 ,

I've got the same setup! What I love about authentik is that I can even add a Google login as an authentication method. That greatly increases the spouse-acceptance factor, as they don't have to "remember yet another password" or "carry around another thingie". Personally I use a YubiKey anyway, but for others who aren't into it "for fun" or for philosophical reasons, reducing friction as much as possible is paramount.

rentar42 ,

That example makes sense to me, because it's an alternative to something like hosting a blog on some third party site: generate it statically and host the result somewhere.

rentar42 ,

Now you make me feel old. In "the olden days", before streaming media over the internet was as commonplace as it is now, that was the standard way tech-savvy people consumed media: either on their PC or with some set-top box with built-in storage. I fondly remember my PopcornHour, which was basically a line of set-top boxes that ranged from "basically a hard disk, video decoder and HDMI out" all the way to "can automatically rip your Blu-rays".

rentar42 ,

I suggest avoiding the temptation to get one of the many cheap Android boxen meant for media playback from AliExpress or the like, as they have a strong tendency to come heavily loaded with malware. Definitely not all of them, but it's really hard to tell which specific one you'll get.

rentar42 ,

That's a great answer if one already has a NAS (which is not unlikely, given the name of the community). But if that's not already present (or desired for other reasons) then a simple media PC with some built-in storage is simpler to set up.

rentar42 ,

A custom "source available" license that may not be as clear-cut as intended and depends on "we know it when we see it" by the authors of the license? You don't say!

rentar42 ,

I've not tried it myself, but AFAIK VLC can be remote-controlled in various ways, and since the API for that is open, multiple clients for it exist: https://wiki.videolan.org/Control_VLC_from_an_Android_Phone

There's also Clementine which offers a remote-control Android app.
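If you just want to poke at it, VLC's built-in HTTP interface is probably the quickest path (a sketch; the hostname, password and file are made up, and the HTTP interface requires a password to be set):

$ vlc --extraintf http --http-password s3cret video.mkv
$ curl -u ":s3cret" "http://mediapc.local:8080/requests/status.xml?command=pl_pause"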

rentar42 ,

https://lemmy.world/post/12995686 was a recent question and most of the answers will basically be duplicates of that.

One slight addition: "Docker" is just one implementation of "OCI containers". It's the one that initially broke through in the hype, but you can just as easily use any other (podman being a popular one), and basically all of the benefits that people ascribe to "docker" apply to them just as well.

So you might (as I do) have some dislike for docker (the product) and still enjoy running containers.
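For instance (a minimal illustration; the image and ports are arbitrary), the exact same OCI image runs under either CLI with identical flags:

$ docker run --rm -d -p 8080:80 docker.io/library/nginx
$ podman run --rm -d -p 8080:80 docker.io/library/nginx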

rentar42 ,

I personally prefer podman, due to its rootless mode being "more default" than in docker (rootless docker works, but it's basically an afterthought).

That being said: there are just so many tutorials, tools and other resources that assume docker by default that starting with docker is definitely the less cumbersome approach. It's not that podman is significantly harder or has many big differences, but all the tutorials are basically written with docker as the first target in mind.

In my homelab the progression was docker -> rootless docker -> podman and the last step isn't fully done yet, so I'm currently running a mix of rootless docker and podman.
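To illustrate what rootless means in practice (a quick sketch; alpine is just a convenient tiny image): the container thinks it is root, but that "root" is mapped onto your unprivileged user via user namespaces:

$ id -u
1000
$ podman run --rm docker.io/library/alpine id -u
0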

rentar42 ,

In the immortal words of Jake the Dog:

Dude, suckin’ at something is the first step to being sorta good at something.

We are or were all noobs once. Going away from the keyboard is often an undervalued step in the solution-finding process. Kudos!

rentar42 ,

You've got a single, old HDD attached via USB. There are plenty of places that could be the bottleneck here, but that's among the first I'd check: can you actually read from that HDD significantly faster than your network transfer speed? Check that locally first. There's no use optimizing anything network-related when your underlying disk I/O is slow.
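A quick sanity check directly on the host (hdparm's numbers are rough, but good enough to spot a disk that can't even saturate the network; the device name and output are illustrative):

$ sudo hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads: 310 MB in  3.01 seconds = 102.99 MB/sec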

rentar42 ,

Given the very specific dependencies that Immich has wrt. the Postgres plugins it needs, I'm certain that it's not currently packaged as an RPM and I would even bet that it never will be (at least not as one of the officially supported packages put out by the developers).

rentar42 ,

Can confirm the statistics: I recently consolidated about a dozen old hard disks of various ages; quite a few of them had a couple of bad blocks and two actually failed. One disk was especially noteworthy in that it was still fast, error-free and without complaints. That one was a Seagate ST3000DM001, a model so notoriously bad that it's got its own Wikipedia entry: https://en.wikipedia.org/wiki/ST3000DM001
Other, "better" HDDs were entirely unresponsive.

Statistics only really matter if you have many, many samples. Most people (even enthusiasts with a homelab) won't be buying hundreds of HDDs in their life.
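If you're doing a similar consolidation, SMART at least gives a quick (if imperfect) per-disk verdict before you trust a drive with anything (device names are placeholders):

$ sudo smartctl -H /dev/sdb                                       # overall health self-assessment
$ sudo smartctl -A /dev/sdb | grep -i -e reallocated -e pending   # the scary counters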

Password Manager that supports multiple databases/syncing?

I currently use KeePass, and use it on both my PC and my phone. I like it because I can keep a copy of my DB on my phone and export it through a few different means. But I can't seem to find an option to actually sync my local DB against a remote one. I've thought about switching to BitWarden but from what I can see it uses a...

rentar42 ,

Was about to post this, this works well for me.

In my case I'm storing the DB on my Google Drive for now, but Keepass2Android supports many different systems, including "generic" things like WebDAV, so really anything should work.

While Keepass2Android is integrated with the syncing and will always check for conflicts (i.e. check for the latest version before saving), the same isn't necessarily true for the desktop client. But since I rarely edit from both devices at the same time, anything that syncs to the desktop in a somewhat real-time fashion should work just fine.

And for the few (long ago) cases where updates were overwritten, the "previous versions" feature of Google Drive was a godsend! (And KeePassX can simply merge the old overwritten version into the current one, so you get a correct merge.)

rentar42 ,

I think the difference is at what level:

  • don't implement your own storage redundancy system at the kernel level with a small team in a closed-source fashion, because that's the kind of thing that needs many eyes, lots of experience and many millions of hours of real-world usage to fully debug and make sure it works.
  • do build your own system by combining pre-existing technologies that are built by experienced teams and tested/vetted by wide/popular usage.

I feel OP's critique has some truth to it. I personally would rather stay with raidz on ZFS, exactly because of its open nature (yes, it too has bugs, nothing is perfect).
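For the record, the "pre-existing, widely vetted" option is also pleasantly boring to set up; a hypothetical six-disk raidz2 pool (use stable /dev/disk/by-id paths instead of sdX names in real life):

$ sudo zpool create -o ashift=12 tank raidz2 /dev/sd[b-g]
$ zpool status tank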

rentar42 ,

Do you have any devices on your local network where the firmware hasn't been updated in the last 12 months? The answer to that is surprisingly frequently yes, because "smart device" companies are laughably bad at device security. My intercom runs some ancient Linux kernel, my frigging washing machine could be connected to WiFi, and the box that controls my roller shutters hasn't gotten an update since 2018.

Not everyone has those and one could isolate those in VLANs and use other measures, but in this day and age "my local home network is 100% secure" is far from a safe assumption.

Heck, even your router might be vulnerable...

Adding HTTPS is just another layer in your defense in depth. How many layers you are willing to put up with is up to you, but it's definitely not overkill.

rentar42 , (edited )

They are in fact the same image, as you can verify by comparing their digests:

$ docker pull ghcr.io/linuxserver/plex
Using default tag: latest
latest: Pulling from linuxserver/plex
Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
Status: Image is up to date for ghcr.io/linuxserver/plex:latest
ghcr.io/linuxserver/plex:latest
$ docker pull lscr.io/linuxserver/plex
Using default tag: latest
latest: Pulling from linuxserver/plex
Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
Status: Image is up to date for lscr.io/linuxserver/plex:latest
lscr.io/linuxserver/plex:latest
$

See how both images have the digest sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144. Since the digest uniquely identifies the exact content/image, that guarantees that those images are in fact byte-for-byte identical.

rentar42 ,

"Taking care of my sick mother ..." stops them real quick.

I want to get started with *arr apps - here are all the things I don't understand about (reverse-/)proxies and networking in order to get it set up.

Please can someone show off how smart and sexy they are by answering these questions. I don't mind if you just link me to a video or guide explaining it (like I'm 5?) instead of typing it out - but please don't just send me stuff that says something like "To forward to ports correctly, simply forward the correct ports - but be...

rentar42 ,

Those are usually the prefixes for interfaces, which are not quite the same thing as networks. An interface is the surface that connects a device to a network. For example, if your router treats its WLAN and its wired network as a single network (i.e. each thing on WLAN can see everything on wired and vice versa), then a specific device might still have a wlan1 and an eth1 interface, one for each physical network device, while being on the same network.

"One network" here really only means "something can successfully route between all the devices".

rentar42 ,

As others have mentioned (and also explained in quite some detail) you're trying to bite off a lot at once. First, for Jellyfin locally you can ignore most of that.

And if you really want to learn the ins and outs of all that (and I can recommend it, it's useful), then I suggest you start with some simple web app. Something like note-taking, or maybe even something trivial like a whoami service, which basically just echoes back the information it was sent. That's super useful because you know it's unlikely to be broken, so you can focus on the networking/port-forwarding issues. And once you've got that working and have a rough feeling for how this all works, you can move on to more complex setups that actually do something useful.
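Concretely, something like the traefik/whoami image works well for this (just one example of such a service; it answers every HTTP request with the request's own details):

$ docker run --rm -d -p 8080:80 docker.io/traefik/whoami
$ curl http://localhost:8080    # echoes hostname, container IPs and your request headers back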

XPipe status update: New scripting system, advanced SSH support, performance improvements, and many bug fixes (sh.itjust.works)

I'm proud to share a status update of XPipe, a shell connection hub and remote file manager that allows you to access your entire server infrastructure from your local machine. It works on top of your installed command-line programs and does not require any setup on your remote systems. So if you normally use CLI tools like ssh,...

rentar42 ,

This looks really interesting.

I don't mind the commercialization at all and think it's actually a good sign for an open source project to have a monetization strategy to be able to hang around.

But why do I have to agree to a EULA on an Apache-licensed piece of software? I understand that for the commercial features that might be necessary, but in that case could we get a separate installer for "this is all Apache-licensed, no need for a EULA"?

Additionally, the contribution file mentions that "some components are only included in the release version and not in this repository." What are these components? Are they necessary for the basic core functionality?

rentar42 ,

The EULA is just standard terms like don’t try to circumvent the license requirement, if you buy a license don’t share it with other people, some warranty and liability stuff, etc.

Yes, I know. I actually read it (which is rare) and it's mostly sensible stuff. The "no reverse engineering" clause just felt weird in something that claims to be "mostly open source".

In the end I find it slightly misleading to call this open-core when the app with just the non-commercial features can't be built fully from the published source.

They are not necessary for basic core functionality but it doesn’t work without it as the license requirement could be disabled easily then as I mentioned before.

I don't quite understand this argument. If I can build a development version, I can run any and all code in the repo (while providing an existing XPipe installation), and with enough criminal intent I could somehow ship that, so how exactly does this requirement prevent anything?

In other words: if the only way to access the commercial features without a license is by doing something illegal then ... that's not really adding much burden, is it?

In the end I'm probably just one of the open-source proponents that don't like that, and that's fine. Not everyone needs to agree with everyone, there's a lot of space here where reasonable minds can disagree. I just think that claiming "the main application is open source" when it can't be built purely from the source is a bit misleading.

rentar42 ,

The issue is that, according to the spec, the two DNS servers provided via DHCP are equivalent. While most clients favor the first one as the default, that's not universally the case, and when and how a client switches to the secondary varies by client (and can effectively appear random). So you won't be able to know for sure which clients use your DNS, especially after your DNS server was unreachable for a while for whatever reason. Personally I've "just" gotten a second Pi to run a redundant copy of PiHole, but only having a single DNS server is usually fine as well.
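On clients you control, you can at least check what's actually in use; for example, on a machine running systemd-resolved:

$ resolvectl status | grep -A2 'DNS Servers'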

Pi-Hole or something else for network ad blocking?

I've been aware of pi-hole for a while now, but never bothered with it because I do most web browsing on a laptop where browser extensions like uBlock Origin are good enough. However, with multiple streaming services starting to insert ads into my paid subscriptions, I'm looking to upgrade to a network blocker that will also...

rentar42 , (edited )

Hint: you don't need to route all your traffic through your VPN to make use of the pihole ad blocking: just DNS. If your home internet connection is even moderately stable/good, this should barely affect your roaming internet experience, since DNS traffic is such a small part of all traffic.

Also, since I'm already mirroring the configuration of my PiHole instance to a secondary one, I'm considering putting a tertiary one on some forever-free cloud server instance and just using that when not at home (put into the same WireGuard VPN to prevent security nightmares). That way my roaming private DNS wouldn't even depend on my home internet.
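The client side of such a DNS-only split tunnel boils down to routing just the pihole's tunnel address through WireGuard (a sketch with made-up keys and addresses):

$ cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24
DNS = 10.8.0.1                # the pihole's address inside the tunnel

[Peer]
PublicKey = <server-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 10.8.0.1/32      # only traffic to the DNS server enters the tunnel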

rentar42 ,

Note that there is some reliability drawback to spinning hard disks up and down repeatedly. Perhaps counterintuitively, HDDs that spin constantly can live much longer than those that spend 90% of their time spun down.

This might not be relevant if you use only SSDs, and might never affect you, but it should be mentioned.
