
@Max_P@lemmy.max-p.me avatar

Max_P

@Max_P@lemmy.max-p.me

Just some Internet guy

He/him/them 🏳️‍🌈


Max_P ,

I route through my server or my home router when using public WiFi and stuff. I don't care too much about the privacy aspect, my real identity is attached to my server and domain anyway. I even have rDNS configured, there's no hiding who the IP belongs to.

That said, server providers are much less likely to analyze your traffic because that'd be a big no-no for a lot of companies using those servers. And of course any given request may actually be from any of Lemmy, Mastodon, IRC bots or Matrix, so pings to weird sites can result entirely from someone posting that link somewhere.

And it does have the advantage that if you try to DDoS that IP you'll be very unsuccessful.

Does Matrix have anything akin to 'posts' as in Lemmy and Reddit?

I haven't really used any kind of messenger service since probably MSN Messenger and IRC back in the day so I'm a bit behind on a lot of the basics. Part of what's quite different now than the experience then is what modern messenger protocols seem to be used for, as in they have public channels dedicated to topics that function...

Max_P ,

Matrix is for chatting, not posts.

When it goes well you get live, interactive support and your question answered fairly quickly. Nice and convenient. But as you've said already, it has drawbacks, and that's where forums and things like Lemmy come in, where sometimes you get replies days later.

They're different systems that reach different audiences. You use whichever based on the needs and complexity. What sucks is when the chat rooms develop some knowledge that doesn't get known outside and it's also not indexed anywhere on the web. Some things are better discussed in forum format (or mailing lists if you're very oldschool), while others are just better interactively and the back and forth on a public forum would just be painful.

Usually there's a bit of an overlap at least, where users are usually in Discord/Matrix/IRC and some forum or reddit or fediverse community at the same time.

Max_P ,

That's why half decent VPN apps also add firewall rules to prevent leakage. Although nothing can beat Linux and shoving the real interface in a namespace so it's plainly not available to anything except the VPN process.

Max_P ,

Some providers have managed to make split tunnelling work fine, so I suspect those are not affected because they override the routing at the driver level. It's really only the kinda lame OpenVPN wrappers that would be affected. When you have the custom driver, you can affect the routing. It's been a while since I've tested this stuff on Windows since obviously I haven't been paid to do that for 6 years, but I don't even buy that all providers are affected or that it's unfixable. We already had workarounds for that when I joined PIA, so it's probably been a known thing for at least a decade.

The issue we had was that sometimes the client would forget to remove the firewall rules or to add the routes back, and it would break people's internet entirely. Not great, but a good problem to have in context.

Max_P ,

Most VPN providers don't use DHCP. OpenVPN emulates and hooks DHCP requests client-side to hand the OS the IP it negotiated over the OpenVPN protocol in a more standard way (unless you use layer 2 tunnels, which VPN providers don't because they're useless for that use case). WireGuard doesn't support DHCP at all; the IP always comes from configuration.
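As an illustration, a WireGuard client's tunnel address is plain static configuration handed out by the provider at signup (keys, names and addresses below are all made up):

```
[Interface]
PrivateKey = <client-private-key>
# Tunnel IP, fixed in the config, never obtained via DHCP:
Address = 10.64.0.23/32
DNS = 10.64.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
```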

Max_P ,

The attack vector here seems to be public WiFi like coffee shops, airports, hotels and whatnot. The places you kinda do want to use a VPN.

On those, if they're not configured well such as coffee shops using consumer grade WiFi routers, an attacker on the same WiFi can respond to the DHCP request faster than the router or do an ARP spoof attack. The attacker can proxy the DHCP request to make sure you get a valid IP but add extra routes on top.

Max_P ,

Adding routes lets clients reach other things on the network directly, removing some load from the router. For example, to reach another office location through a tunnel, you can add a route to 10.2.0.0/16 via 10.1.0.4 and the clients will direct that traffic at the appropriate gateway themselves.

Arguably one should design the network such that this is not necessary but it's useful.
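For instance, with dnsmasq as the DHCP server, that extra route can be pushed via classless static routes (a sketch; the addresses match the office-tunnel example above):

```
# DHCP option 121 (classless static routes, RFC 3442):
# clients get 10.2.0.0/16 via 10.1.0.4 in addition to their default route
dhcp-option=121,10.2.0.0/16,10.1.0.4
```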

Max_P ,

The guy that manages Kbin has been having personal issues and stepped away from the fediverse, so yeah, Kbin is kind of in limbo at the moment and indeed not well moderated. There are mods, but there's only so much they can do. The software doesn't federate the deletions, so even if the posts are gone on Kbin, they remain everywhere else.

Max_P ,

I'll never understand the people that fake these kinds of things. Fake watches, fake followers, fake views, fake likes, fake jobs. Why?

What's attractive about likes and views anyway? Why would I care that my date has 0 followers or a million followers? If anything it means they'll constantly be busy streaming.

Max_P ,

Can confirm they block rooted Android users intentionally and completely silently, at least when using Google's RCS servers. The message just doesn't send and is automatically deemed spam if you don't pass Play Integrity. And the only RCS-capable app is Google's Messages; third-party apps can only access SMS and MMS functionality.

So yeah, fuck RCS really. I was completely on board with RCS until that. Apple was right on that one. It won't fix messaging, it just puts it in Google's hands unless carriers finally decide to roll out real RCS instead of relying on Google to provide it.

Third party apps had that resolved a decade ago, and Signal is just plain better.

Max_P ,

There's always the command escape hatch. Ultimately the roles you'll use probably do the same. Even a plugin would do the same: all the ZFS tooling eventually shells out to the zfs/zpool binaries, and it's probably the same with btrfs. Those are just very complex filesystems; it would be unreliable to reimplement them in Python.

We use tools to solve problems, not make it harder for no reason. That's why command/shell actions exist: sometimes it's just better to go that way.

You can always make your own plugin for it, but you're still just writing extra code to eventually still shell out into the commands and parse their output.
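A sketch of what that escape hatch looks like as an Ansible task (the dataset name and error string are illustrative assumptions):

```yaml
# Shell out to the zfs CLI, with guards to keep the task idempotent.
- name: Ensure the tank/backups dataset exists
  ansible.builtin.command: zfs create tank/backups
  register: zfs_create
  failed_when:
    - zfs_create.rc != 0
    - "'dataset already exists' not in zfs_create.stderr"
  changed_when: zfs_create.rc == 0
```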

Max_P ,

It could be a disk slowly failing but not throwing errors yet. Some drives really do their best to hide that they're failing, so I would take even a passing SMART test with a grain of salt.

I would start by making sure you have good recent backups ASAP.

You can test the drive performance by shutting down all VMs and using tools like fio to do some disk benchmarking. It could be a VM causing it. If it's an HDD in particular, the random reads and writes from VMs can really cause seek latency to shoot way up. Could be as simple as a service logging some warnings due to junk incoming traffic, or an update that added some more info logs, etc.
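A minimal fio job file for that kind of benchmark (the device path is a placeholder; readonly=1 keeps the run non-destructive):

```
; Random-read latency/throughput test; run with: fio randread.fio
[randread]
; Replace with the disk under test:
filename=/dev/sdX
rw=randread
bs=4k
ioengine=libaio
iodepth=32
runtime=60
time_based=1
readonly=1
```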

Why is replacement for home device controls so complicated?

I recently learned about Home Assistant here on Lemmy. It looks like a replacement for Google Home, etc. However, it requires an entire hardware installation. Proprietary products just use a simple app to manage and control devices, so can someone explain why a pretty robust dedicated device is necessary as a replacement? The...

Max_P ,

Even then, those requirements are easily satisfied by a Raspberry Pi and most other SBCs out there. Seems rather reasonable to dedicate one to HA. It's not too crazy when you take into consideration how powerful cheapo hardware can be these days.

Max_P ,

The federation aspect adds complexity. A lot of complexity.

The only thing the fediverse might enable is nobodies like me can theoretically write social media software and actually get them successful without becoming a VC funded social media startup and have to resort to ads and premium tiers.

But things that couldn't be done without the fediverse as a base? Nah not really.

Note that the concept of federation is really old. Emails are a form of federation. XMPP was federated too. Heck, Diaspora* is pretty old and tried to make open Facebook for almost as long as Facebook's been mainstream.

Max_P ,

That's fine, the stock is up this quarter with all the hype, they'll deal with the next quarter when it comes.

This reeks of "make a chip better than Apple's or y'all are fired" and the ensuing lies throughout the company about the actual performance of the chip to appease management.

Max_P ,

It's obviously pretty valuable. How would we feel if, say, China decided Microsoft/Google/AWS/Oracle had to sell to a Chinese company on the grounds of national security? They'd rather pull out too, despite China being a very large market. Or what happens if other countries start demanding the same?

Pretty sure ByteDance would rather keep their IP.

And if they sell, do they keep the rights for the other countries or do those belong to the US now?

Anti-web discrimination by banks and online services - is this even legal?

Banks, email providers, booking sites, e-commerce, basically anything where money is involved, it's always the same experience. If you use the Android or iOS app, you stayed signed in indefinitely. If you use a web browser, you get signed out and asked to re-authenticate constantly - and often you have to do it painfully using a...

Max_P ,

That's a safety thing. Phones are usually owned by one person or possibly shared in the family, but the security is such that app data is per-user anyway.

Websites though, people still sign in from all sorts of devices and often wildly insecure ones such as public/work computers, one malware away from hackers having access to your bank account.

Inconvenient for advanced users like us, but it would literally make all of those refund scams so much easier to pull off because they wouldn't even have to trick the victims into logging into their bank: blank the screen, transfer the money, tell them their computer is all fixed, bye.

Max_P ,

If your bank really spies on you through its app, I would change banks. Neither of my bank apps even runs in the background or requests sensitive permissions. I will happily change my mind if you can show any proof that this is happening.

It's purely security. On Windows, and largely on desktop Linux as well, any app can easily look at other apps' data; that's why there are so many browser credential stealers. Maybe you'll never be a victim of this sort of attack, but if it does happen your bank account is gone.

Android and iOS have complete data isolation between apps. Unless you have root on it, even if you install malware and give it the maximum amount of permissions Android can possibly give, it can't access your auth cookies from the bank app. The bank app can't even access them either until you input a pin or biometric data to get it from the TEE.

Thus it's safe for banks to actually let people stay logged in with reduced identification. Browsers can't offer that, not without something like Web Environment Integrity.

We're an absolutely minuscule minority that cares, and could use a stay logged in feature safely in a browser environment.

Dealing with fraud cases is expensive for the banks, they have good reasons to ensure you can only access your bank account under safe conditions. The average person doesn't even know what a web browser is, they know they click the Google and enter what site they want to go to into Google and search for it. They're the people that get scammed on the phone. They're the people that have their entire life savings wired overseas.

Just let your password manager fill in the login every time, it's not hard.

Max_P ,

on a closed-source software stack

Android is open-source. My phone runs an open-source build of it.

At this point it's barely any worse than a web browser. I know it's sandboxed and can't access anything I don't want it to. All it lacks is isolation from the kernel, since web browsers run JavaScript while Android apps run native code.

Worst comes to worst you just run the app in Waydroid.

Max_P , (edited )

Very minimal. Mostly just run updates every now and then and fix what breaks which is relatively rare. The Docker stacks in particular are quite painless.

Couple websites, Lemmy, Matrix, a whole email stack, DNS, IRC bouncer, NextCloud, WireGuard, Jitsi, a Minecraft server and I believe that's about it?

I'm a DevOps engineer at work, managing 2k+ VMs that I can more than keep up with. I'd say it varies more with experience and how it's set up than how much you manage. When you use Ansible and Terraform and Kubernetes, the count of servers and services isn't really important. One, five, ten, a thousand servers, it matters very little since you just run Ansible on them and 5 minutes later it's all up and running. I don't use that for my own servers out of laziness but still, I set most of that stuff 10 years ago and it's still happily humming along just fine.

Max_P ,

You probably need the server to do relatively aggressive keepalive. You go through CGNAT, so if the server doesn't talk over the VPN for, say, 30 seconds, the NAT may drop the mapping and the connection is gone. WireGuard doesn't send any packet unless it's actively talking to the other peer, so you need to enable keepalive so it sends traffic often enough that the mapping doesn't drop, and if it does, the tunnel is quickly brought back up.
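In WireGuard that's a single option on the peer behind the CGNAT (values here are illustrative; 25 seconds is the commonly suggested interval):

```
[Peer]
PublicKey = <other-end-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 192.168.3.0/24
# Send a keepalive every 25s so the NAT mapping survives:
PersistentKeepalive = 25
```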

Also, if you don't NAT the VPN, make sure everything has a route back through the VPN. If 192.168.1.34 (main location) talks to 192.168.2.69 (remote location) over a VPN on 192.168.3.0/24 without NAT, both ends need to know to route it through the VPN network. Your PiVPN probably does NAT, so it works one way but not the other. Traceroute from both ends should give you some insight.

That should absolutely work otherwise.

Instagram Advertises Nonconsensual AI Nude Apps (www.404media.co)

Instagram is profiting from several ads that invite people to create nonconsensual nude images with AI image generation apps, once again showing that some of the most harmful applications of AI tools are not hidden on the dark corners of the internet, but are actively promoted to users by social media companies unable or...

Max_P ,

Seen similar stuff on TikTok.

That's the big problem with ad marketplaces and automation: the ads are rarely vetted by a human. You can just give them money and upload your ad, and they'll happily display it. They rely entirely on users to report them, which most people don't do because they're ads, and they won't take one down unless it's really bad.

Max_P ,

Actually, a good 99% of my reports end up in the video being taken down. Whether it's because of mass reports or whether they actually review it is unclear.

What's weird is the algorithm still seems to register that as engagement, so lately I've been reporting 20+ videos a day because it keeps showing them to me on my FYP. It's wild.

Max_P ,

because it failed to include the most important requirement to protect Americans' civil rights: that law enforcement get a warrant before targeting a US citizen

So, he wants the government to dig dirt on US residents, but only if they're immigrants or temporary workers.

Max_P ,

For the backup scenario in particular, it makes sense to pipe them through right to the destination. Like, tar -zcvf - somefiles | ssh $homeserver 'dd of=backup.tar.gz', or mysqldump | gzip -c | ssh $homeserver 'dd of=backup.sql.gz'. Since it's basically a download from your home server's perspective it should be pretty fast, and you don't need temporary space at all on the VPS.

File caching might be a little tricky. You might be best off self-hosting some kind of object storage with Varnish/NGINX/dedicated caching proxy software in front of it on your VPS, so it can cache the responses but will ultimately forward to the home server over the VPN when something isn't cached.
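A rough sketch of that caching layer in NGINX (hostnames, IPs and sizes are made up; the backend is the object store at home, reached over the VPN):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=media:10m max_size=10g inactive=7d;

server {
    listen 443 ssl;
    server_name media.example.com;

    location / {
        # Object storage at home, reached through the VPN:
        proxy_pass http://192.168.3.2:9000;
        proxy_cache media;
        # Keep successful responses cached for a week:
        proxy_cache_valid 200 7d;
    }
}
```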

If you use NextCloud for your photos and videos and stuff, it can use object storage instead of local filesystem, so it would work with that kind of setup.

Max_P ,

Kbin is not currently maintained due to the guy that makes it having personal issues and not having time to keep up with it. Some instances are even defederating from Kbin due to spam not being cleaned up, and also some bugs that send the same activities over and over again.

No spam on my end on Lemmy.

Why is Matrix mentioned more often than XMPP in self hosted forums?

I'm looking into hosting one of these for the first time. From my limited research, XMPP seems to win in every way, which makes me think I must be missing something. Matrix is almost always mentioned as the de-facto standard, but I rarely saw arguments why it is better than XMPP?...

Max_P ,

Everyone ends up on MS Teams because they bundle it with Office365, so execs have the choice of "free" or another $12/mo/user for Slack. It immediately makes it a case of "justify how Slack is so much better we spend thousands on it when Microsoft gives us Teams for free". Those execs don't use chat software in the first place.

That's why the EU forced them to unbundle Teams.

Max_P ,

Backup codes. You're supposed to print them out and put them in a fire safe or something. They're longer, not time based, and valid until you rotate them. With those you can lose everything and still access your accounts.

My KeePass database is also synchronized locally on most of my devices, so even if my server is dead I'm not really locked out, I just have annoying merge conflicts to resolve.

Also, Yubikeys. They're nice. If whatever blackout destroys your Yubikey, you have much worse problems to worry about than checking your email.

Max_P ,

Yeah similar setup except I use NextCloud.

KeepassDX is great, can use it with just about anything too. I used it over sftp for a bit. It'll happily do Google Drive, OneDrive, DropBox and just about anything that implements the right content providers.

Going through the provider is nice, it gives NextCloud an opportunity to sync it before it hands it over to KeepassXC, and knows when it gets saved too so it can sync it immediately. I don't think I've had merge conflicts since, and I still have my offline copy just in case.

The annoying part is when you've added a password on one side and cleaned up a bunch of passwords on the other. When they get merged, it doesn't merge what changed; it merges the databases together, so your cleanup is gone. It's safe at least, and exceedingly rare.

Max_P ,

Depends what it does.

Let's say you run a Reddit/Twitter/YouTube proxy. Yeah, the services ultimately still get your server's IP, but you just appear as coming from some datacenter somewhere. So while they can know it's your traffic, they can't track you on the client-side frontend and see that you were at home (and where your home is), then on mobile data, then on a guest WiFi, then at some corporate place. The server is obfuscating all of that. And you control the server, so your server isn't tracking anything.

The key to those services being more private is actually having more people use them. Let's say you now have 10 people using your Invidious instance. It'll fudge your watch pattern a fair bit, and any watched video could be from any of the 10 users. If they don't detect that, they've built a completely bogus profile that's the combination of all 10 of you.

You can always add an extra layer and make it go through a VPN or Tor, but if you care that much you should already always be on a VPN anyway. But it does have the convenience that you can use it privately even without a VPN.


A concrete example: I run my own Lemmy server. It's extremely public, and yet I find it more private than Reddit would be. By having my own server, all of my client-side actions are between me and my server. Reddit on the other hand can absolutely log and see every interaction I have with their site, especially now that they've killed third-party apps. It knows every thread I open and can track a lot of my attention. It knows if I'm skimming through comments or actually reading, everything. In contrast, the fediverse doesn't know what I actually read: my server collects everything regardless. On the other hand, all my data including votes is totally public, so I gain privacy in one way but lose some the other way.

Privacy is a tradeoff. Sometimes you're willing to give away some information to protect other information.


For selfhosting as a whole, sure some things are just frontends and don't give you much like an Invidious instance, but others can be really good. NextCloud for example, I know my files are entirely in my control and get a similar experience to using Google Drive: I can browse my stuff from anywhere and access my files. I have my own email, so nobody can look at my emails and give me ads based on what newsletter I get.

It doesn't have to be perfect, if it's an improvement and gets you into selfhosting more stuff down the line, it's worth it.

Max_P ,

Seems like a decent start! My recommendation is to pick something you'll actually use, so you actually want to keep that VPS going. If for you that's the silver bullet, then have fun!

NextCloud is relatively easy to get going and useful for sharing files. I find it convenient combined with KeePass/KeePassDX so my passwords are synchronized and kept safe, although I'm considering an upgrade to Bitwarden.

Matrix is also reasonably easy to set up and you can set up bridges to just about anything.

I also host my own email, but that's a special kind of hell for a beginner, with loads of things entirely out of your control.

Max_P ,

https://join-lemmy.org/docs/contributors/04-api.html

Lemmy is the API; it's always there. The web UI is just a client like any other and uses the Lemmy API. So you can just call the API to register an account, reset a password, log in, everything. You don't need to register tokens or apps: you just log into your account, get a session token and you're good to go!

That makes it easy to discover the API as well, since you can just open your browser's devtools and inspect the network requests. It's the same API, so you can just go ahead and implement the same in your code. No second class clients for Lemmy, they all use the same public API.
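As a sketch of how little ceremony is involved (the endpoint path and field names are my assumption from the linked docs; verify against the current API version):

```python
import json

def login_request(instance: str, user: str, password: str) -> tuple[str, str]:
    """Build the URL and JSON body for a Lemmy login call."""
    url = f"https://{instance}/api/v3/user/login"
    body = json.dumps({"username_or_email": user, "password": password})
    return url, body

# POSTing this with any HTTP client returns a session JWT to pass on
# subsequent calls; no app registration or API key involved.
url, body = login_request("lemmy.example.com", "alice", "hunter2")
print(url)  # https://lemmy.example.com/api/v3/user/login
```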

Plus of course it also implements the ActivityPub APIs for federation, which likewise don't require registration or anything special.

Max_P ,

Do you have spare drives to test? Can be really small or mismatched, it's just for testing.

The idea is as follows: make the exact same RAID with the old controller on test drives, then put them in the target controller with hopefully the same settings and see if it's happy. Make sure to have some large files with known checksums on it, just to test if the data is correct and not corrupted in subtle ways.

If it works, then it should work with the real drives. If it doesn't, good luck.
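The checksum step can be sketched in Python (file name is arbitrary; hashlib is stdlib):

```python
import hashlib
import os

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large test files don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Write a known file onto the test array...
with open("testfile.bin", "wb") as f:
    f.write(os.urandom(1024 * 1024))
before = sha256_of("testfile.bin")

# ...move the drives to the new controller, remount, then verify:
after = sha256_of("testfile.bin")
print(before == after)  # True means the data came through uncorrupted
```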

Also, RAID 1 with 6 drives doesn't really make sense. RAID 1 would be mirrors, and if your data had 6 copies I think you'd care way too much about it to even consider doing this. It's probably RAID 5/6/10, which adds parity and striping to the mix and significantly increases the chances of incompatibility.

Max_P ,

That's like stage one where you filter out the obviously incompetent ones.

You wouldn't believe how many candidates with years of experience can't figure out those simple problems. Or even the super well known fizzbuzz.
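For reference, the whole FizzBuzz screen amounts to this:

```python
def fizzbuzz(n: int) -> str:
    """Multiples of 3 -> Fizz, of 5 -> Buzz, of both -> FizzBuzz."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print(" ".join(fizzbuzz(i) for i in range(1, 16)))
# 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```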

It's insane: people will claim 2-3 years of experience with Ansible, yet they can't even get a file copied. A couple years of Python, yet they don't understand async, generators or other pretty basic features.

People have always been lying a bit about their experience but it's getting way, way out of control.

Max_P ,

Your router has no idea what domain has been used for a given connection, it knows the IP and only the IP.

HAproxy and NGINX can, because for HTTP you just need to look at the Host header, and for HTTPS, the SNI extension for TLS. Anything that uses TLS should be doable with HAproxy (you don't even need to decrypt the content, just read the SNI and pass it through to the backend as-is).
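A sketch of that SNI-based passthrough in HAProxy (hostnames and backends are made up; the TLS stream is forwarded without being decrypted):

```
frontend tls_in
    mode tcp
    bind *:443
    # Wait for the TLS ClientHello so the SNI can be read:
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend bk_jellyfin if { req_ssl_sni -i jellyfin.example.com }
    use_backend bk_homeassistant if { req_ssl_sni -i ha.example.com }

backend bk_jellyfin
    mode tcp
    server jf 192.168.1.10:8920

backend bk_homeassistant
    mode tcp
    server ha 192.168.1.11:8443
```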

For other protocols, your only options are either the protocol supports it, or you have to use multiple ports. Or a VPN, which at that point removes the problem entirely.

Max_P ,

Just fire up the game, connect, and play, as if the server is hosted on some VPS.

The best you can do without clients for the users is to set up a VPS and have your server VPN into it so the VPS can expose the game port through the VPN.

Other than that there's no escaping either clients for everyone, or open ports on your router. Something somewhere has to be accepting incoming connections.

Max_P ,

OnePlus no longer supports that as of ColorOS OxygenOS 12 unfortunately.

Max_P ,

There's no kernel-space code in systemd.

Max_P ,

I also use systemd a lot and it baffles me people can claim sysvinit was more reliable with a straight face.

Half the time I restarted MySQL in the sysvinit days (pre-upstart as well), it would fail to stop it then try to start a new instance of it with the old one still running and the only way to fix it was to manually stop the other instance.

Process management is like the one thing systemd really does well thanks to cgroups, it's impossible for it to lose track of processes because the process lied about its pidfile.

Max_P , (edited )

So does sysvinit. PID 1 has to be root to do its job. Under sysvinit it is the responsibility of each daemon to drop privileges on their own if they wish to do so.

Systemd can handle your services such that they start unprivileged from the get-go. It also offers a lot of isolation by default with options like PrivateTmp, ProtectHome and ProtectSystem, powered by namespaces and cgroups. It can effectively run your services like they're in a Docker container if you want.

A lot of systemd also runs as separate services with their own users. Only the core init part really runs as root; it prefers to drop privileges and apply cgroup isolation wherever it makes sense to do so. The logger for example runs as systemd-journald, the DNS resolver runs as systemd-resolved. They're part of the systemd package, but far from all of it runs as root. Systemd can even perform certain privileged operations on a service's behalf, such as binding port 80/443, so the web server doesn't need root at all to run.
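A sketch of a hardened unit along those lines (the service name, binary and user are made up):

```ini
[Unit]
Description=Example unprivileged web service

[Service]
User=www-data
ExecStart=/usr/bin/myserver
PrivateTmp=yes
ProtectHome=yes
ProtectSystem=strict
# Lets the service bind :80/:443 without running as root:
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
```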

It also lets users do certain operations without elevating privileges through sudo, which in many cases means you don't have to grant sudo NOPASSWD on specific commands. If your web developers need to be able to restart the web server, you can just add a Polkit rule that allows restarting that service without privileges. Systemd is all D-Bus, so you can control access at a very granular level: you can grant only start and reload if you want.

Sysvinit is just shell scripts running as root. There is no security whatsoever, it was never sysvinit's job to secure the system. It's mostly fine as all the tooling for it also requires root to use. But it does require root 100% of the time to interact with it.


There's good reasons to prefer sysvinit, those are just common FUD systemd haters keep spreading. There's no need to discredit or outright lie about systemd to justify preferring sysvinit: the simplicity of a few shell scripts and not needing 99% of what systemd does is a perfectly valid argument on its own.

I have boxes that use systemd very heavily and some that have a custom bash script as the init because the box only needs an IP and to start a single app. Right tool for the right job and stuff.

nginx proxy manager changes IP. How to get static container IP?

all the containers change IP addresses frequently. For home assistant a static IP address of the proxy manager is mandatory in order to reach it. For jellyfin it is useful to see which device accesses jellyfin. If the IP always changes, it doesn't work properly....

Max_P ,

The containers all have IPs unless you use the "host" network type, in which case the container just stays in the host's namespace, or "none", which ends up with an empty network namespace. And the IPs can indeed change. This is also why multiple containers can bind to the same port without colliding with each other.

Docker kind of hides it from you because when you use -p to publish a port, it sets up a proxy process for you to pass traffic from the port on the host to the port on the container.

You usually have a docker0 bridge with an IP like 172.17.0.1, and if you run ip a in a container it'll have a single interface with an IP like 172.17.0.2.

https://docs.docker.com/network/

Max_P ,

Those are just the basic ones, too. When macvlan, macvtap and ipvlan get involved it gets even crazier: you can directly attach containers to the network such that your router assigns them an IP via DHCP, like they're just more devices plugged into your network.

You can also share a network namespace with multiple containers, usually kubernetes/podman pods to enable for sidecar containers like Filebeat, Consul, Envoy, etcd and so on.

If you use rootless containers, it'll use slirp4netns to emulate the network entirely in userspace.

In the cloud you usually end up with your pods directly attached to your VPC as well, like AWS Fargate serverless offerings.

Max_P ,

IMO Mint is to Ubuntu what Manjaro is to Arch: a pile of duct tape in the name of user experience ready to blow at the worst time, down to the TLS certificate mishaps.

People pick really weird distros to worship...

Max_P ,

I used to work for PIA. The best users are the occasional user, and there's a lot of them. They cost little bandwidth, they pop on every now and then and off fairly quickly. Andrew also got pretty lucky, riding both the Bitcoin and Snowden waves. It probably did ultimately run at a loss at some point, but all the big ones could ride on their crypto payments rapidly increasing in value, and the hardcore privacy people were very happy to pay in crypto.

You can easily cram ~1000-5000 active users on a 10 Gbps server because you can assume most people are far from reaching gigabit on their own (OpenVPN's limitations helped a lot there). Even at just a dollar a year per user, you've still got 5 grand, which more than pays for the server, which really only needs a good NIC and a bunch of IPs. But remember, most of those users are idle or not connected at all, so you can have many more users than there is bandwidth available. And at that scale you get bulk discounts on the servers as you fill up a good rack or two.

I have to imagine at this point the market is incredibly saturated though, I left a bit over 6 years ago.

Max_P ,

There's always Waydroid. Might need some tweaks to make it believe it has a real phone number attached to it, but it should work.

Max_P ,

That article is from 2013, so I'm a bit skeptical about the claims about under 1 TB drives. It was probably reasonable advice back then when 1 TB capacities were sorta cutting edge. Now we have 20+ TB hard drives, nobody's gonna be making arrays of 750 GB drives.

I have two 4TB drives in a simple mirror configuration and have resilvered it a few times due to oopsies and it's been fine, even with my shitty SATA ports.

The main concern is that bigger drives take longer to resilver because, well, there's much more data to shuffle around. So logically, if you have 3 drives of the same age that have seen the same amount of activity and usage, when one gives up the other 2 are likely getting close as well. If you only have 1 drive of redundancy, this can be bad because you temporarily have no redundancy: one more drive failure and the zpool is gone. If you're concerned about them all failing at the same time, the best defense is either different drive brands or different drive ages.

But you do have backups, so, if that pool dies, it's not the end of the world. You can pull it back from your 18TB mirror array. And it's different drives, so those are unlikely to fail at the same time as your 3x4TB drives, let alone 2 more of them. You need 4 drives to give up in total in your particular case before your data is truly gone. That's not that bad.

It's a risk management question. How much risk do you tolerate? How's your uptime requirements? For my use case, I deemed a simple 2 drive mirror to be sufficient for my needs, and I have a good offsite backup on a single USB external drive, and an encrypted cloud copy of things that are really critical and I can't possibly lose like my Keepass database.
