
hedgehog

@hedgehog@ttrpg.network


hedgehog ,

UBI doesn’t give any power to those who own the means of automation, nor does it take power away from laborers. Automation does that. Automation reduces the leverage of the laborer by reducing the capitalist’s reliance on labor.

We have the same leverage regardless of whether we have UBI or not, but the leverage of employers is reduced with UBI. That said, if more people opt not to work thanks to UBI, then the people who choose to work will see their leverage increased.

hedgehog ,

“Supposed to” according to what?

If you’re in the US, Federal labor laws explicitly allow “meal periods” to not be paid, though short breaks must be paid. Neither is required to be offered to employees, though.

Source: https://www.dol.gov/general/topic/workhours/breaks

State laws differ, of course, and many states - e.g., California - have much more employee-friendly laws. However, even in CA, a meal period must be offered but isn’t required to be paid (unless it’s an on-duty meal break).

hedgehog ,

it's still not a profitable venture

Source? My understanding is that Google doesn’t publish YouTube’s expenses directly, but YouTube has been responsible for 10% of Google’s revenue for the past few years (on the order of $31.5 billion in 2023), and it’s more likely than not profitable when looked at in isolation.

hedgehog ,

Tons of laptops with top notch specs for 1/2 the price of a M1/2 out there

The 13” M1 MacBook Air is $700 new from Best Buy. Better specced versions are available for $600 used (buy-it-now) on eBay, but the base specced version is available for $500 or less.

What $300 or less used / $350 new laptops are you recommending here?

hedgehog ,

It’s more like paying the ticket without ever showing up in court. And at least where I live, I can do that.

hedgehog ,

The news sites can cover whatever they want. If their readers consume it, great - they’re writing to their audience. Doesn’t mean we can’t criticize it when it gets posted here.

hedgehog ,

If you’re talking about a stock Android OS on anything other than a Pixel, iOS wins in both regards. Stock on a Pixel, I don’t know that Apple is more secure, but if you’re installing apps via Google Play that use Google Play Services, iOS is certainly more private. Vs GrapheneOS on a Pixel, iOS is less private by far.

hedgehog ,

Better than bad is still “better.”

hedgehog ,

You think that Google Play Services is FOSS? Or that the version of Android on Samsung phones (as well as of most other Android phone manufacturers), including all baked in software, is FOSS?

hedgehog ,

And when you’re comparing two closed source options, there are techniques available to evaluate them. Based on the findings that people have published from using these techniques, Apple is not as private as they claim. This is most egregious when it comes to first party apps, which is concerning. However, when it comes to using any non-Apple app, they’re much better than Google is when using any non-Google app.

There’s enough overlap in skillset that pretty much anyone performing those evaluations will likely find it trivial to configure Android to be privacy-respecting - i.e., by using GrapheneOS on a Pixel or some other custom ROM - but most users are not going to do that.

And if someone is not going to do that, Android is worse for their privacy.

It doesn’t make sense to say “iPhones are worse at respecting user privacy than Android phones” when by default and in practice for most people, the opposite is true. What we should be saying is “iPhones are better at respecting privacy by default, but if privacy is important to you, the best option is to put in a bit of extra work and install GrapheneOS on a Pixel.”

hedgehog ,

The dice method is great. https://www.eff.org/dice

hedgehog ,

Being a bit pedantic here, but I doubt this is because they trained their model on the entire internet. More likely they added Reddit and many other sites to an index that can be referenced by the LLM and they don’t have enough safeguards in place. Look up “RAG” (Retrieval-augmented generation) if you want to learn more.
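
If you’re curious what that looks like in practice, here’s a toy sketch of the retrieve-then-prompt flow. The documents, scoring, and prompt format are all made up for illustration - real systems use vector search over embeddings - but it shows why indexing low-quality sources without safeguards leads to bad answers:

```python
# Toy RAG flow: retrieve the "most relevant" documents from an index, then stuff
# them into the prompt as context for the LLM. Everything below is illustrative.

INDEX = [
    "Forum post: the moon is made of cheese.",                 # low-quality source
    "Encyclopedia entry: the moon is mostly silicate rock.",   # better source
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap and return the top k."""
    words = set(query.lower().split())
    ranked = sorted(INDEX, key=lambda doc: len(words & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Whatever the retriever returns goes straight into the prompt - if the index
    # contains junk and there's no filtering, the model will confidently cite junk.
    context = "\n".join(retrieve(query))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("what is the moon made of"))
```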

hedgehog ,

Sure, and that’s roughly the same amount of entropy as a 13 character randomly generated mixed case alphanumeric password. I’ve run into more password validation prohibiting a 13 character password for being too long than for being too short, and for end-user passwords I can’t recall an instance where 77.5 bits of entropy was insufficient.

But if you disagree - when do you think 77.5 bits of entropy is insufficient for an end-user? And what process for password generation can you name that has higher entropy and is still easily memorized by users?
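
For reference, the arithmetic behind that comparison, assuming both are generated fully at random (77.5 bits is what six words off the 7,776-word list gives you):

```python
# Entropy comparison: 6 random diceware words vs. 13 random mixed-case
# alphanumeric characters. Entropy in bits = length * log2(choices per element).
from math import log2

diceware_words = 7776              # EFF long list: 6^5 entries
print(f"6 diceware words: {6 * log2(diceware_words):.1f} bits")                   # ~77.5

alphanumeric = 26 + 26 + 10        # a-z, A-Z, 0-9
print(f"13 random alphanumeric characters: {13 * log2(alphanumeric):.1f} bits")   # ~77.4
```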

hedgehog ,

Ah, fair enough. I was just giving people interested in that method a resource to learn more about it.

The problem is that your method doesn’t consistently generate memorable passwords with anywhere near 77 bits of entropy.

First, the example you gave ended up being 11 characters long. For a completely random password using alphanumeric characters + punctuation, that’s 66.5 bits of entropy. Your lower bound was 8 characters, which is even worse (48 bits of entropy). And when you consider that the process will result in some letters being much more probable, particularly in certain positions, that results in a more vulnerable process. I’m not sure how much that reduces the entropy, but it would have an impact. And that’s without exploiting the fact that you’re using quotes as part of your process.

The quote selection part is the real problem. If someone knows your quote and your process, game over, as the number of remaining possibilities at that point is quite low - maybe a thousand? That’s worse than just adding a word with the dice method. So quote selection is key.

But how many quotes is a user likely to select from? My guess is that most users would be picking from a set of fewer than 7,776 quotes, but your set and my set would be different. Even so, I doubt that the set an attacker would need to discern from is higher than 470 billion quotes (the equivalent of three dice method words), and it’s certainly not 28 quintillion quotes (the equivalent of 5 dice method words).

If your method were used for a one-off, you could use a poorly known quote and maybe have it not be in that 470 billion quote set, but that won’t remain true at scale. It certainly wouldn’t be feasible to have a set of 28 quintillion quotes, which means that even a 20 character password has less than 77.5 bits of entropy.

Realistically, since the user is choosing a memorable quote, we could probably find a lot of them in a very short list - on the order of thousands at best. Even with 1 million quotes to choose from, that’s at best 30 bits of entropy. And again, user choice is a problem, as user choice doesn’t result in fully random selections.

If you’re randomly selecting from a 60 million quote database, then that’s still only 36 bits of entropy. When the database has 470 billion quotes, that’ll get you to 49 bits of entropy - but good luck ensuring that all 470 billion quotes are memorable.
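
To spell out where those numbers come from - this assumes the quote really is picked uniformly at random from the database, and uses the roughly-a-thousand-possibilities estimate for the derivation process from above:

```python
# Quote-method entropy: bits from picking the quote plus bits from the
# letter-derivation process (estimated at ~1,000 possibilities, i.e. ~10 bits).
from math import log2

process_variants = 1_000
for quotes in (1_000_000, 60_000_000, 470_000_000_000):
    bits = log2(quotes) + log2(process_variants)
    print(f"{quotes:>15,} quotes -> ~{bits:.0f} bits")
# prints roughly 30, 36, and 49 bits respectively
```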

There are also things you can do, at an individual level, to make dice method passwords stronger or more suitable to a purpose. You can modify the word lists, for one. You can use the other lists. When it comes to password length restrictions, you can use the EFF short list #2 and truncate words after the third character without losing entropy - meaning your 8 word password only needs to be 32 characters long, or 24 characters, if you omit word separators. You can randomly insert a symbol and a number and/or substitute them, sacrificing memorizability for a bit more entropy (mainly useful when there are short password length limits).

The dice method also has baked-in flexibility when it comes to the necessary level of entropy. If you need more than 82 bits of entropy, just add more words. If you’re okay with having less entropy, you can generate shorter passwords - 62 bits of entropy is achieved with a 6 short-word password (which can be reduced to 18 characters) and a 4 short-word password - minimum 12 characters - still has 41 bits of entropy.
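
The numbers in that paragraph fall out of the same formula - bits per word is log2 of the list size, so entropy scales linearly with word count:

```python
# Dice-method entropy for the EFF lists: the long list has 7776 words (5 dice),
# the short lists have 1296 words (4 dice), so ~12.9 and ~10.3 bits per word.
from math import log2

long_list, short_list = 7776, 1296
print(f"6 long-list words:  {6 * log2(long_list):.1f} bits")   # ~77.5
print(f"8 short-list words: {8 * log2(short_list):.1f} bits")  # ~82.7
print(f"6 short-list words: {6 * log2(short_list):.1f} bits")  # ~62.0
print(f"4 short-list words: {4 * log2(short_list):.1f} bits")  # ~41.4
```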

With your method, you could choose longer quotes for applications you want to be more secure or shorter quotes for ones where that’s less important, but that reduces entropy overall by reducing the set of quotes you can choose from. What you’d want to do is to have a larger set of quotes for your more critical passwords. But as we already showed, unless you have an impossibly huge quote database, you can’t generate high entropy passwords with this method anyway. You could select multiple unrelated quotes, sure - two quotes selected from a list of 10 billion gives you 76.4 bits of entropy - but that’s the starting point for the much easier to memorize, much easier to generate, dice method password. You’ve also ended up with a password that’s just as long - up to 40 characters - and much harder to type.

This problem is even worse with the method that the EFF proposes, as it'll output passphrases with an average of 42 characters, all of them alphabetic.

Yes, but as pass phrases become more common, sites restricting password length become less common. My point wasn’t that this was a problem but that many site operators felt that it was fine to cap their users’ passwords’ max entropy at lower than 77.5 bits, and few applications require more than that much entropy. (Those applications, for what it’s worth, generally use randomly generated keys rather than relying on user-generated ones.)

And, as I outlined above, you can use the truncated short words #2 list method to generate short but memorable passwords when limited in this way. My general recommendation in this situation is to use a password manager for those passwords and to generate a high entropy, completely random password for them, rather than trying to memorize them. But if you’re opposed to password managers for some reason, the dice method is still a great option.

hedgehog ,

Just sharing this link to another comment I made replying to you, since it addresses your calculations regarding entropy: https://ttrpg.network/comment/7142027

hedgehog ,

Are you familiar with LaTeX? You can use plugins that generate PDFs that follow the PDF/X-1a standard and send the resulting PDFs to professional printers.

TeXStudio is a FOSS LaTeX editor that looks well-suited for your use-case.

Since LaTeX documents are just text and your images are already sorted and so on, you could even write a script to construct the first draft of your doc with the pictures arranged consistently, based on the files in your file system, then edit it to tweak it to perfection. You could also, or alternatively, create or reuse some LaTeX macros or templates.
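
To make the scripting idea concrete, here’s a rough sketch. The folder name, file extension, and layout are assumptions - adapt them to however your files are actually organized, then compile the output with your usual LaTeX toolchain:

```python
# Hypothetical first-draft generator: one figure per image found in ./photos/.
from pathlib import Path

photos = sorted(Path("photos").glob("*.jpg"))  # assumed folder and format

lines = [
    "\\documentclass{article}",
    "\\usepackage{graphicx}",
    "\\begin{document}",
]
for photo in photos:
    caption = photo.stem.replace("_", " ")  # e.g. "2023_06_trip" -> "2023 06 trip"
    lines += [
        "\\begin{figure}[htbp]",
        "  \\centering",
        f"  \\includegraphics[width=0.8\\textwidth]{{{photo.as_posix()}}}",
        f"  \\caption{{{caption}}}",
        "\\end{figure}",
    ]
lines.append("\\end{document}")

Path("draft.tex").write_text("\n".join(lines) + "\n")
```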

hedgehog ,

I haven’t worked with Scribus but I’ve heard good things about it, so I don’t think you’d be making a wrong choice by going with it. For this use case, the main reasons I can think of for why LaTeX would be preferable would be:

  • if you preferred working with it, or with a particular LaTeX tool
  • if you want to learn one tool or the other
  • if being able to write a script to create the output is something you want to do and the equivalent is not possible in Scribus
hedgehog ,

Why should shadow bans be illegal?

hedgehog ,

Because a good person would never need those. If you want to have shadowbans on your platform, you are not a good one.

This basically reads as “shadow bans are bad and have no redeeming factors,” but you haven’t explained why you think that.

If you’re a real user and you only have one account (or have multiple legitimate accounts) and you get shadow-banned, it’s a terrible experience. Shadow bans should never be used on “real” users even if they break the ToS, and IME, they generally aren’t. That’s because shadow bans solve a different problem.

In content moderation, if a user posts something that’s unacceptable on your platform, generally speaking, you want to remove it as soon as possible. Depending on how bad the content they posted was, or how frequently they post unacceptable content, you will want to take additional measures. For example, if someone posts child pornography, you will most likely ban them and then (as required by law) report all details you have on them and their problematic posts to the authorities.

Where this gets tricky, though, is with bots and multiple accounts.

If someone is making multiple accounts for your site - whether by hand or with bots - and using them to post unacceptable content, how do you stop that?

Your site has a lot of users, and bad actors aren’t limited to only having one account per real person. A single person - let’s call them a “Bot Overlord” - could run thousands of accounts - and it’s even easier for them to do this if those accounts can only be banned with manual intervention. You want to remove any content the Bot Overlord’s bots post and stop them from posting more as soon as you realize what they’re doing. Scaling up your human moderators isn’t reasonable, because the Bot Overlord can easily outscale you - you need an automated solution.

Suppose you build an algorithm that detects bots with incredible accuracy - 0% false positives and an estimated 1% false negatives. Great! Then, you set your system up to automatically ban detected bots.

A couple days later, your algorithm’s accuracy has dropped - from 1% false negatives to 10%. 10 times as many bots are making it past your algorithm. A few days after that, it gets even worse - first 20%, then 30%, then 50%, and eventually 90% of bots are bypassing your detection algorithm.

You can update your algorithm, but the same thing keeps happening. You’re stuck in an eternal game of cat and mouse - and you’re losing.

What gives? Well, you made a huge mistake when you set the system up to ban bots immediately. In your system, as soon as a bot gets banned, the bot creator knows. Since you’re banning every bot you detect as soon as you detect them, this gives the bot creator real-time data. They can basically reverse engineer your unpublished algorithm and then update their bots so as to avoid detection.

One solution to this is ban waves. Those work by detecting bots (or cheaters, in the context of online games) and then holding off on banning them until you can ban them all at once.

Great! Now the Bot Overlord will have much more trouble reverse-engineering your algorithm. They won’t know specifically when a bot was detected, just that it was detected within a certain window - between its creation and ban date.

But there’s still a problem. You need to minimize the damage the Bot Overlord’s accounts can do between when you detect them and when you ban them.

You could try shortening the time between ban waves. The problem with this approach is that the ban wave approach is more effective the longer that time period is. If you had an hourly ban wave, for example, the Bot Overlord could test a bunch of stuff out and get feedback every hour.

Shadow bans are one natural solution to this problem. That way, as soon as you detect a bot, you can prevent it from causing more damage. The Bot Overlord can’t quickly detect that their account was shadow-banned, so their bots will keep functioning, giving you more information about the Bot Overlord’s system and allowing you to refine your algorithm to be even more effective in the future, rather than the other way around.

I’m not aware of another way to effectively manage this issue. Do you have a counter-proposal?

Out of curiosity, do you have any experience working in content moderation for a major social media company? If so, how did that company balance respecting user privacy with effective content moderation without shadow bans, accounting for the factors I talked about above?

hedgehog ,

But major social media companies do exist. If your real point was that they shouldn’t, you should have said that upfront.

hedgehog ,

That's a bit abstract, but saying what others "should" do is both stupid and rude.

Buddy, if anyone’s being stupid and rude in this exchange, it’s not me.

And any true statement is the same as all other true statements in an interconnected world.

It sounds like the interconnected world you’re referring to is entirely in your own head, with logic that you’re not able or willing to share with others.

Even if I accepted that you were right - and I don’t accept that, to be clear - your statements would still be nonsensical given that you’re making them without any effort to clarify why you think them. That makes me think you don’t understand why you think them - and if you don’t understand why you think something, how can you be so confident that you’re correct?

hedgehog ,

No, I don’t think anything you do has any bearing on reality, period.

The current job market is beyond fucked.

Most job responses I get is they're not hiring anymore due to restructuring. Aka they just go for pure profit increase while overworking the understaffed employees. No more remote interviews either. Tons of requests to do one sided video interviews. And the pays appear lower than they were during the main pandemic, even though...

hedgehog ,

If you’re in the US, unpaid overtime is only permissible if you’re salaried exempt. To be salaried exempt:

  • you must make at least $684 every week ($35,568/year)
  • your primary job responsibility must be one of the following:
    • executive - managing the enterprise, or managing a customarily recognized department or subdivision; you must also regularly direct the work of at least two FTEs and be able to hire / fire people (or be able to provide recommendations that are strongly considered)
    • administrative - office or non-manual work directly related to the management or general business operations, or
    • learned professional - work which is predominantly intellectual in character and which includes work requiring the consistent exercise of discretion and judgment, in the field of science or learning
    • creative professional - work requiring invention, imagination, originality or talent in a recognized field of artistic or creative endeavor
    • IT related - computer systems analyst, computer programmer, software engineer or other similarly skilled worker in the computer field
    • sales
    • HCE (you must be making at least $107k per year)
  • your pay must not be reduced if your work quality is reduced or if you work fewer hours
    • for example, if you work 5 days a week, for an hour a day, you must get the same pay as if you worked 8 hours every day. There are some permissible deductions they can make - like if you miss a full day - and they can require you to use vacation time or sick time, if you have it - and of course they can fire you if you’re leaving without completing your tasks… but they still have to pay you.

Check out https://www.dol.gov/agencies/whd/fact-sheets/17a-overtime for more details on the above.

It’s quite possible you’re eligible for back-paid overtime.

Note also that the minimum exempt wages are increasing in July.

Re your “cover my expenses just to exist” bit and the follow-up about employers catching on and pushing abusive shit… if this is related to a disability make sure to look into getting that on record and seeking an accommodation. If your primary job duty is X and they’re pushing you to do Y, but your disability makes Y infeasible, then it’s a pretty reasonable accommodation to ask to not have to do Y (assuming your HCP agrees, of course).

Problems with creating my own instance

I am currently trying to create my own Lemmy instance and am following the join-lemmy.org docker guide. But unfortunately docker compose up doesn't work with the default config and throw's a yaml: line 32: found character that cannot start any token error. Is there something I can do to fix this?...

hedgehog ,

If you use that docker compose file, I recommend you comment out the build section and uncomment the image section in the lemmy service.

I also recommend you use a reverse proxy and Docker networks rather than exposing the postgres instance on port 5433, but if you aren’t familiar with Docker networks you can leave it as is for now. If you’re running locally and don’t open that port in your router’s firewall, it’s a non-issue unless there’s an attacker on your LAN. But since you’re not gaining anything from exposing it (unless you regularly need to connect to the DB directly - and as a one-off you could temporarily add the port mapping), it doesn’t make sense to increase your attack surface for no benefit.

hedgehog ,

Definitely not, I do the same.

I installed 64 GB of RAM in my Windows laptop 4 years ago and had been using 64 GB of RAM in the laptop that it replaced - which was from 2013 (I think I bought it in 2014-2015). I was using 32 GB of RAM prior (on Linux and Windows laptops), all the way back to 2007 or so.

My work MacBook Pros generally have 32-64 GB of RAM, but my personal MacBook Air (the 15” M2) has 16 GB, simply because the upgrade wasn’t a cost effective one (and the M1 before it had performed great with 16) and because I’d only planned on using it for casual development. But since I’ve been using it as my main personal development machine and for self-hosted AI, and have run into its limits, when I replace it I’ll likely opt for 64 GB or more.

My Windows gaming desktop only has 32 GB of RAM, though - that’s because keeping the RAM timings tight with more RAM - particularly with 4 sticks - was prohibitively expensive when I built it, and then when the cost wasn’t a concern and I tried to upgrade, I learned that my third and fourth RAM slots weren’t functional. I could upgrade to 64 GB in two slots but it wouldn’t really be worth it, since I only use it for gaming.

My Linux desktop / server has 128 GB of ECC RAM, though, because that’s as much as the motherboard supported.

hedgehog ,

I’m not the person you responded to, but I can say that it’s a perfectly fine take. My personal experience and the commonly voiced opinions about both browsers support this take.

Unless you’re using 5 tabs max at a time, my personal experience is that Firefox is more than an order of magnitude more memory efficient than Chrome when dealing with long-lived sessions with the same number of tabs (dozens up to thousands).

I keep hundreds of tabs open in Firefox on my personal machine (with 16 GB of RAM) and it’s almost never consuming the most memory on my system.

Policy prohibits me from running Firefox on my work computer, so I have to use Chrome. Even with much more memory (both on 32 GB and 64 GB machines) and far fewer tabs (20-30 at most vs 200-300), Chrome often ends up taking up far too much memory and suffering a substantial performance drop, and I have to go through and prune the tabs I don’t need right now, bookmark things that can be done later, etc.

Also, see https://www.techspot.com/news/102871-zero-regrets-firefox-power-user-kept-7500-tabs.html - I’ve never seen anything similar for Chrome and wasn’t able to find anything.

hedgehog ,

They don’t call them “mp3 players” anymore - that may be why you can’t find what you need. Look for a “DAP” instead - digital audio player - and you’ll probably have more luck.

For example, the Fiio M7 is $200 and is pretty full-featured. I have the M6 and I think I paid around $100, but I don’t think it’s being sold anymore.

hedgehog ,

You can use YaCy, which can be run as an independent self-hosted index (in “Local” mode), where it will index sites visited as part of web crawls that you initiate, or you can run it as part of a decentralized peer-to-peer network of indexes.

YaCy has its own search UI but you can also set up SearXNG to use it.

hedgehog ,

there is not a 'Searx Index' which is what this is about.

There’s YaCy, which includes a search index (which can be independent or can join a P2P network of indexes), web crawler, and web ui for searching. It can also be added as a SearXNG engine.

hedgehog ,

Last I checked (around the time that LLAMA v3 was released), the performance of local models on CPU also was pretty bad for most consumer hardware (Apple Silicon excepted) compared to GPU performance, and the consumer GPU RAM situation is even worse. At least, when talking about the models that have performance anywhere near that of ChatGPT, which was mostly 70B models with a few exceptional 30B models.

My home server has a 3090, so I can use a self-hosted 4-bit (or 5-bit with reduced context) quantized 30B model. If I added another 3090 I’d be able to use a 4-bit quantized 70B model.

There’s some research that suggests that 1.58 bit (ternary) quantization has a lot of potential, and I think it’ll be critical to getting performant models on phones and laptops. At 1.58 bit per parameter, a 30B model could fit into 6 gigs of RAM, and the quality hit is allegedly negligible.
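
For a rough sense of the memory math behind those claims (weights only - KV cache and other runtime overhead come on top of this):

```python
# Approximate memory needed just to hold the weights at a given quantization level.
def weight_gb(params_billions: float, bits_per_param: float) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 1e9  # bytes -> GB

print(f"30B @ 4-bit:    {weight_gb(30, 4):.1f} GB")     # ~15 GB, fits a 24 GB 3090
print(f"70B @ 4-bit:    {weight_gb(70, 4):.1f} GB")     # ~35 GB, needs two 3090s
print(f"30B @ 1.58-bit: {weight_gb(30, 1.58):.1f} GB")  # ~5.9 GB
```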

hedgehog ,

I haven’t used it and only heard about it while writing this post, but Open WebUI looks really promising. I’m going to check it out the next time I mess with my home server’s AI apps. If you want more options, read on.

Disclaimer: I’ve looked into most of the options below enough to feel comfortable recommending them, but I’ve only personally self hosted the Automatic 1111 webui, the Oobabooga webui, and Kobold.cpp.

If you want just an LLM and an image generator, then:

For the image generator, something that leverages Stable Diffusion models:

And then find models that you like at Civitai.

For the LLM, the best option depends on your hardware. Not knowing anything about your hardware, I recommend a llama.cpp based solution. Check out one of these:

Alternatively, VLLM is allegedly the fastest for multi-user CPU-based inference, though as far as I can tell it doesn’t have its own webui (but it does expose OpenAI compatible API endpoints).

And then find a model you like at Huggingface. I recommend finding a model quantized by TheBloke.
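
One nice consequence of the OpenAI-compatible endpoints mentioned above (vLLM, Open WebUI, and several of the other tools) is that any client that speaks OpenAI’s API can point at your own machine instead. A minimal sketch - the URL, port, and model name are placeholders for whatever your local server actually exposes:

```python
# Minimal sketch of calling a locally hosted, OpenAI-compatible endpoint using the
# official "openai" Python package (pip install openai). The base_url, api_key,
# and model name below are placeholders - substitute your server's actual values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-needed-locally",         # most local servers ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; see client.models.list() for what's loaded
    messages=[{"role": "user", "content": "In one sentence, what is quantization?"}],
)
print(response.choices[0].message.content)
```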

There are a couple communities not on Lemmy that discuss local LLMs - r/LocalLLaMA and r/LocalLLM for example - so if you’re trying to figure out which model to try, that’s a good place to check.

If you want a multimodal AI, you can use llama.cpp with a model like LLAVA. The options below also have multimodal support.

If you want an AI assistant with expanded capabilities - like searching your documents or the web (RAG), etc. - then I don’t have a ton of experience there, but these seem to do that job:

If you want to use your local model as more than just a chat bot - integrating it into your IDE or a browser extension - then there are options there, and as far as I know every LLM above can be configured to expose an API allowing it to be used by your other tools. Some, like Open WebUI, expose OpenAI compatible APIs and so can be used with tools built to be used with OpenAI. I don't know of many tools like this, though - I was surprisingly not able to find a browser extension that could use your own API, for example. Here are a couple examples:

Also, I found this Medium article listed some of the things I described above as well as several others that I’d never heard of.

hedgehog ,

I am trying to avoid having to having an open port 22

If you’re working locally you don’t need an open port.

If you’re on a different machine but on the same network, you don’t need to expose port 22 via your router’s firewall. If you use key-based auth and disable password-based auth then this is even safer.

If you want access remotely, then you still don’t have to expose port 22 as long as you have a vpn set up.

That said, you don’t need to use a terminal to manage your docker containers. I use Portainer to manage all but my core containers - Traefik, Authelia, and Portainer itself - which are all part of a single docker compose file. Portainer stacks accept docker compose files so adding and configuring applications is straightforward.

I’ve configured around 50 apps on my server using Docker Compose with Portainer but have only needed to modify the Dockerfile itself once, and that was because I was trying to do something that the original maintainer didn’t support.

Now, if you’re satisfied with what’s available and with how much you can configure it without using Docker, then it’s fine to avoid it. I’m just trying to say that it’s pretty straightforward if you focus on just understanding the important parts, mainly:

  • docker compose
  • docker networks
  • docker volumes

If you decide to go that route, I recommend TechnoTim’s tutorials on Youtube. I personally found them helpful, at least.

hedgehog ,

I haven’t personally used any of these, but looking them over, Tipi looks the most encouraging to me, followed by Yunohost, based largely on the variety of apps available but also because it looks like Tipi lets you customize the configuration much more. Freedom Box doesn’t seem to list the apps in their catalog at all and their site seems basically useless, so I ruled it out on that basis alone.

hedgehog ,

It’s not changing the default behavior, so it still has it.

Per the article, they’re introducing a new opt-in feature that a woman, enbie, or person looking for same-gender matches can set up - basically a prompt that their matches can reply to.

I think Bumble also used to prevent you from sending multiple messages before getting a reply, but maybe that was a different app... If they still do that in combination with this feature, then I could see this feature continuing to accomplish their mission of empowering women in online dating.

hedgehog ,

Terrible article. Even worse advice.

On iOS at least, if you’re concerned about police breaking into your phone, you should be using a high entropy password, not a numeric PIN, and biometric auth is the best way to keep your convenience (and sanity) intact without compromising your security. This is because there is software that can break into a locked phone (even one that has biometrics disabled) by brute forcing the PIN, bypassing the 10 attempts limit if set, as well as not triggering iOS’s brute force protections, like forcing delays between attempts. If your password is sufficiently complex, then you’re more likely to be safe against such an attack.

I suspect the same is true on Android.

Such a search is supposed to require a warrant, but the tool itself doesn’t check for it, so you have to trust the individual LEOs in question to follow the law. And given that any 6 digit PIN can be brute forced in under 11 hours (40 ms per entry), this means that if you were arrested (even for a spurious charge) and held overnight, they could search your phone without you knowing.

With a password that has the same entropy as 10 random digits, assuming no further vulnerabilities allowing them to speed up the process, it could take up to 12 and a half years to brute force it. Make it alphanumeric (and still random) and it’s millions of years - infeasible within our lifetime. At that point it’s basically a question of whether another vulnerability is already known, or is discovered, that enables bypassing the password entirely or allows much faster rates of entry.
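
For anyone checking the arithmetic - this assumes the 40 ms per attempt figure holds, nothing else slows the tool down, and the alphanumeric line assumes a 10-character random password:

```python
# Worst-case brute force time = size of the keyspace * time per guess.
SECONDS_PER_GUESS = 0.040  # ~40 ms per attempt, per the figure above
YEAR = 3600 * 24 * 365

def worst_case_seconds(keyspace: int) -> float:
    return keyspace * SECONDS_PER_GUESS

print(f"6 random digits:         {worst_case_seconds(10**6) / 3600:.1f} hours")   # ~11 hours
print(f"10 random digits:        {worst_case_seconds(10**10) / YEAR:.1f} years")  # ~12.7 years
print(f"10 random alphanumerics: {worst_case_seconds(62**10) / YEAR:.2e} years")  # ~1e9 years: "millions" is an understatement
```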

If you’re in a situation where you expect to interact with law enforcement, then disable biometrics. Practice ahead of time to make sure you know how to do it on your phone.

hedgehog ,

Copying an iPhone isn’t as straightforward as you seem to think. Copying data from a locked iPhone requires either an exploit or direct access to the SSD / memory chips on the device (basically, chip-off forensics, which likely requires bypassing the storage controllers), and I assume the same is true for Android devices.

I’m not saying such exploits don’t exist, but local police departments don’t have access to them. And they certainly don’t have the capability to directly access your device’s storage and then reassemble it without your knowledge.

Now, if your device is confiscated for long enough that it could be mailed off to a forensics lab for analysis? Sure, then it’s a possibility. But most likely if they want your data that badly they’ll either hold onto your device, compel you into sharing the info with them, or try to trick you into giving it to them. Hanging onto your data without a warrant for over a decade is a high risk, low reward activity.

Your data’s more vulnerable to this sort of attack in transit.

hedgehog ,

It calls them “passwords,” but personally I don’t consider a 6 digit number to be a password. And according to this article on GrayKey, 6 digit “passcodes” became the norm back in 2015. I haven’t seen any stats showing that people on average use more secure passcodes now, and making the passcode required more frequently isn’t going to encourage anyone to use one that’s more secure.

The article just says “disable biometrics” which is bad advice for the average person, as it will result in them using a 6 digit passcode. This is a knee-jerk reaction at best, and the resulting advice is devoid of nuance, made by someone who clearly doesn’t understand the threat discussed in the article, and would benefit literally nobody who might feasibly take it.

My advice is echoed by the article above, but it’s based off having an understanding of the problem area and suggesting a solution that doesn’t just address one thing. Anyone giving advice on the topic should consider:

  • known threats and reasonably likely unknown threats
  • the mitigations to those threats
  • how the technology works for both the threats and the mitigations
  • the legal landscape in your jurisdiction - for us, the US - both in practice and in theory
  • people’s attitudes toward security, namely their willingness to suffer inconveniences for its sake
  • how all of the above interact, and how likely someone is to take the advice given in a way that improves their security overall

The author of this article considered none of the above.

hedgehog ,

100%.

If you’re always concerned about sophisticated attackers, then you should also:

  • Disable biometrics unlock whenever your device is about to leave your possession or you’re going to sleep
  • Protect against shoulder-surfing / surveillance attacks that can capture you entering your password, e.g., by being aware of your surroundings and only entering your password or viewing sensitive information when you’re certain your screen (and thumb locations) can’t be observed, or by obscuring a view of your phone with your shirt or a blanket (like Snowden)
  • Take the time to learn more about security in general and in relation to the specific threats that concern you
hedgehog ,

As I said in my first comment, I’m more familiar with iOS, where 6 digit passcodes are the default.

That said, do you genuinely think the average person would use a random 10+ alphanumeric character passcode to unlock their phone after taking the advice of this article and disabling biometric auth?

hedgehog ,

I’m not addressing anything Gitea has specifically done here (I’m not informed enough on the topic to have an educated opinion yet), but just this specific part of your comment:

And they also demand a CLA from contributors now, which is directly against the idea of FOSS.

Proprietary software is antithetical to FOSS, but CLAs themselves are not, and were endorsed by RMS as far back as 2002:

In contrast, I think it is acceptable to … release under the GPL, but sell alternative licenses permitting proprietary extensions to their code. My understanding is that all the code they release is available as free software, which means they do not develop any proprietary software; that's why their practice is acceptable. The FSF will never do that--we believe our terms should be the same for everyone, and we want to use the GPL to give others an incentive to develop additional free software. But what they do is much better than developing proprietary software.

If contributors allow an entity to relicense their contributions, that enables the entity to write proprietary software that includes those contributions. One way to ensure they have that freedom is to require contributors to sign a CLA that allows relicensing, so clearly CLAs can enable behavior antithetical to FOSS… but they can also enable FOSS development by generating another revenue stream. And many CLAs don’t allow relicensing (e.g., Apache’s).

Many FOSS companies require contributors to sign CLAs. For example, the FSF has required them since 2005 at least, and its CLA allows relicensing. They explain why, but that explanation doesn’t touch on why license reassignment is necessary.

Even if a repo requires contributors sign a CLA, nobody’s four freedoms are violated, and nobody who modifies such software is forced to sign a CLA when they share their changes with the community - they can share their changes on their own repo, or submit them to a fork that doesn’t require a CLA, or only share the code with users who purchase the software from them. All they have to do is adhere to the license that the project was under.

The big issue with CLAs is that they’re asymmetrical (as opposed to DCOs, which serve a similar purpose). That’s understandably controversial, but it’s not inherently a FOSS issue.

Some of the same arguments against the SSPL (which is not considered FOSS because it is so copyleft that it’s impractical) being considered FOSS could similarly be made in favor of CLAs. Not in favor of signing them as a developer, mind you, but in favor of considering projects that use them to be aligned with FOSS principles.

hedgehog ,

You can use your phone’s browser to access the ticket. From https://help.livenation.com/hc/en-us/articles/9907955578129-How-do-I-use-Mobile-Entry-tickets

How do I find and use my tickets?

On a mobile browser:

  1. Open a web browser app and go to Ticketmaster.com.
  2. Sign into your My Account.
  3. Tap the circle in the top right and tap Upcoming Events.
  4. Find your order and tap View Tickets to access your tickets. We recommend adding your tickets to a digital wallet so that you’ll always have your ticket on hand.
  5. Your phone’s your ticket — scan it at the venue entrance and you’re in!

Also, if the event isn’t Mobile-only, you can select a different option for your ticket. See https://help.livenation.com/hc/en-us/articles/9902009367953-How-are-tickets-delivered for more details.

hedgehog ,

What happens when you click the “Next” button down at the bottom right?

If it doesn’t take you to your ticket then that sounds like a bug. Definitely a frustrating one; hopefully not intentional.

hedgehog ,

Whereas the signal devs are just sitting on their high horse and doing nothing but stupid cryptoscams.

Nah, they also spend their time squashing FOSS forks

hedgehog ,

What additional capabilities does that give the app beyond using Firefox or Chrome to install it as a PWA?

hedgehog ,

consumers will not notice any difference in the performance or effectiveness of products equipped with this technology.

I believe they missed this part of the memo

hedgehog ,

I believe that the pop-up was also bugged and that it was only supposed to show up once.

hedgehog ,

For logging in, Bitwarden supports TOTP, email, and FIDO2 WebAuthn on the free plan. It only adds Yubikey OTP and Duo support at the paid tier, and WebAuthn is superior to both of those methods. This is an improvement that they made fairly recently - back in September 2023.

The other features that the free plan lacks are:

  • the 1 GB of integrated, encrypted file storage. This is a convenience that is nice to have, but not essential to a password manager.
  • the integrated TOTP generator. This is a convenience that many argue is actually a security downgrade (under the “putting all your eggs in one basket” argument).
  • Upgraded vault health reports - free users get username data breach reports but not weak / reused password reports. This is the main area where your criticism is valid, but as far as I know free competitors don’t offer this feature, either. I looked at KeepassXC and didn’t see this mentioned.
  • Emergency access (basically a trusted contact who can access your vault under some circumstances). This isn’t essential, either, and the mechanisms they add to ensure security of it cost money to provide.
  • Priority support - free users get 24/7 support by email, which should be good enough
hedgehog ,

Getting physical access to users’ devices is more difficult than compromising their passwords, so in that sense, transitioning that one factor is a net improvement in terms of reducing the number of compromises for a given service.

Except for e2ee accounts, which I suspect Passkeys don’t support in the first place (at least, not without caching the password on your device), law enforcement can access your account’s data without ever needing your password. If you’re concerned about law enforcement breaking into your device and you’re not using a unique 16+ character passcode with it set to wipe the device after a certain number of attempts, that’s on you.

I’m not sure about the state of affairs on Android, but the most popular and powerful tool used by law enforcement to extract data from iOS devices only recently gained support for iOS 17 and it doesn’t have the ability to bypass passwords on a device that isn’t accepting FaceID; it just has the ability to brute force them. A password with sufficient entropy mitigates this attack. (It’s unclear if it’s able to bypass auth when FaceID is enabled, but I could see it going either way.)

You said a couple of things that I specifically want to address:

But it doesn't solve anything that existing TOTP over text messages didn't solve, other than some complexity, and it eliminated the password (something you know) factor at the server.

and

outside of MitM attacks that TOTP mitigates

Text-message based TOTP - or SMS 2FA - is incredibly vulnerable. In many cases, it can be compromised without the user even realizing. A user with a 4 digit PIN (even if that PIN is 1234) and a Passkey on their device is much less vulnerable than a user using SMS 2FA with a password used across multiple services.

If a user cares deeply about security, they likely already have a set of security keys (like the YubiKey 5C) that support U2F / WebAuthn, and they’ll add passkeys for their most sensitive services to those devices, protected by unique, high entropy PINs. This approach is more secure than using an equally high entropy password plus U2F / WebAuthn where the key isn’t secured with a PIN. The keys themselves are extremely secure and wipe their contents after 8 failed PIN attempts, and the PIN is never transmitted anywhere. A password, on the other hand, is transmitted to the server, which receives it in plaintext and stores it hashed, generally outside of a secure enclave. That leaves the password vulnerable - e.g., to being grabbed from server memory, or to a brute force attack on the hash if the server is breached (which could go undetected and only involve read access to the db server) - meaning a simple theft of the security key would be all that was needed to compromise the account (vs needing the PIN as well).

And app-based TOTP doesn’t mitigate MITM at all. The only thing it does is add a timing component requirement, which current MITM phishing attacks have incorporated. To mitigate such an attack you need Passkeys, Webauthn, or U2F as an authentication factor. To bypass this the attackers need to compromise the service itself or a certificate authority, which is a much taller task.

The other thing is that we know most users reuse passwords and we know that sites will be compromised, so:

  • best case scenario, salted password hashes will be leaked
  • likely scenario, password hashes will be leaked,
  • and worst case scenario, plain text passwords will be leaked

and as a result, that user’s credentials for a different site will be exposed. For those users, Passkeys are a vast improvement over 1FA, because that vulnerability doesn’t exist.

Another factor is that the increased visibility of Passkeys is resulting in more sites supporting them - U2F / WebAuthn didn’t have great adoption. And getting these into the hands of more users, without requiring them to buy dedicated security keys, is a huge boost.

For the vast majority of users, passkeys are an improvement in security. For the few for whom they aren’t, those users likely know that, and they still benefit from increased adoption of a MITM immune authentication method, which they can choose on a site-by-site basis. And even they can benefit from increased security by storing passkeys on a security key.
