If I'm not mistaken, you can save keys in these chips so that they cannot be extracted. You can only use the key to encrypt/decrypt/sign/verify by asking the chip to perform those operations with your key.
You would probably use a recovery key that exists exclusively elsewhere, like on paper in a vault. Like BitLocker.
I have no idea if Signal uses the TPM or not, but generally keys in a TPM are non-exportable, which is a very good thing and IMO the primary reason to use a TPM at all.
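The non-exportable-key model can be illustrated with a toy sketch (plain Python standing in for a real TPM API; the class and method names are made up): the key lives inside the object, and callers can only request operations on it, never read it out.

```python
import hashlib
import hmac
import secrets

class ToyTPM:
    """Toy model of a TPM-style chip: the key never leaves the object."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # generated inside; no getter exposed

    def sign(self, message: bytes) -> bytes:
        # The caller asks the "chip" to sign; it never sees the key itself.
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), signature)

chip = ToyTPM()
sig = chip.sign(b"hello")
assert chip.verify(b"hello", sig)
assert not chip.verify(b"tampered", sig)
```

A real TPM enforces this in hardware rather than by Python attribute conventions, which is exactly why the key can't be exfiltrated even by code running as you.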
That said, let’s compare how it works on the phone to how it could work on macOS and how it actually works on macOS. In each scenario, we’ll suppose you installed an app that has hidden malware - we’ll call it X (just as a placeholder name) - and compare how much data that app has access to. Access to session data allows the app to spoof your client and send and receive messages.
On the phone, your data is sandboxed. X cannot access your Signal messages or session data. ✅ Signal may also encrypt the data and store the encryption key in the keychain, but this wouldn’t improve security except in very specific circumstances (basically, it would mean that if exploits were being used to access your data, you’d need more exploits if the key were in the keychain). Downside: on iOS at least, you also don’t have access to this data.
On macOS, it could be implemented using sandboxed data. Then X would not be able to access your Signal messages or spoof your session unless you explicitly allowed it to (it could request access, and you would be shown a modal). ✅ Downside: the UX for uploading attachments is worse.
It could also be implemented by storing the encryption key in the keychain instead of in plaintext on disk. Then X would not be able to access your Signal messages or session data. It might be able to request access - I’m not sure. As a user, you can access the keychain, but you have to re-authenticate. ✅ Downside: none.
It’s actually implemented by storing the encryption key in plaintext, collocated with the encrypted database file. X can access your messages and session data. ❌
Is it foolproof? No, of course not. But it’s an easy step that would probably take an hour of dev time to refactor. They’re even already storing a key, just not one that’s used for this. And this has been a known issue that they’ve refused to fix for several years. Because of their hostile behavior towards forks, the FOSS community also cannot distribute a hardened version that fixes this issue.
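The refactor being described amounts to replacing the key material on disk with a keychain reference. A minimal sketch of the before/after, where a plain dict stands in for the OS keychain (in reality you'd call Keychain Services or the `security` CLI, and the service name here is invented):

```python
import json
import os
import secrets
import tempfile

# Stand-in for the macOS Keychain; purely illustrative.
mock_keychain = {}

workdir = tempfile.mkdtemp()
config_path = os.path.join(workdir, "config.json")
key = secrets.token_hex(32)

# Current approach per the comment: the key sits in plaintext next to
# the encrypted database, readable by any process running as you.
with open(config_path, "w") as f:
    json.dump({"dbKey": key}, f)

# Proposed approach: only a keychain reference lives on disk; the key
# itself sits behind the OS keychain, which gates access.
mock_keychain["org.example.signal.dbkey"] = key  # hypothetical service name
with open(config_path, "w") as f:
    json.dump({"keychainService": "org.example.signal.dbkey"}, f)

with open(config_path) as f:
    on_disk = json.load(f)
assert "dbKey" not in on_disk  # no key material left on disk
```

With the second layout, malware that can read your files gets a reference, not a key; getting the key requires going through the keychain's access controls.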
From what I understand, it was withdrawn because a vote "in favor of the goals of the commission" was not guaranteed, in part because Germany announced its decision to withdraw support yesterday. Seems to be standard behavior.
From my experience with Llama models, this is great!
Not all training info is about answers to instructive queries. Most of this kind of data will likely be used for cultural and emotional alignment.
At present, open source Llama models have a rather prevalent prudish bias. I hope European data can help overcome this bias. I can easily defeat the filtering part of alignment; that is not what I am referring to here. There is a bias baked into the entire training corpus that is much more difficult to address while retaining nuance in creative writing.
I'm writing a hard science fiction universe and find it difficult to overcome many of the present cultural biases based on character descriptions. I'm working in a novel writing space with a mix of concepts that no one else has worked with before. With all of my constraints in place, the model struggles to overcome things like a default of submissive behavior in women. Creating a complex and strong-willed female character is difficult because I'm fighting too many constraints for the model to fit into attention. If the model were trained on a more egalitarian corpus, I would struggle far less in this specific area.

It is key to understand that nothing inside a model exists independently. Everything is related in complex ways. So this edge case has far more relevance than it may at first seem. I'm talking about a window into an abstract problem that has far-reaching consequences.
People also seem to misunderstand that model inference works both ways. The model is always trying to infer what you know and what it should know - and, just as importantly, what you do not know and what it should not know. If you do not tell it all of these things, it will make assumptions, likely bad ones. If you do not spell out these aspects, it is likely assuming you're average relative to the training corpus - and what do you think of the intelligence of the average person? The model needs to be trained on what not to say, and when not to say it, along with the enormous range of unrecognized inner conflicts and biases we all have under the surface of our conscious thoughts.
This is why it might be a good thing to get European sources. Just some things to think about.
If the social biases of the model put a hard limit on your ability to write a good woman character, I question how much it's really you that's "writing" the story. I'm not against using LLMs in writing, but it's a tool, not a creative partner. They can be useful for brainstorming and as a sounding board for ideas (potentially even editing), but imo you need to write the actual prose yourself to claim you're writing something.
I use them to explore personalities unlike my own while roleplaying around the topic of interest. I write like I am the main character with my friends that I know well around me. I've roleplayed the scenarios many times before I write the story itself. I'm creating large complex scenarios using much larger models than most people play with, and pushing the limits of model attention in that effort. The model is basically helping me understand points of view and functional thought processes that I suck at while I'm writing the constraints and scenarios. It also corrects my grammar and polishes my draft iteratively.
Well, you can opt out… but Meta will decide if the request is "valid", and then maybe they grant the opt-out request. This is not the way.
My wife tried to opt out and it's a fucking disaster. Not only do they make it so that you hate computers from now on (I already did hate them, and I was in IT for 30 years), but half the time the opt-out form doesn't "work" for some reason.
There are too many differences for me to list here, but unlike mobile operating systems, Windows and most Linux desktops do not provide sandboxed environments for userspace apps by default. Apps generally have free rein over the whole system, reading and writing other apps' data without restriction or notification. There are virtually no safeguards against malicious actors.
Mobile operating systems significantly restrict system-level storage space, making key areas read-only to prevent data access or manipulation. They also protect app storage, so one app can't arbitrarily access or modify data stored for a different app.
Mobile operating systems also follow an image-based update model, wherein updates are atomic. System software updates are generally applied successfully all at once or not at all, helping to ensure your phone is never left in a partial or unusable state after a system update.
For desktop users, macOS and atomic Linux distros combined with Flatpak are the closest comparisons.
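The atomic-update property described above is the same guarantee as the classic write-then-rename pattern; a file-level analogy in Python (this is not how OS images actually ship, just the smallest demonstration of "all at once or not at all"):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    # Write to a temp file in the same directory, then rename over the
    # target: readers see either the old content or the new content,
    # never a half-written file.
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise

target = os.path.join(tempfile.mkdtemp(), "system.img")
atomic_write(target, b"version 1")
atomic_write(target, b"version 2")  # an interrupted write leaves "version 1" intact
```

Image-based OS updates apply the same idea at the partition level (e.g. A/B slots): the bootloader flips to the new image only once it is fully written.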
Why is Linksys sending your Wi-Fi details, as well as your private password, outside of your home?
If they're doing it, why are they sending your critically important private information unencrypted onto the public internet?
The answer to the first one may be semi-legit, as these are mesh products: the other nodes in the mesh will need this information, and it appears that Linksys has decided to store your security data in AWS for the other mesh nodes to retrieve during setup. I'd sure as hell like to know this before the product does it. Further, I'd much prefer to simply attach to each mesh node myself to input the credentials instead of sending them out to the internet.
There's no excuse for Linksys sending the creds unencrypted onto the internet.
I'm just finding no confirmation that they send them unencrypted over the Internet and I've seen "researchers" calling sending passwords over HTTPS "unencrypted."
Mesh coordination is interesting, but it's not great here. That said, I doubt that any off-the-shelf consumer mesh system goes through the work to keep things local-only. It's too easy to set up a cloud API, and since that's the cheapest option, they likely all do it.
How would they know that the device sends the SSID and password otherwise? If it were encrypted, you would not be able to read the contents of the packets.
and I've seen "researchers" calling sending passwords over HTTPS "unencrypted."
That's because the password is unencrypted.
HTTPS will encrypt the channel and the data in flight, but the data itself is still unencrypted, and anyone whose certificate validates (assuming the app actually checks certificate validity) now has access to your unencrypted password. So yes, even over HTTPS it should be considered unencrypted.
Whether or not they're sending it over an encrypted channel, they're still sending out an unencrypted password that they have no need for. Linksys has no reason to need the plaintext password; at best they would only need a hashed password to accomplish whatever business case this is meant to solve. We have to assume that they're also storing it in clear text, given that they're sending it in the clear as well.
No password should ever leave your network unencrypted, no matter the data-channel encryption. Anything less is negligence, and the vendor should not be trusted.
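The "send a hash, not the password" point can be sketched as follows. This is a toy comparison, not a full protocol (real systems would use a password-authenticated scheme or at least a proper provisioning design; the field names and salt handling here are invented):

```python
import hashlib
import os

password = "correct horse battery staple"

# What the comment objects to: the literal password leaves your network,
# so channel encryption (HTTPS) is the ONLY thing protecting it.
payload_bad = {"ssid": "HomeWifi", "password": password}

# What the comment suggests instead: derive a verifier client-side and
# send only that, so the vendor never holds the reusable secret.
salt = os.urandom(16)
verifier = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
payload_better = {
    "ssid": "HomeWifi",
    "salt": salt.hex(),
    "verifier": verifier.hex(),
}

assert password not in str(payload_better)
```

Incidentally, WPA2-PSK already works this way internally: the pairwise master key is derived as PBKDF2-HMAC-SHA1 over the passphrase with the SSID as salt, so mesh nodes could in principle be provisioned with the derived key rather than the passphrase itself.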
From what I can find, by "These routers send your credentials in plaintext", they actually meant to say, "The mobile app sends credentials in plaintext."
If you use the web interface then your credentials are not sent in plaintext. The routers themselves also don't send credentials in plaintext.
The people who found this got that wrong, and a lot of people are confused because they didn't expand on "in plaintext." They could be a little more professional and thoughtful.
Edit: I'm also thinking about the "may expose you to a MITM" bit. I think if it was HTTPS, then a MITM (assuming all they can do is examine your packets) wouldn't work, because the traffic can only be decrypted with the server's private key. It sounds like it was an HTTP connection?
This is what I'm thinking too. The only likely scenario under which the plaintext and MITM words make sense together is HTTP. I wouldn't put it past Linksys to have used an HTTP API endpoint, but these days a lot of things scream if you use HTTP. Thanks for the work!
But surely if it were stored encrypted, it would still need a key to unlock that info, which would be on your PC and could therefore be used by anything else to unlock your data.
The only safe way would be to encrypt it with a password that only you know and that you'd need to enter before getting back into the software. And there couldn't be any "I forgot my password" function either: you lose it, the data is gone.
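The password-derived-key idea can be shown in a few lines of stdlib Python (parameters are illustrative; a real app would feed the derived key into an authenticated cipher like AES-GCM via a crypto library):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    # Deliberately slow KDF so brute-forcing guesses is expensive.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = os.urandom(16)  # salt is stored on disk; the password never is
key = derive_key("only-you-know-this", salt)

# Re-entering the right password reproduces the same key...
assert derive_key("only-you-know-this", salt) == key
# ...and there is no other way back: a wrong password yields a different
# key, and no "forgot my password" reset is possible because the key is
# never stored anywhere.
assert derive_key("wrong-guess", salt) != key
```

This is exactly the trade-off described above: the data is only as recoverable as your memory of the password.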
You are telling me this has been going on for almost a decade now, and no one ever noticed?
So we trust open source apps under the premise that if malicious code gets added, at least one person will notice? Here it shows that years can pass before anyone notices, and millions of people's communications could have been compromised by the world's most trusted messaging app.
I don't know which app to trust after this, if any.
Matrix. You can host any version you want, and when you have to update, just do a version diff between your current and latest versions and check it yourself.
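The "diff it yourself" step needs no special tooling; a minimal sketch with Python's difflib, where the two file versions are invented placeholders:

```python
import difflib

# Hypothetical contents of the same file in your current and the latest release.
current = """def send(msg):
    encrypt(msg)
    transmit(msg)
""".splitlines(keepends=True)

latest = """def send(msg):
    encrypt(msg)
    transmit(msg)
    phone_home(msg)  # <-- the kind of addition you're checking for
""".splitlines(keepends=True)

diff = list(difflib.unified_diff(current, latest,
                                 fromfile="v1.0/send.py",
                                 tofile="v1.1/send.py"))
added = [line for line in diff
         if line.startswith("+") and not line.startswith("+++")]
print("".join(diff))
```

In practice you'd just run `git diff v1.0..v1.1` against the upstream repository and review every added line before building.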
Why is this a shock? Someone would need to have already compromised your device. Even if it was encrypted with a password, they could still install a keylogger.
Back when the Signal org used to be called Open Whisper Systems it received grants and auditing from the Open Technology Fund which, at the time, was still a part of Radio Free Asia.
People are free to draw their own conclusions from it. Do you have anything material to contribute, or will you just be putting more smarmy words in my mouth from here on out?