Yes, but things could be refined. With more resources and research thrown at it, it could become more versatile; that's why the title of the post says "could". And chances are, private and government entities are already doing this. Research like this has been coming out for at least a decade now.
Even Xfinity has motion detection in homes using this technique now: https://www.xfinity.com/hub/smart-home/wifi-motion
This has already been an area of research, both publicly and most likely in private/government defense research. In a targeted situation, i.e. surveillance of a household of 6, this would work easily enough... but I doubt there is enough information to provide reliable (high-AUC) identification in a public scenario of hundreds to thousands of individuals.
> Researchers in Italy have developed a way to create a biometric identifier for people based on the way the human body interferes with Wi-Fi signal propagation.. can re-identify a person in other locations most of the time when a Wi-Fi signal can be measured. Observers could therefore track a person as they pass through signals sent by different Wi-Fi networks – even if they’re not carrying a phone.. their technique makes accurate matches on the public NTU-Fi dataset up to 95.5 percent.
In a public scenario you don't need that; the Wi-Fi devices people carry on their persons can be used to ID them. The concern is more gait analysis, and by some accounts even lip reading is possible with mm-wave 5G.
Like I mentioned in another comment, do you really need good resolution for gait analysis? People also carry their phones inside the house all the time, so you know which BSSID is associated with that coarse movement. And if you have access to their AP/router combo, you can tell what IP that device has and what domains it's been visiting.
Let's say you visit a friend in a different city. The same ISP controlling their router can use your MAC, but even if you turn off your Wi-Fi or leave your phone in your car, your volume profile and gait can betray you: how you sit, how you lean, how you turn. I'd wager that if 6-10 distinct "points" can be made out and associated with a person, that's all that's needed to uniquely identify them after enough analysis of their motion, regardless of where they go in the world.
Imagine if they're not using one AP but your neighbors' APs as well; two neighbor APs plus your own can triangulate and refine much better.
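To make the triangulation idea concrete, here's a toy sketch. It assumes you already have distance estimates from each AP (in reality you'd derive rough distances from RSSI or CSI, which is noisy); the anchor positions and measurements below are made up for illustration:

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve for (x, y) given three anchor positions (the APs) and
    measured distances to each. Subtracting the first circle equation
    from the other two linearizes the problem into a 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("anchor points are collinear")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With perfect distances this recovers the exact position; with three noisy real-world estimates you'd get an error ellipse instead, which is why adding neighbor APs refines the fix.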
> Even Xfinity has motion detection in homes using this technique now
WiFi presence detection is a completely different problem. If the WiFi environment is changing past a threshold, return a boolean yes or no. It can't actually tell if someone is present or if the environment is just changing, such as a car driving close enough to reflect signals back in a certain way.
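To illustrate how little that boolean actually encodes, here's a toy version of threshold-based presence detection (the window and threshold values are made up; real products use fancier features than raw RSSI variance, but the output is still just yes/no):

```python
from statistics import pvariance

def motion_detected(rssi_window, threshold=4.0):
    """Toy presence detector: flag 'motion' when the variance of recent
    signal readings exceeds a fixed threshold. Note it cannot tell a
    person apart from any other disturbance (a passing car, a pet)."""
    return pvariance(rssi_window) > threshold
```

A steady signal stays below the threshold; any sufficiently large disturbance, human or not, trips it.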
Doing mass surveillance where you detect individual people in a random home environment isn't the same thing at all. All of these "could" claims are trying to draw connections between very different problems.
You'll have to explain that a bit more. Isn't the threshold detection analyzing radio signal data? For identifying people, you don't need to reconstruct their face or fingerprint from that data; you just need to fingerprint them.
With gait analysis, for example, it's only looking at a handful of data points, and the way we walk is very unique. Lip reading, I can see how that's a stretch, but our movement patterns and gait are disturbances in radio waves. If you're using just one person's Wi-Fi, that sounds difficult, but if you're collecting signal from multiple adjacent Wi-Fi access points, it's more realistic to build a very coarse motion representation, perhaps with a resolution no finer than 1 cubic foot; even with coarser representations, gait can be observed.
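The reason coarse data can still reveal gait is that walking is periodic: a person modulates the channel at roughly their step frequency. A naive sketch of pulling that periodicity out of an amplitude trace (the signal here is synthetic; real CSI processing involves much more filtering):

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Find the strongest periodic component of a signal via a naive
    DFT. Applied to channel-amplitude fluctuations, a walking person
    tends to show up around their step frequency (~1.5-2.5 Hz)."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # remove the DC component
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        coeff = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate / n  # convert bin index to Hz
```

Even a low-resolution motion trace preserves that dominant frequency, which is part of why gait survives coarse sensing.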
Even gait aside, the volume profile of a person and their location in the house alone are important data points; couple that with a unique Wi-Fi identifier or IP, and you can make a really good guess at who the person is and what room they're in.
Only if Wi-Fi is radically increased in frequency, power, directionality, or antenna size. And I mean way beyond practicality. It would be easier to identify people by the sound of their footsteps, something easily done through anything with a microphone. With three microphones, you can track that movement to the inch.
1) You can't stalk someone deliberately and persistently, using any means, or medium; even if you're a company, and even if you have good intentions.
2) You can't intentionally influence any number of people toward believing something false that you know is against their interest.
These things need to be felony-level or higher crimes, where executives of companies must be prosecuted.
Not only that, certain crimes like these should be allowed to be prosecuted by citizens directly. Especially where bribery and threats by powerful individuals and organizations might compromise the interests of justice.
The outcome of this trial won't amount to anything other than fines. The problem is, this approach doesn't work. They'll just find different ways that can skirt the law. Criminal consequence is the only real way to insist on justice.
The problem with 2 is you then need someone to be the arbiter of truth, and the truth is often a hard thing to find. This would end up letting governments jail people they disagree with. How would you write the law to prevent that?
I don't get what you mean. Proving whether you've done something against someone's interest is already on the books for embezzlement, fraud, etc. Intention and planning are covered under many conspiracy laws. The influence part would need to be proven using internal documents, whistleblowers, etc.
The whole point of a court is to find truth; they do it all the time. Actually, you would need to prove someone knew something was untrue, because it's innocent until proven guilty. You wouldn't have to prove what you said is true to get let off, just bring enough doubt to ward off your opponent's accusation of untruth.
Google Chrome (along with Mozilla, and eventually the other root stores) distrusted Symantec, despite it being the largest CA at the time and frequently being called "too big to fail".
Given how ubiquitous LE is, I think people will switch browsers first. There are plenty of non-Chrome browsers based on Chromium as well, and they can choose to trust LE despite Chrome's choices. Plus, with Symantec they had a good reason to distrust them. This is just them flexing; there is no real reason to distrust LE, and non-web-PKI usage does not reduce security.
GP gave a very good reason that non-web-PKI reduces security, you just refused to accept it. Anybody who has read any CA forum threads over the past two years is familiar with how big of a policy hole mixed-use-certificates are when dealing with revocation timelines and misissuance.
"it's complicated" is not the same as "it's insecure". Google feels like removing this complexity improves security for web-pki. Improving security is not the same as saying something is insecure. Raising security for web-pki is not the same as caliming non-web-pki usage is insecure or is degrading security expectations of web-pki users. It's just google railroading things because they can. You can improve security by also letting Google decide and control everything, they have the capability and manpower. But we don't want that either.
Neither. I meant that if enough people panic and stop using Chrome, website operators need not worry much. Safari is the default on Macs and Edge is the default on Windows, and both can render any website that can't be accessed in Chrome. So it'll make Chrome the browser that can't open half of the websites, instead of half of the websites out there suddenly being incompatible with Chrome. The power of numbers is on LE's side.
If they wanted, they absolutely can distrust LE. The trick is to distrust only certificates issued after specific date (technically: with „NotBefore” field after specific point in time), so the certs already issued continue to work for the duration of their validity (until „NotAfter”). That way they can phase out even the biggest CAs. Moreover, they have infrastructure in place and playbook well rehearsed on other CAs already.
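The phase-out logic described above amounts to a one-line check against the cert's issuance date. A sketch (the cutoff date here is entirely hypothetical, for illustration only):

```python
from datetime import datetime, timezone

# Hypothetical distrust cutoff, made up for this example.
DISTRUST_AFTER = datetime(2026, 4, 1, tzinfo=timezone.utc)

def is_trusted(not_before: datetime, not_after: datetime,
               now: datetime) -> bool:
    """Phased distrust: certs issued (NotBefore) after the cutoff are
    rejected, while ones issued earlier keep working until they expire
    (NotAfter). This is how big CAs get phased out without breakage."""
    if not (not_before <= now <= not_after):
        return False  # outside the cert's own validity window
    return not_before <= DISTRUST_AFTER
```

Since modern cert lifetimes are short, the whole phase-out completes within months of the cutoff without a single existing cert breaking early.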
Even then, is the message "stop using Chrome after this date because half the internet will break" (where will all those non-paying people go?), or "stop using LE and start paying someone for a free service"?
I bet Google themselves would be scared of antitrust lawsuits over this. Even if they weren't, I don't think they'll really go so far as to compromise the security of half of the internet just to get their way on this one small improvement.
The point about antitrust lawsuits I concur with, but LE is not the only free-as-in-beer ACME CA. For one, there's ZeroSSL, then Actalis and SSL.com. For some time Buypass offered free certs, but no longer does. Last but not least, Google itself has a Public CA that offers certs over ACME, a fact that I think would be a huge fulcrum for an antitrust suit. I would also expect all other CAs to deploy ACME endpoints to attract at least some part of the cake (note they're in the business of being vultures already). So the message will be „go find another CA, here are three examples, sorted randomly like the European first-boot UX, just change the URI in your certbot config".
Perhaps this shouldn't be left to the CA/Browser Forum; it has critical economic impact on many countries, so shouldn't it be regulated by them?
Either way, I think LE has enough power to at least push back and see where things fall. Continuing to support users can't hurt them, until they truly have no other choice.
> [...] it has critical economic impact on many countries, it should be regulated by them?
This was exactly the point of the recent (2024) eIDAS update, which introduced EU Trusted Lists. The original draft mandated that browsers accept X.509 certs from CAs („TSP”s) accredited in the EU by national bodies. Browsers were not supposed to be free to just eject CAs from their root programs for any reason or no reason at all; in case of infractions they were supposed to report to a CAB or NAB that would make the final decision.
Browsers responded by lobbying, because the proposal also contained some questionable stuff like mandatory EV UI, which the browsers rightfully deprecated, and it also wasn't clear whether they could use OneCRL and similar alternative revocation schemes to mitigate ongoing attacks. The language was diluted.
Interestingly though, doesn't this threat become less credible the shorter certificate lifetimes get? Back in the day they could just do this and server admins would figure out how to switch to a new CA the next time they got around to renewing their certificate. Now though that's all automated, so killing a CA will likely nuke a bunch of sites.
This is a good point. I think it would still be discounted in favour of suggesting other CAs that users can switch to, but you're right: the promise was that cert management would be hands-off, and changing CAs is not hands-off in any ACME client that I know of. The best Google could do would be to shift the blame to LE/ISRG, because it was ISRG that promised this automation.
They can do this with Certificate Transparency; otherwise a CA could sign whatever date they want. But if they collude with a CT log, they can issue rogue certificates for targeted attacks.
Yes, that's all right; there's already a requirement that they submit to one Google CT log and one non-Google CT log, so they've thought about it already. The playbook I mentioned they've been rehearsing contains a specific threat against backdating certs: they say they'll distrust immediately if they detect it, and they have means of detecting backdating at significant scale (especially for LE, which submits 100% of issued certs, not just the subset intended for consumption with Chrome).
I don't think there is anything anyone can do about this trend, other than come up with viable age-verification schemes that preserve privacy, and don't require things like scanning your face or sending random companies your ID.
There are plenty of approaches to this, and I won't spam this comment with all the thoughts I have on the subject. But my frustration is that people want things like "cancel your Nitro subscription"; well, I don't have one. What else? It's just small things that will not impact anything. Every service out there will require this sort of verification soon. Being angry doesn't stop it, and even voting doesn't seem effective to me. But better solutions might.
If they could verify your age as accurately as a store attendant in a physical store could, what else could they want? And if that could be done without giving random websites any identifying information about yourself, wouldn't that be better than this mess? Two things can be done at once: you can resist this nonsense while supporting alternatives to it.
There is this phenomenon where users of a product say they want something, but what they actually want is very different. People are not good at telling you what really matters to them. That's the main obstacle to Matrix's adoption, I think.
Matrix tries to copycat all these other products, but in the end it feels like something trying to be all sorts of things without doing any of them quite as well as the originals. Plus you have this "confusing" security/crypto aspect, and then you have the whole issue with inconsistencies between clients.
You have to really commit to it, or matrix has to be the backend of some other more refined/specific app (like chat section on websites, like Disqus).
In my opinion, if you want Matrix adoption, stop talking about Matrix adoption; that's like talking about HTTP adoption. You want people to use clients, so talk about clients. Let's talk about "Element" adoption. (Side note: please make names more searchable. OK, you want to use this generic/confusing term "Element"; can you at least make it unique by calling it "Elemnt" with a weird spelling so it's more searchable?)
People don't like learning new and complex systems for the sake of it; it's a chore. I want to be able to tell people "let's use Element" and explain why they should use it. It would help if it had original features other products don't have, done really well. It's been over a year since I used Element, but I didn't like the UI at all; it felt like Teams but clunkier. Perhaps the mobile app is better; I never tried it.
All that said, I think it's a great system, and it's perfect for government systems too; they're not usually concerned about things looking great or having cosmetic features. I would very much prefer to use it over Teams or Slack personally, so long as it handles scheduling meetings and managing things like booking conference rooms just as well.
Productivity software and offerings in general should probably “come late to the party” and offer the most polished version of what everyone else is doing.
Yeah, but the originals they're copying from are also working hard to polish their own products, and they have more experience with user feedback regarding those features.
For example: what if Matrix could be used like email? Now that would make it stand out, wouldn't it? I.e., user@danssite.net: you could post your Matrix address like that, and Matrix clients could resolve it using SRV records for Matrix, querying the correct homeserver designated on danssite.net. It could be marketed as an email replacement. You could ask sites to support it for registrations instead of email.
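A rough sketch of the client-side mapping this would need. The address parsing and the idea of clients doing this lookup are hypothetical (this is not current client behavior); the SRV record name mirrors the one Matrix federation already uses for server discovery:

```python
def matrix_srv_query(address: str) -> tuple[str, str]:
    """Split an email-style Matrix address into its local part and the
    DNS SRV name a client could query to locate the homeserver. The
    `_matrix-fed._tcp.<domain>` name is borrowed from Matrix's
    server-to-server discovery; using it client-side like this is the
    hypothetical part of the proposal."""
    if address.startswith("@"):
        address = address[1:]
    local, _, domain = address.partition("@")
    if not local or not domain:
        raise ValueError("expected localpart@domain")
    return local, f"_matrix-fed._tcp.{domain}"
```

A client would then resolve that SRV name (e.g. with a stub resolver) to find the host and port of the homeserver serving that domain.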
Or... what if it supported custom GIF reactions? Record yourself doing something, save it as a custom emoji, and use it in any chat.
That's what I mean: lots of good ideas to pull from, so that it becomes "use Matrix because <it can do this better>" instead of because it's more private or secure, or whatever.
The problem is you still have to type prompts. That might require a lower word count, but you still have to type it up, and it won't be short. For a small code base your prompts might run a couple of pages, but for a complex code base they might be the size of a medium-length novel.
In the end, you have lengthy text typed by humans, and it might contain errors in logic, contradictions, and unforeseen issues in the instructions. The same processes and tooling used for syntactic code might need to apply to it; you will need to version control your prompts, for example.
LLMs solve the labor problem, not the management problem. You have to spend a lot of time and effort with pages and pages of LLM prompts, trying to figure out which part of the prompt is generating which part of your code base. LLMs can debug and troubleshoot, but they can't debug and troubleshoot your prompts for you. I doubt they can take their own output, generated by multiple agents and lots of sessions and trace it all back to what text in your prompt caused all the mess either.
On one hand, I want to see what this experimentation will yield, on the other hand, it had better not create a whole suite of other problems to solve just to use it.
My confusion, really, is when experienced programmers advocate for this stuff. Actually typing in the code isn't very hard. I like the LLM-assistance aspect of figuring out what to actually code and doing some research. But for actually figuring out what code to type in, sure, LLMs save time, but not that much time. Getting it to work, debugging, troubleshooting, maintaining: those tend to be the pain points.
Perhaps there are shops out there that just crank out lots of LoC, and even measure developer performance based on LoC? I can see where this might be useful.
I do think LLM-friendly high-level languages need to evolve for sure. But the ideal workflow is always going to be a co-pilot type of workflow. Humans researching and guiding the AI.
Psychologically, until AI can maintain its own code, this is a really bad idea. Actually typing out the code is extremely important for humans to be able to understand it. And if someone else wrote the code, you have to write something that is part of that code base and figure out how things fit together; AI can't do that for you if you're still maintaining the codebase in any capacity.
I just wish more people would protest this instead of things like secure boot.
Password managers and/or operating systems can manage private keys just fine. Websites shouldn't be concerned with how the keys are managed, be able to make demands on how users store credentials, or know device details for users.
One thing I dislike even with systems like FIDO2 is that websites/apps can blocklist your FIDO key's vendor. Trends like this suck. Passkeys are just one iteration in a long line of systems designed with corporate interests in mind.
The system validating the authentication needs only to verify that the credentials are correct. If users want to use TPMs, HSMs, etc., or none at all, that's up to them. And no information other than what is strictly required to verify the credential should be transmitted over the network; a signature over challenge data from the app should be sufficient. The user's public key shouldn't be signed by hardware, a trusted third party, etc.; the registration process should take care of establishing public-key trust between the authenticator and the app. The whole thing feels insidious.
When you have Apple managing your keychain, your passwords stored in that, your passkeys stored in that, them filling in your MFA info by reading your email and SMS on every device, supplying your primary email account and all your throwaway addresses, and possibly trying to tie you into their OAuth or whatever for a third party, you are fucked if something goes trivially wrong.
Welcome to being a human being, where you need dozens of different accounts and passwords and passkeys and authenticators to live in modern society.
Apple passwords just work. They integrate nicely with most websites where I can authenticate using biometrics instead of copy-pasting and leaving my credentials on my clipboard.
And let's be real here, no one else in the industry comes even close to the amount of investment, research, and maintenance of security platforms than Apple. I would not bet against Apple's security failing.
Everything is a tradeoff between convenience and security. I think Apple's password manager is the perfect middle ground. I let it generate different passwords for every site, store my passkeys, etc.
No one has the time to fully optimize their security footprint. No one. And if you do you're either A) working in a sensitive area that requires it for your job or B) being targeted by state-level threat actors or C) lying. Anything beyond a password manager + 2fa is severe overkill for anyone else.
The way Apple implemented things is great, no argument there; others need to take note. But the same thing could have been implemented without requiring device/iOS lock-in. I don't care to malign Apple, but alternatives need to work as well, and as smoothly, as Apple passwords and passkeys, without the corporate malice.
> I would not bet against Apple's security failing.
I wouldn't either, but now the same tech is going to be used by everyone, and Apple's goal of vendor lock-in succeeds. Their security isn't in question; their malicious and anti-competitive practices are. They are secure, and it works well. You're also tied into their ecosystem and devices, and they collect information that isn't necessary for their products to work well and securely. You can't fault them for being greedy; they're not particularly worse in that regard. But the industry needs to standardize better alternatives that work well, without the whole "you have to trust Apple, and it's okay that they lock people into their ecosystem" angle.
If authentication requires the website/app to demand anything that can only be obtained on an Apple device, that is a user-hostile and anti-competitive feature. What confounds me is that Apple has a strong user base; doing this the right way wouldn't cost them much. Making a user-friendly authentication protocol that works without attestation and hardware lock-in doesn't hurt them. They don't need to play dirty and lock in users; their fanbase is already strong. They're just being greedy for that extra 0.001%.
If you have a password manager, 2FA is pointless anyway. The password manager already serves as two factors: possession of the database and the secret to decrypt it. 2FA is a mitigation against people getting pwned by reusing passwords or using bad ones, neither of which applies if you use a password manager. You can use the TOTP feature in KeePassXC for when it is useful.
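For the curious, the TOTP feature mentioned above is just RFC 6238 on top of an HMAC counter (RFC 4226), which any password manager can implement; a minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a big-endian counter, dynamically
    truncated to a short decimal code."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(key: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP with the counter derived from wall-clock time."""
    if timestamp is None:
        timestamp = time.time()
    return hotp(key, int(timestamp) // step, digits)
```

Since the shared secret sits in the same database as the password, storing TOTP in the password manager is a convenience feature rather than a true second factor, which is consistent with the point above.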
Corporate interests HATE general purpose computing, and the freedom to run what you want. With that freedom, you can hurt their interests by blocking ads, stripping out spyware, or avoiding giving up your privacy, and they can't let you have that.
It's a death by thousand cuts that's finally starting to come together:
- Remote attestation like Play "integrity"
- Hardware backed DRM like Widevine
- No full access to filesystem on Android, and no access to filesystem at all on iOS
- No ability to run your own programs at all on iOS without Apple's permission.
- "Secure" boot on Android and iOS that do not allow running your own software
Ever wondered why Windows 11 has a TPM requirement? No, it's not just planned obsolescence.
If they get their way, user-owned computers running free software will never be usable again, and we'll lose the final escape hatch slowing down the enshittification of computers. The only hope we have is that they turn up the temperature so quickly that normies catch on before it gets too far.
Don't forget an entire new category of computing, AI, which is teetering on the edge of requiring processors from one manufacturer, which in turn requires gigabytes of closed-source runtime. Today, you can do functionally more with a computer with an nVidia chip, driven by their binary blobs, than with any other hardware - even though the application software is usually Free. It's a dangerous situation. We are so used to general purpose compute substrate being "free software friendly", but this amounts to a new type of CPU that categorically requires a closed-source OS to be useful.
Is this question being asked in a way that we actually get to choose? Because the obvious choice is general purpose computing plus a free society. But rather it feels that what is being picked for us is copyright, and only copyright.
But really, even picking the freedom and liberty options, copyright could survive just fine as a thing that applies to corporations and other business entities. Individuals could then be left with a choice whether to support their creators or not, which would be a better bargain for many creators without the middlemen taking hefty cuts.
You wouldn't have a free society for long if the general purpose computers are taken away. The government controls corporations which controls your computers, and with an order all of your devices will be turned against you like the telescreens in 1984. We're already scarily close to that reality.
Why are y'all so scared only when it's the government using the companies to influence people? The companies already do it themselves, and in a much more insidious way than any government likely will.
You are already being fed propaganda and having your interactions controlled and monitored in order for the people in power to gain more power and stay in power indefinitely. This is already almost 1984. It's just not politicians in power, it's capitalists.
How is that better? At least we can, in theory, elect different politicians. With capitalists, that doesn't exist even in theory.
Windows 11 requires a TPM to enforce full-disk encryption pinned to a given machine. Linux would do well to do the same thing; it's possible, but almost no one does it.
Linux should replicate Microsoft's feature where they back up your "full disk encryption" keys to your cloud account, completely unencrypted, and share them with the cops.
They really should (no joke). That's how recovery works when you manage lots of devices. And I wouldn't be surprised if they can do that with Linux already via Intune.
Full-disk encryption doesn't mean your encryption key never leaves the device. As a matter of fact, there is no point in FDE if the key is readily accessible pre-boot on the device. And no mature key-management system relies on users remembering credentials as the end-all-be-all. Even login credentials have recovery mechanisms; with FDE, key escrow is the recovery mechanism.
It helps with locking out disks after a device is lost or stolen. It also helps when the hardware is fried and you have important data that needs recovery. Now imagine that, but with 100k devices to manage. Are you going to rely on a revolving door of 100k+ employees to manage that credential? And I'm sure it's stored encrypted in their DB, but eventually the unencrypted credential is needed: block ciphers ultimately need the plaintext secret provided to them to function. Regardless of what complex systems you layer on top, the ciphers need the same deterministic secret.
Ultimately this isn't any worse than being able to go to their website and have a recovery link sent to your email, except instead of the send-email part, you have to be an authorized admin or owner in their portal, and you just get it from there. Pre-boot, there is no networking or internet; even things like correct time information can't be guaranteed for more complex schemes.
LUKS supports multiple decryption methods, so you could, for example, add one with a really long string or a YubiKey as a backup. Most folks replying here aren't encrypting anything at all.
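The reason multiple methods can unlock the same disk is LUKS's keyslot design: one random master key, wrapped separately under each credential. A toy sketch of the idea (the XOR "wrapping" here is for illustration only; real LUKS encrypts the master key with a proper cipher and uses Argon2 or PBKDF2 for derivation):

```python
import hashlib
import os

def _slot_key(credential: bytes, salt: bytes) -> bytes:
    # Derive a per-slot wrapping key from a credential (passphrase,
    # recovery code, key-file contents, ...).
    return hashlib.pbkdf2_hmac("sha256", credential, salt, 100_000)

def add_slot(master_key: bytes, credential: bytes):
    """Wrap the single 32-byte disk master key under one credential.
    Each slot stores its own salt and wrapped copy of the master key."""
    salt = os.urandom(16)
    wrap = _slot_key(credential, salt)
    wrapped = bytes(a ^ b for a, b in zip(master_key, wrap))
    return salt, wrapped

def open_slot(salt: bytes, wrapped: bytes, credential: bytes) -> bytes:
    """Unwrap the master key; any valid slot yields the same key."""
    wrap = _slot_key(credential, salt)
    return bytes(a ^ b for a, b in zip(wrapped, wrap))
```

Adding a backup credential never re-encrypts the disk; it just adds another wrapped copy of the same master key, which is why a printed recovery code and a daily passphrase can coexist.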
You can print recovery codes. Just chuck them in your safe.
Cryptography is only safe against someone who won't come and beat the password out of you. In my case, only my laptop is encrypted, so if I lose it when I'm out, it's useless to whoever finds it.
The benefit is not having to type an encryption password on every boot. The TPM stores the encryption key, and Secure Boot ensures that the system has not been tampered with.
That said, I think it's better to use an alternative approach: an unencrypted, signed system partition which presents a login screen. After the user types their username and password, only the user's home gets decrypted. This scheme does not require a TPM and only uses Secure Boot to ensure the system partition has not been altered. I think macOS uses a similar approach.
This whole assumption that the TPM is a secure way to store things is ridiculously faulty. It's an interceptable i2c bus, and there are multiple tools, available since 0.9, that can recover keys both from a cold RAM boot and from interception of the i2c bus.
If your laptop gets stolen, the thief also has your keys and can decrypt the hard drive, which is the very thing TPM key storage was invented to prevent.
It is quite hard to do this safely on typical Linux systems, since there is a substantial amount of writable system data (e.g. syslog, /etc, /var). If unencrypted they will leak data, and if encrypted there is little difference from just encrypting the root.
A typical Linux system will have everything in one partition, and even if you do like to split up the system (for historical re-enactment?), it wouldn't matter, as you'd be encrypting the whole disk anyway.
> The system validating the authentication needs only to verify that the credentials are correct. If users want to use TPMs, HSMs,etc.. or none at all, that's up to them.
That's not up to the user in a corporate environment. If you use company-supplied hardware keys for FIDO2, you don't want users using some software emulator on their phone because they think it's easier.
I fully agree; it seems Linux is heading directly toward being a Windows clone. So far all the Windows crap can be easily avoided, but once these things are forced on me, it's bye-bye Linux.
I already use BSD on an older laptop, probably 40% of the time. Linux is on my main system due to a hardware device BSD still has a minor problem with. But for me, right now, Linux seems to be heading in the wrong direction.
KeePassXC implements passkeys in a respectful way. I don't see how this is "Windows crap". If they want to force attestation on passkey implementations, whether or not Linux supports it will not matter.
The part that matters is if people adopt the bait. If the bait doesn't get chomped on, the hook is ineffective. Actively encouraging passkey adoption is telling people to eat the bait.
Websites blocking FIDO vendors is nothing new, and in corporate environments it may be necessary. Imagine a two-tiered environment where all vendors are allowed (no blocks) for accessing tier-1 information, but to access tier-2 you need a specific vendor. That is not uncommon.
By the way, SAML has similar authentication restrictions, so this is not something FIDO came up with.