I think you're conflating two precepts. Just because you can write an exploit, it doesn't - inherently - mean that you have the skills/knowledge/tools to know where to look for all signs of an exploit having occurred on your device(s).
By the same inference, every developer should be able to use gdb or WinDbg to ascertain where they shot themselves in the foot - but we know that this specific set of skills isn't inherently required to be a developer.
So, the same logic applies here: just because you can write a handful of exploits, it doesn't inherently mean that you have the tools/know-how to ascertain whether any or all of the available exploits in the wild (or in private, re: tools for Trenchat) have been used on your phone.
You're arguing the wrong side of the problem. Obviously, yes, not everyone can be a perfect expert on everything, and when doing anything complicated you should ask for help. Duh, as it were. I think I even said as much.
The point was at this level of expertise and size of market ("detection of iOS zero day rootkits"), there simply isn't a pool of "experts" you can draw on to do this a-la contract work. It's a tiny world and everyone is fumbling around and asking for help independently. And as a member of that tiny world, Gibson surely knew who he needed to call already.
But that's not the way the article framed the interaction, which implies to me that there's more context at work here.
> The rule of law means that nobody is above the law
If the stats from the Innocence Project are correct[1,2], then it would also mean that nobody is above being a victim of the rule of law, either.
The rule of law is not infallible - and any sort of blind "rule of law" worship is akin to the worship of a dictator; it's merely dressed in different clothing.
This has nothing to do with the concept of "rule of law". This is simply about how the law is applied and appealed. If anything, the rule of law should protect against these miscarriages of justice, because the law should be applied equally to everybody, and therefore the poor should have the same access to the processes of appeal as the rich and powerful.
> ...your email address is actually a cryptographically generated guid rather than something easily guessed or harvested. If you combine that with a background handshake procedure for introductions, so that all of your contacts get their own guid alias mapped to your canonical one, then you can revoke any of those if they get compromised at any time...
Here, you're kicking the problem further down the road, though, to another known attack vector: Directory Harvest Attack[1].
In this case, though, the directory (presumably) contains the guid mapping (which - by definition - would have to be a different guid than the object) and would have to parse these guids and resolve them against users. (This already occurs when the recipient is received, for some SMTP servers [just before BDAT/DATA], via the email address.)
What would one bad email to an email guid do? Would it force rotation of the guid[s] throughout the entire forest? If so, how would that be communicated externally? How would you communicate it for just the one address, if you just changed the one guid?
Would you, instead, have to keep a guid history to check against -- or lose all of the email sent between the possible compromise and the sender's database update? Would you just keep it in the Transport Queue until manual intervention could review the email received between the possible compromise of the guid and the point at which new mail starts arriving for the new guid? That wouldn't scale for large enterprises.
Keep in mind that nothing has to be sent for recipient validation to occur. The SMTP server[s] just respond[s] to the recipient block with the next step -- but the caller doesn't have to complete the SMTP negotiation from that point; they already have confirmation of whether the addresses (even these proposed guids) are valid.
Tarpitting is somewhat of a viable option here, but it isn't foolproof.
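To make the harvesting concern concrete, here is a rough sketch (my own illustration, not anything proposed in the thread; the host and addresses are made up) of how a prober learns which recipients exist purely from the RCPT TO response codes, without ever sending DATA - a tarpit only slows each probe down:

```python
import smtplib

def probe_recipients(host, candidates, mail_from="probe@example.com"):
    """Return the subset of candidate addresses the server accepts at RCPT TO."""
    accepted = []
    with smtplib.SMTP(host, 25, timeout=30) as smtp:
        smtp.ehlo()
        smtp.mail(mail_from)
        for addr in candidates:
            code, _ = smtp.rcpt(addr)
            if code == 250:        # 250 = recipient exists (guid alias or not)
                accepted.append(addr)
        smtp.rset()                # abandon the transaction; DATA is never sent
    return accepted

# e.g. probe_recipients("mx.example.org", ["a@example.org", "b@example.org"])
```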
> Here, you're kicking the problem further down the road, though, to another known attack vector: Directory Harvest Attack[1].
Dictionary and brute force attacks don't work against cryptographic ids, so I don't see how this is relevant.
> What would one bad email to an email guid do?
I assume you mean: what would happen if you received a spam message and had to revoke a guid? First, revocation means the guid is no longer valid, so to any incoming message it acts as if the guid simply doesn't exist.
Second, the idea here is that every entity gets their own guid designating you, so the same guid is not known by more than one entity. This is the purpose of the handshake protocol during introductions. If A and B know each other, B and C know each other, and A and C want an introduction, B triggers the introduction protocol which mints new guids for both A and C that are then exchanged with each other. This can happen transparently without the user seeing what's going on under the hood. Revocation is just a mark as spam button, and introduction is triggered by CC'ing more than one person in your address book (introduction is the trickiest part).
So if A gets a spam message from C, you just revoke the guid sent to C and you're done: any message from C now acts as if A's address no longer exists. This doesn't affect any connections to anyone else.
If B's guid for A is compromised in some way, you can trigger the introduction protocol again to mint a new guid after the compromise is resolved, then revoke the old one.
There is simply no way for spam to gain a real foothold here: they can't guess ids, and if they somehow obtain someone's address book, those addresses are valid only for one or two messages at best before they get revoked. The revocation and introduction protocols can happen over the existing protocols in a few different ways, like by exchanging some message types that are not seen by the user. There are definitely some details still to work out, but I don't see any real roadblocks.
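For what it's worth, here's a minimal Python sketch of the per-contact alias table and revocation I'm describing - my own illustration with invented names, not a real implementation:

```python
import uuid

class AliasTable:
    """Per-account table of per-contact alias guids (alias -> contact)."""

    def __init__(self):
        self._aliases = {}

    def mint(self, contact):
        """Hand `contact` their own cryptographically random alias for reaching you."""
        alias = uuid.uuid4().hex          # unguessable 128-bit identifier
        self._aliases[alias] = contact
        return alias

    def revoke(self, alias):
        """After this, mail to `alias` behaves as if the address never existed."""
        self._aliases.pop(alias, None)

    def accepts(self, alias):
        """Checked at RCPT TO time; only currently valid aliases are deliverable."""
        return alias in self._aliases
```

Marking a message from C as spam is then just revoke() on the one alias that was minted for C; the aliases minted for everyone else are untouched.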
The only real "problem" is that now all email addresses are effectively private, eg. no globally addressable emails, which is not great for business purposes like [email protected]. You could of course keep running the old email system for this.
Email can be delayed ... for days, hours, even weeks. What if I set up a dead-man email to you, you revoke the id, then I die? Would you somehow magically receive my email for a revoked id?
Well, obviously I wouldn't get an email at a revoked address any more than I would get messages at an email account that I closed. If you want to set up a dead-man email, then set it up with an address that isn't shared with anyone else; then there would never be a reason to revoke it.
I don't see that really working. I regularly delete personal tokens off of GitHub, especially if they haven't been used in a while. I could see the same cleanup happening (or even being forced by disk space usage).
Anyway, I don’t think this idea would work with normal human patterns. At work, we regularly saw people opening emails years after we sent them. Hell, I’ve emailed people years after not talking to them. I just don’t see this working.
Why would disk space be an issue? Guids are 16 bytes each. Even if you have 10k contacts, that's only 10k guids your email server has to store. That's 160kB. What's the big deal? You get more spam than that daily. Why wouldn't you persist 160kB to never get spam again?
> At work, we regularly saw people opening emails years after we sent them
So? There just really isn't a need to revoke anything until you receive spam on that address. Maybe we're just not on the same page about how this works. Here's a more detailed overview of what I have in mind:
That's ~15GB per billion contacts. There's an estimated ~2 billion Gmail users, so we're talking 30GB just to have one guid per user, and you're suggesting multiple guids per user (unbounded). So, let's assume each user has at least two services: we're now at a 60GB table, and that doesn't even include a mapping between users and guids, which will probably double the table size again.
At scale, you're probably looking at a multi-terabyte table right from the start, and spending compute-days, or even compute-weeks, just running migrations - all to get some dubious returns and a lot of additional end-user complexity.
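As a back-of-the-envelope check of those figures (same assumptions as above: 16-byte guids, ~2 billion users, at least two aliases each):

```python
# Rough arithmetic only; every figure below is an assumption from the thread,
# not a measurement.
GUID_BYTES = 16
users = 2_000_000_000            # estimated Gmail users
aliases_per_user = 2             # "at least two services"

guid_table = users * aliases_per_user * GUID_BYTES
print(f"guid table alone:         ~{guid_table / 1e9:.0f} GB")      # ~64 GB
print(f"plus user<->guid mapping: ~{2 * guid_table / 1e9:.0f} GB")  # roughly doubled
```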
> So, let's assume each user has at least two services: we're now at a 60GB table, and that doesn't even include a mapping between users and guids, which will probably double the table size again.
That's literally nothing. As I said, each user gets 10x more spam than that daily.
> and spending compute-days, or even compute-weeks, just running migrations
Migrations for what?
> just to get some dubious returns and a lot of additional end-user complexity.
There is no additional user complexity.
Supposing your math is correct, each user has a relatively fixed but larger-than-normal storage overhead for their address book and an inbox that grows slowly because there's no spam, rather than a small but fixed storage overhead for their address book and an inbox that grows 10x-100x faster due to mountains of spam.
I just really don't think you're comparing the storage requirements correctly.
> ...the idea here is that every entity gets their own guid designating you, so the same guid is not known by more than one entity
Ok, now you're sending a list of guids that _can_ be emailed to, per negotiation? Otherwise, how are they sending to that specific guid? A guid is not a hash of an object but an identifier object (a 16-byte array, if I recall correctly) - it has to map to the recipient _somehow_.
In other words, in each SMTP exchange, that information would have to be stored in some form of look-up table, _somewhere_, on both the sending and receiving servers.
How do you enforce the senders destroying that table, so that many versions of it don't expose your half of the signature? Do you generate a new key per session? If so, where are you storing that key, in memory? How would you prevent the heap from exposing those keys in a process crash (say, where a dump is automatically generated - like in Windows)? How do you prevent a nefarious actor using A, B, or C from generating a flood of SMTP sessions and creating a tonne (yes, the metric kind) of these look-up tables in memory? What happens when back-pressure is hit? Do you force everything else to paging but keep the tables in memory?
I think you're overthinking it. For simplicity, instead of [human-readable]@mydomain.com, let's use [guid]@mydomain.com, a dynamic set of unguessable aliases for your account. Your guids that have been handed out are completely under your control and stored on your server.
There are no cryptographic keys to manage here, just cryptographically secure identifiers that are stored on a server.
If you and I had been introduced, you would have a [guid-1]@mydomain.com designating me in your address book, and I would have a [guid-2]@yourdomain.com for your address in my address book.
So revoking the guid you have for me is an operation that happens on my server and simply invalidates the only address that you have. This part is simple and why spam is easily stopped in its tracks.
The introduction protocol is the tricky part, because C and B would have different guids for A, so if B CC's A when messaging C, then there should be a way for C to resolve their guid for A. This is done via a petname system.
If C does not already have a mapping for A (and so doesn't know A), then it can request an introduction from B. C sends B an "introduce me as C[guid-intro]" message with a new guid for C, and B then sends to A "here's who I call C [guid-intro]". Guid-intro is a use-once guid for introduction purposes.
A then sends to C[guid-intro], "hi, I'm A[new-guid]". C replies, "hi, I'm C[new-guid]". C then revokes guid-intro since it was used, and we're done. A, B and C each have their own guid addresses for each other. You can keep the audit trail of where you got a guid introduction in a database, but that's not strictly necessary for this to work.
This introduction protocol happens transparently to the user just by exchanging specific message types the server recognizes. It's a protocol that can be built atop SMTP just to manage the database of addresses that the SMTP server accepts.
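To pin down that flow, here is a rough Python sketch of the introduction dance as I've described it; the function shapes and dictionary address books are invented for illustration, and the real messages would travel as SMTP traffic rather than function calls:

```python
import uuid

def mint(book, contact):
    """Add a fresh alias -> contact entry to an address book (a plain dict here)."""
    alias = uuid.uuid4().hex
    book[alias] = contact
    return alias

def introduce(a_book, c_book):
    """B introduces A and C by relaying messages; only A's and C's books change."""
    # 1. C asks B for an introduction, minting a use-once alias that A can reach it at.
    guid_intro = mint(c_book, "A (introduction only)")
    # 2. B relays to A: "here's who I call C", along with guid_intro.
    # 3. A mails C at guid_intro ("hi, I'm A"), handing over a durable alias for itself.
    a_for_c = mint(a_book, "C")
    # 4. C replies ("hi, I'm C") with its own durable alias, then retires guid_intro.
    c_for_a = mint(c_book, "A")
    del c_book[guid_intro]
    # A and C now hold addresses for each other that B never learns.
    return a_for_c, c_for_a
```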
> If you and I had been introduced, you would have a [guid-1]@mydomain.com designating me in your address book, and I would have a [guid-2]@yourdomain.com for your address in my address book.
The description, here, is no different than S/MIME encryption exchange - in that the guid exchange has to be done before it could be used.
You still have the issue of [A]Guid and [C]Guid correlating between themselves _and_ it being "unique" per SMTP session (after all, you said before that each guid has to be unique per session). This is where my earlier reference to generated guids being exchanged during the SMTP session comes into play. However, leaving that aside...
A single-use guid is no different than an SMTP address, if we're going based on the single guid inferred from that line -- and that guid has to be stored elsewhere in your system for it to be resolved to a recipient. So, you need something like a guid history array on the object for the forest to be able to resolve that guid (on recipient resolve) to a mail object inside your forest.
You have no mechanism (from your description) for handling B sending junk mail to [A]Guid or [C]Guid (assuming B has been able to discover those guids). You say you would invalidate [A]Guid or [C]Guid -- but this doesn't resolve the issue of [A] and [C] now having to re-exchange guids for something that B has done.
So, now, all valid email between [A]Guid and [C]Guid is invalidated (per your description) and they're calling into your helpdesk, trying to understand why valid email isn't being delivered.
Do you tell them to re-exchange guids? How do they re-exchange guids when the mail system is dependent (directly) on those guids already being established on both sides? How do they "re-introduce" themselves, in other words, in that scenario?
> The description, here, is no different than S/MIME encryption exchange - in that the guid exchange has to be done before it could be used.
There are formal connections between some encryption protocols and what I'm describing here (effectively a system based on capability security, i.e. this is modelling spam as an access control problem for an unbounded set of actors). Basically, encryption lets you do away with the extra storage requirements for the guids, but the cost is additional complexity around key management and revocation, and more compute cost. I haven't thought about it enough to see if there's a formal correspondence with S/MIME, but my proposal is very simple, so I don't think you need to try to understand it through that lens.
> You still have the issue of [A]Guid and [C]Guid correlating between themselves _and_ it being "unique" per SMTP session
No, these guids are not per-session, they are persisted in a user's address book.
> and that guid has to be stored elsewhere in your system for it to be resolved to a recipient
Yes, each user's address book contains the guid address for a contact just like right now it contains an email address. Just take the existing address book and make the emails cryptographically unguessable guids. If you and I have the exact same set of contacts, none of our guid addresses will match. That's literally it.
> you need something like a guid history array on the object for the forest to be able to
No such history is needed. I really think you're overcomplicating this.
> You have no mechanism (from your description) for B sending to [A]Guid or [C]Guid junk mail (assuming they've been able to discover the guids) using those guids.
I don't understand what you're trying to describe here.
> You say you would invalidate [A]Guid or [C]Guid -- but this doesn't resolve the issue of [A] and [C] now having to re-exchange Guids, for something that B has done.
If A and C have been introduced per the protocol I described, then anything B does has no impact on the relationship between A and C. If B sends them junk mail, the user (A or C) could decide to revoke B's access to them, or may opt to not revoke if they think it was an accident.
You could opt to track who introduced you and make revocation decisions based on that extra info too, but it's not strictly necessary.
> How do they "re-introduce" themselves, in other words, in that scenario?
In the case that a guid address has to be revoked but you want to keep the connection (perhaps the guid address leaked somehow), the mail agent would have to renew the connection by re-running the introduction protocol before revoking the previous guid, or the parties would have to request another introduction through someone they both know.
This is as simple as having a "mark as spam" button, and when the user clicks it, it asks whether they want to block the sender entirely or whether this was accidental (or something). If the former, the system revokes immediately; if the latter, the system re-runs the introduction protocol using the existing guids to get new ones, then revokes the old ones.
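A small sketch of the two outcomes of that button, again with invented names and using the same alias-table-as-dict idea:

```python
import uuid

def mark_as_spam(book, alias, keep_contact):
    """Either hard-revoke an alias, or rotate it while keeping the contact."""
    contact = book.get(alias)
    if contact is None:
        return None                 # alias already gone
    if not keep_contact:
        del book[alias]             # block entirely: their only address for you vanishes
        return None
    # Accidental leak: mint a replacement while the old alias still works,
    # hand it to the contact, and only then revoke the old one.
    new_alias = uuid.uuid4().hex
    book[new_alias] = contact
    del book[alias]
    return new_alias
```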
Qualitatively speaking, it definitely is a problem in Germany. DB planning rules used to include a guidance note that noise barriers higher than the typical lower edge of a train window should only be considered in exceptional cases. Two or three decades of constant complaints about train noise have led to legal rules and planning regulations being significantly tightened, and four-, five- or even six-metre-high noise barriers are now nothing out of the ordinary for newly built infrastructure.
The result is that people still complain about the fear of more noise when new infrastructure is proposed (despite freight trains having gotten quieter, too, due to the introduction of new brake shoes), but now they're complaining about the visual blight as well. (And nobody cares about the views of the train passengers.)
Meanwhile, in Switzerland, for example, the current state is that balancing noise-reduction needs against the visual impact of noise barriers is still an official planning goal - because apparently people there haven't been screaming quite so loudly about noise to the exclusion of everything else - so the Swiss tend to build fewer and lower noise barriers even today.
(Purely empirically, from my visits there, the UK also doesn't seem to build as many or as high noise barriers, even on infrastructure that has been newly built or rebuilt within the last two decades.)
Expediency is a gross over-simplification of why the cashless society is a growing trend.
For example, it costs more to produce lower-denomination coins than those coins are worth.[1] There's a correlation between the decrease in financial-related crimes and cashless vendors.[2] It is believed that cashless systems lead to less tax evasion.[3]
Having said this, I agree with the remainder of your statement - that it has the potential to discriminate against people who don't have and/or are unable to obtain bank accounts.
What's your problem? You selectively quoted a fraction of my statement and threw in a strawman.
Merchants don't want to keep registers that cashiers can steal from, to be robbed, to have to deal with counting cash, or to deal with deposits. That's 4 good reasons in their minds, and I've dealt with all of these. Have you?
> You selectively quoted a fraction of my statement and threw in a strawman.
I disagreed with the first part of your statement and agreed with the second part. Are you only reading to respond and not to comprehend? Also, how are studies or facts "a strawman"?
What does dealing with those 4 things have to do with your initial gross oversimplification?
Counting the drawer and deposits takes maybe 15 minutes a day. It's not difficult. Been running a small business for 10 mins and never had an issue with cash. God forbid someone robs us of the 300 bucks in the drawer. They can have it, and insurance will cover it.
In one example (in the '10s), the owner of the building that Unisys used to lease wanted to disproportionately increase the cost of the lease - so much so that it was cheaper for Unisys to close the site and offer relocation to the workers (notably, New York being a favoured relocation site).[1]