There are so many more ways one could screw up, and you only need to screw up once. For example, does X do browser fingerprinting, and did you ever use a similar setup with a more identifiable Twitter account? Are you using unique phrasings or behavioral patterns? These things can be solved to a satisfactory degree, but I don't think "it's not hard" captures it - for an average user it _is_ hard.
> Are you using unique phrasings or behavioral patterns?
Why would Twitter voluntarily run that sort of query to satisfy a subpoena?
Whether it's difficult and risky for the average user depends on the threat model. "Twitter doesn't directly have my name, address, or phone number sitting in their database next to my account" is easy. Other things are more difficult.
Phrasing idiosyncrasies are publicly observable, and anyone can note - as external observers did in the Kaczynski and Hanssen cases - that a particular phrasing is quaint. It is probably true that Twitter is unlikely to run a browser fingerprinting query to de-anonymize someone tweeting spoilers from a softcore porn show. But a potential leaker has to ask: "how sure am I of that?"
A major problem with the article is the author's inability to weigh the evidence: actual evidence, like the presence/absence pattern, is buried, whereas p-hacked stylometry (let me try another expert, this one didn't give me what I wanted! let me feed him the Satoshi/Adam Back tells that I'm already in love with!) makes up the majority of the article. It also includes absolute garbage like the vistomail spoof email during the block size wars. And, oh by the way, both Satoshi and Adam Back knew C++. The Theranos evidence was binary (the machines either work or they don't), but it is not so here, and the author is simply out of his depth.
It is sad - but entirely unsurprising - that the NYT decided to paint a big target on someone's back just for clicks. Judith Miller-tier all over again. Miller too had real evidence and junk evidence, couldn't distinguish between the two, and editors wanted a flashy headline. Carreyrou has exactly the same problem here: NYT editors need multimedia events (like the junk stylometry filtering - watch the number shrink from 34,000 to 562 to 114 to 56 to 8 to 1!!!) because that's what the audience-product relationship demands. I think it not unfair to say that the modern Times' editorial culture has no mechanism for distinguishing rigorous inference from merely compelling narrative. Open the front page on a random day: how often do you see the Times staking credibility on a causal claim ("A causes B") versus simply "X happened. Then Y came." vibes/parataxis?
I've had the fortune/misfortune to be directly or peripherally involved in nearly a dozen situations that made it to press, and there isn't a single case where the story as represented in the article wasn't blatantly misinterpreted from the facts. In nearly every case, what was mentioned in the article was the complete opposite of what actually happened. The biggest/most egregious offenders were Vice and Vox Media, but the NYT, WaPo, and Time are included.
One can only narrow the things they care about to those they can verify (or that personally affect them), go after primary sources themselves, and form their own conclusions. I'm no longer convinced that modern journalism is good for anything more than starting bonfires.
Can you give some examples? I'm very interested in this. (After all, we had about a decade of crying "fake news" - and as far as I understand, the verdict was that big traditional outlets get the basic facts right - who, what, where, when - but are absolutely clueless about, or intentionally spin, the "why".)
No meaningful ones that I'd want to reveal without doxxing myself. I can give you one of someone else's that can be independently confirmed.
https://www.vice.com/en/article/sugar-weasel-the-clown-escor... This article by Vice is 100% bullshit. Vice basically published this PR piece for the guy as a favor. A lot of articles that you read are really coordinated press releases - like the initial Blake Lively v Justin Baldoni NYT hitpiece. Yes, I know this is dumb and totally entertainment and not "news", but this article actually harmed the business of the guy that Weasel ripped his shtick off from. Aaron Zilch used to rant for years about this guy and how bullshit this article was. There's a small clown kink/BDSM community in Vegas, and those in it at the time this was published all called it out for the bullshit it was. They asked Vice for a correction/retraction and Vice did nothing.
Somebody handed them a clickbait story and they published it for the clicks.
Knoll's Law of Media Accuracy: "Everything you read in the newspapers is absolutely true, except for the rare story of which you happen to have firsthand knowledge."
See also, Gell-Mann Amnesia effect.
Most reporting is garbage once you get into the details.
eCash is an answer to "can you move value privately?" (yes, with concessions). Radius is solving "can you execute a million conditional state transitions per second at sub-penny costs?" Blind signatures are still just value transfers and offer no expressivity. If your agent wants DvP with conditional logic, or trustless escrow that releases on programmatic conditions, Chaumian eCash doesn't help. LN gets you close to some forms of atomic exchange, but close-to-atomic isn't atomic, and neither gives you composability. You can move balances, but you can't orchestrate multi-step workflows that can't unwind in the middle.
I imagine you could have some contract that interacts with ecash, based on revelation of a blind signature - similar to an HTLC.
I have actually written up some ideas for a system based on something like this that I called "chaumian coupons", but I would have to go through my notes to find it.
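Not the parent's actual design - just a minimal hash-lock sketch (Python, hypothetical names) of the shape such a contract could take; in a "chaumian coupon" the claim check would presumably verify an unblinded mint signature rather than a hash preimage:

    import hashlib, time

    class CouponEscrow:
        """Toy HTLC-style escrow: funds release on revealing a secret, else refund after a timeout.
        (Stand-in for verifying an unblinded ecash/blind signature instead of a hash preimage.)"""

        def __init__(self, amount, hashlock, recipient, refund_to, timeout_s):
            self.amount = amount
            self.hashlock = hashlock          # sha256 of the secret the recipient must reveal
            self.recipient = recipient
            self.refund_to = refund_to
            self.deadline = time.time() + timeout_s
            self.settled = False

        def claim(self, secret: bytes):
            # Recipient reveals the secret (in the coupon idea: a valid unblinded signature).
            assert not self.settled and time.time() < self.deadline
            assert hashlib.sha256(secret).hexdigest() == self.hashlock
            self.settled = True
            return (self.recipient, self.amount)

        def refund(self):
            # Sender reclaims the funds if the secret was never revealed in time.
            assert not self.settled and time.time() >= self.deadline
            self.settled = True
            return (self.refund_to, self.amount)

    # Usage: lock 10 units behind sha256(b"ecash-token"); the recipient claims by revealing it.
    secret = b"ecash-token"
    escrow = CouponEscrow(10, hashlib.sha256(secret).hexdigest(), "alice", "bob", timeout_s=3600)
    print(escrow.claim(secret))   # ('alice', 10)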
Indeed, for any particular outcome you could design a custom protocol. In contrast, Radius gives you a generic execution environment. Going custom, you lose EVM tooling and audited standard contracts, and every bespoke protocol needs its own security analysis, its own edge-case handling, and its own integration work.
That said, the most important loss is composability: an agent can't take your Chaumian coupon escrow and snap it together with someone else's DvP protocol and a third party's rate-limiting wallet in a single atomic transaction. The power of a shared execution environment is that these interaction patterns don't need to be anticipated and designed around in advance. Agents aren't going to negotiate which custom protocol to use for every interaction (the necessary protocol might not even exist!), they will use shared infrastructure where composability of various primitives is the effortless default.
I think that in the large, efficiency does matter. You are optimizing a three-variable problem of efficiency, decentralization, and configuration of the smart contracts.
The fact that AI agents are involved does not necessarily tilt the scale towards the configurability of smart contracts. If you have extra cognition, you can probably spend some time thinking about what specialized protocol to use.
I am pessimistic on Ethereum and similar "open-ended" scripting cryptocurrencies because everything they can do can be replicated with a specialized cryptocurrency. So as soon as some killer app emerges (e.g. ENS), people will realize that it's cheaper/faster/more decentralized to do it with a specialized system (e.g. Namecoin).
The open-ended systems like Ethereum can only succeed if there is demand for constantly-novel open-ended scripting that interacts with other open-ended scripts all the time. I really don't think that's the case. If you can capture 90% of the use cases with different specialized cryptocurrencies and do atomic swaps between them, that will always be better from a technological standpoint.
The fundamental limit of cryptocurrencies is bandwidth, and how much data you can sync between all nodes: either you have a few high-bandwidth nodes or many low-bandwidth nodes. Splitting different contracts across networks is naturally better than sharing a single blockchain and competing for gas.
People don't want this to be the case because you can't achieve vendor/cryptocurrency lock-in, and "competition is for losers". If you have multiple independent blockchains, you pretty much must have multiple tokens. But it's true.
The moat for "utility" smart-contract systems is dependent on the cost of switching, which will be lowered by AI if anything. At the end of the day you can just fork the blockchain if you don't like the tokenomics or even just the fees. Some assembly required, but that is why stuff like litecoin can exist in this market.
There is only one type of "like" on X. Since June 2024, all likes (both historical and new) are hidden from profiles, but they aren't fully anonymous: post authors can still see who liked their content (unless the "liker" has a protected account the author doesn't follow).
Bookmarks are the only truly private engagement—no one, including the author, can see who bookmarked a post, though the public count still increases.
A retweet actively redistributes content to your followers; a like signals approval (the author will normally see it) and influences the algorithm without that same direct amplification. Prior to the June 2024 update, your feed also had likes from people you follow.
On Reddit, the author says: "The preview is shown at 20fps for a 3x scale image (90x90 pixels) and 50fps for a 1x scale image. This is due to the time it takes to read the image data from the sensor (~10ms) and the max write speed of the display." He adds that the sensor's optical motion tracking runs at up to 6400 fps, but you can't actually transmit image data at that rate.
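Back-of-the-envelope on those numbers (my own arithmetic, taking the ~10 ms readout figure above at face value):

    readout_ms = 10  # approximate time to read a full frame off the sensor (figure quoted above)
    for label, fps in [("3x scale (90x90)", 20), ("1x scale", 50)]:
        period = 1000 / fps
        print(f"{label}: {period:.0f} ms per frame, ~{period - readout_ms:.0f} ms left for scaling + display writes")
    # on-chip motion tracking at 6400 fps is ~0.16 ms per frame, far under the ~10 ms full-frame readout,
    # which is why full images can't be streamed out at the tracking rate
    print(f"tracking: {1000 / 6400:.2f} ms per frame")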
None of these documents were actually published on the web by then, including a Watergate PDF bearing a date of Nov 21, 1974 - almost 20 years before the PDF format was released. Of course, the WWW itself only started in 1991.
Google Search's date filter is useful for finding documents about historical topics, but unreliable for proving when information actually became publicly available online.
Are you sure it works the same way for documents that Google indexed at the time of publication? (Because obviously for things that existed before Google, they had to accept the publication date at face value).
Yes, it works the same way even for content Google indexed at publication time. For example, here are chatgpt.com links that Google displays as being from 2010-2020, a period when Google existed but ChatGPT did not:
So it looks like Google prefers inferred dates over its own indexing timestamps, even for recently crawled pages from domains that didn't exist during the claimed date range.
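(For anyone who wants to check this themselves, Google's date operators make it a one-line query, e.g.

    site:chatgpt.com before:2020

which should only return results that Google dates before 2020 - so any hits on that domain would have to be carrying inferred dates.)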
That's a very good question. It all depends on how you pick the witness b: there are procedures that definitely are not zero-knowledge - say, if the prover uses his knowledge of the factorization to construct an explicit b that betrays that factorization.
For example, if n = p1*p2*...*pk is square-free and not a Carmichael number, then by Korselt's criterion there exists a pi such that pi-1 does not divide n-1 (this also implies that pi>2). Use the Chinese Remainder Theorem to produce b such that b=1 (mod pj) for all j!=i, and b (mod pi) is a generator of (Z/piZ)^*. Then b is a Fermat witness: gcd(b, n) = 1 (because b is non-zero modulo every prime factor) and b^(n-1) != 1 (mod n) because b^(n-1) != 1 (mod pi) (as pi-1 does not divide n-1).
However, b "betrays" the prime factorization of n, since gcd(b-1, n)>1 (by construction b-1 is divisible by all pj with j!=i, but not divisible by pi>2), and thus gcd(b-1, n) is a non-trivial factor of n. (I assumed square-free above but if pi^ei (ei>=2) divides n, then b=1+pi^(ei-1) (mod pi^ei), b=1 (mod pj^ej) (j!=i) also would have worked.)
On the other hand, it is also known that for non-Carmichael numbers at least half of the bases b with gcd(b, n) = 1 are Fermat witnesses. So if you pick b uniformly at random, the verifier does not gain any new information from seeing b: they could have sampled such a witness themselves by running the same random test. Put another way, the Fermat test itself is an OK ingredient, but a prover who chooses b in a factorization-dependent way can absolutely leak the factors - the final protocol won't be ZK.
I especially like how he highlights that the Collatz conjecture shows that a simple dynamical system can have amazingly complex behavior; also, the 3n-1 variant has known non-trivial cycles (one through 5, another through 17) - so "any proof of the Collatz conjecture must at some point use a property of the 3n+1 map that is not shared by the 3n-1 map." And this property can't be too general either - questions about FRACTRAN programs (of which the Collatz conjecture is a special case) can encode the halting problem.
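For fun, a quick brute force (my own Python sketch) that finds the cycles small starting values fall into under the 3n-1 map:

    def step(n):
        return n // 2 if n % 2 == 0 else 3 * n - 1

    def cycle_of(n):
        # follow the trajectory until a value repeats, return the cycle rotated to start at its minimum
        seen = []
        while n not in seen:
            seen.append(n)
            n = step(n)
        cyc = seen[seen.index(n):]
        i = cyc.index(min(cyc))
        return tuple(cyc[i:] + cyc[:i])

    # finds the trivial (1, 2) cycle, (5, 14, 7, 20, 10), and an 18-element cycle through 17
    print({cycle_of(n) for n in range(1, 10000)})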