I’m looking at the site and right at the beginning it says:
> Standard.site provides shared lexicons for long-form publishing on AT Protocol. Making content easier to discover, index, and move across the ATmosphere.
Which part of these required a new protocol and couldn't be built before @at existed? Seems to me we're reinventing the wheel, for a benefit I'm not entirely sure of. But maybe someone who's more into this part of the web can educate me on this.
> Which part of these required a new protocol and couldn't be built before @at existed? Seems to me we're reinventing the wheel, for a benefit I'm not entirely sure of.
The atproto folks went and categorized all of the other attempts to do this at the time. (They even had some I hadn't heard of!)
All of them make various tradeoffs. None of them were the set of tradeoffs the team wanted. So they needed to make some new things. That's really the core of it.
The sibling comment covers one of the biggest and most concrete examples, but this is the underlying reason.
Did that require an entirely new protocol, though? I am 100% sure that if Twitter, Facebook, and all the other platforms decided they wanted to offer a way to move accounts around, they could do it.
The protocol is much more than data portability, it essentially turns the global social media system into a giant distributed system anyone can participate in at any point. Imagine if FB also let you tap into the event stream or produce your own event stream other FB users could listen to in the official FB app. That would be a pretty awesome requirement for all social media apps, yea?
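To make the "tap into the event stream" point concrete, here is a minimal sketch of subscribing to an atproto relay firehose. It assumes the public relay endpoint at wss://bsky.network and the third-party `websockets` package; a real consumer would also decode the DAG-CBOR frames rather than just counting bytes.

```python
# Minimal sketch: listen to an AT Protocol relay firehose.
# Assumes the public relay at wss://bsky.network and `pip install websockets`;
# each frame is DAG-CBOR (header + body) and is left undecoded here.
import asyncio
import websockets

FIREHOSE = "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos"

async def tap_firehose(limit: int = 10) -> None:
    async with websockets.connect(FIREHOSE) as ws:
        for i in range(limit):
            frame = await ws.recv()  # raw binary frame from the event stream
            print(f"event {i}: {len(frame)} bytes")

if __name__ == "__main__":
    asyncio.run(tap_firehose())
```

The point is that any relay exposing this stream works, and anyone can both consume it and publish records that end up in it via their own PDS.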
I am not debating that. But this same reasoning applies to @at or any other implementation. You have to be willing to implement the features and use the protocol. So I still don’t see why this is any different.
Forgive me this shameless ad :) With the latest performance updates, Shufflecake ( https://shufflecake.net/ ) is blazing fast (so much so, in fact, that it exceeds the performance of LUKS/dm-crypt/VeraCrypt in many scenarios, including on SSDs).
I see a lot of comments recommending TrueCrypt/VeraCrypt here, which is fine, but did you know there is something even more interesting? ;)
Shufflecake ( https://shufflecake.net/ ) is a "spiritual successor" to TrueCrypt/VeraCrypt but vastly improved: it works at the block device level, supports any filesystem of your choice, can manage many nested layers of secrecy concurrently in read/write, comes with a formal proof of security, and is blazing fast (so much so, in fact, that it exceeds the performance of LUKS/dm-crypt/VeraCrypt in many scenarios, including on SSDs).
Disclaimer: it is still a proof of concept, only runs on Linux, and has no security audit yet. But there is a prototype for the "Holy Grail" of plausible deniability on the near-future roadmap: a fully hidden Linux OS (it boots a different Linux distro or Qubes container set depending on the password entered at boot). Stay tuned!
It is scary! My coping mechanism, which I admit is stupid, is to believe that no matter what I do, as long as I am online they have my data. But you are right: most people just give away an absurd amount of data willingly.
And the surveillance could be inversely correlated with profitability. If they pour billions into these chatbots and can't monetize them as the revolutionary oracles they touted, one minor consolation is to sell detailed profiles of the people using them. You could probably sort out the less intelligent people based on what they were asking.
> "but then WHAT is a good measure for QC progress?" [...] you should disregard quantum factorization records.
> The thing is: For cryptanalytic quantum algorithms (Shor, Grover, etc) you need logical/noiseless qubits, because otherwise your computation is constrained [...] With these constraints, you can only factorize numbers like 15, even if your QC becomes 1000x "better" under every other objective metric.

> So, we are in a situation where even if QC gets steadily better over time, you won't see any of these improvements if you only look at the "factorization record" metric: nothing will happen, until you hit a cliff (e.g., logical qubits become available) and then suddenly scaling up factorization power becomes easier. It's a typical example of non-linear progress in technology (a bit like what happened with LLMs in the last few years) and the risk is that everyone will be caught by surprise.

> Unfortunately, this paradigm is very different from the traditional, "old-style" cryptanalysis handbook, where people used to size keys according to how fast CPU power had been progressing in the last X years. It's a rooted mindset which is very difficult to change, especially among older-generation cryptography/cybersecurity experts.

> A better measure of progress (valid for cryptanalysis, which is, anyway, a very minor aspect of why QC are interesting IMHO) would be: how far are we from fully error-corrected and interconnected qubits? [...] in the last 10 or more years, all objective indicators in progress that point to that cliff have been steadily improving
I agree that measuring factorisation performance is not a good metric for assessing progress in QC at the moment. However, the idea that once logical qubits become available we reach a cliff is simply wishful thinking.
Have you ever wondered what will happen to those coaxial cables seen in every quantum computer setup, which scale approximately linearly with the number of physical qubits? Multiplexing is not really an option when the qubit waiting for its control signal decoheres in the meantime.
Oh, I didn't mean to imply that the "cliff" is for certain. What I'm saying is that articles like Gutmann's fail to acknowledge this possibility.
Regarding the coaxial cables: you seem to be an expert, so tell me if I'm wrong, but this looks to me like a limitation of current designs (and in particular of superconducting qubits); I don't think there is any fundamental reason why this could not be replaced by a different tech in the future. Plus, the scaling doesn't need to be infinite, right? Even with current "coaxial cable tech", it "only" needs to scale up to the point of reaching one logical qubit.
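To put rough numbers on that "scale up to one logical qubit" goal, here is a back-of-the-envelope sketch using the textbook surface-code heuristics; the threshold, physical error rate, and prefactor below are illustrative assumptions, not figures from any real device.

```python
# Back-of-the-envelope surface-code scaling (illustrative constants only):
# below threshold, the logical error rate falls roughly as (p/p_th)^((d+1)/2)
# while the overhead grows as ~2*d^2 physical qubits per logical qubit.
# This is also why progress can look flat on "factorization records" and then
# improve very quickly once error correction starts to work.
P_TH = 1e-2    # assumed threshold error rate
P_PHYS = 1e-3  # assumed physical error rate per operation

def logical_error_rate(d: int, p: float = P_PHYS, p_th: float = P_TH) -> float:
    """Common heuristic: p_L ~ 0.1 * (p / p_th) ** ((d + 1) // 2)."""
    return 0.1 * (p / p_th) ** ((d + 1) // 2)

def physical_qubits_per_logical(d: int) -> int:
    """Rough surface-code overhead: ~2 * d^2 physical qubits per logical qubit."""
    return 2 * d * d

for d in (3, 7, 11, 15, 21):
    print(f"d={d:2d}: ~{physical_qubits_per_logical(d):4d} physical qubits, "
          f"p_L ~ {logical_error_rate(d):.1e}")
```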
> I don't think there is any fundamental reason why this could not be replaced by a different tech in the future.
The QC is designed with coaxial cables running from the physical qubits to outside the cryostat because the pulse measurement apparatus is most precise in large, bulky boxes. When you miniaturise it for placement next to the qubits, you lose precision, which increases the error rate.
I am not even sure whether logical components work at such low temperatures, since everything becomes superconducting.
> Even with current "coaxial cable tech", it "only" needs to scale up to the point of reaching one logical qubit.
Having a logical qubit sitting in a big box is insufficient. One needs multiple logical qubits that can interact and be put in a superposition, for example. Each logical-qubit gate translates into a chain of gates between pairs of physical qubits, and that chain can't be applied all at once; hence one effectively needs to solve the 15-puzzle in the fewest steps so that the qubits don't decohere in the meantime.
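For readers who haven't seen the routing problem being described: on a fixed coupling grid, a two-qubit gate between distant physical qubits first needs a chain of SWAPs, and every extra SWAP eats coherence time. A toy sketch on an assumed 4x4 grid (a real router has to schedule many such chains at once, which is where the 15-puzzle flavour comes from):

```python
# Toy sketch of qubit routing on a 4x4 coupling grid: find the shortest chain
# of neighbouring physical qubits between two endpoints, then count the SWAPs
# needed before a single two-qubit gate can be applied.
from collections import deque

SIZE = 4  # assumed 4x4 grid of physical qubits

def neighbors(q):
    r, c = q
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < SIZE and 0 <= nc < SIZE:
            yield (nr, nc)

def shortest_path(src, dst):
    """Breadth-first search over the coupling grid."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        q = queue.popleft()
        if q == dst:
            path = []
            while q is not None:
                path.append(q)
                q = prev[q]
            return path[::-1]
        for n in neighbors(q):
            if n not in prev:
                prev[n] = q
                queue.append(n)

path = shortest_path((0, 0), (3, 3))
swaps = len(path) - 2  # SWAP along the path until the two qubits are adjacent
print(f"{swaps} SWAPs before one two-qubit gate, each one costing coherence time")
```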
> I am not even sure whether logical components work at such low temperatures, since everything becomes superconducting.
I'm currently finishing a course where the final project is designing a semiconductor (quantum dot) based quantum computer. Obviously not mature tech yet, but it was stressed throughout the course that you can build most of the control and readout circuits to work at cryogenic temps (2-4 K) using SLVT FETs. The theoretical limit for this quantum computing platform is, I believe, on the order of a million qubits in a single cryostat.
> you can build most of the control and readout circuits to work at cryogenic temps (2-4 K) using SLVT FETs
Given the magic that happens inside the high-precision control and readout boxes connected to qubits with coaxial cables, I would not equate being able to build such a control circuit with it ever reaching the same level of precision. I find it strange that I haven't seen that on the agenda for QC, where instead I see multiplexing being used.
> The theoretical limit for this quantum computing platform is, I believe, on the order of a million qubits in a single cryostat.
The results of the 2025 elections for president and board members at the International Association for Cryptologic Research (IACR) have been botched: the tally from the super-secure cryptographic e-voting system cannot be retrieved due to the "accidental loss" of a decryption key.
While human mistakes happen, this incident comes under very troubling circumstances.
Why does an e-voting system of an association like IACR not support t-out-of-n threshold decryption (see the sketch after these questions)?
Why is a system where a single party can collude to invalidate the vote considered acceptable?
Wouldn't it be wiser to freeze the eligibility status for voting at the date of November 20th, instead of "calling to arms" IACR members who had previously decided to opt out of Helios emails?
Does the identity of some of the candidates for Director represent a problem for IACR?
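For readers unfamiliar with the term in the first question: t-out-of-n threshold decryption means the election key is split among n trustees so that any t of them can decrypt the tally, and no single lost or withheld share can block it. A toy sketch of the underlying primitive, Shamir secret sharing (toy prime and parameters for illustration; this is not the actual Helios trustee setup):

```python
# Toy t-out-of-n secret sharing (Shamir): any T of N shares reconstruct the
# key, so one "accidentally lost" share cannot invalidate the election.
# Toy field and parameters for illustration only; requires Python 3.8+.
import random

P = 2**127 - 1  # prime modulus for the toy field
T, N = 3, 5     # any 3 of 5 trustees suffice

def share(secret: int):
    """Split `secret` into N points on a random degree-(T-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(T - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, N + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any T shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)
shares = share(key)
assert reconstruct(random.sample(shares, T)) == key  # two shares can go missing
print("any 3 of the 5 trustees recover the key")
```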
I like the skepticism against Bluesky, and I agree that where VC money is involved things are mostly sketchy.
However, this post was about the AT Protocol, which it seems you just hand-waved away in one sentence:
> The AT Protocol used by Bluesky has some interesting features, although to be honest I don't know how many of these are just impossible to achieve on ActivityPub or are just WIP lagging behind due to funding constraints.
I don't think the debate between them is super useful because their architectures are very different.
You also mentioned an issue with the Bluesky relay, but other relays already exist, so it's not technically tied to Bluesky. Heck, I think the fact that multiple can exist at the same time, while it degrades the social aspect, still makes it decentralized.
> I don't think the debate between them is super useful because their architectures are very different.
Sure, that's true, but I personally care mostly about one question: who holds the keys to the kingdom? In this respect, I think the AT Protocol fails spectacularly, mainly due to the lack of a credible strategy for implementing truly self-custodial identities.
> You also mentioned an issue with the Bluesky relay, but other relays already exist, so it's not technically tied to Bluesky. Heck, I think the fact that multiple can exist at the same time, while it degrades the social aspect, still makes it decentralized.
Yes, but this is also true for Nostr, Diaspora, Mastodon, etc. The difference being, last time I checked (and of course things might have changed in the meantime) with AT Protocol it was only possible to self-host part of the infrastructure (and hosting the relay is insanely demanding).
This is another example of gaslighting from Bluesky that just makes me angry. How in the holiest of Hells does an "Identity directory controlled by a Swiss Association" make the whole thing better?
Sorry, not buying it. I don't have a horse in the race, but won't fall for the marketing.
I agree with the sentiment, and I wouldn't call Bluesky "open social"; I don't trust them either. But I still don't find these to be arguments against the protocol per se, which I find really interesting.
> Who holds the keys to the kingdom? In this respect, I think the AT Protocol fails spectacularly, mainly due to the lack of a credible strategy for implementing truly self-custodial identities
From what I've read, you can still own the entire stack from top to bottom; none of it is necessarily tied to Bluesky.
Even the identity management being discussed only applies to Bluesky and whatever ecosystem subscribes to it; but in theory, you could create your own social platform with a new one (you'd obviously lose that ecosystem).
But then again, this would also apply to Mastodon, since whoever owns the instance could always nuke it, and if you run your own instance, you need to build a network that trusts you. There's always an authority involved.
> The difference being, last time I checked (and of course things might have changed in the meantime) with AT Protocol it was only possible to self-host part of the infrastructure (and hosting the relay is insanely demanding).
Well, it's definitely not the "50TB" you mentioned; e.g., here is someone running a relay on a $34/month VPS that isn't going to accumulate more disk: https://whtwnd.com/bnewbold.net/3lo7a2a4qxg2l
But its importance is overblown anyway; it's just a json transmitter for signed data. I think the PDS and identity management are the bigger concerns, and I hope there's a better way to decentralize those (if that makes sense).
EDIT: You're still correct that to fully spin up a new Bluesky on your own, you'd need an insane amount of storage to host all the data that's currently stored on Bluesky (especially the did:plc directory and the PDS repos). All good arguments against the company, but that's only because people are choosing to store their PDS repositories on Bluesky.
You could just as well point your repo to your own server and use a different social media app. They could go under, and someone else could create a new AppView. I find that really cool; it still leaves the identity issue open, though.
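On that identity point, the portable piece is the DID: a handle resolves to a did:plc (or did:web), and the DID document in the PLC directory records which PDS currently hosts the repo. A minimal sketch of that lookup, assuming the public AppView endpoint, plc.directory, the `requests` package, and the handle from the post linked above as an example:

```python
# Minimal sketch of the identity lookup under discussion:
# handle -> DID -> DID document -> current PDS endpoint.
# Assumes the public Bluesky AppView and the plc.directory registry.
import requests

HANDLE = "bnewbold.net"  # example handle taken from the linked post

# 1. Resolve the handle to a DID (did:plc:... or did:web:...).
resp = requests.get(
    "https://public.api.bsky.app/xrpc/com.atproto.identity.resolveHandle",
    params={"handle": HANDLE},
    timeout=10,
)
did = resp.json()["did"]
print("DID:", did)

# 2. For did:plc, fetch the DID document from the PLC directory and read the
#    PDS service endpoint -- this directory is the part a Swiss association
#    would govern, which is why it "still leaves the identity issue open".
if did.startswith("did:plc:"):
    doc = requests.get(f"https://plc.directory/{did}", timeout=10).json()
    for svc in doc.get("service", []):
        if svc.get("id", "").endswith("atproto_pds"):
            print("PDS endpoint:", svc.get("serviceEndpoint"))
```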