ratorx's comments

I think I read somewhere (but could be wrong) that it was because they didn’t want to own any “authentication” services. Their infrastructure was zero trust (as in they don’t hold any passwords or private keys), just a discovery server for different devices.

Making it explicit wouldn’t be particularly problematic, no? Option<&Spouse> in Rust terms. Or, for this specific case, a GADT (Single | Married<&Spouse>)?

It could even use a special “NULL” address. Just don’t pollute every type with it.
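A rough sketch of the idea in Rust (names like Person/MaritalStatus are made up for illustration): nullability is opt-in per type, so a reference is never silently nullable.

```rust
// Sketch: "nothing" is expressed in the type, not baked into every reference.
struct Person {
    name: String,
}

// The sum-type version: a reference inside Married can never be "null".
enum MaritalStatus<'a> {
    Single,
    Married(&'a Person),
}

// Callers are forced to handle the "no spouse" case explicitly.
fn spouse_name<'a>(status: &MaritalStatus<'a>) -> Option<&'a str> {
    match status {
        MaritalStatus::Single => None,
        MaritalStatus::Married(p) => Some(p.name.as_str()),
    }
}

fn main() {
    let alice = Person { name: "Alice".into() };
    assert_eq!(spouse_name(&MaritalStatus::Single), None);
    assert_eq!(spouse_name(&MaritalStatus::Married(&alice)), Some("Alice"));
}
```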


Polluting every type with it is just a very bad implementation which has nothing in common with the concept of NULL as proposed by Tony Hoare in the paper "Record Handling".

The concept as proposed by Hoare is strictly necessary for things like partial relations, which are encountered very frequently in practice.

It is true however that a large number of programming languages have misused the concept of a NULL reference proposed by Hoare.

As you say, there must be a distinction between types that may have a "nothing" value and types that may not.


> The concept as proposed by Hoare is strictly necessary for things like partial relations

I read the 'Partial Functional Relationships' section. There's nothing special there. It's just structs with nulls.

> Polluting every type with it

I think you're reacting to the pollution of the syntax. Hoare already polluted the semantics: every type can be null. You can't even model a NonNull<Spouse>, because NonNull itself can be null.


Pure CS is not necessarily equivalent to pure maths. For the “science” bit of CS, you do need to do the equivalent of experiments (for more applied topics).

For example, a physics degree is expected to include experiments. You are not required or expected (and possibly do not want) to know the tools required to professionally build a bridge just because you did courses on mechanics. But you might do an experiment on the structural integrity and properties of small structures.

Whether this is a good split is an entirely different question.


Changing the links and doing nothing else would be a pretty dumb MITM. You could do a more complex variant that is not so easy to spot (targeting specific networks, injecting malware whilst modifying the checksum to match).

The key property of SSL that is useful for tamper resistance is that it’s hard to do silently. A random ASN doing a hijack will cause an observable BGP event, and it’s theoretically preventable via RPKI. If your ISP or similar does it, you can still detect it with CT logs.

Even the issuance is a little better, because LE will test from multiple vantage points. This doesn’t protect against an ISP interception, but it’s better than no protection.


> The key property of SSL that is useful for tamper resistance is that it’s hard to do silently.

Not when you control the certificate issuer.


I find it interesting that they bought Astro (https://blog.cloudflare.com/astro-joins-cloudflare/), which from my definitely-not-a-frontend-person perspective seems to tackle a similar problem to Next. A month ago.

If it is so cheap to make something that they recommend using (rather than a proof of concept), why buy Astro? Presumably it was more expensive than the token cost of this clone.

One conclusion is that, at the organisational level, it still makes sense to hire the “vision” behind the framework, rather than just clone it. Alternatively, maybe AI has improved that much in 1 month!


Unlike Astro, this looks more like a low effort experiment while making fun of a competitor than a project intended for any serious production use.

Maybe I'm wrong. We'll see what happens a couple of years from now.


I'm very patient with the AI-led porting projects, since they're revealed with a big engagement splash on social media. Could it be durable? Sure, but I doubt anyone is in that much of a rush to migrate to a project built in a week either.


I view it as a long-overdue exit ramp for maintainers of Next.js-based webapps to extricate themselves from its overly-opinionated and unnecessarily-tightly-coupled build tooling. Being stuck on webpack/rspack and unable to leverage vite has been a huge downside to Next.js. It's a symptom of Vercel's economic incentives. This project fixes it in one fell swoop. I predict it hurts Vercel but saves Next.js.


Astro is a different paradigm. Acquiring Astro gives Cloudflare influence over a very valuable class of website, in the same way Vercel has influence over a different class through their ownership of Next.js. Astro is a much better fit for Cloudflare. Next.js is very popular and god awful to run outside of Vercel; Cloudflare aren’t creating a better Next.js, they’re just trying to make it so their customers can move Next.js websites from Vercel to Cloudflare. Realistically, anyone moving their Next.js site to Cloudflare is going to end up migrating to Astro eventually.


Can you talk more about this? What’s wrong with cloudflare pages plus Nextjs? Why do you need Astro?

Thanks


Astro isn’t solving the same surface as Next. Astro is great for static sites with some dynamic behavior. The same could be said about Next, depending on how you write your code, but Next can also be used for highly dynamic websites. Using Astro for highly dynamic websites is like jamming a square peg into a round hole.

We use Astro for our internal dev documentation/design system and it’s awesome for that.


But presumably if you could do this for Next it would be at least as easy for Astro?


Astro is not a static site builder.


I think they just want to steer users/developers to CF products, maybe not? It is interesting to see the two platforms. I've moved to Svelte; never been a frontend person either, but kind of enjoying it actually.


> which from my definitely-not-a-frontend-person perspective seems to tackle a similar problem to Next.

It does not. Astro is more for static sites not dynamic web apps.


That used to be the case 3-4 years ago. Today Astro is very much a serious contender for dynamic web apps.


I tried it about 6 months ago for something I had to redo in NextJS afterwards, it is really not built for those sorts of web apps, even today.


What features are missing?


Astro has "server islands" which rely on a backend server running somewhere. If 90% of the page is static but you need some interactivity for the remaining 10%, then Astro is a good fit, as that's what makes it different than other purely static site generators. Unlike Next.js, it's also not tied to React but framework-agnostic.

Anyways, that's why it's a good fit for Cloudflare: that backend needs to run somewhere, and Astro is big enough to have some sort of a userbase that Cloudflare can advertise its services to. Think of it more as a targeted ad than an acquisition made because they're super interested in the technology behind it. If that were the case, they could've just forked it instead of acquiring it.

From Astro's perspective, they're (presumably) getting more money than they ever did working on a completely open source tool with zero paywalls, so it's a win-win for both sides that Cloudflare couldn't get from their vibe-coded project nobody's using at the moment.


You buy that concern to hire the people. The stack is already free.


You can make it so employees don’t have ambient access to data, and require multi-party approval for all actions that touch user data. Giving away a user’s password should be treated as a routine risk.

I’m not saying that’s how it actually works, or that this process doesn’t have warts, but the ideal of individual employees not having direct access is not novel.


It doesn’t have to be a compile-time constant. An alternative is to prove that, when you are calling the function, the index is always less than the size of the vector (a dynamic constraint). You may be able to assert this by having a separate function on the vector that returns a constrained value (e.g. n < v.len()).
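A minimal sketch of that shape in Rust (BoundedIndex and check_index are hypothetical names): the bounds check happens once, and the wrapper type carries the "proof" afterwards.

```rust
// A BoundedIndex can only be obtained by checking against a slice's length,
// so later uses can rely on it being in bounds. Note this simple version
// doesn't statically tie the index to one particular vector; branded
// (generative) lifetimes would be needed for that.
#[derive(Clone, Copy)]
struct BoundedIndex(usize);

// The only constructor: returns Some only when i < v.len().
fn check_index<T>(v: &[T], i: usize) -> Option<BoundedIndex> {
    if i < v.len() { Some(BoundedIndex(i)) } else { None }
}

fn get<T: Copy>(v: &[T], idx: BoundedIndex) -> T {
    // In bounds by construction, for the slice it was checked against.
    v[idx.0]
}

fn main() {
    let v = vec![10, 20, 30];
    assert!(check_index(&v, 3).is_none()); // out of bounds: unrepresentable
    let idx = check_index(&v, 1).expect("in bounds");
    assert_eq!(get(&v, idx), 20);
}
```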


A global train outage would be quite a spectacle, is that even possible?


Management not having to listen to engineers is the structural problem. How do managers know which concerns that engineers bring up are actually relevant? How do engineers know which concerns have real-world consequences (without an incredibly high burden of proof)?

Having regulation, or standardisation is a step toward producing a common language to express these problems and have them be taken seriously.

Leadership gets a strong signal - ignoring engineers who surface regulated issues has large costs. The company might be sued, and executives are criminally liable (if discovered to have known about the violation).

Engineering gets the authority and liability to sign off on things - the equivalent of “chartership” in regular fields with the same penalties. This gives them a strong personal reason to surface things.

It’s possible that this is harder for software engineering in its entirety, but there is definitely low-hanging fruit (password storage, security, etc.).


I think it depends on the scope and level of solution I accept as “good”. I agree that the thinking for the “next step” is often too easy architecturally. But I still enjoy thinking about the global optimum or a “perfect system”, even if it’s not immediately feasible, and can spend large amounts of time on this.

And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.

I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.

I think local code architecture was a rare domain where “optimality” is actually tractable, with the joy that comes with that, and LLMs are harmful to it. But I don’t think there’s nothing to replace it with.

