Hacker News | anon7000's comments

It makes sense for sites with a lot of static pages, but you barely need React in that case. NextJS does not perform that well out of the box. I’d argue that a basic SPA with no SSR, using something like Preact, would be a better choice for many people building dashboards or applications (not marketing/docs sites). It’s also easier to host and operate, and has fewer footguns.

Getting SSR right is tricky and barely even matters for a lot of use cases I’m seeing with Next.

Better server/client integration when it comes to rendering UIs is neat, but there are other technologies that solve for that at a more fundamental level (htmx, Phoenix).


It rather appears to make sense for any site that currently makes additional requests to fetch data as part of the page load.

It is broadly useful and relatively easy to use while still staying within the React framework the developer knows well.

That said, I didn't build more than a demo app with NextJS, so I don't know a lot about possible issues. Just the concept seems to be good.


If people paid attention, Kamala’s policies were decently well thought out and not the exact same as Biden’s.

It’s more about how demonized the libs have been, dissatisfaction with “woke culture,” certain groups of conservatives who will never vote pro-choice, and certain populations (young men) not feeling like they have a place in liberal dialogue. And it’s partly because she’s a Black woman.

I would argue the election had almost NOTHING to do with actual policy or cabinet choices, because Trump should have easily lost if it had. His previous cabinet was a disaster, and he can no longer attract the best and brightest due to his controversy. So, exactly as expected, his cabinet is a fucking disaster. People are attracted to the guy who will go apeshit on a system that doesn’t seem to work for them, even when stability and slow progress is actually better. (Fast progress is even better, but that’s not what Trump provides.)


Broadly speaking, people don't pay attention. The message from her campaign[1] was that she would do nothing differently from Biden. This is a mistake even if (especially if!) her actual policies are different from Biden's.

The only thing Harris could come up with was, eventually: "unlike Biden, I will have a Republican in my cabinet".

[1]: https://www.politico.com/news/2024/10/08/harris-biden-the-vi...


That gives you a chatbox tacked onto an IDE, not exactly an agentic command center. Cursor gets close. But it’s hard to work on multiple things at once, or across multiple codebases.

IMO, the answer is remote container environments like Codespaces, Coder, DevPod, etc. (dev containers)

We are moving to Codespaces now, and it basically gives us an isolated, full runtime environment with Docker-in-Docker running Postgres. Previously, developers had been trying all sorts of things: scripting worktrees, dealing with the jank of copying files into worktrees, managing git commands to orchestrate all of this, and managing port assignments.

Now with dev containers, we get a full end-to-end stack that we start up using Aspire (https://aspire.dev) which is fantastic because it's programmable.

All the ports get automatically routed and proxied and we get a fully functioning, isolated environment per PR; no fiddling with worktrees, easy to share with product team, etc.
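To make this concrete, here's a minimal devcontainer.json sketch of that kind of setup (the image tag, feature version, ports, and compose command are hypothetical placeholders, not our actual config):

```json
{
  "name": "fullstack-dev",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:20",
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "forwardPorts": [3000, 5432],
  "portsAttributes": {
    "3000": { "label": "web", "onAutoForward": "openPreview" },
    "5432": { "label": "postgres" }
  },
  "postStartCommand": "docker compose up -d postgres"
}
```

The `forwardPorts`/`portsAttributes` keys are what make the automatic port routing and shareable preview URLs work; the Docker-in-Docker feature is what lets supporting services run inside the isolated environment.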

A 64GB developer machine can realistically run ~2 of our full stacks (Pg, Elastic, Redis, Hatchet, Temporal, bunch of other supporting services). Frontend repo is 1.5m+ lines of TS (will grind small machines to a halt on this alone). In Codespaces? A developer could realistically work on 10 streams of changes at once and let product teams preview each; no hardware restrictions. No juggling worktrees, branches, git repo state.

I can code from any browser, from my phone, from a MacBook, from a Chromebook. Switching between workstreams? Just switch tabs. Fiddling around with local worktrees for small, toy projects seems fine. But for anything sizable, the future seems to be in dev containers.


I don’t understand your view. Reality is that we need some way to encode the rules of the world in a more definitive way. If we want models to be able to make assertive claims about important information and be correct, it’s very fair to theorize they might need a more deterministic approach than just training them more. But it’s just a theory that this will actually solve the problem.

Ultimately, we still have a lot to learn and a lot of experiments to do. It’s frankly unscientific to suggest any approaches are off the table, unless the data & research truly proves that. Why shouldn’t we take this awesome LLM technology and bring in more techniques to make it better?

A really, really basic example is chess. Current top AI models still don’t know how to play it (https://www.software7.com/blog/ai_chess_vs_1983_atari/). The models are surely trained on source material that includes chess rules, and even high-level chess games. But the models are not learning how to play chess correctly. They don’t have a model of how chess actually works; they only have a non-deterministic prediction based on what they’ve seen, even after being trained on more data about the topic than any chess novice has ever seen. And this is probably one of the easiest things for an AI to simulate: very clear, brief rules, a small problem space, no hidden information. But it can’t handle the massive decision space, because its prediction isn’t based on the actual rules, just on “things that look similar.”

(And yeah, I’m sure someone could build a specific LLM or agent system that can handle chess, but the point is that the powerful general purpose models can’t do it out of the box after training.)

Maybe more training & self-learning can solve this, but it’s clearly still unsolved. So we should definitely be experimenting with more techniques.
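To make the "rules vs. pattern-matching" distinction concrete, here's a toy sketch in plain Python (no chess library): legal knight moves are derived directly from the board's rules, so the answer is exact rather than a statistical guess about what looks plausible.

```python
def knight_moves(square: str) -> list[str]:
    """All legal knight destinations from a square like 'd4' on an
    empty 8x8 board, computed from the rules of movement rather
    than predicted from examples of past games."""
    file, rank = ord(square[0]) - ord("a"), int(square[1]) - 1
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    moves = []
    for df, dr in deltas:
        f, r = file + df, rank + dr
        if 0 <= f < 8 and 0 <= r < 8:  # stay on the board
            moves.append(chr(ord("a") + f) + str(r + 1))
    return sorted(moves)

# A corner knight has exactly two legal moves, a central one has eight.
print(knight_moves("a1"))  # ['b3', 'c2']
print(len(knight_moves("d4")))  # 8
```

A rules engine like this never emits an illegal move; an LLM sampling from a distribution over plausible-looking moves can, which is the gap being described.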


> Reality is that we need some way to encode the rules of the world in a more definitive way

I mean, sure. But do world models, the way LeCun proposes them, solve this? I don't think so. JEPAs are just an unsupervised machine learning model at the end of the day; they might end up being better than just autoregressive pretraining on text+images+video, but they are not magic. For example, if you train a JEPA model on data of orbital mechanics, will it learn actually sensible algorithms to predict the planets' motions, or will it just learn a mix of heuristics?


I mean, that’s how growth works. Like, if the economy normally grows, the economy is always the biggest it’s ever been. Debt’s always the highest. Human population is always the largest. Number of companies is always increasing. Amount of important economic infrastructure financed by debt is ever growing.

To be fair, I’m not saying our debt is in a good place. But just that we should expect it to always be the most it’s ever been, just like we’d always expect the economy to be the largest it’s ever been.

By itself, it doesn’t mean anything that it’s always increasing. What matters more is how quickly the debt is growing, and whether we’re keeping up with paying it off.
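A back-of-the-envelope sketch of that point (all numbers hypothetical): nominal debt sets a record every year by construction, so the informative quantity is the debt-to-GDP ratio, which depends only on the gap between the two growth rates.

```python
# Hypothetical economy: debt grows 5%/yr, GDP grows 4%/yr.
debt, gdp = 100.0, 400.0  # starting ratio: 0.25

for year in range(30):
    debt *= 1.05  # a record nominal high every single year
    gdp *= 1.04

ratio = debt / gdp
print(round(debt, 1))   # headline number: "highest debt ever"
print(round(ratio, 3))  # the ratio crept up only because 5% > 4%
```

Flip the two rates and the debt still hits a nominal record every year, while the ratio steadily falls, which is why "highest ever" on its own tells you nothing.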


> Debt’s always the highest.

You don’t necessarily need to fund growth with debt; look at Norway, for example. They have modest GDP growth and a net asset position. (And yes, I know it’s because of their mineral wealth; I’m just saying that growth doesn’t necessarily entail growing debt.)


I think if you’re playing around with apps & Tailscale on your NAS, it’s a homelab.

Or just use Tailscale serve to put the app on a subdomain.

It’s easy to produce a high volume of code, sure, but it is not equally easy to test, verify, and integrate it. And with a high volume of code, there is a high volume of shit to review & test & integrate. For companies that give a shit about not vibe coding their way into a disaster (because they have lucrative enterprise contracts that depend on reliability & security), that’s the real blocker. (Plus, these types of projects are big, not trivial, and things are harder to integrate & properly test because of that.)

Not to mention, if a team wants to keep a semblance of understanding of what they own & ship… it can be exhausting to have a huge volume of new code coming into the system.

It’s definitely a productivity unlock. For sure. But there are a lot of knock-on effects we’re still figuring out that counteract how much extra “value” we’re shipping


In my case, the volume of code is roughly the same. I'm not using the efficiency towards pumping out more code, just using it to be AFK more.

I spend enough time iterating and refining to the point I'm comfortable taking ownership of the outputted code. Perhaps hypocritically, I do mald when people upload code for review that they clearly haven't taken the effort to read through critically.


Well, but AI also helps by writing good comments & docs :-) (one area where I’m usually too brief).

Yeah, it’s very good for productivity & focus to have some easy tasks to give you a quick win & dopamine boost. Spending all day reviewing AI output is NOT that.

I suppose some say this is an argument in favor of non-human workers, but the whole point of this endeavor should be to improve human society. (Isn’t that, allegedly, what tech companies are all about? :p)


They literally spent a decent chunk of money spinning up a line of business that could only make money if the tariffs were illegal.

> Did they know it was illegal

It doesn’t have to be black and white. They knew enough to spin up a business that could only make money if the tariffs were overturned, which means they knew the probability was high.




Insurance company deal: if you pay us $X now, and then Y happens, we will make you whole, even though that cost may very well exceed $X.

Lutnick deal: we pay you $X' now, and if Y' happens, we collect everything which will substantively exceed $X'.

This is not insurance; it’s closer to shorting stocks.

Oh, one other thing: the insurance company has essentially nothing to do with Y at all, in the sense that they have no control over Y and generally speaking no involvement in it (think: accidents, floods, storms, fires). By contrast Lutnick is the Secretary of Commerce of the United States of America.


Lutnick deal: if we pay you $X now, and then Y happens, you will make us the whole refund, even though that windfall may very well exceed $X.

Insurance company deal: you pay us $X' now, and if Y' happens, we pay for everything which may substantively exceed $X'.


I have no idea what the point of this is, since it just restates what I wrote, and reinforces the point that the Lutnick deal is nothing like "insurance".

They are the same. If you don’t get it, then I’m not sure I can help you further.
