Despite consensus forecasts predicting a slowdown to 1.8% growth, the US economy appears poised to accelerate in 2026, driven by a potent combination of aggressive fiscal and monetary loosening. Treasury Secretary Scott Bessent’s optimism is underpinned by the "One Big Beautiful Bill Act," which delivers retroactive tax cuts; by a rebound in government spending following a record 43-day shutdown; and by the possibility that the Supreme Court invalidates certain tariffs, triggering significant corporate refunds. This fiscal stimulus coincides with the Federal Reserve’s pivot to lower interest rates, a trend likely to intensify as President Trump seeks to appoint dovish leadership to the central bank. The synchronized stimulus supports bullish stock market projections and complements favorable global conditions like low oil prices, but it carries significant risks of reigniting inflation and spiking long-term bond yields. Even so, the absence of immediate shocks suggests the economy has ample scope to outperform expectations.
By late 2025, Boston’s Kendall Square biotech hub faces a severe downturn marked by a "biotech winter" of plummeting venture capital and soaring lab vacancies. Caused by high interest rates, domestic policy uncertainties, and intensifying global competition, this crisis has triggered a significant talent exodus, leaving recent PhD graduates overqualified and underemployed as companies freeze hiring to cut costs. The contraction has stalled critical medical research and threatened Boston’s economic stability, though industry leaders remain cautiously optimistic that adaptation strategies—such as AI integration and renewed merger activity—could spark a recovery by 2026.
This reads like a mid-life crisis. A few rebuttals:
1. Yes, humans cause enormous harm. That’s not new, and it’s not something a single technology wave created. No amount of recycling or moral posturing changes the underlying reality that life on Earth operates under competitive, extractive pressures. Instead of fighting it, maybe try to accept it and make progress in other ways?
2. LLMs will almost certainly deliver broad, tangible benefits to ordinary people over time, just as previous waves of computing did. The Industrial Revolution was dirty, unfair, and often brutal, yet it still lifted billions out of extreme poverty in the long run. Modern computing followed the same pattern. LLMs are a continuation of this trend.
Concerns about attribution, compensation, and energy use are reasonable to discuss, but framing them as proof that the entire trajectory is immoral or doomed misses the larger picture. If history is any guide, the net human benefit will vastly outweigh the costs, even if the transition is messy and imperfect.
Thanks for telling it like it is, overlord. We'll see. I'm guessing this divide will continue to grow until it threatens our whole economic system. When people can't pay for rent, food, and electricity, genAI isn't going to fix those problems.
I would love to see these people ripped apart as they tell the masses that it’s ok their children can’t eat now, because in 100 years they’ll be able to afford more mind-raping spyware and slop.
Or as they’re brutally killed in some sort of modern reign of terror, just shrug and let them know it’s ok: it’s just human nature, it has happened again and again, and a new, better society will eventually be born from it, so they should just accept it.
The distinction Karpathy draws between "growing animals" and "summoning ghosts" via RLVR is the mental model I didn't know I needed to explain the current state of jagged intelligence. It perfectly articulates why trust in benchmarks is collapsing; we aren't creating generally adaptive survivors, but rather over-optimizing specific pockets of the embedding space against verifiable rewards.
I’m also sold on his take on "vibe coding" leading to ephemeral software; the idea of spinning up a custom, one-off tokenizer or app just to debug a single issue, and then deleting it, feels like a real shift.
> The distinction Karpathy draws between "growing animals" and "summoning ghosts" via RLVR
I don't see these descriptions as very insightful.
The difference between general/animal intelligence and jagged/LLM intelligence is simply that humans/animals really ARE intelligent (the word was created to describe this human capability), while LLMs are just echoing narrow portions of the intelligent output of humans (those portions that are amenable to RLVR capture).
For an artificial intelligence to be intelligent in its own right, and therefore generally intelligent, it would need - like an animal - to be embodied (even if only virtually), autonomous, predicting the outcomes of its own actions (not auto-regressively trained), learning incrementally and continually, built with innate traits like curiosity and boredom to put and keep itself in learning situations, etc.
Of course not all animals are generally intelligent - many (insects, fish, reptiles, many birds) just have narrow, "hard coded" instinctual behaviors, but others like humans are generalists whom evolution has honed for adaptive lifetime learning and general intelligence.
> while LLMs are just echoing narrow portions of the intelligent output of humans
But they aren't just echoing, that's the point. You really need to stop ignoring the extrapolation abilities in these domains. The point of the jagged analogy is that they match or exceed human intelligence in specific areas in a way that is not just parroting.
It's tiresome in 2025 to keep having to use elaborate, long-winded descriptions of how LLMs work just to prove that one understands, rather than being able to assume that people generally understand and to use shorter descriptions.
Would "riffing" upset you less than "echoing"? Or an explicit "echoing statistics" rather than "echoing training samples"? Does "Mashups of statistical patterns" do it for you?
The jagged frontier of LLM capability is just a way of noting that they act more like a collection of narrow intelligences than like a general intelligence, whose performance might be expected to be more even.
Of course LLMs are built and trained to generate based on language statistics, not to parrot individual samples, but given your objection it's amusing to note that some of the areas where LLMs do best, such as math and programming, are the ones where they have been RL-trained to override these more general language patterns and instead more closely follow the training data.
> I’m also sold on his take on "vibe coding" leading to ephemeral software; the idea of spinning up a custom, one-off tokenizer or app just to debug a single issue, and then deleting it, feels like a real shift.
We should keep in mind that our LLM use is currently subsidized. When the money dries up and we have to pay the real prices, I’ll be interested to see whether we can still treat whipping up one-time apps as basically free.
The author is conflating a financial correction with a technological failure.
I agree that the economics of GenAI are currently upside down. The CapEx spend is eye-watering, and the path to profitability for the foundational model providers is still hazy. We are almost certainly at the inflated-expectations peak of the hype cycle, which will self-correct, and yes, "winter is harsh on tulips".
However, the claim that the technology itself is a failure is objectively disconnected from reality. Unlike crypto or VR (in their hype cycles), LLMs found immediate, massive product-market fit. I use K-means clustering and logistic regression every day; they aren't AGI either, but they aren't failures.
If 95% of corporate AI projects fail, it's not because the tech is broken; it's because middle management is aspiring to replace humans with a terminal-bound chatbot instead of giving workers an AI companion. The tech isn't going away, even if AI valuations might be questioned in the short term.
AI-text detection software is BS. Let me explain why.
Many of us use AI not to write text, but to rewrite it. My favorite prompt: "Write this better." In other words, AI is often used to fix awkward phrasing, poor flow, bad English, bad grammar, etc.
It's very unlikely that an author or reviewer purely relies on AI written text, with none of their original ideas incorporated.
As AI detectors cannot tell such rewrites from AI-originated writing, it's fair to call them BS.
What kind of world do you live in? Google ads actually tend to have some of the highest ROI for the advertiser and are among the most likely to be beneficial for the user, versus the pure junk ads that aren't personalized and the banner ads that have zero relationship to me. Google Ads is the enabler of the free internet. I for one am thankful to them. Otherwise you end up paying for the NYT, the Washington Post, The Information, etc. -- virtually any high-quality web site (including Search).
Most of the time, you need to pick one. Modern advertising is not based on finding the item with the most utility for the user - which means they are aimed at manipulating the user's behaviour in one way or another.
Lowering LDL cholesterol is arguably the most evidence-backed longevity intervention available today. Mendelian randomization studies suggest that each standard deviation of lifelong LDL reduction translates to roughly +1.2 years of additional lifespan, implying ~+2.4 to +3.6 years from a sustained 2–3 SD lowering.
Pair this with tight blood-pressure control (aim systolic <130 mmHg) and a healthy BMI—every incremental improvement helps. Together, LDL, BP, and BMI form the most potent triad of interventions most people can implement now and expect to see substantial benefits 20–40 years down the line.
I recently downloaded about 10 years of monthly price returns for QQQ, TQQQ, NVDA, GBTC, and a few others. Then I asked ChatGPT and Gemini (separately) to find the portfolio that maximizes an adjusted CAGR — roughly, mean return minus ½ × standard deviation².
Result: 70% NVDA, 30% GBTC (Bitcoin), and 0% QQQ or TQQQ.
Honestly, not a bad mix — especially for a small, high-risk slice of your portfolio.
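The objective described above (mean return minus half the variance of monthly returns) is easy to sketch directly. This is a minimal illustration with synthetic return series, not the actual 10-year data, and a simple grid search rather than whatever the chatbots did internally:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical monthly return series standing in for two volatile assets
r_a = rng.normal(0.04, 0.12, 120)   # high mean, high volatility
r_b = rng.normal(0.03, 0.20, 120)   # lower mean, even higher volatility
returns = np.column_stack([r_a, r_b])

def adjusted_cagr(weights, returns):
    """Adjusted growth score: mean - 0.5 * variance of portfolio returns."""
    port = returns @ weights
    return port.mean() - 0.5 * port.var()

# Grid-search long-only weights summing to 1
grid = [np.array([w, 1.0 - w]) for w in np.linspace(0, 1, 101)]
best = max(grid, key=lambda w: adjusted_cagr(w, returns))
print(f"best weights: {best[0]:.2f} / {best[1]:.2f}")
```

With real data you'd swap in the downloaded monthly returns; the ½·σ² penalty is the standard volatility drag correction that turns arithmetic mean return into an approximate geometric (CAGR-like) growth rate.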
Next, I compared TQQQ (Triple Qs) vs. QQQ using the same 10-year monthly data. The optimizer picked 100% TQQQ, which again makes sense if you’re doing this in a tax-advantaged account like a 401(k) or IRA and only with money you’re willing to take some risk on.
Then I expanded the dataset — 55 years of returns across major asset classes (S&P 500, gold, short- and long-term Treasuries, corporate bonds, real estate, etc.) — and asked for the optimal portfolio.
The winner: ~85% S&P 500, 15% gold, though 75/25 gives nearly the same return with a better Sharpe ratio.
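The Sharpe comparison between the two mixes can be sketched the same way. The return series and risk-free rate below are made-up stand-ins, not the 55-year data used above:

```python
import numpy as np

rng = np.random.default_rng(1)
stocks = rng.normal(0.009, 0.045, 660)  # hypothetical monthly S&P-like returns
gold = rng.normal(0.005, 0.050, 660)    # hypothetical monthly gold-like returns
rf = 0.003                              # assumed monthly risk-free rate

def sharpe(w_stocks):
    """Monthly Sharpe ratio of a stocks/gold mix: (mean - rf) / std."""
    port = w_stocks * stocks + (1 - w_stocks) * gold
    return (port.mean() - rf) / port.std()

for w in (0.85, 0.75):
    print(f"{int(w * 100)}/{int((100 - w * 100))} Sharpe: {sharpe(w):.3f}")
```

The general effect being claimed is that adding a lowly-correlated asset like gold trims portfolio volatility (the denominator) faster than it trims excess return (the numerator), so a slightly less stock-heavy mix can score a better Sharpe at nearly the same return.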
A few quick takeaways:
Gold → GLDM ETF is the best vehicle.
QQQ → QQQM or TQQQ are the best versions.
And if you’re feeling adventurous: 70% NVDA, 30% IBIT (Bitcoin) isn’t crazy.
For what it’s worth, I’ve been running 75% stocks / 25% gold for a while now, but I’m thinking of carving out ~10% of the stock portion for a more aggressive tilt: TQQQ (6%), NVDA (2%), IBIT (1%) — because why not?
"the portfolio that maximizes an adjusted CAGR" over the range 2015-2025? Isn't that just massively overfitting to the unique geopolitical events of 2016 to the present? 1970-2025 sounds better, but even that bakes in the USD exchange rate.
What happens if you backtest it by finding the equivalent portfolios for 1998-2008 (or even 1919-1929)?
1. I find Gemini 2.5 Pro's text very easy and smooth to read, whereas GPT5 Thinking is often too terse and has a weird writing style.
2. GPT5 thinking tends to do better with i) trick questions ii) puzzles iii) queries that involve search plus citations.
3. Gemini deep research is pretty good -- somewhat long reports, but almost always quite informative with unique insights.
4. Gemini 2.5 pro is favored in side by side comparisons (LMsys) whereas trick question benchmarks slightly favor GPT5 Thinking (livebench.ai).
5. Overall, I use both, usually simultaneously in two separate tabs, then pick and choose the better response.
If I were forced to choose one model only, that'd be GPT5 today. But the choice was Gemini 2.5 Pro when it first came out. Next week it might go back to Gemini 3.0 Pro.