Hacker News | plaidfuji's comments

It is annoying, though: when you start a new chat for each topic, you tend to have to rewrite a lot of context. I use Gemini 3, which I understand doesn't have as good a memory system as OpenAI's. Even on single-file programming tasks, after a few rounds of iteration I tend to hit its context limit (on the thinking model), either because the answers degrade or because it just throws the "oops something went wrong" error. OK, time to restart from scratch and paste in the latest iteration.

I don't understand how agentic IDEs handle this either. Or maybe it's easier for them - they just resend the entire codebase every time. But where do you cut the chat history? It feels to me like every time you re-prompt a convo, it should first summarize the existing context into bullets as its internal prompt rather than resending the entire context.
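The "summarize, then continue" idea is easy to sketch. A minimal toy version in Python - `summarize` here is a hypothetical stand-in for an actual LLM call, and token counting is faked with word counts:

```python
# Sketch of rolling context compaction: once the transcript nears the
# token budget, fold old turns into a summary and keep recent turns verbatim.

def summarize(turns):
    # Placeholder: a real implementation would prompt the model,
    # e.g. "Summarize this conversation as bullet points."
    return "summary of " + str(len(turns)) + " earlier turns"

def compact(history, budget=100, keep_recent=2):
    """Compact `history` (a list of turn strings) when the estimated
    token count exceeds ~80% of `budget`."""
    est_tokens = sum(len(t.split()) for t in history)
    if est_tokens <= 0.8 * budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent
```

The 80% threshold mirrors what the agentic tools reportedly do; everything else here is an assumption about one way it could work.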


Agentic IDEs/extensions usually continue the conversation until the context gets close to 80% full, then compact it. With both Codex and Claude Code you can actually watch that happen.

That said, I find that in practice Codex performance degrades significantly long before it reaches the point of automated compaction - and AFAIK there's no way to trigger it manually. Claude, on the other hand, has a command to force compaction, but I rarely use it because Claude is so good at managing context by itself.

As for multiple conversations, you can tell the model to update AGENTS.md (or CLAUDE.md, or whatever is in its context by default) with things it needs to remember.


Codex has `/compact`

> The “Set up billing” link kicked me out of Google AI Studio and into Google Cloud Console, and my heart sank. Every time I’ve logged into Google Cloud Console or AWS, I’ve wasted hours upon hours reading outdated documentation, gazing in despair at graphs that make no sense, going around in circles from dashboard to dashboard, and feeling a strong desire to attain freedom from this mortal coil.

100% agree


we will have this fixed soon : ) thank you for your patience - I've wanted this in AI Studio directly since the day I joined Google!

Reasoning (3 Pro) >> Flash, and I assume their overviews are generated by Flash. But I haven’t found those to be that bad, myself.


Now have it generate the articles and comments, too…


The moat will be memory.

As a regular user, it becomes increasingly frustrating to have to remind each new chat “I’m working on this problem and here’s the relevant context”.

GenAI providers will solve this, and it will make the UX much, much smoother. Then they will make it very hard to export that memory/context.

If you’re using a free tier I assume you’re not using reasoning models extensively, so you wouldn’t necessarily see how big of a benefit this could be.


Python is a pretty bad language for tabular data analysis and plotting, which seems to be the actual topic of this post. R is certainly better; hell, Tableau, Matlab, JMP, Prism, and even Excel are better in many cases. Pandas+seaborn has done a lot, but seaborn still has frustrating limits. And pandas is essentially a separate programming language.

If your data is already in a table and you're using Python, you're doing it because you want to learn Python for your next job, not because it's the best tool for your current job. The one thing Python has on all those other options is $$$: you will be far more employable than if you stick to R.

And the reason is that Python is one of the best languages for data and ML engineering, which is about 80% of what a data science job actually entails.


> And pandas is essentially a separate programming language.

I'd say dplyr/tidyverse is a lot more a separate programming language to R than pandas is to Python.


> And pandas is essentially a separate programming language.

No it isn't.


...unless your data engineering job happens on a database, in which case R's dbplyr is far better than anything Python has to offer.


I have the same experience with Gemini: it's incredibly accurate but puts in defensive code and error handling to a fault. It's pretty easy to just tell it "go easy on the defensive code" / "give me the punchy version" and it cleans things up.


Yes, the defensive code is something most models seem to struggle with. Even Claude 4.5 Sonnet, even after explicitly being prompted not to, still adds pointless null checks and fallbacks in scripting languages where a null value causes nothing worse than a logged error. I hit this particularly when writing Angelscript for Unreal. It isn't surprising: as a niche language there's a lack of training data, and the syntax is very similar to Unreal C++, which does crash to desktop when you access a null reference.
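To make the complaint concrete, here's an illustrative sketch (in Python rather than Angelscript, and the names are made up) of the pattern: the over-guarded version models tend to emit versus the punchy version that just lets a missing value raise:

```python
# Model-style defensive output: every access is guarded and failures
# are silently swallowed into None.
def get_owner_name_defensive(item):
    if item is None:
        return None
    owner = getattr(item, "owner", None)
    if owner is None:
        return None
    name = getattr(owner, "name", None)
    if name is None:
        return None
    return name

# Punchy version: in a scripting language a missing value just raises
# an error (or logs one, in Unreal's scripting layer) rather than
# crashing to desktop, so letting it fail loudly is usually fine.
def get_owner_name(item):
    return item.owner.name
```

Both behave identically on well-formed data; the defensive one just hides bugs behind `None`.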


What if the outage and phishing attack were coordinated at a higher level? There’s a scary thought.


Bezos will get to Mars at any cost!


Bezos is shooting for the Moon; Elon's the one going for Mars. In fact, that's why Elon's going for Mars - to show up Bezos' plan.


Should probably just short nvidia


"Just short Nvidia" is not simple. Even if you believe it is overvalued, and you are correct, a short is a specific bet that the market will realize that fact within a window you can afford to stay in the trade. There are very significant risks in short selling, and famously, the market can stay irrational longer than you can remain solvent.


There is a wide space where LLMs and their offshoots make enormous productivity gains, while looking nothing like actual artificial intelligence (which has been rebranded AGI), and Nvidia turns out to have a justified valuation etc.


It's been three years now - where is it? Everyone on HN is now a 10x developer; where are all the new startups making $$$? Employees are 10x more productive; where are the 10x revenues? Or even 2x?

Why is growth over the last 3 years completely flat once you remove the proverbial AI pickaxes sellers?

What if all the slop generated by LLMs counterbalances any kind of productivity boost? 10x more bad code, 10x more spam emails, 10x more bots.


You can generally buy options only a few years out. A few years is decidedly shorter than the lifetime of everyone reading this thread.


“Markets can remain irrational longer than you can remain solvent.”


That's probably a good idea: either the AI bubble explodes or competitors catch up.


I’m also curious about your last question. Cost of goods sold would not fall into R&D or sales as far as I know.

So curious, in fact, that I asked Gemini to reconstruct their income statement from the info in this article :)

There seems to be an assumption that the 20% payment to MS is the cost of compute for inference. I would bet that’s at a significant discount - but who knows how much…

Line Item | Amount (USD) | Calculation / Note

Revenue | $4.3B | Given.

Cost of Revenue (COGS) | ($0.86B) | Assumed to be the 20% of revenue paid to Microsoft ($4.3B * 0.20) for compute/cloud services to run inference.

Gross Profit | $3.44B | Revenue - Cost of Revenue. This 80% gross margin is strong, typical of a software-like business.

Operating Expenses:

Research & Development | ($6.7B) | Given. The largest expense, focused on training new models.

Sales & Ads | ($2.0B) | Given. Reflects an aggressive push for customer acquisition.

Stock-Based Compensation | ($2.5B) | Given. A non-cash expense for employee equity.

General & Administrative | ($0.04B) | Implied figure to balance the reported operating loss.

Total Operating Expenses | ($11.24B) | Sum of all operating expenses.

Operating Loss | ($7.8B) | Confirmed. Gross Profit - Total Operating Expenses.

Other (Non-Operating) Income / Expenses | ($5.7B) | Calculated as Net Loss - Operating Loss. Primarily the non-cash loss from the "remeasurement of convertible interest rights."

Net Loss | ($13.5B) | Given. The final "bottom line" loss.
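The arithmetic in the reconstruction does hang together. A quick sanity check (figures in $bn; note the 20%-to-Microsoft COGS line is still just an assumption):

```python
# Sanity-check the reconstructed income statement (figures in $bn).
revenue = 4.3
cogs = 0.20 * revenue            # assumed: the 20% of revenue paid to Microsoft
gross_profit = revenue - cogs    # 3.44, an 80% gross margin

opex = 6.7 + 2.0 + 2.5 + 0.04    # R&D + sales/ads + SBC + implied G&A
operating_loss = gross_profit - opex      # -7.8, matching the report
net_loss = operating_loss - 5.7           # non-operating items -> -13.5

assert round(gross_profit, 2) == 3.44
assert round(opex, 2) == 11.24
assert round(operating_loss, 1) == -7.8
assert round(net_loss, 1) == -13.5
```

So the $0.04B G&A figure really is just the plug that makes the reported operating loss balance.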


Thanks for doing the prompting work here.

One thing I read - with $6.7bn R&D on $3.4bn in Gross Profit, you need a model to be viable for only one year to pay back.

Another thing, with only $40mm / 5 months in G&A, basically the entire company is research, likely with senior execs nearly completely equity comped. That’s an amazingly lean admin for this much spend.

On sales & ads - I too find this number surprisingly high. I guess they’re either very efficient (no need to pitch me, I already pay), or they’re so inefficient they don’t hit up channels I’m adjacent to. The team over there is excellent, so my priors would be on the first.

As doom-saying journalists pick this over, it's good to keep a few numbers in mind:

Growth is high - June was up over $1bn in revenue by all accounts, possibly higher. If you believe customers are sticky (i.e. you can stop sales and not lose customers), which I generally do, then if they keep R&D at this pace, a forward-looking annual cashflow looks like:

$12bn in revs, $9.6bn in gross operating margin, $13.5bn in R&D, so net cash impact of -$4bn.
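Spelled out, using the assumptions above (all figures $bn, ~80% gross margin carried forward, R&D at roughly doubled pace):

```python
# Forward-looking annual cash sketch from the assumptions above ($bn).
revs = 12.0
gross = 0.8 * revs      # ~80% gross margin carried forward -> 9.6
rd = 13.5               # R&D held at (roughly doubled) pace
net_cash = gross - rd   # ~ -3.9, i.e. roughly -$4bn
```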

If you think they can grow to 1.5bn customers and won’t open up new paying lines of business then you’d have $20-25bn in revs -> maybe $4bn in sales -> +2-3bn in free cashflow, with the ability to take a breather and make that +15-18bn in free cashflow as needed. A lot of that R&D spend is on training which is probably more liquid than employees, as well.

Upshot - they’re going to keep spending more cash as they get it. I would expect all these numbers to double in a year. The race is still on, and with a PE investment hat on, these guys still look really good to me - the first iconic consumer tech brand in many years, an amazing team, crazy fast growth, an ability to throw off billions in cash when they want to, and a shot at AGI/ASI. What’s not to like?

