Hacker News | rahulyc's comments

All the websites currently blocking Claude Code or other AI agents are fighting a losing battle. Computer use is in its early stages, and the thing preventing mass adoption seems to be the number of tokens it takes. Agents can fumble through 10 CLI commands that don't work before finding the right one, and we barely notice. Other visual agents (browser use / computer use, etc.) eventually fumble onto the right thing too, but we don't have the patience to wait 20 minutes for a button click. As tokens get cheaper and faster, we'll probably get models that can use a UI just as natively as a CLI.

Tokens cheaper? I don't think that's the case... VC-funded tokens were there to build a user base, and token prices will go up as providers eventually switch from growth to profitability.

I wish I could place a lot of money on the opposite side of this bet.

I don't think many realize how good the cheap, alternative models are becoming. I prefer SOTA models for key work, but I can also spend 10X as many tokens on an open model hosted by a non-VC-subsidized provider (one selling at a profit) for tasks that can tolerate slightly less quality.

The situation is only getting better as models improve and data centers get built out.


What open source model and what non-subsidized provider specifically?

GLM 4.7 Flash is $0.07/1M input tokens and $0.40/1M output tokens on AWS Bedrock us-east-1. That's less than 1/10 the price of Haiku 4.5.
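A quick sanity check on that ratio. The GLM 4.7 Flash rates are from the comment above; the Haiku 4.5 rates used here ($1.00/1M input, $5.00/1M output) are an assumption for illustration only:

```python
# Cost of a workload with 1M input and 1M output tokens.
# GLM 4.7 Flash rates come from the comment above; the Haiku 4.5
# rates are an ASSUMPTION used only to illustrate the comparison.
glm_flash = 0.07 + 0.40   # $0.47 per 1M in + 1M out
haiku = 1.00 + 5.00       # $6.00 under the assumed rates

print(f"ratio: {haiku / glm_flash:.1f}x")  # ratio: 12.8x
```

Under these assumed rates the gap is closer to 12x than 10x, so "less than 1/10 the price" holds.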

Bedrock isn't the cheapest either, although I'm fairly sure it isn't being VC subsidized.

There are definitely cheap tokens out there. The big gotcha is "for tasks that can tolerate slightly less quality"


Yes, but how cheap is it to run four at the same time? It's tough to run one good model locally, and running four at once, which I commonly do with Claude and Codex, just doesn't seem to be happening anytime soon.

I'm referring to hosted models such as via OpenRouter or from the model providers' own services.

I think everyone claiming that inference is getting more expensive is unaware that there are more LLM providers than Google, Anthropic, and OpenAI.


Fair - there are bets both ways, and I wouldn't consider it a certainty. The drive to generate revenue from this AI build-out is going to be real and manifold.

It will take a few years until scheduled data center construction finishes; together with software optimizations that may emerge in the meantime, that could cause a significant decrease in token prices.

And the lethal trifecta, though I suppose that applies to all agents as of now anyhow. Every AI provider has major warnings about letting AI access PII in the browser.

> the thing preventing mass-adoption seems to be the number of tokens it takes.

Try the exorbitant expense and ballooning waste of generated electricity and usable water.


They don’t need to be 100% effective; they just need to make you afraid enough of being banned not to bother trying.

How do they know that the "you" accessing the site is the same "you" they previously banned?

Face-scanning? Iris patterns?


You used your credit card to buy whatever service or product they sell.

I hate to break it to you, but it is really easy to get anonymous Visa/Mastercard cards.


That link does not in any way support your bogus claim.

Nobody can block the actual LLM providers; they use spoofed requests to scan the web for content, sometimes even using residential proxies.

Sure they can; proof of work seems to be effective. Anubis has become pretty popular.
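For context, proof-of-work challenges force each visitor's browser to burn CPU before a page loads, which is cheap for one human but costly for a crawler making millions of requests. A minimal sketch of the generic hash-based scheme (this is the general idea, not Anubis's actual implementation; the difficulty encoding and challenge format here are assumptions):

```python
import hashlib

def solve(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce so that sha256(challenge + nonce) starts
    with `difficulty` leading hex zeros. The client does this work."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """The server checks a submitted nonce with a single cheap hash."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: solving takes on the order of 16^difficulty hash attempts, while verifying takes exactly one.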

The new SaaS is subagent as a service?


Indeed! There is no reason why tooling for AI agents shouldn't use AI when tooling for humans is shifting towards AI.


Yes, but also you'll never have any early evidence of the Foom until the Foom itself happens.


If only General Relativity had such an ironclad defense of being as unfalsifiable as Foom Hypothesis is. We could’ve avoided all of the quantum physics nonsense.



It doesn't mean it's unfalsifiable: it's a prediction about the future, so you can falsify it once there's a bound on when it is going to happen. It just means there's little to no warning. I think it's a significant risk of AI progress that improvement speed could exceed the speed of any warning about threats from that improvement.


First, I built the software using my hands to do my bidding...

Now, the software is using my hands to do its bidding?


I think your argument isn't exactly right.

You can imagine a future world where producing real goods and services is ~free (AI compute infinite etc.)

In this world, the entire economy becomes ~advertising only, since that's the only way to charge people anything at all instead of giving everything away for free.


If your revenue model is predicated on Star Trek-style communism, it's maybe not a very realistic model. I really don't think advertising will be a very big thing if producing things is essentially free, since it would be pointless.


I was working on something similar, mostly to create a browser agent, and LLMs are really good at writing Playwright scripts compared to vanilla JS.


I didn't realize the setup script had to be configured in the UI, over in the Environment tab. I assumed it would read something like setup.sh from the codebase.

The docs could make that clearer.


desertcart | Dubai, UAE | Full-time | ONSITE | $70k-$120k | https://www.desertcart.ae/careers

We're looking to build a team of experienced software developers to help us bring new products to market in the cross-border e-commerce space. Our stack is based mainly on a Ruby backend, and we use a number of other technologies at a smaller scale. We deal directly with logistics, e-commerce, search, and web crawling. There are tons of interesting opportunities in the space, and we love to move fast. Also happy to sponsor visas / help with relocation.

Email me if you're interested, or want to chat further: rahul [at] [companyname].ae or apply at link above.

Open Positions:
- Senior Software Engineer (Ruby/JS/React experience is a plus)
- UI/UX Designer
- Product Manager / Product Lead (experience in e-commerce/logistics is a plus)
- Frontend Developer (React Native experience is a plus)


desertcart | Dubai, UAE | Full-time | $80k-$120k |

Senior Software Engineer (Ruby/JS/React experience is a plus)

We're looking to build a team of experienced software developers to help us bring new products to market in the cross-border e-commerce space. Our stack is based mainly on a Ruby backend, and we use a number of other technologies at a smaller scale. We deal directly with logistics, e-commerce, and search. There are tons of interesting opportunities in this space, and we love to move fast.

Email me if you're interested, or want to chat further: rahul [at] [companyname].ae

