Hacker News: comandillos's comments

The biggest mistake is that people trusted a company that, in reality, isn't that different from Apple. Everyone claimed Android was the true open-source alternative to iOS, when only AOSP ever was.

Yeah, agreed. I really don't get why Google or Apple have a good reputation at all.

Google (before the sell-off) promoted a morality, 'don't be evil', that stood in stark contrast to other tech firms. The ads they carried were minimal, and their "free" stuff was top of the line, better than what people were getting from paid services.

Apple (under Jobs) sold themselves as counter-culture; they used pop stars (unironically) and design to sell the idea that if you were your own person, or followed fashion, then you bought Apple.

I think the goodwill from those days still provides the foundation of their cultural position now, even as they chip away at it.

OpenAI looked like it could follow Google's early model, until it didn't.


The writing was on the wall for "don't be evil" when Google started the process of acquiring the much reviled DoubleClick back in 2007, nearly 20 years ago at this point. That's longer than most people reading this have been in the tech industry; a generation has never seen Google be anything other than increasingly extractive and monopolistic.

They built products people like, and Apple especially has a good reputation for building reliable, long-lasting, easy-to-use stuff for most people, leading to heavy user adoption. But heavy user adoption without proper regulation and company ethics leads to, well, monopolistic practices.

I mean, Apple kind of used that position to build a good reputation. Their whole thing is/was how secure their devices were: human review of every app that went through the App Store, with a clear intents file (a file that describes exactly WHY an app needs permission for Bluetooth, etc.), and a Secure Enclave that kept even the FBI out (while Apple refused to give them a backdoor). Hackers and tinkerers will find a lot of these measures an annoyance and authoritarian control, but a lot of people just want their phone to be a product, and not to be the product themselves.

These kinds of things just make me want to use Graphene even more, or literally any platform that isn't one of the monopoly ones. Somehow I think AI and vibe coding, even if this sounds like an unpopular opinion, will let people build free ecosystems and actually usable devices that don't rely on the usual providers.

That reinforces my use of HarmonyOS (nothing against Graphene, btw). It's impressive how difficult it is to actually use any platform apart from the established ones these days.

Same, and I also read Netherlands instead of Neanderthals.

Yeah, me too!

https://en.wikipedia.org/wiki/De_Rat%2C_IJlst

>De Rat (English: The Rat) is a smock mill in IJlst, Friesland, Netherlands, which was originally built in the seventeenth century at Zaanstreek, North Holland. In 1828 it was moved to IJlst, where it worked using wind power until 1920 and then by electric motor until 1950. The mill was bought by the town of IJlst in 1956 and restored in the mid-1960s. Further restoration in the mid-1970s returned the mill to full working order. De Rat is working for trade and is used as a training mill. The mill is listed as a Rijksmonument (No. 39880).[1]


Such a pity; remote dev containers are critical for me. I guess some SSH tunneling could help with it...


Umm… zed supports remote dev over ssh… what’s your concern?


And Zed even supports Dev Container


It seems not both at the same time; I just tried to open a dev container over SSH with 1.0 and it didn't work.


I don't know why everyone praises GPT 5.4 when Opus 4.5 and onwards are way better for me on complex stuff, i.e. reverse engineering, implementing low-level protocols, interpreting datasheets and specs... I've been using Codex for a while, and although the app itself is great, the model sometimes takes approaches that don't make any sense.

GLM is really good for the size and price. I've been using Big Pickle on OpenCode and it's pretty impressive what it can achieve for being free.


I've been using Qwen3.5-35B-A3B for a bit via OpenCode and oMLX on an M5 Max with 128 GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and in how well it handles the agentic workflow.


This is about the newly released Qwen3.6. Just wanted to make sure you caught that.


Quite scared by the fact that the original issue pointing out the actual root cause has been 'Closed as not planned' by Anthropic.

https://github.com/anthropics/claude-code/issues/46829


The response doesn't even make sense and appears to be written by AI.

> The March 6 change makes Claude Code cheaper, not more expensive. 1h TTL for every request could cost more, not less

Feels very AI.

> Restore 1h as the default / expose as configurable? 1h everywhere would increase total cost given the request mix, so we're not planning a global toggle.

They won't ship a toggle because it would increase costs for some unknown percentage of requests?


Sounds like a decision I would make when memory is expensive and you want to get rid of the very long (in time) tail of waiting 1h to evict cache when a session has stopped.

There must be a better way to do this. The customer-facing problem is the pricing difference: if they made cache writes the same price as regular writes, that would solve the whole thing. If you really want to push it, apply that pricing only to requests where the number of cache hits is > 0 (to stop people setting the flag without intending to use it), and the whole issue is solved.
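The trade-off the thread is arguing about can be put in rough numbers. A back-of-envelope sketch, assuming Anthropic's published cache multipliers at the time of writing (5m cache write = 1.25x base input, 1h cache write = 2x, cache hit = 0.1x) and a made-up request pattern:

```python
# Normalized cost of one cached input token under the two TTL options.
# Multipliers are Anthropic's published ones (an assumption of this sketch);
# the session shape below is invented for illustration.

BASE = 1.0
WRITE_5M, WRITE_1H, HIT = 1.25 * BASE, 2.0 * BASE, 0.1 * BASE

def session_cost(gaps_minutes, ttl_minutes, write_mult):
    """Cost per cached token for requests spaced by `gaps_minutes`."""
    cost = write_mult                  # first request always writes the cache
    for gap in gaps_minutes:
        if gap <= ttl_minutes:
            cost += HIT                # cache still warm: cheap read
        else:
            cost += write_mult         # cache expired: pay the write again
    return cost

# A bursty session: quick back-and-forth, a 20-minute pause, then more work.
gaps = [1, 2, 3, 20, 1, 2]

print(session_cost(gaps, 5, WRITE_5M))    # 5m TTL: the pause forces a rewrite
print(session_cost(gaps, 60, WRITE_1H))   # 1h TTL: no rewrite, but writes cost 2x
```

For this pausey session the 1h TTL comes out cheaper; for a session with no long gaps the 5m TTL wins, which is presumably the "given the request mix" argument in Anthropic's response.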


Memory is expensive? If reads are as rare as they claim you can just stash the KV-cache on spinning disk.


Aren’t those latency sensitive though?


When a casino is making a lot of money from gamblers, it doesn't care about its customers losing money; the machines are rigged against you.

Anthropic sells you 'knowledge' in the form of 'tokens', and you spend money rolling the dice, spinning the roulette wheel, and inserting coins for another try. Later they add limits and dumb down the model (their gambling machine), and you keep paying for wrong answers.

Once you hit your limit or Anthropic changes the usage limits, they don't care and halt your usage for a while.

If you don't like any of that, just save your money and use local LLMs instead.


Why scared? Like, if their software gets bad, we stop using it.


Maybe scared wasn't the best word... but we can't deny Opus is a great, if not the greatest, model at coding, and Anthropic is the only one serving it at a reasonable price through their subscription model.


Sounds like an addiction to me


I mean, this is blatantly false. Codex just rolled out a $100-a-month plan with higher usage limits than Claude's, and GPT 5.4 is more capable than Opus 4.6, at least for the systems work I do.

And if you can't stomach OpenAI, GLM 5.1 is actually quite competent. About Opus 4.5 / GPT 5.2 quality.


How did you code before the era of LLMs?


In my case, the T&Cs on using input/output are so bad with almost all the other providers that I'm forbidden from using them for work (and it doesn't make sense to pay for a separate sub when I basically have two at this point: one direct with Anthropic, one via GitHub Copilot).


This is still far from viable for actually useful models, like the bigger MoE ones with much larger context windows. I mean, the technology is very promising, just like Cerebras, but we need to see whether they can keep this up as models evolve over the next few years. Extremely interesting nevertheless.


Keep in mind, though, that if you can run a model at 100-1000x the speed, then even if the model is less capable, the sheer speed may let you do more interesting things (like deep search explorations with LLM-guided heuristics).
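As a toy illustration of that idea: best-first search where a scoring function plays the role of the LLM heuristic. The `score` stub below is a placeholder; with fast enough inference you could call a model to rank candidates instead.

```python
# Best-first search driven by a pluggable scoring heuristic. With 100-1000x
# faster inference, `score` could be an LLM call ranking each candidate.
import heapq

def best_first_search(start, neighbors, is_goal, score, max_expansions=10_000):
    """Expand the highest-scoring frontier node first; return a path or None."""
    frontier = [(-score(start), start, [start])]
    seen = {start}
    while frontier and max_expansions > 0:
        max_expansions -= 1
        _, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt, path + [nxt]))
    return None

# Toy usage: reach a target integer via +1 / *2 steps; the stub heuristic
# rewards closeness to the target, as a model might rank partial solutions.
target = 37
path = best_first_search(
    1,
    neighbors=lambda n: [n + 1, n * 2] if n < 2 * target else [],
    is_goal=lambda n: n == target,
    score=lambda n: -abs(target - n),
)
print(path[-1])  # 37
```

The search itself is standard; the point is that the heuristic is just a function call, so a fast model can sit in that slot and guide the exploration.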


Just another piece in this Jenga tower called C++. If you want reflection, maybe just use a language that was designed with reflection support from the beginning.
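For contrast, a sketch of what field enumeration looks like in a language that shipped with runtime reflection (Python here as a stand-in; C++26's static reflection proposal, P2996, targets the compile-time analogue of this):

```python
# Enumerating an object's fields by name, no codegen or macros required.
import dataclasses

@dataclasses.dataclass
class Point:
    x: int
    y: int

p = Point(3, 4)
fields = {f.name: getattr(p, f.name) for f in dataclasses.fields(p)}
print(fields)  # {'x': 3, 'y': 4}
```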

