Hacker News | Aissen's comments


It might be a bit CPU and RAM starved… Which in theory should be OK, but in practice you'll find production workloads that struggle because of this. Just make sure whatever you want to run on this is indeed extremely GPU-bound, or you might have bad surprises later.

Thanks for the context. Since I'm not interested in betting, I had not clicked on the grey on white About link at the bottom, which says:

> All the trains, delays, and data on this app are real. But the money isn't – because for that I'd need to move to Malta. Or Cyprus. Or Schleswig-Holstein.


It would be fun if Google lost its months of edge in the LLM value race because it alienated early adopters paying $250/month by using a 0-strike system with no customer support.


Honestly, I think it was probably a few users abusing the system like crazy. I've been building with Gemini CLI the past few days and had an increasing amount of issues getting a request through.

The GH issue trackers were full of people bitching and moaning about it. I think it might be a worse thing to alienate your users who use your product in the intended way - through Google's tooling.

But I agree the 0 strike rule seems really excessive.

It is also a possible scenario that a single individual sets up 10+ AI Pro subscriptions to blast through tokens like crazy - not sure how the economics of the daily allowances compare to the API pricing here.


> Honestly, I think it was probably a few users abusing the system like crazy.

It's not unusual that 10% of users use up 80% of the capacity; first saw this when home internet started getting ubiquitous.

By dropping the problematic 10%, the remaining 90% are:

a. Much happier, and

b. Much more profitable for the provider.

Resulting in a sustainable service for those who don't abuse the hell out of the ToS/FUP.
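To put rough numbers on that (every figure below is made up purely for illustration, not real Google economics):

```python
# Illustrative sketch: 100 users share a fixed capacity, and the heaviest
# 10% of users consume 80% of it.
total_capacity = 1000          # arbitrary token-units per day
users = 100
heavy_users = 10               # the top 10%
heavy_share = 0.80

heavy_usage = total_capacity * heavy_share / heavy_users                  # 80 units each
light_usage = total_capacity * (1 - heavy_share) / (users - heavy_users)  # ~2.2 units each

# Drop the heavy users: the same capacity now covers only the light 90%.
capacity_per_light_user_before = total_capacity / users                   # 10 units
capacity_per_light_user_after = total_capacity / (users - heavy_users)    # ~11.1 units

print(heavy_usage / light_usage)   # a heavy user consumes ~36x a light one
```

With these assumed numbers, each heavy user costs the provider as much as roughly 36 ordinary users, which is why cutting them can make everyone else both happier and cheaper to serve.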


> 880mm^2 die

That's a lot of surface, isn't it? As big as an M1 Ultra (2x M1 Max at 432mm² on TSMC N5P), a bit bigger than an A100 (820mm² on TSMC N7) or H100 (814mm² on TSMC N5).

> The larger the die size, the lower the yield.

I wonder if that applies? What's the big deal if a few parameters have a few bit flips?
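For scale, the textbook Poisson yield model shows how quickly full-die yield falls with area; the defect density D0 below is an assumed, order-of-magnitude figure, not a published number for any real TSMC node. Whether a defective die is still usable for ML is the separate question raised above.

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D0 * A)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * area_cm2)

D0 = 0.1  # assumed defects per cm^2, illustrative only

small = poisson_yield(100, D0)   # ~0.90 for a 100 mm^2 die
big = poisson_yield(880, D0)     # ~0.41 for an 880 mm^2 die
```

Under these assumptions fewer than half of 880mm² dies come out fully defect-free, which is why large dies usually rely on redundancy or binning rather than perfection.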


> I wonder if that applies? What's the big deal if a few parameters have a few bit flips?

We get into the sci-fi territory where a machine achieves sentience because it has all the right manufacturing defects.

Reminds me of this https://en.wikipedia.org/wiki/A_Logic_Named_Joe


Also see Adrian Thompson's Xilinx 6200 FPGA, programmed by a genetic algorithm that worked but exploited nuances unique to that specific physical chip, meaning the software couldn't be copied to another chip. https://news.ycombinator.com/item?id=43152877


I love that story.


2000s movie line territory:

> There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols.


Spoiler is in the conclusion:

> Yes, it is absolutely key to build your app as ARM, not to rely on Windows ARM emulation.


Is this actually surprising? Once you use stuff like vectorization, you want to squeeze as much performance as possible out of a system, and if you're not compiling natively for it, you leave much of that performance on the table.

Using AVX2 and using an emulator have contradictory goals. Of course there can be a better emulator, or hardware that actually matches the emulated design (both Apple and Microsoft exploit the similar register structure between ARM64 and x86_64). However, that means increased complexity and reduced reliability / predictability.
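As a toy illustration of why the goals conflict: a single 256-bit AVX2 packed add has to be decomposed into at least two 128-bit operations on a NEON-width target, before counting any translation overhead. A sketch, purely illustrative:

```python
# Toy model: one 256-bit AVX2 packed add (eight 32-bit lanes) emulated
# with 128-bit halves, the widest a NEON-style target offers.

def add_epi32_256(a: list[int], b: list[int]) -> list[int]:
    """Native view: one 256-bit op adds eight 32-bit lanes at once."""
    assert len(a) == len(b) == 8
    return [(x + y) & 0xFFFFFFFF for x, y in zip(a, b)]

def add_epi32_256_emulated(a: list[int], b: list[int]) -> list[int]:
    """Emulated view: split into two 128-bit halves, so at least two target ops."""
    lo = [(x + y) & 0xFFFFFFFF for x, y in zip(a[:4], b[:4])]
    hi = [(x + y) & 0xFFFFFFFF for x, y in zip(a[4:], b[4:])]
    return lo + hi

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10] * 8
assert add_epi32_256(a, b) == add_epi32_256_emulated(a, b)
```

The results match, but the emulated path issues twice the arithmetic, and real translators also pay for decoding, flag handling, and memory-ordering differences on top of that.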


Author here - have to say, thanks for reading all the way to the end, you don't always see people do that ;)

I put a spoiler at the top too, to avoid making people feel they have to read the whole thing. The real meat is that chart, which I think is quite an amazing result.

You're right re building. We're a compiler vendor, so we have a natural interest in what people should be targeting. But even for us the results here were not what we expected ahead of time.


Having written an emulator myself, I found the conclusion a bit less surprising. It's also probably not definitive, as it might depend on the specific hardware (and future emulator optimizations); you even say in your blog that the hardware you used is not the hardware Microsoft targeted.


Is Chrome for Windows compiled for ARM too, or does it run under Windows emulation?

The reason I ask is that I believe Windows Chrome is (like many Windows binaries) compiled with many advanced CPU features disabled (e.g. AVX-512) because they're not available on older PCs. Is that true?
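For context, shipping binaries usually handle this with runtime dispatch rather than raising the compile-time baseline: probe the CPU once at startup, then pick the fastest implementation. A minimal sketch of the pattern; the feature probe is hard-coded here so it runs anywhere, whereas real code would query CPUID (e.g. GCC's `__builtin_cpu_supports("avx512f")`):

```python
# Sketch of the runtime-dispatch pattern, with a faked feature probe.

def sum_baseline(xs):
    """Always-safe path: what a conservative-baseline build would run."""
    return sum(xs)

def sum_fast(xs):
    """Stand-in for an AVX-512 path, selected only when the CPU supports it."""
    return sum(xs)  # same result, hypothetically faster

def detect_avx512() -> bool:
    # Hard-coded for portability; real code would check CPUID here.
    return False

# One-time selection at startup; callers use best_sum without caring which it is.
best_sum = sum_fast if detect_avx512() else sum_baseline
```

This is also why a generic x86 binary can leave newer instructions unused on hot paths that weren't given a dispatched variant, even on CPUs that support them.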


If anyone from Collabora Office is looking, there is a weird paragraph with syntax-colored HTML in https://www.collaboraonline.com/blog/collabora-online-now-av... "*A Note on Early Releases"


This does not surprise me from the company that accidentally deleted the widevine L1 certificate on my phone (that never had any third party OS) during an update and could not restore it, nor would it replace the motherboard (for which it claimed it was the only possible fix).


What does Vercel get out of Next.js? Just default integration of overpriced cloud infra.


Vercel was founded (or co-founded?) by the author of Next.js. That's a very different story. Vercel is like what some hypothetical Astro Cloud could have become if it had grown out of Astro.


"I don't care, it's a stupid question."


It gets to be THE platform on which to deploy frontends for many headless enterprise CMS and commerce stores that, due to partnerships with Vercel, only have Next.js based SDKs.

Additionally, I wish more serverless cloud vendors would offer a free tier like Vercel, including support for compiled languages on the backend (C, C++, Rust, Go) without asking me for a credit card upfront.


It's a bit sad to see hardware manufacturers changing the pixel layout because we weren't able to adapt modern software to do sub-pixel font rendering that works with different layouts out of the box.


On the other hand: it's nice to see pixel density steadily increasing, which makes this problem go away entirely.

I used an LG C2 42" as a monitor for a few years. The color fringing was particularly bad for me because I like yellow text and LG uses RWBG. 4K 42" and 1440p 27" are about 110 DPI. This is not enough. 4K 27" is about 160 DPI. That is enough. We've already pushed past needing to care about subpixel layouts if you properly weight pixel density in your selection.
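Those DPI figures check out: diagonal pixel count divided by diagonal inches.

```python
import math

def dpi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch along the screen diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(dpi(3840, 2160, 42)))  # 105  (4K 42", "about 110")
print(round(dpi(2560, 1440, 27)))  # 109  (1440p 27", "about 110")
print(round(dpi(3840, 2160, 27)))  # 163  (4K 27", "about 160")
```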

