
This is not a valid argument. TPS (tokens per second) is essentially QoS and can be adjusted; allocating more GPUs will result in higher speed.

There are sequential dependencies, so you can't just arbitrarily increase speed by parallelizing over more GPUs. Every token depends on all previous tokens, every layer depends on all previous layers. You can arbitrarily slow a model down by using fewer, slower GPUs (or none at all), though.
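To make the sequential dependency concrete, here is a minimal autoregressive decoding loop. The "model" is a toy stand-in function, purely illustrative; the point is that step i cannot start until step i-1 has finished:

```python
# Minimal autoregressive decoding loop: each token depends on all
# previous ones, so generation is inherently sequential.

def model(ctx):
    # Toy stand-in for a forward pass over the full context.
    return (sum(ctx) + 1) % 10

def generate(prompt, n):
    ctx = list(prompt)
    for _ in range(n):
        ctx.append(model(ctx))  # step i needs the output of step i-1
    return ctx[len(prompt):]

print(generate([3, 1, 4], 4))  # → [9, 8, 6, 2]
```

No amount of extra hardware removes the loop-carried dependency here; more GPUs only speed up each individual step.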

Partially true: you can predict multiple tokens and then confirm them, which typically gives a 2-3x speedup in practice.

(Confirming a batch of drafted tokens is cheaper than generating them one at a time.)

Many model architectures are specifically designed to make this efficient.

---

Separately, your statement only holds for the same generation of hardware, the same interconnects, and the same quantization.


With speculative decoding, however, you can use additional models to speed up generation.

Yes, because speculation has NEVER bitten us in the ass before, right? Coughs in Spectre

Speculative decoding is just running more hardware to get a faster prediction. Essentially, it's setting more money on fire if you're being billed per token.


Actually, Opus might achieve a lower cost with the help of TPUs.

The memory wall is an eternal problem when performing computations on the CPU.
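A back-of-envelope roofline estimate shows why low-arithmetic-intensity kernels hit the memory wall. The peak-FLOP and bandwidth numbers below are assumptions for illustration, not measurements of any particular machine:

```python
# Roofline estimate for SAXPY (y = a*x + y) in float64 on a hypothetical CPU.

def arithmetic_intensity_saxpy():
    # Per element: 2 FLOPs (multiply + add),
    # 3 * 8 bytes of traffic (load x, load y, store y).
    flops = 2
    bytes_moved = 3 * 8
    return flops / bytes_moved  # FLOPs per byte, ~0.083

def attainable_gflops(intensity, peak_gflops=100.0, bandwidth_gbs=25.0):
    # Roofline model: throughput is capped by whichever is lower,
    # peak compute or memory bandwidth times arithmetic intensity.
    return min(peak_gflops, bandwidth_gbs * intensity)

ai = arithmetic_intensity_saxpy()
print(attainable_gflops(ai))  # ~2.1 GFLOP/s, far below the assumed 100 GFLOP/s peak
```

At 0.083 FLOPs/byte the kernel is memory-bound: the CPU spends most of its time waiting on DRAM, and a faster core would not help.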


Other filesystems are just as susceptible to data corruption from memory errors. This is not a weakness unique to ZFS.


While this guide covers roughly 80% of the material, it remains a high-level overview that lacks depth. I can't confirm whether it was LLM-generated, but the content is undeniably superficial. Real-world production environments are far more complex; for instance, despite other users mentioning hugepages and the TLB, there is no discussion of critical issues like TLB shootdowns.


The reason is that you are not a Computer Science PhD. But soft skills (such as storytelling, a sense of ownership, etc.) can still be passed down.


It's a bit ironic that the "soft" skills are becoming the hard skills nowadays. A lot of the AI buzz these days is around PMs, data scientists, etc., who now have the tools to code "well enough" and are attractive due to their people skills and/or other skill sets.

Not to say this is an objective analysis; I'm just observing the subjective trends.


Same for Chinese students.


You can buy a used AX1800 with OpenWrt in China for around five or six euros.... XD


First, you need to identify and quantify your skills and strengths.


There are more than enough GitHub repositories that can integrate GPT into smart speakers.

