Hacker News | ted_dunning's comments

That sounds right, but it can be spectacularly wrong, because it presupposes that you can debug what the AI gets very confidently wrong.

There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and sophisticated knowledge of how things break.


Indeed, even if it were possible for someone to create almost any program just by directing a team of AI agents, when something does not work one needs the ability to zoom in through the abstraction levels and understand exactly the program that is being executed, so knowing only how to write prompts becomes insufficient.

This is the same with compilers. Most of the time a programmer needs to know only the high-level language used for writing the program. Nevertheless, when there is a subtle bug, or when the desired performance cannot be reached, a programmer who also understands the machine language of the processor has a great advantage: he can solve the bug or the performance problem, which without such knowledge would take much more time, or might never be solved.


I don't think compilers are a good example. The economics of software development won out a long time ago. For example, in gamedev, with its well-known soft real-time requirements, people (mostly) stopped doing that machine-code dance many hardware generations ago. The same happened with memory optimizations: people measure memory in GB now, not in KB =)

I am sure programmers cherish every case where they can do micro-optimization, but in retrospect the high-level cuts are what made the system fit the perf or memory budget.


Gamedev is a good example, actually. True, handwritten assembly has gone out of style. But knowing how caches work, and how to lay out data to improve performance, is still important. And stuff like vector intrinsics also gets used.

1) Luckily, nowadays compiler bugs surface very rarely, as the average programmer does not have the ability to solve such issues.

2) Unfortunately, LLMs, by their very nature (not having a model of what they do), are prone to introducing subtle bugs, i.e. it is like programming in a high-level language whose compiler likes to wing it.


As stated in the abstract, the anomalies occur more within a window around a nuclear event.

This precise point has been challenged, FWIW. See https://arxiv.org/pdf/2601.21946.

32˚N 80˚W altitude 1000 miles

That is certainly the myth that drives this.

There is also a fair bit of demographics at play. Many of the people writing these little applications grew up and imprinted before open source was much of a thing.


Sorta kinda.

TLDR: historical brine production and modern wetlands restoration.

https://en.wikipedia.org/wiki/San_Francisco_Bay_Salt_Ponds


That is merely medieval times.

In ancient times, floats were all 60 bits and there was no single precision.

See page 3-15 of this https://caltss.computerhistory.org/archive/6400-cdc.pdf


I see their 60-bit float has the same size exponent (11 bits) as today's doubles. Only the mantissa was smaller, 48 bits instead of 52.

That written document is prehistoric.

By definition, a document that is written is historic, not prehistoric.

Prehistoric information could be preserved by an oral tradition, until it is recorded in some documents (like the Oral Histories at the Computer History Museum site).


Julia has full IEEE 754 rounding mode support.

And none of that improves the throughput of clinical trials. It just decreases the cost of coming up with things to put into trials.

Actually, this AI compute is not very useful for physics, protein folding, or many other high-performance computing workloads.

The problem is that the connectivity required for much of AI is very different from that required for classic HPC (more emphasis on bandwidth, less on super-low-latency, small-payload remote memory operations), and the numeric emphasis is very different as well (lots of mixed precision and lots of ridiculously small formats like fp8, versus almost all fp64 with some fp32).

The result is that essentially no AI computers reach the high end of the TOP500.

The converse is also true: classic frontier-scale supercomputers don't make the most cost-effective AI training platforms, because they spend a lot of their budget on making HPC programs fast.


AlphaFold (protein folding) was trained on Google's TPUs, which are not GPUs, true, but very close.

Flow simulation also happens on GPUs rather than CPUs, though.

El Capitan is #1 on the TOP500, and the flops ratio between its CPUs and GPUs is nearly 1 to 100.


You should mark sarcasm this subtle.
