Progress is definitely not incremental; it's exponential.
The same performance (training an LLM to a given perplexity) can be achieved about 5x more cheaply the next year, while the amount of money flowing into deep learning infrastructure is itself growing exponentially right now.
If this method is able to get to AGI (which I believe, though many people debate it), human-level intelligence will mostly be "skipped" rather than marking a clear milestone.
In nature, exponential curves reveal themselves to be sigmoidal on a long enough time scale. Since you're on HN you probably have a mathematical bent, and you should know that.
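The exponential-vs-sigmoid point is easy to see numerically: a logistic curve is nearly indistinguishable from a pure exponential while it is far below its carrying capacity, then flattens out. A minimal sketch with made-up toy parameters (the capacity `K`, rate `r`, and start value `x0` are illustrative, not taken from any real data):

```python
import math

# Toy parameters: carrying capacity K, growth rate r, initial value x0.
K, r, x0 = 1000.0, 1.0, 1.0

def logistic(t):
    """Logistic (sigmoid) growth: exponential early, saturating at K."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

def exponential(t):
    """Unbounded exponential growth with the same initial slope."""
    return x0 * math.exp(r * t)

for t in [0, 2, 4, 8, 12]:
    print(f"t={t:2d}  exp={exponential(t):10.1f}  logistic={logistic(t):7.1f}")
```

Early on (small `t`) the two curves track each other closely; by `t = 12` the exponential has blown past the capacity while the logistic has saturated near `K`. The catch, of course, is that nothing in the early data tells you where the inflection point is.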