Interesting, I feel pretty much the opposite. To me these podcasts are the equivalent of the average LLM-generated text. Shallow and non-engaging, not unlike a lot of the "fake marketing speech" human-generated content you find in highly SEO-optimized pages or low-quality Youtube videos. It does indeed sound real, but not mind-blowing or trustworthy at all. If this was a legit podcast found in the store I would've turned it off after the first 30 seconds because it doesn't even come close to passing my BS filter, not because of the content but because of the BS style.
It's decent background noise about a topic of your choice, with transparently fake back-and-forth between two speakers with some meaningless banter. It's kind of impressive for what it is, and it can be useful to people, but it's clearly still missing important elements that make actual podcasts great.
It’s intentionally fine-tuned to sound that way because Google doesn’t want to freak people out.
You can take the open source models and fine-tune them to take on any persona you want. A lot like what the Flux community is doing with the Boring Reality fine-tune.
Exactly. And pay more attention to the rate of change (delta/time) and the acceleration of that change (delta/delta/time).
We are all enjoying/noticing some repeatable wack behavior of LLMs, but we are seeing the dual wack of humans revealed too.
Massive gains in neural type models and abilities A, B, C, ..., I, J, K, in very little time.
Lots of humans: It's not impressive because it can't do L or M yet.
They say people model change as linear, even when it is exponential. But I think a lot of people judge the latest thing as if it somehow became a constant. As if there hasn't been a succession of big leaps, and that they don't strongly imply that more leaps will follow quickly.
Also, when you know before listening that a new artifact was created by a machine, it is easy to identify faults and "conclude" the machine's output was clearly identifiable. But that's pre-informed hindsight. If anyone heard this podcast in the context of The Onion, it would sound perfectly human. Intentionally hilarious, corny, etc. But it wouldn't give itself away as generated.
I feel like people have been saying that since GPT-4 dropped (many papers up the line now) and while there have been all sorts of cool LLM applications and AI developments writ large, there hasn't really been anything to inspire a feeling that another step change is imminent. We got a big boost by training on all the data on the Internet. What happens next is unclear.
Except that none of the fundamental limitations have changed for many years now. That was a few thousand papers ago. I'm not saying that none of the LLM stuff is useful; it is, and many useful applications are likely still undiscovered. I'm using it daily myself. But people expecting some kind of sudden leap in reasoning are going to be pretty disappointed.
We don't even need to look that far. During an extended interaction the new ChatGPT voice mode suddenly began speaking in my boyfriend's voice. Flawlessly. Tone, accent, pauses, speaking style, the stunted vowels from a childhood mouth injury. In that moment there were two of him in the room.