Today on the front page there was an obviously vibe-coded Python script that pulls OSM data and slaps a colour scheme on it. Of course the data was skewed, because apparently LLMs don't do projections...
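(For anyone wondering what the projection problem looks like: OSM coordinates come as WGS84 lon/lat degrees, and if you plot those directly as x/y the map gets stretched east-west at high latitudes, since a degree of longitude shrinks by cos(lat) in ground distance. A minimal sketch of the standard fix, the spherical Web Mercator / EPSG:3857 forward formula — the function name is mine, the formula is the standard one:)

```python
import math

R = 6378137.0  # Web Mercator sphere radius in metres (WGS84 semi-major axis)

def to_web_mercator(lon_deg: float, lat_deg: float) -> tuple[float, float]:
    """Project WGS84 lon/lat degrees to Web Mercator (EPSG:3857) metres.

    Valid for |lat| < ~85.05 degrees; beyond that y diverges.
    """
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# Sanity checks: the origin maps to (0, 0), and the antimeridian at the
# equator maps to x = R * pi ≈ 20037508.34 m (the familiar Web Mercator bound).
print(to_web_mercator(0.0, 0.0))
print(to_web_mercator(180.0, 0.0)[0])
```

(Even Mercator distorts area badly near the poles, of course, but at least shapes stay locally correct; plotting raw degrees gets you neither.)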
I gave up on the first non-ironic 'You are absolutely correct' comment... What is even real...
To be fair, vibe discovery is a lot more viable than vibe coding. Vibe coding implies the LLM output is acceptable as-is. Vibe discovery implies a human in the loop, because LLMs can't "discover". They have no innate preferences grounded in lived experience the way a human or any biological organism does.
Exactly! LLMs' (or any Gen-AI) lack of lived-experience/emotions is their Achilles heel. The best human creators understand how to inspire emotions mainly because they can feel it themselves. Most other humans, despite innately understanding emotions, can't really create things that inspire emotions in others. So, Gen-AI as we know it today can't really reach a point where it deeply, personally understands and inspires emotions. Vibe discovery bridges this gap, I think.
These seem to be more about archival after the fact. I would like to see a commercial certification & commitment right from the start, similar to OU for food or UL/ECE for electronics
at https://comma.ai/support#what-is-openpilot