Sorry to repeat myself from (much) older posts: my real fear is content generation. What if search engines are completely overwhelmed with "sounds OK/meh" generated content? How can we identify the more nuanced stuff written by real humans? Or are we doomed?
Less doom-and-gloom view: many of the posts here were about humour. That is a terrific use case for AI language models. You could have something like a Medium blog (for monetisation) where the author tries to induce funny responses from the AI language model, and the best responses become the day's blog post. I would definitely read it. The one posted here about Lord Byron rhyming about SpongeBob SquarePants was hysterical to me. Or what if you asked the AI language model to generate a fake interview between Lord Byron (or Mark Twain) and Lex Fridman? The result would probably be hysterical, and Lex would love it!