This is a trust issue. If someone I trust hands me a big PR, I focus on the important details. If someone I don't trust hands me a big PR, I just reject it and ask them to break the problem down further. I don't waste my time on this kind of thing, regardless of whether it was hand-written or generated.
Nowhere in here does it indicate that the generated plan was wrong or broken. I don't care if you use AI to write. I care if you write well. If the author trusted the other person, then it shouldn't matter. If the author didn't trust the other person, then they'd have to validate their output anyway. Granted, the tech allows people I don't trust to generate a lot more BS, a lot faster. But I just reject and move on with my life in that case. I am no AI booster, but a lot of people are expressing distaste for tools when they should be expressing distaste for fools.
> It used to be that a well-written document was a proof-of-work that the author thought things through (or at least spent some time thinking about it).
I think you hit the nail on the head here. The problem isn't so much that people can do bad work faster than ever now; it's that we can no longer rely on the same heuristics for quickly assessing a given piece of work. I don't have a great answer. But I do still think it has something to do with trust and how we build relationships with each other.
> Granted, the tech allows people I don't trust to generate a lot more BS, a lot faster. But I just reject and move on with my life in that case.
But even a rejection is work. So if they're generating more BS faster, they are generating more work for you. And, in some organizations, they will receive rewards for occasionally pressing buttons and inundating you with crap.
> a lot of people are expressing distaste for tools when they should be expressing distaste for fools
I'm pretty sure that the original article, and most of the derogatory comments here, are expressing distaste for fools rather than tools. Specifically, tool-using fools.
OK, but how does it work, though? Is this seriously just passing the titles to some LLM with a prompt like 'roast this'? Is it reading the actual content of the link as well?
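For what it's worth, the naive version you're describing is only a few dozen lines. Here's a rough sketch, assuming the public Hacker News Firebase API and the OpenAI Python SDK; the model name, the prompt, and whether the linked page actually gets fetched are all guesses about how such a bot *might* work, not knowledge of this one:

```python
# Hypothetical sketch of a "roast this" bot: fetch HN front-page titles,
# optionally pull each linked page's raw text, and ask an LLM to roast it.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
import requests
from openai import OpenAI

client = OpenAI()

def fetch_top_stories(n: int = 5) -> list[dict]:
    """Grab the top n stories from the public Hacker News API."""
    ids = requests.get(
        "https://hacker-news.firebaseio.com/v0/topstories.json", timeout=10
    ).json()[:n]
    return [
        requests.get(
            f"https://hacker-news.firebaseio.com/v0/item/{i}.json", timeout=10
        ).json()
        for i in ids
    ]

def roast(story: dict, include_page_text: bool = False) -> str:
    """Ask the model to roast a story; optionally include the linked page."""
    context = f"Title: {story['title']}"
    if include_page_text and story.get("url"):
        # Crude: a real bot would strip HTML and truncate more carefully.
        page = requests.get(story["url"], timeout=10).text[:4000]
        context += f"\n\nPage excerpt:\n{page}"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat model would do
        messages=[
            {"role": "system",
             "content": "Roast this Hacker News submission in two sentences."},
            {"role": "user", "content": context},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    for s in fetch_top_stories():
        print(s["title"], "->", roast(s))
```

Whether a given bot reads the article body or only the title is exactly the difference in that `include_page_text` flag; title-only is cheaper and avoids scraping, which may be why so many of these bots feel shallow.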
The year is 20X5. Despite the onslaught of artificially intelligent agents capable of understanding and synthesizing new concepts in written language, humans are still capable of basic cognition… for now.