Hi! I work at IEEE Spectrum and there's no way an LLM wrote this. We have a pretty strict Generative AI use policy (bottom of this page https://spectrum.ieee.org/about). I'm guessing this is from writers using actual writing techniques that Gen AI stole from...
I took my introductory college writing classes at a college I can’t name-drop without sounding like a jerk, which also did a bunch of LLM research over the years. We used a TON of em dashes in our writing. It’s no mystery, to me, where that stylistically prevalent quirk comes from. I’ve definitely been accused of being an LLM bot.
I was speaking with my 14 year old nephew via messaging last month. It was about a deep topic, synthetic consciousness. He wrote such an intelligent reply that I asked him: hey, was this from an LLM? He was insulted. I checked with his parents and concluded it was 90% no, he's just a very smart kid.
Is there a name for this mode of confusion yet?
He was insulted on two counts: first, the doubting of his intelligence; second, the insinuation of deception: "You aren't that smart, surely you cheated."
The insulting didn't end there. You asked his parents! Even then you only landed at 90%, yet another insult because why can't he earn 100%? Ethical dilemmas on all sides!
Indeed. Along the same lines, the recent "humans parading as agents" on moltbook story made me think... what is the inverse of the goal of captcha? That's impossible practically, right?
Yes, suspected LLM output is mimicry passing under the Turing test, cheapened by poor editing and further cheapened by even poorer training. It is going to get both worse and better at the same time. (I grew up with ELIZA.BAS and used to spot fakes easily; it is getting both easier and harder.) I detest the "LL" in LLM, but not the "M". It's a statistical model. It is a very large statistical language model that is constantly fooling slower hoomins.
That policy doesn't explicitly disallow writers from using LLMs as part of their process, nor does it mention reviewing submissions for content that could be LLM-generated.
I like some of the ideas in the article but there are some very "it wasn't just A, it was B" sentences in there. IEEE has a higher standard.
It's because there's clearly a near-1:1 ratio of input to output. I also noticed some LLMisms, and I suspect the author may have run the text (perhaps in the form of a large number of bullet points) through an LLM. But because he's using the LLM to clean rather than to multiply, it's still worth reading.
Probably similar to what I do with my papers and resumes, I write them myself then throw them through LLMs for suggestions and corrections, manually reviewing the output.
LLM-isms are tolerably bad; an LLM's narrative ability is intolerably terrible. As others said, because a human actually wrote the overall narrative for this, it was still compelling to read. The mistake would be skipping a well-narrated and thoughtful article just because of a few bad LLM-isms.
I think LLM's lack of "theory of mind" leads to them severely underperforming on narration and humor.
Ironic, because IEEE Spectrum has an anti-LLM policy. So your fixation on LLM writing styles has indirectly caused you to stop supporting genuine prose.
Seriously, there's no LLM stuff in here. Only em dashes, which were used in journalism decades before AI was even a thing.
I feel for you, because moving forward more and more interesting and substantive articles will be written with LLM-isms, either because an LLM was used directly in the writing or because the authors absorbed the style.
The way this article was written is the standard way these kinds of US pop science articles have always been written. It's the LLM that absorbed that style, not the other way around.
I think of Tesla Autopilot as sophisticated cruise control. It can perform most driving tasks better than I can and saves a lot of cognitive work, but it still needs close to 100% of my attention.
No technology so far has reduced the amount of time we spend working. If anything it's increased. A few decades ago it was common for a household to have only one adult in the labour market, now it's common to have two. The work around the house hasn't gone away, of course.
People are already at the point they could be working a fraction of the time they do, but they continue to work full time, producing shit they don't need, like ever growing vehicles and incremental smartphone upgrades etc.
Most people in developed countries are well above poverty. According to one study[0], 99% of Americans have at least one streaming subscription.
In Europe most people get healthcare free at the point of use.
We easily produce what we need and could probably go completely sustainable if people weren't addicted to working and making more stuff. How do we make sustainability as satisfying as building?
It’s like how adding a lane to a highway doesn’t decrease traffic; traffic just increases to consume the additional capacity. Efficiency gains at work just mean more work, haha.
Genesis 1:13, Eve optimal
Replace 'Then' with 'And' in optimal ('And the LORD God said') and poetic_daily to preserve narrative vav-consecutive connective consistently.
He has a knack for putting words to the vague frustrations I feel but can't quite articulate. How does he find so many perfect examples that nail exactly what's wrong?
Tried it in earnest. Definitely detect some aggression, and would feel stressed if this were an exam setting. I think it was pg who said that any stress you add in an interview situation is just noise, and dilutes the signal.
Also, given that there are so many ways for LLMs to go off the rails (it just gave me the student ID I was supposed to say, for example), it feels a bit unprofessional to be using this to administer real exams.