> > Here's something most developers overlook: if an LLM has a 2% JSON defect rate, and Response Healing drops that to 1%, you haven't just made a 1% improvement. You've cut your defects, bugs, and support tickets in half.
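The arithmetic checks out: going from a 2% to a 1% defect rate removes half of all malformed responses. A minimal sketch of what a "healing" step might look like, assuming the common failure modes are code fences, trailing commas, and similar near-miss JSON (real implementations may re-prompt the model instead; the function name and repairs here are illustrative, not any particular library's API):

```python
import json

def heal_json(raw: str) -> dict:
    """Try to parse model output as JSON; on failure, apply a few
    cheap textual repairs before giving up entirely."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    cleaned = raw.strip()
    # Strip a markdown code fence like ```json ... ```
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")
        if cleaned.startswith("json"):
            cleaned = cleaned[4:]
    # Drop trailing commas before a closing brace/bracket.
    cleaned = cleaned.replace(",}", "}").replace(",]", "]")
    return json.loads(cleaned)  # raises if still unparseable

print(heal_json('{"a": 1,}'))            # → {'a': 1}
print(heal_json('```json\n{"b": 2}\n```'))  # → {'b': 2}
```

Each repair only fires when plain parsing fails, so well-formed responses pass through untouched.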
Why do you have an expectation that a company will disclose to you when they use AI for their copywriting? Do you want them to disclose the software they used to draft and publish? If a manager reviewed the blog post before it went live?
This tells us more about your (lacking) skills at using AI than about the state of AI tools themselves.
You're probably using the free lobotomized versions of LLMs, firing off a short, ambiguous one-off prompt and wondering why the result didn't turn out the way you imagined.
Meanwhile people spend hundreds of dollars on pro LLM access and learn to use tool calling, deep research, agents and context engineering.
I tried this app and immediately got caught. The app window was visible in plain sight to the interviewer. The interviewer asked me what I was doing. I tried to pretend I didn't know what they were talking about, but obviously that only made things worse because they were seeing everything. I got banned from ever interviewing at that company and they said they would report me. I'm not sure where they reported me to, whether it's some company information sharing platform or the authorities, but needless to say I am scared and hate myself. I was just nervous about that interview and just wanted to make sure I passed it, as I do have the skills they were gonna ask about in the interview. I fucked up.
Kind-of. You could theoretically use LoRA for this, in fact, but it probably wouldn't have enough capacity to make it a proper substitute for the attention mechanism. Instead, a full MLP is trained as the input chunks get processed.
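The capacity gap can be made concrete with a back-of-the-envelope parameter count, using hypothetical but typical sizes (hidden dimension, LoRA rank, and MLP expansion factor are all assumptions here, not from the comment):

```python
# Rough capacity comparison: a low-rank LoRA adapter vs. a full
# transformer-style MLP block. All sizes are illustrative.
d_model = 4096       # assumed hidden size
rank = 16            # assumed (typical) LoRA rank
d_ff = 4 * d_model   # standard 4x MLP expansion

# LoRA: two low-rank factors, A (d_model x rank) + B (rank x d_model)
lora_params = 2 * d_model * rank
# Full MLP: up-projection + down-projection weight matrices
mlp_params = 2 * d_model * d_ff

print(lora_params)              # → 131072
print(mlp_params)               # → 134217728
print(mlp_params // lora_params)  # → 1024
```

At these sizes the full MLP has about three orders of magnitude more trainable parameters than the adapter, which is the intuition behind "wouldn't have enough capacity."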
The content of your posts is really insightful and interesting, but it feels like junk quality because of the way LLMs write blog posts.
What was your prompt?