But I'm not interested in the AI's point of view. I have done that myself.
I want to hear your thoughts, based on your unique experience, not the AI's, which is an average of the experience in the data it ingested. The things that are unique will not surface because they aren't seen enough times.
Your value is not in copy-pasting. It's in your experience.
Did you agree with it before the AI wrote it though (in which case, what was the point of involving the AI)?
If you agree with it after seeing it, but wouldn't have thought to write it yourself, what reason is there to believe you wouldn't have found some other, contradictory AI output just as agreeable? Since one of the big objections to AI output is that it uncritically agrees with nonsense from the user, sycophancy-squared is even more objectionable. It's worth the effort to avoid falling into this trap.
Well - the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details, sometimes showing new ways.
I find the second paragraph contradictory: either you fear that I would agree with random stuff the AI writes, or you believe that the sycophantic AI is writing what I believe. I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?
> Well - the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details
> I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?
Because the AI will happily argue either side of a debate, in both cases the meaningful/useful/reliable information in the post is constrained by the limits of _your_ knowledge. The LLM-based one will merely be longer.
Can you think of a time when you asked AI to support your point, and upon reviewing its argument, decided it was unconvincing after all and changed your mind?
You could ask Kimi K2 to demolish your point instead, and you may have to hold it back from insulting your mom in the P.S.
Generally, if your point holds up under Kimi's pressure after some polishing, then by all means post it on HN, I'd say.
Other LLMs do tend to be more gentle with you, but if you ask them to be critical or to steelman the opposing view, they can be powerful tools for actually understanding where someone else is coming from.
Try this: ask an LLM to read the comment of the person you're replying to, and ask it to steelman their arguments. Then consider whether your point is still defensible, or what kinds of sources or data you'd need to bolster it.
> why would you prefer my writing over an LLM-generated one?
Because I'm interested in hearing your voice, your thoughts, as you express them, for the same reason I prefer eating real fruit, grown on a tree, to sucking high-fructose fruit goo squeezed fresh from a tube.
"I asked an $LLM and it said" is very different than "in my opinion".
Your opinion may be supported by any sources you want as long as it's a genuine opinion (yours), presumably something you can defend as it's your opinion.
When you start to think about that statement, and why it was written there, why a company chooses to pay $ to tell you this, you know that inherently something went REALLY wrong in the past.
And because it's a company, they're doing the bare minimum to fix it, as to minimize the impact on their bottom line.
It reminds me of the ads against a certain prop in CA, the one that would make app workers (?) employees.
Advertisements taken out by Lyft, Uber, etc, all to sway people.
When companies want you to do something it's not in your best interest. It's in theirs.
I disagree. I think we're testing it, and we haven't seen the worst of it yet.
And I think it's less about non-deterministic code (the code is actually still deterministic) and more about this new-fangled tool that finally allows non-coders to generate something that looks like it works. And in many cases it does.
Like a movie set. Viewed from the right angle it looks just right. Peek behind the curtain and it's all wood, thinly painted, and it's usually easier to rebuild from scratch than to add a layer on top.
I suspect that we're going to witness a (further) fork within developers. Let's call them the PM-style developers on one side and the system-style developers on the other.
The PM-style developers will be using popular loosely/dynamically-typed languages because they're easy to generate and they'll give you prototypes quickly.
The system-style developers will be using stricter languages and type systems and/or lots of TDD because this will make it easier to catch the generated code's blind spots.
One can imagine that these will be two clearly distinct professions with distinct toolsets.
I actually think that the direct usage of AI will reduce in the system-style group (if it was ever large there).
There is a non-trivial cost in taking apart the AI's code to ensure it's correct, even with tests. And I think it's easy to end up slower than writing it from scratch.
My initial thought about juries is that they are inherently more fair, as they represent the people.
Change always happens in the public, which then causes people to vote a certain way, which then changes the laws.
So you end up in situations where what you did is wrong by law, but you get a better outcome because of the jury, as the jury is the public, as opposed to a judge, who is usually well-paid and thus quite disconnected from the 'common man'. In the US, for example, many judges are elected, which means they should represent the public better.
When you have a multi-platform image, the actual per-platform images are usually not tagged. No point.
But that doesn't mean that they are untagged.
So on GitHub Actions, when you upload a multi-platform image, the per-platform images show up in the untagged list. And you can delete them, breaking the multi-platform image, as it now points to blobs that don't exist anymore.
UI has really gone downhill in the last couple of years:
Reduced information density
Abuse of paradigms we learned in the past (e.g. blue button to accept, links for decline) to make us accept harmful settings
Notifications for just about everything
Popups inside of the app
Adding buttons for functionality that's not included in your subscription, which show a nag screen once you click them
UI upgrades which add nothing (see all above)
Removal of agency. I don't want the upgrade, nor do I want to be reminded in 3 days.
A changing UI sounds like absolute hell to me. When I was growing up I broke my Windows installation a lot by changing settings. That's how I learned.
Equally, I don't like how many instructions and scripts everywhere use shorthands.
Sometimes you see curl -sSLfO. Please, use the long form. It makes life easier for everybody. It makes it easier to verify, and to look up. Finding --silent in curl's docs is easier than reading through every occurrence of -s.
curl --silent --show-error --location --fail --remote-name https://example.com/script.sh
For a small flight of fancy, imagine if each program had a --for-docs argument, which causes it to simply spit out the canonical long-form version equivalent to whatever else it has been called with.
While I'd appreciate that facility too, it seems... even-more-fanciful, as one tool would need to somehow incorporate all the logic and quirks of all supported commands, including ones which could be very destructive if anything went wrong.
Kind of like positing a master `dry-run` command as opposed to different commands implementing `--dry-run` arguments.
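For a sense of what that fancy would look like in practice, here's a purely hypothetical sketch: a per-command lookup table that expands short flags to long ones before printing the command. No real tool implements `--for-docs`; only a handful of curl flags are mapped, and combined flags like -sSL aren't handled.

```shell
# Hypothetical sketch of the "--for-docs" idea: expand short flags to
# their long forms and print the resulting command. Only a few curl
# flags are mapped here, purely for illustration.
expand_flags() {
    cmd=$1
    shift
    out=$cmd
    for arg in "$@"; do
        case "$cmd:$arg" in
            curl:-s) arg=--silent ;;
            curl:-S) arg=--show-error ;;
            curl:-L) arg=--location ;;
            curl:-f) arg=--fail ;;
            curl:-O) arg=--remote-name ;;
        esac
        out="$out $arg"
    done
    printf '%s\n' "$out"
}

expand_flags curl -s -L https://example.com/script.sh
# prints: curl --silent --location https://example.com/script.sh
```

The real difficulty the parent points at remains: a general version needs every tool's quirks (combined flags, flags that take arguments), which is exactly why it stays a flight of fancy.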
I did muck around with using "sed" to process the "man" output to find a relevant long option in a one-liner, so it wouldn't be too difficult to implement.
I did something like this:
_command="sed" _option="n"
man -- "${_command}" | sed --quiet --expression "s/^ -${_option}.*, //p"
Then I realised that a bit of logic is needed (or more complicated regexp) to deal with some exceptions and moved onto something else.
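A self-contained variant of the same idea, parsing a canned help-text snippet instead of live man output so it runs anywhere (the option names are curl's, but the parsing is still the naive version that trips over the exceptions mentioned above):

```shell
# Look up the long form of a short option by scanning help-style text.
# Uses an inline string instead of `man` so it is self-contained; real
# man pages need the extra logic for exceptions noted above.
lookup_long_option() {
    # $1: help text, $2: short option letter
    printf '%s\n' "$1" | sed --quiet --expression "s/^ *-$2, \(--[a-z-]*\).*/\1/p"
}

help_text=' -s, --silent Silent mode
 -S, --show-error Show error even when -s is used
 -L, --location Follow redirects
 -O, --remote-name Write output to a file named as the remote file'

lookup_long_option "$help_text" s
# prints: --silent
```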
agreed. i get it if you're great at cli usage or have your own scripts, but if you're publishing for general use, it should be long form. that includes even utility scripts for a small team.
also, by putting it out long-form you might catch some things you do out of habit, rather than what's necessary for the job.
Another possible advantage is that I invariably have to check the man page to find the appropriate long-form option and sometimes spot an option that I didn't know about.