So the question is: if you have a long expression, should you have to worry about either adding parentheses or making sure that your line break falls inside an existing pair of parentheses?
It boils down to preference, but a language feature that supports whatever preference you have might be nice.
    priority = "URGENT" if hours < 2 else
               "HIGH" if hours < 24 else
               "MEDIUM" if hours < 72 else
               "LOW"
> All of these occulted skills, that we literally can't explain why they work, are akin to gamblers' superstitions.
Yes, it's hard to see how, at this moment in time, "Anybody can write code with an LLM" is so different from "Anybody can make money in the stock market."
The underlying mechanisms are completely different, of course, and the putative goal of the LLM purveyors is to make it so that anybody really can write code with an LLM.
I'm typically a nay-sayer and a perfectionist, but many not-so-great things become and stay popular, and this may fall into that category.
> Kind of weird that tools also incorporate addictive gambling games' UX design.
It's unclear whether it started out this way, but since it's obviously heading this way, it's certainly prudent to ask whether some of this is by design. It would presumably be more worrisome if there were only a single vendor, but even with multiple vendors it might be lucrative for them to design things so that "true insider knowledge" of how to write good prompts is a sought-after skill.
Why? Because all the folks involved have created a technology in search of a problem to solve. That never, ever works. Steve Jobs of all people left this piece of wisdom behind. It's amazing how few actually apply it.
The internet was never this: its origins go back to the need to be able to transmit data (DARPA). And this is what we still do now...
The only thing worse than the overuse of AIs is the ever present handwringing and finger-pointing of people who wrongly believe they are infallible AI detectors.
Curiosity is good and helps with your personal development, for sure.
OTOH, tfa specifically said:
> I feel the same way about the current crop of AI tools. I've tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I'm utterly content to wait until their hype has been realised.
So it's not like he's being deliberately ignorant; rather, he's simply deliberately slow-walking his journey.
my company (mid-size, publicly traded) is mandating [x] hours spent on AI per week. i have no idea how they're planning on measuring this, and as far as i can tell, neither does management.
suppose it's better than counting lines of code, though.
My uncle leads IT support teams; his org is measuring AI use in writing reports and tickets. The org has very poorly structured and obsolete processes (he's trying to straighten them out as he goes), and AI will probably amplify the lack of structure by making it easier for the work to _look_ as if someone carefully reviewed the issues and followed procedure.
A friend is a team lead in an org that's mandating vibecoding via "Devin", a lesser-known player that an "architect" chose after a shallow review. The company also has endemic process issues and simply can't do deployments reliably; it's behind the times in methodology in every other respect. Higher-ups are placing their trust in a B-list agentic tool instead of fixing those problems.
Anyway, I wouldn't be caught dead working at either of those two shops even before the AI rollout, but this is what's going on in the IT underworld.
I hate the AI assistants for ticket-writing. The beneficial use there would be to prompt for possibly-useful information that's not present, or call out ambiguity and let the writer decide how to resolve any of that. Coaching, basically. Suggesting actual text to include, for people who aren't already excellent at ticket-writing, just leads to noisier tickets that take more work to understand ("did they really mean this, or did the LLM just prompt them to include it and they thought sure, I guess that's good?")
[EDIT] Oh and much of your post rings true for my org. They operate at a fraction the speed they could because of organizational dysfunction and failure to use what's already available to them as far as processes and tech, but are rushing toward LLMs, LOL. Yeah, guys, the slowness has nothing to do with how fast code is written, and I'm suuuuure you'll do a great job of integrating those tools effectively when you're failing at the basics....
Lots of organizations don't want to accept that their velocity issues are quality issues. It's often a view held by an old guard that was there when the business experienced growth by adding features, while not having to bear any maintenance burden. The people who remain are either also oblivious to this, or simply have stopped caring.
LLM-generated code hits all the right notes, it's done fast, in great volumes, and it even features what the naysayers were asking for. Each PR has 20 pages of documentation and adds some bulk to the stuff in the tests folder, that can sit there looking pretty. How wonderful! Hell, you can even do now that "code review" that some nerd was always complaining about, just ask the bot to review it and hit that merge button.
Then you ask the bot to generate the commands again for the deploy (what CI pipeline?) and bam! New features customers will love. And maybe data corruption.
A firm that is led by people who can envision, very clearly, revenue-generating and cost-reduction projects - wins. Writing code by hand is absolutely irrelevant. Who fucking cares. The former is what matters.
Code generation acceleration only matters when those pre-requisites are met. How did Apple go from the verge of bankruptcy to where it is today?
All I'm seeing is that most people are not smart at all. No wonder they are so impressed by LLMs! They can't think straight. I only see this getting worse over time. Perhaps that is the stated goal.
Well before offices were computerized at all, some manual processes turned out to be more effective than the fully computerized versions that eventually replaced them. Full computerization sometimes came decades later, so by then nobody could tell which workflows this actually applied to, or wouldn't believe it anyway by the 21st century.
It was truly quite rare to have such well-honed manual processes though, the "average" place had a lot of elements that were far from perfect but still benefited after the computerization dust had settled. Then at the opposite end of the spectrum were companies where everything was an absolute shitshow, maybe since the beginning.
That's kind of where Conway's Law comes from, if you benchmark against a manual shitshow that has built up over the years, and replace it with a computerized version, you get a shitshow on steroids. The only other choice would have been to spend the appropriate number of years manually undoing the shitshow before making any really bold moves.
Now AI can really take things to a whole 'nother level, not just on steroids but possibly violating Conway's Law . . . squared.
Linus has the same problem as corporate America, but at a slightly different level.
He needs some group of really smart, self-selected programmers, where corporate America is looking for a larger group of somewhat less intelligent programmers.
You might be describing an even more intelligent programmer, but unless he can dumb down his programming to fit in, he's probably better off being a lone developer.
> If you see a bad starting point picked by someone else, by all means, point it out if it will be problematic now or in the foreseeable future, because that's a bug.
Can't disagree at all, but many people push back in the name of XP.
> poor programmers is a belief that they're special snowflakes who can anticipate the future.
Some of us are actually reasonably good at anticipating the future, and don't mesh well in environments where people claim to care about quality but are always deferring stuff for the next release.
> This is why no matter how many brilliant programmers scream YAGNI, don't do BDUF and don't prematurely optimize there will always be some comment saying the equivalent of "akshually sometimes you should...",
You write as if there are no brilliant people doing BDUF. This is unequivocally false.
> remembering that one time when they metaphorically rolled a double six and anticipated the necessary architecture correctly when it wasn't even necessary to do so.
You've doubled down on the gambling trope. Any development is gambling. Thinking otherwise is foolish.
Sure, people who live and breathe YAGNI gamble in different ways, but they still gamble. I can't count the number of times I've made the statement "but if this happens and then that happens, it will be bad" which was countered with "but what are the chances of that?"
Look, if you have to ask, the probability of it being a problem at the customer approaches one.
My personal solution?
I went to work for chip companies. Chip companies have to put a stake in the ground and work to build something that they will be able to sell as designed and implemented.
This doesn't mean that there isn't a triage for features, or the potential for feature creep, or that there isn't a triage for the most important bugs right before tapeout, but the process that works and gets you a chip that customers will use is much closer to what people call BDUF than it is YAGNI.
Git's merge is already symmetrical for two branches being merged, and that, in and of itself, often leads to problems.
It's completely unclear that extending this to multiple branches would provide any goodness.
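As an aside, git does already support merging more than two branches into a single commit via the octopus strategy, though it refuses any merge that would need manual conflict resolution. A minimal sketch in a throwaway repo (branch and file names are made up):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "you"
echo base > base.txt
git add base.txt && git commit -qm base
main=$(git rev-parse --abbrev-ref HEAD)  # "main" or "master", depending on git version

# Two topic branches that touch different files, so the merge is conflict-free.
git checkout -qb topic-a && echo a > a.txt && git add a.txt && git commit -qm a
git checkout -q "$main"
git checkout -qb topic-b && echo b > b.txt && git add b.txt && git commit -qm b
git checkout -q "$main"

# An extra commit on the main branch rules out a fast-forward.
echo more >> base.txt && git add base.txt && git commit -qm more

# Passing several branches to `git merge` selects the octopus strategy:
# one merge commit with three parents.
git merge -q --no-edit topic-a topic-b
git cat-file -p HEAD | grep -c '^parent'
```

The last command counts the `parent` headers of the merge commit; with two merged heads plus the previous tip of the main branch, there are three.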