Sure, our streets are shit, our legislators are actively trying to ruin the tech industry, we spent $130B on 300 yards of train, policing in large cities is a disaster, and the "best state school system" translates into an education level at 29th/37th country-wide, building housing is impossible due to the world's worst permitting process, but otherwise...
Look, I don't mind paying a lot of taxes if there are services to show for it. And I'm deeply in the blue camp. But CA leadership (state/county/city) is still an utter disaster and needs to be tossed out on its ear.
Yeah I don’t mind Californians shitting on California, by all means, strive for perfection. But… have you been to eastern Kentucky? There’s levels to this.
Also, low key, visit eastern Kentucky. Gorgeous countryside, Red River Gorge. Genuine people, I once stayed in someone’s cabin over Thanksgiving and they insisted on bringing me a fresh Thanksgiving meal, a gesture I’ll never forget.
Most metrics teams are reasonably competent and are aware of that. Excepting "growth hackers."
I haven't been in a single metrics discussion where we didn't talk about what we're actually measuring, whether it reflects what we want to measure, and how to counterbalance metrics sufficiently so we don't build yet another growth-hacking disaster.
Doesn't mean that metrics are perfect - they are in fact aggravatingly imprecise - but the ground truth is usually somewhat better than "you clicked it, musta liked it!"
I feel like this needs a big asterisk. Can you ship a non-trivial iOS or Mac app that uses SwiftUI or other first-party APIs without Xcode? Is it practical? Those are real questions; some cursory searching did not turn up a concrete answer.
It is possible and practical in lots of cases. And it's necessary to use the CLI tools directly in some situations, such as when deploying from CI servers rather than building by hand.
Is it possible for 100% of situations? I don't know, because I haven't tried 100% of the situations. And there's one case I haven't figured out yet (AUv3).
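For what it's worth, the CI case usually comes down to driving `xcodebuild` from a script. A minimal sketch; the scheme name and paths here are made-up placeholders, not anything from this thread, and the whole thing only runs on a machine with the macOS toolchain installed:

```shell
#!/bin/sh
# Hedged sketch of a CI-style build without opening Xcode.
# "MyApp", the scheme, and all paths are hypothetical placeholders.
SCHEME="MyApp"
ARCHIVE="build/MyApp.xcarchive"

if command -v xcodebuild >/dev/null 2>&1; then
    # Archive a release build, then export it using an
    # export-options plist checked into the repo.
    xcodebuild -scheme "$SCHEME" -configuration Release \
        -archivePath "$ARCHIVE" archive
    xcodebuild -exportArchive -archivePath "$ARCHIVE" \
        -exportOptionsPlist ExportOptions.plist \
        -exportPath build/export
else
    # Not on macOS (or toolchain missing) - nothing to do.
    echo "xcodebuild not found (requires the macOS command-line toolchain)"
fi
```

Whether the same path covers code signing, notarization, and every extension type (like the AUv3 case above) is exactly the open question.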
Yes, but this is not most contexts. If you're running an "experiment" you should probably not be anthropomorphizing the machine that's being experimented with.
They are also collectively rational, as a response to an ecosystem that's spun out of control and habitually consumes rat's nests of dependencies.
Early participation and beta programs are outsourcing careful engineering by making everybody else guinea pigs. If we want to sling around accusations of free-riding (really?!): slacking on testing is free-riding on your early users.
I really wish we'd stop arguing about AI with a "some automation failed, so all automation is bad" approach.
Yes, AF447 crashed due to lack of training for a specific situation. And yet, air travel is safer than ever.
Yes, that Tesla drove into a wall, and yet robotaxis exist, work well, and are significantly safer than human drivers.
Yes, there are a lot of "witchcraft" approaches to working with AI, but there are also significant accelerations coming out of the field that have nothing to do with witchcraft.
Yes, AI occasionally makes very stupid mistakes - but ones any competent engineer would have guardrails in place against.
And so a lot of the piece spends time arguing against strawmen propped up by anecdotes. And that detracts from the deeply necessary discussion kicked off in the second part, on labor shock, capital concentration, and fever dreams of AI.
The problem with AI isn't that it's useless. It's that it's already extremely useful - and that's the thing that'll lead to disrupting the world.
I think you're maybe oversimplifying a bit. I don't think the argument here is that "AI" is not 100% reliable so we shouldn't use AI. There are issues we need to be aware of.
Specifically, AI companies want to inflate the utility of AI because that's how they make money. There should be guardrails where appropriate. Unfortunately, as usual, we need to make mistakes before we can learn from them.
I think you may have missed a subtle point: there is an especial risk from automation which almost always works correctly. The aviation industry calls the phenomenon "automation fatigue". It's very difficult for humans to stay alert and monitor systems like these, and the use of the systems tends to lead to de-skilling over time in the very skills required to monitor them and fix the (rare but fatal - at least in aviation) error cases when they occur.
It also means you'd be so helpless as a developer that you could never debug another person's code - after all, how would you recognize errors you haven't made yourself?
This is extremely funny given we're the industry full of folks who brought the world quantified-self and longevity hacks, together with the peptide boom.
"But that is only because doctors don't keep current/have preconceived notions"
Which would never happen to designers?