I find that the perception of AI coding is very bimodal, and both camps have it wrong. You have the one camp who will only trust it to, at most, write a few lines of code. The other camp prompts, walks away, and pushes (or just has the AI do it).
I believe both camps get it frustratingly wrong. If you haven't yet given it a chance at doing something substantial, then at least _try_ it once. On the other side of the coin, that first experience where it does something 80% right is intoxicating, but AI doesn't reason and can't get it 100% right - it can't even multiply relatively small numbers.
The former camp is going to get left behind and won't be able to compete; the latter camp is one prompt away from a disaster.
I have OTEL + Rust in production, alongside some other languages (+ OTEL), and it is by far the most useful and predictable of them. I often find myself monkey-patching in logging for other languages' libraries, whereas with Rust it just works.
Opened the link. Saw my own comment. I'm still as confused today as I was then about how this was ever supposed to work—either the quoted code is wrong or there's some weird unstated interface contract. I gather from other issues the maintainers are uninterested in a semver break any time soon. Unsure if they'd accept a performance regression (even if it makes the thing actually work). So I feel stuck. In the meantime, I don't use per-layer filtering. That's a trap.
I've got a whole list of puzzling bugs in the tracing <-> opentelemetry <-> datadog linkage.
You'd have to not understand anything, to understand it. "Qualified" people are the most unqualified, and "unqualified" people are geniuses. Just believe what the "unqualified" people say, and you can be a genius too.
> Review by a senior is one of the biggest "silver bullet" illusions managers suffer from.
My manager has been urging us to truly vibe code, just yesterday saying that "language is irrelevant because we've reached the point where it works - so you don't need to see it." This article is a godsend; I'll take this flawed silver bullet any day of the week.
Isn't his point exactly that we don't want to have too many function colors and instead want a generic way of declaring side effects so people can do what they want (be it try fns, IO, async, etc..., no panicking)?
Friendly reminder: most cloud providers have deletion locks. Go and enable them on your prod dbs right now.
Sure, Claude could just remove the lock - but it's one more gate.
Edit: these existed long before agents, and for good reason: mistakes happen. Last week I removed tf destroy from a GitHub workflow, because it was 16px away from apply in the dropdown. Lock your dbs, irrespective of your take on agents.
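As a concrete sketch of the advice above (the instance name `prod-db` is a placeholder): on AWS, deletion protection for an RDS database is a flag on `modify-db-instance`, and you can confirm it took effect with `describe-db-instances`.

```shell
# Enable deletion protection on an RDS instance ("prod-db" is a placeholder).
aws rds modify-db-instance \
  --db-instance-identifier prod-db \
  --deletion-protection \
  --apply-immediately

# Verify the flag is now set.
aws rds describe-db-instances \
  --db-instance-identifier prod-db \
  --query 'DBInstances[0].DeletionProtection'
```

If you manage the database with Terraform, a `lifecycle { prevent_destroy = true }` block on the resource adds a second gate: any plan that would destroy it fails outright.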
> Another reason is compilation time. The more complicated the function signatures, the more prologue/epilogue code we have to generate that LLVM has to chew on. [...]
I know that LLVM completely dominates compilation time, but maybe the improvements from this could make the other bits (i.e. compiling rustc with callconv=fast) fast enough to make up the difference?
I am extremely skeptical that that would be the case. Local stack accesses are pretty guaranteed to be L1 cache hits, and if any memory access can be made fast, it's accesses to the local stack. The general rule of thumb for performance engineering is that you're optimizing for L2 cache misses if you can't fit in L2 cache, so overall, I'd be shocked if this convoluted calling convention could eke out more than a few percent improvement, and even 1% I'm skeptical of. Meanwhile, making 14-argument functions is going to create a lot of extra work in several places for LLVM that I can think of (for starters, most of the SmallVector<Value *, N> handling is choosing 4 or 8 for N, so there's going to be a lot of heap-allocate a 14-element array going on), which will more than eat up the gains you'd be expecting.
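To make the SmallVector point concrete: `llvm::SmallVector<T, N>` keeps its first N elements in inline storage and only heap-allocates once the length exceeds N. Below is a toy Rust sketch of that small-size optimization (the type and methods are my own illustration, not LLVM's code), showing why a 14-operand call overflows the typical N=4 or N=8 inline buffer.

```rust
// Toy model of SmallVector's small-size optimization:
// inline storage for N elements, heap allocation beyond N.
enum SmallVec<T, const N: usize> {
    Inline([Option<T>; N], usize), // fixed inline buffer + current length
    Heap(Vec<T>),                  // spilled to the heap
}

impl<T, const N: usize> SmallVec<T, N> {
    fn new() -> Self {
        SmallVec::Inline(std::array::from_fn(|_| None), 0)
    }

    fn push(&mut self, v: T) {
        match self {
            SmallVec::Inline(buf, len) => {
                if *len < N {
                    buf[*len] = Some(v);
                    *len += 1;
                } else {
                    // Spill: move the inline elements into a heap Vec.
                    let mut heap: Vec<T> =
                        buf.iter_mut().map(|slot| slot.take().unwrap()).collect();
                    heap.push(v);
                    *self = SmallVec::Heap(heap);
                }
            }
            SmallVec::Heap(heap) => heap.push(v),
        }
    }

    fn spilled(&self) -> bool {
        matches!(self, SmallVec::Heap(_))
    }

    fn len(&self) -> usize {
        match self {
            SmallVec::Inline(_, len) => *len,
            SmallVec::Heap(heap) => heap.len(),
        }
    }
}

fn main() {
    // With the common inline size of 8, a 14-operand call spills to the heap.
    let mut operands: SmallVec<u32, 8> = SmallVec::new();
    for i in 0..14 {
        operands.push(i);
    }
    assert!(operands.spilled());
    assert_eq!(operands.len(), 14);
    println!("spilled: {}", operands.spilled());
}
```

The real SmallVector avoids the `Option` overhead with uninitialized storage, but the allocation behavior is the same: every push past the inline capacity means the operand list lives on the heap.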
The New American Dream: Start as a non-profit dedicated to humanity, pivot to a for-profit to scale and eventually find your final form as a subsidiary of the military industrial complex.
In other words, US taxpayers are already paying customers of OpenAI; a few will simply be "double" customers. This isn't "exactly" fascism, no. It's something, though.
There are communities that gobble up anything Microsoft produces. People in the Microsoft MVP program are usually in this camp, if you want to find examples. My coder friends and I were part of the fandom, but judging by my biased N=10 sample, this fanbase is evaporating quickly (though I still know some hardcore "Azure thumpers").
Not just people like that. I'm always searching for better ways to do things and dive into things deeper. Including Windows and Copilot. So having spaces for that can be helpful. Most public forums are unfortunately just complaint departments. Nobody wants to solve anything, they just want to complain with some projection of David and Goliath. It's really annoying. I want to find more positive spaces but for a lot of tech it's just negative all the way down. Maybe I'm just crazy for enjoying tech still and not being committed to an OS religion.