It depends how you actually use the messages. Zero-copy can actually slow things down: copying within L1 cache is ~free, but operating on needlessly dynamic or suboptimal data structures adds overhead everywhere they're used.
To actually literally avoid any copying, you'd have to directly use the messages in their on-the-wire format as your in-memory data representation. If you have to read them many times, the extra cost of dynamic getters can add up (the format may cost you extra pointer chasing, unnecessary dynamic offsets, redundant validation checks and conditional fallbacks for defaults, even if the wire format is relatively static and uncompressed). It can also be limiting, especially if you need to mutate variable-length data (it's easy to serialize when only appending).
In practice, you'll probably copy data once from your preferred in-memory data structures to the messages when constructing them. When you need to read messages multiple times at the receiving end, or merge with some other data, you'll probably copy them into dedicated native data structs too.
If you change the problem from zero-copy to one-copy, it opens up many other possibilities for optimization of (de)serialization, and doesn't keep your program tightly coupled to the serialization framework.
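To make the one-copy idea concrete, here's a toy sketch (the wire layout and names are made up for illustration, not any real framework's format): decode the message once into an owned struct, paying for bounds checks and UTF-8 validation a single time, then read the fields as often as you like with no further validation.

```rust
// Hypothetical wire layout: [len: u8][name: len UTF-8 bytes][age: u8].
// Reading through the wire format on every access re-does bounds checks
// and validation; copying once into an owned struct pays that cost once.

#[derive(Debug, PartialEq)]
struct Person {
    name: String, // owned copy: one allocation at decode time
    age: u8,
}

fn decode(buf: &[u8]) -> Option<Person> {
    let len = *buf.first()? as usize;
    let name_bytes = buf.get(1..1 + len)?;
    let age = *buf.get(1 + len)?;
    Some(Person {
        name: std::str::from_utf8(name_bytes).ok()?.to_owned(),
        age,
    })
}

fn main() {
    let wire = [3, b'B', b'o', b'b', 42];
    let p = decode(&wire).expect("valid message");
    // After the one copy, field access is direct, with no re-validation
    // and no coupling to the serialization framework's accessor API.
    assert_eq!(p.name, "Bob");
    assert_eq!(p.age, 42);
}
```

The owned struct is also freely mutable (including variable-length fields like `name`), which is exactly what the in-place wire representation makes awkward.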
There's no particular reason for an image format based on video codec keyframes to ever support a lot of the advanced features that JPEG XL supports. It might compress better than AVIF 1, but I doubt it would resolve the other issues.
Cargo's cache is ridiculously massive (half of which is debug info: zero-cost abstractions have full-cost debug metadata), but you can delete it after building.
There's a new-ish `build.build-dir` setting that lets you redirect Cargo's temp junk to a standard system temp/cache directory instead of polluting your dev dir.
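For reference, a minimal sketch of what that could look like in `.cargo/config.toml` (assuming a Cargo version where `build.build-dir` is stabilized; check the Cargo book for the exact template variables supported):

```toml
# .cargo/config.toml — redirect intermediate build artifacts out of the
# project directory. `{workspace-path-hash}` keeps separate projects
# from colliding in the shared location.
[build]
build-dir = "{cargo-cache-home}/build/{workspace-path-hash}"
```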
> There's new-ish build.build-dir setting that lets you redirect Cargo's temp junk to a standard system temp/cache directory instead of polluting your dev dir.
If it’s just logs, I would prefer to redirect it to /dev/null.
The situation today is very different from what it was when people actually used a 386 or an Amiga because they had no other options (BTW, Rust supports m68k, just not AmigaOS specifically).
Today even the crappiest old PCs that you can fish out of a dumpster are new enough to have Rust/LLVM support. We have mountains of Rust-compatible e-waste that you can save from landfill. Take whatever is cheapest on eBay, or given away on your local FB marketplace, and it will run Rust, and almost certainly be orders of magnitude faster and more practical than the unsupported retro hardware.
Using actual too-niche-for-Rust hardware today is more expensive. Such machines are often collectors' items, and need components and accessories that are hard to obtain, or need replacements/adapters that can be custom low-volume products.
Even if you can put together something from old-but-not-museum-yet parts, it's not going to make more sense economically than getting an older-gen Raspberry Pi kit or its AliExpress knock-offs (there are VGA dongles more expensive than some of these boards).
It's fine to appreciate SGI and DEC Alpha, have fun using BeOS, or prove that AmigaOS is still a perfectly fine daily driver, but let's not pretend it's a situation that people are in due to economic hardship.
> but let's not pretend it's a situation that people are in due to economic hardship.
I'd encourage you not to strawman my response, because I already said myself that it appears to me it's only hobbyists who are losing support.
My objection isn't to the argument that it's dropping support; my objection is that it's dropping support without cause, other than the assumption that this would be more comfortable for them.
Maintainers are absolutely not required to support everything forever, but I recall a story where someone from Linux paid for a user to upgrade, not because that was required, but because that would make dropping support for that floppy driver feel ethical.
This is the level of compassion everyone should expect from software engineers in critical positions of power.
I have no sympathy for people who lack the compassion to expend the effort to help others. I do have sympathy for people who have to watch their world get worse, even if it's theirs alone, so that others can avoid a trivial amount of perceived discomfort.
Should this solo maintainer (who understands C) be required to do things exactly the way that I want? Of course not, but I'll be damned if everyone expects me to remain silent while I watch them disrespect other people who were previously depending on their support.
By implying the switch to Rust was "without cause", and bringing up the concerns of floppy users and retro-hobby hardware, you seem to be seeing the change only from the very narrow perspective of the interests of one very specific group of users.
There are lots of other users, and lots of other ways to care about them. Making software less likely to have vulnerabilities is caring about its users too. Making software work better and faster on contemporary hardware is caring about users too, just a different group (and a way larger one, and including users who really can't afford faster hardware).
Sometimes it's just not possible to make everyone happy, and even just keeping the status quo is not always a free option. Hypothetically, keeping working support for some weird floppy drive may be increasing overall system complexity, and cost dev and testing effort that might have been spent on something else that benefitted a larger number of users more.
Switching to a language with a friendlier compiler, fewer gotchas, less legacy cruft, and less picky dependency management can also be a way of caring about users - lowering the barriers to contributing productively can help get more contributions, fewer bugs, improve the software overall, and empower more users to modify their tools.
It'd be fine to argue about which trade-offs are better, and which groups of users should be prioritized, but it's disingenuous to frame not accommodating the retro/hobby usecases in particular as a sign of a lack of compassion in general. It could be quite the opposite - focusing only on the status quo and past problems shows a lack of care about all the other users and the future of the software.
That's just your lack of familiarity with the foreign-to-you language (you may be unable to read Korean too, despite Korean being pretty readable).
Syntactically, Rust is pretty unambiguous, especially compared to C-style function and variable definitions. You get fn and let keywords, and definitions that are generally read left-to-right, instead of starting with an arbitrary identifier that may be a typedef, a preprocessor macro, or part of a type that is read in so-called "spiral" order (which isn't even a spiral, but more complex than that).
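A toy illustration of the left-to-right reading (my own example, not from any particular codebase): the C declaration `int (*adder(int))(int)` — a function returning a function pointer — has to be unravelled inside-out, while the Rust equivalent reads straight through.

```rust
// Reads left to right: `adder` is a function taking an i32 and
// returning something callable with an i32 that yields an i32.
fn adder(x: i32) -> impl Fn(i32) -> i32 {
    move |y| x + y
}

fn main() {
    // `let` always starts a binding; `fn` always starts a function,
    // so a parser (or a human) never has to guess from an identifier.
    let add5 = adder(5);
    assert_eq!(add5(2), 7);

    // Even gnarlier types stay left-to-right: a vector of function
    // pointers from i32 to i32.
    let ops: Vec<fn(i32) -> i32> = vec![|x| x + 1, |x| x * 2];
    assert_eq!(ops[1](10), 20);
}
```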
Cargo isn't satisfied with its own solver either. Solvers are a hard and messy problem.
The problem is NP-complete in theory (version resolution can encode SAT), but in practice even harder than that: users also care about picking solutions that optimize for multiple criteria like minimal changes, more recent versions, and minimal duplication (if multiple versions can coexist), all while producing easy-to-understand errors when dependencies can't be satisfied, and running fast despite the worst-case exponential search. It ends up being complex and full of compromises.
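As a toy illustration (single shared dependency, made-up version numbers), even the most stripped-down form of the problem is "find one version satisfying every requirement at once" - and a real solver faces that across an exponential space of combinations of many packages:

```rust
// Toy resolution: several dependents each accept some versions of one
// shared crate, and only one version may be picked. Brute force works
// at this scale, but the general multi-package problem is NP-hard.
fn solve(requirements: &[&[u32]]) -> Option<u32> {
    // Prefer the most recent candidate that satisfies everyone -
    // one of the "multiple criteria" mentioned above.
    (1..=10).rev().find(|v| requirements.iter().all(|req| req.contains(v)))
}

fn main() {
    // app -> a (accepts shared 1 or 2), app -> b (accepts shared 2 or 3)
    assert_eq!(solve(&[&[1, 2], &[2, 3]]), Some(2));
    // No overlap: unsatisfiable - and now the solver also has to
    // explain *why* in a way a human can act on.
    assert_eq!(solve(&[&[1], &[2]]), None);
}
```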
I'm happy to burden EU companies with responsibilities like securing storage of my private data, having processes to update and delete my data, having to consider whether data collection can be minimized, and getting my consent if they want to repurpose or sell the data they've collected.
It would be much cheaper and pro-business to let them collect everything and secure nothing.
unwrap() is only the most superficial part of the problem. Merely replacing `unwrap()` with `return Err(code)` wouldn't have changed the behavior. Instead of "error 500 due to panic" the proxy would fail with "error 500 due to $code".
Unwrap gives you a stack trace, while a returned Err doesn't, so simply using a Result for that line of code could have made the failure even harder to diagnose.
`unwrap_or_default()` or other ways of silently eating the error would be less catastrophic immediately, but could still end up breaking the system down the line, and likely make it harder to trace the problem to the root cause.
The problem is deeper than an unwrap(), related to handling rollouts of invalid configurations, but that's not a 1-line change.
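A sketch of what that deeper fix could look like (all names, types, and limits here are made up for illustration, not the actual code in question): validate the rolled-out config, and on failure reject it loudly while continuing to serve with the last known-good one.

```rust
// Handling a bad config rollout without taking the service down:
// surface the error, keep the previous known-good configuration.
#[derive(Clone, Debug, PartialEq)]
struct Config {
    max_features: usize,
}

fn parse_config(raw: &str) -> Result<Config, String> {
    let max_features: usize = raw
        .trim()
        .parse()
        .map_err(|e| format!("invalid config: {e}"))?;
    if max_features > 200 {
        return Err(format!("feature count {max_features} exceeds limit 200"));
    }
    Ok(Config { max_features })
}

fn apply_rollout(current: &Config, raw: &str) -> Config {
    match parse_config(raw) {
        Ok(new) => new,
        Err(e) => {
            // Surface the failure instead of unwrap()ing it away...
            eprintln!("config rollout rejected: {e}");
            // ...and keep the last known-good config running.
            current.clone()
        }
    }
}

fn main() {
    let good = Config { max_features: 60 };
    // A bad rollout leaves the old config in place instead of panicking.
    assert_eq!(apply_rollout(&good, "9999"), good);
    assert_eq!(apply_rollout(&good, "120").max_features, 120);
}
```

The point is that "invalid rollout" becomes a handled state of the system with its own behavior, not a panic, a returned 500, or a silently swallowed default.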
We don't know what the surrounding code looks like, but I'd expect it handles the error case that's expressed in the type signature (unless they `.unwrap()` there too).
The problem is that they didn't surface a failure case, which means they couldn't handle rollouts of invalid configurations correctly.
The use of `.unwrap()` isn't superficial at all -- it hid an invariant that should have been handled above this code. The failure to correctly account for and handle those true invariants is exactly what caused this failure mode.