It's usually the case that the more strident someone is in a blog post decrying innovation, the more wrong he is. The current article is no exception.
It's possible to define your own string_view workalike that has a c_str() and binds to whatever is string-like and has a c_str(). It's a few hundred lines of code. You don't have to live with the double indirection.
I think the article is trying to address the question from the title.
Defining your own string_view workalike is probably possible, but I'm not sure it's OK to use in a public API, for example. Choosing const string & may be more suitable.
Am I the only one who doesn't like PEGs and prefers EBNF-style parser generators? The order-dependence of PEG alternatives and the lack of ambiguity detection are footguns, IMHO.
No, it doesn't go places we "do not want it to go". What part of zero knowledge doesn't make sense? How precisely does a free, unlinkable, multi-vendor, open-source cryptographic attestation of recent humanity create something terrible?
It would behoove people to engage with the substance of attestation proposals. It's lazy to state that any verification scheme whatsoever is equivalent to a panopticon, dystopia as thought-terminating cliche.
We really do have the technology now to attest biographical details in such a way that whoever attests to a fact about you can't learn the use to which you put that attestation and in such a way that the person who verifies your attestation can see it's genuine without learning anything about you except that one bit of information you disclose.
And no, such a ZK scheme does not turn instantly into some megacorp extracting monopoly rents from some kind of internet participation toll booth. Why would this outcome be inevitable? We have plenty of examples of fair and open ecosystems. It's just lazy to assert right out of the gate that any attestation scheme is going to be captured.
So, please, can we stop casting every scheme whatsoever for verifying facts about actors as the East German villain in a cold war movie? We're talking about something totally different.
The ZK part isn't the problem. The "attestation of recent humanity" part is. Who attests? What happens when someone can't get attested?
You've been to the doctor recently, right? Given them your SSN? Every identity system ever built was going to be scoped or voluntary. None of them stayed that way.
Once you have the identity mechanism, it's "Oh, it's zero knowledge! So let's use it for your age! Have you ever been convicted?", which leads to "mandated by employers", which leads to...
We've seen this goddamn movie before. Let's just skip it this time? Please?
The part where FAANG does the usual Embrace, Extend, Extinguish, the masses don't care or understand, and we get yet another "sign in with..." that isn't open source or zero-knowledge in practice and monetizes your every move. And probably at least one of the vendors has a massive leak that reveals a half-assed, or even deliberately flawed, implementation.
That was ballsy! But, sadly, it was a temporary hack. Both Voyagers have degrading, unfixable thrusters. The rubber diaphragms in the hydrazine fuel tanks are degrading, shedding silicon dioxide (i.e. sand) microparticles into the thruster fuel. These particles are gradually clogging the thruster nozzles and reducing their thrust. Eventually, thrust will decline to the point that they could fire the thrusters all day long and still not impart enough momentum to point the probes at Earth. Once that happens, we'll lose contact with the probes.
They'd switched away from the primary thrusters in 2004 due to this degradation. Now the backups are so degraded that the primary thrusters are better again in comparison.
Thruster clogging will kill the Voyagers in about five years if nothing else gets them first. The least degraded thruster nozzles are down to 2% of their diameter --- 0.035mm of free-flow area remaining.
The Voyagers will probably celebrate their 50th anniversary, but not much beyond that. :-(
Kind of ignominious to be done in not by the inexorable decline of radioactivity but by an everyday materials science error of the sort we make on earth all the time. In the 1970s, we knew how to make hydrazine-compatible rubber. We just didn't use it for the Voyagers.
But why? You can do everything contracts do in your own code, yes? Why make it a language feature? I'm not against growing the language, but I don't see the necessity of this specific feature having new syntax.
Pre- and postconditions are actually part of the function signature, i.e. they are visible to the caller. For example, static analyzers could detect contract violations just by looking at the callsite, without needing access to the actual function implementation. The pre- and postconditions can also be shown in IDE tooltips. You can't do this with your own contracts implementation.
Finally, it certainly helps to have a standardized mechanism instead of everyone rolling their own, especially when multiple libraries are involved.
In terms of contracts in a function signature: you might be passing a pointer so that the function can write to the provided address. Input/output isn't about calling convention (there's fastcall for that); it specifies the intent of the function. Otherwise every single parameter would be an input, because the function takes it and uses it...
I worked on a massive codebase where we used Microsoft SAL to annotate all parameters to specify intent. The compiler could throw errors based on these annotations to indicate misuse.
A nitpick to your nitpick: they said "memory location". And yes, a pointer always points to a memory location. Notwithstanding that each particular region of memory locations could be mapped either to real physical memory or any other assortment of hardware.
I don't agree. Null is an artefact of the type system and the type system evaporates at runtime. Even C's NULL macro just expands to zero which is defined in the type system as the null pointer.
Address zero exists in the CPU, but that's not the null pointer, that's an embarrassment if you happen to need to talk about address zero in a language where that has the same spelling as a null pointer because you can't say what you meant.
NULL doesn't expand to zero on some weird systems. These days address zero is special on most hardware, so having zero and nullptr be the same is important, even though on some of them address zero is also legal.
Historically C's null pointer literal, provided as the pre-processor constant NULL, is the integer literal 0 (optionally cast to a void pointer in newer standards) even though the hardware representation may not be the zero address.
It's OK that you didn't know this if you mostly write C++, and somewhat OK even if you mostly write C but stick to pre-defined things like that NULL constant. But if you write important tools in or for C, this was a pretty important gap in your understanding.
In C23 the committee gave C the C++ nullptr constant and the associated nullptr_t type, and basically rewrote history to make this entire mess, in reality the fault of C++, now "because it's for compatibility with C". This is a pretty routine outcome; you can see that WG14 members who are sick of it tend to just walk away from the committee, because fighting it is largely futile and they could instead retire and write C89 or even K&R C without thinking about Bjarne at all.
Contracts are about specifying static properties of the system, not dynamic properties. Features like assert /check/ (if enabled) static properties at runtime. static_assert comes closer, but it's still an awkward way of expressing Hoare triples, and the main property I'm looking for is the ability to easily extract and consider Hoare triples with build-time tooling. There are hacky ways to do this today, but they're not unique hacky ways, so they don't compose across different tools and across code written to different hacks.
The common argument for a language feature is for standardization of how you express invariants and pre/post conditions so that tools (mostly static tooling and optimizers) can be designed around them.
But like modules and concepts the committee has opted for staggered implementation. What we have now is effectively syntax sugar over what could already be done with asserts, well designed types and exceptions.
We've known for years that we can tunnel IP over DNS [1]. We know, of course, that we can load or play DOOM over IP. Suddenly, combining the two things we already know how to do is supposed to garner attention and plaudits?
Also it's sort of amazing how few people modify their tools and remove the objectionable bits.
For example, Java. Checked exceptions. Everyone hates checked exceptions. They're totally optional. Nothing in the JVM talks about a checked exception. Patch javac, comment out the checked exception checker, and compile. Nothing goes wrong. You can write Java and not deal with checked exceptions.
Likewise, you can modify rustc and make it not enforce the orphan rule.
Too many people treat their tools as black boxes and their warts as things they must tolerate and not things they can fix with their own two hands without anybody's permission.
Does the author expect incumbents to relax language rules that grant exorbitant privilege to incumbents? The orphan rule is up there with error handling among the ways Rust is a screwed-up language that appeals to people who don't know what they're missing. Other systems languages aren't weak in these ways.
Could you elaborate on the error handling part? To me, Rust is the only sane language I've worked with that has error-like propagation, in that functions must explicitly state what they can return, so that you don't get some bizarre runtime error thrown because the data was invalid 15 layers deeper.
I don't know what quotemstr was specifically talking about, but here's my own take.
The ideal error handling is inferred algebraic effects like in Koka[1]. This allows you to add a call to an error-throwing function 15 layers down the stack and it's automatically propagated into the type signatures of all functions up the stack (and you can see the inferred effects with a language server or other tooling, similar to Rust's inferred types).
Now, how do you define E4, E5 and E6? The "correct" way is to use sum types, i.e., `enum E4 {E1(E1), E2(E2)}`, `enum E5 {E1(E1), E3(E3)}` and `enum E6 {E1(E1), E2(E2), E3(E3)}` with the appropriate From traits. The problem is that this involves a ton of boilerplate even with thiserror handling some stuff like the From traits.
Since this is such a massive pain, Rust programs tend to instead either define a single error enum type that has all possible errors in the crate, or just use opaque errors like the anyhow crate. The downside is that these approaches lose type information: you no longer know that a function can't return some specific error (unless it returns no errors at all, which is rare), which is ultimately not so different from those languages where you have to guard against bizarre runtime errors.
Worse yet, if f1 has to be changed such that it returns 2 new errors, then you need to go through all error types in the call stack and flatten the new errors manually into E4, E5 and E6. If you don't flatten errors, then you end up rebuilding the call stack in error types, which is a whole different can of worms.
Algebraic effects just handle all of this more conveniently. That said, an effect system like Koka's isn't viable in a systems programming language like Rust, because optimizing user-defined effects is difficult. But you could have a special compiler-blessed effect for exceptions; algebraic checked exceptions, so to speak. Rust already does this with async.