I genuinely don't think it's hard to compete. I use AI sometimes, though it isn't my primary way of working, and I find myself at least as productive as people who primarily use AI.
I personally tire of people acting like it's some saving grace that doubles/triples/100x's your productivity, rather than a tool that may give you a 10-20% uplift just like any other tool.
Calling LLMs trivial is a new one. Yeah, just consume all of the information on the internet and encode it into a statistical model: trivial, a child could do it. /s
As an Australian, I always find it funny going places and having to remember which dance-around word everyone uses for "toilet". Washroom, restroom, bathroom, there are so many!
“Toilette” is still used that way in normal everyday French. “Je fais ma toilette” - I’m washing up/getting ready/getting dressed/doing my morning hygiene routine/etc.
The terms that are easy to remember and will work nearly anywhere without giving offence are "loo" in a residential property, or "gents/ladies" in a non-residential property.
Soviet apartments had separate rooms for the toilet and for the bath/shower/sink. The wall behind the toilet was usually hinged and could be opened to reveal the entry point for utilities.
I assume toilet hands were an unspoken issue, because there was no possible way to traverse from the toilet room to the washroom without touching anything.
For a complete tangent, I’ll mention that Soviet toilets had a “poop shelf” so that people could eyeball their stool to gauge their health. One flaw of this design is that there was no odour suppression offered by toilets that immediately immerse stool in water.
Having played with Dafny only in a university course, I really enjoyed it as a way of implementing algorithms and being certain they work, with much less cruft than unit tests.
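For a flavour of what that looks like, here's a minimal sketch of my own (a toy example, not from the course): the requires/ensures clauses and the loop invariants are checked statically by the verifier, so there's no test harness at all.

    // Toy example: find the maximum element of a non-empty array.
    // Dafny checks the postconditions and loop invariants at verification time.
    method MaxElement(a: array<int>) returns (m: int)
      requires a.Length > 0
      ensures forall i :: 0 <= i < a.Length ==> a[i] <= m
      ensures exists i :: 0 <= i < a.Length && a[i] == m
    {
      m := a[0];
      var k := 1;
      while k < a.Length
        invariant 1 <= k <= a.Length
        invariant forall i :: 0 <= i < k ==> a[i] <= m
        invariant exists i :: 0 <= i < k && a[i] == m
      {
        if a[k] > m { m := a[k]; }
        k := k + 1;
      }
    }

If an invariant or postcondition can't be proved, the program simply doesn't verify; there's no separate assertion-and-fixture step the way there is with unit tests.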
I haven't gone looking, but verifier tools compatible with languages people already use (TypeScript/Rust/Go/whatever is the flavour of the month) feel like the way to go.
The languages people already use are inconsistent (in a logic-theory sense) and lack formal semantics. Efforts to prove anything useful about programs written in those languages don't get far: the first fact forces you to restrict everything to a subset of the language, which is sometimes an unworkable subset, and the second fact is a massive hill to climb, at whose peak sits a note saying, "we have changed the implementation, this hill is no longer relevant."
The only* practical way to approach this is exactly what Dafny does: start with a (probably) consistent core language with well-understood semantics, build a surface language on top with syntax features that make it more pleasant to use, prove that the surface language translates to the core language in a way that keeps the semantics clear, and then generate code in the desired target language after verification has succeeded.
Dafny's about the best of the bunch for this too, for the set of target languages it supports.
(It's fine and normal for pragmatic languages to be inconsistent: all that means here is that you can prove false is true, which means you can prove anything and everything trivially. But it also means you can tell the damn compiler to accept code you're sure is right even though it tells you it isn't; in practice that looks like type casts, "unsafe", and the like.)
* one could alternatively put time and effort into making consistent and semantically clear languages compile efficiently, such that they're worth using directly.
Dafny is great, and has some advantages compared to its competitors, but unequivocally calling it "the best" is quite bullish. For example, languages using dependent types (F*, ATS, Coq/Agda/Lean) are more expressive. And there are very mature systems using HOL.
Truth is that everything involves a tradeoff, and some systems are better than others at different things. Dafny explores a particular design space. Hoare-style invariants are easier to use than dependent types (as long as your SMT solver is happy, anyway), and F* has those too, except that in F* you can also fall back to dependent types when automatic refinement proofs become inadequate. And F* and ATS can target low-level code, more so than Dafny.
Probably I would not use ATS for anything, but between F* and Dafny there isn't such a clear-cut choice (I'd most likely use F*).
And if I didn't need (relatively) low-level code, I wouldn't use either.
Can't I just work on a subset of the language? I don't care if the linked list implementation I use isn't verified. I want to verify my own code, not because it will run on Mars, but as an alternative to writing tests. Is this possible?
Yes. See SPARK for an example of this. It is a subset of Ada plus SPARK-specific annotations to convey the specifications and proofs. You can use it per file, not just on an entire program, which lets you prove properties of critical parts and leave the rest to conventional testing approaches.
I agree, using tools more in line with your language is better, but I believe the knowledge from learning Dafny ought to transfer well to other systems. And Dafny seems better as a pedagogical system than what I've seen and used for other languages.
I'm exploring it now as a way to ease colleagues into SPARK. A lot of the material appears to transfer over, and the book Program Proofs seems better to me than what I found for SPARK. I probably wouldn't have colleagues work through the book themselves so much as run a series of tutorials. We've done this often in the past when trying to bring everyone up to speed on some new skillset or tooling: if someone already knows it (or has the initiative to learn ahead of time), they run tutorial sessions every week or so for the team.
The only technique I can see ever taking off in languages that people already use is design by contract, and it has been a hard ride even making that available in some of them.
Let alone something more complex in formal verification.
Exactly. If you are doing copilot-driven dev, then you are in a feedback loop with the agent, and it's easier to have multiplexed ongoing "conversations" with the agent across discrete checkouts of the same codebase than it is to ask an agent to make a change, commit, check out another branch, wait for the context to update/become aware, ask for a change, agree to it, commit, then switch back to another context.
It's the kind of thing git worktrees were made for.
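For anyone who hasn't tried them, the flow is roughly this (directory and branch names are made up for illustration):

    # one checkout per ongoing agent "conversation"
    git worktree add ../myproj-feature-a feature-a
    git worktree add ../myproj-bugfix-b bugfix-b

    # each directory has its own working tree and HEAD but shares the same
    # repository, so one agent can keep going while you commit in another
    git worktree list
    git worktree remove ../myproj-feature-a   # once that branch is done

No stashing and no waiting for a context to re-sync after a branch switch; each conversation just lives in its own directory.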
Like, I get it, but English is fluid, and is it really that much of a leap from "git" to "github/lab/service"? It seems pretty clear what they meant (even if it's not completely/technically/um-actually correct, because git != github).
Yay, it seems I don't have to wear the dunce hat. But also, honestly, I'm not even trying to be pedantic about what counts as "git" here. I'm more annoyed that the contribution calendar is considered inherently "git". I doubt GitHub could even patent that; it's more of a habit-tracker thing.
(Okay there's also `git` in the URL, I noticed, which means whoever made the page had that mental mapping in mind but still...)
I just feel clickbaited that this item is at the top of this forum. I'll stop engaging with this now. Nothing of interest for me here.
to be fair, git != github is an important distinction that many fail to grasp. we had a manager who conflated the two; meetings were peppered with things like "do a github pull", "will he be able to log in to the repository on his new machine", "i can't find any good tutorials on setting up a git wiki", etc.