"Consider using threads" is only safe advice if the person doing the considering knows how to deal with the (usually unwanted) non-determinism threads introduce.
Only a small fraction do, but threads look so simple on the surface that the rest don't realise they are walking into a minefield.
Is your assertion that it takes more time for a CPU to read values out of a 30 byte struct and do a couple shifts and branches than to parse a JSON representation?
I think that goomba bumped into the goomba Mario had just squished. Mario was just a bit to the left, so the flat goomba's hitbox stuck out to the right a bit and the other goomba hit it.
Yeah, it doesn't really land well as a joke because it's the very first thing. Joking hot takes at least need a bit of warm-up so readers know what's going on, whereas this just opens with a no-context dis at women where it's not even clear what the joking read is supposed to be about. What hard work? What's happening?
I placed it in the README after a long night of working on this project. The joke was directed at my partner, who had giggled with me about the absurdity of writing a DSL to template my website.
That said, I do agree that it doesn't land well out of context for a general audience, and for that reason it will be removed.
I was familiar with that phrase and its shorthand ("GLHF") but the latter half of the sentence ("for interacting with GPT models") confused the punchline enough that the joke just didn't land, because the context is one of using RL to "interact with GPT" (relevant to this article) but a more appropriate context would have been regular ole RL using agents in a simulated environment, like - I don't know, a video game?
Mold doesn't solve any problems for small- or medium-sized projects... there is little/no advantage.
That's what async CI/CD unit and integration testing are for.
Also, it depends on the platform. Go doesn't have this problem. Rust does, to a degree. Interpreted languages make a linker moot.
The primary use case for mold is giant projects with massive executables. It's not a general-purpose linker, it can't improve inefficient workflows lacking automation, and it can't improve the multitasking of developer time for people who insist on waiting around instead of doing something else useful.
Might be fun to use diffusion to make adversarial/false answers to the pixelated scenes, depending on how pixelated they are. If they're quite pixelated you could probably come up with some crazy alternatives.
It's not fair to say that Rust is 'designed around immutable variables'. Rust has move semantics, but move semantics just mean that if you have a variable 'a' and you assign it to variable 'b', the variable 'a' is now dead and can no longer be referenced. This exists at the lexical level; it's a no-op in generated code.
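A minimal sketch of that in practice, using `String` as a representative non-Copy type:

```rust
fn main() {
    let a = String::from("hello"); // String is not Copy, so assignment moves
    let b = a;                     // `a` is moved into `b`; `a` is now dead
    // println!("{}", a);          // compile error: use of moved value `a`
    assert_eq!(b, "hello");        // only `b` is usable from here on
}
```

No data is copied or freed at the assignment itself; the compiler simply stops you from naming `a` afterwards.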
Is that maybe what you were referring to?
For certain scalar types, flexibility is increased by allowing 'copy' semantics where assigning 'a' into 'b' makes 'b' a copy of 'a', and both are alive. Then it ends up mattering how heavy the type is - although you can only implement 'copy' for things you can trivially memcpy, so nothing on the heap.
Generally anything that would be expensive to duplicate doesn't get 'copy' semantics, but instead requires you to move it into a new variable, or explicitly clone it.
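To illustrate the distinction, a small sketch contrasting a trivially copyable scalar with a heap-owning type that must be cloned explicitly:

```rust
fn main() {
    let a: u32 = 42;  // u32 is Copy: trivially memcpy-able
    let b = a;        // assignment copies; both `a` and `b` stay alive
    assert_eq!(a + b, 84);

    let s = String::from("heap data"); // owns heap memory, so not Copy
    let t = s.clone();                 // duplication must be explicit (and may be expensive)
    assert_eq!(s, t);                  // both alive only because we cloned
}
```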
Rust also has immutable-by-default semantics, but that's only a default. You can mutate the contents of structs, but there can be either one mutable reference or any number of read-only references, never both at once; aliasing a mutable reference is not permitted. This forms the basis for many of the safety guarantees.
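A short sketch of that borrow rule, many shared references or exactly one mutable reference:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    {
        let r1 = &v;
        let r2 = &v; // any number of shared references is fine
        assert_eq!(r1.len() + r2.len(), 6);
    } // shared borrows end here

    let m = &mut v; // now a single mutable reference is allowed
    m.push(4);
    // let r3 = &v; // compile error if `m` were still live here
    assert_eq!(v.len(), 4);
}
```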
Did that help? I was guessing at what you meant, so if that wasn't it I can always try again.
[edit] within the context of microcontrollers Rust requires you to be very explicit about what is and isn't permitted, and how things should work. You can disallow in your construction pretty much anything expensive or non-trivial.
So really, anything on an AVR8 that isn't either an 8- or 16-bit int, unsigned or signed, is going to be a complete and utter monster to deal with.
Natively the CPU deals with 8-bit values. Obviously that's a little cramped, so you can just about get away with using a few more instructions to do 16-bit. If you absolutely must, 32-bit ints aren't horrible to cope with, but then you start to get into a lot of unnecessary code when you want to change size.
Even a very high level language like C is a bad idea on something so constrained, because C assumes that everything is a massive approximately VAX-like architecture with mappable memory all over the place, and limitless amounts of it, possibly as much as one or two megabytes.
> Even a very high level language like C is a bad idea on something so constrained, because C assumes that everything is a massive approximately VAX-like architecture with mappable memory all over the place, and limitless amounts of it, possibly as much as one or two megabytes.
That's not really true for the AVR family; the instruction set and general architecture was designed with C in mind. Unlike say a PIC microcontroller, the AVR family has a hardware stack pointer (SPH/SPL) and a large number of 8-bit registers which can also be referenced in 16-bit pairs for the (albeit limited) set of instructions which support it.
C makes some assumptions (for instance, the existence of a stack pointer), but the AVR designers kept that stuff in mind. Pretty sure AVR C actually uses an ILP16 data model, not an 8-bit model as you may be expecting.
C doesn't make assumptions about the size of an address space, although you can when specifying the data model for your architecture.
The only thing that's slightly less than clean programming AVRs using C is that they're Harvard architecture instead of Von Neumann so you have to access program memory via special instructions (lpm/elpm/spm). That's wrapped with a __attribute__((progmem)) specifier in AVR GCC so the compiler knows it uses a different address space.
> So really anything on an AVR8 that isn't either an 8- or 16-bit int, unsigned or signed, is going to be complete and utter monster to deal with.
I don’t see why this is a problem. Both C and Rust give you 8-bit and 16-bit types to work with. It’s true that you may sometimes need assembly to eke out the last drops of performance on such small chips, but equally sometimes you don’t, and C/C++/Rust are excellent tools for the job.
You can use wider types (maybe even floats, I can't recall); they'll get lowered to the target architecture, and the generated code will be in terms of narrower registers.
A 32-bit add will get turned into one 8-bit add and three 8-bit add-with-carry instructions. You won't even notice unless, to your point, you see a performance issue or start running out of code space.
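For illustration, here's a rough sketch (a hypothetical helper, not actual compiler output) of how a 32-bit add decomposes into byte-wide adds with carry propagation, which is essentially what the compiler emits on an 8-bit target like AVR8:

```rust
// Build a 32-bit add out of four 8-bit limb additions, carrying
// between limbs the way an add/adc instruction chain would.
fn add32_via_8bit_limbs(a: u32, b: u32) -> u32 {
    let (la, lb) = (a.to_le_bytes(), b.to_le_bytes());
    let mut result = [0u8; 4];
    let mut carry = 0u8;
    for i in 0..4 {
        let sum = la[i] as u16 + lb[i] as u16 + carry as u16;
        result[i] = sum as u8;     // low byte of this limb
        carry = (sum >> 8) as u8;  // carry into the next limb
    }
    u32::from_le_bytes(result)     // carry out of the top limb is discarded (wrapping)
}

fn main() {
    assert_eq!(add32_via_8bit_limbs(0x00FF_FFFF, 1), 0x0100_0000); // carry ripples through 3 limbs
    assert_eq!(add32_via_8bit_limbs(u32::MAX, 1), 0);              // wraps like `wrapping_add`
}
```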
With all due respect, basically everything. What you describe sounds like Haskell laziness, maybe that's what you were thinking of? Or the memory model of e.g. Ocaml and Lisp, but those are then optimized at compile-time.
Rust enforces many guarantees w.r.t. memory access & sharing at compile time, but at codegen time, it's basically as vanilla and "boring" as C++.
Pretty much all of it. Rust has immutable variables but also has mutable ones, and is not "designed around immutable variables". Additionally, immutable variables don't generally need to be "copied all over the place". Rust is not particularly memory inefficient compared with C++.
Not before losing his career and having to flee his country, probably because his reputation was dragged through the mud. News of the acquittal doesn't travel nearly as far as the initial arrest - see sibling comment below.