
> Forget the bells-and-whistles and answer this: does the use of Rust generate a result that is more performant and more efficient than the best result using C?

These are the performance results for an NVMe driver written in Rust: https://rust-for-linux.com/nvme-driver

It's absolutely on par with C code.


That was way way blown out of proportion. https://lwn.net/Articles/991062/ has a much less somber quote from just three months later:

> Ted Ts'o said that the Rust developers have been trying to avoid scaring kernel maintainers, and have been saying that "all you need is to learn a little Rust". But a little Rust is not enough to understand filesystem abstractions, which have to deal with that subsystem's complex locking rules. There is a need for documentation and tutorials on how to write filesystem code in idiomatic Rust. He said that he has a lot to learn; he is willing to do that, but needs help on what to learn.


You can absolutely write drivers with zero unsafe Rust. The bridge from Rust to C is where unsafe code lies.

And hardware access. You absolutely can't write a hardware driver without unsafe.

There are devices that do not have access to memory (i.e. no DMA), and you can write a safe description of such a device's registers. The only thing that is inherently unsafe is building DMA descriptors.
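
As an illustration, here is a minimal sketch in Rust (hypothetical names, not the actual rust-for-linux abstractions): the single unsafe volatile access sits at the bridge layer, and the register description built on top of it is entirely safe code.

    use core::ptr;

    struct Regs {
        // MMIO mapping handed over by the (unsafe) Rust-to-C bridge.
        base: *mut u32,
    }

    impl Regs {
        fn read(&self, word: usize) -> u32 {
            // SAFETY: the bridge guarantees `base` is a live mapping
            // large enough to contain `word`.
            unsafe { ptr::read_volatile(self.base.add(word)) }
        }

        // Safe, purely descriptive accessors for the device's registers.
        fn status(&self) -> u32 { self.read(0) }
        fn irq_mask(&self) -> u32 { self.read(1) }
    }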

Converting to RPN is, roughly speaking, the easiest way to generate code for either register or stack architectures.

Once you have a parse tree, visiting it in post order (left tree, right tree, operation) produces the RPN.
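
A minimal sketch of that post-order walk in Rust (the tree type and names are made up for illustration):

    enum Expr {
        Num(i64),
        Op(char, Box<Expr>, Box<Expr>),
    }

    // Post-order visit: left subtree, right subtree, then the operation.
    fn to_rpn(e: &Expr, out: &mut Vec<String>) {
        match e {
            Expr::Num(n) => out.push(n.to_string()),
            Expr::Op(op, l, r) => {
                to_rpn(l, out);
                to_rpn(r, out);
                out.push(op.to_string());
            }
        }
    }

    fn main() {
        // (1 + 2) * 3
        let e = Expr::Op('*',
            Box::new(Expr::Op('+', Box::new(Expr::Num(1)), Box::new(Expr::Num(2)))),
            Box::new(Expr::Num(3)));
        let mut out = Vec::new();
        to_rpn(&e, &mut out);
        println!("{}", out.join(" ")); // prints "1 2 + 3 *"
    }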


Yep. It can refactor very well but that's it. For complex code bases it cannot even build boilerplate that makes sense; at most it saves some typing.

> It can refactor very well but that's it.

Can it though? I thought it was most useful for writing new code, but I have so far never had it correctly refactor existing code. Its refactoring attempts usually change behavior or logic, and sometimes leave the code in a state where it's even harder to read.


I did find some benefit in lowering the cost of exploratory work, but that's it—certainly worth 20€/month, but not the price of any of the "ultimate" plans.

For example, today I had to write a simple state machine (for a parser I was rewriting, so I already had all the test cases). I asked Claude Code to write the state machine for me and stopped it before it tried compiling and testing.

Some of the code (of course including all the boilerplate) worked, some made no sense. It saved a few minutes and overall the code it produced was a decent first approximation, but waiting for it to "reason" through the fixes would have made no sense, at least to me. The time savings mostly came from avoiding the initial "type the boilerplate and make it compile" part.

When completing the refactoring there were a few other steps where using AI was useful. But overall the LLM did maybe 10% of the work and saved, optimistically, 20-30 minutes over a morning.

Assuming I have similar savings once a week, which is again very optimistic... That's a 2% reduction or less.
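
(To put a number on it: assuming a 40-hour week, 30 minutes saved is 30/2400, about 1.25% of the time.)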


There have been no layoffs at Red Hat after 2023, whereas according to your statement there should have been two more rounds. The layoffs in your articles were at IBM and did not affect Red Hat.

Thank you for the correction.

> FWIW I have heard that IBM used to force their management style on acquisitions in years past

Definitely wasn't like that for Red Hat. We had a CFO with an IBM past who was a really nice guy and never felt like he had been parachuted in from IBM.

Now, after six years, legal, HR and finance will move to IBM starting next January; but my perspective from engineering is that since the acquisition it's been, and remains, business as usual.

I have no idea how it was for Hashicorp.


Haven't heard a damn thing about "RedHat" in years, though. It's dead as far as Linux distros go. I'm sure it's used in the IBM-o-sphere, but I'm just not around that at all.

Well I am not sure what other commercial distros you consider to be alive, but Red Hat makes Canonical's yearly revenue in a couple weeks.

Outside IBM land, Meta runs on a CentOS Stream fork.


> I'm just not around that at all.

You might live/work in a bubble. It's used everywhere in large enterprise.


Not always. LOADALL was used heavily by Microsoft's HIMEM.SYS on the 286, but was not preserved on subsequent models.

That was because LOADALL was impossible to preserve, since the internal state of the CPU changed in the next models.

The 80386 also had an undocumented LOADALL instruction, but it was encoded with a different opcode: it was incompatible with the 80286 LOADALL because it restored many more registers.

After 1990, no successors to LOADALL were implemented, because Intel introduced "System Management Mode" instead, which provided similar facilities and much more.


They could still have preserved it in microcode for backwards compatibility. They didn't do that because the 386 made it possible to get out of protected mode, and in fact allowed a more efficient implementation (big real mode) without undocumented opcodes.

It amuses me that the SMM state save area still lists the descriptor cache fields as reserved, even now that (thanks to virtualization) descriptor caches and big real mode have finally become an official feature of the architecture.


> why wouldn't author names on open source licenses count as PII?

They are, but you can keep PII if it is relevant to the purpose of your activity. In this case the author needs you to share his PII so that he can exercise his moral and copyright rights.

