"Cause" seems unsubstantiated: I think to justify "cause," we'd need strong evidence that the equivalent bug (or worse) wouldn't have happened in C.
Or another way to put it: clearly this is bad, and unsafe blocks deserve significant scrutiny. But it's unclear how this would have been made better by the code being entirely unsafe, rather than a particular source of unsafety being incorrect.
But it didn't promise to be the solution either. Rust has never claimed, nor have its advocates claimed, that unsafe Rust can eliminate memory bugs. Safe Rust can do that (assuming any unsafe code relied upon is sound), but unsafe cannot be and has never promised to be bug free.
Except that it didn't fail to be the solution: the bug is localized to an explicit escape hatch in Rust's safety rules, rather than being a latent property of the system.
(I think the underlying philosophical disagreement here is this: I think software is always going to have bugs, and that Rust can't perfectly eliminate them - and doesn't promise to. Instead, what Rust does promise - and deliver on - is that the entire class of memory safety bugs can be eliminated by construction in safe Rust, and localized when present to errors in unsafe Rust. Insofar as that's the promise, Rust has delivered here.)
You can label something an "explicit escape hatch" or a "latent property of the system", but in the end such labels are irrelevant. While I agree that it may be easier to review unsafe blocks in Rust compared to reviewing pointer arithmetic, union accesses, and free in C because "unsafe" is a bit more obvious in the source, I think selling this as a game changer was always an exaggeration.
Having written lots of C and C++ before Rust, this kind of local reasoning + correctness by construction is absolutely a game changer. It's just not a silver bullet, and efforts to miscast Rust as incorrectly claiming to be one seem heavy-handed.
Google's feedback seems to suggest Rust actually might be a silver bullet, in the specific sense meant in the "No Silver Bullet" essay.
That essay doesn't say that silver bullets are a panacea or cure-all; instead, they're a decimal order of magnitude improvement. The essay gives the example of Structured Programming, an idea that feels so obvious to us today that it goes unspoken, but once upon a time people really did write unstructured programs, where you just jump arbitrarily to unrelated code and resume execution (today the only "language" where you even could do this is assembly, and nobody does). The result is fucking chaos, and languages where you never do that delivered a huge improvement even before I wrote my first line of code in the 1980s.
Google did find that sort of effect in Rust over C++.
This is sort of the exact opposite of reality: the point of safe Rust is that it's safe so long as Rust's invariants are preserved, which all other safe Rust preserves by construction. So you only need to audit unsafe Rust code to ensure the safety of a Rust codebase.
(The nuance being that sometimes there's a lot of unsafe Rust, because some domains - like kernel programming - necessitate it. But this is still a better state of affairs than having no code be correct by construction, which is the reality with C.)
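To make the "audit only the unsafe code" point concrete, here's a minimal sketch (not the code under discussion; the function is made up for illustration) of the usual pattern: a safe function whose body contains the only unsafe block, with the invariant it relies on checked and documented right next to it:

    // Hypothetical example: the safe signature is all callers ever see.
    fn first_copied(xs: &[u8]) -> Option<u8> {
        if xs.is_empty() {
            return None;
        }
        // SAFETY: we just checked that the slice is non-empty, so index 0
        // is in bounds. This block (and this comment) is the entire audit
        // surface; safe callers can't break the invariant from outside.
        Some(unsafe { *xs.get_unchecked(0) })
    }

Everything outside the unsafe block is checked by the compiler, so the review burden concentrates on the one place that opted out.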
I've written lots of Rust under `forbid(unsafe_code)`; it depends on where in the stack you are and what you're doing.
But as the adjacent commenter notes: having unsafe is not inherently a problem. You need unsafe Rust to interact with C and C++, because they're not safe by construction. This is a good thing!
I think unsafe Rust is harder to write than C. However, that's because unsafe Rust makes you think about the invariants that you'd need to preserve in a correct C program, so it's no harder to write than correct C.
In other words: unsafe Rust is harder, but only in an apples-and-oranges sense. If you compare it to the same diligence you'd need to exercise in writing safer C, it would be about the same.
Safe Rust has more strict aliasing requirements than C, so to write sound unsafe Rust that interoperates with safe Rust you need to do more work than the equivalent C code would involve. But per above, this is the apples-and-oranges comparison: the equivalent C code will compile, but is statistically more likely to be incorrect. Moreover, it's going to be incorrect in a way that isn't localizable.
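As one concrete instance of that extra diligence (a sketch, not the actual code at issue): unsafe Rust that hands out references built from raw pointers has to uphold `&mut` exclusivity, an obligation the equivalent C has no way to even state:

    // Illustrative only: essentially what slice::split_first_mut does.
    fn split_first(xs: &mut [u8]) -> (&mut u8, &mut [u8]) {
        assert!(!xs.is_empty());
        let ptr = xs.as_mut_ptr();
        let len = xs.len();
        // SAFETY: the two mutable borrows cover disjoint regions
        // (index 0 vs indices 1..len), so exclusive-aliasing rules hold.
        // The same pointer arithmetic in C compiles with no such
        // obligation - and no marker telling a reviewer to look here.
        unsafe { (&mut *ptr, std::slice::from_raw_parts_mut(ptr.add(1), len - 1)) }
    }

The invariant is more work to state and uphold than in C, but it is stated, and it is stated in exactly one greppable place.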
Ultimately every program depends on things beyond any compiler's ability to verify - for example, that calls into code not written in that language are correct, or, even more fundamentally, that the silicon itself (both the parts that handle IO and the parts that do the computation) is correct, even if you're writing an embedded program with no interfaces to foreign code at all.
The promise of Rust isn't that it can make this fundamentally non-compiler-verifiable (i.e. unsafe) dependency go away; it's that you can wrap the dependency in abstractions that make it safe for users of the dependency, provided the dependency is written correctly.
In most domains Rust doesn't necessitate writing new unsafe code; you rely on the existing unsafe code in your dependencies, which is shared, battle-tested, and reasonably scoped. This is all Rust, or any programming language, can promise. The demand that the dependency tree contain no unsafe isn't the same as the domain necessitating no unsafe; it's the impossible demand that the low-level abstractions every domain relies on be writable without unsafe.
Almost all of them. It would be far shorter to list the domains which require unsafe. If you're seeing programmers reach for unsafe in most projects, either you're looking at a lot of low level hardware stuff (which does require unsafe more often than not), or you are seeing cases where unsafe wasn't required but the programmer chose to use it anyway.
Ultimately all software has to touch hardware somewhere. There is no way to verify that the hardware always does what it is supposed to do, because reality is not a computer. At the bottom of every dependency tree in any Rust code there always has to be unsafe code. But because Rust is the way it is, those interfaces are the only places you need to check for incorrectly written code. Everywhere else that just uses safe code is automatically correct as long as the unsafe code was correct.
And that is fine, because those upstream deps can locally ensure that those sections are correct, without any risk that some unrelated safe code might misuse them in a way that breaks safety. There is an actual rigorous mathematical proof of this. You have no such guarantees in C/C++.
> And a bug in one crate can cause UB in another crate if that other crate is not designed well and correctly.
Yes! Failure to uphold invariants of the underlying abstract model in an unsafe block breaks the surrounding code, including other crates! That's exactly consistent with what I said. There's nothing special about the stdlib. Like all software, it can have bugs.
What the proof states is that two independently correct blocks of unsafe code cannot, when used together, be incorrect. So the key value there is that you only have to reason about them in isolation, which is not true for C.
I think you're misunderstanding GP. The claim is that the only party responsible for ensuring correctness is the one providing a safe API to unsafe functionality (the upstream dependency in GP's comment). There's no claim that upstream devs are infallible, nor that the consequences of a mistake are necessarily bounded.
Those guys were writing a lot of unsafe Rust and bumped into UB.
I sound like an apologist, but the Rust team stated that "memory safety is preserved as long as Rust's invariants are". That feels really clear; people keep missing this point for some reason, almost as if it's a gotcha that unsafe Rust behaves in the same memory-unsafe way as C/C++ - when that's exactly the point.
Your verification surface is smaller and has a boundary.
And all of it is eventually run on an inherently unsafe CPU.
I cannot understand why we are continuing to have to re-litigate the very simple fact that small, bounded areas of potential unsafety are less risky and easier to audit than every line of code being unsafe.
> Any large Rust project I check has tons of unsafe in its dependency tree.
This is an argument against encapsulation. All Rust code eventually executes `unsafe` code, because all Rust code eventually interacts with hardware/OS/C-libraries. This is true of all languages. `unsafe` is part of the point of Rust.
It's just moving the goalposts. "If it compiles it works" to "it eliminates all memory bugs" to "well, it's safer than c...".
If Rust doesn't live up to its lofty promises, then it changes the cost-benefit analysis. You might give up almost anything to eliminate all bugs, a lot to eliminate all memory bugs, but what would you give up to eliminate some bugs?
Can you show me an example of Rust promising "if it compiles it works"? This seems like an unrealistic thing to believe, and I've never heard anybody working on or in Rust claim that this is something you can just provide with absolute confidence.
The cost-benefit argument for Rust has always been mediated by the fact that Rust will need to interact with (or include) unsafe code in some domains. Per above, that's an explicit goal of Rust: to provide sound abstractions over unsound primitives that can be used soundly by construction.
> Can you show me an example of Rust promising "if it compiles it works"? [...] and I've never heard anybody working on or in Rust claim that this is something you can just provide with absolute confidence.
I have heard it and I've stated it before. It's never stated in absolute confidence. As I said in another thread, if it was actually true, then Rust wouldn't need an integrated unit testing framework.
It's referring to the experience that Rust learners have, especially when writing relatively simple code, that it tends to be hard to misuse libraries in a way that looks correct and compiles but actually fails at runtime. Rust cannot actually provide this guarantee; it's impossible in any language. However, there are a lot of common, simple tasks (where there's not much complex internal logic that could be subtly incorrect) where the interfaces provided by the libraries they're depending on are designed to leverage the type system such that it's difficult to accidentally misuse them.
Take something like initializing an HTTP client: the interfaces make it impossible to obtain an improperly initialized client instance. This is an especially distinct feeling if you're used to dynamic languages, where you often have no assurance at all that you didn't typo a field name.
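A rough sketch of the kind of interface being described - the `ClientBuilder` below is hypothetical, not any particular crate's API - where the only way to obtain a client is through a builder that enforces the required configuration:

    pub struct Client {
        base_url: String,
        timeout_secs: u64,
    }

    pub struct ClientBuilder {
        base_url: Option<String>,
        timeout_secs: u64,
    }

    impl ClientBuilder {
        pub fn new() -> Self {
            ClientBuilder { base_url: None, timeout_secs: 30 }
        }

        pub fn base_url(mut self, url: &str) -> Self {
            self.base_url = Some(url.to_string());
            self
        }

        // build() is the only way to get a Client, so any code that
        // compiles has either handled the missing-field error or holds
        // a fully configured client.
        pub fn build(self) -> Result<Client, &'static str> {
            let base_url = self.base_url.ok_or("base_url is required")?;
            Ok(Client { base_url, timeout_secs: self.timeout_secs })
        }
    }

In a dynamic language the equivalent mistake - a missing or typo'd config field - typically only surfaces at runtime, if at all.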
I've seen (and said) "if it compiles it works," but only when preceded by softening statements like "In my experience," or "most of the time." Because it really does feel like most of the time, the first time your program compiles, it works exactly the way you meant it to.
I can't imagine anybody seriously making that claim as a property of the language.
Yeah, I think the experiential claim is reasonable. It's certainly my experience that Rust code that compiles is more confidence-inspiring than Python code that syntax-checks!
6 days ago: Their experience with Rust was positive for all the commonly cited reasons - if it compiles it works
8 days ago: I have to debug Rust code waaaay less than C, for two reasons: (2) Stronger type system - you get an "if it compiles it works" kind of experience
4 months ago: I've been writing Rust code for a while and generally if it compiles, it works.
5 months ago: If it’s Rust, I can just do stuff and I’ve never broken anything. Unit tests of business logic are all the QA I need. Other than that, if it compiles it works.
9 months ago: But even on a basic level Rust has that "if it compiles it works" experience which Go definitely doesn't.
Some people claim that the quote is hyperbolic because it only covers memory errors. But this bug is a memory error, so ...
GP isn't asking for examples of just anyone making that statement. They're asking for examples of Rust making that promise. Something from the docs or the like.
> Some people claim that the quote is hyperbolic because it only covers memory errors. But this bug is a memory error, so ...
It's a memory error involving unsafe code, so it would be out of scope for whatever promises Rust may or may not have made anyways.
I think it's pretty reasonable to interpret "Language X promises Y" as tantamount to said promise appearing in Language X's definition and/or docs. Claims from devs in their official capacities are likely to count as well.
On the other hand, what effectively random third parties say doesn't matter all that much IMO when it comes to these things, because what they think a language promises has little to no bearing on what the language actually promises. If I find a bunch of randos claiming Rust promises to give me a unicorn for my birthday, it seems rather nonsensical to turn around and criticize Rust for not actually giving me a unicorn on my birthday.
I've also said it, with the implication that the only remaining bugs are likely to be ones in my own logic. Like, suppose I'm writing a budget app and haven't gone to the lengths of making Debit and Credit their own types. I can still accidentally subtract a debit from a balance instead of adding to it. But unless I've gone out of my way to work around Rust's protections, e.g. with unsafe, I know that parts of my code aren't randomly mutating immutables, or opening up subtle use-after-free situations, etc. Now I can spend all my time concentrating on the program's logic instead of tracking those other thousands of gotchas.
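For what it's worth, the extra step the parent mentions skipping is fairly cheap; here's a minimal sketch with made-up `Debit`/`Credit`/`Balance` types:

    // Illustrative newtypes, not from any real codebase.
    struct Debit(i64);
    struct Credit(i64);
    struct Balance(i64);

    impl Balance {
        // The signatures now fix the direction of each operation, so you
        // can't accidentally pass a Debit where a Credit is expected.
        fn apply_debit(self, d: Debit) -> Balance {
            Balance(self.0 - d.0)
        }

        fn apply_credit(self, c: Credit) -> Balance {
            Balance(self.0 + c.0)
        }
    }

It doesn't catch every logic error (you can still call the wrong method), but it removes a whole family of mix-ups at compile time.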
It's not moving the goalposts at all. I'm not a Rust programmer, but for years the message has been the same. It's been monotonous and tiring, so I don't know why you think it's new.
Safe Rust code is safe. You know where the unsafe code is, because it's marked as unsafe. Yes, you will need some unsafe code in any notable project, but at least you know where it is. If you don't babysit your unsafe code, you get bad things. Someone didn't do the right thing here, and I'm sure there will be a post-mortem and lessons learned.
To be comparable, imagine in C you had to mark potentially UB code with ub{} to compile. Until you get that, Rust is still a clear leader.
If there are specific incompatibilities or rough edges you're running into, we're always interested in hearing about them. We try pretty hard to provide a pip compatibility layer[1], but Python packaging is non-trivial and has a lot of layers and caveats.
Is there any plan for a non-“compatibility layer” way to do anything manual or nontrivial? uv sync and uv run are sort of fine for developing a distribution/package, but they’re not exactly replacements for anything else one might want to do with the pip and venv commands.
As a very basic example I ran into last week, Python tooling, even the nice Astral tooling, seems to be almost completely lacking any good detection of what source changes need to trigger what rebuild steps. Unless I’ve missed something, if I make a change to a source tree that uv sync doesn’t notice, I’m stuck with uv pip install -e ., which is a wee bit disappointing and feels a bit gross. I suppose I could try to put something correct into cache-keys, but this is fundamentally wrong. The list of files in my source tree that need to trigger a refresh is something that my build system determines when it builds. Maybe there should be a way to either plumb that into uv’s cache or to tell uv that at least “uv sync” should run the designated command to (incrementally) rebuild my source tree?
(Not that I can blame uv for failing to magically exfiltrate metadata from the black box that is hatchling plus its plugins.)
> Is there any plan for a non-“compatibility layer” way to do anything manual or nontrivial?
It's really helpful to have examples for this, like the one you provide below (which I'll respond to!). I've been a maintainer and contributor to the PyPA standard tooling for years, and once uv "clicked" for me I didn't find myself having to leave the imperative layer (of uv add/sync/etc) at all.
> As a very basic example I ran into last week, Python tooling, even the nice Astral tooling, seems to be almost completely lacking any good detection of what source changes need to trigger what rebuild steps.
Could you say more about your setup here? By "rebuild steps" I'm inferring you mean an editable install (versus a sdist/bdist build) -- in general `uv sync` should work in that scenario, including for non-trivial things where e.g. an extension build has to be re-run. In other words, if you do `uv sync` instead of `uv pip install -e .`, that should generally work.
However, to take a step back from that: IMO the nicer way to use uv is to not run `uv sync` that much. Instead, you can generally use `uv run ...` to auto-sync and run your development tooling within an environment that includes your editable installation.
By way of example, here's what I would traditionally do:
python -m venv .env
source .env/bin/activate
python -m pip install -e .[dev] # editable install with the 'dev' extra
pytest ...
# re-run install if there are things a normal editable install can't transparently sync, like extension builds
Whereas with uv:
uv run --dev pytest ... # uses pytest from the 'dev' dependency group
That single command does everything pip and venv would normally do to prep an editable environment and run pytest. It also works across re-runs, since it'll run `uv sync` as needed under the hood.
My setup is a mixed C/C++/Python project. The C and C++ code builds independently of the Python code (using waf, but I think this barely matters -- the point is that the C/C++ build is triggered by a straightforward command and that it rebuilds correctly based on changed source code). The Python code depends on the C/C++ code via ctypes and cffi (which load a .so file produced by the C/C++ build), and there are no extension modules.
Python builds via [tool.hatch.build.targets.wheel.hooks.custom] in pyproject.toml and a hatch_build.py that invokes waf and force-includes the .so files into useful locations.
Use case 1: Development. I change something (C/C++ source, the waf configuration, etc.) and then try to run Python code (via uv sync, uv run, or activating a venv with an editable install). Since there doesn't seem to be a way to have the build feed dependencies out to uv (this seems to be a deficiency in PEP 517/660), I either need to somehow statically generate cache-keys or resort to --reinstall-package to get uv commands to notice when something changed. I can force the issue with uv pip install -e ., although apparently I can also force the issue with uv run/sync --reinstall-package [distro name]. [0] So I guess uv pip is not actually needed here.
It would be very nice if there was an extension to PEP 660 that would allow the editable build to tell the front-end what its computed dependencies are.
Use case 2: Production
IMO uv sync and uv run have no place in production. I do not want my server to resolve dependencies or create environments at all, let alone by magic, when I am running a release of my software built for the purpose.
My code has, long before pyproject.toml or uv was a thing and even before virtual environments existed (!), had a script to build a production artifact. The resulting artifact makes its way to a server, and the code in it gets run. If I want to use dependencies as found by uv, or if I want to use entrypoints (a massive improvement over rolling my own way to actually invoke a Python program!), as far as I can tell I can either manually make and populate a venv using uv venv and uv pip or I can use UV_PROJECT_ENVIRONMENT with uv sync and abuse uv sync to imperatively create a venv.
Maybe some day uv will come up with a better way to produce production artifacts. (And maybe in the distant future, the libc world will come up with a decent way to make C/C++ virtual environments that don't rely on mount namespaces or chroot.)
[0] As far as I can tell, the accepted terminology is that the thing produced by a pyproject.toml is possibly a "project" or a "distribution" and that these are both very much distinct from a "package". I think it's a bit regrettable that uv's option here is spelled like it rebuilds a _package_ when the thing you feed it is not the name of a package and it does not rebuild a particular package. In uv's defense, PEP 517 itself seems rather confused as well.
uv needs to support creation of zipapps, like pdm does (what pex does standalone).
There are various tickets asking for it, but they also want to bundle in the Python interpreter itself, which is out of scope for a pyproject.toml manager: https://github.com/astral-sh/uv/issues/5802
There are a lot of Jewish, pro-Israel professors in the US. I don't see any evidence that it was a factor in this man's death. I think it would be irresponsible for a news organization to speculate until more information is actually available.
(You'll note that even Yeshiva World News isn't speculating about motives here.)
The ty repo contains the ruff repo[1] as a submodule, where the remainder of the code is. It is indeed open source, the layout is just indirect at the moment because of code-sharing between the tools.
I had the same question — I understand that the Actions control plane has costs on self-hosted runners that GitHub would like to recoup, but those costs are fixed per-job. Charging by the minute for the user’s own resources gives the impression that GitHub is actually trying to disincentivize third-party runners.
A self-hosted runner regularly communicates with the control plane, and the control plane also needs to keep track of job status, logs, job summaries, etc.
An 8-hour job is definitely more expensive for them than a 1-minute one, but I'd guess the actual reason is that this way they earn more money, and dissuade users from using a third-party service instead of their own runners.
That's generous, but doesn't seem consistent with how Microsoft does business. Also, if that's the case why does self-hosted cost the same as the lowest hosted tier?
The checks here seem pretty minimal[1]. I'd recommend taking a look at fickling (FD: former employer) for a more general approach to pickle decompilation/analysis[2].
Thanks for the link! fickling is excellent work (and definitely the gold standard for deep analysis).
The goal with AIsbom was to build something lightweight enough to run in a fast CI/CD loop that creates a standard inventory (CycloneDX SBOM) alongside the security check. We are definitely looking at fickling's symbolic execution approach for inspiration on how to make our safety.py module more robust against obfuscation.
I know this sounds weird, but "symbolic execution" of the pickle VM can't be slow, right? We are talking about just a few thousand instructions here, and you don't need "symbolic execution" per se - just write a custom interpreter and run it. That would take less than 10ms for any given PyTorch file (excluding disk loading).
Agree. Writing a pickle interpreter is not particularly challenging. I did that in Swift to help load PyTorch checkpoints (https://github.com/liuliu/swift-fickling) without these pitfalls.
Can you explain how shorter certificate lifetimes make LE more of a single point of failure? I can squint and see an argument for CA diversity; I struggle to see how reducing certificate lifetimes increases CA centralization.
Shorter lifetimes mean more renewal events, which means more individual occasions on which LE (or whatever other cert authority) simply must be available before sites start falling off the internet for lack of ability to renew in time.
We're not quite there yet, but the logical progression of shorter and shorter certificate lifetimes to obviate the problems related to revocation lists would suggest that we eventually end up in a place where the major ACME CAs join the list of heavily-centralized companies which are dependencies of "the internet", alongside AWS, Cloudflare, and friends. With cert lifetimes measured in years or months, the CA can have a bad day and as long as you didn't wait until the last possible minute to renew, you're unimpacted. With cert lifetimes trending towards days or less, now your CA really does need institutionally important levels of high availability.
It's less that LE becomes more of a single point of failure, and more that ACME CAs in general join the list of critically available things required to keep a site online.
> would suggest that we eventually end up in a place where the major ACME CAs join the list of heavily-centralized companies which are dependencies of "the internet"
I think that particular ship sailed a decade ago!
> It's less that LE becomes more of a single point of failure, and more that ACME CAs in general join the list of critically available things required to keep a site online.
Okay, this is what I wanted clarified. I don't disagree that CAs are critical infrastructure, and that there's latent risk whenever infrastructure becomes critical. I just think that risk is justified, and that LE in particular is no more or less of a SPOF with these policy changes.
Because when they eventually get their wet dream of 7-day renewals, everyone relies upon them once a week. LE being down for 48 hours could take out a big chunk of the Internet.
Certificates have historically been "fire and forget", but constant re-issuance will make LE as important as DNS and web hosting.
FWIW, we're acutely aware of the operational risks of super short lifetimes and frequent renewals. That's why our `shortlived` profile is clearly documented as only being appropriate for orgs that have high operational maturity and an oncall rotation. We carry pagers too, and if LE goes down for 48 hours, we'll be desperately trying not to take out a huge chunk of the Internet.
The longer certificates were valid, the more often we'd have breakage due to admins forgetting renewal, or forgetting how to install the new certificates. It was a daily occurrence, often with hours or days of downtime.
Today, it's so rare I don't even remember when I last encountered an expired certificate. And I'm pretty sure it's not because of better observability...
Oh for sure. This is stupid policy by an organization with no accountability to anyone, that represents the interests of parties with their own agendas.
I don't think it's that venal: the CABF holds CAs accountable, largely through the incentives of browsers (which in turn are the incentives of users, mediated by what Google, Microsoft, Apple, and Mozilla think is worth their time). That last mediation is perhaps not ideal, but it's also not a black hole of accountability.
Serious question: what tools only support netrc for authentication? I'm aware of lots of tools that (unfortunately IMO) support netrc as a source of credentials, but I can't think of a single one that requires it.