If there's a way to make it more precise and/or specific and/or faster, or create similar macros with better functionality and/or correctness, that's great.
See the same directory for corresponding assert_* macros for less than, greater than, etc.
Is there any constant more misused in compsci than ieee epsilon? :)
It's defined as the difference between 1.0 and the smallest number larger than 1.0. More usefully, it's the spacing between adjacent representable float numbers in the range 1.0 to 2.0.
Because float spacing doubles at every power of two, it's impossible for two distinct numbers greater than or equal to 2.0 to be epsilon apart. The spacing between 2.0 and the next larger number is 2*epsilon.
That means `abs(a - b) <= epsilon` is equivalent to `a == b` for a and b greater than or equal to 2.0. And if you use `<` instead of `<=`, the cutoff drops to 1.0.
Epsilon is the wrong tool for the job in 99.9% of cases.
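A quick demonstration of both failure modes, as a Rust sketch (nothing assumed beyond `f64::EPSILON` from the standard library):

```rust
fn main() {
    let eps = f64::EPSILON; // ~2.22e-16: 1 ulp in the range [1.0, 2.0)

    // At magnitude >= 2.0, adjacent doubles are at least 2*eps apart,
    // so the epsilon test can only pass when the values are bit-identical:
    let a = 2.0_f64;
    let b = 2.0_f64 + 2.0 * eps; // the very next double after 2.0
    assert!((a - b).abs() > eps); // adjacent values already "differ"

    // At tiny magnitudes the same test is uselessly loose:
    let x = 1.0e-50_f64;
    let y = 2.0e-50_f64; // double the value
    assert!((x - y).abs() <= eps); // yet they compare "equal"
}
```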
To state a (perhaps initially) counterintuitive part of the above more explicitly: the doubling/halving also means numbers between 0 and 1 actually have _more_ precision than epsilon would suggest.
Considerably more, in many cases. The point of floating point is to have as many distinct values in the range 2-4 as in the range 1-2, as between 1/2 and 1, between 1/4 and 1/2, between 1/8 and 1/4, and so on. The smallest representable difference between consecutive floats down around the size of 1/64 is on the order of epsilon/64.
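A quick check of those spacings (this sketch uses `f64::next_up`, which needs Rust 1.86+):

```rust
fn main() {
    // 1 ulp at 1.0 is exactly EPSILON...
    assert_eq!(1.0_f64.next_up() - 1.0, f64::EPSILON);
    // ...but down at 1/64 the spacing is 64x finer...
    let x = 1.0_f64 / 64.0;
    assert_eq!(x.next_up() - x, f64::EPSILON / 64.0);
    // ...and at 2.0 it has already doubled:
    assert_eq!(2.0_f64.next_up() - 2.0, 2.0 * f64::EPSILON);
}
```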
Multiplying epsilon by the largest number you are dealing with is a strategy that makes using epsilons at least somewhat logical.
So I'd probably rewrite that code to first find the ulp of the larger of the abs of a and b and then assert that their difference is less than or equal to that.
Edit: Or maybe the smaller of the abs of the two, I haven't totally thought through the consequences. It might not matter, because the ulps will only differ when the numbers are significantly apart and then it doesn't matter which one you pick. Perhaps you can just always pick the first number and get its ULP.
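A minimal sketch of that rewrite, using the larger magnitude's ulp as the tolerance. The function names are made up for illustration, and `next_up` needs Rust 1.86+:

```rust
// Spacing between |x| and the next-larger representable double.
fn ulp_of(x: f64) -> f64 {
    let x = x.abs();
    x.next_up() - x
}

// True iff a and b are within 1 ulp of the larger magnitude.
fn f64_near(a: f64, b: f64) -> bool {
    (a - b).abs() <= ulp_of(a.abs().max(b.abs()))
}

fn main() {
    assert!(f64_near(0.1 + 0.2, 0.3));    // exactly 1 ulp apart at this scale
    assert!(!f64_near(1.0e-50, 2.0e-50)); // scale-aware: these really differ
}
```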
This is what was done in a raytracer I used. People kept making large-scale scenes with intricate details -- think of a detailed ring placed on a table in a room with a huge field in view through the window. For a while one could override the fixed epsilon based on scene scale, but for such high-dynamic-range scenes a fixed epsilon just didn't cut it.
IIRC it would compute the "dynamic" epsilon value essentially by adding one to the mantissa (treated as an integer) to get the next possible float. Then subtract from that the initial value to get the dynamic epsilon value.
Definitely use library functions if you got 'em though.
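For reference, a sketch of the bit-increment trick described above (it assumes a positive, finite, non-maximal input; this is essentially what `f64::next_up` now does for you):

```rust
// "Dynamic epsilon": reinterpret the bits as an integer, add one to get
// the next representable double, then subtract to get the local spacing.
fn dynamic_eps(x: f64) -> f64 {
    f64::from_bits(x.to_bits() + 1) - x
}

fn main() {
    assert_eq!(dynamic_eps(1.0), f64::EPSILON);
    // The spacing scales with magnitude, unlike a fixed epsilon:
    assert_eq!(dynamic_eps(1024.0), 1024.0 * f64::EPSILON);
    assert_eq!(dynamic_eps(1.0 / 64.0), f64::EPSILON / 64.0);
}
```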
Because of the representation of floats, couldn't you just bitwise-cast to uints and see if the (absolute) difference was less than or equal to one? But practically you should probably check if it's less than or equal to, say, ten, depending on your tolerance.
It would be very useful to be able to compare the significands directly then. I realize there is a boundary issue when a significand is very close to 0x00..000 or 0xFF..FFF.
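Roughly, yes: for finite doubles of the same sign, consecutive bit patterns are consecutive floats. The sign boundary can be handled by remapping the bits so integer order matches float order. A sketch (names are illustrative, not from any crate):

```rust
// Map raw bits to a monotonically ordered integer: non-negative floats
// keep their bit pattern, negative floats are mirrored below zero. This
// makes -0.0 and +0.0 coincide and fixes the sign-boundary issue.
fn ordered_bits(x: f64) -> i64 {
    let b = x.to_bits() as i64;
    if b < 0 { i64::MIN - b } else { b }
}

// Distance in units of representable doubles ("ulps apart").
fn ulp_distance(a: f64, b: f64) -> u64 {
    let d = (ordered_bits(a) as i128) - (ordered_bits(b) as i128);
    d.unsigned_abs().min(u64::MAX as u128) as u64
}

fn main() {
    assert_eq!(ulp_distance(1.0, 1.0 + f64::EPSILON), 1);
    assert_eq!(ulp_distance(-0.0, 0.0), 0);     // no glitch at the boundary
    assert!(ulp_distance(0.1 + 0.2, 0.3) <= 1); // "close enough to equal"
}
```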
Everyone has already made several comments on the incorrect use of EPSILON here, but there's one more thing I want to add that hasn't yet been mentioned:
EPSILON (1 ulp for numbers in the range [1, 2)) is a lousy choice for a tolerance. Every operation whose result is in the range [1, 2) can have an absolute rounding error of up to ½ ulp. Doing just a few operations in a row has a chance to make the error term larger than your tolerance, simply because of the inherent inaccuracy of floating-point operations. Randomly generate a few doubles in the range [1, 10], then shuffle the list and compute the sum in a few different orders, and your assertion should fail. I'd guess you haven't run into this issue because either very few people are using this particular assertion, or the people who do happen to be testing cases where the result is fully deterministic.
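The accumulation effect is easy to trigger; here a plain running sum drifts well past `f64::EPSILON` (a sketch, nothing crate-specific):

```rust
fn main() {
    // 0.1 has no exact binary representation, and each addition rounds;
    // after 100 additions the accumulated error dwarfs EPSILON.
    let mut sum = 0.0_f64;
    for _ in 0..100 {
        sum += 0.1;
    }
    let err = (sum - 10.0).abs();
    assert!(err > f64::EPSILON); // an EPSILON-tolerance assertion fails here
}
```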
If you look at professional solvers for numerical algorithms, one of the things you'll notice is that not only is the (relative!) tolerance tunable, but there's actually several different tolerance values. The HiGHS linear solver for example uses 5 different tolerance values for its simplex algorithm. Furthermore, the default values for these tolerances tend to be in the region of 10^-6 - 10^-10... about the square root of f64::EPSILON. There's a basic rule of thumb in numerical analysis that you need your internal working precision to be roughly twice the number of digits as your output precision.
Your last comment is essential for numerical analysis, indeed. There is this "surprising" effect where increasing the precision of the input can end up decreasing that of the output (roughly speaking). So "I shall just use a very small discretization" can be harmful.
Your assertion code here doesn't make a ton of sense. The epsilon of choice here is the distance between 1 and the next number up, and it's completely separated from the scale of the numbers in question. 1e-50 will compare equal to 2e-50, for example.
I would suggest that "equals" actually is for "exactly equals" as in (a == b). In many pieces of floating point code this is the correct thing to test. Then also add a function for "within range of" so your users can specify an epsilon of interest, using the formula (abs(a - b) < eps). You may also want to support multidimensional quantities by allowing the user to specify a distance metric. You probably also want a relative version of the comparison in addition to an absolute version.
Auto-computing epsilons for an equality check is really hard and depends on the usage, as well as the numerics of the code that is upstream and downstream of the comparison. I don't see how you would do it in an assertion library.
Ignoring the misuse of epsilon, I'd also say that you'd be helping your users more by not providing a general `assert_f64_eq` macro, but rather force the user to decide the error model. Add a required "precision" parameter as an enum with different modes:
// Precise matching:
assert_f64_eq!(a, 0.1, Steps(2))
// i.e. asserts that a is within two representable steps of 0.1
// Number of digits (after period) that are matching:
assert_f64_eq!(a, 0.1, Digits(5))
// Relative error:
assert_f64_eq!(a, 0.1, Rel(0.5))
You generally want both relative and absolute tolerances. Relative handles scale, absolute handles values near zero (raw EPSILON isn’t a universal threshold per IEEE 754).
The usual pattern is abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol) to avoid both large-value and near-zero pitfalls.
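That pattern as a small function, in the spirit of Python's `math.isclose` (the tolerance values below are illustrative defaults, not from any particular library):

```rust
// Combined relative/absolute closeness test: the relative term scales
// with the larger magnitude, the absolute term rescues values near zero.
fn is_close(a: f64, b: f64, rel_tol: f64, abs_tol: f64) -> bool {
    (a - b).abs() <= f64::max(rel_tol * f64::max(a.abs(), b.abs()), abs_tol)
}

fn main() {
    let (rel, abs) = (1e-9, 1e-12);
    assert!(is_close(1.0e15, 1.0e15 + 1.0, rel, abs)); // relative part handles scale
    assert!(is_close(1.0e-20, 0.0, rel, abs));         // absolute part handles near-zero
    assert!(!is_close(1.0, 1.1, rel, abs));            // genuinely different values
}
```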
It depends on the use case, but do you consider NaN to be equal to NaN? For an assert macro, I would expect so. Also, your code works differently for very large and very small numbers, e.g. 1.0000001 vs 1.0000002 compared with 1e-100 vs 1.0000002e-100.
For my own soft floating-point math library, I expect the value to be off by some percentage, not just off by epsilon. And so I have my own almostSame method [1] which accounts for that and is quite a bit more complex. Actually multiple such methods. But well, that's just my own use case.
Machine eps provides the maximum rounding error for a single op. Let's say I write:
let y = 2.0;
let x = sqrt(y);
Now is `x` actually the square root of 2? Of course not - because the digit expansion of sqrt(2) doesn't terminate, the only way to precisely represent it is with symbolics. So what do we actually have? `x` was either rounded up or down to a number that does have an exact FP representation. So, `x` / sqrt(2) is in `[1 - eps, 1 + eps]`. The eps tells you, on a relative scale, the maximum distance to an adjacent FP number for any real number. (Full disclosure, IDK how this interacts with weird stuff like denormals).
Note that in general we can only guarantee hitting this relative error for single ops. More elaborate computations may develop worse error as things compound. But it gets even worse. This error says nothing about errors that don't occur in the machine. For example, say I have a test that takes some experimental data, runs my whiz-bang algorithm, and checks if the result is close to elementary charge of an electron. Now I can't just worry about machine error but also a zillion different kinds of experimental error.
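The sqrt(2) case is easy to see concretely. `sqrt` is correctly rounded, so a couple of ulps is all the slack a squaring afterwards needs:

```rust
fn main() {
    let x = 2.0_f64.sqrt(); // correctly rounded, but still not sqrt(2)
    // Squaring it does not give 2.0 back...
    assert!(x * x != 2.0);
    // ...but the result lands within a couple of ulps of 2.0:
    assert!((x * x - 2.0).abs() <= 2.0 * f64::EPSILON);
}
```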
There are also cases where we want to enforce a contract on a number so we stay within acceptable domains. The author alluded to this. For example: if I compute some `x` such that I'm later going to take `acos(x)`, `x` had better be between `[-1, 1]`. `x >= -1 - EPS && x <= 1 + EPS` wouldn't be right because it would include two numbers, -1 - EPS and 1 + EPS, that are outside the acceptable domain.
- "I want to relax exact equality because my computation has errors" -> Make `assert_rel_tol` and `assert_abs_tol`.
- "I want to enforce determinism" -> exact equality.
- "I want to enforce a domain" -> exact comparison
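For the domain case, one common pattern (not prescribed by the comment above; the 4-ulp budget here is an arbitrary illustrative choice) is to assert tightly and then clamp:

```rust
// Accept inputs at most a few ulps outside [-1, 1] (blaming rounding),
// reject anything worse, and clamp before calling acos.
fn checked_acos(x: f64) -> f64 {
    let slack = 4.0 * f64::EPSILON; // arbitrary small budget for rounding error
    assert!((-1.0 - slack..=1.0 + slack).contains(&x), "acos domain violated");
    x.clamp(-1.0, 1.0).acos()
}

fn main() {
    // A dot product of unit vectors can round to just past 1.0:
    let x = 1.0_f64 + f64::EPSILON;
    assert!(x.acos().is_nan());       // raw acos: silent NaN
    assert_eq!(checked_acos(x), 0.0); // checked + clamped: well-defined
}
```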
Your code here is using eps for controlling absolute error, which is already not great since eps is about relative error. Unfortunately your assertion degenerates to `a == b` for large numbers but is extremely loose for small numbers.
Apart from what others have commented, IMO an “assertables” crate should not invent new predicates of its own, especially for domains (like math) that are orthogonal to assertability.
EQ should be exactly equal, I think. Although we often (incorrectly) model floats as a real plus some non-deterministic error, there are cases where you can expect an exact bit pattern, and that’s what EQ is for (the obvious example is, you could be writing a library and accept a scaling factor from the user—scaling factors of 1 or 0 allow you to optimize).
You probably also want an isclose and probably want to push most users toward using that.
Numeric comparison implies a subtraction, or a similar sign-and-zero extraction of a - b, at some lower level (in an ALU micro-op perhaps), so computing a - b again is duplicated effort unless it can be optimized away.
match a - b {
    d if d >= 0.0 => d < f64::EPSILON,
    d => d >= -f64::EPSILON, /* true if -EPSILON to -0.0 */
}
Um, what? You've linked an RFC for Rust, but the cppreference article for C++. So yeah, the Rust RFC documents a proposed change, and the C++ reference documents an implemented feature, but you could equally link the C++ proposal document and the Rust library docs to make the opposite point if you wanted.
You can also rely on the fact (not promised in C++) that these are actually IEEE floats, so they have all the resulting properties: you can (entirely in safe Rust) just ask for the integers with the same bit pattern and compare those integers, and because of how IEEE is designed, that tells you how far apart, in some proportional sense, the two values are.
On an actual CPU manufactured this century that's almost free because the type system evaporates during compilation -- for example f32::to_bits is literally zero CPU instructions.
Oh, my research was wrong and the line from the RFC doc...
>Currently it is not possible to answer the question ‘which floating point value comes after x’ in Rust without intimate knowledge of the IEEE 754 standard.
So never mind on it not being present in Rust; I guess I was finding old documentation.
Yeah, the RFC is explaining what they proposed in 2021. In 2022 that work landed in "nightly" Rust, which means you could see it in the documentation (unless you've turned off seeing unstable features entirely) but to actually use it in software you need the nightly compiler mode and a feature flag in your source #![feature(float_next_up_down)].
By 2025 every remaining question about edge cases or real-world experience was resolved, and in April 2025 the finished feature was stabilized in release 1.86, so it has just worked in Rust for about a year now.
For future reference, you can follow separate links from a Rust RFC document to see whether the project accepted the RFC (anybody can write one; not everything gets accepted) and then also how far along the implementation work is. Can I use this in nightly? Maybe there's an outstanding question I can help answer. Or maybe the team is writing a stabilization report and this is my last chance to say "Hey, I am an expert on this and your API is a bit wrong".
Yeah a standards document with the phrase "Currently it is not possible to answer the question" threw me, I'd argue pretty strongly that's not how standards should be written, but oh well, lesson learned.
But that's not a "standards document"? Firstly, unlike WG21, the goal of the Rust project is to implement a programming language. The output of WG21 is an ISO document, and even though the final document is in fact largely useless, the process of writing it is crucial: nobody reads that official $$$$ PDF from ISO, but they do use the drafts, which are, though they insist otherwise for legal reasons, functionally equivalent. The output of the Rust project, however, is the language itself, not a standards document.
Beyond that though, neither Rust RFCs nor their nearest analogue, the C++ P-series proposal papers, is the output product - they're proposing to change that output, and so they're written in a very different style.
Barry even starts with an anecdote! This would be entirely inappropriate for a standard but he wasn't writing a standard, like this Rust RFC he was making a proposal.
This would have worked if ieee hadn't severely messed up when (not) designing NaN semantics, but they did, so in rust, this can return false when comparing a NaN value to itself. (see the NaN section of https://doc.rust-lang.org/std/primitive.f32.html)
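Concretely (standard IEEE/Rust behavior):

```rust
fn main() {
    let nan = f64::NAN;
    // IEEE 754 comparisons: NaN is unequal to everything, itself included.
    assert!(nan != nan);
    // The epsilon-style check inherits the problem: NaN - NaN is NaN,
    // and every ordered comparison against NaN is false.
    assert!(!((nan - nan).abs() <= f64::EPSILON));
}
```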
If you're open to questions, I'm switching my teams from Docker to Podman on macOS. I'm hitting blockers for multi-user setups, i.e. each developer has a non-admin account on the machine, whereas brew runs in its own account with admin permissions.
I would love a way to have Podman installable in userspace, meaning in a non-admin account, or installable without brew, or with a dependency list such as QEMU or whatever else needs to be installed by an admin ahead of time, or with a sudoers config list, etc.
I know this is an atypical setup. Any advice from anyone here is much appreciated about multi-user non-admin macOS container setup for Podman or Docker or equivalent.
I maintain multiple open source projects. In the past two months I've seen an uptick in AI-forgery attacks, and also an uptick in legitimate code contributions.
The AI-forgery attacks are highly polished, complete with forged user photos and fake social networking pages.
The legitimate code contributions are from people who have near-zero followers and no obvious track record.
This is topsy-turvy yet good news for open source because it focuses the work on the actual code, and many more people can learn how to contribute.
So long as code is good enough to get in the right ballpark for a PR, then I'm fine cleaning the work up a bit by hand then merging. IMHO this is a great leap forward for delivering better projects.
Yes good points both, thank you. The source code link has more explanation about color choices and my preference of POSIX compatibility. You can also see the color function that checks NO_COLOR, CLICOLOR_FORCE, TERM = "dumb", -t 1, etc.
For color operands, I use raw escape codes because I aim for POSIX compatibility. If I'm working on something that needs more capabilities then I switch to a more powerful programming language e.g. python, rust, etc.
As far as I understand, POSIX tput defines some terminal operands (clear, init, reset, etc.) but not color operands (setaf, setab, sgr0, etc.).
For shell syntax, I aim for POSIX because for anything more advanced I switch from shell to a more powerful programming language.
Currently POSIX doesn't have the 'declare' statement, nor the '\e' syntax consistently, nor the 'echo -e' syntax consistently.
As for exporting, I do it because I have many quick scripts that I run often, and I prefer to switch the colors in one place such as in the evening from light mode to dark mode.
When you say the print statements aren't going to work by default, can you say more about this? Anything you can explain or point me to about this will be a big help. Thanks!
I'm in the affected group because I'm a US citizen working in the UK. There's much more to the story because the UK has many digital ID aspects already in place-- such as for work visas and residence permits-- but these are not coordinated into a whole.
What I experienced last year was many digital verification steps that were all required: open a UK bank account, sign up for a UK phone number, secure a UK residential postal address, apply for UK right-to-rent codes, generate a UK national insurance number, file for UK healthcare registration, and more.
Each step had different digital workflows and UI/UX. To traverse all these steps took hundreds of hours and a couple months wall time.
Many steps had catch-22s. The UK bank account needed a UK phone number, while the UK phone company needed a UK bank account. The UK payroll company needed a permanent residence, while the UK landlord needed UK payroll stubs. None of the steps had a quick simple way to digitally verify my UK work visa.
IMHO federation could be a big help here, such as for government agencies and government-approved businesses doing opt-in data sharing and ideally via APIs. For example, imagine each step can share its relevant information with other steps. This could make things more efficient, more accurate, and ideally more secure.
I am a bit confused about this. Is that a list of things you needed to open a bank account? Or a list of things for which you needed to show ID?
I am not sure a government digital ID would help with dealing with businesses.
Right to rent is a stupid and useless bit of bureaucracy which encourages racism - it's much easier for landlords not to rent to someone who looks or sounds foreign, especially at the bottom end of the market where people might not have passports.
Edit: I should have said something like discrimination on grounds of race or national origin. The landlords are not motivated by a desire to discriminate, but to avoid having to carry out checks, especially if they do not understand the requirements with regard to visas - easier just to let to someone who (they think!) is definitely British.
> I am not sure a government digital ID would help with dealing with businesses.
I am pretty sure it would if it was allowed to. Once businesses have one usable source of ID and/or residence, they don't have to create and maintain elaborate alternative ways of establishing this information.
I come from a country where there is a national ID and lived in the UK for a while (before there was any form of electronic registration of foreign workers). I facepalmed every time I had to interact with a business requiring ID or address, or with the government. This is a long-solved problem and they refuse to use the known, good solution. They even managed to make a national ID into law around 2010 and then scrap it a year or so later when a new government came into power. I still can't believe it.
Can relate. The UK electronic eVisa app was pure garbage. The major redeeming feature of the UK civil service and the various regulatory quagmires is that they're effectively open source. You (or Claude) can read through their practice manuals or policies and find a work-around. But my goodness is it annoying until you figure that out. Another fascinating bit is you may think the various departments are connected but they are not. The nice looking UK Government Digital Service (GDS) Design System gives everything a veneer of connected competence, but under the bonnet, that slick UI signal is as reliable as a posh accent. Don't become a migrant if you don't have to.
And I guess people who vote for less government are people who have never dealt with good and efficient government, only government destroyed by people who don't want it to work or by lobbying companies.
Where I live, e-government is super smooth, like having your taxes filed for you - all you have to do is sign it with your e-ID. The e-ID is, as I see it, actual safety for me as a citizen, with delegated security so that the SP only gets verification and the info actually needed from the IDP.
The government is famously dysfunctional in the UK. It is unlikely to get any better. So most of us would rather they do nothing than make the situation worse.
I've lived in other countries in Europe and their government isn't that smooth either. In fact the UK was much better than Spain when I lived there, though things may have changed now.
Sorry people are downvoting you, I guess some folks think the downvote is for people they disagree with. But this is my experience too: government that works and is smooth and efficient can turn one into a fan.
People who vote for less government forget that it only empowers big businesses, whilst the protections for small businesses and individuals are muscled out.
That's one advantage of the US' 1950s paper-based approach to everything, or at least as it was 20-odd years ago. As a non-US citizen I opened a bank account with barely any ID (no drivers license or phone number), they gave me a box of paper cheques that I had to look up online to figure out how to use because I had no idea what to do with them, I got an SSN (still not quite sure how I managed that), filed IRS tax returns, and somehow got a complete US identity and whatnot set up which seemed to be based mostly on the fact that everything was built around paper records and no-one talked to anyone else about what they had on file.
I moved from the UK to Scandinavia, where there is a federated ID (BankID) that you use to access pretty much everything and it removes all this complexity that the UK has. I can't imagine life without such an easy system. One of the downsides is that there's a bit of a catch-22 to getting an ID in the first place but once you've managed that it's done.
A key difference is the relationship between the people and the government and the motivation behind creating a federated ID. There's definitely an element of governmental monitoring to the Scandinavian model but the relationship with the government is less adversarial than in the UK.
The point is that the government tried to sell this as helping against illegal immigration by enabling effective right-to-work checks, and this was a blatant lie since it would not change anything: right-to-work checks are already carried out, and legal immigrants have eVisas that are checked online by employers.
It is obvious that the government is being deceitful. No one wants ID cards except the Tony Blair Institute.
"There's much more to the story because the UK has many digital ID aspects already in place-- such as for work visas and residence permits-- but these not coordinated into a whole."
They're determined to bring it in and will attempt to do so gradually. You already need ID for so many things in the UK, so it is a lie in some ways.
Photo ID is common already. Not just driving licences (with a "c") and passports, but in numerous other forms. In Scotland, young people have to prove their age continually, and they have a choice of these or state issued photo bus passes, Young Scot cards (no idea who issues these but commonly used as ID) and student IDs. They're definitely being conditioned into it.
I live in Scotland, and the scenario here is pretty similar to England.
If you live here you hear about England non-stop on the news. In fact, many papers and news channels report England-only developments as if they apply to the rest of the UK.
I generally don't pay attention to the news at all anymore (the vast majority of it is rage bait). I think the last time I paid attention to the BBC news was when the Bibby Stockholm barge was being moored in Portland. The way the news anchor was talking about it, it was as if it was full of zombies with the T-Virus on board, while the footage just had a tugboat pulling a barge. It was utterly ridiculous.
Outside of that the news seems to be very focused on what happens in London/Westminster, Ukraine, Palestine or Trump. I don't really care about any of those.
I only pretty much care about things like Digital ID and stuff like OSA.
However Scotland (much like Northern Ireland) seems like its own weird little microcosm.
I don't really understand the political landscape outside of England, and whenever I see statements made by the main party up there (the SNP) they seem to be utterly ridiculous jingoist anti-English nonsense that feels like it stems from Braveheart. I am not going to listen to a politician that is basically painting me out to be the enemy, which is odd since most Scottish people I've spoken to are quite friendly.
I mean yeah they will have to show it if they’re buying booze, cigs, or getting a discounted travel ticket. But I don’t think that’s unreasonable, and “conditioning” feels overly dramatic.
They aren't getting a discounted travel ticket, they're getting free bus travel in return for carrying around a photo ID all the time. (I don't agree with fourteen and fifteen year olds being able to travel on the bus for free at ten or eleven at night on Friday or Saturday and getting up to no good on the public coin. It was sold to the public as a school bus replacement and/or reducing car use. It is an obvious attempt to normalise ID cards.)
You could get a prepaid (pay as you go) SIM for £1 from any phone service shop in a minute.
A few years ago I could get a "Passport" account from HSBC without a UK phone at all and without proof of address; I was simply asked to show my employment contract to the clerk.
And as for the rest -- many EU citizens live in the UK who are used to having ID cards and are used to their utility. Many of those cards are VASTLY superior to what Labour was trying to impose.
The thing is, there's a fundamental difference between these and the ID card UK's Labour wanted to introduce.
It wasn't to make things EASIER. If it was, you'd get a plastic card with NFC, a photo, perhaps your UTR or NINo and date of birth, with storage to hold your Oyster card or other sorts of ID. It's a solved and tried problem.
It wasn't to make things safer - otherwise you could use it to sign your documents with a certificate - securely, reading your ID by your phone. You could use your ID to ANONYMOUSLY (yes) confirm your age. Not only offline (when buying alcohol as a Muslim for example), but also online.
It was openly planned to be used as a tool of control and oppression. The PM claimed it would make it easier to control the pesky immigrants (lying that it would make employing someone illegally impossible - lying, because the Right to Work scheme is in force right now, and it's also completely online).
It was supposed to be a bind, not a tool. An online-only identifier is a nightmare waiting to happen for every single European with settled status -- NOTHING to prove legal status except a computer saying "yay". People lost jobs and homes, and got bounced off the border, because "the computer" wrongly claimed they were not there legally.
THIS is what it was supposed to be in the first place.
It's okay if you don't believe me, but in that case please look up three examples: the lists of features of the Estonian, Dutch and Polish ID cards, and what things you can do with each. Consider the convenience and safety, and THEN compare it with the online-only solution touted by Labour, its intended use and features. Not a list of the documents it will supposedly replace, but features.
And that in the 21st century, with eIDAS 2.0 in force - so the best practices are available to pick and use.
Couldn't have said it better - you are 100% correct.
And yes - regarding a UK phone number: you can buy a pre-paid SIM in literally every single supermarket or corner shop / convenience store in the country like you would buy a can of Coke or a pack of chewing gum. This is a non-issue.
To get a UK phone number, is it not enough to get a tourist plan? Most places I’ve been have tourist SIM cards at the airport, and more recently tourist eSIM plans.
These examples could both be much better IMHO with a top comment block that describes the purpose of the functionality and shows good usage examples. Something like this below, and ideally using runnable doc comments to help keep the comment correctly explaining the code.
Replace symbol placeholders in the input string with translated values. Scan the string for symbol placeholders that use the format "$foo". The format is a dollar sign, then an ASCII letter, then optional word characters. Each recognized symbol is replaced with its corresponding value.
Symbols are only replaced if the symbol exists i.e. getSymbol(String) returns non-null, and the symbol has not already been replaced in this invocation.
Example:
- input = "Hello $name, welcome to $city!"
- output -> "Hello Alice, welcome to Boston!"
Return the string with symbol placeholders replaced.
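As a sketch of the behavior that doc comment describes (in Rust for illustration, with a plain HashMap standing in for the library's getSymbol lookup; the "replace each symbol only once per invocation" rule is omitted for brevity):

```rust
use std::collections::HashMap;

// Replace "$foo"-style placeholders: '$', an ASCII letter, then word chars.
// Unrecognized symbols are left in place.
fn replace_symbols(input: &str, symbols: &HashMap<&str, &str>) -> String {
    let mut out = String::new();
    let bytes = input.as_bytes();
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i] == b'$' && i + 1 < bytes.len() && bytes[i + 1].is_ascii_alphabetic() {
            // take the letter plus any following word characters
            let start = i + 1;
            let mut end = start + 1;
            while end < bytes.len()
                && (bytes[end].is_ascii_alphanumeric() || bytes[end] == b'_')
            {
                end += 1;
            }
            match symbols.get(&input[start..end]) {
                Some(value) => out.push_str(value),   // known symbol: substitute
                None => out.push_str(&input[i..end]), // unknown: keep "$foo" verbatim
            }
            i = end;
        } else {
            let ch = input[i..].chars().next().unwrap();
            out.push(ch);
            i += ch.len_utf8();
        }
    }
    out
}

fn main() {
    let symbols = HashMap::from([("name", "Alice"), ("city", "Boston")]);
    assert_eq!(
        replace_symbols("Hello $name, welcome to $city!", &symbols),
        "Hello Alice, welcome to Boston!"
    );
}
```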
I'm the author of the Rust assertables crate. It provides floating-point assert macros much as described in the article: https://github.com/SixArm/assertables-rust-crate/blob/main/s...