nicula's comments | Hacker News

Yeah, it's quite interesting. I don't think I've seen this come up in anything besides that talk, which is about the HFT space.

However, I have heard a few times about cases where you'd trade away better average performance that comes with spikes (e.g. in response time) in favor of worse average performance with lower variance.


Some people already mentioned this in the r/cpp discussion. Small correction: 256 is not the correct number of iterations, since if all elements in that slice are even, then your 8-bit counter will wrap around to zero, which can lead to a wrong answer. What you want is 255 iterations.
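
Concretely, a fixed chunked version would look something like this (a rough, untested sketch using the "count the even bytes" setup from the post):

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Sketch: 8-bit partial counts over chunks of at most 255 elements,
    // so the uint8_t counter can never wrap around to zero.
    std::uint64_t count_even_chunked(const std::vector<std::uint8_t>& vals) {
        std::uint64_t total = 0;
        auto it = vals.begin();
        while (it != vals.end()) {
            const std::ptrdiff_t chunk =
                std::min<std::ptrdiff_t>(255, vals.end() - it);
            const std::uint8_t partial = static_cast<std::uint8_t>(
                std::count_if(it, it + chunk,
                              [](std::uint8_t v) { return v % 2 == 0; }));
            total += partial;
            it += chunk;
        }
        return total;
    }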

I've looked at the generated assembly for such a solution and it doesn't look great. I'm expecting a significant speed penalty, but I haven't had the time to test it today. Will probably do so tomorrow.


255 is not ideal either, because it leaves a partial vector of either 15 or 31 elements at the end. 256-V, where V is the vector size, is better: 240 for SSE2/NEON or 224 for AVX2.

This is still below optimal, because the compiler will reduce to a uint8. Both SSE2 and NEON support reducing to a wider value, via _mm_sad_epu8 and vpadal_u8 respectively. This allows for 255 iterations in the inner loop instead of 15 or 7.
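
Something along these lines, I think (a rough, untested SSE2 sketch, not the exact code under discussion): keep 16 byte-wide counters in a vector, and flush them into 64-bit lanes via _mm_sad_epu8 at most every 255 vectors so that no byte counter can overflow.

    #include <cstddef>
    #include <cstdint>
    #include <emmintrin.h>  // SSE2

    std::uint64_t count_even_sse2(const std::uint8_t* data, std::size_t n) {
        const __m128i one  = _mm_set1_epi8(1);
        const __m128i zero = _mm_setzero_si128();
        __m128i wide = _mm_setzero_si128();  // two 64-bit partial sums
        std::size_t i = 0;

        while (n - i >= 16) {
            std::size_t vecs = (n - i) / 16;
            if (vecs > 255) vecs = 255;  // byte lanes stay <= 255

            __m128i bytes = _mm_setzero_si128();
            for (std::size_t v = 0; v < vecs; ++v, i += 16) {
                const __m128i chunk = _mm_loadu_si128(
                    reinterpret_cast<const __m128i*>(data + i));
                // 0xFF where the byte is even; subtracting -1 adds 1.
                const __m128i is_even =
                    _mm_cmpeq_epi8(_mm_and_si128(chunk, one), zero);
                bytes = _mm_sub_epi8(bytes, is_even);
            }
            // Reduce the 16 byte counters into two 64-bit lanes.
            wide = _mm_add_epi64(wide, _mm_sad_epu8(bytes, zero));
        }

        std::uint64_t total =
            static_cast<std::uint64_t>(_mm_cvtsi128_si64(wide)) +
            static_cast<std::uint64_t>(
                _mm_cvtsi128_si64(_mm_unpackhi_epi64(wide, wide)));
        for (; i < n; ++i) total += (data[i] % 2 == 0);  // scalar tail
        return total;
    }

On NEON, vpadal_u8 plays the same role, accumulating the byte counters into wider lanes.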


Great observations, thanks!

I wrote the code that you suggested (LMK if I understood your points): https://godbolt.org/z/jW4o3cnh3

And here's the benchmark output, on my machine: https://0x0.st/8SsG.txt (v1 is std::count_if(), v2 is the optimization from my blog post, and v3 is what you suggested).

v2 is faster, but v3 is still quite fast.


Yeah, the difference you're seeing is likely due to the inner loop doing so few iterations, in addition to not being unrolled. A hand-rolled version doing 63 iterations of an x4 unrolled loop should be able to saturate the execution core (it'll be bottlenecked by load throughput). But it'll be tough to get the compiler to generate that without intrinsics.


The fastest approach, I would think, is to process 255 vectors at a time, using an accumulation vector whose k-th byte holds the number of evens seen at the k-th byte position across the vectors in that 255-vector chunk. But I don't know how to make autovectorization do that; I could only do it with intrinsics.


Like @wffurr mentioned, this is indeed discussed in a footnote. I just added another remark to the same footnote:

"It's also debatable whether or not Clang's 'optimization' results in better codegen in most cases that you care about. The same optimization pass can backfire pretty easily, because it can go the other way around too. For example, if you assigned the `std::count_if()` result to a local `uint8_t` value, but then returned that value as a `uint64_t` from the function, then Clang will assume that you wanted a `uint64_t` accumulator all along, and thus generates the poor vectorization, not the efficient one."


I'm not sure how "it can go the other way around too" -- in that case (assigning to a uint8_t local variable), it seems like that particular optimisation is just not being applied.

Interestingly, if the local variable is "volatile uint8_t", the optimisation is applied. Perhaps with an uint8_t local variable and size_t return value, an earlier optimisation removes the cast to uint8_t, because it only has an effect when undefined behaviour has been triggered? It would certainly be interesting to investigate further.

In general I agree that being more explicit is better if you really care about performance. It would be great if languages provided more ways to specify this kind of thing. I tried using __builtin_expect to trigger this optimisation too, but no dice.

Anyway, thanks for the interesting article.


> I'm not sure how "it can go the other way around too" -- in that case (assigning to a uint8_t local variable), it seems like that particular optimisation is just not being applied.

So the case that you described has 2 layers: the internal std::count_if() layer, which has a 64-bit counter, and the 'return' layer of the count_even_values_v1() function, which has an 8-bit type. In this case, Clang propagates the 8-bit type from the 'return' layer all the way to the inner std::count_if() layer, which effectively means that you're requesting an 8-bit counter, and thus Clang generates the efficient vectorization.

However, say that you have the following 3 layers: (1) internal std::count_if() layer with a 64-bit counter; (2) local 8-bit variable layer, to which the std::count_if() result gets assigned; (3) 'return' layer with a 64-bit type. In this case the 64-bit type from layer 3 gets propagated to the inner std::count_if() layer, which will lead to a poor vectorization. Demo: https://godbolt.org/z/Eo13WKrK4 . So this downwards type-propagation from the outermost layer into the innermost layer doesn't guarantee optimality. In this case, the optimal propagation would've been from layer 2 down to layer 1 and up to layer 3.
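
Rough sketches of the two shapes (the function names here are made up; the real code is in the Godbolt links):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // 2 layers: the 8-bit return type reaches std::count_if(), so Clang
    // picks an 8-bit accumulator and vectorizes efficiently.
    std::uint8_t two_layers(const std::vector<std::uint8_t>& vals) {
        return static_cast<std::uint8_t>(std::count_if(
            vals.begin(), vals.end(),
            [](std::uint8_t v) { return v % 2 == 0; }));
    }

    // 3 layers: the count goes through an 8-bit local, but the 64-bit
    // return type is what gets propagated, so the counter stays 64-bit
    // and the vectorization is poor.
    std::uint64_t three_layers(const std::vector<std::uint8_t>& vals) {
        const std::uint8_t narrow = static_cast<std::uint8_t>(std::count_if(
            vals.begin(), vals.end(),
            [](std::uint8_t v) { return v % 2 == 0; }));
        return narrow;
    }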

Note: I'm not familiar with how the LLVM optimization pass does this exactly, so take this with a huge grain of salt. Perhaps it does indeed 'propagate' the outermost type to the innermost layer. Or perhaps the mere fact that there are more than 2 layers makes the optimization pass not happen at all. Either way, the end result is that the vectorization is poor.


I've had a look at what's going on in LLVM, and we're both a bit wrong :)

This optimisation is applied by AggressiveInstCombinePass, after the function has been completely inlined. In cases where it is applied, the i64 result of the count is truncated to i8, and this gets propagated to the counter.

In the case where the result is assigned to a local variable, an earlier pass (before inlining) turns a truncate (for the cast) followed by a zero extend (for the return) into an and with 0xff. This persists, and AggressiveInstCombinePass then doesn't propagate this to the counter.
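
In C++ terms, that earlier rewrite is roughly the following (an illustration only, not the actual IR):

    #include <cstdint>

    // What the source expresses: truncate to 8 bits, then zero-extend.
    std::uint64_t as_written(std::uint64_t count) {
        const std::uint8_t narrowed = static_cast<std::uint8_t>(count);  // trunc
        return narrowed;                                                 // zext
    }

    // What the earlier pass leaves behind: a single mask, which
    // AggressiveInstCombinePass then doesn't propagate to the counter.
    std::uint64_t after_canonicalisation(std::uint64_t count) {
        return count & 0xff;
    }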

I've posted some selected bits of LLVM IR here:

https://gist.github.com/tomjnixon/d205a56ffc18af499418965ab7...

These come from running clang with "-mllvm=-print-after-all" and grepping for "^define.*_Z20count_even_values_v1RKSt6vectorIhSaIhEE"

This is why I don't see this as an optimisation pass "backfiring" or "go[ing] the other way around" (well, except for the "trunc,extend->and" one, which we weren't talking about). Rather, it's just an optimisation not being applied. That might just be a language thing.


Thanks for looking into it!

I modified the footnote to get rid of the misleading statements regarding the 'backfiring' of the optimization. :)


Wouldn't that violate the as-if rule? If you assign to a u8 in layer 2 then the compiler must truncate regardless of the widening of the value upon return. It can't just ignore the narrowing assignment.


At the very end there's a "movzx eax, dl", i.e. it zero-extends the low 8 bits of the accumulated value.


Just to correct my own comment:

> with an uint8_t local variable and size_t return value, an earlier optimisation removes the cast to uint8_t, because it only has an effect when undefined behaviour has been triggered

In this case, there is no undefined behaviour, because a narrowing cast to an unsigned type is well-defined. So, this could never have been a good explanation.
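
For example:

    #include <cstdint>

    // Conversion to an unsigned integer type is defined as modulo 2^N,
    // so there is no undefined behaviour here.
    static_assert(static_cast<std::uint8_t>(300) == 44,
                  "300 mod 256 == 44");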


haha I didn't even notice that


This issue doesn't require large switch tables in order to show up. Even if you have 4 cases and the rest of them are default'ed, Clang 18 optimizes that into a lookup table, while Clang 19 uses the (potentially) inefficient labels+jumps approach: https://godbolt.org/z/Y6njP8j38
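
A reduction in that spirit looks roughly like this (a made-up example, not the exact code from the Godbolt link):

    // Four explicit cases, everything else defaulted. Per the Godbolt
    // comparison above, Clang 18 turns this kind of switch into a lookup
    // table, while Clang 19 emits a chain of compares and jumps.
    int classify(int x) {
        switch (x) {
            case 0: return 7;
            case 1: return 42;
            case 2: return 13;
            case 3: return 99;
            default: return -1;
        }
    }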

This whole investigation started because I was writing some Rust code with a couple of small `match`es, and for some reason they weren't being optimized to a lookup table. I wrote a more minimal reproduction of that issue in C++ and eventually found the Clang regression. Since Rust also uses LLVM, `match`es suffer from the same regression (depending on which Rust version you're using).


So it's not a Clang regression per se, it's an issue with the LLVM core? Clang is just a frontend, and Rust AFAIK does not use it at all. If you run LLVM 18's `opt` on bitcode generated by Clang 19 and then compile it, does it also generate the same bad assembly?


> So it's not a Clang regression per se, it's an issue with the LLVM core?

Yes.

> If you run LLVM 18's `opt` on bitcode generated by Clang 19 and then compile it, does it also generate the same bad assembly?

No. If you pass the LLVM IR bitcode generated by Clang 18 to Clang 19, then the assembly is good.

I called it a 'Clang regression' in the sense that the way in which I discovered and tested this difference in performance was via Clang. So from a typical user's perspective (who doesn't care about the inner workings and distinct components of Clang), this is a 'Clang regression'.


Oh wow didn't know about this. Somebody else posted it for me. Sorry for the dup:)

