
Overflow from signed arithmetic always results in UB. But an unsigned-to-signed conversion outside the range of the target type just results in implementation-defined behavior; see C17 §6.3.1.3 ("Signed and unsigned integers"). And most people these days (including me) take the simple approach of not supporting any implementation that doesn't just perform two's-complement wrapping for all conversions.


> And most people these days (including me) take the simple approach of not supporting any implementation that doesn't just perform two's-complement wrapping for all conversions.

C2x in fact requires 2's complement.


Not quite; C23 requires a two's-complement representation, but it doesn't require two's-complement wrapping for unsigned-to-signed conversions. §6.3.1.3 hasn't been changed, so it's still left to the implementation to guarantee that. The only easily-visible effects of requiring a two's-complement representation are that X_MIN == -X_MAX - 1 for all signed integer types, and in general that the object representations of x and -x - 1 are bitwise complements of each other, if the integer type has no padding bits.


Ah, so the “two’s complement from outer space” option (with X_MIN == -X_MAX and a trap representation in place of -X_MAX - 1) is also finally gone? I did not notice that, thanks.

C18 6.2.6.2p3 had:

> [It] is implementation-defined [...] whether the value with sign bit 1 and all value bits zero [...] is a trap representation or a normal value [for two’s complement].


Is there any modern-day architecture that doesn't use two's complement?


GPUs, notionally. I can't speak for truly "modern" GPUs, but I recall older GPUs implementing the language-level "char"/"byte" as a float (presumably with some minimal support to get the expected semantics, like clamping?).



