This seems like a good idea to me, since there's evidence that programmers are making this mistake, and the proposal is to add a warning only for the expressions that are most likely to be mistakes, that is, 2^{decimal integer literal} and maybe also 10^{decimal integer literal}. There are many constructs that are well-defined in C but where it is helpful to have warnings: use of = in condition expressions, implicit fallthrough in switch statements, misleading indentation, unsafe multiple-statement macros, and so on. Programming in C is hard enough that every bit of compiler support is valuable.
The power vs xor mistake is no doubt a common one, but I take issue with the 2^8, 2^16, 2^32 examples that got singled out because they are also common bitmasks. Should FLAG_BIT1 ^ FLAG_BIT3 really be a warning?
XOR may be rarely used, but 2^8 is not only one of the most common accidental XORs, it's also probably one of the more common legitimate uses of it.
So 2^8 is a warning, but 2^BITS_IN_BYTE is not? I don't think whether or not the preprocessor helped in making the expression is a good heuristic for whether or not it is a mistake.
A warning heuristic needs to have a low false positive rate; a low false negative rate is nice but is not necessary. The purpose of a warning is to detect some common errors without inconveniencing too many correct programs. If some other errors go undetected then that is a shame but at least it is no worse than the current situation.
Modern compilers do have insight into the source before it was preprocessed. You'll notice that clang and (modern) gcc will show you where the error is in the unexpanded source line, and then show the expansion of the macros. So when it detects "2^32", it can look back to see if it was a product of macro expansion or if the literal was directly written, and warn accordingly.
Interestingly, msvc can also do this, by virtue of not having a distinct preprocessor phase at all.