If you want to represent a value larger than 32 bits in an enum in MSVC++, you can use C++0x-style syntax to tell the compiler exactly which integral type to use to store the enum values. Unfortunately, by default an enum is always 32 bits, and while you can specify constants larger than 32 bits for the enum values, they are silently truncated to 32 bits.
For instance, the following doesn't compile, because Lorem::b is truncated so that Lorem::a and Lorem::b both have the value 1, giving the switch two identical case labels:
enum Lorem {
    a = 0x1,
    b = 0x100000001
} val;

switch (val) {
    case Lorem::a:
        break;
    case Lorem::b:
        break;
}
Unfortunately, the truncation of b's constant is not an error in itself, and the same enum without the switch statement compiles just fine:
enum Lorem {
    a = 0x1,
    b = 0x100000001
} val;
But you can explicitly specify that the enum should be represented by a 64-bit type, and then everything compiles as expected:
enum Lorem : UINT64 {
    a = 0x1,
    b = 0x100000001
} val;

switch (val) {
    case Lorem::a:
        break;
    case Lorem::b:
        break;
}
Fascinating, but really, most of the time the bug is in your code, and that's where you should look first. It is usually not the compiler's fault, or the OS's fault, or a loose wire in the CPU…
Cast a (long*) to a (long) and you get a W4 warning. However, the #defines are still set for 32-bit builds. This means that other parts of the code can make assumptions based on the #defines that are valid on 32-bit but generate 64-bit errors or warnings...
#ifdef _WIN64
...
WINUSERAPI
LONG_PTR
WINAPI
SetWindowLongPtrA(
__in HWND hWnd,
__in int nIndex,
__in LONG_PTR dwNewLong);
...
#else /* _WIN64 */
...
#define SetWindowLongPtrA SetWindowLongA
...
#endif /* _WIN64 */
...
On 64-bit everything is normal, but on 32-bit SetWindowLongPtrA is #defined to SetWindowLongA, which takes a LONG rather than a LONG_PTR. So take the following code snippet:
...
LONG_PTR inputValue = 0;
LONG_PTR error = SetWindowLongPtrA(hWnd, nIndex, inputValue);
...
This looks fine, but it generates warnings under the Wp64 flag: on a 32-bit build the call expands to SetWindowLongA, and passing a LONG_PTR (which /Wp64 treats as 64 bits wide via its __w64 annotation) where a LONG is expected gets flagged as a 64-bit portability problem.