Here is the thing: in C/C++ every pointer is implicitly converted to (const void*), and every non-const pointer is implicitly converted to (void*). The MSDN web page used a cast just because of the pointer arithmetic involved; as you can see, they add an integer value to the start of the buffer: (LPCTSTR)m_sendBuffer + m_nBytesSent. They cast the buffer pointer to LPCTSTR because they want the addition to step the pointer with *byte* granularity. The problem with this is that LPCTSTR can translate not only to (const char*) but also to (const wchar_t*) (when the project character set is Unicode; note that on Windows wchar_t is a type whose size is 2 bytes!), so I guess the MS guys made a mistake here and (LPCTSTR) is a bug (if the project character set is set to Unicode). They should have used (const char*) or its Windows equivalent LPCSTR, something that is a *byte* pointer.
Your problem is that you try to convert an instance of your struct into a pointer. You cannot convert an instance into a pointer! You can only convert a pointer into a different type of pointer, and in rare cases a conversion might be needed between pointer and integral types.
GetSystemTime( &m_current_time );
int t1 = sizeof( m_current_time );
// This is the case where you try to convert your instance into a pointer incorrectly:
int chars_sent = m_C_Server_Send_Time_Socket->Send( (LPCTSTR) m_current_time, t1, 0 );
// Here is the correct way to do that:
int chars_sent = m_C_Server_Send_Time_Socket->Send( (LPCTSTR) &m_current_time, t1, 0 );
// Note that every pointer can be converted into (const void*)
// so the cast is totally unnecessary and you can write simply:
int chars_sent = m_C_Server_Send_Time_Socket->Send( &m_current_time, t1, 0 );
Note that using (LPCTSTR) is a bug even on Microsoft's side! You need to cast your struct pointer to (char*) only if you want to step the pointer with byte precision!
// The following steps the pointer by sizeof(SYSTEMTIME) bytes in memory!!!!!
// The resulting pointer points to the first byte that follows the last byte of your struct.
SYSTEMTIME* p = &m_current_time + 1;
// Since pointers and arrays in C/C++ work very similarly, the above code is identical to this:
SYSTEMTIME* p = &(&m_current_time)[1];
//Pointer arithmetic and array indexing behave very similarly.
// The following expressions step the pointer just by 1 byte (sizeof(char)) in memory!!!!!
// We basically index into our struct as if it was a byte array...
// This is what the MS guys wanted to do, but in some cases (with the Unicode character setting)
// their code steps the pointer with 2-byte granularity (sizeof(wchar_t)), which is a bug.
char* p = (char*)&m_current_time + 1;
// The statements below are also valid because char* (like any other non-const
// pointer) is implicitly converted to both void* and const void*.
void* p = (char*)&m_current_time + 1;
const void* p = (char*)&m_current_time + 1;
Note that addition or subtraction on a pointer always steps the pointer by the size of the type the pointer points to (just like when you index into an array of that type). For this reason you cannot step void pointers without casting them to something else: the size of void isn't defined.
If we speak of Unicode and wchar_t, then it's not guaranteed that the data is transferred per character over the network; the variable name they use for incrementing also reflects this: m_nBytesSent. Their code heavily relies on the fact that LPCTSTR==LPCSTR in their case. Changing to the Unicode charset would introduce a hidden bug that compiles silently. If you decide to write code that has to compile with both the ANSI and wide-char settings, then using LPCTSTR is valid in many cases, but this is an exception. I myself question the usefulness of supporting both ANSI and wide-char these days (so I no longer use defines like LPCTSTR and LPSTR), since it's a pain in the ass to write a program that compiles with both settings, and today we can say that the majority of machines run NT, whose native encoding is UTF-16. It's also a pain to search for bugs that arise only with one of the settings.
I searched for their code and I guess I found the piece of code the OP is talking about: http://msdn.microsoft.com/en-us/library/aa268613%28v=vs.60%29.aspx
They are using ANSI string literals (without the TEXT macro), so their code wouldn't compile with the Unicode charset; for this reason I guess they haven't tried this code with the Unicode setting, so the bug could easily hide there.
This is pretty cool. You guys can disagree about something, go to the facts, then come to an agreement on those facts. All without getting riled up in the slightest. Thanks for thinking and discussing this and thanks for setting a good example.
That's probably because we are just the same as you. Part of the reason for visiting this site is to learn new things from reading answers, getting links to new information, and discussing issues with others. And even after 40+ years working with computers, I still consider myself a novice.
One of these days I'm going to think of a really clever signature.
Becoming an "expert" even in an a very specialized area (small part of the whole computer science miracle) always involved finding out how stupid I am. Computer science covers too many areas for someone to learn even within his full lifespan, not to mention its rapid change over time...
After reading your first post a few times I think I have figured out your message.
In the first code block you are describing my error in the OP and providing the Whys of the situation.
In the second code block you are describing the techniques used to access the various components of the object. I did not ask that, but you provided it as a necessary bonus.
Please comment and confirm/deny my interpretation.
Edit: I addressed this to a) recognize him specifically, b) identify the post. However, every reader is invited to respond.
You are welcome! However, the cast to (char*) is hardly an access of subcomponents; it's rather accessing the contents of the object as a binary byte array, which should be considered a primitive form of serialization. It might not work if you share the binary data between, for example, two machines that differ in endianness, or if the class/struct member alignment used by the compilers of those systems is different and you don't set the alignment explicitly (which isn't always possible)...
Note: in a simple program that isn't cross-platform it's okay to serialize data this way. Let's say you write an exe that communicates over sockets with other machines: if all the machines run Windows and you copy the same executable to all of them, then everything will be fine, and it would be a mistake to overcomplicate your serialization in this case. However, the problems I wrote about (endianness, alignment) arise much more often on Linux platforms, where the underlying machine architectures can differ significantly and the same program has to be recompiled.