Here is the thing: in C/C++ every pointer converts implicitly to (const void*), and every non-const pointer converts implicitly to (void*). The MSDN page uses a cast only because of the pointer arithmetic involved; as you can see, they add an integer to the start of the buffer: (LPCTSTR)m_sendBuffer + m_nBytesSent. They cast the buffer pointer to LPCTSTR because they want the addition to step the pointer with *byte* granularity. The problem with this is that LPCTSTR can translate not only to (const char*) but also to (const wchar_t*) (when the project character set is Unicode; note that wchar_t is a type whose size is 2 bytes!), so I guess the MS guys made a mistake here and (LPCTSTR) is a bug (if the project character set is set to Unicode). They should have used (char*) or its Windows equivalent, LPSTR, or const char* / LPCSTR, or something like that: something that is a *byte* pointer. Your problem is that you try to convert an instance of your struct into a pointer. You cannot convert an instance into a pointer! You can only convert a pointer into a different type of pointer, and in rare cases conversion might be needed between pointer and integral types.
GetSystemTime( &m_current_time );
int t1 = sizeof( m_current_time );
// This is the case where you try to convert your instance into a pointer incorrectly:
int chars_sent = m_C_Server_Send_Time_Socket->Send( (LPCTSTR) m_current_time, t1, 0 );

// Here is the correct way to do that:
int chars_sent = m_C_Server_Send_Time_Socket->Send( (LPCTSTR) &m_current_time, t1, 0 );

// Note that every pointer can be converted into (const void*),
// so the cast is totally unnecessary and you can write simply:
int chars_sent = m_C_Server_Send_Time_Socket->Send( &m_current_time, t1, 0 );
Note that using (LPCTSTR) is a bug even on Microsoft's side! You need to convert your struct pointer into a (char*) only if you want to step the pointer with byte precision!
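Here is how that MSDN-style partial-send loop might look with a proper byte pointer. This is just a sketch; m_socket, m_sendBuffer, m_nBytesSent and m_nBytesTotal are assumed member names, not code from the article.

// A sketch of the MSDN-style partial-send loop using a byte pointer.
void C_Server::ContinueSend()
{
    while (m_nBytesSent < m_nBytesTotal)
    {
        // (const char*) makes the addition advance one byte per unit,
        // regardless of the project character set.
        int sent = m_socket.Send((const char*)m_sendBuffer + m_nBytesSent,
                                 m_nBytesTotal - m_nBytesSent, 0);
        if (sent == SOCKET_ERROR)
        {
            if (CAsyncSocket::GetLastError() == WSAEWOULDBLOCK)
                return; // OnSend() fires when the socket is writable again
            break;      // a real error
        }
        m_nBytesSent += sent;
    }
}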
// The following steps the pointer by sizeof(SYSTEMTIME) bytes in memory!
// The resulting pointer points to the first byte that follows the last byte of your struct.
SYSTEMTIME* p = &m_current_time + 1;
// Since pointers and arrays in C/C++ work very similarly, the above code is identical to this:
SYSTEMTIME* p = &(&m_current_time)[1];
// Pointer arithmetic and array indexing behave very similarly.
// The following expressions step the pointer by just 1 byte (sizeof(char)) in memory!
// We basically index into our struct as if it was a byte array...
// This is what the MS guys wanted to do, but in some cases (with the Unicode character
// setting) their code steps the pointer with 2-byte granularity (sizeof(wchar_t)), which is a bug.
char* p = (char*)&m_current_time + 1;
// The statements below are also valid because char* (like any other non-const
// pointer) converts implicitly to both void* and const void*.
void* p = (char*)&m_current_time + 1;
const void* p = (char*)&m_current_time + 1;
Note that addition or subtraction on a pointer always steps the pointer by the size of the type the pointer points to (just like when you index into an array of that type). For this reason you cannot step void pointers without casting them to something else - the size of void isn't defined.
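A tiny test program makes the two step sizes visible; just a sketch that prints the distances in bytes:

#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEMTIME st;
    SYSTEMTIME* p1 = &st + 1;      // steps sizeof(SYSTEMTIME) bytes
    char* p2 = (char*)&st + 1;     // steps exactly 1 byte
    printf("struct step: %d bytes, byte step: %d byte\n",
           (int)((char*)p1 - (char*)&st), (int)(p2 - (char*)&st));
    return 0;
}

On Windows this prints a 16-byte step for SYSTEMTIME (eight WORD members) versus a 1-byte step for the char pointer.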
If we speak of Unicode and wchar_t, then it's not guaranteed that the data is transferred per character over the network; the variable name they use for incrementing also reflects this: m_nBytesSent. Their code relies heavily on the fact that LPCTSTR == LPCSTR in their case. Changing to the Unicode character set would introduce a hidden bug that compiles silently. If you decide to write code that has to compile with both the ANSI and wide-character settings, then using LPCTSTR is valid in many cases, but this is an exception. I myself question the usefulness of supporting both ANSI and wide-character builds these days (so I no longer use defines like LPCTSTR and LPSTR), since it's a pain in the ass to write a program that compiles with both settings, and today we can say that the majority of machines run NT, whose native encoding is UTF-16. It's also a pain to hunt for bugs that arise with only one of the settings.
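The hidden step-size change is easy to demonstrate. A sketch you can compile once with the ANSI and once with the Unicode character set:

#include <windows.h>
#include <tchar.h>
#include <cstdio>

int main()
{
    char buffer[8] = {};
    // With the Unicode charset LPCTSTR is const wchar_t*, so the +1 below
    // advances sizeof(wchar_t) == 2 bytes instead of 1.
    const char* stepped = (const char*)((LPCTSTR)buffer + 1);
    printf("sizeof(TCHAR) = %d, +1 stepped %d byte(s)\n",
           (int)sizeof(TCHAR), (int)(stepped - buffer));
    return 0;
}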
I searched for their code and I guess I found the piece of code the OP is talking about: http://msdn.microsoft.com/en-us/library/aa268613%28v=vs.60%29.aspx
They are using ANSI string literals (without the TEXT macro), so their code wouldn't compile with the Unicode charset; for this reason I guess they haven't tried this code with the Unicode setting, so the bug could easily hide there.
This is pretty cool. You guys can disagree about something, go to the facts, then come to an agreement on those facts. All without getting riled up in the slightest. Thanks for thinking and discussing this and thanks for setting a good example.
That's probably because we are just the same as you. Part of the reason for visiting this site is to learn new things from reading answers, getting links to new information, and discussing issues with others. And even after 40+ years working with computers, I still consider myself a novice.
One of these days I'm going to think of a really clever signature.
Becoming an "expert" even in an a very specialized area (small part of the whole computer science miracle) always involved finding out how stupid I am. Computer science covers too many areas for someone to learn even within his full lifespan, not to mention its rapid change over time...
After reading your first post a few times I think I have figured out your message.
In the first code block you are describing my error in the OP and providing the Whys of the situation.
In the second code block you are describing the techniques used to access the various components of the object. I did not ask that, but you provided it as a necessary bonus.
Please comment and confirm/deny my interpretation.
Edit: I addressed this to a) recognize him specifically, b) identify the post. However, every reader is invited to respond.
You are welcome! However, the cast to (char*) is not really an access of subcomponents; it's rather accessing the contents of the object as a binary byte array, which should be considered a primitive form of serialization. It might not work if you share the binary data, for example, between two machines that differ in endianness, or if the class/struct member alignment used by the compilers of those systems differs and you don't set the alignment explicitly (which isn't always possible)...
Note: In a simple program that isn't cross-platform, it's okay to serialize data this way. Let's say you write an exe that communicates over sockets with other machines - if all the machines run Windows and you copy the same executable to all of them, then everything will be fine, and it would be a mistake to overcomplicate your serialization in this case. However, the problems I wrote about (endianness, alignment) arise much more often on Linux, where the underlying machine architectures can differ significantly and the same program has to be recompiled.
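For illustration, the primitive serialization I mean is nothing more than copying the raw bytes - a sketch:

#include <windows.h>
#include <cstring>

int main()
{
    SYSTEMTIME original;
    GetSystemTime(&original);

    // "Serialize": the wire format is simply the raw bytes of the struct.
    char wire[sizeof(SYSTEMTIME)];
    memcpy(wire, &original, sizeof(original));

    // "Deserialize" on the receiving side: copy the bytes back.
    // Works only while both sides agree on endianness and struct layout.
    SYSTEMTIME received;
    memcpy(&received, wire, sizeof(received));
    return 0;
}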
There is nothing you can do since there is no direct back link from the Windows handle to the class object. You will have to modify both your exe and your DLL to pass the pointer across from one to the other.
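For example, the hand-over could look something like this (a sketch; MyClass and SetSharedObject are made-up names):

// Shared header (used by both exe and DLL); MyClass is whatever class the exe owns:
class MyClass;
extern "C" void SetSharedObject(MyClass* p);

// In the DLL:
static MyClass* g_shared = nullptr;

extern "C" __declspec(dllexport) void SetSharedObject(MyClass* p)
{
    g_shared = p;   // the DLL can call back into the exe's object from now on
}

// In the exe, right after creating the object:
//     SetSharedObject(&myObject);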
One of these days I'm going to think of a really clever signature.
Environment Windows 7, Visual Studio 2008, MFC, C++
Goal: create an asynchronous server and client class to demonstrate a simple server client operation. Base class for Server and Client is CAsyncSocket. IP address 127.0.0.1, port 49000
Status: The dialog has buttons for the server to Initialize, Listen, and Accept, so far. It also has buttons for the client to Initialize and Connect. Both have Close buttons. One dialog but two separate classes. Umm, make that three counting the CAsyncSocket class used by the server to communicate after the two have connected.
In the server group, the Initialize and Accept buttons appear to work correctly, in that order of course. The returned status is TRUE and the WSA error code is zero. The Listen button creates a new object to handle the transactions and gets WSA code 10035, WSAEWOULDBLOCK (would block), treated as an expected result. Good so far. Now, hopefully, the server object is waiting for the client.
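For context, this is roughly the sequence I believe the server buttons should perform (a sketch; the member names are placeholders of mine):

// Initialize button: create the listening socket on the port.
if (!m_listenSocket.Create(49000, SOCK_STREAM, FD_ACCEPT, _T("127.0.0.1")))
    return;   // check CAsyncSocket::GetLastError() on failure

// Listen button: mark the socket as listening; incoming connections
// are reported asynchronously through OnAccept().
if (!m_listenSocket.Listen())
    return;   // check CAsyncSocket::GetLastError() on failure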
Then the buttons for the client are selected: Initialize, WSA error = 0, then Connect to the same IP address and port. It gets WSA error code 10048, WSAEADDRINUSE, address already in use. Hmmm. Yeah, it is in use, but if it is not in use by a server then the client cannot connect.
I believe I have the code working as I expect, but there is a fundamental error in my logic. Is this sufficient info for someone to tell me what I have done wrong or omitted? What might that error be?
Following the OP, and per various tutorials I have found, I have created skeleton methods and virtual overrides everywhere I think there should be one. I have also set breakpoints in all these methods. The results are not what I expected, and quite interesting.
Note: The send-time class is instantiated in the C_Server::Accept() method called by the Accept button in the dialog, and passed into the Accept() method that I think is the one from the base class CAsyncSocket. Is that the correct action to perform?
After clicking the Connect for the Client class, the breakpoint is activated in C_Client::OnConnect( int nErrorCode)
The error code said the port was in use, but I am now thinking that maybe the error code should be expected and considered normal. Maybe I should write code in here for when the connect is successful. Per an example, this method has code to call CAsyncSocket::OnConnect( int x ); I presume that is the base class. That function has no code. Should there be anything there?
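Based on the examples I have seen, I am imagining the override should look something like this (a sketch; the names are placeholders):

void C_Client::OnConnect(int nErrorCode)
{
    if (nErrorCode != 0)
    {
        // Connection failed; nErrorCode is a WSA error code.
        TRACE(_T("Connect failed with WSA error %d\n"), nErrorCode);
        return;
    }
    // nErrorCode == 0: the connection is established and the socket
    // is ready for Send()/Receive() from this point on.
    CAsyncSocket::OnConnect(nErrorCode);
}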
Click Run in the debugger to see what happens and…
The breakpoint in C_Server::OnAccept( int error_code ) is called. Thanks to an example I found, it calls CAsyncSocket::OnAccept( error_code ), which again is empty. I am now presuming that despite the earlier error code, the server code, the part within the Windows API that I don’t see, has indeed accepted the connection and is ready to send data to and receive data from the client. Is this indeed the case? If so, is there something that should be done in this method?
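From the examples, the usual pattern seems to be to call Accept() from inside OnAccept(), handing the connection to a separate data socket (a sketch; m_dataSocket is a placeholder member of mine):

void C_Server::OnAccept(int nErrorCode)
{
    if (nErrorCode == 0)
    {
        // Hand the pending connection to a separate data socket;
        // the listening socket keeps listening for further clients.
        if (!Accept(m_dataSocket))
            TRACE(_T("Accept failed with WSA error %d\n"), GetLastError());
    }
    CAsyncSocket::OnAccept(nErrorCode);
}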
Clicking the debugger run one more time,
The breakpoint in C_Client::OnSend( int error_code ) is called.
Now this is puzzling. My code does not send anything. Is this telling me the client is expected to send something to the server? What should it send?
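From what I have read since, OnSend() only signals that the socket has become writable - nothing has to be sent. Something like this sketch (placeholder names) may be all that is needed:

void C_Client::OnSend(int nErrorCode)
{
    if (nErrorCode == 0 && m_bytesSent < m_bytesTotal)
    {
        // Resume a transfer that previously hit WSAEWOULDBLOCK;
        // if nothing is pending, doing nothing here is fine.
        ContinueSend();
    }
    CAsyncSocket::OnSend(nErrorCode);
}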
Thanks for your time