What if the program remains open for an hour or three (I hope they close it sooner, but I have to think of the "what ifs")? Is the Sleep command going to be the right way to go, or am I going to create a problem when using it over an extended period of time?
Thanks, Wes. I was able to resolve the problem by installing Visual Studio 2012 Express and recommended components, removing the offending line from the Microsoft.Cpp.Platform.targets file (double-clicked error message), and compiling as Release.
It might depend on the implementation of the multimap. I checked it long ago, but if I remember right your erase() call sets the allocation back to a minimum, identical to the allocation performed by the default ctor of multimap (at least in the SGI STL implementation of Visual C++ I used at the time). Note that even newly created empty std containers have a small piece of memory preallocated, and some containers (like std::vector) don't shrink the size of the allocated memory area (capacity) even if you erase items.
A trick that seems to reset the allocated memory of any std container, regardless of STL implementation and container type, is the following:
typedef std::multimap<int,int> MyMap;

MyMap global_map;   // the "big" container whose memory we want to release

void FreeMapMemory()
{
    // We create a new map instance and we swap its contents with the other "big"
    // container instance. This swap operation replaces the pointers inside the
    // two containers so after the swap() the global_map contains the small newly
    // allocated blocks, and empty_map contains the big mem blocks previously
    // owned by global_map. Note: when empty_map runs out of scope it releases
    // the big block. This trick works with other stl container types too.
    MyMap empty_map;
    empty_map.swap(global_map);
}
Why mix std::strings and MFC CStrings? And yes, the memory used to store the map's contents is freed, since the map handles destroying its contents and the contents don't require any manual cleanup.
Here is the thing: in C/C++ every pointer is automatically converted to (const void*), and every non-const pointer is automatically converted to (void*). The msdn web page used a cast just because of the pointer arithmetic involved; as you see, they add an integer value to the start of the buffer: (LPCTSTR)m_sendBuffer + m_nBytesSent. They cast the buffer pointer to LPCTSTR because they want the addition to step the pointer with *byte* granularity. The problem with this is that LPCTSTR can translate not only to (const char*) but also to (const wchar_t*) (when the project character set is Unicode, and note that wchar_t is a type whose size is 2 bytes!), so I guess the MS guys made a mistake here and (LPCTSTR) is a bug (if the project character set is set to Unicode). They should have used (char*) or its equivalent in Windows: LPSTR or const char* or LPCSTR or something like that, something that is a *byte* pointer.

Your problem is that you try to convert an instance of your struct into a pointer. You can not convert an instance into a pointer! You can convert only a pointer into a different type of pointer, and in rare cases conversion might be needed between pointer and integral types.
GetSystemTime( &m_current_time );
int t1 = sizeof( m_current_time );
// This is the case where you try to convert your instance into a pointer incorrectly:
int chars_sent = m_C_Server_Send_Time_Socket->Send( (LPCTSTR) m_current_time, t1, 0 );

// Here is the correct way to do that:
int chars_sent = m_C_Server_Send_Time_Socket->Send( (LPCTSTR) &m_current_time, t1, 0 );

// Note that every pointer can be converted into (const void*)
// so the cast is totally unnecessary and you can write simply:
int chars_sent = m_C_Server_Send_Time_Socket->Send( &m_current_time, t1, 0 );
Note that using (LPCTSTR) is a bug even on Microsoft's side! You need to convert your struct pointer into a (char*) only if you want to step your pointer with byte precision!
// The following steps the pointer with sizeof(SYSTEMTIME) bytes in memory!!!!!
// The resulting pointer points to the first byte that follows the last byte of your struct.
SYSTEMTIME* p = &m_current_time + 1;
//Since pointers and arrays in C/C++ work very similarly the above code is identical to this:
SYSTEMTIME* p = &(&m_current_time)[1];
// Pointer arithmetic and array indexing behave very similarly.
// The following expressions step the pointer just by 1 byte (sizeof(char)) in memory!!!!!
// We basically index into our struct as if it was a byte array...
// This is what MS guys wanted to do but in some cases (with unicode character setting)
// their code steps the pointer with 2 byte granularity (sizeof(wchar_t)) that is a bug.
char* p = (char*)&m_current_time + 1;
// The statements below are also valid because <code>char*</code> (like any other non-const
// pointer) is automatically converted to both <code>void*</code> and <code>const void*</code>.
void* p = (char*)&m_current_time + 1;
const void* p = (char*)&m_current_time + 1;
Note that an addition or a subtraction on a pointer always steps the pointer by the size of the type the pointer points to (like when you index into an array of the specified type). For this reason you can not step void pointers without casting them into something else: the size of void isn't defined.
If we speak of Unicode and wchar_t then it's not guaranteed that the data is transferred per character over the network; the variable name they use for incrementing also reflects this: m_nBytesSent. Their code heavily relies on the fact that LPCTSTR==LPCSTR in their case. Changing to the Unicode charset would introduce a hidden bug that compiles silently. If you decide to write code that has to compile with both the ANSI and wide-char settings then using LPCTSTR is valid in many cases, but this is an exception. I myself question the usefulness of supporting both ANSI and wide-char these days (so I no longer use defines like LPCTSTR and LPSTR), since it's a pain in the ass to write a program that compiles with both settings, and today we can say that the majority of machines run NT, whose native encoding is UTF-16. It's also a pain to search for bugs that arise only with one of the settings.