Environment: Windows XP and 7, Visual Studio 2008, C++.
Experience level: novice with TCP/IP, but do have the application working with CAsyncSocket.
The application receives telemetry data from some hardware (it arrives fast and furious), processes it, and dispatches the results to a display device via TCP/IP. It sends data to the client but never receives from the client.
Setup: the TCP “Manager” is started by the main application and it listens for the client. On connect, it creates the “Sender” that does all the sending.
Question: what is the best (read: simple and maintainable) way for the Manager to get the Sender pointer to the main application so it can use the Sender? And when the client closes the connection, how is the application informed of that?
Tentative proposal: the application provides the Manager with a pointer to itself. When the connect is made the Manager calls an application method such as
Set_TCP_Sender_Pointer( C_Sender_Class *new_sender_pointer );
The Manager can pass the pointer (to the main application) to the Sender on creation. When the Sender gets a close from the client, it calls the same method using a NULL for the argument. This makes the application aware there is no Sender. Then the Sender exits. (There is only one thread so I think there is no concern about the application using a pointer to a non-existent Sender.)
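A minimal sketch of that proposal (the class and member names other than Set_TCP_Sender_Pointer are invented for illustration; C_Sender_Class is reduced to a stub):

```cpp
#include <cassert>
#include <cstddef>

class C_Sender_Class {};  // stub standing in for the real Sender

// The application exposes one method that both the Manager (on connect)
// and the Sender (on client close) call.
class C_Application {
public:
    C_Application() : m_sender(NULL) {}

    // Manager calls this with the new Sender on connect; the Sender
    // calls it with NULL when the client closes, just before it exits.
    void Set_TCP_Sender_Pointer(C_Sender_Class* new_sender_pointer) {
        m_sender = new_sender_pointer;
    }

    bool Has_Sender() const { return m_sender != NULL; }

private:
    C_Sender_Class* m_sender;  // NULL means "no client connected"
};
```

Since there is only one thread, the pointer can never be observed half-updated, which is what makes this simple scheme safe.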
I am pretty sure that will work, but if you had to pick up on this project, would it be easy to understand and maintain? Presume you are not an expert with TCP/IP in the Microsoft world.
I am open to alternative suggestions.
After writing the OP I outlined the process with all the steps then began to implement.
I have started using a common directory for re-usable code. In order for the Manager to call a method in its creator it must know the name of the owner's class. To do that I added a forward declaration in the Manager.
However, this means that when the class is re-used, and the owner has a different name, the Manager must be changed.
There is another indicator that this is a bad practice. After the forward declaration in the .h file, the .cpp file needs to include the owner's .h file. The common code resides in another directory and cannot find the main project's headers unless the path is specifically spelled out. Then, if the project is moved or renamed, the common code must change.
There may be a way to do this automatically using directives and path names within Visual Studio. But I now think that even if it can be done, it would be misguided.
A utility class can call "down" the hierarchy to objects it creates, but should not call up to its owner. While the concept of it calling up may make the owner's code neater, it represents an inversion of authority. (Regardless of how the methods are named, a call up is an inversion.)
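For reference, if a call up is kept, the naming and include-path problems above can be avoided by having the re-usable Manager define an abstract interface that any owner implements. The common code then never names or includes the owner. A sketch under that assumption (all names here are invented):

```cpp
#include <cassert>
#include <cstddef>

class C_Sender_Class;  // the Manager may forward-declare its OWN classes freely

// Declared by the Manager, in the common directory. Any owner, in any
// project, implements it; the Manager never knows the owner's class name.
class I_Sender_Owner {
public:
    virtual ~I_Sender_Owner() {}
    virtual void On_Sender_Created(C_Sender_Class* sender) = 0;
    virtual void On_Sender_Closed() = 0;
};

class C_TCP_Manager {
public:
    explicit C_TCP_Manager(I_Sender_Owner* owner) : m_owner(owner) {}

    // In the real code these would be driven by the socket's
    // OnAccept/OnClose notifications; simulated here.
    void Handle_Connect(C_Sender_Class* sender) { m_owner->On_Sender_Created(sender); }
    void Handle_Close() { m_owner->On_Sender_Closed(); }

private:
    I_Sender_Owner* m_owner;
};
```

Moving or renaming the main project no longer touches the common code, because the dependency now points from the owner to the common directory only.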
The main application uses pointers into the lower-level objects to accomplish its tasks. How does it become aware that the lower level needs to exit, or perhaps has already exited?
When the TCP sending object detects that the client has closed the connection, it does not exit right away. It waits until the upper-level code calls the method to send the data; the lower level then returns an error code stating that the data has not been sent and that the object is exiting.
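A sketch of that deferred-exit handshake (the names and status codes are invented for illustration):

```cpp
#include <cassert>

// Hypothetical status codes the Sender returns from its send method.
enum Send_Status { SEND_OK, SEND_CLOSED_EXITING };

class C_Sender_Class {
public:
    C_Sender_Class() : m_client_closed(false) {}

    // In the real code this would be driven by CAsyncSocket::OnClose().
    // Note: the Sender only records the fact; it does not exit yet.
    void On_Client_Close() { m_client_closed = true; }

    Send_Status Send_Data(const char* /*data*/, int /*length*/) {
        if (m_client_closed)
            return SEND_CLOSED_EXITING;  // caller drops its pointer; Sender exits
        // ... the actual CAsyncSocket send would go here ...
        return SEND_OK;
    }

private:
    bool m_client_closed;
};
```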
I generally do not like exceptions, but this appears to be a time when an exception is warranted. When the lower level must terminate unexpectedly, an exception can work its way back to the upper level, and the exception handler can NULL the pointer.
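A sketch of that exception path (names invented; std::runtime_error stands in for whatever exception type the lower level would actually throw):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>

class C_Sender_Class {
public:
    C_Sender_Class() : m_client_closed(false) {}
    void On_Client_Close() { m_client_closed = true; }

    void Send_Data() {
        if (m_client_closed)
            throw std::runtime_error("client closed the connection");
        // ... actual send would go here ...
    }

private:
    bool m_client_closed;
};

// Upper level: returns true if the data went out, false if the Sender
// terminated -- in which case the handler has NULLed the pointer, so the
// rest of the application knows there is no Sender.
bool Send_Telemetry(C_Sender_Class*& sender) {
    if (sender == NULL)
        return false;
    try {
        sender->Send_Data();
        return true;
    } catch (const std::runtime_error&) {
        sender = NULL;  // application now knows the Sender is gone
        return false;
    }
}
```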
I personally never used exceptions for ATL projects. Now a colleague votes strongly for throwing exceptions instead of returning HRESULTs. I did some research, and even here on CodeProject I don't find much about ATL and exceptions. It appears to me that exceptions in ATL are not very popular. So I would like to know from you ATL guys:
Do you use exceptions? Or do you simply return an HRESULT?
I used a combination. A colleague and I developed an exception object with various constructors that could be thrown with, for example: an HRESULT, a string describing what failed, the name of the file it was thrown in, and the line number within that file.
The exception object constructor then took care of writing the error as an event to the System Event log as well as looking up any messages from the HRESULT, or accessing any other error handling mechanism needed.
This resulted in code uncluttered with complicated error handling and a standard way of making sure that the log showed not only an error code or message, but a comment on what was being attempted, which .cpp file it was detected in, and the line number within that file.
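A simplified sketch of such an exception object (all names are invented; the real one also wrote the event to the System Event Log and looked up messages for the HRESULT, which is omitted here). The macro captures __FILE__ and __LINE__ automatically at the throw site:

```cpp
#include <cassert>
#include <sstream>
#include <string>

typedef long HRESULT;  // stand-in; on Windows this comes from the SDK headers

class CAppException {
public:
    CAppException(HRESULT hr, const std::string& what,
                  const char* file, int line)
        : m_hr(hr), m_what(what), m_file(file), m_line(line) {
        // The real version logged to the System Event Log here.
    }

    HRESULT Hr() const { return m_hr; }

    // One standard line: file(line): hr=... description
    std::string Describe() const {
        std::ostringstream os;
        os << m_file << "(" << m_line << "): hr=0x" << std::hex << m_hr
           << " " << m_what;
        return os.str();
    }

private:
    HRESULT m_hr;
    std::string m_what;
    std::string m_file;
    int m_line;
};

// Throw site stays a single uncluttered line.
#define THROW_APP_EX(hr, what) \
    throw CAppException((hr), (what), __FILE__, __LINE__)
```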
I agree when you say you would rather return an error code than throw an exception. I found that using exceptions locally makes the code clearer and imposes some structure. Having caught the exception locally and dealt with it (usually by recording exactly what went wrong), I too then preferred to return an error code.
Windows XP and 7, Visual Studio 2008 and 2010, MFC, C++
Application A supports Unicode and application B does not. Class Log_Writer does not support Unicode. Both applications use Log_Writer successfully, passing a char * to it.
The problem is within Log_Writer when it cannot open the log file. It uses AfxMessageBox() to display an error to the user. When Log_Writer is compiled within the A environment the error message is:
None of the 2 overloads could convert all the argument types.
What can be done so that Log_Writer will work in both environments?
An L"..." literal is wrong for ASCII-based compiles, since the L prefix generates a Unicode string. Use the _T() macro around all your string and character constants and they will be correctly generated as ASCII or Unicode depending on the project settings. Note: do not forget to #include <tchar.h> for the foregoing macro.
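For illustration, this is essentially what <tchar.h> provides (simplified; on Windows you include the real header rather than defining these yourself). With _UNICODE defined, TCHAR is wchar_t and _T() pastes on the L prefix; without it, TCHAR is plain char and _T() is a no-op, so the same source line compiles in both A's and B's environments:

```cpp
#include <cassert>
#include <string>

// Simplified re-creation of the <tchar.h> mechanism, for illustration only.
#ifdef _UNICODE
typedef wchar_t TCHAR;
#define _T(x) L##x
#else
typedef char TCHAR;
#define _T(x) x
#endif

// One line of source, correct in both ANSI and Unicode builds:
const TCHAR* const kOpenError = _T("Log_Writer: cannot open the log file");
```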
If you are writing code for Windows NT and not for Win9x and its friends, then forget about the A version of your program and simply stop maintaining it. This means much less trouble for programmers and cleaner, easier-to-read code. WinNT actually uses UTF-16 internally, so all the A functions are just wrappers that do string conversions before and/or after calling the W version. Win9x is almost dead, and the same is true for the A versions of programs; they are simply a heritage from old times.
You have a good point. My problem (and notice my phrasing) is that I dislike Unicode. All the various options make dealing with it an absolute pain.
If you (you in the general sense) want to use computers, then it is time to exit the hieroglyphics epoch and move into the age of alphabets. If you cannot fit your character set into a 256-character limit then, in my not-so-humble opinion, you do not have an alphabet.
Yeah, I know I may be in the minority, and no, this is not intended as the start of a flame war. Just my opinion.
I have spent WAY too much time trying to figure out the syntax of dealing with Unicode. And finally, I work in the military world, and my code will never ever be used in the world of hieroglyphs. Straight-up ASCII is much easier to deal with.
You have to deal with Unicode if you make applications to sell worldwide. 256 characters isn't enough, for example, for Chinese character sets (maybe for simplified Chinese), and China is a big market. Unicode isn't so painful if you use UTF-8, which is more natural in a C/C++ program because you can write "string" instead of L"string", and you have to convert from UTF-8 to UTF-16 only when you call the WinNT API. That's what I usually do, and this way your program becomes easy to port to other platforms too (Linux, Mac, ...).
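To show how small that boundary conversion is, here is a hand-rolled UTF-8 to UTF-16 converter for BMP characters only (illustration only; it assumes valid input, and on Windows you would simply call MultiByteToWideChar with CP_UTF8 instead):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Decodes 1-, 2-, and 3-byte UTF-8 sequences (i.e. the Basic Multilingual
// Plane, which covers Chinese) into UTF-16 code units.
std::vector<unsigned short> Utf8ToUtf16(const std::string& utf8) {
    std::vector<unsigned short> out;
    for (std::string::size_type i = 0; i < utf8.size(); ) {
        unsigned char b = static_cast<unsigned char>(utf8[i]);
        unsigned int cp;
        if (b < 0x80) {                   // 1-byte sequence (plain ASCII)
            cp = b;
            i += 1;
        } else if ((b & 0xE0) == 0xC0) {  // 2-byte sequence
            cp = (b & 0x1F) << 6
               | (static_cast<unsigned char>(utf8[i + 1]) & 0x3F);
            i += 2;
        } else {                          // 3-byte sequence (assumes valid input)
            cp = (b & 0x0F) << 12
               | (static_cast<unsigned char>(utf8[i + 1]) & 0x3F) << 6
               | (static_cast<unsigned char>(utf8[i + 2]) & 0x3F);
            i += 3;
        }
        out.push_back(static_cast<unsigned short>(cp));
    }
    return out;
}
```

Everything above the API boundary stays plain "string" literals; only the final call site pays for the conversion.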