After writing the OP I outlined the process with all the steps, then began to implement.
I have started using a common directory for re-usable code. For the Manager to call a method in its creator, it must know the name of the owner's class. To do that I added a forward declaration in the Manager.
However, this means that when the class is re-used and the owner has a different name, the Manager must be changed.
There is another indicator that this is bad practice. After the forward declaration in the dot H file, the dot CPP file needs to reference the dot H file of the owner. The common code resides in another directory and cannot find the central code of the main project unless the path is specifically spelled out. Then, if the project is moved or renamed, the common code must change.
There may be a way to do this automatically using directives and path names within Visual Studio. But I now think that even if that can be done it would be misguided.
A utility class can call "down" the hierarchy to objects it creates, but should not call up to its owner. While the concept of it calling up may make the owner code neater, it represents an inversion of authority. (Regardless of how the methods are named, a call up is an inversion.)
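If the Manager genuinely must notify its creator, the usual way to break the compile-time dependency is an abstract interface that lives in the common directory. A minimal sketch, with invented names (IManagerOwner, Manager, and MainApp are not from the original code):

```cpp
#include <cassert>
#include <string>

// Hypothetical interface the Manager depends on instead of the concrete
// owner. Any owner that implements it can reuse the Manager unchanged,
// so no forward declaration of a project-specific class is needed.
struct IManagerOwner {
    virtual ~IManagerOwner() = default;
    virtual void OnManagerEvent(const std::string& what) = 0;
};

class Manager {
    IManagerOwner* m_owner;   // the common code only knows the interface
public:
    explicit Manager(IManagerOwner* owner) : m_owner(owner) {}
    void DoWork() { m_owner->OnManagerEvent("work done"); }
};

// A concrete owner in the main project; the common code never sees this type.
class MainApp : public IManagerOwner {
public:
    std::string lastEvent;
    void OnManagerEvent(const std::string& what) override { lastEvent = what; }
};
```

This keeps the dependency pointing from the project into the common directory, never the other way, so moving or renaming the project does not touch the common code.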
The main application uses pointers into the lower level objects to accomplish its tasks. How does it become aware that the lower level needs to exit, or maybe has even already exited?
When the TCP sending function detects that the client has closed the connection it does not exit right away. It waits until the upper level code calls the method to send the data. Then the lower level returns an error code stating that the data has not been sent and that the object is exiting.
I generally do not like exceptions, but this appears to be a time when an exception is warranted. When the lower level must terminate unexpectedly, an exception could work its way back to the upper level, where the exception handler can NULL the pointer.
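A sketch of that shape (Connection and PumpData are invented names, and std::runtime_error stands in for whatever the lower level would actually throw):

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical lower-level object that must terminate unexpectedly.
struct Connection {
    void Send() { throw std::runtime_error("peer closed connection"); }
};

// Upper level: the catch block is the single place where the pointer is
// invalidated, so no error code has to be threaded through every layer.
bool PumpData(Connection*& pConn) {
    try {
        pConn->Send();
        return true;
    } catch (const std::runtime_error&) {
        delete pConn;     // lower level is gone; release and NULL the pointer
        pConn = nullptr;
        return false;
    }
}
```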
I personally never used exceptions for ATL projects. Now a colleague votes strongly for throwing exceptions instead of returning HRESULTs. I did some research, and even here on CodeProject I don't find much about ATL and exceptions. To me it appears that exceptions in ATL are not very popular. So I would like to know from you ATL guys:
Do you use exceptions? Or do you simply return an HRESULT?
I used a combination. A colleague and I developed an exception object with various constructors that could, for example, be thrown with: an HRESULT, a string describing what failed, the file name it was thrown in, and the line number within that file.
The exception object constructor then took care of writing the error as an event to the System Event log as well as looking up any messages from the HRESULT, or accessing any other error handling mechanism needed.
This resulted in code uncluttered by complicated error handling, and a standard way of making sure that the log showed not only an error code or message, but a comment on what was being attempted, which .cpp file the error was detected in, and the line number within that file.
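A minimal sketch of such an object, with names of my own choosing (AppException, THROW_APP) and the event-log writing omitted:

```cpp
#include <cassert>
#include <string>

// HRESULT is just a long; this typedef only exists so the sketch builds
// outside Windows.
#ifndef _WIN32
typedef long HRESULT;
#endif

class AppException {
public:
    AppException(HRESULT hr, std::string what, std::string file, int line)
        : m_hr(hr), m_what(std::move(what)),
          m_file(std::move(file)), m_line(line)
    {
        // The real constructor would write to the System Event Log and
        // look up the HRESULT's message text here.
    }
    HRESULT Hr() const { return m_hr; }
    const std::string& What() const { return m_what; }
    const std::string& File() const { return m_file; }
    int Line() const { return m_line; }
private:
    HRESULT m_hr;
    std::string m_what, m_file;
    int m_line;
};

// A macro keeps call sites uncluttered: file and line are captured for free.
#define THROW_APP(hr, what) throw AppException((hr), (what), __FILE__, __LINE__)
```

A call site then shrinks to one line, e.g. `if (FAILED(hr)) THROW_APP(hr, "CoCreateInstance failed");`.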
I agree when you say you'd rather return an error code than throw an exception. I found using exceptions locally makes the code clearer and imposes some structure. Having caught the exception locally and dealt with it (i.e., usually recorded exactly what went wrong), I too then preferred to return an error code.
Windows XP and 7, Visual Studio 2008 and 2010, MFC, C++
Application A supports Unicode and application B does not. Class Log_Writer does not support Unicode. Both applications use Log_Writer successfully, passing a char * to it.
The problem is within Log_Writer when it cannot open the log file. It uses AfxMessageBox() to display an error to the user. When Log_Writer is compiled within the A environment, the error message is:
None of the 2 overloads could convert all the argument types.
What can be done so that Log_Writer will work in both environments?
is wrong for ASCII-based compiles, since the L prefix generates a Unicode string. Use the _T() macro around all your string and character constants and they will be correctly generated as ASCII or Unicode depending on the project settings. Note: do not forget to #include <tchar.h> for the foregoing macro.
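A sketch of the fix (the #else branch exists only so the sketch compiles outside Windows; in the real Log_Writer the string would go to AfxMessageBox):

```cpp
#ifdef _WIN32
  #include <tchar.h>          // defines _T() and the TCHAR mappings
#else
  typedef char TCHAR;         // fallback so this sketch builds elsewhere
  #define _T(x) x
#endif
#include <cstring>

const TCHAR* LogOpenError()
{
    // Compiles as "Cannot open log file" (char) or L"Cannot open log file"
    // (wchar_t) depending on whether _UNICODE is defined in the project.
    return _T("Cannot open log file");
}
```

In Log_Writer itself the call would then read `AfxMessageBox(_T("Cannot open log file"));` and build cleanly in both the A and B projects.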
If you are writing code for Windows NT and not for Win9x and its friends, then forget about the A version of your program and simply stop maintaining it. This results in much less trouble for programmers and cleaner, easier-to-read code. Actually, WinNT uses UTF-16 internally, so all the A functions are just wrappers that do string conversions before and/or after calling the W version. Win9x is almost dead, and the same is true for the A versions of programs; they are simply a heritage from old times.
You have a good point. My problem (and notice my phrasing) is that I dislike Unicode. All the various options make dealing with it an absolute pain.
If you (you in the general sense) want to use computers, then it is time to exit the hieroglyphics epoch and move into the age of alphabets. If you cannot fit your character set into a 256-character limit then, in my not so humble opinion, you do not have an alphabet.
Yeah, I know I may be in the minority, and no, this is not intended as the start of a flame war. Just my opinion.
I have spent "WAY" too much time trying to figure out the syntax of dealing with Unicode. And finally, I work in the military world and my code will never ever be used in the world of hieroglyphs. Straight-up ASCII is much easier to deal with.
You have to deal with Unicode if you make applications to sell worldwide. 256 characters isn't enough, for example, for Chinese character sets (maybe for Simplified Chinese), and China is a big market. Unicode isn't so painful if you use UTF-8, which is more natural in a C/C++ program because you can write "string" instead of L"string", and you have to convert from UTF-8 to UTF-16 only when you call the WinNT API. That's what I usually do, and this way your program becomes easy to port to other platforms too (Linux, Mac, ...).
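To make the boundary conversion concrete, here is a portable sketch of what MultiByteToWideChar(CP_UTF8, ...) does for characters in the Basic Multilingual Plane (which covers Chinese); on Windows you would call that API instead. Validation is skipped to keep the sketch short:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Portable sketch of the UTF-8 -> UTF-16 conversion done at the WinNT API
// boundary. Handles 1- to 3-byte sequences (the BMP); assumes well-formed
// input, since this is only an illustration.
std::vector<uint16_t> Utf8ToUtf16(const std::string& s)
{
    std::vector<uint16_t> out;
    for (size_t i = 0; i < s.size(); ) {
        unsigned char c = s[i];
        if (c < 0x80) {                        // 1 byte: plain ASCII
            out.push_back(c); i += 1;
        } else if ((c & 0xE0) == 0xC0) {       // 2 bytes
            out.push_back(((c & 0x1F) << 6) | (s[i+1] & 0x3F));
            i += 2;
        } else {                               // 3 bytes
            out.push_back(((c & 0x0F) << 12) |
                          ((s[i+1] & 0x3F) << 6) | (s[i+2] & 0x3F));
            i += 3;
        }
    }
    return out;
}
```

Everything else in the program stays plain `"string"`; only this one translation step touches wide characters.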
I have a CAB file which contains a COM DLL and an .INF file. My application embeds this component in a JSP page, and it works fine in IE7 and IE8 but fails to work in IE9.
Any suggestions on how to solve this issue?
Where can I get books on ActiveX and COM? I started with an ActiveX book (ATL) and it began by stating I must know COM. I wound up with Dale Rogerson's "Inside COM". It's on Amazon but with no soft copy. I don't want any more hard-copy books.
Everywhere I find a PDF or ebook version, they want me to first download their downloader. Their code could have anything in it, so: No! to that.
So: What books do you recommend to a novice who needs to write an ActiveX component? The container is a commercial product and I need to write a video display for some very custom video input.
I am happy to pay, but it must be a soft copy.
Thanks for your time
EDIT: Another book I need: Win APIs.
The APIs and methods that are needed to directly use TCP/IP. I want the ability to bypass all the cycle-stealing intermediate classes and go directly to the API. I am having a difficult time with my searches on this and would prefer a book that someone recommends rather than a stab in the dark.
You mentioned Inside COM by Dale Rogerson, which I remember using, so I had a glance at my bookshelves. I know Inside COM taught me the underlying principles of interfaces and COM without resorting to ATL or MFC.
Professional COM Applications with ATL (Wrox, 1999) was quite good on ActiveX, and does have a section 'Building an ActiveX Calendar Control' which explains a lot. However, something like this, whilst being from the same era as Inside COM, refers to VC6 and ATL3, not the most up to date now.
I did also buy ActiveX Controls Inside Out (Microsoft Press, 1997) and see it on my shelf, but it is un-thumbed, so I obviously didn't use it a lot; it is wider-ranging than the ATL book, though, covering MFC and Visual Basic controls (VBXs).
I don't want to sell mine but hope this might help.
While launching a Windows application (.exe) as an OLE server, CWinApp::OnFileNew() sometimes fails; because of this, the handle to the main window (m_pMainWnd) becomes NULL and the error message "Error related with mfc42.dll occurred" appears.
On Windows XP, CWinApp::OnFileNew() fails about 1 time in 100.
We have not implemented OnFileNew() in our application; we are using the default implementation of OnFileNew().
As per MSDN, CWinApp::OnFileNew implements this command differently depending on the number of document templates in the application. If there is only one CDocTemplate, CWinApp::OnFileNew will create a new document of that type, as well as the proper frame and view class.
The problem occurs on only one system, with a frequency of about one in 100, and the error message "Error occurs in mfc42.dll" appears.
Environment Windows XP Pro, Visual Studio 2008, MFC, C++
This application processes real time telemetry data. The first incarnation uses blocking TCP calls and works fine. It is just difficult to deal with due to the blocking calls.
The second implementation uses CAsyncSocket. After much work I discovered that it can keep up when about 1/3 of the data is processed. It cannot keep up with all the data.
I say that with some level of confidence because the code monitors the depth of the buffering queue. Since the telemetry data never stops, at the TCP/IP level, when it receives the WSAEWOULDBLOCK error, it buffers payload packets until it gets the OnSend notification. At the one-third data rate the maximum buffer fill level is 91 payload packets. At the full rate the buffer (currently 240 deep) overflows, frequently.
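The buffering just described can be sketched like this (SendQueue, Buffer, and Pop are invented names standing in for the real code around CAsyncSocket::Send and OnSend):

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

// Sketch of the buffering described above. Buffer() is called when a send
// returns WSAEWOULDBLOCK; Pop() is called from the OnSend notification to
// drain the queue. MaxDepth() is the statistic used to judge whether the
// socket keeps up with the telemetry rate.
class SendQueue {
public:
    explicit SendQueue(size_t capacity) : m_cap(capacity) {}

    // Returns false when the queue overflows (the failure seen at full rate).
    bool Buffer(std::vector<char> packet) {
        if (m_q.size() >= m_cap) return false;
        m_q.push_back(std::move(packet));
        if (m_q.size() > m_maxDepth) m_maxDepth = m_q.size();
        return true;
    }
    // Hand back the next buffered packet to retry, if any.
    bool Pop(std::vector<char>& packet) {
        if (m_q.empty()) return false;
        packet = std::move(m_q.front());
        m_q.pop_front();
        return true;
    }
    size_t Depth() const { return m_q.size(); }
    size_t MaxDepth() const { return m_maxDepth; }
private:
    std::deque<std::vector<char>> m_q;
    size_t m_cap, m_maxDepth = 0;
};
```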
My next step might be to go to the Win32 API level.
I started with MFC because the vendor whose software feeds me the data provided an MFC template that showed how to get data from their application. It was written in Visual Studio 2008 MFC. During real time operations it does no user interactions and no GUI updates.
Or maybe I should just abandon the asynchronous effort due to a combination of too much effort, too little return, and maybe even impossibility.
Drawing on the reader's experience, and on Richard MacCutchan's response in my previous thread, and with a packet rate well in excess of 10 per millisecond:
What is the probability of success if I switch over to Win32 API programming? Switch to a console application? Is the asynchronous code inherently less efficient, or is this more likely a problem with the CAsyncSocket class? (I presume it loses efficiency in the tradeoff for ease of use.) Or maybe an MFC problem, or a combination thereof.
Right now I am leaning towards abandoning the asynchronous effort and just use the blocking code that works well. I would like to hear your thoughts.
Networking is always tricky because you have to tweak the parameters of your networking code to get as "nearly optimal" results as you can. Regardless of the solution you choose, you should definitely try to set the send and/or receive buffer sizes associated with your socket handle (a setsockopt() call with the SO_RCVBUF/SO_SNDBUF parameters). Set the buffer sizes to, for example, 1 megabyte and then halve them until your network performance starts degrading. Note that even if you use async sockets, the OS is still receiving and storing data in the background into the receive buffer of your socket, and you can then read that out with a single call. Unfortunately, if you use a socket implementation that doesn't give you direct access to the size parameter of your send()/recv() calls, then you cannot tweak those.
Whether async or blocking? I think this shouldn't be the question. If done well, the two should have about the same performance, because blocking and async communications are really just two different ways to write/read the send and receive buffers of the socket object; the real networking is done by the operating system in the background, by the network stack working with those buffers. I prefer async because it is a superset of blocking sockets; you cannot solve every problem with just blocking sockets. Servers with the highest performance also use async, because lots of async operations from different sources can be optimized much more neatly by the OS anyway.
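As a concrete sketch of the buffer tuning (the call and option names are the same in Winsock and BSD sockets; only the handle type and headers differ, hence the #ifdef):

```cpp
#include <cassert>
#ifdef _WIN32
  #include <winsock2.h>
  typedef int socklen_t;
#else
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <unistd.h>
#endif

// Ask for a large SO_RCVBUF and read back what the OS actually granted.
// The granted size is the value to watch while halving the request until
// throughput starts degrading, as suggested above.
int SetRecvBuffer(int sock, int bytes)
{
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
               reinterpret_cast<const char*>(&bytes), sizeof(bytes));
    int granted = 0;
    socklen_t len = sizeof(granted);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF,
               reinterpret_cast<char*>(&granted), &len);
    return granted;   // may differ from the request; Linux, e.g., doubles it
}
```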
To be honest, I have always written the async socket class myself, mainly because of cross-platform development. I think writing basic async socket code is no big deal, but you will probably face some problems if you don't read the Win32 API docs carefully. On Windows you have several choices for handling a single async socket (select, WSAWaitForMultipleEvents, IOCP, overlapped I/O, ...). To handle a single async socket on its dedicated thread I have always used WSAWaitForMultipleEvents, as it is quite easy to use with its companion functions (WSAEventSelect/WSAWaitForMultipleEvents/WSAEnumNetworkEvents). Why WSAWaitForMultipleEvents and not the cross-platform select? Because with WSAWaitForMultipleEvents you can wait for the socket and also for a custom event of yours (created with WSACreateEvent); sometimes you have to explicitly wake up the wait, for example when your program quits, or when you add some sendable data to your empty send buffer while your network thread is waiting. Doing the same wakeup with select() is always tricky and dirty.
Note that previously we were talking only about exchanging data between your application and the network stack (the socket buffers). There is a delay between the send or recv calls that you use to transfer data between the socket buffers and your application memory buffers. If this delay is big, for example because you do other work on your network thread and don't have a dedicated network thread in your app, then you will have to compensate with bigger SO_SNDBUF and SO_RCVBUF values, because the OS might run out of receive buffer space for the socket, or run out of sendable data, while your network thread is doing something else. I highly recommend using dedicated threads that do just the transfer between your memory buffers and the socket buffers.
Another thing that can be a bottleneck is the data processing of your application: filling the application-layer send buffer, reading the application-layer receive buffer, and doing other work with the data (calculations, file I/O, ...). For example, if your data-processor thread doesn't read your memory buffer quickly enough, then the network thread that reads data from the socket buffer has to suspend transferring data from the socket receive buffer to your application-level memory buffer because it is full. Then, if the OS/network stack fills up the receive buffer of the socket completely, you can't keep up with the network bandwidth and performance suffers. Let's assume that you are doing calculations with the received data and then writing it out to disk. If some kinds of data require more processing than the average, then you might want to compensate for those "negative performance peaks" of your processor thread with a larger receive memory buffer in your application; but you benefit from this only if the average processing speed of your application is higher than what the average incoming data requires. This shows quite well that you can tweak a networking application only if you know the exact specs (server hardware configuration, network config, ...). There is no single good solution. You will have to profile each part of your application independently (network recv/send speed, data processing, file I/O, ...) and then find out where to put buffers, and maybe threads, between the parts.
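The dedicated-transfer-thread layering can be sketched as a bounded producer/consumer buffer between the network thread and the processor thread (PacketBuffer is an invented name; the capacity is the tuning knob that absorbs the processing peaks discussed above):

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <vector>

// Bounded queue between a dedicated network thread (Push) and a data
// processing thread (Pop). Push blocks when the processor falls behind and
// the buffer is full; Pop blocks when no packets are pending.
class PacketBuffer {
public:
    explicit PacketBuffer(size_t cap) : m_cap(cap) {}

    void Push(std::vector<char> p) {               // network thread side
        std::unique_lock<std::mutex> lk(m_mx);
        m_notFull.wait(lk, [&] { return m_q.size() < m_cap; });
        m_q.push_back(std::move(p));
        m_notEmpty.notify_one();
    }
    std::vector<char> Pop() {                      // processing thread side
        std::unique_lock<std::mutex> lk(m_mx);
        m_notEmpty.wait(lk, [&] { return !m_q.empty(); });
        std::vector<char> p = std::move(m_q.front());
        m_q.pop_front();
        m_notFull.notify_one();
        return p;
    }
private:
    std::mutex m_mx;
    std::condition_variable m_notFull, m_notEmpty;
    std::deque<std::vector<char>> m_q;
    size_t m_cap;
};
```

Profiling how often Push blocks, versus how often Pop blocks, tells you which side of the pipeline is the real bottleneck.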