The computer was rebooted, same results. On the above suggestion, did a Clean and Rebuild, same results. I will try another reboot and clean and, if successful, will post the results. Otherwise, assume no change.
(The working computer has no internet connection and is in another room)
I also noticed:
m_hour = 0;
m_minute = 0;
m_second = 0;
All are declared as member variables. The debugger skipped over m_hour and visited m_minute and m_second. Very curious.
After a reboot and after deleting the *.ncp and *.pch files, the results were the same. Another Clean and Build had no effect.
I realized the project was set to build the release version. (That is displayed in the toolbar.) After changing that to Debug, everything seems normal again. I could whine about this, that, and the other, and how it SHOULD work, but that is not going to change anything. The end result is almost two hours expended learning this little lesson because I did not notice all the indicators.
Yeah, the Release version isn't built with debug information, so the debugger doesn't work right. You can, however, mix debug and release versions of things to debug the portions that are built with debug information, hence Visual Studio allowing you to do this. So, it's really working as it should be.
I realized the project was set to build the release version.
Yeah, we've all done that once or twice. The thing with the Release build is that variables can get optimized away that will always show up in the Debug build. BTW I don't think that rebooting would have made any difference to the situation.
Sorry if this is not the right place for my question, but I have an urgent problem.
I have an MFC application which creates mdb and mdw files. I build it for two platforms:
1) 32-bit: it includes the ODBC library and works on both 32-bit and 64-bit OSes (such as Win 7).
2) 64-bit: this version does not work; it says it can't get the driver to create the mdb and mdw files.
It seems it can't find a 64-bit ODBC driver, and after a long search it appears there is no 64-bit driver for mdb files.
Do you have any idea about this? How can I get a 64-bit version of my application that runs normally, without this type of problem?
I have just gone from Visual C++ 6.0 to Visual Studio 2012. So far it is working well; I have managed to transfer my application over, albeit with some difficulties. But the program creates this big SQL Server Compact Edition, which I don't need. My application is not a database tool. How do I remove it?
Environment Windows XP and 7, Visual Studio 2008, C++. Experience level: novice with TCP/IP, but do have the application working with CAsyncSocket.
The message application receives telemetry data from some hardware (arrives fast and furious), processes it and dispatches the results to a display device via TCP/IP. It sends the data to the client but does not receive from the client.
Setup: the TCP “Manager” is started by the main application and it listens for the client. On connect, it creates the “Sender” that does all the sending.
Question: what is the best, read that simple and maintainable, way for the manager to get the Sender pointer to the main application so it can use the Sender? When the client closes the connection, how is the application informed of that?
Tentative proposal: the application provides the Manager with a pointer to itself. When the connect is made the Manager calls an application method such as Set_TCP_Sender_Pointer( C_Sender_Class *new_sender_pointer );
The Manager can pass the pointer (to the main application) to the Sender on creation. When the Sender gets a close from the client, it calls the same method using a NULL for the argument. This makes the application aware there is no Sender. Then the Sender exits. (There is only one thread so I think there is no concern about the application using a pointer to a non-existent Sender.)
I am pretty sure that will work, but if you had to pick up on this project, would it be easy to understand and maintain? Presume you are not an expert with TCP/IP in the Microsoft world. I am open to alternative suggestions.
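That tentative proposal can be sketched in plain C++. All names here are hypothetical stand-ins for the real classes (C_App for the main application), and the actual TCP plumbing is omitted; only the pointer hand-off is shown:

```cpp
#include <cstddef>

class C_Sender;

// The owner exposes one registration method; this is the only thing
// the Manager and Sender need to know about it.
class C_App {
public:
    C_App() : m_sender(NULL) {}
    void Set_TCP_Sender_Pointer(C_Sender* new_sender_pointer) { m_sender = new_sender_pointer; }
    C_Sender* Sender() const { return m_sender; } // NULL means "no client connected"
private:
    C_Sender* m_sender;
};

class C_Sender {
public:
    explicit C_Sender(C_App* owner) : m_owner(owner) {}
    // Called when the client closes the connection: null the app's pointer.
    void On_Client_Close() { m_owner->Set_TCP_Sender_Pointer(NULL); }
private:
    C_App* m_owner;
};

class C_Manager {
public:
    explicit C_Manager(C_App* owner) : m_owner(owner) {}
    // Called on accept: create the Sender and hand its pointer up to the app.
    C_Sender* On_Client_Connect() {
        C_Sender* s = new C_Sender(m_owner);
        m_owner->Set_TCP_Sender_Pointer(s);
        return s;
    }
private:
    C_App* m_owner;
};
```

Since there is only one thread, the app can simply test the pointer for NULL before each send, exactly as described.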
After writing the OP I outlined the process with all the steps then began to implement.
I have started using a common directory for re-usable code. In order for the Manager to call a method in its creator it must know the name of the owner's class. To do that I added a forward declaration in the Manager.
However, this means that when the class is re-used, and the owner has a different name, the Manager must be changed.
There is another indicator that this is a bad practice. After the forward declaration in the .h file, the .cpp file needs to reference the owner's .h file. The common code resides in another directory and it cannot find the central code of the main project unless the path is specifically spelled out. Then, if the project is moved or renamed, the common code must change.
There may be a way to do this automatically using directives and path names within Visual Studio. But I now think that even if that can be done it would be misguided.
Conclusion: a utility class can call "down" the hierarchy to objects it creates, but should not call up to its owner. While the concept of it calling up may make the owner code neater, it represents an inversion of authority. (Regardless of how the methods are named, a call up is an inversion.)
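For what it's worth, if one did still want the call-up, the name coupling described above can be removed by having the re-usable code declare a small abstract interface that any owner implements. A sketch with invented names (I_Sender_Owner lives in the common directory; the concrete app class is never mentioned there):

```cpp
#include <cstddef>

class C_Sender; // the utility's own class; a forward declaration here is harmless

// Lives in the common directory, next to the Manager.
// Any owner, whatever its name, implements this.
class I_Sender_Owner {
public:
    virtual ~I_Sender_Owner() {}
    virtual void Set_TCP_Sender_Pointer(C_Sender* p) = 0;
};

class C_Manager {
public:
    explicit C_Manager(I_Sender_Owner* owner) : m_owner(owner) {}
    // The Manager calls up through the interface, not a project-specific class.
    void Notify_Connect(C_Sender* s) { m_owner->Set_TCP_Sender_Pointer(s); }
private:
    I_Sender_Owner* m_owner;
};

// Lives in the main project; the common code never sees this name,
// so renaming or moving the project does not touch the common directory.
class C_My_App : public I_Sender_Owner {
public:
    C_My_App() : m_sender(NULL) {}
    virtual void Set_TCP_Sender_Pointer(C_Sender* p) { m_sender = p; }
    C_Sender* Sender() const { return m_sender; }
private:
    C_Sender* m_sender;
};
```

This does not change the inversion-of-authority argument, but it does solve the re-use and include-path problems.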
Problems remain: the main application uses pointers into the lower-level objects to accomplish its tasks. How does it become aware that the lower level needs to exit, or maybe has already exited?
Controlled exits: When the TCP sending function detects that the client has closed the connection it does not exit right away. It waits until the upper level code calls the method to send the data. Then the lower level returns an error code stating that the data has not been sent and that the object is exiting.
Uncontrolled exits: I generally do not like exceptions, but this appears to be a time when an exception is warranted. When the lower level must terminate unexpectedly, then an exception could work its way back to the upper level and the exception handler can NULL the pointer.
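The two exit paths above can be sketched together in plain C++. Everything here is invented for illustration (the status code, the failure trigger, the class names); no real sockets are involved. The controlled path returns a status; the uncontrolled path throws, and the handler NULLs the pointer:

```cpp
#include <stdexcept>
#include <cstddef>

enum Send_Status { SEND_OK, SEND_NOT_SENT_EXITING };

class C_Sender {
public:
    C_Sender(bool client_closed, bool broken)
        : m_client_closed(client_closed), m_broken(broken) {}
    Send_Status Send_Data(const char* /*payload*/) {
        if (m_broken) // uncontrolled exit: something unexpected happened
            throw std::runtime_error("sender lost its socket");
        if (m_client_closed) // controlled exit: report it on the next send call
            return SEND_NOT_SENT_EXITING;
        return SEND_OK;
    }
private:
    bool m_client_closed;
    bool m_broken;
};

// Caller side: on either signal, forget the Sender.
Send_Status Try_Send(C_Sender*& sender, const char* payload) {
    try {
        Send_Status st = sender->Send_Data(payload);
        if (st == SEND_NOT_SENT_EXITING)
            sender = NULL;
        return st;
    } catch (const std::runtime_error&) {
        sender = NULL; // the exception handler NULLs the pointer, as proposed
        return SEND_NOT_SENT_EXITING;
    }
}
```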
I personally never used exceptions for ATL projects. Now a colleague votes strongly for throwing exceptions instead of returning HRESULTs. I did some research, and even here on codeproject I don't find much about ATL and exceptions. To me it appears that exceptions in ATL are not very popular. So I would like to know from you ATL guys:
Do you use exceptions? Or do you simply return an HRESULT?
I used a combination. A colleague and I developed an exception object with various constructors that could, for example, be thrown with parameters of: an HRESULT, a string describing what failed, the file name it was thrown in, and the line number it was thrown at.
The exception object constructor then took care of writing the error as an event to the System Event Log, as well as looking up any messages for the HRESULT, or accessing any other error-handling mechanism needed.
This resulted in code uncluttered with complicated error handling, and a standard way of making sure that the log showed not only an error code or message, but a comment on what was being attempted, which .cpp file this was detected in, and the line number within the file.
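A stripped-down sketch of such an object. The macro name is invented, HRESULT is modeled as a plain long so the sketch is self-contained, and the Event Log write is reduced to a comment; the point is how __FILE__ and __LINE__ get captured at the throw site while the throw stays one line:

```cpp
#include <string>

typedef long HRESULT_t; // stands in for the Windows HRESULT typedef

class ComError {
public:
    ComError(HRESULT_t hr, const std::string& what, const char* file, int line)
        : m_hr(hr), m_what(what), m_file(file), m_line(line)
    {
        // The real constructor also wrote an event to the System Event Log
        // and looked up the message text for the HRESULT.
    }
    HRESULT_t Hr() const { return m_hr; }
    const std::string& What() const { return m_what; }
    const std::string& File() const { return m_file; }
    int Line() const { return m_line; }
private:
    HRESULT_t m_hr;
    std::string m_what;
    std::string m_file;
    int m_line;
};

// The throw site stays uncluttered; file and line come along automatically.
#define THROW_COM_ERROR(hr, what) throw ComError((hr), (what), __FILE__, __LINE__)

// Typical shape of an ATL method body: catch at the boundary, return an HRESULT.
HRESULT_t DoWork(bool fail) {
    try {
        if (fail)
            THROW_COM_ERROR(0x80004005L /* E_FAIL */, "widget initialization failed");
        return 0; // S_OK
    } catch (const ComError& e) {
        return e.Hr(); // already logged by the constructor in the real version
    }
}
```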
I agree when you say you'd rather return an error code than throw an exception. I found using exceptions locally makes the code clearer and imposes some structure. Having caught the exception locally and dealt with it, i.e. usually recorded what exactly went wrong etc., I too then preferred to return an error code.
Windows XP and 7, Visual Studio 2008 and 2010, MFC, C++
Application A supports Unicode and application B does not. Class Log_Writer does not support Unicode. Both applications use Log_Writer fine, passing a char * to it.
The problem is within Log_Writer when it cannot open the log file. It uses AfxMessageBox() to display an error to the user. When Log_Writer is compiled within the A environment, the error message is:
None of the 2 overloads could convert all the argument types.
What can be done so that Log_Writer will work in both environments?
The L prefix is wrong for ASCII-based compiles, since it generates a Unicode string. Use the _T() macro around all your string and character constants and they will be correctly generated as ASCII or Unicode depending on the project settings. Note: do not forget to #include <tchar.h> for the foregoing macro.
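The mechanism behind _T() is just token pasting. A simplified, self-contained re-creation (the names are suffixed _demo/_DEMO to avoid clashing with the real macros; in real code, include <tchar.h> and let the project's _UNICODE setting decide, which also maps TCHAR, _tcslen, and friends):

```cpp
// Simplified version of what <tchar.h> does.
#ifdef _UNICODE
    typedef wchar_t TCHAR_demo;
    #define T_DEMO(x) L##x   // pastes L onto the literal: L"..."
#else
    typedef char TCHAR_demo;
    #define T_DEMO(x) x      // leaves the literal as narrow "..."
#endif

// The same source line compiles as either a char* or a wchar_t* string.
const TCHAR_demo* message = T_DEMO("Cannot open log file");
```

With this, Log_Writer's AfxMessageBox() call builds cleanly in both the Unicode application A and the ANSI application B.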
If you are writing code for Windows NT and not for Win9x and its friends, then forget about the A version of your program and simply stop maintaining it. This results in much less trouble for programmers and cleaner, easier-to-read code. Actually, WinNT uses UTF-16 internally, so all the A functions are just wrappers that do string conversions before and/or after calling the W version. Win9x is almost dead, and the same is true for the A versions of programs; they are simply a heritage from old times.
You have a good point. My problem (and notice my phrasing) is that I dislike Unicode. All the various options make dealing with it an absolute pain.
If you (you in the general sense) want to use computers, then it is time to exit the hieroglyphics epoch and move into the age of alphabets. If you cannot fit your character set into a 256-character limit then, in my not so humble opinion, you do not have an alphabet.
Yeah, I know I may be in the minority, and no, this is not intended as the start of a flame war. Just my opinion.
I have spent "WAY" too much time trying to figure out the syntax of dealing with unicode. And finally, I work in the military world and my code will never ever be used in the world of hieroglyphs. Straight up ASCII is much easier to deal with.
You have to deal with Unicode if you make applications to sell worldwide. 256 characters isn't enough, for example, for Chinese character sets (maybe for Simplified Chinese), and China is a big market. Unicode isn't so painful if you use UTF-8, which is more natural in a C/C++ program because you can write "string" instead of L"string", and you have to convert from UTF-8 to UTF-16 only when you call the WinNT API. That's what I usually do, and this way your program becomes easy to port to other platforms too (Linux, Mac, ...).
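To illustrate: a plain "string" literal carries UTF-8 bytes through the program unchanged, so only the boundary call needs care. On Windows that conversion is typically a MultiByteToWideChar call with CP_UTF8, shown below in a comment; the byte values are the standard UTF-8 encoding of é:

```cpp
#include <cstring>

// "café" with the é spelled out as its two UTF-8 bytes (0xC3 0xA9).
static const char kUtf8[] = "caf\xC3\xA9";

// Byte length and character count differ once you leave ASCII.
std::size_t Utf8ByteLength() { return std::strlen(kUtf8); }

// Counting code points: UTF-8 continuation bytes look like 10xxxxxx,
// so count only the bytes that are NOT continuations.
std::size_t Utf8CodePoints() {
    std::size_t n = 0;
    for (const char* p = kUtf8; *p; ++p)
        if ((static_cast<unsigned char>(*p) & 0xC0) != 0x80)
            ++n;
    return n;
}

// At the WinNT API boundary the conversion is roughly:
//   int n = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, NULL, 0);
//   MultiByteToWideChar(CP_UTF8, 0, utf8, -1, widebuf, n);
```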
I have a CAB file which contains a COM DLL and an .INF file. My application embeds this component in a JSP page, and it works fine in IE7 and IE8 but fails in IE9.
Any suggestions on how to solve this issue?
Where can I get books on ActiveX and COM? I started with an ActiveX book (ATL) and it began by stating I must know COM. I wound up with Dale Rogerson's "Inside COM". It's on Amazon but there is no soft copy. I don't want any more hard-copy books.
Everywhere I find a PDF or ebook version, they want me to first download their downloader. Their code could have anything in it, so: No! to that.
So: what books do you recommend for a novice who needs to write an ActiveX component? The container is a commercial product and I need to write a video display for some very custom video input.
I am happy to pay, but it must be a soft copy.
Thanks for your time
EDIT: Another book I need: Win APIs. The APIs and methods that are needed to directly use TCP/IP. I want the ability to bypass all the cycle-stealing intermediate classes and go directly to the API. I am having a difficult time with my searches on this and would prefer a book that someone recommends rather than a stab in the dark.
You mentioned Inside COM by Dale Rogerson, which I remember using, so I had a glance at my bookshelves. I know Inside COM taught me the underlying principles of interfaces and COM without resorting to ATL or MFC. Professional COM Applications with ATL (Wrox, 1999) was quite good on ActiveX, and has a section 'Building an ActiveX Calendar Control' which explains a lot. However, something like this, whilst being the same era as Inside COM, refers to VC6 and ATL3, not the most up to date now. I did also buy ActiveX Controls Inside Out (Microsoft Press, 1997) and see it on my shelf, but it is un-thumbed so I obviously didn't use it a lot; it is more wide-ranging than the ATL book, also covering MFC and Visual Basic controls (VBXs). I don't want to sell mine, but I hope this might help.
While launching a Windows application (.exe) as an OLE server, CWinApp::OnFileNew() sometimes fails; due to this, the handle to the main window (m_pMainWnd) becomes NULL and the error message "Error related with mfc42.dll occurred" appears.
On Windows XP, CWinApp::OnFileNew() failed about 1 time in 100.
We have not implemented OnFileNew() in our application; we are using the default implementation of OnFileNew().
As per MSDN, CWinApp::OnFileNew implements this command differently depending on the number of document templates in the application. If there is only one CDocTemplate, CWinApp::OnFileNew will create a new document of that type, as well as the proper frame and view class.
The problem occurs only on one system, the frequency is about one in 100, and the error message "Error occurs in mfc42.dll" appears.
Environment Windows XP Pro, Visual Studio 2008, MFC, C++
This application processes real time telemetry data. The first incarnation uses blocking TCP calls and works fine. It is just difficult to deal with due to the blocking calls.
The second implementation uses CAsyncSocket. After much work I discovered that it can keep up when about 1/3 of the data is processed. It cannot keep up with all the data.
I say that with some level of confidence because the code monitors the depth of the buffering queue. Since the telemetry data never stops, at the TCP/IP level, when it receives the WSAEWOULDBLOCK error, it buffers payload packets until it gets the OnSend. At the one-third data rate the max buffer fill level is 91 payload packets. At the full rate the buffer (currently 240 deep) overflows, frequently. My next step might be to go to the Win32 API level.
I started with MFC because the vendor whose software feeds me the data provided an MFC template that showed how to get data from their application. It was written in Visual Studio 2008 MFC. During real time operations it does no user interactions and no GUI updates.
Or maybe I should just abandon the asynchronous effort due to a combination of too much effort, too little return, and maybe even impossibility.
Drawing on the reader's experience, and on Richard MacCutchan's response in my previous thread, and with a packet rate well in excess of 10 per millisecond:
What is the probability of success if I switch over to Win32 API programming? Switch to a console application? Is the asynchronous code inherently less efficient, or is this more likely a problem with the CAsyncSocket class? (I presume it loses efficiency in the tradeoff for ease of use.) Or maybe an MFC problem, or a combination thereof.
Right now I am leaning towards abandoning the asynchronous effort and just use the blocking code that works well. I would like to hear your thoughts.
Networking is always tricky because you have to tweak the parameters of your networking code to get as "nearly optimal" results as you can. Regardless of the solution you choose, you should definitely try to set the send and/or recv buffer size associated with your socket handle (a setsockopt() call with the SO_RCVBUF/SO_SNDBUF parameters). Set the buffer sizes, for example, to 1 megabyte and then halve them until your net performance starts degrading. Note that even if you use async sockets, the OS is still receiving and storing data in the background into the recv buffer of your socket, and then you can read that out with a single call! Unfortunately, if you use a socket implementation that doesn't allow you direct access to the size parameter of your send()/recv() calls, then you cannot tweak those!
Whether async or blocking? I think this shouldn't be the question. If done well, these should have about the same performance, because blocking/async communication is really just two different ways to write/read the send and recv buffers of the socket object; the real networking is done by the operating system in the background by the network stack, which works with the send/recv buffers of the socket. I prefer async because it is a superset of blocking sockets; you cannot solve every problem with just blocking sockets. The servers with the highest performance also use async, because lots of async operations from different sources can be optimized much more neatly by the OS anyway.
To be honest, I have always written the async socket class myself, mainly because of cross-platform development. I think writing basic async socket code is no big deal, but you will probably face some problems if you don't read the Win32 API docs carefully. On Windows you have several choices for driving a single async socket (select, WSAWaitForMultipleEvents, IOCP, overlapped I/O, ... who knows). To handle a single async socket on its dedicated thread I have always used WSAWaitForMultipleEvents, as it is quite easy to use with its companion functions (WSAEventSelect/WSAWaitForMultipleEvents/WSAEnumNetworkEvents). Why WSAWaitForMultipleEvents and not the cross-platform select? Because with WSAWaitForMultipleEvents you can wait for the socket and also for a custom event of yours (created with WSACreateEvent), because sometimes you have to explicitly wake up the wait, for example when your program quits or when you add some sendable data to your empty send buffer while your network thread is waiting. Doing the same wakeup with select() is always tricky and dirty.
Note that previously we were talking only about exchanging data between your application and the network stack (the socket buffers). There is a delay between the send or recv calls that you use to transfer data between the socket buffers and your application memory buffers. If this delay is big, for example because you do other work on your network thread and don't have a dedicated network thread in your app, then the delays will be bigger and you will have to compensate with larger SO_SNDBUF and SO_RCVBUF values, because the OS might run out of recv-buffer space in the socket, or might run out of sendable data, while your network thread is busy doing something else in your app. I highly recommend using dedicated threads that do just the transfer between your memory buffers and the socket buffers.
Another thing that can be a bottleneck is the data processing of your application: filling the application-layer send buffer, reading the application-layer recv buffer, and doing some other stuff with the data (calculations, file I/O, ...). For example, if your data-processor thread doesn't read your memory buffer quickly enough, it might happen that the network thread that reads data from the socket buffer has to suspend transferring data from the socket recv buffer to your application-level memory buffer because it is full. Then, if the OS/network stack fills up the recv buffer of the socket completely, you can't keep up with the network bandwidth and it will affect performance. Let's assume that you are doing calculations with the received data and then writing it out to disk. If some kinds of data require more processing than the average kind, then you might want to compensate for those "negative performance peaks" of your processor thread with a larger recv memory buffer in your application, but you benefit from this only if the average processing speed of your application is greater than what the average incoming data requires. This shows quite well that you can tweak a networking application only if you know the exact specs (server hardware configuration, network config, ...). There is no single good solution. You will have to profile each part of your application independently (network recv/send speed, data processing, file I/O, ...) and then you will have to find out where to put buffers, and maybe threads, between the parts.
Wow, what an incredible reply. I don't understand everything you wrote, but a question or two will help.
This is a telemetry application. It receives data from the hardware as a series of identified parameters. It extracts messages from within the list of parameters and sends the data to the display device. It outputs data via TCP/IP to the display device, but does not input anything via TCP/IP.
When I switched to asynchronous TCP/IP I discovered WSAEWOULDBLOCK and created an array that buffers data until OnSend is called. That worked OK at relatively slow packet rates, two per millisecond.
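That buffering scheme can be modeled independently of MFC. A sketch with invented names, where Fake_Socket::Try_Send stands in for CAsyncSocket::Send and simply reports would-block instead of sending anything:

```cpp
#include <deque>
#include <string>

// Stand-in for the real socket: returns false to mimic WSAEWOULDBLOCK.
class Fake_Socket {
public:
    explicit Fake_Socket(bool writable) : m_writable(writable) {}
    void Set_Writable(bool w) { m_writable = w; }
    bool Try_Send(const std::string& /*payload*/) { return m_writable; }
private:
    bool m_writable;
};

class Buffered_Sender {
public:
    explicit Buffered_Sender(Fake_Socket* s) : m_socket(s) {}

    // Called for each payload packet: send now, or queue until OnSend.
    void Send(const std::string& payload) {
        if (m_queue.empty() && m_socket->Try_Send(payload))
            return;                 // went straight out
        m_queue.push_back(payload); // WOULDBLOCK: park it, preserving order
    }

    // Called when the stack signals OnSend (writable again): drain in order.
    void OnSend() {
        while (!m_queue.empty() && m_socket->Try_Send(m_queue.front()))
            m_queue.pop_front();
    }

    // The fill level to monitor, as described above.
    std::size_t Depth() const { return m_queue.size(); }

private:
    Fake_Socket* m_socket;
    std::deque<std::string> m_queue;
};
```

A std::deque grows as needed, which sidesteps the fixed-size-array overflow, though of course unbounded growth just moves the problem if the producer permanently outruns the consumer.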
Upon adding in another set of payload packets (there are several types of payload packets that can be inhibited or enabled at run time), the payload packet rate jumped up to five to fifteen or so payload packets per millisecond. They are generally smaller packets, but the order is indeterminate, so it is very intensive to combine payload packets to reduce the overall payload packet rate.
When that happened, the app ran out of buffer space and started losing payload packets. I bumped up the buffer size from 16, to 32, to 64, then jumped to 240. It still overflowed.
My interpretation is that CAsyncSocket cannot keep up with this packet rate. When I use blocking TCP/IP calls it works okay. I can even run four simultaneous copies with no trouble.
As I understand your post, I am thinking that this probably cannot be accomplished with CAsyncSocket.
Is that a true or false statement?
Don't write too much, I will certainly need to think a while, and re-read your post depending on how you answer this question.
As I see you just get some data somewhere and send it over to another place, your app is a transmitter between 2 endpoints. This is quite simple fortunately. If you are not allowed to drop data then your send speed must be at least as big as your receive speed. A buffer helps only in smoothing away jitter in the incoming data to maintain a better average throughput. If your incoming data is more than what you can send then the problem can not be solved. If you are able to reach the required send speed with blocking you should be able to do the same with async as well. Anyway, why do you want to use async sockets?
Hello pasztorpisti. Re: Anyway, why do you want to use async sockets? Async makes the application easier to deal with. First, when the client has not connected, the main application can still capture data from the source and provide feedback as to how it is performing. Once the Listen() is posted, the app is stuck there and can do nothing. Second, if the client closes the connection before my app closes, the app is stuck and must be killed. That causes some resource loss, eventually requiring a computer reboot.
I have mitigated that quite a bit by writing my own client application that can be fired up and release the main application. But I am not always the user and that is a real pain to require someone else to do. (BTW: Writing the client application was indeed a learning experience.)
I suspect that both of these problems can be resolved by using a separate thread for the TCP/IP part of the application. I found a tutorial and will be working on that aspect.
However, I have a working version and do not have unlimited time to devote to this project.
pasztorpisti, Richard M, and others: thank you very much for the time you have spent answering my questions and making suggestions. I am very grateful.
You are welcome! If you have limited time (as I suspected) then choose a working solution of yours if you already have one. You can later experiment on better solutions if you are interested in threading/sockets.
Unfortunately I don't know about any good tutorials, because I've never needed one. I learnt from teammates and from my own experiments. You don't always benefit from dedicated threads; that depends on the scenario, but you cannot find that out without trying.
This is a great answer, I really wish I could vote on it.
Networking is always tricky
You are totally correct. It looks so simple, and we use networking applications all the time, but as soon as you have to do a bit more than a basic chat sample, small surprises crop up all over and you start racking up "experience points" as you try to solve them.