|
I think the original design chose to avoid the WSAsyncSelect style operations to maintain the ability to operate in a windowless service environment. Remember, this is 10 years back, winsock 1. I agree the WSAEventSelect mechanism with IOCP is an advance, but would be more than a retrofit at this point.
So, no argument with IOCP, except maybe for a bit of additional complexity in 'getting it right'. IO_COMPLETION seems to me an intrinsic part of the kernel's APC mechanism, which is I think at the heart of getting this object based beast to operate efficiently in the first place.
But I _think_ there's always been an assumption that a blocking call on a socket (Accept, Recv, etc) doesn't busy-wait, and that allows for efficient CPU usage, at the expense of allocating a dedicated thread. Previous discussions on this in the early days would center around the number of threads a process could spawn, thread pooling, etc. But I think at something like 2k threads per app the marketing speak isn't too far off.
< /end_feeble_design_defense >
|
|
|
|
|
"But I _think_ there's always been an assumption that a blocking call on a socket (Accept, Recv, etc) doesn't busy-wait, and that allows for efficient CPU usage, at the expense of allocating a dedicated thread. Previous discussions on this in the early days would center around the number of threads a process could spawn, thread pooling, etc. But I think at something like 2k threads per app the marketing speak isn't too far off."
The only way that 2k threads would be efficient is if a majority stayed in a 'wait' state. However, that just isn't the case for servers today. I am talking specifically about servers that would expect to handle more than a few dozen clients (64 to be exact, if you used the WSAWaitForMultipleEvents style handler that Winsock provides). There is a cost related to creating threads, which can be minimized by pooling - but you still have the cost of context switching. If you create a process that spawns 2000 threads and randomly causes scores of them to wake up, the performance of the application will be horrible. Not only will a slowdown occur due to context switching, but unfair thread scheduling will really nail you. Windows decides what is 'fair' as far as when to service a thread; therein lies the rub. That is why the good folks at Microsoft decided to introduce the IOCP method. IOCP is not specific to Winsock; it works with all system handles. You can have a single thread monitoring an IOCP handle, which could in turn be servicing thousands of connections. Better yet, on a dual core/multi-processor machine, two or more threads can be calling GetQueuedCompletionStatus (that is the function name, if my memory serves me).
You don't have to defend your 'feeble' design. You have in fact accomplished one of your design goals: it works on all versions of Windows. I don't believe that feature is a must any longer; the demand should now shift towards taking advantage of NT offerings. You guys have put a bunch of work into the source that you have now made open source. To ignore your efforts, in regards to community contribution, would be unjust. If my software went open source, I would hate to read the comments/fallout from people picking it apart. What you have here is a component that simply does a job. It may not please all people, but for many people it may do just fine.
|
|
|
|
|
Well, thanks for the additional - er, yes defense is a bad word... - support.
Speculating that something like a critical section (not requiring a context switch) would be enough to implement a block for a socket is probably naive, and that would only help in part. But I think that in either model servicing a high number of transport intensive connections is going to hit the wall at some point.
I'm pretty much convinced that IOCP is a glorified APC, but then perhaps all I have is a hammer. (I did some spelunking some years back on this topic[^]). And yes, aside from the possible need to switch between the User and Kernel mode contexts of one particular thread in servicing certain APCs, it would certainly cut down on overhead to service connections as you describe.
Wish we had the source for these things.
But hey - it's our code now. If someone with a good working model for an IOCP approach wants to rewrite the server class [edit](or write an alternate one)[/edit], I say go for it. Load testing will be fun. Meanwhile, I hope that folks will, if I may simplify your comments, take it for what it is.
Cheers
Tim
-- modified at 15:29 Sunday 14th October, 2007
|
|
|
|
|
Dear M. Tim, I read your previous discussions with M. Ken. Not only can we write an IOCP model: we already have many here at CodeProject, and we may adapt one to the Ultimate TCP/IP paradigms. I personally have one within a chess server project I have developed, and I am trying to deploy it on the internet by the end of this year if I manage it with the resources. M. Mark Russinovich already explained the internals of IOCPs at OSR, I think. But M. Tim, other people have simulated the mechanism with their own code, using a semaphore for the main event queue and a separate, fixed number of threads fetching events and processing them, etc. I don't know where I saw it, but I remember somebody did it in the context of a SIP stack at the server side (a SIP proxy application). Also, I think Apache has something of its own, i.e. some model similar to IOCPs for servicing browser requests; thus Apache does not create a separate thread for every connection (that much is for sure, even if I am wrong about the IOCP-like model). But of course I will go with M. Mike and M. Ken on the option to just reuse the IOCP API and not reinvent a model, if we want to integrate that model into your very powerful Ultimate TCP/IP.
I remember last year when I shifted all the code of my chess server to IOCP: first I was using multiple asynchronous sockets, then Select; then for a long time I did not understand IOCPs, until suddenly I found it obvious, and representing the right model (yes, a right model). In fact, at first I was getting nightmares with the old code while trying to foresee what could happen to my server if suddenly 2000 users connected at the same time. I was tortured by the funny picture that once I deployed the server I would need to look after it all the time, restarting it whenever it crashed so that the other players could connect again (Humm!) and play with the program. But I remembered the good things said about IOCPs, which led me to try to understand it again and again until I implemented it and regained confidence that my application can run one day on the internet.
M. Tim, I have already tested my server with a special client application that simulates many connecting chess players. With 2000 simultaneous connections it still runs at constant performance! There are two points. First, with IOCP in place, a high number of connections does not hit any wall by itself; it is only a matter of many socket handles. Second, even if those connections become simultaneously intensive (i.e. we receive a big flow of requests per connection), we still don't hit a performance wall; we only need to be careful about RAM, in case we create outstanding data answers and that data is not freed quickly because the sending flow per user is weak. With my chess server, yes, I want to be able to manage 2000 players at the same time; the limit does not need to be that number, and I don't need an equal number of threads on the server side. Of those 2000 players, maybe 800 are idle, thinking about their moves, so the server is not receiving much load from them, only from the other 1200, who are sending chat and moves. When things come together, my server simply processes the number of requests it can process. (Imagine you have 2000 threads: can you control or prevent the moment when all of them are awakened from waiting status and start running to service the corresponding requests, and can 2000 execute without causing the application to crash?) The more important thing, M. Tim, is that with IOCPs, when everything does come together at the same time, you can perceive/detect that load moment and hence add dynamic code to do something about it. For example, in my case, I disregard the chat messages and only let the chess move requests pass (so those who play will still be able to play and see each other's moves on the chessboards; only chat gets disabled, since it is not a priority compared with chess moves), and when the load goes away, I release that particular behaviour.
I eagerly want my application to be a good study of server models, as I have spent tremendous time on its server architecture, and I'll be releasing the IOCP parts of it as open source. Of course I won't promise to bring my additions to this powerful ToolBox, because I think many people will be able to integrate IOCPs into it before I free myself from my project and try to do something else.
Ken Thompson, I think I remember reading in a paper two years ago, when I was at university, that a person with the same name as yours wrote the first GM chess engine. Can it be you? Otherwise forgive me; I am always speaking the chess language.
M. Tim, where does the Anti-Brute-Force-Attack code reside? I found the code after the accept moment, which checks the client's IP address and looks to see whether it is permitted or blocked (Access Control), but I expected there to be code that detects when somebody is automatically firing many SYN packets and blocks him, so that he does not cause the server to create 1000 useless socket references, for example (ouufff, this would be much more dangerous with the one-thread-per-socket mechanism, as the attack would cause the creation of 1000 threads!). Does your code calculate the rate of incoming connection requests, for example (the number of SYNs per second, or some other measure), so as to prevent the attack?
Ahmed, 24 Tunisia.
|
|
|
|
|
Ahmed
You've done more work on this than I have. I don't know how much work it would take to revise the existing server code for IOCP - as it is now, the design is heavily influenced by the threaded paradigm. Any value in a rewrite would need to be gauged on how much of the individual protocol-specific code will be affected by changes in the class structure. I suspect it would be a significant amount.
On the technical side of things, there seems to be no shortage of comparisons and analysis available wrt 'standard' Windows async socket mechanisms and IOCP, but I haven't found a good description of what mechanism is in place when, say, an Accept call is issued on a blocking socket. If it's something as simple as a critical section, then the paradigm of the thread 'waking up' is less valid - it could be just waiting its turn in the scheduler's mysterious queue. Yes, there would be context switching, but not in the same demand fashion as when 200 sync objects become signalled. Really wish I had a better um... handle on this.
[edit] I should have used the example of a recv call above. Need coffee. [/edit]
All for now - keep the possibilities open by all means, and excuse me if I haven't addressed all your points - you could build an article from that message.
Cheers
Tim
-- modified at 11:07 Wednesday 17th October, 2007
|
|
|
|
|
"Threads are not free, so a design that uses hundreds of ready threads can consume quite a lot of system resources in the form of memory and increased scheduling overhead." - http://msdn2.microsoft.com/en-us/library/ms810434.aspx[^]
I don't think blocked threads use any significant CPU time but sleeping threads (i.e. those which are waiting on some kind of time out) do because the scheduler has to 'count them down' every clock tick.
Thread switches are very expensive. We once wrote a piece of (very bad) code which performed a thread switch every time it sent a network packet. Sending 1k packets, CPU overhead halved when we fixed this particular piece of insanity.
The last server project I worked on used IO completion ports; they work very well, although they do complicate the flow of control somewhat. Fibres might be worth a look; they could make incorporating IO completion ports into the existing code a lot easier. A fibre switch still involves saving and reloading all the registers though, and each fibre needs its own stack.
Really useful set of tools, BTW. Thanks for sharing.
|
|
|
|
|
The Ultimate TCP/IP model is based on synchronous calls to the Winsock API - pretty much a Berkeley sockets model in this regard. In fact, the code can be ported to a later flavor of Unix (Linux, Solaris, etc) that has support for threading.
Which raises a point - I may not be up on the latest *nix server models, but earlier versions would implement multi-process servers (forking a new copy of the app to service a connection) and moved to a threaded model (out of necessity when porting to NT (Apache, for example) or just to take advantage of the introduction of the thread API in later versions). There must be a similar expense in each of these models - yet I would think that there are *nix servers that can scale pretty well.
I found this article "Why Events are a Bad Idea"[^] interesting. The authors have something of an advantage over us. They can compare different threading models, various OS patches, and compilers with varying degrees of support for lightweight threads. But one point they make is that much that is wrong with threaded approaches lies in the implementation.
Their conclusion:
"Although event systems have been used to obtain good performance in high concurrency systems, we have shown that similar or even higher performance can be achieved with threads. Moreover, the simpler programming model and wealth of compiler analyses that threaded systems afford gives threads an important advantage over events when writing highly concurrent servers. In the future, we advocate tight integration between the compiler and the thread system, which will result in a programming model that offers a clean and simple interface to the programmer while achieving superior performance."
I think I've mentioned earlier in this thread discussion that it's hard to be an apologist for the model without knowledge of the inner workings of the socket code - I think you're right about calls that time out - but what exactly is the mechanism for the time-out in a select() call? Is it worth trading for an event - even one as efficient as an APC - at the expense of 'complicating the flow of control'?
You make a good point about the possibility of introducing fibres into the design - it would be one of the easier mods - the authors of the article above have this to say on the topic:
"Many of the techniques we advocate for improving threads were introduced in previous work. Filaments ... and NT’s Fibers are good examples of cooperative user-level threads packages, although neither is targeted at large numbers of blocking threads."
Hmmm... but then, your point is to use fibres as a bridge to IO completion - fair enough, and interesting - a possible way forward.
But I think we need a better idea of the state of the code as it is before we can go about improving things. And we're dealing with a bit of a black box. I'd say continuing this discussion would entail proposing some workable testing model - something that gives us some benchmarks and/or a failure threshold for the code as implemented, with a specified set of TCP/IP control parameters (TIME_WAIT, MTU, etc). There are tweaks that might improve the current model (e.g. specification of stack size in the BeginThread call), but we'll hit the wall at some point; then we can compare and contrast. Hopefully we can be forgiven for not having this in place, given the history of the code.
Threads may not be the best choice (at least for an NT based system), given the availability of the IO completion model. But I suspect they shouldn't be dismissed outright - any suggestions for a good test harness?
And thanks for the input - appreciated!
Tim
|
|
|
|
|
Hi,
I'd like to apologise for my previous post; it didn't really contribute much. But your post, and the article you cite, made me think about the problem in more detail so I thought I'd have another stab at it.
Thinking back, the main reason why IO completion ports worked well for us was that the logic of our server was very simple - it basically just routed HTTP requests (and responses) from one connection to another. This meant that implementing it in an event-driven way was not hard to do. The other thing about it was that it did no disk I/O, which in turn meant that a completion routine (or event handler, if you prefer) was never stalled waiting for disk I/O to complete.
The more I think about it, the less I would like to implement a more complicated server in this way. Any disk I/O would have to be asynchronous too, for example, and, as the article Tim cites points out, event driven systems are much harder to debug and test (and code!) compared to simple call - process - return logic. Multi core/processor systems also complicate life.
So, having completed my volte face, I'd like to make a couple of suggestions which might make the threading model more attractive for servers which need to handle lots of concurrent connections, or indeed a lot of short-lived connections in rapid succession.
The first is very simple in concept: if you need a connection to time out if the client does not send it anything for a while, there's a way of doing so while still making a convenient (and, we hope, efficient) blocking call on the receiving socket. You add a single 'watchdog' thread to your server which keeps an eye on connection timeouts for each connection and runs once a second, say. To timeout a connection, the watchdog thread closes the associated socket, thus releasing the 'connection handler' thread which is waiting for input. This in turn notices that the socket has been closed, cleans up whatever needs to be cleaned up and then either exits or listens for a new connection, depending on your server design. The same idea can be applied to send timeouts. Connection handler threads would need to register with the 'watchdog service' before blocking and de-register immediately afterwards, of course. There is a potential race condition here, but the connection handler will find out about it when it next tries to perform I/O on its socket.
The second suggestion is to consider providing some kind of support for 'thread pooling' in the Ultimate TCP library. This is particularly attractive for servers with short-lived connections (like webservers) where the overhead of creating (and subsequently destroying) a thread for each request can be expensive. The kind of thing I have in mind would allow a pool of, for argument's sake, 10 threads to all listen for incoming connections on the same socket. Then, each time an incoming connection is received, one thread is 'released', handed the socket returned by accept , and can go off and service the request entirely independently. When it is done, it re-joins the pool of threads waiting for an incoming connection.
It would be nice if this idea could be extended to long-lived connections by allowing our 10 threads to all listen (as a group) for incoming data on however many connections are currently in existence. The idea would be that when a particular thread returns from 'GroupReceive', it would be handed a void * referencing the connection on which data was received, but I don't think that the Winsock API can support it.
Sorry, this is rather a rambling, hand-waving post and makes some possibly unwarranted assumptions about the behaviour of the Winsock API in a multi-threaded environment. Hope it at least provides some food for thought.
BTW, tried to download your documentation from the link above (CHM) but could not view it on my system (XP, VS 2005 installed); all pages just say 'Navigation to the webpage was canceled'. Might be worth your checking if there's a problem with it. I can read other CHM's OK.
|
|
|
|
|
Thanks Paul
I think I reflect some of your feelings in that I wasn't all that sure of my own post - there is so much to consider here.
You've made some good suggestions - I think if implemented we could also consider making some of these behaviours optional - hard to find a one-size-fits-all solution.
For my part, I would like to see this evolve (and hopefully by way of a community effort) with the backing of a few known stats, so I'll continue to harp on the need for some base testing - which may be a bit of work in itself.
Thanks again for taking the time - I hope this discussion can progress. This is the kind of thing that will at the very least make the kit more transparent, and hopefully lead to some improvements.
For the CHM problem, I'm not sure if I've seen this particular message - I sent you a private message with a link to a discussion of chm problems on the Ultimate Toolbox homepage - wonder if that was helpful? (And yes, there should be a readme with the chm dl - just not sure I have all the info yet).
Thanks again
Tim
|
|
|
|
|
I'm sorry that it has taken me so long to respond to Ahmed. I'm not the same Ken Thompson that you know of. I just happen to share a name with someone who has made a very large contribution to C++/programming in general.
Ken
|
|
|
|
|
I realized that today! OK, but you are very good too, as I saw from your articles.
Ahmed Charfeddine.
Our Philosophy, Mohammed Baqir Al Sadr
|
|
|
|
|
Tim Deveaux wrote: If someone with a good working model for an IOCP approach wants to rewrite the server class [edit](or write an alternate one)[/edit], I say go for it.
where is that hammer icon when you need it.
Ghazi Hadi Al Wadi, PMP, ASQ SSGB, DBA
|
|
|
|
|
Ghazi Al Wadi , PMP wrote: where is that hammer icon when you need it
Did you look in the toolbox.gif?
|
|
|
|
|
Hey Tim,
I found the set of examples and am trying to build them. I am stuck on CoMarshalInterThreadInterfaceInStream in the server example I am doing. It returns the error E_NOINTERFACE. Any pointers on where to look for such an error?
By the way, I am using UT_THREADLIST, so it would be better declared as protected in CUT_WSServer.
Regards
Ghazi
Ghazi Hadi Al Wadi, PMP, ASQ SSGB, DBA
|
|
|
|
|
Sent you an email!
Ghazi Hadi Al Wadi, PMP, ASQ SSGB, DBA
|
|
|
|
|
Hi,
Surely thread pooling addresses the problem of 2000 threads all contesting for the CPU at the same time, which I agree would be a bad thing. If you have a thread pool of, say, 100 threads, then that is the maximum number of threads that can be active at any one time, QED.
Limiting the number of 'worker' threads in this way sacrifices concurrency for efficiency, but you don't really lose anything unless (a) all 100 threads are stalled for some reason, in which case the server falls idle or (b) 100 long-running requests prevent a few pending short-lived requests from running when they otherwise could do.
Having mulled this over a bit, and having briefly revisited the Winsock API to refresh my memory, I think that to implement thread pooling you would need a small number of 'dispatcher' threads (possibly just one) listening for incoming requests and buffering them. When a request has been received in its entirety, the dispatcher thread queues it up and the next available 'worker' thread will process it as soon as there is one available (which, we hope, is straightaway most of the time).
This costs you one thread-switch per incoming request, but I think that is reasonable for a solution which should make efficient use of the machine and still allows server code to be written in a straightforward fashion (asynchronous code is *always* tricky to write, debug and test). It also avoids the need to create a new thread each time a client connects to the server and hence would be particularly good for webservers.
I'm sure it would be a significant challenge for Tim (and co-workers) to incorporate this into the existing implementation, but I believe it could be done if the resources are available and is a reasonable thing to shoot for. Tim, I'd be interested in your take on this. I'm afraid I'm not volunteering though - too much to do already!
|
|
|
|
|
Paul,
Thanks again for this.
Pooling will be of greatest benefit for short-lived (e.g. HTTP) connections, and with this in mind it would be nice to make the initial size and perhaps a 'grow by' setting configurable.
BTW I've seen caveats relating to SuspendThread / ResumeThread - is there an alternative?
As for resources, technically speaking I don't have a mandate from CP for new development on these offerings. What I'm supposed to be doing is shepherding things until a few devs can take up the cause, and I hope to be making that clearer in future.
That said, I can volunteer too - I'll be looking at some code Ghazi has submitted dealing with some server side extensions, and will keep the pooling design in mind.
Anyone with additional input on design aspects is more than welcome to contribute. Done right, this could be a very worthwhile enhancement.
Cheers
Tim
|
|
|
|
|
I just downloaded the Ultimate TCP/IP sample and there are no samples, just the help files inside the zip.
|
|
|
|
|
Seems to be ok - could you try again?
Tim
|
|
|
|
|
Today it is working,
the file size and contents are completely different from what I was getting before.
|
|
|
|
|
Excellent library! Great job, but I have some remarks:
Besides, this is the only open source library that I found for getting mail from TLS/SSL servers.
1) It seems strange to me to use such pure C data inside classes:
UT_HeaderField *m_listUT_HeaderFields;
LPSTR m_lpszMessageBody;
LPSTR m_lpszHtmlMessageBody;
.....
char m_lpszDefaultCharSet[MAX_CHARSET_SIZE+1];
char m_lpszTextCharSet[MAX_CHARSET_SIZE+1];
char m_lpszHtmlCharSet[MAX_CHARSET_SIZE+1];
char m_lpszHeaderCharSet[MAX_CHARSET_SIZE+1];
Why isn't the STL used?
2) I had problems linking the POP3 secure client sample: UTSECURELAYER_EXPORTS should be defined in most cases to use the sources inside my own project (MSVC6), but this is not stated anywhere.
3) The POP3 secure client (pop3_c.cpp) doesn't work with GMail. Google's Gmail uses SSL and doesn't use the STLS command. I created a simple patch for pop3_c.cpp.
This patch also changes the POP3Close function to check whether it is connected before sending the QUIT command, and to read the answer to it.
Index: pop3_c.cpp
===================================================================
--- pop3_c.cpp (revision from The Ultimate ToolBox TCP/IP Version 4.2)
+++ pop3_c.cpp (working copy)
@@ -123,7 +123,13 @@
*********************************/
int CUT_POP3Client::POP3Close(){
- Send("Quit\r\n");
+ if(IsConnected())
+ {
+ Send("QUIT\r\n");
+
+ if(ReceiveLine(m_szBuf,sizeof(m_szBuf),m_nPOP3TimeOut) <= 0)
+ return OnError(UTE_SVR_NO_RESPONSE);
+ }
return CloseConnection();
}
@@ -1034,27 +1040,37 @@
if(bSecFlag)
{
-
-
- SetSecurityEnabled(FALSE);
-
-
- if( !GetResponseCode(m_nPOP3TimeOut))
- rt = UTE_CONNECT_FAILED;
- else{
-
-
- Send("STLS\r\n");
-
-
- if( GetResponseCode(m_nPOP3TimeOut))
- {
-
- SetSecurityEnabled(bSecFlag);
- rt = CUT_SecureSocketClient::SocketOnConnected(s, lpszName);
- }
- else
- rt = UTE_POP3_TLS_NOT_SUPPORTED;
- }
+
+
+
+
+ if(995 == GetPort())
+ {
+ rt = CUT_SecureSocketClient::SocketOnConnected(s, lpszName);
+ }
+ else
+ {
+
+ SetSecurityEnabled(FALSE);
+
+
+ if( !GetResponseCode(m_nPOP3TimeOut))
+ rt = UTE_CONNECT_FAILED;
+ else{
+
+
+ Send("STLS\r\n");
+
+
+ if( GetResponseCode(m_nPOP3TimeOut))
+ {
+
+ SetSecurityEnabled(bSecFlag);
+ rt = CUT_SecureSocketClient::SocketOnConnected(s, lpszName);
+ }
+ else
+ rt = UTE_POP3_TLS_NOT_SUPPORTED;
+ }
+ }
return rt;
}
-- modified at 6:53 Thursday 13th September, 2007
|
|
|
|
|
Hi Sergey
Thanks for this!
Sergey Kolomenkin wrote: 1) It seems strange for me to use such pure C data inside classes:
This is legacy - I think that version 1.0 of Ultimate TCP/IP was actually C based (I could be wrong here - that's going back 10 years). There is some use of the STL in the classes (vector, string) but these were later add-ons - I don't think anyone was fully comfortable with the STL implementation in Visual Studio versions prior to VC7.
Sergey Kolomenkin wrote: 2) I had a problems with linking POP3 secure client sample: UTSECURELAYER_EXPORTS should be defined in most cases to use sources inside of my project (MSVC6). But it is not stated anywhere.
Thanks!
Sergey Kolomenkin wrote: 3) POP3 secure client (pop3_c.cpp) don't work with GMail. Google's Gmail uses SSL and don't use STLS command. I created simple patch for pop3_c.cpp.
This patch also changes POP3Close function to check if it is connected before sending QUIT command and to get answer for it.
Be somewhat careful with the IsConnected call - it's pretty much impossible to code an IsConnected method that will work in all contexts, with all Winsock implementations, etc. If it works here though, good - and good point.
Thanks for this. I'm sure it will be of help to others, and it probably should be integrated into the code er, after some testing! It would be nice to find a more generalized method of detection than just a port comparison, but I think that's what you intended with your comments.
Cheers and thanks,
Tim
|
|
|
|
|
Yesterday I found some new problems:
1) Adding <windows.h> to Ultimate's StdAfx.h pulls in <winspool.h>. Winspool.h contains the API function SetPort, defined as a macro expanding to SetPortA or SetPortW. So when building the static lib it exports not CUT_POP3Client::SetPort but, for example, CUT_POP3Client::SetPortA.
I got linking problems in client code - it couldn't find CUT_POP3Client::SetPort (the client used MFC and had no winspool.h in its headers).
Solution:
We can either rename the method or add the following define in StdAfx.h before including <windows.h> or <winsock*.h>:
#define _WINSPOOL_ // to prevent including <winspool.h> (and defining SetPort as a macro)
2) You could disable the warning (for MSVC6) in StdAfx.h, for example:
#pragma warning(disable: 4146)
This will prevent the following warnings:
pop3_c.cpp
c:\microsoft visual studio 6\vc98\include\xlocnum(180) : warning C4146: unary minus operator applied to unsigned type, result still unsigned
c:\microsoft visual studio 6\vc98\include\xlocnum(169) : while compiling class-template member function 'class std::istreambuf_iterator<char,struct std::char_traits<char> > __thiscall std::num_get<char,class std::istreambuf_iterator<char,struct std::char_traits<char> > >::do_get(class std::istreambuf_iterator<char,struct std::char_traits<char> >,class std::istreambuf_iterator<char,struct std::char_traits<char> >,class std::ios_base &,int &,unsigned short &) const'
c:\microsoft visual studio 6\vc98\include\xlocnum(195) : warning C4146: unary minus operator applied to unsigned type, result still unsigned
c:\microsoft visual studio 6\vc98\include\xlocnum(184) : while compiling class-template member function 'class std::istreambuf_iterator<char,struct std::char_traits<char> > __thiscall std::num_get<char,class std::istreambuf_iterator<char,struct std::char_traits<char> > >::do_get(class std::istreambuf_iterator<char,struct std::char_traits<char> >,class std::istreambuf_iterator<char,struct std::char_traits<char> >,class std::ios_base &,int &,unsigned int &) const'
c:\microsoft visual studio 6\vc98\include\xlocnum(180) : warning C4146: unary minus operator applied to unsigned type, result still unsigned
c:\microsoft visual studio 6\vc98\include\xlocnum(169) : while compiling class-template member function 'class std::istreambuf_iterator<unsigned short,struct std::char_traits<unsigned short> > __thiscall std::num_get<unsigned short,class std::istreambuf_iterator<unsigned short,struct std::char_traits<unsigned short> > >::do_get(class std::istreambuf_iterator<unsigned short,struct std::char_traits<unsigned short> >,class std::istreambuf_iterator<unsigned short,struct std::char_traits<unsigned short> >,class std::ios_base &,int &,unsigned short &) const'
c:\microsoft visual studio 6\vc98\include\xlocnum(195) : warning C4146: unary minus operator applied to unsigned type, result still unsigned
c:\microsoft visual studio 6\vc98\include\xlocnum(184) : while compiling class-template member function 'class std::istreambuf_iterator<unsigned short,struct std::char_traits<unsigned short> > __thiscall std::num_get<unsigned short,class std::istreambuf_iterator<unsigned short,struct std::char_traits<unsigned short> > >::do_get(class std::istreambuf_iterator<unsigned short,struct std::char_traits<unsigned short> >,class std::istreambuf_iterator<unsigned short,struct std::char_traits<unsigned short> >,class std::ios_base &,int &,unsigned int &) const'
3) Pop3_c.h includes the following line:
using namespace std;
Why should we change the namespace in public headers? This namespace is not necessary in this header; it can be commented out and nothing will fail. Changing the default namespace may produce conflicts in client code. For example, I may use std::list and, at the same time, use list as an argument name. Changing the global default namespace in your headers will produce compile errors in my code.
4) I don't know why you are using header includes like this:
#include <UTExtErr.h>
instead of
#include "UTExtErr.h"
Code like that forces me to declare additional include paths in every project that includes the Ultimate headers.
|
|
|
|
|
I've placed an update 02 on the updates page[^].
I've implemented your POP3Close change, and changes for POP3S - comments welcome.
Thanks again,
Tim
|
|
|
|
|
Thanks for this, it was a valuable patch to get gmail working.
I added this code, and our POP reader worked for a long time. In the last 24-48 hours, the code has stopped working most of the time and gives the error "The connection timed out." I am using port 995, SSL is enabled, and I do have your fixes in my code.
Is anyone else having problems using this POP reader with gmail?
Specifically what is happening is in CUT_POP3Client::POP3Connect. The Connect call is passing, but the GetResponseCode gets no data and times out.
Any chance someone knows a fix for this?
Of course if I use Outlook Express to test the connection parameters, that works; and again, this code worked for at least a year and stopped working yesterday. The SMTP sending code is having similar issues, so I am thinking something changed on Google's side that isn't being accounted for in this code.
Thanks in advance.
|
|
|
|
|