Please see my comments to the question.
Not only is one thread per client too much, it is, at the same time, not enough. The server side needs at least two threads for two different purposes: one listens for new connections, another sends/receives data on an accepted connection. Please see my past answers:
an amateur question in socket programming,
Multple clients from same port Number,
How Do I Get To Know If A A Tcp Connection Is Over.
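To illustrate the two roles, here is a minimal sketch in Python (the port, the function names, and the echo behavior are my assumptions for illustration, not something taken from the question):

```python
import socket
import threading

def serve_connection(conn, addr):
    """Communication thread: sends/receives data on one accepted connection."""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:        # peer closed the connection
                break
            conn.sendall(data)  # placeholder behavior: echo the data back

def listen(host="127.0.0.1", port=50017, ready=None):
    """Listener thread: only accepts connections, never does I/O with a client."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        if ready is not None:
            ready.set()  # signal that the server is accepting connections
        while True:
            conn, addr = srv.accept()
            # hand the accepted socket to a dedicated communication thread
            threading.Thread(target=serve_connection,
                             args=(conn, addr), daemon=True).start()
```

The point is the separation of concerns: the accepting thread must never block on a client's I/O, otherwise one slow client freezes the whole service.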
Also, you mentioned "poll the server". Why poll the server "for any request"? A request means something different: it is something that can be sent at any time. But if you really do need polling (typically, to learn the moment when some activity is completed), this is another bad idea. You need to understand that the client-server model is a very bad, limiting model. Please see my past answer:
Application 'dashboard' for website accounts.
Now, the bright side is: with sockets, you are not limited to client-server. The server push mentioned in the post referenced above can be easily implemented. For example, you can implement an application-layer protocol based on publish-subscribe, or its combination with client-server, or anything like that. Just note that you always implement some application-layer protocol on top of the transport-layer protocol, even if you don't call it a protocol. So, it's better not to pretend you don't, and to do it explicitly. Naturally, this application layer will be custom for a custom service (I would prefer to call it a "service", not a "server", to avoid suggestive naming referring to pure client-server). Please see:
http://en.wikipedia.org/wiki/Application_layer,
http://en.wikipedia.org/wiki/Transport_layer,
http://en.wikipedia.org/wiki/Transmission_Control_Protocol.
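To make "custom application-layer protocol on top of TCP" concrete, here is a small sketch in Python. TCP gives you a byte stream, not messages, so the first job of any such protocol is framing; the message tags below (a response vs. an unsolicited server push) are purely my invention for illustration:

```python
import struct

# Each message: 1-byte type tag + 4-byte big-endian length prefix + UTF-8 payload.
MSG_RESPONSE = 0  # reply to a client's request (client-server part)
MSG_PUSH = 1      # unsolicited notification from the service (push part)

def encode(kind, payload):
    """Frame one message for sending over the TCP stream."""
    body = payload.encode("utf-8")
    return struct.pack("!BI", kind, len(body)) + body

def decode(buffer):
    """Try to extract one message from received bytes.

    Returns (kind, payload, remaining_bytes), or None if the buffer does
    not yet hold a complete message (more data must be received first).
    """
    if len(buffer) < 5:
        return None
    kind, length = struct.unpack("!BI", buffer[:5])
    if len(buffer) < 5 + length:
        return None
    return kind, buffer[5:5 + length].decode("utf-8"), buffer[5 + length:]
```

With framing like this in place, the service can write a `MSG_PUSH` frame to a subscribed client's socket at any moment, which is exactly the server push that makes polling unnecessary.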
—SA