1) Both variants are very bad. This is a custom networking application, so it gives you enough freedom to avoid this polling approach. You need to use push, not pull, technology (please see: http://en.wikipedia.org/wiki/Pull_technology[^], http://en.wikipedia.org/wiki/Push_technology[^]). A polling approach cannot be efficient in principle. Here is what you can do: the server accepts any number of clients. You need to develop a two-way custom application-level protocol (http://en.wikipedia.org/wiki/Application_layer[^]) with data going from the server to a client when some new data in the field requested by that client becomes available. The server side should keep track of the set of current clients. This pattern is called "publisher/subscriber". A client should connect and remain connected. The connection itself can be treated as the subscription, but your protocol can also define how a client submits or modifies its subscription details.
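To make the idea concrete, here is a minimal publisher/subscriber sketch in Python (illustrative only; the class and message format are my assumptions, not your protocol). Clients connect and stay connected; the server remembers them and pushes data to all of them as soon as it becomes available, with no polling:

```python
# Minimal publisher/subscriber sketch (hypothetical names, not a real protocol).
# A connection is treated as a subscription: the server keeps the set of
# currently connected clients and pushes updates to all of them.
import socket
import threading
import time

class PubSubServer:
    def __init__(self, host="127.0.0.1", port=0):
        self._listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._listener.bind((host, port))
        self._listener.listen()
        self.address = self._listener.getsockname()
        self._clients = set()            # the "memorized" set of subscribers
        self._lock = threading.Lock()
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        while True:
            conn, _ = self._listener.accept()
            with self._lock:
                self._clients.add(conn)  # connection itself == subscription

    def publish(self, message: bytes):
        """Push new data to every connected client; drop dead connections."""
        with self._lock:
            dead = set()
            for conn in self._clients:
                try:
                    conn.sendall(message + b"\n")
                except OSError:
                    dead.add(conn)
            self._clients -= dead

# Usage: the client just connects once and then reads pushed updates.
server = PubSubServer()
client = socket.create_connection(server.address)
client.settimeout(5)
for _ in range(100):                     # wait until the server registers us
    with server._lock:
        if server._clients:
            break
    time.sleep(0.01)
server.publish(b"field updated")
print(client.recv(64).decode().strip())  # prints "field updated"
```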
2) No. Both your current variant and your proposed variant are bad. It's a common fallacy to create some unpredictable number of threads in a service. Such a service would be at the mercy of client behavior: clients could easily overwhelm its resources and ultimately crash it. The number of threads should be constant, or defined at the very beginning of run time (by reading some persistent configuration, for example). Actually, a service needs only two network threads: one accepting new connections, and another performing all the data exchange with all connected clients. If your service host has many CPUs/cores, you can add more threads, a little more than the number of cores, and divide the data-exchange workload between them. A thread pool is much better than a thread created per request (or anything similar), but a fixed number of threads is best for this kind of application.
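A sketch of that fixed-thread layout, assuming Python and a trivial uppercase-echo service (both are my illustrative choices): one thread only accepts connections, and a second thread services all connected sockets through a `selectors` multiplexer, so the thread count never depends on the number of clients:

```python
# Fixed number of threads, independent of the number of clients:
# one accept thread + one I/O thread multiplexing all connections.
import selectors
import socket
import threading

sel = selectors.DefaultSelector()

def accept_thread(listener):
    """Single thread dedicated to accepting new connections."""
    while True:
        conn, _ = listener.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ)

def io_thread():
    """Single thread performing all data exchange with all clients."""
    while True:
        for key, _ in sel.select(timeout=0.1):
            conn = key.fileobj
            try:
                data = conn.recv(4096)
            except OSError:
                data = b""
            if data:
                conn.sendall(data.upper())  # trivial stand-in for real work
            else:
                sel.unregister(conn)
                conn.close()

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))
listener.listen()
threading.Thread(target=accept_thread, args=(listener,), daemon=True).start()
threading.Thread(target=io_thread, daemon=True).start()

# Usage: any number of clients can connect; the server still has two threads.
client = socket.create_connection(listener.getsockname())
client.settimeout(5)
client.sendall(b"hello")
print(client.recv(64).decode())  # prints "HELLO"
```

To scale across cores, you would start a few more `io_thread`-style workers (each with its own selector) and spread the accepted sockets among them, keeping the total thread count fixed.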
For some related ideas, please see my past answer: Multiple clients from same port number[^].
—SA