I am working on a UDP server-client application. I want my server to be able to handle 40 clients at a time. I have thought of creating 40 threads on the server side, each thread handling one client. Clients are distinguished on the basis of IP address, and there is one thread for each unique IP address. Whenever a client sends some data to the server, the main thread extracts the IP address of the client and decides which thread will process this specific client. Is there any better way to achieve this functionality? I really need help with this :(

The main question is: Do you need to preserve context information from one UDP request message to the next?

If not, i.e. if every UDP request contains all the information that is needed to answer it, there is no reason to dispatch the requests to multiple threads other than using the power of multiple processors (or processor cores). For that purpose it is sufficient to have the same number of threads as you have processors and to hand each request to the thread with the smallest input queue. (Yes, you will need request queues in that case.)
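A minimal sketch of that stateless scheme, in Python purely for illustration (the worker count, `handle_request`, and the sample payloads are all invented for the example):

```python
import os
import queue
import threading

NUM_WORKERS = os.cpu_count() or 2            # one thread per processor
queues = [queue.Queue() for _ in range(NUM_WORKERS)]
results = queue.Queue()

def handle_request(payload):
    # Stateless: everything needed to answer is in the payload itself.
    return payload.upper()

def worker(q):
    while True:
        payload = q.get()
        if payload is None:                  # shutdown sentinel
            break
        results.put(handle_request(payload))

threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
for t in threads:
    t.start()

# Main thread: route each incoming datagram to the least-loaded worker.
for msg in ["ping", "hello", "stats"]:
    target = min(queues, key=lambda q: q.qsize())
    target.put(msg)

for q in queues:
    q.put(None)                              # stop all workers
for t in threads:
    t.join()

answers = sorted(results.queue)
print(answers)                               # ['HELLO', 'PING', 'STATS']
```

The point is that no worker needs to know anything about any client; the queue pick is purely a load-balancing decision.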

If yes, the one-thread-per-client approach is definitely useful, because you can keep the control flow within the thread and use a per-thread data area to store the context information.
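The thread-per-client idea can be sketched like this (the IP addresses and the message counter are invented; the per-client context lives entirely inside its own thread, so it needs no locking):

```python
import queue
import threading

results = {}                                  # filled in after join()

def client_thread(name, inbox):
    context = {"messages_seen": 0}            # per-thread, per-client state
    while True:
        msg = inbox.get()
        if msg is None:                       # shutdown sentinel
            break
        context["messages_seen"] += 1
    results[name] = context["messages_seen"]

inboxes = {name: queue.Queue() for name in ["10.0.0.1", "10.0.0.2"]}
threads = [threading.Thread(target=client_thread, args=(n, q))
           for n, q in inboxes.items()]
for t in threads:
    t.start()

# Main thread dispatches by client IP, as the question describes.
inboxes["10.0.0.1"].put("a")
inboxes["10.0.0.1"].put("b")
inboxes["10.0.0.2"].put("c")
for q in inboxes.values():
    q.put(None)
for t in threads:
    t.join()
print(results)                                # {'10.0.0.1': 2, '10.0.0.2': 1}
```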

There is also an approach that lies right in the middle of these two extremes: You store the per-client context information in a data block. When a new request comes in, you schedule it to one of the relatively few per-processor threads and attach the per-client information block, so that the thread can pick up where it left off after the last message exchange with the client. This approach was used in earlier times, when multi-threading was not a standard feature of operating systems. It uses fewer resources than the thread-per-client approach, but is a lot harder to implement. The additional programming effort might pay off when you are dealing with several hundred or thousand clients at a time.
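A rough sketch of that middle approach (the lock, the `count` field, and the client ids are my own illustration; a real design would also serialise each client's requests so two workers never update the same block concurrently):

```python
import queue
import threading

work = queue.Queue()
contexts = {}                         # client id -> per-client context block
contexts_lock = threading.Lock()

def worker():
    while True:
        item = work.get()
        if item is None:              # shutdown sentinel
            break
        client, msg = item
        with contexts_lock:
            # Fetch (or create) this client's block and "pick up where
            # we left off"; the lock guards the shared dict and block.
            ctx = contexts.setdefault(client, {"count": 0})
            ctx["count"] += 1

workers = [threading.Thread(target=worker) for _ in range(2)]
for t in workers:
    t.start()

for req in [("A", "m1"), ("B", "m1"), ("A", "m2")]:
    work.put(req)
for _ in workers:
    work.put(None)
for t in workers:
    t.join()

print(contexts)                       # {'A': {'count': 2}, 'B': {'count': 1}}
```

Two worker threads end up serving three requests from two clients, with the per-client state surviving across whichever thread handled the previous message.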
 
 
A one-thread-per-client solution will work on recent-ish hardware at that scale, but it is probably not the best solution. Many servers use a thread pool rather than one thread per client. This means that you run only up to a maximum number of threads, ideally the limit at which the machine is fully loaded. Each client request waits in a queue and is serviced by the next thread that becomes available. Thread startup and shutdown overhead is reduced because threads don't end just because a client goes away; they are only suspended (returned to the pool) if there is nothing to process from any client.

Two things are critical to this kind of design: a very good thread-safe request queue, and very carefully sync-protected client state. Because each thread serves a sequence of different clients rather than just one, any per-thread state must be kept separate from the client state, and it must be safe and efficient to pass the client state from thread to thread.
Each client's requests usually also need to be serialised to ensure that they are processed in order.
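One common way to get that per-client ordering with a pool (this hashing scheme is my illustration, not something the answer prescribes) is to key each client onto a fixed worker queue, so one client's requests can never run concurrently or out of order:

```python
import queue
import threading

NUM_WORKERS = 3
queues = [queue.Queue() for _ in range(NUM_WORKERS)]
log = {i: [] for i in range(NUM_WORKERS)}     # what each worker processed

def worker(i):
    while True:
        item = queues[i].get()
        if item is None:                      # shutdown sentinel
            break
        log[i].append(item)                   # strictly FIFO per queue

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

def dispatch(client, msg):
    # Same client id always hashes to the same worker queue.
    queues[hash(client) % NUM_WORKERS].put((client, msg))

for n in range(3):
    dispatch("10.0.0.7", f"req{n}")           # same client -> same queue

for q in queues:
    q.put(None)
for t in threads:
    t.join()

seq = [m for entries in log.values() for (_, m) in entries]
print(seq)                                    # ['req0', 'req1', 'req2']
```

The trade-off is that hashing pins a busy client to one worker; a fancier design migrates the client-state block between workers, as the other answer describes, at the cost of the synchronisation discussed above.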

In some scenarios there are also radically different solutions, such as processing each request in a separate process that uses IPC to communicate with the server. This only makes sense if each job is a very large or long-term request, like receiving a download from a Mars rover, communicating with a serial-port device, or something else inherently slow.
 
 
Comments
Sergey Alexandrovich Kryukov 28-Mar-13 10:37am    
5ed.
—SA
Matthew Faithfull 28-Mar-13 11:04am    
Thanks, they're easy when it's a problem you've had to solve repeatedly yourself :-)

I've voted the question back up to a reasonable level as I thought it was a reasonable question. Someone had objected to it for some reason and handed out a 1.
H.Brydon 28-Mar-13 16:05pm    
I +5'd both Solutions and the question, as you did, for the same reason...
ayesha hassan 29-Mar-13 0:50am    
Thank you everyone for the help, but since I am a beginner in this, I am a little confused: what happens if I create 40 threads and a few of the threads try to write to the socket simultaneously? (Since there is only one socket for data reception and sending.)
Matthew Faithfull 29-Mar-13 3:48am    
Why only one socket?
You can still do this by having a single thread that handles the socket, and using a read queue and a write queue to talk between that thread and a group of processing threads. This model is often used in complex systems where a single connection may be routed to a number of different subsystems, but I don't understand why you would limit yourself to one socket in this case.
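That single-thread-owns-the-socket model can be sketched roughly like this (Python for illustration; the echo "processing", the timeouts, and the single worker are all invented for the example, and only the I/O thread ever touches the socket):

```python
import queue
import socket
import threading

read_q = queue.Queue()                        # socket thread -> workers
write_q = queue.Queue()                       # workers -> socket thread

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                 # OS picks a free port
server.settimeout(0.2)

def io_thread():
    # Only this thread reads from or writes to the socket, so no two
    # threads can ever use it simultaneously.
    while True:
        try:
            data, addr = server.recvfrom(2048)
            read_q.put((data, addr))
        except socket.timeout:
            pass
        try:
            data, addr = write_q.get_nowait()
            if data is None:                  # shutdown sentinel
                return
            server.sendto(data, addr)
        except queue.Empty:
            pass

def worker():
    data, addr = read_q.get()
    write_q.put((data.upper(), addr))         # "process" one request

io = threading.Thread(target=io_thread)
io.start()
threading.Thread(target=worker).start()

# A client exercising the server over loopback.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"hello", server.getsockname())
reply, _ = client.recvfrom(2048)
print(reply)                                  # b'HELLO'

write_q.put((None, None))                     # stop the I/O thread
io.join()
server.close()
client.close()
```

A production version would use `select`/polling instead of a receive timeout, but the structure is the same: the queues decouple the processing threads from the one thread that owns the socket.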

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


