Hey guys

I'm busy working on an asynchronous TCP server & client.

I'm starting to understand the ideas and concepts behind this design pattern, and I feel pretty comfortable with it.

What I'd like to know is: when I create a read buffer for a client, how big should I make it? 256 bytes? 512 bytes? 1024 * 10 bytes?

To stress test this I have multiple clients that send plain-text messages to the server in an infinite loop, roughly 1 GB of traffic per minute. The server then just echoes each message to all the other connected clients.

I've seen some weird things happen, like packets or even single characters within the packets going missing. I think it has something to do with the buffers, because it works fine when I'm not sending messages in an infinite loop.

Any thoughts and ideas or pointers are more than welcome :-)

Thanks

Buffer size isn't the issue so much as ensuring you read all incoming data. In other words, when you call read() on the stream and it returns the entire buffer size as the bytes read, you need to handle that case and keep reading, because more of the message is almost certainly still waiting. If you don't, you risk losing some data.

On a side note, TCP guarantees delivery (barring catastrophic failure), so if you're getting corrupted data, it's most likely from your application.
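
The post doesn't include any code, so here is a minimal Java sketch of what "keep reading until you have the whole message" can look like, assuming a blocking InputStream and a 4-byte length prefix in front of every message (the framing scheme and the class name are illustrative assumptions, not something from the question):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Minimal sketch: length-prefixed framing over a blocking stream.
// The 4-byte prefix and the class/method names are illustrative assumptions.
public final class MessageReader {
    private final DataInputStream in;

    public MessageReader(InputStream raw) {
        this.in = new DataInputStream(raw);
    }

    // Blocks until one complete message has been read, no matter how the
    // bytes were split across TCP segments or individual read() calls.
    public byte[] readMessage() throws IOException {
        int length = in.readInt();          // 4-byte big-endian length prefix
        byte[] payload = new byte[length];
        in.readFully(payload);              // loops internally until every byte arrives
        return payload;
    }
}
```

With framing like this, the read buffer size stops being a correctness question; it only affects how much data you pull per call, not whether characters go missing.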
 
Just to check: you are using TCP? If so, all messages are guaranteed [for a given value of guaranteed] to arrive complete.

If you are expecting high volume, then the first thing to do is make sure your receiver takes the data off the inbound socket as quickly as possible and hands it to a queue that you manage on a different thread.

A common mistake is to process data directly off the TCP/IP pipe and expect the world to be sunny.
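
As a rough illustration of that hand-off, here is a minimal Java sketch that drains received messages onto a BlockingQueue and processes them on a separate worker thread; the class and method names are my own, not taken from the thread above:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of the receive hand-off: the socket-reading thread only
// enqueues, and a separate worker thread does the slower processing.
// The names and the queue choice are illustrative assumptions.
public final class ReceiveHandoff {
    private final BlockingQueue<byte[]> inbound = new LinkedBlockingQueue<>();

    // Called from the socket-reading thread: enqueue and return immediately,
    // so the reader can go straight back to read() on the stream.
    public void onDataReceived(byte[] message) throws InterruptedException {
        inbound.put(message);
    }

    // Runs on its own worker thread: take messages off the queue and handle
    // them without ever blocking the socket reader.
    public void processLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            byte[] message = inbound.take();   // blocks until a message is available
            handle(message);
        }
    }

    private void handle(byte[] message) {
        // application-specific work, e.g. echoing to the other connected clients
    }
}
```

The point is that the socket reader never does more than copy bytes and enqueue them, so the inbound TCP window keeps draining even when the per-message processing is slow.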
 