
Use of free SocketPro package for creating super client and server applications

A set of socket libraries for writing distributed computing applications over the internet

<h3 align="center">Net data communication speed test and client side development guide
with SocketPro</h3>

<ol>
  <li>Purpose<br>
    This short article explains how to use NetBaseR.dll, the core component of the SocketPro
    package, for client side development. You are expected to be familiar with Visual C++ and
    MFC. In particular, it gives you speed data showing that SocketPro can beat DCOM (MTS) in
    many respects: speed, parallel computation, and non-blocking calls without the use of any
    worker threads in a client application.</li>
  <li>Code<br>
    As you can see from the file SpeedTestDlg.cpp, the sample application code is very simple.
    Before using any SocketPro-based class objects, your code must first call the global
    function InitSocket(). Your code should also call UninitSocket() after all
    CAsySocket-derived objects have been destroyed. A minimal sketch of this initialization
    pattern is shown after this list.</li>
  <li>Speed data<br>
    Hardware: client -- P133 with 48 MB RAM; server -- P700 with 256 MB RAM; Fast Ethernet
    10/100 network cards + 5-port Ethernet hub (10 Mbps) + two Category 5 100BaseTX network
    cables ($50 in total, LinkSys NC100).<br>
    Software: client -- Windows 95; server -- Windows NT 4.0 Server.<p>Table 1. Speed
    comparison among different methods, averaged over 1000 calls. Values are speeds relative
    to DCOM (DCOM = 1). </p>
    <table width="71%" border="1">
<TBODY>
      <tr>
        <td width="39%">&nbsp;</td>
        <td width="17%">Asyn/Batch</td>
        <td width="17%">Asyn/Nagle</td>
        <td width="21%">Syn/OneByOne</td>
        <td width="29%">DCOM</td>
      </tr>
      <tr>
        <td width="39%">Speed AutoDetect (100 Mb/10Mb Full-Duplex)</td>
        <td width="17%">7-45</td>
        <td width="17%">4-7</td>
        <td width="21%">2.3</td>
        <td width="29%">1 (2.5 ms/call)</td>
      </tr>
      <tr>
        <td width="39%">10Mb Full-Duplex</td>
        <td width="17%">9-52</td>
        <td width="17%">28-60</td>
        <td width="21%">1</td>
        <td width="29%">1 (25.5 ms/call)</td>
      </tr>
</TBODY>
    </table>
    <p>Under all tests and conditions, the SocketPro applications always ran faster than DCOM.
    I hope this test data is attractive to you. After running the sample, please leave me a
    message here, or contact me privately, if you find that your DCOM speed data contradicts
    these results.</p>
  </li>
  <li><a href="fundamentals.htm">Theories and explanation</a><br>
    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The VB sample project and code are placed in
    the folder of samples\client\dnssvs\. Look at the code, you will see two socket
    connections are built, once clicking the button Connect. Both of the two connections run
    in non-blocking mode. When clicking button Batch calls 1 or button Batch calls 2, you send
    a batch of requests to the above SocketPro Server to process in parallel. The server knows
    how to assign requests onto different threads/message queues. When clicking the button All
    running in parallel, two sets of batch calls are sent to the SocketPro server. In the
    server side, five threads/message queues work togather to process these requests
    concurrently. It is notified that the SocketPro server threads will be killed about 60
    second after all of these requests are processed. Certainly, you can click the button All
    running in parallel again, and send another two sets of batch calls to the SocketPro
    server for processing. For details of theories, click <a href="fundamentals.htm">here</a>.<br>
    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </li>
  <li>Advantages of the sample client-server system<br>
    The system is designed with many advantages. Among them are four big ones: speed,
    non-blocking/blocking calls, parallel computation, and no worker threads involved in your
    code on either the client or the server side. On the server side, SocketPro handles all of
    the threads, message queues and other details for you. The server code could hardly be
    simpler.</li>
  <li>Non-blocking deadlock<br>
    Don't be afraid of the deadlock problem with non-blocking sockets. You can easily predict
    when the deadlock could happen and what method can be used to prevent it. As described in
    the article <em><a href="fundamentals.htm">Fundamentals about data communication using
    socket with SocketPro</a></em>, client sending, server processing and server sending can
    all happen at the same time and in parallel if enough requests originate from a client. If
    too many requests are sent to a server in one batch, both the client socket receiving
    buffer and the server sending buffer become completely filled with returned results while
    the client is still sending requests. In that case, a non-blocking deadlock occurs. To
    prevent it, a batch must not contain too many requests: to be safe, the returned results
    of a batch of requests should not exceed the combined size of the client side socket
    receiving buffer and the server side sending buffer. You can also increase the sizes of
    the client side receiving buffer and the server side sending buffer, or use a socket
    function to peek at whether too much returned data is filling the client side receiving
    buffer while the client is still sending requests; a minimal Winsock sketch of these
    precautions is shown after this list.<br>
    Finally, the deadlock cannot happen with a blocking socket, and in most cases you will not
    meet it with a non-blocking socket either.</li>
  <li>Service is available now<br>
    If you need us to do something for you, send us a message by email at
    <a href="mailto:yekerui@yahoo.com">yekerui@yahoo.com</a>. Service is available now!</li>
</ol>
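<p>The following is a minimal sketch of the initialization pattern described in the Code
section above. It assumes that InitSocket() and UninitSocket() are declared in the header
shipped with NetBaseR.dll and take no arguments, and the application class name is made up for
illustration; check the headers of your SocketPro distribution for the exact declarations.</p>
<pre>
// Sketch only: the SocketPro header name and the exact signatures of
// InitSocket()/UninitSocket() are assumptions; use the header shipped with NetBaseR.dll.
#include &lt;afxwin.h&gt;          // MFC core
// #include "NetBase.h"        // hypothetical SocketPro header for NetBaseR.dll

class CSpeedTestApp : public CWinApp
{
public:
    virtual BOOL InitInstance()
    {
        // Initialize the SocketPro runtime before any CAsySocket-derived
        // object is created.
        InitSocket();

        // ... create the dialog and its CAsySocket-derived client objects here ...

        return TRUE;
    }

    virtual int ExitInstance()
    {
        // All CAsySocket-derived objects must already be destroyed at this point.
        UninitSocket();
        return CWinApp::ExitInstance();
    }
};

CSpeedTestApp theApp;   // the one and only application object
</pre>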
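<p>The following is a minimal Winsock sketch of the deadlock precautions described in the
Non-blocking deadlock section above: enlarging the client side receiving buffer and draining
returned results while a batch is still being sent. It is written against the plain Winsock
API rather than SocketPro's CAsySocket classes, and the buffer size and the commented-out
ProcessReply() handler are assumptions used for illustration only.</p>
<pre>
// Sketch only: plain Winsock, not the SocketPro API; s is a connected socket
// already switched to non-blocking mode with ioctlsocket(s, FIONBIO, ...).
#include &lt;winsock2.h&gt;
#pragma comment(lib, "ws2_32.lib")

// Read whatever results have already arrived so the client side receiving
// buffer never fills up while requests are still being pushed out.
static void DrainPendingReplies(SOCKET s)
{
    for (;;)
    {
        u_long pending = 0;
        // "Peek" at how much returned data is already queued on the socket.
        if (ioctlsocket(s, FIONREAD, &amp;pending) != 0 || pending == 0)
            break;

        char buf[4096];
        int got = recv(s, buf, sizeof(buf), 0);
        if (got > 0)
        {
            // ProcessReply(buf, got);   // hypothetical result handler
        }
        else
        {
            break;   // connection closed or nothing readable after all
        }
    }
}

// Send one batch of requests on the non-blocking socket, reading replies as we go.
bool SendBatch(SOCKET s, const char* batch, int total)
{
    // Enlarge the receiving buffer so a large batch of returned results does not
    // fill it while requests are still being sent (256 KB is an arbitrary example).
    int rcvBuf = 256 * 1024;
    setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char*)&amp;rcvBuf, sizeof(rcvBuf));

    int sent = 0;
    while (total > sent)
    {
        int n = send(s, batch + sent, total - sent, 0);
        if (n > 0)
        {
            sent += n;
        }
        else if (WSAGetLastError() == WSAEWOULDBLOCK)
        {
            // The sending side is full; make room on the receiving side instead
            // of spinning, which is exactly what breaks the deadlock cycle.
            DrainPendingReplies(s);
            Sleep(1);
        }
        else
        {
            return false;   // a real socket error
        }
    }
    DrainPendingReplies(s);
    return true;
}
</pre>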


