Today, a colleague asked me why his simple SELECT query was taking around 3000 ms (3 seconds) to execute in SQL Server Management Studio, while the same query was quite fast when executed from an application.
The answer is simple: SQL Server Management Studio uses the RBAR (Row By Agonizing Row) method to fetch rows, acknowledging each row to SQL Server as it is received. An application that does not use RBAR, on the other hand, sends a single acknowledgement after the whole batch is received, and is consequently faster than SSMS or any other application that fetches row by row.
To confirm that the query was running slow purely because of this RBAR factor, I used Extended Events to analyze the waits for a single session, a method well documented by Paul Randal. The output was as follows:
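For reference, a single-session wait-stats capture along the lines Paul Randal describes can be sketched as below. This is a minimal sketch, not his exact script: the session name, file paths, and the SPID value 53 are all placeholders you would replace, and the `asynchronous_file_target` name is the SQL Server 2008-era target (later versions call it `event_file`).

```sql
-- Minimal sketch: capture every wait for one session (hypothetical SPID 53).
-- Replace 53 with the session_id of the query window you are testing,
-- and the file paths with a location SQL Server can write to.
CREATE EVENT SESSION [MonitorWaits] ON SERVER
ADD EVENT sqlos.wait_info
    (WHERE sqlserver.session_id = 53)
ADD TARGET package0.asynchronous_file_target
    (SET filename     = N'C:\Temp\EE_WaitStats.xel',
         metadatafile = N'C:\Temp\EE_WaitStats.xem');
GO

ALTER EVENT SESSION [MonitorWaits] ON SERVER STATE = START;
GO

-- ... now run the slow query from session 53, then stop the capture ...

ALTER EVENT SESSION [MonitorWaits] ON SERVER STATE = STOP;
GO
```

The captured `.xel` file can then be read back with `sys.fn_xe_file_target_read_file` and aggregated by wait type to see where the elapsed time went.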
NETWORK_IO is basically ASYNC_NETWORK_IO when working with Extended Events. According to Books Online, it "Occurs on network writes when the task is blocked behind the network. Verify that the client is processing data from the server."
But you can find a more thorough explanation of this wait type on Karthick P.K.'s blog. He states: "When a query is fired, SQL Server produces the results, places them in the output buffer and sends them to the client/application. The client/application then fetches the results from the output buffer, processes the data and sends an acknowledgement to SQL Server. If the client/application takes a long time to send the acknowledgement, then SQL Server waits on ASYNC_NETWORK_IO (SQL 2005/2008) or NETWORK_IO (SQL 2000) before it produces additional results."
Hence it was proven that our query delay was caused almost entirely by the NETWORK_IO wait (2870 ms out of roughly 3000 ms total). Since we were on the same machine where SQL Server was installed, an actual network problem can be ruled out; it was only the RBAR fetching behavior of SQL Server Management Studio that was causing the delay.