I have a simple query on a single table.
The table contains 10M+ rows.
The query is run during a file loading process, hundreds of thousands of times.
The table is indexed on the relevant fields.
It takes around 20ms to execute. It ain't much, but when you run it 500,000 times it adds up.
Is there any way to reduce it, aside from caching the entire table on the client side and managing it in memory?
SELECT * FROM MyTable WITH (NOLOCK) WHERE  EntryID !=75375653 AND IsRemoved=0 AND CompanyID=4986771 AND Reference=9790490 AND PaymentNumber=6 AND Token='2342342342343' AND (VoucherID=892124 OR ExternalTransactionID='10978186298')

Execution plan just shows an index seek on the single index (79%) + a nested loops inner join (21%).
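A nested loops join in the plan of a single-table query usually means the optimizer built an index union for the `OR` branch. One hedged alternative to try (a sketch only, reusing the column names and literals from the posted query; verify against your own plan) is to split the `OR` into two seeks yourself:

```sql
-- Sketch: same predicate as the posted query, with the OR split into two
-- separate seeks; UNION removes duplicates where both branches match a row.
SELECT * FROM MyTable WITH (NOLOCK)
WHERE EntryID <> 75375653 AND IsRemoved = 0 AND CompanyID = 4986771
  AND Reference = 9790490 AND PaymentNumber = 6 AND Token = '2342342342343'
  AND VoucherID = 892124
UNION
SELECT * FROM MyTable WITH (NOLOCK)
WHERE EntryID <> 75375653 AND IsRemoved = 0 AND CompanyID = 4986771
  AND Reference = 9790490 AND PaymentNumber = 6 AND Token = '2342342342343'
  AND ExternalTransactionID = '10978186298'
```

Whether this beats the index-union plan depends on the indexes available; compare the estimated costs of both forms before committing to it.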
Updated 17-Dec-14 21:16pm
Tomas Takac 18-Dec-14 3:22am    
I would suggest you explain the whole process. How does this query fit into the file loading process? What do you do with the data? Your question is clearly heading this way. BTW, are you sure there is a join in your query plan? Because there is no join in your query.
Shweta N Mishra 18-Dec-14 3:30am    
Which columns are indexed, and in what order?
Jörgen Andersson 18-Dec-14 4:22am    
The question isn't how you can speed up the query performance.
The question is how you can run fewer queries.
If you run the query 500000 times you must have previously loaded the parameters for the query somehow.
Is there a way to turn it into one or maybe just a few queries?
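One way to sketch this suggestion in T-SQL is a table-valued parameter: send a batch of lookup keys in one round trip and join them set-based. This is a hypothetical sketch (the type name, column names, and `@batch` parameter are assumptions, not from the original post):

```sql
-- Hypothetical sketch: batch many single-row lookups into one query.
-- Type and column names are assumptions based on the posted predicate.
CREATE TYPE dbo.LookupBatch AS TABLE (
    EntryID bigint, CompanyID int, Reference bigint, PaymentNumber int,
    Token nvarchar(50), VoucherID bigint, ExternalTransactionID nvarchar(50)
);
GO
-- Inside a procedure taking @batch dbo.LookupBatch READONLY,
-- one set-based query replaces thousands of round trips:
SELECT t.*
FROM @batch b
JOIN MyTable t WITH (NOLOCK)
  ON  t.CompanyID = b.CompanyID
  AND t.Reference = b.Reference
  AND t.PaymentNumber = b.PaymentNumber
  AND t.Token = b.Token
  AND t.EntryID <> b.EntryID
  AND t.IsRemoved = 0
  AND (t.VoucherID = b.VoucherID
       OR t.ExternalTransactionID = b.ExternalTransactionID);
```

Even batches of a few hundred keys per call would cut the 500,000 executions, and the per-query overhead, by orders of magnitude.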
LegendaryMike 19-Dec-14 5:59am    
Hi. My file loading process involves loading large files (100K records) and matching each record separately, using complex logic, against existing data in several tables (aside from this problematic one).
I've already gone through the algorithm and reduced the number of queries to the absolute minimum (unless local caching is used). Local caching is problematic for two reasons:
1. The amount of data is large, and will grow in the future to around 100-300M records.
2. The data being processed affects the data against which I compare. Meaning, after every operation I would have to update the local cache.

Now that I look at the query (the loading has been running for several days now), it has switched to a different plan but did not become any faster. Below is the exact query and the plan:

<?xml version="1.0" encoding="utf-16"?>
<showplanxml xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" version="1.2" build="11.0.2100.60" xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan">
<stmtsimple statementcompid="1" statementestrows="1" statementid="1" statementoptmlevel="FULL" statementoptmearlyabortreason="GoodEnoughPlanFound" statementsubtreecost="0.00657384" statementtext="SELECT * FROM M4U_Operations WITH (NOLOCK) WHERE EntryID !=72003158 AND IsRemoved=0 AND CCCompanyID=4986771 AND ShvaTransmissionReference=7519002 AND PaymentNumber=10 AND Token=N'99999999999999' AND (ShvaVoucherID=600076 OR ExternalTransactionID=N'10212980654')" statementtype="SELECT" queryhash="0xC5BB519BDC66FF50" queryplanhash="0xC618D02498890435">
<statementsetoptions ansi_nulls="true" ansi_padding="true" ansi_warnings="true" arithabort="true" concat_null_yields_null="true" numeric_roundabort="false" quoted_identifier="true" />
<queryplan degreeofparallelism="1" memorygrant="1024" cachedplansize="88" compiletime="21" compilecpu="20" compilememory="792">
<relop avgrowsize="382" estimatecpu="4.18E-06" estimateio="0" estimaterebinds="0" estimaterewinds="0" estimaterows="1" logicalop="Inner Join" nodeid="1" parallel="false" physicalop="Nested Loops" estimatedtotalsubtreecost="0.00657276">
<outputlist>
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="EntryID" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="PaymentNumber" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="TerminalID" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="ShvaTransmissionReference" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="ShvaTerminalID" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="ShvaVoucherID" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="ShvaZField" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="CCCompanyID" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="CreditType" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="TransactionDate" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="TransmissionDate" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="CreditTransferDate" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="CCIdentificationType" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="CCBrand" />
<columnreference database="[DB]" schema="[dbo]" table="[M4U_Operations]" column="ProcessedByRealCompany" />
</outputlist>
<!-- remainder of the plan was truncated in the original post -->
</relop>
</queryplan>
</stmtsimple>
</showplanxml>
LegendaryMike 19-Dec-14 6:04am

1 solution

Caching it on the client absolutely won't help; the database engine should be more efficient at handling the data. I would suggest using the tuning advisor (if you're using SQL Server or Oracle); other database engines have similar tools. The database engine keeps statistics about the queries (and more) and can advise about missing indexes that would really help performance. It is more useful than just the execution plan.
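If the advisor points the same way the plan does, the usual fix for a hot single-row lookup is a covering index, so the seek never has to do a key lookup for the remaining columns. A hedged sketch, using the column names from the question's query (the index name and the exact `INCLUDE` list are assumptions, since `SELECT *` needs every column):

```sql
-- Sketch of a covering index for the posted predicate. Key columns are the
-- equality filters; with SELECT * you would need to INCLUDE the remaining
-- columns (or narrow the SELECT list) for the index to fully cover.
CREATE NONCLUSTERED INDEX IX_MyTable_Lookup
ON MyTable (CompanyID, Reference, PaymentNumber, Token, IsRemoved)
INCLUDE (EntryID, VoucherID, ExternalTransactionID);
```

Note the `OR` on `VoucherID`/`ExternalTransactionID` may still force two seeks or an index union; the advisor will tell you whether a second index on those columns pays for itself against the write load.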

Good luck!

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
