You can try this code snippet; it reads the files in parallel and merges each result into a single DataTable under a lock:
// Requires System.Data, System.Diagnostics and System.Threading.Tasks.
object padlock = new object();
DataTable dtMerge = new DataTable();

ParallelLoopResult result = Parallel.For(0, strFilePath.Count, index =>
{
    Debug.WriteLine("Iteration {0} : {1}", index, strFilePath[index]);

    // Each iteration reads its own file independently; only the merge has to
    // be serialized, because DataTable is not thread-safe for writes.
    clsNewClass objCls = new clsNewClass();
    DataTable dt = objCls.ReadCSV(strFilePath[index]);
    lock (padlock)
    {
        dtMerge.Merge(dt);
    }
});

Debug.WriteLine(result.IsCompleted
    ? "Result: Completed Normally"
    : String.Format("Result: Completed to {0}", result.LowestBreakIteration));
You should probably also look into how you read the data from each CSV file. With that many files, each with that many lines, small inefficiencies in the reading code add up quickly: even one extra millisecond per file turns into a full second across a thousand files.
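If your current ReadCSV makes more than one pass over the file, a single streaming read is usually enough. Here is a minimal sketch of what that could look like, assuming a plain comma-delimited file with a header row and no quoted fields (those assumptions are mine, not taken from your code); it needs System.Data and System.IO:

// A minimal single-pass CSV reader - a sketch, not your actual clsNewClass.ReadCSV.
public DataTable ReadCSV(string filePath)
{
    DataTable dt = new DataTable();
    using (StreamReader reader = new StreamReader(filePath))
    {
        string headerLine = reader.ReadLine();
        if (headerLine == null)
            return dt; // empty file

        // Assumption: the first line holds the column names.
        foreach (string column in headerLine.Split(','))
            dt.Columns.Add(column.Trim());

        string line;
        while ((line = reader.ReadLine()) != null)
        {
            // Assumption: no quoted fields or embedded commas; if the data can
            // contain them, use a real parser such as TextFieldParser instead.
            dt.Rows.Add(line.Split(','));
        }
    }
    return dt;
}

Note that Rows.Add throws if a line has more fields than there are columns, so guard against ragged rows if your data can contain them.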