Using lock makes performance slow

C#
Parallel.For(0, newDt3.Rows.Count, row =>
{
    for (int col = 0; col < newDt3.Columns.Count; col++)
    {
        lock (table)
        {
            if (col == 0 || col == 1)
                continue;
            table.AddCell(new Phrase(newDt3.Rows[row][col].ToString(), normalFont));
        }
    }
});


What I have tried:

Initially I used a plain for loop for the large record set, then I switched to Parallel.For. But when I add a cell to the table it crashes, so I added a lock, which makes performance slow.
Posted
Updated 10-Aug-16 9:04am
v2
Comments
F-ES Sitecore 10-Aug-16 8:33am
   
It doesn't look like that code is suitable for parallel processing; not all code is. You need the loop to run in a specific order, and you can't guarantee that with parallel processing.

Of course it does. You have essentially made this code single-threaded, because you're locking on every single column access of every single row. Only one thread can get through the lock at a time.

There isn't enough information provided to reliably tell you how to get around this or how to rewrite to avoid the lock. We don't know what newDt3 and table are and don't know what your operation is supposed to be doing or its business rules.
   
Comments
Member 8358871 10-Aug-16 8:49am
   
table is my iTextSharp PdfPTable, to which I am adding cells. My requirement is exporting a large amount of data to PDF, around 80,000 records. Please provide me some solution.
Dave Kreskowiak 10-Aug-16 12:45pm
   
I don't think you have a threadable solution available to you; the problem doesn't lend itself to multithreading. You're trying to build a visual table, which must be assembled in a specific order, and UI rendering problems are not threadable by their very nature.

Normally, the solution in other UI frameworks is to not show 80,000 records but just a single page of data at a time. But you can't do that since you're building an entire PDF. There is no opportunity to "page" only the visible data since it's all visible at the same time.

Frankly, I wouldn't want to look through hundreds of pages of a PDF to find the rows I'm looking for. It just doesn't make any sense to output that much data to a book. Even at 100 records per page, you're looking at a PDF of about 800 pages.

I'd probably delay writing this PDF for a night-time operation where a service runs that is dedicated to writing this PDF. Yes, it's still going to take a while but, in the morning, it would be available for anyone who needs it without having to build it on demand.
Obviously, using a lock will slow things down.

Parallel.For only helps if all the code being called is thread-safe, and table.AddCell() does not appear to be.

Don't optimize unless you specifically have a problem (and in that case you can achieve orders of magnitude more by better algorithms).
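
One way to keep the parallelism without the lock, along the lines of the advice above, is to split the work into two phases: a parallel phase that does the per-cell formatting with no shared state (each task writes only its own row's slot in a pre-sized array), and a sequential phase that feeds the results to the non-thread-safe consumer in row order. This is only a sketch with hypothetical stand-ins: since we don't know what newDt3 really is, a jagged string array plays its role here, and a counter stands in for table.AddCell.

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    public static void Main()
    {
        // Hypothetical stand-in for newDt3: a jagged array of cell values.
        int rows = 1000, cols = 5;
        var data = new string[rows][];
        for (int r = 0; r < rows; r++)
        {
            data[r] = new string[cols];
            for (int c = 0; c < cols; c++) data[r][c] = $"r{r}c{c}";
        }

        // Phase 1 (parallel): per-cell formatting with no shared state.
        // Each iteration writes only formatted[r], so no lock is needed.
        var formatted = new string[rows][];
        Parallel.For(0, rows, r =>
        {
            var rowCells = new string[cols - 2];
            for (int c = 2; c < cols; c++)       // skip columns 0 and 1, as in the question
                rowCells[c - 2] = data[r][c].ToString();
            formatted[r] = rowCells;
        });

        // Phase 2 (sequential): hand the results to the non-thread-safe
        // consumer in row order — no lock at all. The counter stands in
        // for table.AddCell(new Phrase(cell, normalFont)).
        long added = 0;
        for (int r = 0; r < rows; r++)
            foreach (var cell in formatted[r])
                added++;

        Console.WriteLine(added);                // 1000 rows * 3 kept columns = 3000
    }
}
```

Whether this is actually faster depends on how expensive the formatting is relative to AddCell; if AddCell dominates, the sequential phase will still be the bottleneck, as the answers above point out.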
   
You can optimize the code a little, but basically it is not suitable for parallel execution.
C#
Parallel.For(0, newDt3.Rows.Count, row =>
{
    for (int col = 2; col < newDt3.Columns.Count; col++)
    {
        lock (table)
        {
            table.AddCell(new Phrase(newDt3.Rows[row][col].ToString(), normalFont));
        }
    }
});
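
If row order happens to be irrelevant for the table being built, the lock can at least be coarsened: take it once per row instead of once per cell, building the row's cells locally outside the lock. A self-contained sketch of that pattern, using a List<string> as a hypothetical stand-in for the non-thread-safe PdfPTable:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class CoarseLockDemo
{
    public static void Main()
    {
        int rows = 200, cols = 5;
        // Stand-in for the PdfPTable: any non-thread-safe collection.
        var table = new List<string>();

        Parallel.For(0, rows, row =>
        {
            // Build this row's cells locally, outside the lock.
            var cells = new List<string>();
            for (int col = 2; col < cols; col++)
                cells.Add($"r{row}c{col}");

            // One lock acquisition per row instead of one per cell.
            // Rows may still arrive in any order, so this only works
            // when row order in the output does not matter.
            lock (table)
            {
                table.AddRange(cells);
            }
        });

        Console.WriteLine(table.Count);   // 200 rows * 3 kept columns = 600
    }
}
```

This cuts lock contention from rows × columns acquisitions down to rows, but it does not change the fundamental point above: the appends themselves are still serialized, and the rows land in nondeterministic order.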
   

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)



