When using a Partitioner with the parallel construct Parallel.ForEach, why should the partitioned chunks be kept large and the number of lock acquisitions kept small?

C#
static double ParallelPartitionerPi()
{
    double sum = 0.0;
    double step = 1.0 / num_steps;   // num_steps is assumed to be a field of the enclosing class
    object monitor = new object();   // guards the shared accumulator "sum"

    // Partitioner.Create(0, num_steps) hands each worker a contiguous range
    // (a Tuple<int, int>), so the lock is taken once per range, not per iteration.
    Parallel.ForEach(Partitioner.Create(0, num_steps),
        () => 0.0,                   // per-thread local accumulator
        (range, state, local) =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                double x = (i + 0.5) * step;
                local += 4.0 / (1.0 + x * x);
            }
            return local;
        },
        local => { lock (monitor) sum += local; });  // merge once per partition
    return step * sum;
}
Posted
Updated 16-Jan-13 5:43am
v3
Comments
phil.o 16-Jan-13 11:48am
Questions are not meant to be deleted (unless they qualify as spam). The fact that the answer you got was not helpful to you does not mean it could not be helpful to someone else.
Deleting the text of your question seems quite rude, IMHO.

1 solution

Every lock acquisition serializes the threads that contend for it, so the performance impact of too many locks defeats the object of multi-threading. Large chunks work in the same direction: the per-partition overhead (scheduling the work item and running the localFinally merge under the lock) is paid once per chunk, so fewer, bigger chunks mean less overhead and less contention.
See http://tipsandtricks.runicsoft.com/CSharp/ParallelClass.html for a fuller explanation.
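As a sketch of how chunk size can be tuned directly, Partitioner.Create also has a three-argument overload that takes an explicit rangeSize. Larger ranges mean fewer partitions, hence fewer localFinally merges and fewer lock acquisitions overall. The num_steps value and range sizes below are illustrative assumptions, not from the question.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ChunkSizeDemo
{
    const int num_steps = 100_000; // assumed value

    public static double PiWithRangeSize(int rangeSize)
    {
        double sum = 0.0;
        double step = 1.0 / num_steps;
        object monitor = new object();
        Parallel.ForEach(
            Partitioner.Create(0, num_steps, rangeSize), // explicit chunk size
            () => 0.0,
            (range, state, local) =>
            {
                for (int i = range.Item1; i < range.Item2; i++)
                {
                    double x = (i + 0.5) * step;
                    local += 4.0 / (1.0 + x * x);
                }
                return local;
            },
            local => { lock (monitor) sum += local; }); // one lock per chunk
        return step * sum;
    }

    static void Main()
    {
        // Few large chunks (few locks) vs many small chunks (many locks):
        // both give ≈ pi, but the second pays far more per-partition overhead.
        Console.WriteLine(PiWithRangeSize(10_000));
        Console.WriteLine(PiWithRangeSize(100));
    }
}
```

With no rangeSize argument, the Partitioner picks a size for you; supplying one lets you see the overhead/contention trade-off directly.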
 
This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


