When using a partitioner object with the parallel construct "Parallel.ForEach", why do we have to keep the partitioned chunks large and the number of lock acquisitions small?

static double ParallelPartitionerPi()
{
    const int num_steps = 100_000_000; // step count; defined outside this method in the original snippet
    double sum = 0.0;
    double step = 1.0 / (double)num_steps;
    object monitor = new object();
    // Partitioner.Create(0, num_steps) hands each task a contiguous range (a Tuple<int, int>),
    // so the lock below is taken once per range rather than once per iteration.
    Parallel.ForEach(Partitioner.Create(0, num_steps),
        () => 0.0,                     // initial thread-local value
        (range, state, local) =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                double x = (i + 0.5) * step;
                local += 4.0 / (1.0 + x * x);
            }
            return local;              // thread-local partial sum, no shared state touched
        },
        local => { lock (monitor) sum += local; }); // one lock per partition
    return step * sum;
}
Posted 19-Dec-12 5:01am
Updated 16-Jan-13 5:43am
phil.o 16-Jan-13 11:48am
Questions are not meant to be deleted (unless they qualify as spam). The fact that the answer you got was not helpful to you does not mean it could not be helpful to someone else.
Deleting the text of your question seems quite rude, IMHO.

1 solution


Solution 1

The performance impact of acquiring too many locks defeats the purpose of multi-threading: every lock acquisition forces threads to serialize and adds contention overhead. Keeping the partitions large means each thread does a substantial amount of independent work between synchronization points, so the shared total is updated (and the lock taken) only once per partition instead of once per iteration.
See[^] for a fuller explanation.
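To make the contrast concrete, here is a sketch (not from the original post; the class name `PiComparison` and the step count are my own choices for the demo) that computes the same pi approximation two ways: once taking the lock on every iteration, and once with range partitioning so the lock is taken only once per chunk. Both produce the same result; the partitioned version avoids per-iteration contention.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

static class PiComparison
{
    const int NumSteps = 1_000_000; // demo value; smaller than a benchmark run would use

    // Anti-pattern: one lock acquisition per iteration.
    // Every thread funnels through the same monitor on every step.
    public static double PiWithLockPerIteration()
    {
        double sum = 0.0;
        double step = 1.0 / NumSteps;
        object monitor = new object();
        Parallel.For(0, NumSteps, i =>
        {
            double x = (i + 0.5) * step;
            double term = 4.0 / (1.0 + x * x);
            lock (monitor) sum += term; // threads serialize here a million times
        });
        return step * sum;
    }

    // Preferred: large chunks with thread-local accumulation.
    // The lock is taken only when a whole partition's subtotal is folded in.
    public static double PiWithPartitions()
    {
        double sum = 0.0;
        double step = 1.0 / NumSteps;
        object monitor = new object();
        Parallel.ForEach(Partitioner.Create(0, NumSteps), () => 0.0,
            (range, state, local) =>
            {
                for (int i = range.Item1; i < range.Item2; i++)
                {
                    double x = (i + 0.5) * step;
                    local += 4.0 / (1.0 + x * x);
                }
                return local; // hot loop touches no shared state
            },
            local => { lock (monitor) sum += local; }); // once per partition
        return step * sum;
    }
}
```

Both methods approximate the integral of 4/(1+x²) over [0, 1], which equals pi; the difference is purely in how often the shared `sum` is synchronized.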

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
