When using a Partitioner with a parallel construct such as Parallel.ForEach, why do we have to keep the partitioned chunks large and the number of locks small?
 
 
static double ParallelPartitionerPi()
{
    double sum = 0.0;
    double step = 1.0 / (double)num_steps;
    object monitor = new object();

    // Partitioner.Create(0, num_steps) hands each worker a contiguous range
    // (a Tuple<int, int>) rather than a single index, so the lock below is
    // taken once per range, not once per iteration.
    Parallel.ForEach(Partitioner.Create(0, num_steps),
        () => 0.0,                       // per-thread local accumulator
        (range, state, local) =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                double x = (i + 0.5) * step;
                local += 4.0 / (1.0 + x * x);
            }
            return local;                // carried to the next range on this thread
        },
        local => { lock (monitor) sum += local; }); // merge once per thread
    return step * sum;
}
 
Posted 19-Dec-12 6:01am
Edited 16-Jan-13 6:43am
Comments
phil.o at 16-Jan-13 11:48am
Questions are not meant to be deleted (unless it is qualified spam). The fact that the answer you got was not helpful to you does not mean it could not be helpful to someone else.
Deleting the text of your question seems quite rude, IMHO.

1 solution

Solution 1

The performance impact of too many locks defeats the purpose of multi-threading: every lock acquisition serializes the threads that contend for it. Large partitions mean each thread does a substantial amount of work on its own thread-local sum between lock acquisitions, so in the code above the lock is taken only once per thread rather than once per iteration.
See http://tipsandtricks.runicsoft.com/CSharp/ParallelClass.html for a fuller explanation.
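For contrast, here is a minimal counter-example (a sketch, not code from the question; the method name and the `num_steps` parameter are assumed) showing what happens if the lock is taken on every iteration instead of once per partition:

```csharp
// Hypothetical naive version: one lock acquisition per iteration.
// With num_steps acquisitions, contention on 'monitor' dominates the
// trivial arithmetic, and this can run slower than a sequential loop.
static double NaivePi(int num_steps)
{
    double sum = 0.0;
    double step = 1.0 / (double)num_steps;
    object monitor = new object();
    Parallel.For(0, num_steps, i =>
    {
        double x = (i + 0.5) * step;
        lock (monitor)                  // contended on every single step
            sum += 4.0 / (1.0 + x * x);
    });
    return step * sum;
}
```

The partitioned version in the question does the same arithmetic but accumulates into a thread-local value and only locks in the final merge step, which is why large chunks and few locks go together.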

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)