We are given a list of N numbers.

We then create 3 partitions A, B and C (these can simply be integers). We want to distribute the numbers among the three partitions so that the partition with the biggest sum differs as little as possible from the second and third sums. In other words, the three partition sums should be as close to each other as possible. The assignment must be linear: each partition must hold a consecutive run of the input numbers, one next to the other.

I will provide an example below to show what I mean.

Let's say we have the following numbers:

*8 5 4 3 2 1 3*

The best solution is:

A = 8
B = 5 + 4 = 9
C = 3 + 2 + 1 + 3 = 9

I wanted to create a linear-complexity solution to the problem, but I don't know how. I created an O(n^2) solution, but I want to be sure whether there is a better one, maybe even logarithmic. I won't upload my code so as not to confuse you guys; if you want it, I will.

The solution can be provided in C, or just described as an algorithm!

Edit: this is my attempt. It is flawed, and you should in no way base your logic on it; I have run many test cases that show it fails!

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* Sum of the first cnt elements of array. */
int sum_array(const int *array, int cnt)
{
    int res;
    for (res = 0; cnt; --cnt)
        res += *array++;
    return res;
}

/* A partition is a half-open range [begin, end) plus its cached sum. */
struct Partition {
    const int *begin;
    const int *end;
    int sum;
};

int indexOfMax(const struct Partition *p, int partcount)
{
    int i, maxIndex;
    for (i = maxIndex = 0; i < partcount; ++i) {
        if (p[i].sum > p[maxIndex].sum)
            maxIndex = i;
    }
    return maxIndex;
}

/* Move the last element of partition t into partition t+1.
   Returns the larger of the two affected sums, INT_MAX for an
   invalid index, or p[t].sum when no move is possible. */
int giveRight(struct Partition *p, int partcount, int t)
{
    if (t >= partcount || t < 0)
        return INT_MAX;
    if (t + 1 >= partcount || (p[t].end - p[t].begin == 1))
        return p[t].sum;
    p[t + 1].begin = --p[t].end;
    int n = *p[t + 1].begin;
    p[t].sum -= n;
    p[t + 1].sum += n;
    if (p[t].sum > p[t + 1].sum)
        return p[t].sum;
    return p[t + 1].sum;
}

/* Mirror image of giveRight: move the first element of partition t
   into partition t-1. */
int giveLeft(struct Partition *p, int partcount, int t)
{
    if (t >= partcount || t < 0)
        return INT_MAX;
    if (t - 1 < 0 || (p[t].end - p[t].begin == 1))
        return p[t].sum;
    int n = *p[t].begin;
    p[t - 1].end = ++p[t].begin;
    p[t].sum -= n;
    p[t - 1].sum += n;
    if (p[t].sum > p[t - 1].sum)
        return p[t].sum;
    return p[t - 1].sum;
}

#define PART_COUNT 3

/* Local-search heuristic: start with the smallest possible A and B,
   then repeatedly try to shift elements away from whichever partition
   currently has the largest sum, until no shift improves the maximum. */
int minmax(const int *array, int N)
{
    const int sum = sum_array(array, N);
    struct Partition p[PART_COUNT] = {
        { &array[0], &array[1], array[0] },
        { &array[1], &array[2], array[1] },
        { &array[2], &array[N], sum - array[0] - array[1] }
    };
    int max, maxIndex, test;
    do {
        maxIndex = indexOfMax(p, PART_COUNT);
        max = p[maxIndex].sum;
        test = giveLeft(p, PART_COUNT, maxIndex);
        if (test >= max) {
            giveLeft(p, PART_COUNT, maxIndex - 1);
            test = p[indexOfMax(p, PART_COUNT)].sum;
            if (test >= max) {
                giveRight(p, PART_COUNT, maxIndex - 2);
                giveRight(p, PART_COUNT, maxIndex - 1);
                test = giveRight(p, PART_COUNT, maxIndex);
                if (test >= max) {
                    giveRight(p, PART_COUNT, maxIndex + 1);
                    test = p[indexOfMax(p, PART_COUNT)].sum;
                    if (test >= max) {
                        giveLeft(p, PART_COUNT, maxIndex + 2);
                        giveLeft(p, PART_COUNT, maxIndex + 1);
                    }
                }
            }
        }
    } while (test < max);
    return max;
}

int main(void)
{
    int N = 0;
    FILE *input = fopen("share.in", "r");
    if (input == NULL)
        return 1;
    fscanf(input, "%d", &N);
    int *array = malloc(N * sizeof(int));
    if (array == NULL) {
        fclose(input);
        return 1;
    }
    for (int i = 0; i < N; i++)
        fscanf(input, "%i", &array[i]);
    fclose(input);
    printf("%d\n", minmax(array, N));
    free(array);
    return 0;
}

So the outcome for your list '8 5 4 3 2 1 3' is not the same as for the list '8 3 2 5 4 3 1', where the best split is 8, 10, 8...

Why is keeping the order a requirement?

What have you tried so far?

—SA

Your posted code does not show any useful evidence. Explain what it does in general and why you think it's broken.

This is an optimization task. Requiring linear execution time may not yield an optimal solution, but perhaps a good-enough one.

The simplest solution is to produce all possible arrangements and select the "best" according to your criteria. There may be multiple such solutions. This might not be the smartest way to approach it, but with so few elements it is certainly feasible.

So, what is your strategy in your algorithm?

Cheers

Andi