
int[] table = new int[] { 0, 1, 2, 3, 4, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, -4, -3, -2, -1 };
x = table[x + 12];
Nice. I think you missed %table.Length though.
Greetings  Jacek





Not if x is only in [-12, 12], which you said at some point..
As an alternative I have a slower way that only uses arithmetic (I just like doing that)
(((abs(x)+4)%12)-4)*sign(x)
int abs(int x)
{
int t = x >> 31;
return (x + t) ^ t;
}
int sign(int x)
{
return 1 | (x >> 31);
}





If x==12 then table[x+12]=table[24] is out of bounds. Also, without %, most of the "lookup table" would be wasted.
David1987 wrote: (I just like doing that)
Which is noticeable.
Greetings  Jacek





Jacek Gajek wrote: If x==12 then table[x+12]=table[24] is out of bounds.
Okay.. add one more item.
Jacek Gajek wrote: Also, without %, most of the "lookup table" would be wasted.
Would it be? x can be any value in [-12, 12], right? That's pretty much all items in that table.. and one more, yes, I missed the last one





Ummm, right. Thanks and good night!
Greetings  Jacek





So I'm trying to use the hash functions provided by .NET in C#. However, the implementation they provide does not offer a way (at least, not an easy one to find) to add bytes to update the current hash sum. Please note, I am NOT talking about using a stream. I need to be able to add from a byte array.
Yes, I have googled around for several hours trying to find a solution, so shut up.
I have a hard time believing that a solution does not already exist out there, as this seems like a necessity for any hash function. So please don't tell me something that "might" work or "should" work. I'm looking for a solution that is known to work.
Thanks for your help.
John





That's a remarkably obnoxious way to ask for help, especially for a relative newbie. And I suggest that you rephrase your question, as I can think of several interpretations of "add from a byte array."
Will Rogers never met me.





First off, I apologize. I was incredibly annoyed when I posted this, but that doesn't excuse me being rude. I'm sorry.
As an example, when using hash functions in C, there are normally 3 functions:
1) Initialize
2) Update
3) Finalize
I'm looking for the equivalent of update, where you can add more bytes to be used. I can see now where the confusion lies in my first question.
Hope this clears things up, and again, sorry,
John





All of the .NET hashes work this way. Maybe you should have used Google; I found this on the very first search I entered.
http://msdn.microsoft.com/en-us/library/system.security.cryptography.hashalgorithm.transformblock.aspx[^]
In order to hash data one block at a time, you need to use the hash algorithm's ICryptoTransform interface. This interface provides two methods of note:
TransformBlock(byte[] inputBuffer, int inputOffset, int inputCount, byte[] outputBuffer, int outputOffset)
TransformFinalBlock(byte[] inputBuffer, int inputOffset, int inputCount)





You are talking about cryptographic hashes, like System.Security.Cryptography.SHA1[^] and friends?
ComputeHash takes a byte array. So I don't understand the question. You can't 'add bytes' to a hash, you need to rehash the modified full data.
If there is a stream solution (and I'm not really in the mood to search for one for you after the tone of your post) then you can use MemoryStream from your array of bytes.





What exact kind of hash function are you talking about?
Many can't be updated that way (e.g. SHA-1)





Hello Everybody,
suppose
A = number of ones
B = number of zeros
C = A + B
now there will be a total of X unique C-bit numbers in which A bits are 1 and B bits are 0
X = (C!) / (A! * B!)
example
A = 4
B = 2
C = 6
X = 6! / (4! * 2!) = 15
01 001111
02 010111
03 011011
04 011101
05 011110
06 100111
07 101011
08 101101
09 101110
10 110011
11 110101
12 110110
13 111001
14 111010
15 111100
now,
is there any algorithm by which we can find the values of these X C-bit numbers without going through each of the 2^C numbers, i.e.
without using the following method:
for(i = 0; i < pow(2, C); ++i)
{
if i contains A 1s and B 0s then add i to list
}
thanks.





Never use pow(2, someint). It's inaccurate and slow. You might as well do 1 << someint.
And yes, there is such an algorithm: you can quickly generate the next bit permutation[^]. Just start out with (1<<k)-1 and keep using that permutation function until you have them all.






Hello Everybody,
I want some help from your side. I am writing an algorithm for Sewer Colony Optimization.
In Ant Colony Optimization the ants generate a flow, but the Sewer Colony Algorithm generates a tree whose parent is the sink.
And sewage is not pumped; it just flows by gravity.
Thanks
If you can think then I Can.





A little clearer description of the problem space would be helpful, but in the meantime, look at the Hazen-Williams Equation[^] for full-pipe flow, or Manning's Formula[^] for open-channel gravity flows.
At equilibrium, pressures at each node must be equal, and the flow volume exiting the node must equal the sum of the individual flows entering it. For calculation purposes I would expect that it's safe to assume that each of the sources has infinite capacity, but if not, you have some additional complexity to deal with. I'm not clever enough to do it myself, but I'd love to see an article on the topic when you're done.
Will Rogers never met me.





Hello,
I want to ask you to read my algorithm for solving the Knapsack Problem and tell me if I'm wrong or if there is a similar algorithm.
If you are interested, I can send you source code, description, etc.
Knapsack problem algorithm.
Nikolay Nedelchev Bulgaria, Sofia email :
I. Integer item weight.
Let's review how the DP algorithm works and focus on the important points.
Let N be the item count, and M the capacity of the knapsack. The DP algorithm constructs a table with dimensions M * N, and calculates for EVERY possible knapsack with capacity <= M all N possible solutions, for items (1), (1,2), (1,2,3)..(1,2..N), i.e. Omega(M*N).
One of the bigger problems with this algorithm is that in practice it has to solve "similar problems" which yield one and the same solution: knapsacks with capacities M-4 and M-5 can have identical solutions, but the DP algorithm has to calculate them both, no matter what.
Let's have a completely different look at the knapsack problem.
Let's ask ourselves how many knapsacks exist which are smaller or equal in capacity to M and have ONLY DIFFERENT BEST SOLUTIONS (solutions with different total value and weight; all solutions with equal total weight and value are considered a single solution, and only one of them is used, no matter which). Let these different solutions form the set S.
It's obvious that the size of S is <= M (e.g. if M = 1000, there is no way that 1001 problems with different solutions exist, when there are only 1000 possible knapsack capacities {M=1000}).
Let's also note that the solution from S with the largest value (which is also the one with the largest weight, because if a solution with greater weight and smaller value existed, it wouldn't be present in the set, as it wouldn't be the BEST solution for any knapsack) will be our solution for the knapsack with capacity M.
This means that if we can build the set S with time and space complexity O(S*N) in sorted form, we can get to the solution of the initial problem by another O(1) operations and because S <= M, this algorithm would be better or at least equal (in the special cases) to the DP algorithm.
The Algorithm:
Let’s have a set of objects O = {oi}, i = 1,2…n, each object having a weight and a value.
Let’s have a set of objects S = {si}, i = 1,2..m, each object being a unique subset of O.
Each object of S has a value which is the total value of all the objects oi in the represented subset of O, and a weight, formed in the same way.
S contains:
1. Every subset of O except those which have a smaller value than at least one of the other subsets with the same or smaller weight (and except those which have the same value as at least one of the other subsets with smaller weight).
see Clarification
2. Only one of the subsets with exact same weights and values, no matter which.
Let S be sorted by descending weights.
The so formed set S, has the following properties:
Property A: The so defined set S contains all the possible best answers for all the possible knapsacks for which there is a valid solution.
Property B: Each object in S is the best answer for at least one possible knapsack.
Property C: The list is sorted by weight.
Property D: The list is sorted by value (because of 1.).
Let’s have a new object p of type O.
Let’s have a new set T = {ti} of type S, which contains all the objects from S, but each subset is augmented with the object p, so the value of each object of T, ti , is equal to the value of si plus the value of p. The same applies to the weight.
T has all the same properties A(*), B(*), C, D.
Let’s add p to T, as a singular subset in the right position, in order to preserve the sort order by weight.
The so formed set T has lost properties B and D, but has kept properties A(*) and C.
* It follows from Property A that the set T contains the solutions to all the possible knapsacks if p should be present in the solution.
As a reminder, S contains all the solutions if p should not be present.
So, we can create a new set S, which is formed from merging the old S and T, keeping with 1. and 2.
The new set S has all the properties A, B, C and D for the new set O including object p.
The time complexity of the construction of the new set S takes O(N) time, with N being equal to the size of the old set S.
If at the time of the execution of the so presented algorithm, all the O objects and the capacity of the knapsack are known, all the sets S can be optimized according to the currently found best solution and input information, i.e. we can add to points 1. and 2. a new point 3.: that the weight of the subsets in S does not exceed the capacity of the knapsack, etc.
Because all the operations described here on S take a linear time, it follows that the overall time complexity of the algorithm is O(S1+S2+S3…SN) (the sum of the sizes of all the sets S i.e. from definition of S : m1 + m2… mn) .
Clarification to 1.
If we have o1(w = 2, v = 9) and o2(w = 4, v = 8).
Then S = { s1 = (o1) , s2 = ( o1 + o2) }
We do not need the o2 object as a singular subset, and we will never need the o2 object in a subset that does not contain the o1 object.
NOTES:
1. With a suitable representation in memory, every object in S can take constant space and the solution can be extracted in O(n) time, n = size of solution. Also in general there is no need to keep all objects of all sets S.
2. If we compare this algorithm to the currently available dynamic programming algorithms, we will note that the sum of the sizes of all the sets S which we have created is smaller than or equal to the size of the table which we would have allocated (and calculated) during DP. In other words, the time and space complexities of the algorithm are always smaller than or equal to those of the DP algorithms.
3. The algorithm can be used with all kinds of input data, including real, negative (as well as very big) numbers as weights.
4. We don’t need to calculate the last set of subsets S
5. The algorithm can be easily transformed to the algorithm of Horowitz and Sahni.
Please tell me about errors and other uncertainties.
modified on Friday, July 1, 2011 12:07 PM





Never post your email address in any forum, unless you really like spam! If anyone replies to you, you will receive an email to let you know.
Real men don't use instructions. They are only the manufacturer's opinion on how to put the thing together.
Manfred R. Bihy: "Looks as if OP is learning resistant."





Hi all,
I am sure there are loads of samples around but I am really not finding a relevant one for some reason (perhaps it's a Friday brain-melt thing)
I have two points in 3D space: I know their coordinates (x, y, z) and I know their rotations (x, y, z).
I know this should be enough information to calculate a line (is it a vector? it's been sooo long since uni)
What I want is to calculate the point (x, y, z) where the lines intersect (if any). The Z axis is actually not important; I want an intersection point on only the X and Y planes, and the Z value of the point will be halfway between the original 2 points' Z values.
Can anyone shed some light, or has a link? All the info I am finding doesn't seem to let me make use of my rotation data.
Thanks for any help
I may or may not be responsible for my own actions





Here is a very good tutorial[^], section 2 has what I think you're looking for. It deals mostly with 2D, but it is easily extended to 3D.
In general, I highly recommend TopCoder's tutorials on algorithms: very down-to-earth, they are often written by highly skilled "competition programmers" for other "competition programmers" (as opposed to typical coursework written by professors for students, often loaded with very important and equally boring details).





Thanks I will take a look into this
I may or may not be responsible for my own actions





I think we need a bit more information; you've said that you have two points in 3-space, but want to calculate the intersection of two lines. That does not compute, Will Robinson. Do you have equations for your two lines? If so, let one line be f(x,y,z) and the other g(x,y,z). Use simple linear algebra to solve for x, y, and z such that f(x,y,z) - g(x,y,z) = 0. If not, you really need to clarify your question.
Will Rogers never met me.





the two points have rotation data, which is how I consider them lines (infinitely long lines running through the known point at the given angle)
I didn't have the line equations which turns out to be what I was actually trying to find.
I have worked it all out now anyway, but thanks for the reply
I may or may not be responsible for my own actions





In general, lines in 3D do not intersect. The closest thing is the line-line closest approach[^], which is too complex to explain here, but that article is good.
The Z axis is actually not important, I want an intersect point on only X and Y planes
This leads me to believe that you actually want 2D line intersection. There are a number of ways to formulate that but a nice one is using the line equations:
Line 1 = (a1,b1) + k1(dx1,dy1) =>
x1 = a1 + k1.dx1 and
y1 = b1 + k1.dy1
Line 2 = (a2,b2) + k2(dx2,dy2) =>
x2 = a2 + k2.dx2 and
y2 = b2 + k2.dy2
Target: L1 = L2 =>
a1 + k1.dx1 = a2 + k2.dx2
b1 + k1.dy1 = b2 + k2.dy2
There are only four unknowns (k1 and k2, and the meeting point (x1,y1)). The bottom two equations eliminate the coordinates, leaving two equations and two unknowns. Now eliminate k2 (k2 = (k1.dy1 + b1 - b2)/dy2):
a1 + k1.dx1 = a2 + (dx2/dy2)(k1.dy1 + b1 - b2)
k1(dx1 - [dx2.dy1/dy2]) = a2 - a1 + (dx2/dy2)(b1 - b2)
k1 = [a2 - a1 + (dx2/dy2)(b1 - b2)] / (dx1 - [dx2.dy1/dy2])
Looks complex, but all those quantities are known at the start, so you can calculate k1 (unless dy2 is 0, in which case special-case it, or the denominator is 0, in which case the lines are parallel and there is no intersection).
Then you can use the k1 to calculate k2 from either of the last two equations, and then use k1 and k2 to find the intersection point. Note that if k1 or k2 is < 0 the intersection is 'before' the 'beginning' of the respective line; you may want to count such cases as failures, depending on the problem (and similarly for k > 1).
(NB I didn't check that maths, there may be a mistake, but the principle is sound.)





Thanks for the reply. I have solved this already now though.
I did just want 2D intersection, which is why I said the Z axis is not important; I can take a middle-ground value for the resulting Z coordinate. The part I was really trying to find out (as it turns out) was how to determine the line equations from the data I had.
For this I used cosine and sine to calculate a "second point" for each "line". From this I could then calculate the slope (m) and y-intercept (b) for my line equations. Then I solve the equations to get the intersection point. Simple when you know how
I may or may not be responsible for my own actions




