|
Yippee!! My opinion is actually very close to yours. I just think these tables may be used at high frequency and may not be small. An application that handles 100+ kinds of messages is common, and so is a class with 100+ virtual functions. Assume the count is 128: the average cost of a sequential search is (1 + 128) / 2 = 64.5 comparisons, while a binary search needs about log2(128) = 7.
Big difference!!
|
|
|
|
|
acelandin wrote: my opinion is just as identical as yours
apparently not.
acelandin wrote: so the average time cost of ...
That is completely wrong, because it ignores reality: a non-linear search takes more code to execute and has lower "locality of reference", possibly reducing any caching advantage. That is why I suggested you write actual code and try it, rather than come up with unrealistic theories.
|
|
|
|
|
To implement what? A new way of processing messages in WndProc, as I proposed? I've already done it. I created an array whose elements are structs with two parts: the message ID and the address of the message's handler function. This array can be sorted at compile time and is static at run time, just like the traditional "switch..case..". Then, in WndProc, I use a binary search with the message ID as the key to find the handler address.
Why do you say there is a non-linear access problem? Sorry, I don't understand.
|
|
|
|
|
Note: this post contains mostly speculation and theory, and I'm not in the mood to analyze less-important things such as register access stalls, or even to benchmark anything. But really, that is your task, not mine. It's also quite early in the morning and I just woke up, so there will be mistakes/errors/clear lack of reasoning/etc - use the information in this post at your own risk.
Really?
Let's assume for a minute that all values are equally likely to occur (not true in practice).
Now consider the conditional jumps here: in the sequential version, predicting "not found" would be more than 50% accurate for all but the last two entries. In the binary search code, none of the relevant jumps can be predicted at all (every path down the "tree" is equally likely, as assumed).
How much this matters depends on your CPU architecture; on a Core 2 a misprediction costs you 15 cc's. The binary search incurs 3.5 mispredictions on average (half of its log2(128) = 7 branches), so 52.5 extra cc's due to branch misprediction. The sequential search needs only 1 mispredicted branch (the one that ends your search*), giving 15 extra cc's due to misprediction.
A 36.5 cc's difference doesn't look too bad, but that's only from branch misprediction.
So let's look at the memory access pattern. Suppose the array is not cached (otherwise we'd have little to talk about). In the binary search way, the first couple of accesses will be a cache miss, and in addition to being a cache miss they're also bringing in a piece of data you probably won't need again soon. That sounds as bad as it is. In the sequential scan you're bringing in blocks of data you probably will need, and the CPU will notice your sequential access pattern and start prefetching the next block even before you need it.
* - this sounds lame, since it means putting your most-likely entries near the end would be better than putting them at the beginning, to make sure the "not found" prediction isn't changed to "found" by your CPU, which is trying to be clever and thereby messing up the upper bound of 1 misprediction. Whether it's better to put those at the beginning and predict "found" depends on how likely it is that the first entries are used compared to the rest. In the binary search we could unbalance the search tree to make the more likely cases faster than the unlikely cases.
And of course, benchmark!
|
|
|
|
|
What makes you think that a switch statement uses a sequential search? My compiler certainly doesn't; it seems to use a range of strategies, including binary search, depending on the values of the case statements. Check your compiler, and if it doesn't do this, get a better compiler.
Peter
"Until the invention of the computer, the machine gun was the device that enabled humans to make the most mistakes in the smallest amount of time."
|
|
|
|
|
I was wondering what the technical term is, and the best method, for solving a set of equations that each depend on the other. I'm not talking about a normal system-of-equations problem; I know how to solve those. The following is not the actual problem I am trying to solve, but it gives a simple example of the same type.
Suppose the value of 2 companies was the total value of their assets.
Company A has $2,000,000 of equipment and owns 25% of Company B.
Company B has $1,000,000 of equipment and owns 10% of Company A.
What would be the total value of each company?
Does anybody know the name for this type of problem and what is considered the best approach for these types of problems?
Thanks,
Mike
modified on Monday, December 7, 2009 9:59 PM
|
|
|
|
|
Isn't the answer just $3,000,000? That's the total of the assets. The rest of it is just haggling about what each company is worth individually. That's not asked. As for a name, how about "ugly"?
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
Tim Craig wrote: Isn't the answer just $3,000,000? That's the total of the assets.
Oops, it should have said each company. But to be honest I'm pretty sure you're wrong, because if you owned 100% of company A you would own the $2,000,000 in equipment plus 25% of company B, so its value is > $2,000,000. The same holds for company B: its value is > $1,000,000, so the total is greater than $3,000,000. The reason this isn't an absurd answer, I think, is that it is impossible to own 100% of either company, since part of each is already sold off to the other.
Tim Craig wrote: As for a name, how about "ugly"?
You got that right! And the real problem is even uglier: it's a probability calculation that involves any number of interdependent calculations. What I posted is just a simplified illustration of the type of problem I am trying to solve, so people would understand what I'm asking about. Who knows, maybe this is a new field of mathematics.
Actually since I posted the original I think I might have an idea on how to solve this but I'll put it in another post since it's kind of long.
By the way how's the robotics stuff been going (you mentioned it when you helped me with a dual camera opencv question a while back)?
Thanks,
Mike
|
|
|
|
|
Well, ultimately all that's owned is the equipment, even though it's cross-owned. What one company owns of the other can't count toward its assets, so I still maintain that their total worth is $3,000,000. Now, I'm perfectly sure that some of today's financial geniuses who just gave us the financial meltdown could come up with a way to make a large number of suckers believe the companies are worth a lot more than what they actually have.
I believe you can couch the equations as a pair of linear equations and solve the infinite recursion using the eigenvalues and eigenvectors. It becomes an infinite power of the matrices. Of course, there are probably other ways to do it as well.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
Tim Craig wrote: Well, ultimately all that's owned is the equipment, even though it's cross-owned. What one company owns of the other can't count toward its assets, so I still maintain that their total worth is $3,000,000. Now, I'm perfectly sure that some of today's financial geniuses who just gave us the financial meltdown could come up with a way to make a large number of suckers believe the companies are worth a lot more than what they actually have.
Yeah, there does seem to be a bit of a paradox here. Maybe this isn't the best example. What I was trying to figure out is how to approach circular problems like this. What I'm really trying to program is some vision recognition/AI stuff: I use the probability that some object is something to influence the interpretation of another object, which is also influencing the first object, etc. (it could involve more than 2 objects interacting in this way).
Tim Craig wrote: I believe you can couch the equations as a pair of linear equations and solve the infinite recursion using the eigenvalues and eigenvectors. It becomes an infinite power of the matrices. Of course, there are probably other ways to do it as well.
Thanks for the suggestion I will have to study that .
Thanks for your help,
Mike
|
|
|
|
|
MikeMarq wrote: What I'm really trying to program is some vision recognition/AI stuff: I use the probability that some object is something to influence the interpretation of another object, which is also influencing the first object, etc. (it could involve more than 2 objects interacting in this way).
Ah, I think what you may be looking for is Bayesian statistics, named after Bayes, of course. It's used for robot decision making and is the basis for optimal filters like the Kalman filter and for doing sensor fusion. It's used for estimating state (the prior condition) and then updating the state estimate based on new information from measurements (the posterior). This is a good, although high-level, book[^] by Sebastian Thrun, who led Stanford's efforts in the DARPA Challenges for autonomous vehicles.
If you haven't had a formal linear algebra class or are looking for a refresher, I can recommend this one[^]. The instructor is excellent and the price is right. It does take time to get through all the lectures. I think I've got 8 to go. I was working through some other robot material that became linear algebra intensive and decided I needed a refresher. Once I got into it, I realized I had never had a formal linear algebra class, only what had been introduced in my calculus classes and I picked up from numerical analysis. Linear algebra has changed greatly since then.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
Thanks once again for your help.
Yeah, thank goodness for YouTube. I've never taken linear algebra myself, but I know what you mean: you can practically take a course that way (except for there being no tests). By the way, have you ever seen this guy's playlist on just about every subject: http://www.khanacademy.org/[^]? He has a knack for explaining in 10 or 20 minutes things that would normally take someone else an hour, and his explanations are usually clearer too.
|
|
|
|
|
I didn't know about that site, thanks. It looks like quite a list of topics; I'll peruse a few in my copious spare time. How did I ever get anything done when I had to work??? His list of linear algebra topics looked awfully familiar, and then I noticed he went to MIT, so maybe he had Strang as an instructor or at least used his book.
Right now I'm fighting with OpenCV and trying to figure out why its camera calibration function isn't doing quite what I expect.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
OK, I think I might have thought of a solution. Those of you who are math aces, let me know if I am wrong or if there might be a better method.
Here's the question again:
Suppose the value of 2 companies was the total value of their assets.
Company A has $2,000,000 of equipment and owns 25% of Company B.
Company B has $1,000,000 of equipment and owns 10% of Company A.
What would be the total value of each company?
To solve for Company A:
A = 2m + 0.25(B); since B = 1m + 0.1(A), I can expand the first equation to:
A = 2m + 0.25(1m + 0.1(A)); expanded again we get:
A = 2m + 0.25(1m + 0.1(2m + 0.25(B))); expanded again we get:
A = 2m + 0.25(1m + 0.1(2m + 0.25(1m + 0.1(A)))); etc.
Since each expansion becomes less and less significant, we could eventually drop A or B and replace it with zero once this has been expanded enough times. That would get us something very close to the real answer, with the accuracy determined by how many expansions are used. Something tells me there must be a way to use real calculus to get an exact answer, but I haven't figured it out yet.
Let me know if you guys agree with this method.
Thanks,
Mike
|
|
|
|
|
And something tells me the accountants wouldn't stand for doing calculus. I think the problem would be a bit easier if you restate it. Here's how I'd write it:
Company A manages $2,000,000 of equipment. 10% of that is owned by company B and the rest by company A.
Company B manages $1,000,000 of equipment. 25% of that is owned by company A and the rest by company B.
What is the total value of equipment owned by each company?
Stating it that way makes the following code seem a straightforward way to answer:
equipValueA = 2000000
equipValueB = 1000000
equipAOwnedByB = .1
equipBOwnedByA = .25
totalValueA = equipValueA * (1 - equipAOwnedByB) + equipValueB * equipBOwnedByA
totalValueB = equipValueB * (1 - equipBOwnedByA) + equipValueA * equipAOwnedByB
This would then generalize pretty well to a matrix equation:
T = O * E where
T is the column vector of total values
E is the column vector of equipment values, and
O is the matrix of ownership percentages. The columns are the company being owned and the rows are the owning company, so the sum of each column must be 100%. (For example, the values in row 2 are how much of each company is owned by company B; a third company C would add a row 3 and a column 3.)
|
|
|
|
|
Gideon Engelberth wrote: And something tells me the accountants wouldn't stand for doing calculus. I think the problem would be a bit easier if you restate it. Here's how I'd write it:
Company A manages $2,000,000 of equipment. 10% of that is owned by company B and the rest by company A.
Company B manages $1,000,000 of equipment. 25% of that is owned by company A and the rest by company B.
Thanks for the suggestion, but are you sure that's correct? Why should company B owning part of A reduce A's assets? It isn't buying the assets themselves; it's buying part of the company that owns them. For example, if Google gave me 10% ownership of their company, that wouldn't reduce the value of Google's assets, would it? They would still own just as many servers, etc., as they did before.
Maybe I chose a bad illustration for this type of problem. What I'm really trying to figure out is how to program something related to a vision recognition/AI problem, where I use a probability about what one object is to influence my conclusions about another object, which in turn influences the first in a circular way, like in my example. I was just using the example to get an idea of how to approach this type of circular problem.
Thanks,
Mike
|
|
|
|
|
Company A owns 25% of company B and 90% of itself = 0.25 * 1M + 0.90 * 2M = 2.05 M
Company B owns 10% of company A and 75% of itself = 0.10 * 2M + 0.75 * 1M = 0.95 M
Which is in sum 3 M.
Everything else would be financial math
Greets
Matthias
|
|
|
|
|
Such situations never really exist, though they seem to.
The way you have set it up, it will never reach an equilibrium. In reality, A can only own a maximum of 100% of its own company; if 10% is owned by B, A automatically loses that 10%. If you give me 10 USD, I am richer by 10 USD and you are poorer by 10 USD; it cannot happen that you keep your amount and I gain it too.
Now, once you accept this fact, it brings us to a different kind of situation. What you are suggesting here is that every time the value of A changes, the value of B changes as well, and vice versa.
In any example, there will always be one finite value that makes the transition from object1 to object2. It is up to you to identify it.
Do you have any more such examples? Probably it's just an incorrect assumption.
|
|
|
|
|
Dear all,
I am a newbie here; this is my first post. I have looked over a few algorithms and searched this board for my question, but have not been able to narrow it down to a clear answer.
I am looking for any pointers/code/advice to help decide how to solve the following problem.
I have to write a program that suggests the starting time of a job. There is 1 CPU, and there are n jobs already scheduled on it whose run times we know. I can assume a run time for the new job (maybe the median of the known run times, or something similar). The constraint is that at no time should more than m jobs be running in parallel.
I think this problem is relatively simple. As a hammer-and-tongs solution, I have implemented a quicksort-like binary cleaving program: it starts with a random time of day, say 13:00, and checks whether one more process can be started at that time, using the "assumed" run time of the new process to check whether the number of jobs would exceed m at any point during that run time. If not, success! Otherwise, I cleave the two sides of the chosen time, 00:00-12:59 and 13:01-23:59 (of course I can add the assumed run time to 13:01 to make a better choice of window/range), take the midpoint of each, check there, and so on.
Can there be a more formal solution to this problem? Maybe something based on the knapsack algorithm? Any advice is welcome.
Also, what could be a good measure of the "assumed" run time? I'm thinking the median. I do realize this question is a bit vague.
Thanks in advance,
-J
|
|
|
|
|
Hi
I think I would go for an observer program that checks whether one more job can be started; if so, start one.
If you want, you can add an importance-weighting scheme based on the assumed run times.
I don't know if this answers your question, but this is the direction I would tend to go.
Cheers
You have the thought that modern physics just relies on assumptions, that somehow depend on a smile of a cat, which isn't there. (Albert Einstein)
|
|
|
|
|
thebiggestbangtheory wrote: Also, what could be a good measure of the "assumed" run time, I'm thinking median.
I'd go for a progress indication displayed as "% work done", instead of a calculated guess based on the previous run times.
You could still calculate the time required to finish the job from that percentage feedback.
I are Troll
|
|
|
|
|
Why not just use a simple, "greedy" approach like an operating system: start jobs as they come in until m jobs are running, and then, when one finishes, start another?
A few nuances:
1. If a job can start other jobs before it can complete, you risk a deadlock situation where two jobs are each waiting for the other to complete before they can start other jobs.
If you know in advance how many other jobs a job can start, a conservative approach is the Banker's Algorithm: If a job will start k other jobs, don't start it if more than m - (k + 1) jobs are currently running. This will guarantee you have the resources needed to finish the job.
2. Starting short jobs before longer jobs will increase throughput. (But if you give shorter jobs too much priority, some long jobs may never run. This was observed in some early operating systems.)
|
|
|
|
|
For shopping cards I want to generate a unique serial number for every card (a very large number of cards). What is the solution? Please help me.
|
|
|
|
|
Do they need to have an order, or must they be "random"?
Do they have a size limit?
One possible solution is to use a GUID.
When a GUID is not possible, I generally use a long value obtained from DateTime.Now.Ticks (incremented randomly when two calls fall within the same tick).
The class code follows, but you only need to call:
TicksOrIncrement.GetNext() to use it.
using System;

namespace Pfz.Threading
{
    public static class TicksOrIncrement
    {
        private static readonly object fLock = new object();
        private static long fLastValue = DateTime.Now.Ticks;
        private static readonly Random fRandom = new Random();

        public static long GetNext()
        {
            long ticks = DateTime.Now.Ticks;
            lock (fLock)
            {
                // If the clock has not advanced past the last value we
                // handed out, fabricate a strictly larger one.
                if (ticks <= fLastValue)
                    ticks = fLastValue + 1 + fRandom.Next(1000);
                fLastValue = ticks;
            }
            return ticks;
        }
    }
}
|
|
|
|
|
Yes, the length should be 14, and I want random values, because a sequential scheme carries the risk of pattern matching.
|
|
|
|
|