And something tells me the accountants wouldn't stand for doing calculus. I think the problem would be a bit easier if you restate it. Here's how I'd write it:
Company A manages $2,000,000 of equipment. 10% of that is owned by company B and the rest by company A.
Company B manages $1,000,000 of equipment. 25% of that is owned by company A and the rest by company B.
What is the total value of equipment owned by each company?
Stating it that way makes the following code seem to be a straightforward way to answer:
equipValueA = 2000000
equipValueB = 1000000
equipAOwnedByB = 0.1
equipBOwnedByA = 0.25
totalValueA = equipValueA * (1 - equipAOwnedByB) + equipValueB * equipBOwnedByA
totalValueB = equipValueB * (1 - equipBOwnedByA) + equipValueA * equipAOwnedByB
This would then generalize pretty well to a matrix equation:
T = O * E where
T is the column vector of total values
E is the column vector of equipment values, and
O is the matrix of ownership percentages: column j is the company being owned and row i is the owning company, so O[i][j] is the fraction of company j owned by company i. The sum of a column must then be 100%. (With a third company C, the values in row 3 would be how much of each company is owned by C.)
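As a minimal sketch of that matrix-vector product in Python (pure lists, no libraries; the 10%/25% figures are the ones from the restated example, and all variable names are mine):

```python
# Ownership matrix: O[i][j] = fraction of company j owned by company i.
# Each column sums to 1 (all of every company is owned by someone).
O = [[0.90, 0.25],   # row 0: fractions owned by company A
     [0.10, 0.75]]   # row 1: fractions owned by company B

# Equipment values managed by companies A and B.
E = [2_000_000, 1_000_000]

# T = O * E, written out as a plain matrix-vector product.
T = [sum(O[i][j] * E[j] for j in range(len(E))) for i in range(len(O))]
print(T)
```

This prints roughly [2050000, 950000]: company A ends up owning $2.05M of equipment and company B $0.95M, which still sums to the $3M under management.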
|
Gideon Engelberth wrote: And something tells me the accountants wouldn't stand for doing calculus. I think the problem would be a bit easier if you restate it. Here's how I'd write it:
Company A manages $2,000,000 of equipment. 10% of that is owned by company B and the rest by company A.
Company B manages $1,000,000 of equipment. 25% of that is owned by company A and the rest by company B.
Thanks for the suggestion, but are you sure that's correct? Why should Company B owning part of A reduce A's assets? B isn't buying the assets themselves; it's buying part of the company that owns the assets. For example, if Google gave me 10% ownership of their company, that wouldn't reduce the value of Google's assets, would it? They would still own as many servers etc. as they did before.
Maybe I chose a bad illustration for this type of problem. What I'm really trying to figure out is how to program something related to a vision-recognition/AI problem, where I use a probability about what one object is to influence my conclusions about another object, which in turn influences the first in a circular way, as in my example. I was just using the example to get an idea of how to approach this type of circular problem.
Thanks,
Mike
|
Company A owns 25% of company B and 90% of itself: 0.25 * 1M + 0.90 * 2M = 2.05M
Company B owns 10% of company A and 75% of itself: 0.10 * 2M + 0.75 * 1M = 0.95M
Which sums to 3M.
Everything else would be financial math.
Greets
Matthias
|
Such questions or situations never exist, though they seem to.
The way you have suggested it, it will never reach an equilibrium. In reality, A can only own a maximum of 100% of its own company. If 10% is owned by B, A loses that 10% automatically. If you give me 10 USD, I am richer by 10 USD and you are poorer by 10 USD. It cannot happen that you keep your amount and I gain it too.
Once you accept this fact, it brings us to a different kind of situation. What you are suggesting here is that every time the value of A changes, the value of B changes as well, and the same happens in reverse for B.
In any such example, there will always be one finite value that makes the transition from object1 to object2. It is up to you to identify it.
Do you have any more such examples? Probably it's just an incorrect assumption.
|
Dear all,
I am a newbie here; this is my first post. I have looked over a few algorithms and searched this board for my question, but have not been able to narrow it down to a clear answer.
I am looking for any pointers/code/advice to help decide how to solve the following problem.
I have to write a program to suggest the starting time of a job. There is 1 CPU, and there are n jobs already scheduled on it and we know their run times. I can assume a run time (maybe a median of the known run times or something) for this new job. The constraint being at no time should there be more than m jobs running in parallel.
I think this problem is relatively simple. For a hammer-and-tongs solution, I have implemented a quicksort-like binary-cleaving program. It starts with a random time of day, say 13:00, and checks whether one more process can be started at that time: it takes the "assumed" run time of the new process and checks whether the number of jobs would exceed m at any point during that run time. If so, success! Otherwise, I cleave the two sides of the chosen time, 00:00-12:59 and 13:01-23:59 (of course I can add the assumed run time to 13:01 to make a better choice of window/range), take the midway time of a side, check there, and so on.
Can there be a more formal solution to this problem? Like maybe based on the Knapsack algorithm or something.. any advice.
Also, what could be a good measure of the "assumed" run time? I'm thinking the median. I do realize that this question is a bit vague.
Thanks in advance,
-J
|
Hi,
I think I would go for an observer program that checks whether one more job can be started; if so, it would start one.
If you want, you can add an importance-weighting scheme based on the assumed run time.
I don't know if this answers your question, but this is the direction I would tend to go.
Cheers
You have the thought that modern physics just relies on assumptions, which somehow depend on the smile of a cat that isn't there. (Albert Einstein)
|
thebiggestbangtheory wrote: Also, what could be a good measure of the "assumed" run time, I'm thinking median.
I'd go for a progress indication displayed as "% work done", instead of a calculated guess based on previous run times.
You could still calculate the time required to finish the job based on the percentage feedback you get.
I are Troll
|
Why not just use a simple "greedy" approach, like an operating system: start jobs as they come in until m jobs are running, then when one finishes, start another?
A few nuances:
1. If a job can start other jobs before it can complete, you risk a deadlock situation where two jobs are each waiting for the other to complete before they can start other jobs.
If you know in advance how many other jobs a job can start, a conservative approach is the Banker's Algorithm: If a job will start k other jobs, don't start it if more than m - (k + 1) jobs are currently running. This will guarantee you have the resources needed to finish the job.
2. Starting short jobs before longer jobs will increase throughput. (But if you give shorter jobs too much priority, some long jobs may never run. This was observed in some early operating systems.)
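The original question ("when is the earliest time this new job can start?") can also be answered offline with a sweep over the already-scheduled intervals instead of binary cleaving. A sketch in Python; the interval list, the limit m, and the assumed run time are the inputs, and all names are mine:

```python
def earliest_start(jobs, m, run_time):
    """jobs: list of (start, end) half-open intervals already on the CPU.
    Return the earliest t >= 0 at which a new job of length run_time fits
    without ever exceeding m jobs running in parallel."""
    # Concurrency only drops when an existing job ends, so it is enough
    # to try t = 0 and each job's end time as a candidate start.
    candidates = sorted({0.0} | {end for _, end in jobs})
    for t in candidates:
        lo, hi = t, t + run_time
        # Sweep the existing jobs clipped to the window [lo, hi).
        events = []
        for s, e in jobs:
            if e > lo and s < hi:  # job overlaps the window
                events.append((max(s, lo), 1))   # +1 when it starts
                events.append((min(e, hi), -1))  # -1 when it ends
        events.sort()  # at equal times, -1 sorts before +1 (half-open)
        running = peak = 0
        for _, delta in events:
            running += delta
            peak = max(peak, running)
        if peak + 1 <= m:  # +1 for the new job itself
            return t
    return None  # only reachable if m < 1

t = earliest_start([(0, 10), (0, 10)], 2, 5)
print(t)  # 10: both slots are busy until 10, so the new job waits
```

This is O(c * n log n) for c candidates, which is plenty for a schedule of modest size and, unlike the random-probe approach, is guaranteed to find the earliest feasible time.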
|
For shopping cards I want to generate a unique serial number for every card (an unbounded number of cards). What is the solution? Please help.
|
Do they need to have an order, or must they be "random"?
Do they have a size limit?
One possible solution is to use a GUID.
When a GUID is not possible, I generally use a long value obtained from DateTime.Now.Ticks (or incremented by a random amount if two calls land on the same tick).
The class code follows, but you only need to call:
TicksOrIncrement.GetNext() to use it.
using System;

namespace Pfz.Threading
{
    public static class TicksOrIncrement
    {
        private static readonly object fLock = new object();
        private static long fLastValue = DateTime.Now.Ticks;
        private static readonly Random fRandom = new Random();

        public static long GetNext()
        {
            long ticks = DateTime.Now.Ticks;
            lock(fLock)
            {
                // If the clock has not advanced past the last value we
                // handed out, step past it (plus a small random gap).
                if (ticks <= fLastValue)
                    ticks = fLastValue + 1 + fRandom.Next(1000);
                fLastValue = ticks;
            }
            return ticks;
        }
    }
}
|
Yes, the length should be 14, and I want random values because with a sequence there is a risk of pattern matching.
|
Since you want randomness and no duplicates, do the following:
1. Generate a list of all consecutive numbers m to n (where m is a 15-digit number; you may want to skip any number with duplicate adjacent digits, e.g. 4672950748223546). Set Count to n - m - 1.
2. Using any "good" RNG (which may still produce duplicates), generate a random number x between 0 and Count, and use x as an index to select a number from the m-to-n list.
3. Save the selected number in the serial-number list, then remove it from the m-to-n list and decrement Count.
4. Repeat n - m - 1 times.
Caution: this can be very slow if you have a massive list (the serial-number list grows linearly - write each generated number to a file - but shrinking the massive list is slow). If you are interested, I can dig out a fast C implementation with multiple arrays that I used for randomly selecting words from a dictionary (where the list held file offsets to the dictionary words).
Dave.
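The select-and-remove scheme above guarantees uniqueness, but materializing every candidate in a 14-digit range is infeasible. A common alternative, sketched below with hypothetical names of my own, is to draw random numbers and reject the rare duplicate with a set; this stays fast because collisions are astronomically unlikely at this scale:

```python
import secrets

def make_serials(count, digits=14):
    """Generate `count` unique random serial numbers, each `digits` long."""
    low, high = 10 ** (digits - 1), 10 ** digits  # keep a fixed length
    seen = set()
    while len(seen) < count:
        n = secrets.randbelow(high - low) + low
        seen.add(n)  # the set silently rejects the (rare) duplicate
    return sorted(seen)

serials = make_serials(1000)
```

`secrets` uses the OS cryptographic RNG, so the serials are also hard to pattern-match, which was the stated concern. Previously issued serials would of course have to be persisted and loaded into `seen` between runs.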
|
GUIDs are rather long (128 bits - 32 hex digits). If one had a good 40-bit random number generator, one could generate roughly 2^17 (130,000) items without the risk of even a single collision becoming significant. The output from a 40-bit generator could be converted to a 12-digit number, and possibly merged with a header and/or checksum to yield a 16-digit card number (a common size).
BTW, does anyone know whether putting a GUID through MD5 or a similar hash function and taking the first few bytes of the result would be a good way to generate 'random' numbers of e.g. 32-96 bits that don't duplicate more often than purely random numbers would?
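The "2^17 items from a 40-bit generator" figure can be checked with the standard birthday approximation P(collision) ≈ 1 − e^(−n²/2N). A quick sketch:

```python
import math

def collision_probability(n, bits):
    """Birthday approximation for the chance of at least one
    collision among n random draws from a 2**bits space."""
    space = 2 ** bits
    return 1 - math.exp(-n * n / (2 * space))

p = collision_probability(130_000, 40)
print(f"{p:.4f}")  # prints 0.0077, i.e. under 1%
```

So about a 0.8% chance of even one collision across 130,000 cards, consistent with the claim above.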
|
I have this binary heap, and I need to get the index of a particular node. I can do that now by iterating through the array one element at a time until I find it, but it's too slow. Are there any faster ways to search? What's the fastest?
Here is my code:
class NodeHeap
{
    // 1-based binary min-heap ordered by Node.Cost; slot 0 is unused.
    private Node[] m_nodeHeap = new Node[1000000];
    private uint m_lastNodeIndex;

    public long Count
    {
        get { return m_lastNodeIndex; } // nodes stored, not array capacity
    }

    public Node NodeAt(uint index)
    {
        return m_nodeHeap[index];
    }

    public void RemoveAt(uint index)
    {
        if (index > m_lastNodeIndex)
        {
            throw new ArgumentOutOfRangeException("index");
        }
        // Move the last node into the hole, then restore the heap property.
        SwapNode(ref m_nodeHeap[index], ref m_nodeHeap[m_lastNodeIndex]);
        m_nodeHeap[m_lastNodeIndex--] = null;
        SiftUp(index);
    }

    public uint IndexOf(Node n)
    {
        // Linear scan; returns 0 ("not found") since slot 0 is unused.
        for (uint i = 1; i <= m_lastNodeIndex; i++)
        {
            if (m_nodeHeap[i] == n) return i;
        }
        return 0;
    }

    public void Add(Node n)
    {
        m_nodeHeap[++m_lastNodeIndex] = n;
        SiftDown(m_lastNodeIndex);
    }

    // Moves the node at 'index' toward the leaves until the min-heap
    // property holds again. Children of i live at 2i and 2i + 1.
    public void SiftUp(uint index)
    {
        uint child = index * 2;
        if (child > m_lastNodeIndex) return;
        // Compare against the smaller of the two children, if both exist.
        if (child != m_lastNodeIndex && m_nodeHeap[child + 1].Cost < m_nodeHeap[child].Cost)
        {
            child++;
        }
        if (m_nodeHeap[child].Cost < m_nodeHeap[index].Cost)
        {
            SwapNode(ref m_nodeHeap[child], ref m_nodeHeap[index]);
            SiftUp(child);
        }
    }

    // Moves the node at 'index' toward the root; the parent of i is i / 2.
    public void SiftDown(uint index)
    {
        if (index == 1) return;
        uint parent = index / 2;
        if (m_nodeHeap[parent].Cost > m_nodeHeap[index].Cost)
        {
            SwapNode(ref m_nodeHeap[parent], ref m_nodeHeap[index]);
            SiftDown(parent);
        }
    }

    public Node RemoveFirst()
    {
        Node n = m_nodeHeap[1];
        RemoveAt(1);
        return n;
    }

    private void SwapNode(ref Node a, ref Node b)
    {
        Node temp = a;
        a = b;
        b = temp;
    }
}
|
Btw, your swap works "oddly"; I would expect to swap by index and avoid ref hacks, but w/e.
To search faster, use your heap property. Meaning that, since this looks like a min-heap: see if the value you're searching for is smaller than the current root, and if so it cannot be anywhere in that sub-heap, so return and go the other way instead (recursively). Alternatively, do the test before recursing, to save method calls.
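For reference, the pruned search described above looks like this in Python (1-based array heap; the plain integer `value` is a stand-in for the poster's `Node.Cost` comparison, and all names are mine):

```python
def heap_index_of(heap, last, value, i=1):
    """Search a 1-based min-heap array for `value`.
    A sub-heap whose root is already larger than `value` cannot
    contain it, so that whole branch is pruned.
    Returns the index of `value`, or 0 if it is not present."""
    if i > last or heap[i] > value:
        return 0  # prune: everything below heap[i] is >= heap[i] > value
    if heap[i] == value:
        return i
    left = heap_index_of(heap, last, value, 2 * i)            # left child
    return left or heap_index_of(heap, last, value, 2 * i + 1)  # right child

heap = [None, 1, 3, 2, 7, 4]  # slot 0 unused, as in the poster's class
```

Worst case this is still O(n), but on typical A* heaps the pruning cuts out most of the array; the posted linear `IndexOf` always scans everything.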
(btw, are you going to do A* with it? Heap is a good choice for A* IME)
|
harold aptroot wrote: Btw, your swap works "oddly"; I would expect to swap by index and avoid ref hacks, but w/e.
Sounds like a nice modification.
harold aptroot wrote: see if your value is smaller than the current root, and if so, return and go the other way instead (recursively)
I'm not quite sure what you mean. Are you saying bounce back and forth between the first and last, then second and first, then second and last, etc.?
harold aptroot wrote: (btw, are you going to do A* with it? Heap is a good choice for A* IME)
Yes, and right now it takes too long to find the path in a complex map, like a spiral with the end node inside of it.
|
CaptainSeeSharp wrote: Are you saying bounce back and forth between the first and last, then second and first, then second and last, etc.?
Not sure what you mean here. I mean: start at the root; if the value you're searching for is smaller than the root, then clearly the item is not in the heap - and this applies recursively to every sub-heap as well.
OTOH, I didn't need that feature (searching by value) in my A* implementation. Why do you need it?
|
harold aptroot wrote: I didn't need that feature (searching by value) in my A* implementation. Why do you need it?
I would really like to see how to make mine not need it. When mine gets the nodes around the current one, it checks whether they are on the open list (that is where IndexOf is called), and then adds whichever nodes are not on the open list.
|
But surely your heap is sorted by F cost? And if you find a node a second time, its G (and therefore its F) will probably be different, since you would have reached it from another node (the one you reached it from the first time should be in your closed list).
I used a sparse "grid of grids" of bits to test membership of the open and closed lists (the closed "list" was only that grid, with no actual list). A normal grid would be easier, but I had huge maps where paths would usually stay in roughly one area, which meant that usually only up to 4 entries of the "supergrid" were used. The whole thing was just a BitArray[,], btw. To be efficient this way, the "supertile" size should be a power of 2 and its area divisible by 32 - like 16x16 or 32x32. Testing membership of the open or closed list is then only O(1), at the expense of not even all that much memory (just 1 bit per tile plus some overhead, and usually only for a subset of the tiles).
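Reduced to its essentials, that membership test looks like the sketch below (a dense single-level grid; the sparse "grid of grids" adds a second lookup level on top of the same idea, and all names here are mine). For simplicity this uses a byte per tile; a real bit array packs 8x tighter:

```python
class OpenSet:
    """O(1) membership test for an A* open list on a W x H tile map,
    stored as one flag per tile instead of scanning a list."""
    def __init__(self, width, height):
        self.width = width
        self.bits = bytearray(width * height)  # 0 = not on the open list

    def add(self, x, y):
        self.bits[y * self.width + x] = 1

    def remove(self, x, y):
        self.bits[y * self.width + x] = 0

    def __contains__(self, xy):
        x, y = xy
        return self.bits[y * self.width + x] == 1

open_set = OpenSet(32, 32)
open_set.add(5, 7)
```

With this, the heap never needs `IndexOf` at all: the grid answers "is this tile already open?" in constant time while the heap only orders nodes by cost.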
|
I think I get what you are saying: use a grid (the same size as the map) of bits or bools, where a value of 1 or true at a particular X,Y coordinate of the BitArray[,] means that node is in the open list? I don't use a closed list, by the way, and it works just as well.
|
Hello! I have a detection problem: I need to locate multiple objects (people). I was planning to use the Viola-Jones algorithm in Lienhart's extended version (included in OpenCV). Can I do massive detection (a crowd, each individual person) with these algorithms? (I know they are mainly used for face detection.) Or does another solution exist within OpenCV?
Can anyone help me? I don't need source code, just help choosing an algorithm to do this, please!
|
There's an implementation by Irshad Ali & Mathew of people detection using Viola-Jones and tracking using a particle filter. They created a Haar cascade of heads to detect humans.
They used the particle filter implementation by Rob Hess.
All the links can be found on my webpage:
https://sites.google.com/site/learningcomputervision/particle_filter
I have corrected some bugs in Irshad's code and attached the modified files on the webpage.
Regards,
Kaushik
|
I'm a newbie and would appreciate it if someone could start me off and guide me on how to convert this algorithm to C# (a probability distribution in a wireless LAN).
Algorithm 1: x = GetLocation(n, S, X, RM)
Input:
n : number of samples from each access point.
S : measured signal-strength vectors from k access points (S = (s1, ..., sk)).
Each si, 1 ≤ i ≤ k, is a vector containing n samples from access point i.
X : radio-map locations.
RM : radio map, where RM[a][x](q) represents the pdf of the average of n signal-strength samples from access point a at location x ∈ X.
Output:
The location x ∈ X that maximizes P(x|S).
1: for i = 1..k do
2:   Avg(i) ← average(si)
3: end for
4: Max ← 0
5: for l ∈ X do
6:   P ← ∏_{i=1}^{k} ∫_{Avg(i)−0.5}^{Avg(i)+0.5} RM[i][l](q) dq
7:   if P > Max then
8:     x ← l
9:     Max ← P
10:  end if
11: end for
modified on Wednesday, November 11, 2009 1:28 PM
|
This is more a programming question than an algorithm question.
What languages do you know?
Silver member by constant and unflinching longevity.
|