Why not just use a simple "greedy" approach, like an operating system does: start jobs as they come in until m jobs are running, then start another whenever one finishes?
A few nuances:
1. If a job can start other jobs before it completes, you risk a deadlock where two jobs each wait for the other to finish before they can start their own child jobs.
If you know in advance how many other jobs a job can start, a conservative approach is the Banker's Algorithm: if a job will start k other jobs, don't start it while more than m - (k + 1) jobs are currently running. This guarantees you will have the resources needed to finish the job.
2. Starting short jobs before longer ones increases throughput. (But if you give short jobs too much priority, some long jobs may never run; this starvation was observed in some early operating systems.)
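The Banker's-style admission rule above can be sketched like this (a toy sketch; the `Scheduler` class and its names are illustrative, not from the original discussion):

```csharp
using System;

class Scheduler
{
    // m = total worker slots; running = jobs currently executing.
    private readonly int m;
    private int running;

    public Scheduler(int slots) { m = slots; }

    // A job that may spawn k child jobs is admitted only if enough
    // slots remain for it and all of its children: running + (k + 1) <= m.
    public bool TryStart(int k)
    {
        if (running + k + 1 > m)
            return false;   // refusing here avoids the deadlock scenario
        running++;
        return true;
    }

    public void Finish() { running--; }
}
```

The check is conservative: it may refuse jobs that would in fact have completed fine, but it can never admit a set of jobs whose children cannot all be scheduled.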
For shopping cards I want to generate a unique serial number for every card (a very large number of cards). What is the solution? Please help me.
Do they need to have an order, or must they be "random"?
Do they have a size limit?
One possible solution is to use a GUID.
When a GUID is not possible, I generally use a long value obtained from DateTime.Now.Ticks (or incremented randomly if two calls fall within the same tick).
The class code follows; you only need to call:
TicksOrIncrement.GetNext() to use it.
using System;

namespace Pfz.Threading
{
    public static class TicksOrIncrement
    {
        private static readonly object fLock = new object();
        private static long fLastValue = DateTime.Now.Ticks;
        private static readonly Random fRandom = new Random();

        public static long GetNext()
        {
            long ticks = DateTime.Now.Ticks;
            lock(fLock)
            {
                // If the clock has not advanced since the last call (or has
                // moved backwards), step past the last value instead.
                if (ticks <= fLastValue)
                    ticks = fLastValue + 1 + fRandom.Next(1000);
                fLastValue = ticks;
            }
            return ticks;
        }
    }
}
Yes, the length should be 14, and I want random numbers because with a sequential scheme there is a risk of pattern matching.
Since you want randomness and no duplicates, do the following. Generate a list of all consecutive numbers from m to n (where m is a 15-digit number; you may want to skip any number with duplicate adjacent digits, e.g. 4672950748223546). Set Count to n - m - 1. Using any "good" RNG (which on its own could still produce duplicates), generate a random number x between 0 and Count, and use x as an index to select a number from the m-to-n list. Save the selected number in the serial-number list, then remove it from the m-to-n list and decrement Count. Repeat n - m - 1 times. Caution: this can be very slow if you have a massive list (the serial-number list grows linearly, so write each generated number to a file, but shrinking the massive list is slow). If you are interested, I can dig out a fast C implementation with multiple arrays that I used for randomly selecting words from a dictionary (where the list was a list of file offsets to the dictionary words).
Dave.
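The selection-without-replacement scheme above can be sketched like this (names are illustrative; the sketch removes a picked element by swapping the last element into its slot, which makes each removal O(1) instead of shifting the whole list):

```csharp
using System;
using System.Collections.Generic;

class SerialPicker
{
    // Pick 'howMany' distinct values at random from the range [m, n).
    public static List<long> Pick(long m, long n, int howMany)
    {
        var pool = new List<long>();
        for (long v = m; v < n; v++)
            pool.Add(v);

        var rng = new Random();
        var picked = new List<long>();
        for (int i = 0; i < howMany && pool.Count > 0; i++)
        {
            int x = rng.Next(pool.Count);    // random index into the pool
            picked.Add(pool[x]);
            pool[x] = pool[pool.Count - 1];  // O(1) removal: swap last into x
            pool.RemoveAt(pool.Count - 1);
        }
        return picked;
    }
}
```

For 14-digit serials the pool itself is far too big to hold in memory, so in practice you would combine this idea with the file-backed approach Dave describes, or use a keyed permutation instead of an explicit list.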
GUIDs are rather long (128 bits, i.e. 32 hex digits). If one had a good 40-bit random number generator, one could generate roughly 2^17 (about 130,000) items without the risk of even a single collision becoming significant (with 2^17 items drawn from a 40-bit space, the expected number of colliding pairs is about 2^34 / 2^41 = 2^-7, under 1%). The output from a 40-bit generator could be converted to a 12-digit number, and possibly merged with a header and/or checksum to yield a 16-digit card number (a common size).
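A sketch of that construction (the "403" header is made up, and the check digit uses the standard Luhn scheme, which the post itself does not specify; rejection sampling keeps the 12-digit body unbiased):

```csharp
using System;
using System.Security.Cryptography;

class CardNumberSketch
{
    // Draw 40 random bits, keeping only values below 10^12 so the
    // body is always exactly 12 decimal digits (rejection avoids bias).
    public static long RandomBody(RandomNumberGenerator rng)
    {
        var buf = new byte[8];
        while (true)
        {
            rng.GetBytes(buf);
            long v = BitConverter.ToInt64(buf, 0) & ((1L << 40) - 1);
            if (v < 1_000_000_000_000L)
                return v;
        }
    }

    // Standard Luhn check digit over a digit string.
    public static int LuhnCheckDigit(string digits)
    {
        int sum = 0;
        bool dbl = true;  // rightmost payload digit gets doubled
        for (int i = digits.Length - 1; i >= 0; i--)
        {
            int d = digits[i] - '0';
            if (dbl) { d *= 2; if (d > 9) d -= 9; }
            sum += d;
            dbl = !dbl;
        }
        return (10 - sum % 10) % 10;
    }

    // Hypothetical 3-digit header + 12-digit body + check digit = 16 digits.
    public static string Make(RandomNumberGenerator rng)
    {
        string payload = "403" + RandomBody(rng).ToString("D12");
        return payload + LuhnCheckDigit(payload);
    }
}
```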
BTW, does anyone know whether putting a GUID through an MD5 or similar hash function and grabbing the first few bytes of the result would be a good way to generate 'random' numbers of e.g. 32-96 bits that are not supposed to duplicate more often than purely random numbers would?
I have this binary heap, and I need to get the index of a particular node. I can do that now by iterating through the array one by one until I find it, but it's too slow. Are there any faster ways to search? What's the fastest?
Here is my code:
using System;

class NodeHeap
{
    private Node[] m_nodeHeap = new Node[1000000];
    private uint m_lastNodeIndex;  // heap is 1-based; index 0 means "not found"

    public long Count
    {
        get { return m_lastNodeIndex; }  // nodes actually stored, not array capacity
    }

    public Node NodeAt(uint index)
    {
        return m_nodeHeap[index];
    }

    public void RemoveAt(uint index)
    {
        if (index > m_lastNodeIndex)
        {
            throw new ArgumentOutOfRangeException("index");
        }
        SwapNode(ref m_nodeHeap[index], ref m_nodeHeap[m_lastNodeIndex]);
        m_nodeHeap[m_lastNodeIndex--] = null;
        SiftUp(index);
    }

    public uint IndexOf(Node n)
    {
        for (uint i = 1; i <= m_lastNodeIndex; i++)
        {
            if (m_nodeHeap[i] == n) return i;
        }
        return 0;
    }

    public void Add(Node n)
    {
        m_nodeHeap[++m_lastNodeIndex] = n;
        SiftDown(m_lastNodeIndex);
    }

    // Moves a node down the heap (towards the leaves).
    public void SiftUp(uint index)
    {
        uint next = index * 2;
        if (next > m_lastNodeIndex) return;
        // Pick the smaller of the two children; otherwise a node can get
        // stuck when only the right child is smaller than the parent.
        if (next != m_lastNodeIndex && m_nodeHeap[next + 1].Cost < m_nodeHeap[next].Cost)
            next++;
        if (m_nodeHeap[next].Cost < m_nodeHeap[index].Cost)
        {
            SwapNode(ref m_nodeHeap[next], ref m_nodeHeap[index]);
            SiftUp(next);
        }
    }

    // Moves a node up the heap (towards the root).
    public void SiftDown(uint index)
    {
        if (index == 1) return;
        uint next = index / 2;
        if (m_nodeHeap[next].Cost > m_nodeHeap[index].Cost)
        {
            SwapNode(ref m_nodeHeap[next], ref m_nodeHeap[index]);
            SiftDown(next);
        }
    }

    public Node RemoveFirst()
    {
        Node n = m_nodeHeap[1];
        RemoveAt(1);
        return n;
    }

    private void SwapNode(ref Node a, ref Node b)
    {
        Node temp = a;
        a = b;
        b = temp;
    }
}
Btw, your swap works "oddly"; I would expect to swap by index and avoid ref hacks, but whatever.
To search faster, use your heap property. Meaning that, if this is a min-heap (it looks like one), check whether the value you are searching for is smaller than the node you are at; if so, that whole sub-heap cannot contain it, so return and try another branch instead (recursively). Alternatively, do the test before recursing, to save method calls.
(Btw, are you going to do A* with it? A heap is a good choice for A* IME.)
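The pruned search described above might look like this (a sketch over a plain array of costs rather than the Node class from the question, so it is self-contained; a 1-based layout is kept to match the original code):

```csharp
using System;

class HeapSearch
{
    // In a min-heap every node's cost is <= its children's costs, so
    // if the cost we want is smaller than the node we're at, the whole
    // sub-heap below it can be skipped. Returns 0 when not found.
    public static uint IndexOf(int[] heap, uint last, uint i, int cost)
    {
        if (i > last || heap[i] > cost)
            return 0;                        // prune: nothing below can match
        if (heap[i] == cost)
            return i;
        uint found = IndexOf(heap, last, i * 2, cost);      // left child
        return found != 0 ? found : IndexOf(heap, last, i * 2 + 1, cost);
    }
}
```

In the worst case this is still linear, but on typical heaps it skips large sub-heaps whenever the searched cost is small.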
harold aptroot wrote: Btw, your swap work "oddly", I would expect to swap by index and avoid ref hacks, but w/e.
Sounds like a nice modification.
harold aptroot wrote: see if your value is smaller than the current root, and if so, return and go the other way instead (recursively)
I'm not quite sure what you mean. Are you saying to bounce back and forth between the first and last, then second and first, then second and last, etc.?
harold aptroot wrote: (btw, are you going to do A* with it? Heap is a good choice for A* IME)
Yes, and right now it takes too long to find the path in a complex map, like a spiral with the end node inside of it.
CaptainSeeSharp wrote: I'm not quite sure what you mean. Are you saying bounds back and forth between the first and last, then second to first, then second to last, ect...?
Not sure what you mean here. I mean: start at the root; if the value you're searching for is smaller than the root, then clearly the item is not in the heap - and this applies recursively to every sub-heap as well.
OTOH, I didn't need that feature (searching by value) in my A* implementation. Why do you need it?
harold aptroot wrote: I didn't need that feature (searching by value) in my A* implementation. Why do you need it?
I would really like to see how to make mine not need it. When mine gets the nodes around the current one, it checks whether they are on the open list (that is where IndexOf is called), and adds whichever nodes are not on the open list.
But surely your heap is sorted by F cost? And if you find a node a second time, the G (and therefore the F) will probably be different, since you would have reached it from another node (the one you reached it from the first time should be in your closed list).
I used a sparse "grid of grids" of bits to test membership of the open and closed lists (the closed "list" was only that grid, with no actual list). A normal grid would be easier, but I had huge maps where paths would usually stay in roughly one area, which meant that usually only up to 4 entries of the "supergrid" were used. The whole thing was just a BitArray[,], by the way. The "supertile" size should be a power of 2, and its area divisible by 32, to be efficient this way, e.g. 16x16 or 32x32. Testing membership of the open or closed list is then only O(1), at the expense of not even all that much memory (just 1 bit per tile plus some overhead, and usually only for a subset of tiles).
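The simple (non-sparse) version of that idea is just one bit per map tile; a sketch (class and method names are illustrative):

```csharp
using System;
using System.Collections;

class OpenSet
{
    // O(1) open-list membership using one bit per tile. The sparse
    // "grid of grids" from the post allocates these per supertile
    // on demand instead of one array for the whole map.
    private readonly BitArray bits;
    private readonly int width;

    public OpenSet(int width, int height)
    {
        this.width = width;
        bits = new BitArray(width * height);
    }

    public bool Contains(int x, int y) => bits[y * width + x];
    public void Add(int x, int y)      => bits[y * width + x] = true;
    public void Remove(int x, int y)   => bits[y * width + x] = false;
}
```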
I think I get what you are saying: use a grid (the same size as the map) of bits or bools, where true at a particular X,Y coordinate of the BitArray[,] means the node is in the open list? I don't use a closed list, by the way, and it works just as well.
Hello! I have a detection problem: I need to locate multiple objects (people). I was planning to use the Viola-Jones algorithm in Lienhart's extended version (included in OpenCV). Can I do a massive detection (a crowd, each individual person) of objects with these algorithms? (I know they are mainly used for face detection.) Or is there another solution within OpenCV?
Can anyone help me? I don't need source code, just help choosing an algorithm to do this, please!
There's an implementation by Irshad Ali & Mathew of people detection using Viola-Jones, with tracking by a particle filter. They created a Haar cascade of heads to detect humans.
They used the particle filter implementation by Rob Hess.
All the links can be found on my webpage:
https://sites.google.com/site/learningcomputervision/particle_filter
I have corrected some bugs in Irshad's code and added the modified files as attachments on the webpage.
Regards,
Kaushik
I'm a newbie and would appreciate it if someone could start me off and guide me on how to convert this algorithm to C# (probability distribution in a wireless LAN):
Algorithm 1: x = CorrGetLocation(n, S, X, RM)
Input:
  n  : number of samples from each access point.
  S  : measured signal-strength vectors from k access points (S = (s1, ..., sk)).
       Each si, 1 <= i <= k, is a vector containing n samples from access point i.
  X  : radio-map locations.
  RM : radio map, where RM[a][x](q) represents the pdf of the average of n
       signal-strength samples from access point a at location x in X.
Output:
  The location x in X that maximizes P(x|S).
1:  for i = 1..k do
2:    Avg(i) <- average(si)
3:  end for
4:  Max <- 0
5:  for l in X do
6:    P <- product over i = 1..k of the integral of RM[i][l](q) dq,
         taken from Avg(i) - 0.5 to Avg(i) + 0.5
7:    if P > Max then
8:      x <- l
9:      Max <- P
10:   end if
11: end for
modified on Wednesday, November 11, 2009 1:28 PM
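To show the loop structure in C# (purely an illustrative sketch: it assumes the radio-map pdf is supplied as a delegate, and it approximates the unit-width integral on line 6 with a single midpoint evaluation of the pdf; none of these names come from the paper):

```csharp
using System;

class Locator
{
    // s[i] holds the n samples from access point i.
    // pdf(i, l, q) is assumed to return the radio-map pdf RM[i][l](q).
    // Returns the index of the location maximizing the product of
    // per-access-point probabilities, or -1 if all products are zero.
    public static int GetLocation(double[][] s, int locations,
                                  Func<int, int, double, double> pdf)
    {
        int k = s.Length;
        var avg = new double[k];
        for (int i = 0; i < k; i++)
        {
            double sum = 0;
            foreach (double v in s[i]) sum += v;
            avg[i] = sum / s[i].Length;   // lines 1-3: per-AP average
        }

        int best = -1;
        double max = 0;
        for (int l = 0; l < locations; l++)      // line 5
        {
            double p = 1.0;
            for (int i = 0; i < k; i++)
                p *= pdf(i, l, avg[i]);   // midpoint approx of line 6's integral
            if (p > max) { max = p; best = l; }  // lines 7-10
        }
        return best;
    }
}
```

A real implementation would integrate the pdf properly (e.g. from a stored histogram) and work in log-probabilities to avoid underflow when k is large.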
This is more a programming question than an algorithm question.
What languages do you know?
Silver member by constant and unflinching longevity.
Just a brief question: in a little demo I've got going I can have up to 300 objects floating around the screen, and checking every object against every other object is painfully slow.
Using a quad tree to split the screen up and group the objects greatly speeds up collision checking, but unfortunately rebuilding the quad tree every frame eats up a lot of the performance you gain from using it.
Does anybody have any ideas on what I could do, bearing in mind that all 300 objects are constantly moving and are all fairly close together, so updating only objects that have moved would jiggle the entire tree about, perhaps costing more time than a full rebuild.
My current favourite word is: Sammidge!
-SK Genius
Game Programming articles start here.
Why use a tree that is bound to become invalid all the time?
I would treat the screen as a 2D matrix (say N*N) with N around the typical screen size divided by the typical object size (and not larger than, say, 10), so each object typically spans two rows and two columns. Each screen cell could be represented by a List containing references to the objects that (partially) lie inside it, plus a timestamp.
One time step would:
- move some objects, checking them against the 2D matrix cells; this may remove them from some lists and add them to others; store the current time in the cells objects got added to.
- then, for each cell marked with the current time, check the listed objects for overlap.
Possible optimization: give each object a list of the cells it currently is in; this would speed up the removal step.
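The core of that uniform-grid broad phase can be sketched like this (a minimal sketch; the timestamp optimization and per-object cell lists from the post are left out, and all names are illustrative):

```csharp
using System;
using System.Collections.Generic;

class Grid
{
    // Each cell holds the ids of the objects overlapping it; collision
    // tests are then only done between objects that share a cell.
    private readonly List<int>[,] cells;
    private readonly int cellSize;

    public Grid(int n, int cellSize)
    {
        this.cellSize = cellSize;
        cells = new List<int>[n, n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                cells[i, j] = new List<int>();
    }

    // Register object 'id' covering the axis-aligned box (x, y, w, h).
    public void Insert(int id, int x, int y, int w, int h)
    {
        for (int cx = x / cellSize; cx <= (x + w) / cellSize; cx++)
            for (int cy = y / cellSize; cy <= (y + h) / cellSize; cy++)
                cells[cx, cy].Add(id);
    }

    // Candidate pairs: objects listed together in at least one cell.
    public HashSet<(int, int)> CandidatePairs()
    {
        var pairs = new HashSet<(int, int)>();
        foreach (var list in cells)
            for (int i = 0; i < list.Count; i++)
                for (int j = i + 1; j < list.Count; j++)
                    pairs.Add((Math.Min(list[i], list[j]),
                               Math.Max(list[i], list[j])));
        return pairs;
    }
}
```

Because the grid is flat, "rebuilding" it each frame is just clearing short lists, which is much cheaper than rebuilding a quad tree.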
Luc Pattyn
I only read code that is properly indented, and rendered in a non-proportional font; hint: use PRE tags in forum messages
Alternative solution:
restructure your "tree" and use a radial (gross) test.
Radial:
if (distSquared(a, b) - radiusSquared < 0)
{
    // the two objects overlap (within a small error)
}
(Pythagorean theorem with a bit of inside/outside-circle testing, with a small error...) It is fast.
Gross:
or iteratively restructure your list...
public static int Compare(Point p, Point q)
{
    if (p.X > q.X) { return 1; }
    else if (p.X == q.X) { return 0; }
    else { return -1; }
}
This way you would only need to "check" against neighbours in the sorted structure.
Alternatively you could use a float type (but that is basically the radial method above).
Using this method you basically only ever have a few checks.
There is one other way offhand: I like to use System.Drawing.Region (I forget the exact method name), which will give the union or difference of two "polygons" (regions); it is crazy fast, in which case you may be able to do it brute force, because it is fast. If you like this, or need more reference, send me a ping and I will scrounge up the code for you. Leaving work -- hurray!
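The radial test mentioned above, written out concretely (a sketch; the `Circle` struct is illustrative). Comparing squared distances avoids the square root entirely, which is the whole point of the trick:

```csharp
using System;

struct Circle
{
    public double X, Y, R;

    // Two circles overlap when the squared distance between their
    // centres is less than the square of the sum of their radii.
    public static bool Overlap(Circle a, Circle b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y;
        double r = a.R + b.R;
        return dx * dx + dy * dy < r * r;
    }
}
```

Note that in C# `^` is XOR, not exponentiation, so squares must be written as explicit multiplications as above.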
In the name of God,
hello.
How can we solve this recurrence to find its order?
T(n) = sqrt(n) * T(sqrt(n)) + n
order = ?
valhamdolelah.
There is no base case; what is T(1)?
Take a constant C > 0.
valhamdolelah.
Ok, then I think:
T(n) = (n * C) / e^2 + (n * ln(ln n)) / ln 2
Edit: that would be O(n log log n), I think.
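One standard way to see the O(n log log n) bound, dividing the recurrence through by n (taking the base case as a constant, as above):

```latex
\begin{align*}
T(n) &= \sqrt{n}\,T(\sqrt{n}) + n
  && \text{divide both sides by } n \\
\frac{T(n)}{n} &= \frac{T(\sqrt{n})}{\sqrt{n}} + 1
  && \text{let } S(n) = T(n)/n \\
S(n) &= S(\sqrt{n}) + 1 = S\!\left(n^{1/2^{j}}\right) + j
  && \text{after } j \text{ substitutions}
\end{align*}
```

Each substitution halves log n, so the argument reaches a constant when 2^j is about log n, i.e. after j = Theta(log log n) steps. Hence S(n) = Theta(log log n) and T(n) = n * S(n) = Theta(n log log n), matching the closed form above.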