

Apologies for the shouting but this is important.
When answering a question please:
- Read the question carefully.
- Understand that English isn't everyone's first language, so be lenient of bad spelling and grammar.
- If a question is poorly phrased then either ask for clarification, ignore it, or mark it down. Insults are not welcome.
- If the question is inappropriate then click the 'vote to remove message' button.
Insults, slapdowns and sarcasm aren't welcome. Let's work to help developers, not make them feel stupid.
cheers,
Chris Maunder
The Code Project Cofounder
Microsoft C++ MVP





For those new to message boards, please try to follow a few simple rules when posting your question.
- Choose the correct forum for your message. Posting a VB.NET question in the C++ forum will end in tears.
- Be specific! Don't ask "can someone send me the code to create an application that does 'X'". Pinpoint exactly what it is you need help with.
- Keep the subject line brief, but descriptive, e.g. "File Serialization problem".
- Keep the question as brief as possible. If you have to include code, include the smallest snippet of code you can.
- Be careful when including code that you haven't made a typo. Typing mistakes can become the focal point instead of the actual question you asked.
- Do not remove or empty a message if others have replied. Keep the thread intact and available for others to search and read. If your problem was answered, edit your message and add "[Solved]" to the subject line of the original post, and cast an approval vote for the answer or answers that really helped you.
- If you are posting source code with your question, place it inside <pre></pre> tags. We advise you to also tick the "Encode "<" (and other HTML) characters when pasting" checkbox before pasting anything inside the PRE block, and to make sure the "Use HTML in this post" checkbox is checked.
- Be courteous and DON'T SHOUT. Everyone here helps because they enjoy helping others, not because it's their job.
- Please do not post links to your question in an unrelated forum such as the Lounge. It will be deleted. Likewise, do not post the same question in more than one forum.
- Do not be abusive, offensive, inappropriate or harass anyone on the boards. Doing so will get you kicked off and banned. Play nice.
- If you have a school or university assignment, assume that your teacher or lecturer is also reading these forums.
- No advertising or soliciting.
- We reserve the right to move your posts to a more appropriate forum or to delete anything deemed inappropriate or illegal.
cheers,
Chris Maunder
The Code Project Cofounder
Microsoft C++ MVP





Hello,
For my new project, I have to develop an algorithm called frequency sweep.
The function parameters are nFreqStart, nFreqEnd and dSweepTime.
As an example: for nFreqStart = 800 Hz the period T of the signal is 1.25 ms, and for nFreqEnd = 1600 Hz, T is 0.625 ms. The sum of the periods of all frequencies from nFreqStart to nFreqEnd must not be higher than dSweepTime.
My first version of this function is:
uint16_t nFrequencies;

void sweep_initialize(const uint16_t nFreqStart, const uint16_t nFreqEnd, const double dSweepTime)
{
    double dTotalTime = 0.0;
    double dIncrement = 0.0;
    uint16_t nStep = 0;
    double dTmpFreq = 0.0;
    bool bSweep = true;
    double dInitialTime = (1.0 / nFreqStart) + (1.0 / nFreqEnd);
    nFrequencies = 2;
    while (bSweep == true)
    {
        dTotalTime = dInitialTime;
        dIncrement = (double)(nFreqEnd - nFreqStart);
        dIncrement /= (double)(nFrequencies);
        dTmpFreq = (double)nFreqStart;
        nStep = nFrequencies - 1;
        for (uint16_t i = 0; i < nStep; i++)
        {
            dTmpFreq += dIncrement;
            dTotalTime += 1.0 / dTmpFreq;
        }
        nFrequencies += 1;
        if (dTotalTime >= dSweepTime)
        {
            bSweep = false;
            break;
        }
    }
}
The next thing is, I need a counter (CPU clocks) for each frequency:
uint16_t* pSweep_Array;

void sweep_getCounter()
{
    if (pSweep_Array != NULL)
    {
        free(pSweep_Array);
        pSweep_Array = NULL;
    }
    pSweep_Array = (uint16_t*)malloc(sizeof(uint16_t) * nFrequencies);
    if (pSweep_Array == NULL)
        return; /* allocation failed */
    memset(pSweep_Array, 0, sizeof(uint16_t) * nFrequencies);
    double dFrequency = (double)nFreqStart;
    double dIncrement = (double)(nFreqEnd - nFreqStart);
    dIncrement /= (double)(nFrequencies - 1);
    double dCounter = 0.0;
    for (uint16_t i = 0; i < nFrequencies; i++)
    {
        dCounter = 65536 - (F_CPU / dFrequency);
        pSweep_Array[i] = (uint16_t)round(dCounter);
        dFrequency += dIncrement;
    }
}
But as a result, the computational complexity is O((n-1)! * n^2), where n is the number of frequencies between nFreqStart and nFreqEnd, including nFreqStart and nFreqEnd.
I need help finding a less complex solution (function).
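One way to cut the cost (a sketch of my own suggestion, not code from the post): the total sweep time grows monotonically with the number of frequencies, so instead of re-summing from scratch for every candidate count, you can evaluate the O(n) sum only O(log n) times with an exponential-plus-binary search. `total_time` and `sweep_count` are invented names; the function assumes nFreqEnd > nFreqStart.

```c
#include <stdint.h>

/* Sum of the periods (1/f) of n linearly spaced frequencies from f0 to f1.
   O(n) per evaluation; monotonically increasing in n for f1 > f0 > 0. */
static double total_time(uint16_t f0, uint16_t f1, uint32_t n)
{
    double dStep = (double)(f1 - f0) / (double)(n - 1);
    double dSum = 0.0;
    for (uint32_t i = 0; i < n; i++)
        dSum += 1.0 / ((double)f0 + dStep * (double)i);
    return dSum;
}

/* Smallest n >= 2 with total_time(n) >= dSweepTime. */
uint32_t sweep_count(uint16_t f0, uint16_t f1, double dSweepTime)
{
    uint32_t lo = 2, hi = 2;
    while (total_time(f0, f1, hi) < dSweepTime)
        hi *= 2;                        /* exponential search for an upper bound */
    while (lo < hi)                     /* binary search for the smallest valid n */
    {
        uint32_t mid = lo + (hi - lo) / 2;
        if (total_time(f0, f1, mid) < dSweepTime)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}
```

The array of counter values can then be filled in a single O(n) pass, exactly as in sweep_getCounter above.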





Hello, I have to write a term paper on the subject of Root sorting. Google doesn't give me anything useful, so I'm here to ask someone who understands data structures and algorithms to give me some pointers on the subject. As far as I can tell, it has something to do with sorting a binary tree and using the root of a binary tree. Can you please give me some insight into which algorithms I should look for, or which book I should read?





Is that really what they said? It doesn't look like it's a thing.





I am doing an introductory course on algorithms. I've come across this problem which I'm unsure about. I would like to know which of the two is dominant:
f(n) = 10 log n or g(n) = log(n^2)
Given the definitions of Ω, Θ and O,
I assumed f(n), so f(n) = Ω(g(n)).
The reason is that log(n^2) is equivalent to 2 log n, so 10 log n is dominant.
Could somebody verify this or offer an alternative?





Why don't you draw the graphs and see which one stays closer to the bottom? These notations are essentially a grading system for algorithms, used to classify them by complexity and work out which is better. I personally use only Big O notation (rather than lower bounds, tight bounds and so on) and just draw the graph. Go here and try it: https://www.desmos.com/calculator[^]. The faster a function grows with each input, the more time and space it consumes (time and space may depend on the data source or the algorithm being used). That is why a better algorithm is one that takes less time. In my opinion, log(n^2) is the better function; look at the slopes each of them makes.
Secondly, Quote: which of the 2 are dominant. What do you mean by dominant? The better algorithm is the one with the lower slope; the one with the higher slope is the worse algorithm (depending on the data source).
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~





Thanks for the link.
I assumed that a "dominant" function is equivalent to f = Ω(g), where f is 10 log n.
So we are in agreement, since conversely g = O(f), where g is log(n^2). I was simply attempting to use a mathematical approach to prove or disprove it.





Hello all,
I found this task:
"Given an undirected graph G = (V,E) and an integer k, the cycle-elimination problem is to decide if all the cycles of the graph can be eliminated by deleting at most k edges from the graph. Either show that the problem is NP-complete, or describe a polynomial-time algorithm for it."
Do you have any ideas for the solution?
Should I reduce the task to the feedback arc set problem?
Thank you very much.
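One observation worth checking (mine, not from the task statement): in an undirected graph, all cycles are gone exactly when what remains is a forest, and the largest acyclic subgraph on the same vertex set is a spanning forest with |V| - c edges (c = number of connected components). So the minimum number of deletions is |E| - |V| + c, computable in near-linear time with union-find; no NP-completeness argument is needed. A sketch:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Find with path halving. */
static int find_root(int *parent, int x)
{
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

/* n vertices 0..n-1, m edges given as pairs in edges[2*m].
   Returns true iff at most k edge deletions remove every cycle:
   a spanning forest keeps n - components edges, the rest must go. */
bool can_eliminate_cycles(int n, int m, const int *edges, int k)
{
    int *parent = malloc(sizeof(int) * (size_t)n);
    for (int i = 0; i < n; i++)
        parent[i] = i;
    int components = n;
    for (int i = 0; i < m; i++) {
        int a = find_root(parent, edges[2 * i]);
        int b = find_root(parent, edges[2 * i + 1]);
        if (a != b) {
            parent[a] = b;              /* union: merge two components */
            components--;
        }
    }
    free(parent);
    return m - (n - components) <= k;
}
```

(The feedback arc set problem is about directed graphs, where the question really is NP-complete; the undirected version above is the easy case.)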






If I want to compute/calculate A*B, then in the Russian peasant algorithm:
Declare and define a new helper variable C, and P for the product, initialized to 0, i.e. all bits are initialized to 0.
Now:
1. Copy A to C.
2. Select one random set bit of B (a bit whose value/state is 1) and left-shift C by the number of bits to the right of the selected bit. For example, if B = 0000100, then C should be left-shifted 2 bits, because there are 2 bits to the right of the single 1 bit of B.
The new bits of C after the left shift are set to 0.
3. Add C to P.
4. Clear the selected bit to 0. If B becomes 0, i.e. all bits are 0, then P is the product of A and B, i.e. P = A*B; otherwise repeat all the steps, i.e. redo steps 1, 2, 3, 4 and so on.
Is that correct/right? If yes, then why are CPUs/ALUs designed to use Booth's algorithm instead of the Russian peasant algorithm? In some cases the Russian peasant algorithm can be much easier to perform than Booth's algorithm, namely when most bits of the multiplier B are 0 and only a few are 1.
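The procedure described above can be sketched in C like this (my code, not the poster's; it walks the set bits of B from low to high rather than picking them at random):

```c
#include <stdint.h>

/* Shift-and-add ("Russian peasant") multiplication: for every set bit
   at position i of b, "C" is a shifted left by i and added into P. */
uint64_t rpm_multiply(uint32_t a, uint32_t b)
{
    uint64_t p = 0;                     /* P: the product, all bits 0 */
    int shift = 0;
    while (b != 0) {
        if (b & 1u)
            p += (uint64_t)a << shift;  /* steps 1-3: C = A << i, P += C */
        b >>= 1;                        /* step 4: clear the bit, continue */
        shift++;
    }
    return p;
}
```

Note the number of iterations depends on the position of B's highest set bit, which is exactly the variable latency discussed in the reply below.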





That is correct (though usually you wouldn't select a random 1 but a particular one, e.g. going from low to high or from high to low; you can use some additional tricks that way). Typical hardware implementations don't use this because it takes a lot of steps. They typically don't really use Booth's algorithm either, but some of its ideas are reused.
Yes, it may take very few steps, but it can also take many steps (consider -1 * -1). It's bad enough that it can take many steps, but the variation is really bad as well - it throws a wrench into the usual pipelining and forwarding tricks. I suppose you might pipeline it by unrolling the entire loop, which is not very pleasant in terms of area, and then it has multiple exits, which on top of being generally annoying means you can't do the usual thing of dispatching an instruction the cycle before its inputs will be ready and then relying on forwarding, because you don't know when the multiplication ends until it does (OK, you kind of could, but it sucks). It's horrible.
So other algorithms were invented, such as the Dadda multiplier and the Wallace tree. The idea (for both of them) is to first compute all pairwise single-bit products, then add them up using as many simple adders as possible (not full word adders, but the 3:2 compressor kind). That works out really well, because the first step (the single-bit products) is completely independent, and then you get a ton of product bits that you can reduce to fewer bits with independent full adders. It takes some number of layers of such adders, and the layers are dependent, but within a layer everything is parallel.
And then you still need a multi-bit adder at the end, where you'd use any fast adder design you like. You only need one of them, so go crazy, unlike an unrolled Russian peasant multiplication (RPM) where you'd need a metric ton of big adders (you could use ripple carry, but then it sucks). So it's smaller and faster than unrolled RPM, and pipelinable unlike iterative RPM.
That's not the end, though; for example, you can use slightly larger modules that add up 5 bits into 3 in a particular way and build the same sort of layered tree out of those, which is a bit faster. And you can shuffle the connections of those modules around a bit so that you connect slow bits (the modules don't calculate their outputs all at the same time) to "fast inputs" (the delay through a module is not the same for every input) and vice versa, to compensate the timings.
Now Booth's encoding, specifically the modified Booth encoding of radix 4 (if you go higher the circuits become annoying, not worth it), can be used to improve that even more, because you get fewer partial products.
Anyway, on a typical modern processor, multiplication takes maybe 3 to 5 times as long as an addition. You can't match that with RPM unless there are very few bits set in the multiplicand; on average it would lose by a huge margin.
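The 3:2 compressor mentioned above can be sketched as a bitwise carry-save adder (a minimal illustration of mine, not production hardware description): a row of full adders reduces three addends to two, a sum word and a carry word, with no carry propagation between bit positions. That is the building block Wallace and Dadda trees layer up.

```c
#include <stdint.h>

/* 3:2 compressor (carry-save adder) applied bitwise: reduces three
   addends to a sum word and a carry word. Each bit position is an
   independent full adder, so there is no carry-propagation delay. */
void csa_3to2(uint32_t a, uint32_t b, uint32_t c,
              uint32_t *sum, uint32_t *carry)
{
    *sum   = a ^ b ^ c;                           /* per-bit sum */
    *carry = ((a & b) | (a & c) | (b & c)) << 1;  /* per-bit carry, shifted */
}
```

Adding `sum` and `carry` with one ordinary adder then yields a + b + c (mod 2^32), which is why only a single fast multi-bit adder is needed at the very end of the tree.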





Thanks for the quick and detailed answer. Now I am satisfied.





I am creating a DOORS database script that will be used to get all differences between 2 versions of a particular DOORS table (new vs old). I want to get all differences between the 2 tables (additions, deletions and changes). For anyone who doesn't know about DOORS, when an object is removed from a DB, the program marks the item's attribute as deleted, but you can still see it when you loop through all items. What I want to do is create a function that will display any items that have been changed since the older version.
Here is the algorithm I'm using:
for each newItem in newVers {
    if (newItem.deleted) {
        // check whether it was already deleted in oldVers
        if (!oldItem.deleted) { /* report as newly deleted */ }
    } else {
        if (newItem.id exists in oldVers) {
            if (newItem != oldItem) { /* report as changed */ }
        } else {
            /* report as added */
        }
    }
}
Does this look like the way I should do this, or is there a better way? Just to reiterate, I am only looking for what has changed in the newer version since the older one. I am not worried about changes in the opposite direction.
Any ideas would be appreciated.
Chris
modified 26-Sep-16 9:02am.





Your approach is accurate, but could be slow. I'm guessing that most of the items are unchanged, so
if (newItem.id exists in oldVers) could be an expensive operation, particularly if id is not indexed.
If your database supports fast sequential access on that field, then there are algorithms that "walk" the old and new, much the way text editors do file differencing.
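The "walk both versions" idea can be sketched like this (my sketch with invented field names; a real DOORS comparison would use DXL and the module's actual attributes): with both item lists sorted by id, one linear merge pass classifies every item in the new version without any per-item lookups.

```c
/* Simplified stand-ins for DOORS objects. */
typedef struct { int id; int rev; int deleted; } Item;
typedef struct { int added, deleted, changed; } DiffCounts;

/* Merge-walk diff over two id-sorted arrays: O(n_old + n_new). */
DiffCounts diff_versions(const Item *oldv, int n_old,
                         const Item *newv, int n_new)
{
    DiffCounts d = {0, 0, 0};
    int i = 0, j = 0;
    while (j < n_new) {
        if (i >= n_old || newv[j].id < oldv[i].id) {
            d.added++;                  /* id only in the new version */
            j++;
        } else if (oldv[i].id < newv[j].id) {
            i++;                        /* id only in the old version: ignored */
        } else {
            if (newv[j].deleted && !oldv[i].deleted)
                d.deleted++;            /* newly marked as deleted */
            else if (!newv[j].deleted && newv[j].rev != oldv[i].rev)
                d.changed++;            /* same id, attributes changed */
            i++;
            j++;
        }
    }
    return d;
}
```

If the data cannot be read in sorted order, building a hash table of old items keyed by id gives the same effect at the cost of extra memory.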
Cheers,
Peter
Software rusts. - Simon Stephenson, ca 1994. So does this signature. - me, 2012





This is part of my last post here, but I've made the question much smaller.
I'm trying to write a function that will compress items measured in inches.
So say I have 10 of these:
Length = 72
Width = 4
Height = 3
And I package 6 of them into this, leaving the other 4 for another package:
Length = 72
Width = 24
Height = 3
I want to compress them further by decreasing the width and increasing the height, like this; I took 2 of them and stacked them taller, increasing the height:
Length = 72
Width = 12
Height = 6
I have this, which works in one pass, but I'm trying to figure out a way to make multiple passes until there are no more passes to make. I can't seem to come up with a method to do this.
If (item.Width > item.Height) Then
    If (item.Qty > 2) Then
        item.Width = p.Width * 3
        item.Height += p.Height
    End If
End If





Debug the program.
I believe the line below is why it works in only one pass:
If (item.Width > item.Height) Then





Well... I'm pretty sure you have to rethink the concept of your program.
Imagine you have a box (container):
(ASCII sketch of a box: Length = 72, Width = 24, Height = 3)
In that container there are 6 smaller boxes (containers of 72x4x3). You want to repack these boxes into a container with changed width and height:
(ASCII sketch of a box: Length = 72, Width = 12, Height = 6)
Assuming you want to find out whether it is possible to repack these inner containers, your algorithm has to check that:
- no dimension of an inner container exceeds the corresponding size of the outer container,
- the occupied volume inside the box (the sum of the inner-container volumes) is smaller than the outer container's volume.
As long as the above conditions are met, you can add another box (inner container)!
Note that:
Volume (of box) = Width * Length * Height
Occupied volume = sum of inner-container volumes.
And a final note: do not change the size of the outer box; create a new one and repack all the boxes.
Try!
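The two checks above can be sketched like this (my own types and names; both checks are necessary conditions only, so passing them does not guarantee that a physical arrangement exists, and item orientation/rotation is ignored):

```c
#include <stdbool.h>

/* Axis-aligned box dimensions in inches. */
typedef struct { double L, W, H; } Box;

/* True iff `count` copies of `inner` pass both feasibility checks
   against `outer`: no dimension exceeds the outer box, and the total
   inner volume does not exceed the outer volume. */
bool may_fit(Box outer, Box inner, int count)
{
    if (inner.L > outer.L || inner.W > outer.W || inner.H > outer.H)
        return false;
    double outer_vol = outer.L * outer.W * outer.H;
    double inner_vol = inner.L * inner.W * inner.H;
    return (double)count * inner_vol <= outer_vol;
}
```

With the example dimensions from the question, six 72x4x3 items exactly fill a 72x24x3 container, and a seventh fails the volume check.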





I thought about that.
It would be easier to ask whether this will fit in the space rather than try to create the space.
Not sure if I mentioned this, but the point of the program is just to get a rate from UPS or FedEx by creating virtual packages from dimensional weight and gravity weight, and to avoid large-package conditions by keeping them smaller.
Maciej Los wrote: Volume (of box) = Width * Length * Height
That's dimensional weight; yes, I could just sum up the dWeight till I hit a target.
So if I were to sum up, say,
3 x 3 x 3 = 27
and I have 10 of them, for 270,
then 3 x 3 x 30 should equal 270, and it does. Hmm...
My question was how to adjust the size of 3 x 3 x 30 to 15 x 6 x 3 and then try one more time.
I should just hire someone to write this for me.





I should just hire someone to write this for me.
Jim, never give up! There are a ton of members ready to help you (at least me).
It seems you forgot to use a loop!
If (item.Qty > 2) Then
    Do While (item.Width > item.Height)
        item.Width = p.Width * 3
        item.Height += p.Height
    Loop
End If
Am I right?
modified 28-Sep-16 2:03am.





I have no clue how to paste in my LinkedIn link; I've tried many times.
I actually have that sample code above working better now.
If you're curious about the program...
This is the code packaged in a console app in VS2015.
There is a class called sample data, where you can remove the remarks to load extra data.
In the main class, you can unmark data to load.
It outputs packages in the console. In the real program, those packages will be submitted to UPS for a rate quote, be packaged in the packages XML section, and looped around for a single multi-package quote.
The rules are: long items will be shipped separately.
Isolated items are items that will ship in the original box; slap a sticker on them.
Common items are the rest of the stuff that will be tossed in a box.
If items trigger the Large Package Indicator, then start a new package to save the customer money.
At last, see if we can further combine packages to keep the count down.
Dropbox - RTST Rev 2016092701.zip[^]
I'm not trying to get someone to do my work for me; I'm just having trouble wrapping my head around this.
I used to just add up all the mass until a package was created, then cube the mass into a square. But that failed for long items from 6 ft to 8 ft long.
And it failed for isolated items as well.
Thin items are tough; I was doing thin separately, but have now combined them into common.
[edit]
This is a clean copy of the program, which I started again today. I put the thin items back in and will try to streamline the packaging; see the Rev5 class comments.
Dropbox - RTSR_Console 3.zip[^]
modified 28-Sep-16 2:03am.






Thank you for looking at it. I'm working on it today with some new ideas.





After writing it about 12 times, I came up with this. It works pretty well, but could be better.
There's a routine called combine_virtual_packages_common that could use better refining.
I was thinking that perhaps I should treat a batch of items like water: take a box that is 24x17x14, treat the item line as a water level, and fill the box, say, 1 or 2 inches at a time.
Dropbox - RTSR Console 2016093001.zip[^]





Sorry, Jim, I've been a bit busy...
I've downloaded the latest version of RTSR Console. As it turned out, I can't install .NET Framework 4.6.1 on my laptop (Win10). After that, I tried to find the combine_virtual_packages_common function, but I failed; I found only combine_virtual_Common.
Can you accept my invitation on LinkedIn? Then we'll be able to continue the discussion in private.
Cheers
Maciej




