For those new to message boards, please try to follow a few simple rules when posting your question.
Choose the correct forum for your message. Posting a VB.NET question in the C++ forum will end in tears.
Be specific! Don't ask "Can someone send me the code to create an application that does X?" Pinpoint exactly what it is you need help with.
Keep the subject line brief but descriptive, e.g. "File Serialization problem".
Keep the question as brief as possible. If you have to include code, include the smallest snippet of code you can.
Be careful, when including code, that you haven't made a typo. Typing mistakes can become the focal point of the discussion instead of the question you actually asked.
Do not remove or empty a message once others have replied; keep the thread intact and available for others to search and read. If your problem was answered, edit your original post to add "[Solved]" to the subject line, and cast an approval vote for the answer or answers that really helped you.
If you are posting source code with your question, place it inside <pre></pre> tags. We advise you to also tick the "Encode "<" (and other HTML) characters when pasting" checkbox before pasting anything inside the PRE block, and make sure the "Use HTML in this post" checkbox is checked.
Be courteous and DON'T SHOUT. Everyone here helps because they enjoy helping others, not because it's their job.
Please do not post links to your question into an unrelated forum such as the lounge. It will be deleted. Likewise, do not post the same question in more than one forum.
Do not be abusive, offensive, inappropriate or harass anyone on the boards. Doing so will get you kicked off and banned. Play nice.
If you have a school or university assignment, assume that your teacher or lecturer is also reading these forums.
No advertising or soliciting.
We reserve the right to move your posts to a more appropriate forum or to delete anything deemed inappropriate or illegal.
Both have the same aspect ratio. In theory, I could calculate the square edges and build a grid of points, then interpolate each pattern pixel to its approximate destination on the mask based on its parent square's edges.
But maybe there is a more elegant way to do it, with OpenCV or some other computer vision algorithm?
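For a first pass, plain bilinear mapping per grid cell is often enough: treat each pattern square's four corner points as a quad on the mask and interpolate inside it. A minimal sketch (the function name and corner ordering are my own; if you go the OpenCV route, cv2.getPerspectiveTransform and cv2.remap do this kind of work for you):

```python
def warp_point(u, v, quad):
    """Bilinearly map a normalized coordinate (u, v) in [0, 1]^2 to a point
    inside a quad given by its corners in the order
    (top-left, top-right, bottom-right, bottom-left)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    # interpolate along the top and bottom edges, then between them
    top = (x0 + (x1 - x0) * u, y0 + (y1 - y0) * u)
    bot = (x3 + (x2 - x3) * u, y3 + (y2 - y3) * u)
    return (top[0] + (bot[0] - top[0]) * v, top[1] + (bot[1] - top[1]) * v)
```

Applied per grid cell this gives a piecewise warp of the whole pattern; a true perspective transform per cell would be slightly more accurate but is more work to set up by hand.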
I have n points in the plane and I need to find a pair of concentric circles such that all the points lie between the circles and the area between the circles is minimal.
I have an O(n^2) algorithm, but it seems possible to improve on it.
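For a fixed center the problem is trivial: the best annulus is determined by the nearest and farthest points, so an O(n^2)-style algorithm only needs to try candidate centers and evaluate each one. A sketch of that per-center evaluation (function name is mine). If I remember correctly, minimizing R^2 - r^2 over the center coordinates can even be written as a low-dimensional linear program, which would give expected linear time:

```python
from math import pi

def annulus_area(center, points):
    """Area between the two concentric circles that just contain the
    points for a fixed center: pi * (R^2 - r^2), where R and r are the
    max and min distances from the center to any point."""
    cx, cy = center
    d2 = [(x - cx) ** 2 + (y - cy) ** 2 for x, y in points]
    return pi * (max(d2) - min(d2))
```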
Hey, I'm having trouble finishing this algorithm:
A robot's engine warms up during its trip and is cooled down by a fan.
The robot is required to ensure that the engine does not overheat.
The engine's temperature versus time is given.
The temperature of the environment is Te = 15.
It is known that the rate of cooling is defined by
dT/dt = -k(T - Te) + Tm
It is also known that the speed of the robot is 25 m/s, and the starting temperature of the engine is 35.
We need to find the differential equations for the road found in the previous question, for each Δt = 0.1.
These are the answers that I don't know how to get to.
And when it's getting warmer:
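If the intended method is Euler's with Δt = 0.1, the update is just T_{n+1} = T_n + Δt · (-k(T_n - Te) + Tm). A sketch with made-up placeholder values for k and Tm, since they are not given in the visible post:

```python
def simulate_engine(T0=35.0, Te=15.0, k=0.5, Tm=0.0, dt=0.1, steps=100):
    """Forward Euler integration of dT/dt = -k*(T - Te) + Tm.
    k and Tm here are placeholders; plug in the values from the exercise."""
    T = T0
    history = [T]
    for _ in range(steps):
        T += dt * (-k * (T - Te) + Tm)
        history.append(T)
    return history
```

With Tm = 0 the temperature simply decays toward the environment temperature Te; a positive Tm (heat generated by the trip) shifts the equilibrium upward.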
An algorithm presents a method to do something which ideally works for all manner of inputs. As you know, there are lots of different sorting algorithms, but bearing in mind they are all meant to do the same thing, why so many?
It's to do with the starting condition. If a list is already sorted when you give it to a sorting algorithm, it doesn't have to do any sorting; it merely has to determine that it is already sorted and that nothing needs to be done. You can do this by making sure each item is bigger than (or equal to) the last. This is the best-case scenario, but each of the N items in the list still needs to be checked, so you end up with O(n) - this means linear performance. A list twice as long takes twice as long.
The worst possible case is when the list is sorted the wrong way around. In this case each item has to bubble up the furthest possible distance, so you end up with something like n * (n/2) required operations, which becomes O(n*n). Each operation takes time, so number of operations and time are the same thing here. Double the size and it takes 4 times as long.
For space complexity, O(1) means constant: no new space is required, as the sort happens within the data structure itself.
In your case with 25 elements, the best case means checking that those 25 elements are in order. The worst case is bubbling each element the furthest possible distance. A randomized list will be somewhere between the extremes.
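To make the best and worst cases concrete, here is a bubble sort variant with the early-exit check described above (a common formulation, not necessarily the one your course uses) that counts comparisons:

```python
def bubble_sort(items):
    """Bubble sort with an early exit: a pass with no swaps means the
    list is already sorted. Returns (sorted list, comparison count)."""
    a = list(items)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break  # best case: one O(n) pass over an already-sorted list
    return a, comparisons
```

On 25 already-sorted elements this does a single pass of 24 comparisons; on 25 reverse-sorted elements it does 24 + 23 + ... + 1 = 300, which is the n*(n-1)/2 worst case.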
f(n) ∈ O(g(n)) iff there exist n0 and c such that for all n > n0, f(n) < c·g(n)
This quite technical definition means the whole big O deal is really a statement about membership of a function in some set of functions that, informally, all "grow about the same". But they only have to start growing about the same after some point n0 (which you don't know) and an arbitrary constant is hidden.
For time complexity, the function we make statements about is the function that counts how many "elementary operations" some algorithm takes as a function of the size of its input. In order to do that, one must agree on what kinds of operations are elementary. There are several models for this; the "usual one" has the at-first-sight odd property that mathematical operations on integers of size O(log n) run in constant time. This is necessary to prevent the following problem: suppose you could only do operations on constant-length integers in constant time, and you want to manipulate an index into an array. That array has length n, so the index must have size (you guessed it) log n. It would therefore take non-constant space to even just have an index, and non-constant time to do anything with an index, which usually no one wants.
It should be clear from the n0 that big O notation explicitly says nothing about the behaviour of the function at any particular constant. It also says nothing about what happens when you double that constant, a mistake that is commonly made, leading to questions such as "it took x milliseconds for an input of size 10, how long will it take for an input of size 20" - you cannot know based on some big O.
Let us suppose a simple array contains 25 elements
So it should be clear by now that you can't say much about that situation. Bubble sort (and literally anything else) is going to take some constant number of elementary operations (aka "time") on that input, because there is no variable to vary. You cannot compute the number of operations based on the big O (but based on detailed knowledge of the algorithm, you can).
Let us suppose a simple array contains 25 elements. What does the above information mean, and how should I visualize it?
Nothing! It doesn't work that way.
Say you have just measured the runtime for 25 elements, for the best case and the worst case.
Big O notation lets you estimate what it will take for 250 elements or 2500 elements.
Best case O(n) means that 10 times the data will take 10 times the runtime, and 100 times the data will take 100 times the runtime.
Worst case O(n^2) means that 10 times the data will take 100 times the runtime, and 100 times the data will take 10000 times the runtime.
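As a back-of-the-envelope helper (keeping in mind the n0 caveat from the other answer: this assumes the asymptotic term dominates, which is not guaranteed):

```python
def extrapolate(t_measured, n_measured, n_target, exponent):
    """Rough runtime extrapolation under an assumed polynomial complexity:
    t_target ≈ t_measured * (n_target / n_measured) ** exponent.
    Use exponent = 1 for O(n), 2 for O(n^2). Hidden constants and
    lower-order terms are ignored, so treat the result as an
    order-of-magnitude guess."""
    return t_measured * (n_target / n_measured) ** exponent
```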
I am currently working on a procedurally generated game, and vaguely remember having read a book/paper/article which said there exists a published algorithm that can generate high-quality board games with a random set of rules, as good as human-designed board games. The algorithm was NOT limited to a single game such as Sudoku or crossword puzzles; rather, it would generate a set of rules for the players to play by, so it could potentially generate tic-tac-toe, checkers or four-in-a-row, and variations of all three games.
I think the article was published between the 1980s and about 2006,
and that it was published in a journal.
Sadly I cannot remember where I read the article, only that it must have been in the last 8 months. If you know about this algorithm or something similar to it, please post.
Hi, I have this algorithm; could anyone help me figure out the answer to it, and how to get there?
Create a recurrence function to represent the following algorithm's time complexity and use the master method to analyse the following algorithm:
/* The function f(x) is unimodal over the range [min, max] and can be evaluated in Θ(1) time
 * ε > 0 is the precision of the algorithm; typically ε is very small
 * max > min
 * n = (max - min)/ε, n > 0, where n is the problem size */
Algorithm(min, max):
    if ((max - min) < ε)
        return (min + max) / 2               // return the answer (the midpoint of the final interval)
    leftThird = (2 * min + max) / 3          // the point which is 1/3 of the way from min to max
    rightThird = (min + 2 * max) / 3         // the point which is 2/3 of the way from min to max
    if (f(leftThird) < f(rightThird))
        return Algorithm(leftThird, max)     // look for the answer in the interval between leftThird and max
    return Algorithm(min, rightThird)        // look for the answer in the interval between min and rightThird
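Without giving the homework away entirely: each call does a constant amount of work and recurses on an interval 2/3 the size, so the recurrence should be T(n) = T(2n/3) + Θ(1), and the master theorem (a = 1, b = 3/2, f(n) = Θ(1)) gives T(n) = Θ(log n). An iterative Python version of the same ternary search, finding the maximum of the unimodal f per the comparison direction above:

```python
def ternary_max(f, lo, hi, eps=1e-6):
    """Repeatedly discard the third of [lo, hi] that cannot contain the
    maximum of the unimodal f. Each step shrinks the interval to 2/3 of
    its size, i.e. Θ(log n) evaluations for n = (hi - lo)/eps."""
    while (hi - lo) >= eps:
        left = (2 * lo + hi) / 3
        right = (lo + 2 * hi) / 3
        if f(left) < f(right):
            lo = left    # the maximum cannot be in [lo, left]
        else:
            hi = right   # the maximum cannot be in [right, hi]
    return (lo + hi) / 2
```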
If you are familiar with idle or incremental games like Adventure Capitalist or Clicker Heroes, you know that after a while you are dealing with very large numbers. If you are not: basically, you earn points in these games which you invest so that you can earn more points even quicker, rinse and repeat.
I wrote a small game of this style myself and never left the range a double can handle, but it did get me thinking about how to deal with extraordinarily large numbers.
Just to play around, I figured a quick and easy way was to keep a list of ints, where each index represents a higher power, so you can easily go higher if needed. Index zero keeps numbers between 0 and 999, and index 1 then starts at 10^3. So index 5 would be 10^7, etc.
Other than for numbers 0-999, if you want to increase the number I have a function where you specify which digit 1-9 you want to add and at which power. If you add, say, 9·10^5 to 3·10^5, it sorts this out by becoming 2·10^5 and 1·10^6.
So far I haven't accounted for subtraction in my little project, but that should be fairly easy to do in a similar manner.
I think this shouldn't be too computationally heavy unless it's used to run additions between several large lists.
Should I want to use this in a game, I think it could also easily be adapted to use some approximation to reduce unnecessary work. For example, if you have an entity in the game which adds 10^15 points per second and one that adds 10^3, and you show 5 decimals, then instead of doing all the background computation of adding those smaller numbers every second, you could just approximate how many seconds it takes before they add enough to show, and go from there, if I've made myself clear.
public void AddNumber(int value, int power) // 10^3 = power 1, 10^4 = power 2, 10^5 = power 3, etc.
{
    if (power == 0)
    {
        NumberContainer[0] += value;
        while (NumberContainer[0] >= 1000)
        {
            NumberContainer[0] -= 1000;
            AddNumber(1, 1); // carry into the 10^3 slot
        }
    }
    else if (power < mPower)
    {
        NumberContainer[power] += value;
        while (NumberContainer[power] >= 10)
        {
            NumberContainer[power] -= 10;
            AddNumber(1, power + 1); // carry into the next power
        }
    }
    // else: power too large
}

public static NumberControl operator +(NumberControl c1, NumberControl c2)
{
    // set the new max power to the largest of the two
    int maxPower = c1.mPower;
    if (c2.mPower > maxPower) maxPower = c2.mPower;
    NumberControl nc = new NumberControl(maxPower);
    for (int i = 0; i < c1.NumberContainer.Count; i++)
        nc.AddNumber(c1.NumberContainer[i], i);
    for (int i = 0; i < c2.NumberContainer.Count; i++)
        nc.AddNumber(c2.NumberContainer[i], i);
    return nc;
}
Here is how I add numbers. Thinking about this really got me wondering about how others would choose to solve it instead.
Number container is just a list.
I'm planning to profile how well it handles a few different scenarios; for example, if you have a list that goes up to the power of 100 and, say, 80 "generators" which each add a number at a different power level every second, how will it hold up compared to approximating those that are smaller than a certain range below the biggest number?
Reading the numbers is done by specifying how many decimals I want shown and then just walking backwards through the list, adding to a string, so unless I want full precision that also works fairly well.
Maybe, but then it would need a bit more polish and I'm not sure it would fit; maybe a small blog post, carrying some test results and discussion around it.
When I started thinking about how to solve this I had a nagging suspicion that there was already something readily available, but I couldn't figure out what search terms to use to find it, so I turned it into code instead. That blog post was interesting; I admire how some people can go so in-depth into a subject.
It may be a good idea to create a struct or class (I'll call it LN) which has a private vector of integers, VEC, each in the range 0 to 9. The class has functions for the algebraic operations (+, -, *, /) that combine two LNs. These functions must:
- determine the sizes of both VECs and resize accordingly.
- iterate through both VECs and execute the operation digit by digit.
Since vectors are dynamically allocated, you have virtually no cap on how many digits the number can have (except for RAM).
You'll have to replace all score-related constant values in your code with LNs.
I'm not sure how your game mechanics are designed, but if you have many hundreds of entities which each add some constant value per second, it might be a good idea to have a single variable X which is added to the game score S every second, instead of iterating through all entities and adding each constant value to S. That is, when an entity is created, it adds its value-per-second constant E to X, and when the entity is destroyed it subtracts E from X.
In every game step you then add X to S. This therefore only requires a single addition operation per game step and of course an addition operation when creating or destroying an entity.
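A minimal sketch of that single-accumulator idea (class and method names are mine):

```python
class Economy:
    """Entities register their per-second income into one total X, and
    each game step adds X to the score S, instead of looping over every
    entity each step."""
    def __init__(self):
        self.score = 0
        self.income = 0          # X: total points added per step

    def add_entity(self, per_second):
        self.income += per_second

    def remove_entity(self, per_second):
        self.income -= per_second

    def step(self):              # called once per game second
        self.score += self.income
```

This makes the per-step cost O(1) in the number of entities; the per-entity work only happens on creation and destruction.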
Concerning the situation that might arise if you have hundreds of entities, I figured that I could just do some approximations. The most important entities would be those closest to the current largest number in the bank, so to speak.
If you have one entity that adds, say, 10^70, I could just approximate everything below, say, 10^60, calculating a single amount to add per second for all those entities instead of doing 60 different calculations.
So far I'm not working on creating a game; this kind of grew from curiosity. But I did some testing, and 100 different entities, one for each level up to 10^100, took just a few ms. No graphics or anything else, but even if I kept increasing it, it would be a while before there was a performance hit. For a mobile app it might be more important, but my home computer doesn't have a high-end CPU, so I was quite happy with the current performance.
There is a game called Clicker Heroes, and the highest I ever got in that game was in the span of 10^130.
There are so many of these games out there, and they mainly target mobile with in-game purchases that cost exorbitant amounts of money. While I see the allure of these games (where you accumulate in-game points while you are idle and can just pop in every once in a while to spend them and increase the gathering rate), I find it hard to comprehend how people can justify dropping $100 or more on in-game credits.
I'll see if I can make an implementation of your vector suggestion if I get the time; it's always interesting to compare different solutions. If I do, I'll make a comparison with one of the string-based methods too.
It sounds like you need arbitrary-precision numbers.
With C99 or C++11, you are guaranteed to have a 64-bit built-in type. If you limit yourself to "digits" containing 9 decimal digits (they fit into 32 bits), you can perform the four basic arithmetic operations with "digits" that are easily converted to decimal notation.
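A sketch of that limb representation in Python (Python's ints are already arbitrary precision, so this is purely illustrative of the mechanics you would write in C or C++):

```python
BASE = 10 ** 9  # one "digit" (limb) holds nine decimal digits, fits in 32 bits

def add_limbs(a, b):
    """Schoolbook addition on little-endian lists of base-10^9 limbs."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry
        if i < len(a):
            s += a[i]
        if i < len(b):
            s += b[i]
        result.append(s % BASE)   # keep the low nine decimal digits
        carry = s // BASE         # propagate the overflow to the next limb
    if carry:
        result.append(carry)
    return result
```

In C you would use uint64_t for the intermediate sum so that adding two 32-bit limbs plus a carry can never overflow.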
I've got this homework problem which I've tried to solve, so far without success. It goes like this:
Given k sets of integers A1, A2, ..., Ak of total size O(n), determine whether there exist a1 ∈ A1, a2 ∈ A2, ..., ak ∈ Ak such that a1 + a2 + ... + a(k-1) = ak. Your algorithm should run in Tk(n) time, where Tk(n) = O(n^(k/2) * log n) for even k, and O(n^((k+1)/2)) for odd values of k.
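Without solving the whole assignment: the bounds suggest a meet-in-the-middle approach, splitting the equation into two halves and matching them. A sketch for k = 4 (here a hash set gives expected O(n^2) lookups instead of the sort-plus-binary-search that yields the stated O(n^2 log n) bound):

```python
def exists_sum4(A1, A2, A3, A4):
    """Does a1 + a2 + a3 = a4 hold for some a1 in A1, a2 in A2,
    a3 in A3, a4 in A4?  Rearranged as a1 + a2 = a4 - a3, each side
    enumerates O(n^2) values, which are matched against each other."""
    left = {a1 + a2 for a1 in A1 for a2 in A2}
    return any((a4 - a3) in left for a3 in A3 for a4 in A4)
```

Generalizing this split to arbitrary k, and replacing the set with a sorted array plus binary search, is presumably the intended exercise.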
I have to implement 3 data compression algorithms. I was thinking about LZW, Deflate and bzip2. I can't find any good documentation for these algorithms. Do you know of any data compression algorithms with good documentation, samples, etc.?
I find that extremely hard to believe. A TON of documentation and discussion comes up with a simple Google search for "LZW compression algorithm", "Deflate compression algorithm" and "bzip2 compression algorithm".
If you were looking for copy'n'paste code, yeah, that's going to get you a failing grade in class.
Can anyone say which subjects and chapters in mathematics I need to know (thoroughly or just the basics) for learning algorithms? Can you also help me find a book on these mathematical subjects, and a good book on algorithms for a beginner?
The general definition of "algorithm", in simple terms, is: a sequence of instructions to solve a problem. I can write an algorithm for how to clean my house. Or for picking apples from a tree. Or for flying a plane. There are as many potential algorithms as there are ideas of what to do in this world, and the variety of required skills to handle all of them is vast. The same holds true when entering the domain of computer science: there are algorithms for all sorts of stuff. Obviously, logical thinking and some sense for numbers won't hurt. In fact, most algorithms I've come across should be understandable by non-computer scientists if only written down in normal language instead of a programming language. Of course there are algorithms for which you will need very specific mathematical knowledge, but the intersection of required skills for understanding arbitrary algorithms remains as simple as: logical thinking and some sense for numbers.
If the brain were so simple we could understand it, we would be so simple we couldn't. — Lyall Watson
Algebra is the most important, since everything is based on it. Graph theory helps develop the kind of reasoning you need for algorithms. I've found linear algebra useful for real-world algorithms I've had to implement, but it depends on the domain you're working in.
I'm looking for an algorithm to find all the possible combinations of meetings (details in the attached photo). I would like to enter the teams and have the algorithm calculate all possible combinations of matches; or, better yet, I could enter a playing-strength value for each team and get all the options ordered from most probable.
The problem at first seems to be simple, but it is not, at least for me...
I have an array/set/group of numbers,
and I need to make groups of these numbers so that each group has element(s) whose sum is equal, or as close as possible, to a certain value, for example 10. Of course I understand that there are lots of solutions, but I need only one.
So, for this set I should get:
A - 2,7,1 -> 10
B - 9,1 -> 10
C - 2,8 -> 10
D - 5,5 -> 10
E - 6,3 -> 9
F - 3 -> 3
Of course this is one possible solution, but I don't need super-optimization for this algorithm.
If anyone has an idea of how to approach this problem, I would be very grateful for the help.
You would have to extend that classical approach to allow sub-optimal group-solutions (where sum < 10 or maybe also where sum > 10 up to a certain maximum)*, temporarily store all potential overall-solutions and when the backtracking is done, pick the best based on some criteria that you have to define. E.g. where the sum of differences from 10 is smallest (would be 8 in your example). Or, as you don't need the best possible solution, you could cancel the process once a satisfactory overall-solution has been found.
* : Which means that the evaluation should continue even after finding a potential group-solution until exceeding the tolerable limit.
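For illustration, here is a simpler greedy variant of that idea rather than full backtracking: repeatedly take, from the remaining numbers, the subset whose sum is closest to the target without exceeding it. It brute-forces all subsets each round, so it is only suitable for small inputs, and it assumes every number is at most the target:

```python
from itertools import combinations

def group_numbers(numbers, target=10):
    """Greedily peel off the remaining subset with the largest sum
    not exceeding target, until all numbers are grouped."""
    remaining = list(numbers)
    groups = []
    while remaining:
        best = None
        for size in range(1, len(remaining) + 1):
            for combo in combinations(range(len(remaining)), size):
                s = sum(remaining[i] for i in combo)
                if s <= target and (best is None or s > sum(remaining[i] for i in best)):
                    best = combo
        groups.append([remaining[i] for i in best])
        for i in sorted(best, reverse=True):
            del remaining[i]
    return groups
```

Greedy choices can leave awkward leftovers that backtracking would avoid, but since the poster does not need an optimal solution, this may already be good enough.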