Hi,
I need to fill a blank space with images of different sizes. The images should be placed neatly, each next to the previous one.
sabarikuttan
Sabari MD
Application Developer
Veloxit Info Solutions
---
But what is "well placed"? Do you mean you want to do Optimal Rectangle Packing with irregular sizes and an additional constraint on the shape of the result? (you will not like the solution..)
---
Yes, my requirement is:
I have a rectangular free space.
I need to fill it with images of different sizes.
---
That is known as the knapsack problem. Look it up!
Luc Pattyn [Forum Guidelines] [My Articles]
The quality and detail of your question reflects on the effectiveness of the help you are likely to get.
Show formatted code inside PRE tags, and give clear symptoms when describing a problem.
---
As Harold said, it really depends on what you mean by "well placed".
If you just treat all images as rectangles, then a greedy algorithm (see knapsack problem, as Luc suggested) surely is the simplest and fastest solution.
If you go for that, remember to handle situations where there is no solution (for example, if you require ALL images to fit in the blank space, that may not be possible). You may also have to try different placement patterns before finding a solution, so a single-pass greedy algorithm may not be sufficient.
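To illustrate the greedy idea, here is a minimal "shelf" packing sketch (Python for neutrality; the function name, the tallest-first ordering, and the leftover handling are illustrative choices, not a prescribed solution):

```python
def shelf_pack(rects, bin_w, bin_h):
    """Greedily place (w, h) rectangles into a bin_w x bin_h space.

    Rectangles are laid out left-to-right on horizontal "shelves";
    a new shelf is opened when the current row is full. Returns the
    placements (x, y, w, h) and the rectangles that did not fit.
    """
    placed, leftover = [], []
    x = y = shelf_h = 0
    # Tallest-first ordering keeps each shelf's wasted space small.
    for w, h in sorted(rects, key=lambda r: -r[1]):
        if x + w > bin_w:               # current shelf full: open a new one
            y += shelf_h
            x = shelf_h = 0
        if w > bin_w or y + h > bin_h:  # no room anywhere for this rectangle
            leftover.append((w, h))
            continue
        placed.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placed, leftover
```

A non-empty `leftover` list is exactly the "no solution" case mentioned above: not every image could be placed in one greedy pass.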
2+2=5 for very large amounts of 2
(always loved that one hehe!)
---
Hi
I want to implement a project in MATLAB titled 'Fingerprint Image Quality Classification', where we take an image and classify it as wet/oily, dry, or normal.
Would anybody like to share their work or help me out?
Regards
---
Can you define wet/oily, dry, and normal images for me?
You have the thought that modern physics just relies on assumptions that somehow depend on the smile of a cat which isn't there. (Albert Einstein)
---
A wet/oily image is one in which the ridges are overlapped by dark blobs, giving the impression that black ink has been spotted onto the fingerprint.
A dry image is one in which some pixels of the ridges are rubbed out.
A normal image is simply a good-quality image.
---
It's been a while since I did image processing, and it was not in that field.
But my first thought would be to create a sort of pattern for each state and then try to correlate against it.
Or you could try to eliminate the finger in the image and check the rest.
Something like that.
I don't know if this is helpful because, as I said, it's been a while.
So good luck!
Cheers
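As a rough illustration of that pattern idea, here is a toy sketch (every threshold and the dark-pixel heuristic are made-up placeholders; a real classifier would use ridge-based features extracted from the fingerprint):

```python
def classify_fingerprint(img, dark_level=80, wet_ratio=0.55, dry_ratio=0.15):
    """Toy quality classifier for a grayscale fingerprint image.

    img is a 2D list of 0-255 intensities. The fraction of dark pixels
    stands in for ridge coverage: too many dark pixels suggests smudged
    (wet/oily) ridges, too few suggests rubbed-out (dry) ridges. All
    thresholds here are hypothetical and would need tuning on real data.
    """
    pixels = [p for row in img for p in row]
    dark = sum(1 for p in pixels if p < dark_level) / len(pixels)
    if dark > wet_ratio:
        return "wet/oily"
    if dark < dry_ratio:
        return "dry"
    return "normal"
```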
---
Hi,
I know that I should not ask a general question, but I have spent a whole week looking for a solution and I am just fed up. I need an easy guide (I am not a mathematician) to the barrel distortion correction algorithm.
Thanks in advance.
---
I suggest you read some of these hits[^].
Luc Pattyn [Forum Guidelines] [My Articles]
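For what it's worth, the usual starting point is a one-coefficient radial model: a point at radius r from the image centre is moved by a factor of roughly (1 + k·r²). A minimal per-point sketch (the function name and parameters are illustrative; a real pipeline also resamples and interpolates the image):

```python
def undistort_point(xd, yd, cx, cy, k):
    """Map a point from the distorted image toward its corrected position.

    Uses the one-parameter radial model r_u ~= r_d * (1 + k * r_d^2),
    where (cx, cy) is the distortion centre and k is a small distortion
    coefficient; k > 0 pushes points outward, undoing the inward squeeze
    of barrel distortion. Illustrative only.
    """
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k * r2
    return cx + dx * scale, cy + dy * scale
```

To correct a whole image one would normally iterate in the inverse direction (for each output pixel, find the source location in the distorted image and interpolate), but the radial scaling above is the core of the algorithm.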
---
Hello all,
I am trying to implement TEA in my application. So far I have been able to encrypt a table with eleven columns, but I can only decrypt three of them. I don't understand what the problem is, because the columns all have the same data type, i.e. nvarchar.
Why would it decrypt only three columns? Below is the code where I pass the arguments.
For Each rw As DataRow In dt.Rows
    decdata = New TEA()
    deRecDataFnam = decp.Decrypt(rw.Item("First Name"), ned)
    deRecDataLnam = decDatalab.Decrypt(rw.Item("Last Name"), ned)
    deRecDataSec = decdata.Decrypt(rw.Item("Social Security Number"), ned) ' --> can decrypt
    deRecDataPh = decdata.Decrypt(rw.Item("Phone Number"), ned)
    deRecDataCon = decdata.Decrypt(rw.Item("Contact Address"), ned)
    deRecDataDOB = decdata.Decrypt(rw.Item("Date of Birth"), ned) ' --> can decrypt
    deRecDataMar = decdata.Decrypt(rw.Item("Marital Status"), ned)
    deRecDataBlud = decdata.Decrypt(rw("Blood Group"), ned)
    deRecDataGeno = decdata.Decrypt(rw.Item("Genotype"), ned) ' --> can decrypt
    deRecDataGen = decdata.Decrypt(rw("Gender"), ned)
Next
the encryption and decryption code can be found here
http://allmysocial.net/post/Tiny-Encryption-Algorithm-(TEA)-in-Visual-BasicNET.aspx[^]
I can paste the algorithm if you want me to.
Thanks a lot.
---
Just guessing, actually, but could it be that those are the only columns that are never empty?
Luc Pattyn [Forum Guidelines] [My Articles]
---
Can anyone suggest good reading material or examples for implementing a DFS to find all paths, preferably in C++ using linked lists? Books, tutorials, anything...
---
Here is a good PDF:
DFS Tutorial
Abhijit Jana | Codeproject MVP
Web Site : abhijitjana.net
Don't forget to click "Good Answer" on the post(s) that helped you.
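To complement the reading, a minimal sketch of DFS enumerating all simple paths over an adjacency list (Python rather than C++ for brevity; the idea carries over directly to a linked-list adjacency representation):

```python
def all_paths(graph, start, goal, path=None):
    """Depth-first enumeration of all simple paths from start to goal.

    graph is an adjacency list: {node: [neighbours]}. The current path
    doubles as the visited set, so cycles are never followed.
    """
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:          # skip nodes already on this path
            paths.extend(all_paths(graph, nxt, goal, path))
    return paths
```

For example, with `g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}`, `all_paths(g, 'A', 'D')` yields both A-B-D and A-C-D.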
---
This shows depth-first search in action, highlighting which nodes have been visited and which have not:
http://www.cs.usask.ca/content/resources/csconcepts/1998_3/DFS/java/index.html
---
Hi,
FYI:
When replying to a post, please make sure you are replying to the right person. The link you provided may be useful to the person who posted the question, but if you reply to me instead, he will not get the reply until he revisits the site. If you reply directly to his message and his mail filter is on, he will get an email notification and can keep track of it.
---
Hi everyone,
How would I go about taking X number of inputs and combining them to create an 'output ID'? The key thing about the ID is that it should take all the inputs into account (preferably with weighting, e.g. X1 slightly more important than X2) and produce a number that can be used to compare similarity. For example:
A) Inputs 1,2,3 ---> <perform function=""> ---> 0123456
B) Inputs 1,2,4 ---> <perform function=""> ---> 0123457
C) Inputs 5,3,8 ---> <perform function=""> ---> 1354565
Notice A and B are similar, so their outputs are more similar to each other than either is to C's.
Thank you for any help
modified on Friday, July 24, 2009 6:43 PM
---
If your inputs are very small integers, say for example 1 or 2 digit numbers, and you have a small amount of them, then you might simply use a sum of factors.
Just put your X's in order by weight so that X1 has the biggest weight and so on; then (supposing you are using 2-digit integers) just do something like this (the code is in C#, but neutral enough):
long ID = (X1 * 1000000) + (X2 * 10000) + (X3 * 100) + X4;
Comparison with other IDs to find similarity can now be done using a threshold:
if (Math.Abs(ID1 - ID2) <= Threshold) ...
I know, there are too many "if"s for this to be generally usable, but I hope it helps somehow.
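The sum-of-factors idea above as a runnable sketch (Python for neutrality; it assumes each input is a non-negative 2-digit integer and that inputs are listed heaviest-weight first):

```python
def make_id(inputs):
    """Combine 2-digit inputs into one ID, first input most significant."""
    result = 0
    for x in inputs:
        result = result * 100 + x   # shift left by two decimal digits
    return result

def similar(id1, id2, threshold):
    """IDs count as 'similar' when they differ by at most threshold."""
    return abs(id1 - id2) <= threshold
```

With the example inputs from the question, `make_id([1, 2, 3])` and `make_id([1, 2, 4])` differ by 1, while `make_id([5, 3, 8])` lands far away from both.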
---
Good. But maybe you could use smaller and closer multipliers, or else a slightly different X1 will never pass the threshold while a very different X4 will always pass.
I think you could do the same with different weights:
long ID = (X1 * W1) + (X2 * W2) + (X3 * W3) + (X4 * W4);
and vary the weights W1, W2, W3 and W4 and the threshold until you get a good result.
As another solution, if your inputs really are just 3 numbers, you could treat them as three-dimensional mathematical vectors and calculate the size of the vector, that is, its modulus. This would be:
double Modulus = Math.Sqrt(X1*X1 + X2*X2 + X3*X3);
You can also add weights in this formula, like X1*X1*W1 and so on. Then you can compare:
if (Math.Abs(Modulus1 - Modulus2) <= Threshold) ...
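The weighted-modulus comparison could be sketched like this (Python for neutrality; the weight handling follows the X*X*W suggestion above):

```python
import math

def weighted_modulus(inputs, weights):
    """Length of the input vector, with each squared term weighted."""
    return math.sqrt(sum(w * x * x for x, w in zip(inputs, weights)))

def moduli_similar(m1, m2, threshold):
    """Two moduli count as similar when they differ by at most threshold."""
    return abs(m1 - m2) <= threshold
```

One caveat worth noting: quite different vectors can share a modulus (with unit weights, (3, 4) and (5, 0) both give 5), so this is a lossy similarity measure.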
Regards,
Leonardo Muzzi
---
I particularly like the vector modulus idea, very elegant.
But I see a problem with the two approaches you suggested: you should use dynamic weights to account for the differences in magnitude between the various inputs. If, for example, we have:
W1=3 ; W2=2 ; W3=1
X1=100 ; X2=300 ; X3=10
it's clear that we should increase the value of W1 in order to restrain X2 from weighing more than X1, even if its weight W2 is already less than W1. With different input sets, weights may have to be re-adjusted again. This means you would have to go through all the input sets in order to come up with proper weights before you start applying them. Even when possible, this would not be optimal.
I suggested factors of 10 because in most real world applications it's natural to have inputs whose range is in boundaries defined by factors of 10 (for example 0 to 999 etc.). Using factors of 10 will retain all of the digits for each input.
If we want to save memory (bits), I think we should switch to factors of 2 and lose the least significant bits. Switching to factors of 2 is trivial: we just left-shift the values with higher weights, while values with lower weights will drop to the right (least significant bits), and we preserve the logic I suggested with factors of 10 in terms of comparisons with thresholds.
For example, let's say that our inputs will be in the range from 0 to 999. We need 10 bits to hold that range of values, hence if we want to combine four inputs X1...X4 we need 40 bits. Now, it would sure be good to make that 32 bits, so that we can optimize memory usage and processing time, going for a nice standard 32-bit int. To do that, we just drop the two least significant bits for each input:
int ID = ((X1 >> 2) << 24) | ((X2 >> 2) << 16) | ((X3 >> 2) << 8) | (X4 >> 2);
EDIT: of course the same can be done with factors of 10, by dividing the inputs by 10, 100, etc. in order to use up less digits.
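A runnable sketch of that bit-packing (Python for neutrality; after dropping two low bits, each 0-999 input fits in 8 bits, so four inputs fill exactly 32 bits):

```python
def pack_id(x1, x2, x3, x4):
    """Pack four 0-999 inputs into 32 bits, x1 most significant.

    Each input needs 10 bits; dropping the two least significant bits
    leaves 8 bits per field. The shifted fields are combined with
    bitwise OR (AND-ing them would zero almost everything out).
    """
    return ((x1 >> 2) << 24) | ((x2 >> 2) << 16) | ((x3 >> 2) << 8) | (x4 >> 2)
```

Comparison against a threshold then works as before, at the cost of the two dropped bits of precision per input.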
Looking forward to hearing your thoughts about this.
modified on Saturday, August 8, 2009 4:12 AM
---
Hi there! I think dynamic weights should be used if the scenario asks for them. If he needs X1 to always be more valuable than X2, then go with dynamic weights and increase W1 as much as needed. If the scenario asks for different results based on the input sets, that would be nice!
I think a good approach would be to use dynamic weights but vary them based on the results. That is, choose a formula (like the sum of factors, the vector modulus, or any other), implement the solution, and write a test program that runs as many tests as possible. Then compare the generated results with the desired results, and vary the weights to get closer to them. The test program could even vary the weights by itself.
This would be close to a small neural network solution: the program learns how to proceed based on test data. The problem with this approach is that you need a lot of test data so the program can "learn" enough.
About the memory usage: a very nice suggestion! I just think that shrinking the more important factors (X1, X2, ...) might compromise the solution, since the least significant bits of X1, for instance, could be more important than the whole of X4. But again, that depends on the scenario.
By the way, I forgot to mention that the vector modulus can be used with any number of inputs, not just 3, by adding more factors to the formula.
Anyway, I think the author of the post has enough to work with!
Regards,
Leonardo Muzzi
---
Leonardo Muzzi wrote: Anyway, I think the author of the post has enough to work with!
That's for sure hehe, the rest is just for our fun! :P
---
Leonardo Muzzi wrote: About the memory usage: a very nice suggestion! I just think that shrinking the more important factors (X1, X2, ...) might compromise the solution, since the least significant bits of X1, for instance, could be more important than the whole of X4. But again, that depends on the scenario.
BTW, I forgot to mention: you are absolutely right (depending on the scenario, of course, but I think it holds for most)!
So it should be changed to something similar to:
int ID = (X1 << 22) | ((X2 >> 1) << 13) | ((X3 >> 3) << 6) | (X4 >> 4);
---
Help! I need a book that describes hashing in detail.