|
Do you have an idea how to do it in MATLAB? I am pretty new at that language and I need to program it in MATLAB. Also, I don't think it's that simple; the purpose is image resizing, so basically it will be applied to a matrix (not that that changes anything).
|
|
|
|
|
I gave you the pseudo-code you asked for.
For Matlab, see its documentation, and/or Google.
Luc Pattyn
Local announcement (Antwerp region): Lange Wapper? Neen!
|
|
|
|
|
Something I usually do when using MATLAB is to first implement my algorithm so it computes sequentially, like I would in C, and then move to the matrix form. The sequential version executes far more slowly, but with the data sizes you're talking about it won't matter.
|
|
|
|
|
Hi - I am a bit of a noob walking in the dark, so I would like to ask for some help/pointers.
I have a list of data (it relates to power consumption of devices) that has been TEA encrypted (I think using the XTEA block method). I have a specific example where I know what is behind the cipher (i.e. what it decrypts to), but I need to apply this across the whole data.
Eg.
<citem partnumber="dde5b92cc715b817" value="HP U320e SCSI Host Bus Adapter" lowpower="no" idle="4218c8b7d21d5e08" max="5bb81eba4f0fcc1f" />
I know dde5b92cc715b817 = AH627A
Within the XML database there is mention of a passkey!
<passkey>16028d22793c8d6d1637fe6ddc1b68b64462463a7f236fb4e7b9b1d547251a4e</passkey>
But I am guessing this is encrypted also! I have tried all the online tools etc., but the result does not look right or work. I have deduced that the output is hex ciphertext, but now I am stuck.
Can anyone help / point me in the right direction?
|
|
|
|
|
I have embarked on writing an image processing application and am now concerned with simple operations like brightness/contrast and window/level. Now, the big question is performance. For small images (up to 1000 x 1000), things happen in real time, but for images of size 3000 x 2000 the program is sluggish. My program is written in C# with GDI+. Taking a step back, is this (C#, GDI+) a good choice for such a program, or should one revert to unmanaged C++ and MFC? I would like to use WPF, but again, have any of you great programmers out there seen performance problems with fast image processing on WPF? Also, I would be grateful if you could share some performance-improving tips.
|
|
|
|
|
All the .NET languages are slow. Despite the dubious claims by some people that they can be as fast as unmanaged code, .NET programs are usually sluggish. Managed code pays for JIT compilation, boxing/unboxing, and run-time checking (e.g. bounds checking on array accesses).
However, .NET programs are more reliable. Recently the same subsystem was implemented at my company in C# and unmanaged C++. The C++ code was faster but would regularly crash mysteriously. The C# code was solid.
The best approach is to write the bulk of your code in C#, but do the image processing in unmanaged C++. This gives you the reliability of managed code, and the speed of unmanaged code for the time-consuming repetitive tasks.
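A minimal sketch of what the native side of such a split might look like (the function name, the DLL name, and the C# declaration in the comment are illustrative assumptions, not anything from a real project):

```cpp
#include <cstdint>
#include <cstddef>

// Native pixel loop, exported with C linkage so managed code can P/Invoke it.
// The matching C# declaration would look something like (hypothetical):
//   [DllImport("imgops.dll")]
//   static extern void AdjustBrightness(byte[] pixels, UIntPtr count, int delta);
extern "C" void AdjustBrightness(uint8_t* pixels, size_t count, int delta)
{
    for (size_t i = 0; i < count; ++i)
    {
        int v = pixels[i] + delta;  // widen to int so over/underflow is visible
        pixels[i] = static_cast<uint8_t>(v < 0 ? 0 : (v > 255 ? 255 : v));  // clamp to [0, 255]
    }
}
```

The C# side then stays responsible for UI, file I/O, and error handling, while the inner per-pixel loop runs as native code.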
The fastest processing happens in small C++ loops that fit entirely into the cache and contain no branches; this keeps the pipeline full and allows out-of-order execution, which speeds up processing.
Since the loop itself is a branch, which makes out-of-order execution difficult, you can process two or more elements inside each iteration (instead of one) to do more work per branch. This is called "loop unrolling", and it can speed up processing.
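A sketch of what unrolling looks like in practice (the invert operation is illustrative; note that modern optimizing compilers will often unroll the simple loop for you anyway):

```cpp
#include <cstdint>
#include <cstddef>

// Plain loop: one pixel per iteration, one loop branch per pixel.
void InvertSimple(uint8_t* p, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        p[i] = 255 - p[i];
}

// Unrolled by four: the loop branch is taken once per four pixels.
void InvertUnrolled(uint8_t* p, size_t n)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
    {
        p[i]     = 255 - p[i];
        p[i + 1] = 255 - p[i + 1];
        p[i + 2] = 255 - p[i + 2];
        p[i + 3] = 255 - p[i + 3];
    }
    for (; i < n; ++i)  // clean-up loop for the leftover 0-3 pixels
        p[i] = 255 - p[i];
}
```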
Processing the image from low addresses to high addresses minimizes memory accesses and makes good use of the cache, which fills a 32- or 64-byte buffer (called a "cache line") with a single memory access, even when only one pixel is read. Subsequent accesses to addresses just above this DON'T go to memory, because the contents are already in the cache. This saves time by avoiding stalls while the processor waits for a memory access.
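The access-order point can be illustrated with a row-major traversal (the function is illustrative; swapping the two loops computes the same sum but strides `width` bytes per access, wasting most of each cache line):

```cpp
#include <cstdint>
#include <cstddef>

// Row-major image: pixel (row, col) lives at data[row * width + col].
// Keeping `col` in the inner loop touches consecutive addresses, so each
// cache line fetched serves many pixels before being evicted.
uint64_t SumRowMajor(const uint8_t* data, size_t width, size_t height)
{
    uint64_t sum = 0;
    for (size_t row = 0; row < height; ++row)
        for (size_t col = 0; col < width; ++col)
            sum += data[row * width + col];
    return sum;
}
```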
The Intel (and AMD) SSE and MMX extensions to the instruction set (http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions[^]) allow the use of 128-bit registers which may allow you to do image-processing operations in parallel for more speed.
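As a sketch of the SIMD idea: a hedged SSE2 example (SSE2 is baseline on x86-64; the function name is illustrative, and the count is assumed to be a multiple of 16) that brightens sixteen pixels per instruction using saturating byte adds:

```cpp
#include <cstdint>
#include <cstddef>
#include <emmintrin.h>  // SSE2 intrinsics, available on all x86-64 targets

// Brighten 16 pixels at a time. _mm_adds_epu8 saturates instead of
// wrapping, so 250 + 100 becomes 255 rather than overflowing.
// Assumes count is a multiple of 16; a real version needs a scalar tail.
void BrightenSSE2(uint8_t* pixels, size_t count, uint8_t delta)
{
    const __m128i d = _mm_set1_epi8(static_cast<char>(delta));
    for (size_t i = 0; i < count; i += 16)
    {
        __m128i v = _mm_loadu_si128(reinterpret_cast<__m128i*>(pixels + i));
        v = _mm_adds_epu8(v, d);  // 16 saturating unsigned adds in one instruction
        _mm_storeu_si128(reinterpret_cast<__m128i*>(pixels + i), v);
    }
}
```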
|
|
|
|
|
Alan Balkany wrote: The C++ code was faster but would regularly crash mysteriously. The C# code was solid.
I would suggest the C++ code was written by an incompetent or hadn't been debugged properly. I've written a ton of C++ code over the years and my code does not typically "just crash mysteriously". Or it was some Microsoft benchmark slanted to show how their proprietary garbage is superior.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
"I would suggest the C++ code was written by an incompetent or hadn't been debugged properly."
Probably. But my point is that C++ lets you get away with that. C# quickly detects that type of problem.
In this case, a few months previously the C++ programmer had been deriding the need for C# memory protections, saying "My code doesn't have memory leaks!". People aren't perfect, and a language that protects you against some mistakes will produce more reliable programs.
|
|
|
|
|
Alan Balkany wrote: C# quickly detects that type of problem.
Don't count on it.
|
|
|
|
|
The whole memory-leak problem that .NET is supposed to cure is a big piece of Microsoft FUD. Since at least the early 90s, Microsoft has provided a debug heap that's instrumented to detect memory leaks. All you have to do is enable it and test your debug version; when you exit, it will not only tell you whether you have memory leaks but also where the unfreed memory was allocated. This works for both C and C++. The bigger problem is resource leaks, which Microsoft didn't do much to address. Forget to call Dispose (or fail to implement it properly) and you're no better off than if you forget to call free or mishandle your destructor.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
.NET detects memory problems that the unmanaged debug heap won't, such as array indexes out of bounds.
.NET code is more reliable. Of course if you write perfect code, you can have a reliable unmanaged program, but who writes perfect code? At the current level of technology, we can't prove that ANY program is correct.
|
|
|
|
|
I guess we'll just have to agree to disagree about the benefits of locking yourself into a proprietary language for dubious benefits.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
I implemented an image processing program in Delphi 7, and it showed real-time performance. However, the support for Delphi is not very strong. Let me hasten to add that I found absolutely no problems with the Delphi 7 executable - no crashing, etc., even for big images. Also, Delphi has a function to assign all the bits of a single scan line, so you don't need nested 'for' loops (an outer loop over the height and an inner loop over the width); such a function is missing in C#. I am amazed at how Delphi achieves such high performance.
Java is yet another option, which I have not yet explored.
modified on Tuesday, September 22, 2009 2:35 AM
|
|
|
|
|
I don't think Java is a good approach if you're looking for maximum performance. It's an interpreted language, with bytecodes being executed by a software virtual machine, so it's slower than native code.
|
|
|
|
|
Alan Balkany wrote: It's an interpreted language, with bytecodes being executed by a software virtual machine
That of course is highly debatable. Both Java and C# compile to an intermediate language, which is then compiled to, stored, and executed as native code (at "run-time", which actually means just before it runs, so not really different from "at build-time" except that it adds to your app's start-up time). An interpreter would never generate native code.
Whether the end result is worse, equal or better performance-wise is mainly determined by the amount of effort they have chosen to spend in the compiler and virtual machine. After all, the intermediate code, containing a lot of meta information, is a perfect representation of the original source code.
BTW: most/all regular compilers also have a front-end dealing with the source language and a back-end generating the final instructions, with the two parts communicating through a rather language-agnostic internal representation of the source; that is basically what bytecode and IL are too.
Luc Pattyn
Have a look at my entry for the lean-and-mean competition; please provide comments, feedback, discussion, and don’t forget to vote for it! Thank you.
Local announcement (Antwerp region): Lange Wapper? Neen!
|
|
|
|
|
Luc Pattyn wrote: That of course is highly debatable
However, experience shows that java programs are (usually) deadly slow.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
Experience also shows that VB code is sh!t; this is often due to the monkeys rather than the tree.
Panic, Chaos, Destruction.
My work here is done.
|
|
|
|
|
That's true. Anyway, I wouldn't suggest anyone use Java for programming 'damned-fast' applications. While Java has many, many qualities, alas, speed is not one of them (of course this is going on my...).
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
I have been using a very fast proprietary Java system for many years, since well before .NET came to be. The average Java implementation being slow is due to the fact that anyone can create a JVM (just as anyone can write a compiler); it takes a professional, performance-oriented approach to create a good one. "It works, let's ship it" isn't good enough. Not here, not anywhere that performance matters.
Luc Pattyn
Local announcement (Antwerp region): Lange Wapper? Neen!
|
|
|
|
|
No: nothing can even approach plain C performance (well, assembly can be better).
I'm talking about well-written Java applications vs. well-written C ones.
Believe me, there's a reason the speed of light is c.
On the other hand, Java has many, many good features, but, you know, it is the "compile once, slow down everywhere" language (well, everywhere but the proprietary implementation you experienced...).
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
modified on Friday, September 25, 2009 3:45 PM
|
|
|
|
|
unmanaged code kills managed code for this type of thing.
we do all of our stuff in C++, with a bit of assembly for the MMX/SSE optimizations. sometimes we use the MMX/SSE intrinsics (which are essentially macros), but that's just being lazy. hand-coded assembly can beat the intrinsics.
nothing will beat an unmanaged pointer zipping across the image data for sheer speed.
of course, a decent algorithm can do wonders, too.
|
|
|
|
|
Why isn't anyone talking about MATLAB?
|
|
|
|
|
Hello good people. I'm not so sure if this should go here. I am a student who is interested in developing a search engine that indexes pages from my country. I have been doing my research on which algorithm to use for some time now, and I have found HITS and PageRank to be the best out there. I have chosen to go with PageRank since it is more stable than the HITS algorithm (so I read).
I have found countless articles and university papers about PageRank, but my problem is that I do not understand most of the mathematical symbols that form the algorithm in these papers. Currently, I cannot understand how the Google matrix (the irreducible, stochastic matrix) is calculated with the algorithm; I do not seem to understand the algorithm used.
I did my reading from the articles below:
PDF 1
PDF 2
Please, I need you to help me go through it. I need a basic explanation (examples would be nice) with fewer mathematical symbols.
Thanks in advance.
|
|
|
|
|
kentipsy wrote: Please i need you to help me go through it,i need a basic explanation(examples will be nice) with less mathematical symbols.
I don't think you will get a tutorial on statistics and matrix arithmetic here. I suspect you will need to study the mathematics quite closely if you want any chance of implementing this sort of product. However, if you want a ready-made solution, you could always try one of these[^].
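For what it's worth, the core iteration needs less machinery than the papers' notation suggests. Here is a hedged power-iteration sketch on a made-up three-page graph (the function name, the graph, and the parameter choices are all illustrative; real implementations must also handle dangling pages with no out-links, which this toy graph doesn't have):

```cpp
#include <vector>

// Power-iteration sketch of PageRank with damping factor d = 0.85.
// links[src] lists the pages that page `src` links to.
std::vector<double> PageRank(const std::vector<std::vector<int>>& links,
                             int iterations = 50, double d = 0.85)
{
    const std::size_t n = links.size();
    std::vector<double> rank(n, 1.0 / n), next(n);
    for (int it = 0; it < iterations; ++it)
    {
        // Every page receives the "teleport" share (1 - d) / n ...
        for (std::size_t i = 0; i < n; ++i)
            next[i] = (1.0 - d) / n;
        // ... plus d times the rank flowing in along links, split evenly
        // among each source page's out-links.
        for (std::size_t src = 0; src < n; ++src)
            for (int dst : links[src])
                next[dst] += d * rank[src] / links[src].size();
        rank = next;
    }
    return rank;
}
```

On the graph where page 0 links to 1 and 2, page 1 links to 2, and page 2 links to 0, the ranks converge with page 2 ranked highest, which matches the intuition that it has the most incoming links.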
|
|
|
|
|
Hi, I have a problem.
It's like this:
This is a multi-threaded program: one master thread processes the msgs generated by child threads (a typical producer-consumer model). I create one large shared buf, and all the children put their msgs into this buf when they can get the buf lock. When a child has put its msg into the buf completely, it notifies the master that there is a msg waiting to be processed, so the master acquires the buf lock, holds it, and begins to deal with the msgs.
The problem is that when a lot of child threads start up, there are a lot of msgs waiting to be processed, so the master thread holds the buf lock for a while, which is not acceptable, since no child thread can put a msg into the buf during that time. This apparently causes the program's poor performance.
So what I am trying to do is split the large shared buf into an array of small bufs, so that the children can put their msgs into whichever buf is available and the master can process the msgs in each small buf one by one.
But I don't know whether there is a good, mature algorithm or technique for managing the shared-buf array.
Any good idea is welcome! Thanks.
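One common alternative to an array of small bufs is buffer swapping: keep two buffers, and have the master swap the full one for an empty one under the lock, then process the full buffer with the lock released. Producers then only contend for a brief push or a brief swap, never for the processing time. A minimal single-lock sketch (class and method names are illustrative; a production version would add condition-variable wakeups instead of polling):

```cpp
#include <mutex>
#include <string>
#include <utility>
#include <vector>

class SwapBuffer
{
public:
    void Put(std::string msg)           // called by child threads
    {
        std::lock_guard<std::mutex> guard(lock_);
        active_.push_back(std::move(msg));
    }

    std::vector<std::string> TakeAll()  // called by the master thread
    {
        std::vector<std::string> drained;
        {
            std::lock_guard<std::mutex> guard(lock_);
            drained.swap(active_);      // O(1) swap; leaves active_ empty
        }
        return drained;                 // process contents with lock released
    }

private:
    std::mutex lock_;
    std::vector<std::string> active_;
};
```

The key property is that the lock is held only for a push or a pointer swap, never for the duration of the master's processing, so children are no longer blocked while the master works through a backlog.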
BTW, I'm not sure whether I have made the problem clear. (English is not my mother tongue; forgive me if it is not clear enough.) If you are interested in this problem, please feel free to contact me and ask for more details by email or
MSN: donniehan@live.cn
Thanks again!
|
|
|
|
|