|
No, you've already assigned values to your tape sensors. That's the input to the PID loop. The output is a signal to steer the robot to keep the tape centered, i.e. to drive the input signal to zero. It sounds like you have a typical differentially steered robot with two powered wheels, so you steer by having one wheel turn at a different rate than the other; the robot will turn toward the slower wheel. Take the simple case of following a straight tape (if your robot won't follow a straight tape, you'll never follow a curving one). You start out with the same output set for both motors. If the robot drifts to the left, the right sensors will start to pick this up and you'll want to steer the robot back to the right. For practical purposes, this means slowing down the right motor, since you can always slow down or even go into reverse, but you can't always speed up if you're running at or near top speed. What you have to do is tune the PID parameters so that the output tells you how much to slow the motor.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists.
|
|
|
|
|
From what you said, I thought you'd started with this article?[^]
|
|
|
|
|
|
I've done wire following (with machines weighing well over 2,000 lbs). On larger-scale devices the momentum of the machine becomes a major factor, but with smaller ones it will probably matter less. Still, it would be helpful to know what mechanical control constraints you're operating under. Making a robot that will stay on the wire isn't too hard if hyper-sensitive steering is acceptable; making the robot travel smoothly along the wire is harder.
At least on the larger machines I've worked with, getting a 'straight' PID to work well under all circumstances has generally proved difficult. Your situation may be further complicated by the fact that many PWM systems modulate motor current rather than voltage. Do you have dynamic braking available? How is it controlled?
|
|
|
|
|
|
I am trying to write MATLAB code to scale a 256x256 input image down to a smaller 160x160 image using bilinear interpolation. The idea is to compare the result with the image produced by the imresize function. Can anybody give me a detailed algorithm (preferably pseudo-code) on how to approach that?
I already have some links, but they are so poorly commented that I get easily lost.
Any help will be highly appreciated.
|
|
|
|
|
Hi,
this[^] seems pretty straightforward. What is the problem?
For all kinds of transformations, walk the new coordinate system, find the location in the old coordinate system, and apply the transformation, hence:
foreach yy in new coordinates
    calculate y in old coordinates
    round to neighbours y1 and y2
    foreach xx in new coordinates
        calculate x in old coordinates
        round to neighbours x1 and x2
        apply bilinear formulas
        store result at (xx,yy)
    next
next
Luc Pattyn
Local announcement (Antwerp region): Lange Wapper? Neen!
|
|
|
|
|
Do you have an idea how to do it in MATLAB? I am pretty new at that language and I need to program it in MATLAB. Also, I don't think it's that simple; the purpose is image resizing, so basically it will be applied to a matrix (not that that changes anything).
|
|
|
|
|
I gave you the pseudo-code you asked for.
For Matlab, see its documentation, and/or Google.
Luc Pattyn
|
|
|
|
|
Something I usually do when using MATLAB is to first implement my algorithm to be computed sequentially, like I would in C, and then move to the matrix form. It executes far more slowly, but with the data sizes you're talking about it won't matter.
|
|
|
|
|
Hi - I am a bit of a noob, walking in the dark, so I would like to ask for some help/pointers.
I have a list of data (relating to power consumption of devices) that has been TEA encrypted (I think using the XTEA block method). I have a specific example where I know what is behind the cipher (i.e. what it decrypts to), but I need to apply this across the whole data set.
E.g.
<citem partnumber="dde5b92cc715b817" value="HP U320e SCSI Host Bus Adapter" lowpower="no" idle="4218c8b7d21d5e08" max="5bb81eba4f0fcc1f" />
I know dde5b92cc715b817 = AH627A
Within the XML database there is mention of a passkey!
<passkey>16028d22793c8d6d1637fe6ddc1b68b64462463a7f236fb4e7b9b1d547251a4e</passkey>
But I am guessing this is encrypted also! I have tried all the online tools, etc., but this does not look right or work. I have deduced that the output is hex ciphertext, but now I am stuck.
Can anyone help / point me in the right direction?
|
|
|
|
|
I have embarked on writing an image processing application and am now concerned with simple operations like brightness/contrast and window/level. The big question is about performance. For small images (up to 1000 x 1000), things happen in real time, but for images of size 3000 x 2000 the program is sluggish. My program is written in C#, with GDI+. Taking a step back, is this (C#, GDI+) a good choice for such a program, or should one go back to unmanaged C++ and MFC? I would like to use WPF, but again, have any of you great programmers out there seen performance problems with fast image processing on WPF? Also, I'd be grateful if you could share some performance-improving tips.
|
|
|
|
|
All the .NET languages are slow. Despite the dubious claims by some people that they can be as fast as unmanaged code, .NET programs are usually sluggish. Managed code does JITing, boxing/unboxing, and run-time checking (e.g. bounds checking on array accesses) for example.
However, .NET programs are more reliable. Recently the same subsystem was implemented at my company in C# and unmanaged C++. The C++ code was faster but would regularly crash mysteriously. The C# code was solid.
The best approach is to write the bulk of your code in C#, but do the image processing in unmanaged C++. This gives you the reliability of managed code, and the speed of unmanaged code for the time-consuming repetitive tasks.
The fastest processing is in small C++ loops that fit entirely into the cache, with no contained branches. This allows out-of-order execution which speeds up processing.
Since a loop is a branch, which makes out-of-order execution difficult, you can process two or more cases inside your loop (instead of one) to do more processing before the branch. This is called "loop unrolling", and can speed up processing.
Processing the image from low addresses to high addresses minimizes memory accesses and makes use of the cache, which fills a 32 (or 64)-byte buffer (called a "cache line") with a single memory access (even one pixel). Subsequent memory accesses at addresses just above this DON'T access memory because the contents are already in the cache. This saves time by avoiding the need for the processor to wait for a memory access.
The Intel (and AMD) SSE and MMX extensions to the instruction set (http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions[^]) allow the use of 128-bit registers which may allow you to do image-processing operations in parallel for more speed.
|
|
|
|
|
Alan Balkany wrote: The C++ code was faster but would regularly crash mysteriously. The C# code was solid.
I would suggest the C++ code was written by an incompetent or hadn't been debugged properly. I've written a ton of C++ code over the years and my code does not typically "just crash mysteriously". Or it was some Microsoft benchmark slanted to show how their proprietary garbage is superior.
|
|
|
|
|
"I would suggest the C++ code was written by an incompetent or hadn't been debugged properly."
Probably. But my point is that C++ lets you get away with that. C# quickly detects that type of problem.
In this case, a few months previously the C++ programmer had been deriding the need for C# memory protections, saying "My code doesn't have memory leaks!". People aren't perfect, and a language that protects you against some mistakes will produce more reliable programs.
|
|
|
|
|
Alan Balkany wrote: C# quickly detects that type of problem.
Don't count on it.
|
|
|
|
|
The whole memory leak problem that .NET is supposed to cure is a big Microsoft FUD. Since at least the early 90s, Microsoft has provided a debug heap that's instrumented to detect memory leaks. All you have to do is enable its use and test your debug version: when you exit, it will not only tell you if you have memory leaks but also where the unfreed memory was allocated. This works both for C and C++. The bigger problem is resource leaks, which Microsoft didn't do much to address. Forget to call Dispose and/or fail to implement it properly, and it's no better than forgetting to call free or mishandling your destructor.
|
|
|
|
|
.NET detects memory problems that the unmanaged debug heap won't, such as array indexes out of bounds.
.NET code is more reliable. Of course if you write perfect code, you can have a reliable unmanaged program, but who writes perfect code? At the current level of technology, we can't prove that ANY program is correct.
|
|
|
|
|
I guess we'll just have to agree to disagree about the benefits of locking yourself into a proprietary language for dubious benefits.
|
|
|
|
|
I had implemented an image processing program in Delphi 7, and it showed real-time performance. However, the support for Delphi is not very strong. Let me hasten to add that I found absolutely no problems with the Delphi 7 executable - no crashing, etc. for big images. Also, Delphi has a function to assign all bits of a single scan-line - so you don't need nested 'for' loops (outer loop over height, and inner loop over width); such a function is missing in C#. I am amazed at the way Delphi achieves high performance.
Java is yet another option, which I have not yet explored.
modified on Tuesday, September 22, 2009 2:35 AM
|
|
|
|
|
I don't think Java is a good approach if you're looking for maximum performance. It's an interpreted language, with bytecodes being executed by a software virtual machine, so it's slower than native code.
|
|
|
|
|
Alan Balkany wrote: It's an interpreted language, with bytecodes being executed by a software virtual machine
That of course is highly debatable. Both Java and C# compile to an intermediate language, which is then compiled to, stored, and executed as native code (at "run-time", which actually means just before it runs, so not really different from "at build-time" except that it adds to your app's start-up time). An interpreter would never generate native code.
Whether the end result is worse, equal or better performance-wise is mainly determined by the amount of effort they have chosen to spend in the compiler and virtual machine. After all, the intermediate code, containing a lot of meta information, is a perfect representation of the original source code.
BTW: most/all regular compilers also have a front-end dealing with the source language and a back-end generating the final instructions, with the two parts communicating through a rather language-agnostic internal representation of the source; that is basically what bytecode and IL are too.
Luc Pattyn
Have a look at my entry for the lean-and-mean competition; please provide comments, feedback, discussion, and don’t forget to vote for it! Thank you.
|
|
|
|
|
Luc Pattyn wrote: That of course is highly debatable
However, experience shows that Java programs are (usually) deadly slow.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
Experience also shows that VB code is sh!t; this is often due to the monkeys rather than the tree.
Panic, Chaos, Destruction.
My work here is done.
|
|
|
|
|
That's true. Anyway, I wouldn't suggest anyone use Java for programming 'damned-fast' applications. While Java has many, many qualities, alas speed is not one of them (of course this is going on my...).
|
|
|
|