|
How far apart are your cameras? And when you say that the right image warped to the left image plane is far from "ideal", do you know from theory what the ideal image should be, or is it just what you expect it should be?
Have you tried the OpenCV group on Yahoo Groups? You might be more likely to get a good answer there than here.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists. I'm a proud denizen of the Real Soapbox[^] ACCEPT NO SUBSTITUTES!!!
|
|
|
|
|
Hey Tim, thanks for your reply.
My right camera is rotated only 15 degrees to the right of the left camera.
I know what the ideal warp result should look like.
I've posted a message on the OpenCV group:
http://tech.groups.yahoo.com/group/OpenCV/message/64370
You're welcome to join the discussion.
Do you know how to get the homography between two cameras from the camera matrix or the extrinsic matrix?
Thanks!
|
|
|
|
|
Do you have the book "Learning OpenCV"[^]? I noticed a section on homography in it. In my exploration of OpenCV I haven't gotten quite that far. I've been spending a lot of time getting a framework in place so I can quickly and easily write test applications. That, and fighting the lack of documentation and weirdness in OpenCV.
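For what it's worth, when the motion between the two cameras is a pure rotation (as with a 15-degree pan and a negligible baseline), the warp between the image planes is the homography H = K_right · R · K_left⁻¹. A minimal sketch of that relationship; the focal length and principal point below are made-up illustration values, not numbers from this thread:

```python
import numpy as np

def rotation_homography(K_left, K_right, R):
    """Homography mapping left-image pixels to the right image for a
    PURE rotation between the cameras: H = K_right @ R @ inv(K_left)."""
    return K_right @ R @ np.linalg.inv(K_left)

# Hypothetical intrinsics: fx = fy = 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# 15-degree rotation about the vertical (y) axis.
theta = np.deg2rad(15.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [           0.0, 1.0,           0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])

H = rotation_homography(K, K, R)

# Warp one pixel by hand: q ~ H p in homogeneous coordinates.
p = np.array([320.0, 240.0, 1.0])   # principal point of the left camera
q = H @ p
q /= q[2]
print(q[:2])   # x shifts by fx * tan(theta); y is unchanged
```

In OpenCV of that era you would hand H to cvWarpPerspective to warp the whole image; if there is real translation between the cameras, a single homography only holds for a planar scene.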
|
|
|
|
|
I've got the book "Learning OpenCV", but it didn't solve my problem.
So how is your framework coming along?
|
|
|
|
|
The one thing I got out of the homography section was that he suggests using at least 10 images when using the chessboard technique to get a good matrix. That's more than I expected.
My framework has been one step forward, then two steps back. I had to get a new computer last year, and to get it quickly I took one with Vista preinstalled. OpenCV and Vista don't get along well. I've finally started gaining traction. After a false start, I settled on wxWidgets for the GUI and have a lot of the basics of OpenCV wrapped in C++ to hide some of the ugliness (like the relationship between CvMatrix and IplImage). Right now I'm working through object detection and obstacle avoidance for mobile robots.
|
|
|
|
|
Object detection and obstacle avoidance for mobile robots?
That's a really interesting topic. What library are you using for development? OpenCV, or another robot-vision library on SourceForge.net?
|
|
|
|
|
Right now I'm using OpenCV, particularly for acquiring the image. There's a lot there for testing ideas, but I sometimes wonder if it was really worth it. I suspect that for any production system I'll want to recode the algorithms to remove the generality that OpenCV brings and write specialized versions that work with just the format of the camera I'll be using. I've looked at a few other libraries, but so far OpenCV has the best support, and even that's minimal. I do have a friend from Homebrew Robotics here who's interning at Willow Garage, so I could get access to Gary Bradski if I really needed to. The last few days I've been looking at what it will take to visually detect the target cones for a RoboMagellan robot. How are you planning to use your stereo rig? Another friend has a two-wheel balancing robot with a stereo vision setup running into a Beagleboard.
|
|
|
|
|
My two cameras are fixed relative to each other, so I want to build a reliable algorithm based on something stable, such as the camera parameters.
In the past I've done mosaicking based on SIFT, SURF, KLT, and Harris corners, but the results of these feature-point-matching algorithms are very unstable: sometimes good, sometimes bad.
I've seen some robot stereo vision projects before; they are mostly based on feature-point matching.
|
|
|
|
|
Getting something robust enough to reliably work over a wide range of situations seems to be one of the big stumbling blocks in machine vision. Laser scanners seem to be more widely successful for mobile robots today since they provide easy range information but we're talking $$$. While a number of the DARPA Urban Challenge teams investigated vision systems, I don't think any actually used them for navigation. Willow Garage is using vision but they still have laser scanners on their robot.
|
|
|
|
|
Hey Tim,
Today I got another project from my boss:
a car safety system based on cameras.
That means fixing some cameras (two or more) on the car to protect it from objects getting too close.
It reminded me of you. I think this project is similar to your robot obstacle-avoidance project.
But my hardware is a DM6467 or DM6446 from TI (Texas Instruments).
Have you developed the obstacle-avoidance software for your robot yet?
|
|
|
|
|
That sounds similar to the OMAP35x they use in the Beagle Board, except the OMAP has an ARM Cortex-A8. I haven't gotten that far along yet. I have a level with a laser line generator that I want to detect with the camera, using the parallax shift to detect objects and get a range estimate, similar to this article[^], although I envisioned the laser and camera positions reversed so the line would always be in view on the floor. It probably wouldn't work too well outside. I've been able to detect the spot from a laser pointer fairly reliably with a webcam, but haven't gotten around to trying the line yet. I might have to spring for an optical bandpass filter; I'll try some red plastic first.
Are you planning on trying to extract depth with a single image or are you going to use your stereo rig?
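The parallax geometry behind this kind of laser ranging can be sketched with similar triangles: with the laser beam parallel to the optical axis at baseline b, a spot at range Z appears f·b/Z pixels from the image centre, so the offset shrinks as range grows. A minimal sketch; the 700 px focal length and 10 cm baseline are made-up numbers, not values from this thread:

```python
def range_from_spot(pixel_offset, focal_px, baseline_m):
    """Distance to a laser spot seen by a camera whose optical axis is
    parallel to the laser beam, offset sideways by `baseline_m`.
    Similar triangles give: offset = f * b / Z  =>  Z = f * b / offset."""
    return focal_px * baseline_m / pixel_offset

# Hypothetical numbers: 700 px focal length, 10 cm camera-to-laser baseline.
f, b = 700.0, 0.10
for offset in (140.0, 70.0, 35.0):
    print(f"spot {offset:5.1f} px from centre -> "
          f"{range_from_spot(offset, f, b):.2f} m")
```

Note the resolution worsens quadratically with range (a one-pixel error matters far more at 2 m than at 0.5 m), which is one reason a bandpass filter to localize the spot precisely is worth considering.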
|
|
|
|
|
Apologies for the late reply; I've been away on a work trip for a few days.
It sounds like your robot project is on the right track. Congratulations!
I'm now using a stereo rig to extract depth.
What's your e-mail address?
I'll send you my project solution and project report.
Your advice is welcome!
|
|
|
|
|
Is anyone aware of any existing work on finding the centerline of a font outline?
How about a suggested method?
For example, I would want the letter V to be just 2 lines instead of 7.
This will be used for engraving text.
I have tried exploding the glyph boundary into lines and then creating perpendicular lines from the midpoints, then trimming to the nearest intersection points. The results are not great.
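One naive way to sketch the midpoint idea: if a stroke's two boundary polylines can be paired up point-for-point, the centerline is just the midpoints of the pairs. This is a hedged illustration only, and the point counts and coordinates below are invented; real glyphs need a proper medial-axis transform or skeletonisation, since the pairing breaks down at junctions like the apex of a V:

```python
def stroke_centerline(side_a, side_b):
    """Midpoints of corresponding points on the two sides of a stroke.
    Assumes both sides are sampled with the same number of points and run
    in the same direction -- a strong assumption that a real medial-axis
    or skeletonisation algorithm does not need."""
    return [((ax + bx) / 2.0, (ay + by) / 2.0)
            for (ax, ay), (bx, by) in zip(side_a, side_b)]

# One stroke of a "V": the outer and inner edges of the left diagonal.
outer = [(0.0, 10.0), (2.5, 5.0), (5.0, 0.0)]
inner = [(1.0, 10.0), (3.5, 5.0), (6.0, 0.0)]
print(stroke_centerline(outer, inner))  # -> [(0.5, 10.0), (3.0, 5.0), (5.5, 0.0)]
```

For a more robust route, rasterizing the glyph and running a thinning/skeletonization pass (e.g. skimage.morphology.skeletonize in Python), then vectorizing the skeleton, avoids the pairing problem entirely, at the cost of raster artifacts near corners.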
Thanks,
Jason
|
|
|
|
|
You could find the top-left and bottom-right points of a bounding box and find the midpoints.
If you are referring to something more complex, please give more detail.
|
|
|
|
|
Hi guys,
Thanks to the colleagues who answered my previous questions. My new question is a modification of my first one.
How can I read from and display two USB webcams using MATLAB? I did it for just one USB webcam, and changed the name of the variable for the second one, but it didn't work.
Thank you in advance.
Sarkuzi
|
|
|
|
|
I don't know MATLAB well enough to answer this off the top of my head, but if you show us the code that worked for the first webcam and the code that isn't working, we might be able to offer ideas on what you might be doing wrong.
PS: Also list any error messages you are getting.
|
|
|
|
|
Hi there,
I'm working on a project to find the depth from a 3D stereo camera.
I have been advised to use "Ch Professional 6.1" with two USB cameras. Can anyone please point me to, or send me, complete code that reads from these two cameras and gives me the stereo picture? I just want to concentrate on my work... finding depth.
Many thanks in advance
Sarkuzi
|
|
|
|
|
sarkuzi wrote: send me a complete code
I bid $10,000.
Henry Minute
Do not read medical books! You could die of a misprint. - Mark Twain
Girl: (staring) "Why do you need an icy cucumber?"
“I want to report a fraud. The government is lying to us all.”
|
|
|
|
|
I raise you $5,000.
Luc Pattyn [Forum Guidelines] [My Articles]
DISCLAIMER: this message may have been modified by others; it may no longer reflect what I intended, and may contain bad advice; use at your own risk and with extreme care.
|
|
|
|
|
sarkuzi wrote: Can anyone please point me or send me a complete code that reads from these two cameras and give me the stereo picture. I just want to concentrate on my work... finding depth.
I'm not sure anything like that exists, and if it does, I kind of doubt it would be totally reliable in real-world situations without some human intervention. The problem I see is that there is still no AI algorithm that can divide an image perfectly into different objects, and object recognition is still a work in progress too. These would be needed for a computer to match the objects from the two webcams and then compare the differences between the two images to find depth. The only way it might work, I think, is if you had a scene with no gradual transitions between the colors of two different objects (this rarely exists in real-world images, but if you made an artificial scene it might work).
There are libraries available (such as the OpenCV library) that can help you a lot with getting input from webcams if you don't mind writing some code. If you are looking for a stereo webcam, I once found this on the internet: http://www.minoru3d.com/[^], but I've never actually used or tried it.
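For readers who do want to write the matching themselves: dense stereo does not actually require recognizing objects; classic block matching just compares small windows along the same scanline. A toy sum-of-absolute-differences matcher on a synthetic pair (far too slow for real images; OpenCV's StereoBM family is the practical version):

```python
import numpy as np

def disparity_sad(left, right, max_disp=16, block=5):
    """Brute-force sum-of-absolute-differences block matching.
    For each left-image pixel, slide a block leftward over the right
    image and keep the shift with the smallest SAD.  A toy version of
    what real stereo matchers do, with no sub-pixel refinement."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            best, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - r:y + r + 1,
                             x - d - r:x - d + r + 1].astype(np.int32)
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: a bright square shifted 4 px between the "cameras".
left = np.zeros((20, 40), dtype=np.uint8)
right = np.zeros((20, 40), dtype=np.uint8)
left[8:14, 24:30] = 255
right[8:14, 20:26] = 255   # same square, 4 px to the left => disparity 4
d = disparity_sad(left, right, max_disp=8)
print(d[10, 28])   # a pixel on the square's textured right edge -> 4
```

Note the matcher only works where there is texture: inside the uniform square many shifts match equally well (the aperture problem), which is exactly the "no gradual transitions" difficulty described above.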
hope this helps,
Mike
|
|
|
|
|
I've been doing the same recently in C++ with Unity3D in mind, mainly for Windows/Vista. I suggest the video capture library found at:
www.muonics.net/school/spring05/videoInput/
or the CodeVis library by Mike Ellis, also very good, but not so hot with two cameras unless you want to start fiddling with the threading and serialisation.
Once you have the camera images in memory, try passing them to OpenCV for starters (cvFindStereoCorrespondence). If you want to do it in real time you will have to look a bit further; Middlebury College has some stereo matching links to try, and you could even look at CUDA by NVIDIA, but that then becomes dependent on graphics card capability.
As soon as I've got my code working reasonably well I will either provide the code, or list the method code and cite the sources (depending on what licenses I'm restricted by at the time!).
|
|
|
|
|
I'm trying to recreate this gradient pattern with GDI+ using C#:
Metal Circle[^]
My approach is to use a PathGradientBrush and define eight points around the circle where the gradient fades from one color to the next. Here are the results I've gotten so far (ignore the blue background and how the circle is cut off at the edges):
Metal Circle Test[^]
The problem seems to be the way the colors blend; the change in shading is too abrupt. So I'm looking for clues and pointers on how to improve my results. I'm studying the Blend property of the PathGradientBrush, but it's taking some time to wrap my head around it.
|
|
|
|
|
Hi Leslie,
I am not very familiar with gradient brushes, and didn't manage to get an angular one; everything I tried resulted in radial gradients. So I tried it without gradients and came up with a squared-cosine function; try the following in the Paint handler of some Control, say a Panel:
Graphics g = e.Graphics;
// Center of the control, and a radius that fits it with a 20-pixel margin.
int cx = pan.Width / 2;
int cy = pan.Height / 2;
int r = cx;
if (r > cy) r = cy;
r -= 20;
// Half-thickness of each painted bar, at least 2 pixels.
int t = cx / 64;
if (t < 2) t = 2;
g.TranslateTransform(cx, cy);
int darkest = 80;
int lightest = 255;
// Sweep a thin bar through 90 degrees; its gray level follows cos^2,
// which gives a smooth angular gradient.  The two FillRectangle calls
// paint mirrored quadrants so the full circle gets covered.
for (int i = 0; i <= 90; i += 1) {
    double cos = Math.Cos(i * Math.PI / 180);
    int c = darkest + (int)((lightest - darkest) * cos * cos);
    Brush brush = new SolidBrush(Color.FromArgb(c, c, c));
    int j = i + 45;
    g.RotateTransform(j);
    g.FillRectangle(brush, -r, -t, 2 * r, 2 * t);
    g.RotateTransform(-j);
    j = 225 - i;
    g.RotateTransform(j);
    g.FillRectangle(brush, -r, -t, 2 * r, 2 * t);
    g.RotateTransform(-j);
    brush.Dispose();
}
BTW: the code is rather expensive; however, you could render the result into a bitmap once and reuse that.
|
|
|
|
|
Thanks, Luc. That's very close to what I'm looking for. I like your use of the cosine function.
|
|
|
|