After experimenting more with the sonar sensor on the Lego NXT over the weekend, I'm starting to agree with what Fergs was saying all along: at 30 points per second per IR sensor it might be too hard (impossible?) to do SLAM with just IR sensor(s). It seems important for a robot to do its own mapping and localization rather than relying on human-generated maps. While there's still potential in using multiple PML sensors, maybe it's better to look into adding other sensors for SLAM.

With real LiDARs being north of one grand, what other choices do we have to do SLAM on our robots? Well, there's VSLAM – a high-end ROS package that allows doing SLAM based on a stereo camera pair. One of the biggest problems there was synchronizing the two cameras so that images are taken at exactly the same time. This ability is especially important on mobile robots, considering that as little as one pixel of movement between frames can lead to a significant distortion in the depth estimate in any stereo algorithm.

I was under the impression that consumer webcams just don't have any way of syncing two cameras. That is, until 4 days ago, when Kurt Konolige posted this link as a reply to one of the questions on the ros-users mailing list. The link talks about how to sync consumer webcams:

Looking into it, it turns out that one of the webcams with the ability to sync is the PlayStation Eye. Would you like to know what's so great about that camera? Sony states that the PlayStation Eye can produce "reasonable quality video" under the illumination provided by a television set. Plus, you can get them on eBay for under $20 shipped (compare that to $600 for the cheapest synced camera). I already bought one PlayStation Eye to see if the instructions here can be followed: Once I get it, I'll post my findings and then purchase a second camera and actually try using it with ROS.

In the meantime, I have a couple of questions for Fergs: What was the motivation to write stereo_usb_cam? What is its advantage over uvc_stereo? Which driver do you use for the Microsoft LifeCam Cinema on Linux? How do you determine what's a good distance between the cameras?
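To get a feel for why that single pixel of movement matters (and, partly, for the camera-distance question), here's a quick back-of-the-envelope sketch in plain Python. The focal length, baseline and disparities are made-up illustrative numbers, not calibration values from a PlayStation Eye or LifeCam pair:

```python
# Back-of-the-envelope: how much a 1-pixel disparity error shifts the depth
# estimate in a simple pinhole stereo model, where depth Z = f * B / d.
# f_px and baseline_m are assumed values, not real calibration data.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth in meters for a given disparity in pixels (rectified pair)."""
    return f_px * baseline_m / disparity_px

f_px = 500.0        # assumed focal length in pixels (roughly a VGA webcam)
baseline_m = 0.10   # assumed 10 cm between the two camera centers

for d in (50.0, 20.0, 10.0, 5.0):
    z_true = depth_from_disparity(f_px, baseline_m, d)
    # One pixel of unmodeled motion between the two exposures shows up
    # as roughly one pixel of disparity error.
    z_skewed = depth_from_disparity(f_px, baseline_m, d - 1.0)
    err_pct = 100.0 * (z_skewed - z_true) / z_true
    print("disparity %4.1f px: %.2f m -> %.2f m (%+.0f%% error)"
          % (d, z_true, z_skewed, err_pct))
```

The numbers make the point: close up (large disparity) a one-pixel error barely matters, but a few meters out it already throws the depth off by 10-25%. It also hints at the baseline question - a wider baseline gives more disparity at the same range, so the same one-pixel error hurts less, at the cost of less overlap between the two images up close.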
From the comments:

I'm actually also very interested in that, just not sure I'll have the time to really work on PS Eye stereo. At least I'm looking for 2 cameras on ebay now. I found a blog entry about a stereo setup using multiple Playstation Eyes, it's from June, unfortunately with no further updates: The camera seems to be very popular in the multitouch community, this forum is basically full of posts about using it for these applications: From what I gathered so far, there are Windows drivers which you can use for higher fps; for Linux one has to patch the available driver ( ) to get more control over the cam settings. I'm wondering how that (in color, as I understand) is even possible over USB 2.0 (or which compression/format is used, at least), as commonly 30fps is the maximum for YUV422 images at this resolution, taking up most of the available bandwidth. RobotNV, I take it you'll also be using Linux, using ROS and all that?

Thanks for the links SK. I've been using ROS on Kubuntu for about 2 months now. Mostly doing tutorials and also connecting it to Mindstorms NXT using the nxt_ros package. I also did a few stereo experiments on ROS with a pair of Microsoft LifeCam cameras. ROS has a great visualization tool - rviz - that allows seeing the results of the stereo algorithm as a cloud of 3D points. That's really powerful. As far as Windows is concerned - look at Pi Robot. Patrick was using the Serializer with Windows/.NET for quite a while and then switched to Linux/ROS. He just wrote a pretty nice blog post about ROS here:

One warning I would have is that the PR2 not only has frame-synchronized cameras for stereo, but the cameras themselves are global shutter - that is, all pixels are exposed at the exact same time. Most webcams are going to be rolling shutter - which means even if you sync the cameras, there is a potential for skew within the image itself (especially under motion) which could cause the VSLAM not to work. In fact, it's certainly going to degrade VSLAM - the question is by how much (I'm not sure that anyone can answer that right now). As for the PML - my comments were that "it wouldn't work with *gmapping*" - that is, you may find that other SLAM methodologies can work with the PML, but the default AMCL/gmapping combo is probably not going to handle the low sensor density and short sensor range.
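To put some rough numbers on Fergs' rolling-shutter warning, here is a similar sketch. Again the focal length, readout time and rotation rates are assumed values for illustration, not measurements of any particular webcam:

```python
# Rough rolling-shutter skew estimate: the sensor reads out row by row,
# so while the frame is being read the robot keeps moving and the bottom
# rows see a slightly shifted scene compared to the top rows. For a
# camera rotating at omega rad/s, the skew across the frame is roughly
# f_px * omega * t_readout pixels.

import math

f_px = 500.0        # assumed focal length in pixels (VGA-class webcam)
readout_s = 0.030   # assumed ~30 ms to read out a full frame

for deg_per_s in (10.0, 30.0, 90.0):   # robot turning in place
    omega = math.radians(deg_per_s)
    skew_px = f_px * omega * readout_s
    print("rotation %5.1f deg/s -> roughly %.1f px of skew across the frame"
          % (deg_per_s, skew_px))
```

Compared with the one-pixel sensitivity shown earlier, even a fairly slow turn produces several pixels of skew within a single frame, which is presumably why the global-shutter cameras on the PR2 matter so much for VSLAM.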