Range Finding - Parallax Calculations
Post #1 - Dec 3 2005, 02:46 PM
Dublin Correspondent | Group: Admin | Posts: 1799 | Joined: 28-March 05 | From: Celbridge, Ireland | Member No.: 220
Just setting this up as a topic.
Rodolfo (RNeuhaus) has pointed out that he gets hard-to-interpret results when using Joe Knapp's online Parallax Calculator. I don't know the formulas behind this calculator but have been trying to work out my own. Taking (for now) the geometries as follows:

Pancam: 16 deg FOV, 300 mm separation between L and R, and a 1 deg toe-in.
Navcam: 45 deg FOV, 200 mm separation between L and R, and no toe-in.

Parallax formula: distance to object = half the camera separation / tan(angle between the point in the L and R images). (I don't think this is 100% correct for parallel cameras, but it should be good to a close first approximation provided things aren't too close.)

Sticking with the Navcam, which doesn't have any toe-in, my gut feel is that distant objects should be affected less by parallax. So taking Joe Knapp's sample calculation for the Navcam: left camera 512 pixels, right camera 500 pixels. The separation is 12 pixels, so the parallax angle is 12 * (45 deg / 1024 pixels) = 0.527344 degrees. The distance should then be 100 mm / tan(0.527344 deg). That gives me 108 m; Joe Knapp's calculator gives me 20.3 m.

The Pancam is a bit trickier because of the toe-in angle. I need to think a bit more about whether my default assumption, that it can be compensated for by simply subtracting 64 pixels from the right image position, is valid or not.

Anyway, something is amiss in either my numbers or Joe Knapp's calculator. Anyone out there able to throw some light on this?
Post #2 - Dec 5 2005, 03:47 PM
Senior Member | Group: Members | Posts: 1636 | Joined: 9-May 05 | From: Lima, Peru | Member No.: 385
Good to hear from you. I already wrote a note to the Pancam parallax calculator's author last Friday. Up to now, there is no news from him.

About a manual measurement, the document (Bell's) says that the Pancam resolution is designed for ranges up to 100 meters, which is the designated range of roving per day. Its resolution is about 2.8 cm per pixel at a range of 100 meters. This leads me to think that the scale per pixel is not constant but depends on range: at closer range the distance covered by a pixel would be smaller than 2.8 cm, and at 100 meters it would reach 2.8 cm per pixel. Isn't that right?

On the other hand, I remember that someone was able to calculate the distance from Spirit to the slope obstacles of Haskin Ridge with some kind of Pancam parallax software. Which one is that?

Rodolfo
Post #3 - Dec 5 2005, 07:53 PM
Dublin Correspondent | Group: Admin | Posts: 1799 | Joined: 28-March 05 | From: Celbridge, Ireland | Member No.: 220
Rodolfo,
The author (Joe Knapp) posts here as jmknapp, I think. It might be worth sending him a PM.

I'm still getting the impression that you don't understand the fundamental geometry of the setup. If I'm mistaken, apologies, but I think I need to step through this carefully to be certain you understand the basic principles.

Each camera covers a fixed field of view. That means that if you were to draw a triangle looking down on the scene, the angle between the left-hand side of the image, the camera, and the right-hand side of the image is 16 deg for the Pancams and 45 deg for the Navcams. So the field of view (from extreme left to extreme right) of any of these images always spans the same angle. This means that as you move further away from the camera, the physical dimension covered by an individual pixel gets bigger. The exact physical dimension (D) of a Pancam pixel at a given distance (X) is given by:

D = X * tan(8 deg) / 512

See this web page for a useful diagram and an explanation of this formula. We use half the FOV and half the number of pixels of a full image so we can work with a simple right-angled triangle. So the pixels in a Pancam image, like those of any other camera with a 16 deg FOV taking a 1024-pixel image, have the following dimensions (in cm):

Range (m)   Pixel size (cm)
1           0.027449382
5           0.137246909
10          0.274493818
20          0.548987636
50          1.372469089
70          1.921456724
100         2.744938178

I have to reiterate that these numbers are fundamentally related to the FOV of the camera, and they apply equally to any camera anywhere with a 16 deg FOV. The numbers for the Navcam are different since it has a wider FOV. The calculation for finding the range of a specific object uses the same basic idea, but in reverse (more or less).
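The table above can be reproduced in a couple of lines; a quick sketch using the half-FOV right-angled triangle just described (the function name is mine):

```python
import math

def pixel_size_cm(range_m, fov_deg=16.0, pixels=1024):
    """Physical width covered by one pixel at a given range, in cm.

    Uses the half-FOV triangle from the post: D = X * tan(FOV/2) / (pixels/2).
    """
    return range_m * math.tan(math.radians(fov_deg / 2)) / (pixels / 2) * 100

# Reproduce the Pancam table: ~2.74 cm per pixel at 100 m
for r in (1, 5, 10, 20, 50, 70, 100):
    print(f"{r:>4} m : {pixel_size_cm(r):.9f} cm")
```

Passing `fov_deg=45.0` gives the corresponding Navcam pixel scales.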
From a pair of left and right images you can read off the parallax angle of a feature by counting how many pixels it appears to move between the two images, since each pixel always corresponds to a fixed angle (16/1024 = 0.015625 deg for the Pancam). Since you also know the distance between the two cameras, you are once again dealing with a triangle where you know all the angles and the length of one side. From that you can work out the missing pieces, in particular the range to the object. My back-of-the-envelope drawings of the parallel situation (for the Navcam) lead me to believe that the range should be closely approximated by:

Range = 0.1 m / tan(no. of pixels * 0.043945 deg)   (Navcam)
Range = 0.15 m / tan(no. of pixels * 0.015625 deg)  (Pancam)

I'd welcome some comments on this because I suspect it is not completely accurate, but I'm pretty sure that even if that is the case, the error is going to be small provided the range is above a couple of metres. The Pancam case is complicated by the 1 deg toe-in, but I think that simply adding 64 pixels (1024/16 = the number of pixels in 1 deg of a Pancam image) to the measured value for the right-hand image should be sufficient.
Post #4 - Dec 5 2005, 11:40 PM
Junior Member | Group: Members | Posts: 90 | Joined: 13-January 05 | Member No.: 143
QUOTE (helvick @ Dec 3 2005, 06:46 AM): ... So taking Joe Knapp's sample calculation for the Navcam: left camera 512 pixels, right camera 500 pixels. The separation is 12 pixels, so the parallax angle is 12 * (45 deg / 1024 pixels) = 0.527344 degrees. The distance should then be 100 mm / tan(0.527344 deg). That gives me 108 m; Joe Knapp's calculator gives me 20.3 m.

I think you're off by a factor of two in your formula and a factor of 10 in your arithmetic. I get 200 mm / tan(0.527344 deg) = 21.7 m.
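mars_armer's figure is easy to reproduce; a quick sketch with the baseline and per-pixel angle taken from the thread (the function name is mine):

```python
import math

def navcam_range_m(disparity_px, baseline_m=0.2, fov_deg=45.0, pixels=1024):
    """Range from pixel disparity for the parallel Navcam pair.

    Small-angle form used in the thread: range = baseline / tan(disparity
    * per-pixel angle), with 45 deg / 1024 px = 0.043945 deg per pixel.
    """
    angle = math.radians(disparity_px * fov_deg / pixels)
    return baseline_m / math.tan(angle)

print(navcam_range_m(512 - 500))  # ~21.7 m, matching mars_armer's figure
```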
Post #5 - Dec 6 2005, 03:17 AM
Senior Member | Group: Members | Posts: 1636 | Joined: 9-May 05 | From: Lima, Peru | Member No.: 385
Helvick, thanks for the response. However, I still don't fully grasp the MER Stereo Parallax Calculator, and below I have some questions to clarify.

About the formula 100 mm / tan(0.527344 deg): what is the 100 mm (millimeters)? Is it the distance you were talking about?

tan(@) = opposite/adjacent --> adjacent (distance) = tan(@) / opposite

For the Navcam, according to the material you showed me about the formula:
The view has an angle: @
Half the view angle is: @/2
The width of the view is: l
Half the width of the view is: l/2
The adjacent distance is: d (vertical height)
Then the formula tan(@/2) = (l/2)/d gives the distance from the viewer: d = tan(@/2)/(l/2)

Let's suppose a point of interest in the picture, a stone. Left Navcam: 580 pixels (GIMP tool measurement); right Navcam: 590 pixels. According to the parallax calculator: object distance 8.92 m, one-pixel error 0.037 m, object dimension 0.2 cm. According to the formula: the separation is 590 - 580 = 10 pixels, so the @ angle is (10 pixels * 45 degrees) / 1024 pixels = 0.439453 degrees, and distance = "opposite side, not known, how to obtain it?" x tan(0.439453). I guess that the opposite side must always be 100 meters?

Rodolfo

P.D. jamescanvin talked about measuring distances with the parallax calculator in the Haskin Ridge topic. How did he measure it? See that post.
Post #6 - Dec 6 2005, 04:24 AM
Senior Member | Group: Moderator | Posts: 2262 | Joined: 9-February 04 | From: Melbourne - Oz | Member No.: 16
Just jumping in as I'm mentioned:
QUOTE (RNeuhaus @ Dec 6 2005, 02:17 PM): About the formula 100 mm / tan(0.527344 deg): what is the 100 mm (millimeters)?

As mars_armer pointed out, that should be 200 mm, which is the distance between the Navcams. I guess helvick knew that and was using half the separation distance to make a true right-angled triangle (his mistake was then not to also divide the angle by two!).

QUOTE (RNeuhaus @ Dec 6 2005, 02:17 PM): Let's suppose a point of interest in the picture, a stone. Left Navcam: 580 pixels; right Navcam: 590 pixels... I guess that the opposite side must always be 100 meters?

As above: opposite side = distance between the cameras = 200 mm, which gives about 26.0 m using 0.2 m / tan(0.439453 deg). Why the discrepancy with your value of 8.92 m from the calculator? Well, it looks like you were using the Pancam setting! When I put in those values I get: object distance 24.4 m, one-pixel error 1.234 m. All seems consistent to me.

QUOTE (RNeuhaus @ Dec 6 2005, 02:17 PM): P.D. jamescanvin talked about measuring distances with the parallax calculator in the Haskin Ridge topic. How did he measure it?

Just using the Parallax Calculator like above, nothing more. Measure the pixel positions of the same object in the L & R frames and feed the numbers to the program.

James.
Post #7 - Dec 6 2005, 07:40 AM
Senior Member | Group: Moderator | Posts: 4280 | Joined: 19-April 05 | From: .br at .es | Member No.: 253
Rodolfo, just an additional reminder:
You will see that the farther the distance to a feature, the bigger the "one-pixel" error. When measuring a range via parallax you must be as *exact* as possible with the pixel positions of the feature in the L and R images.
Post #8 - Dec 6 2005, 10:31 AM
Dublin Correspondent | Group: Admin | Posts: 1799 | Joined: 28-March 05 | From: Celbridge, Ireland | Member No.: 220
QUOTE (jamescanvin @ Dec 6 2005, 05:24 AM): ... As mars_armer pointed out, that should be 200 mm, which is the distance between the Navcams. I guess helvick knew that and was using half the separation distance to make a true right-angled triangle (his mistake was then not to also divide the angle by two!).

Err, yes. I wasn't paying attention there. Apologies for my confusion; I appear to have had enough coffee by the time I made my later post and didn't repeat the error. I still seem to have some problems with the parallax calculator site on my main machine. It appears to cache results strangely, which is another reason why I was getting confused. Apart from forgetting trig 101, that is. Oh well.

I'm still a bit confused, though, because the parallax calculator page doesn't generally agree with my numbers. I found Joe Knapp's original post regarding the Pancams. It seems that I misunderstood the meaning of the "~1 deg" toe-in; again, not thinking clearly. Both cameras have the toe-in angle, so the observed parallax must be adjusted by twice the amount I referred to (~64 pixels, based on a 1 deg toe-in). There's more to it than that though:

QUOTE
Based on the left and right images, there's a parallax of 92 pixels at the rock. The distance equation for Opportunity's pancam is:

D = 1071/(130 - N)
D = 1071/(130 - 92) = 28.2 meters

The width of the rock is 54 pixels. At 0.28 mrad/pixel (pancam spec) that's 15 mrad. The width then would be 28.2*0.015 = 0.42 m. The width of the bounce mark is 223 pixels, or 1.8 m.

Interesting.
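The arithmetic in that quote can be replayed directly; a sketch using only the constants quoted above (the function names are mine, and 1071/(130 - N) is Joe Knapp's empirical fit for Opportunity's Pancam, not an official calibration):

```python
def pancam_distance_m(parallax_px):
    """Joe Knapp's empirical range equation for Opportunity's Pancam,
    with parallax_px the measured disparity in pixels."""
    return 1071 / (130 - parallax_px)

def feature_size_m(size_px, distance_m, mrad_per_px=0.28):
    """Physical size from angular size: pixels * IFOV (0.28 mrad/px) * range."""
    return size_px * mrad_per_px * 1e-3 * distance_m

d = pancam_distance_m(92)
print(d)                      # ~28.2 m to the rock
print(feature_size_m(54, d))  # ~0.43 m rock width (the quote rounds to 0.42)
print(feature_size_m(223, d)) # ~1.8 m bounce mark
```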
Post #9 - Dec 6 2005, 11:57 AM
Senior Member | Group: Members | Posts: 1465 | Joined: 9-February 04 | From: Columbus OH USA | Member No.: 13
My parallax calculator could sure use some improvements. It's based on a simplified geometric model and neglects complications like actual (vs. designed) pointing and toe-in of the cameras, non-linear optical effects, etc. There may be some operational issues too like the "strange caching of results" mentioned above.
More recently I found the NAIF file specifying the camera geometry, which makes some parameters clearer, but others less so, as I don't have any experience with the CAHVOR model the experts use to define the camera geometry. The latest "frames definition kernel" for MER2 is at:

ftp://naif.jpl.nasa.gov/pub/naif/MER/kernels/fk/mer2_v09.tf

This is a text file that can be opened in an editor. It has both text descriptions of the geometry and parameter/value pairs. Here is one snippet:

QUOTE
Nominal PMA [primary mast assembly] camera orientations are such that NAVCAM left and right boresights are parallel to each other and the PMA head +X axis, while the PANCAM left and right boresights are "toe"ed in by 1 degree toward the PMA head +X axis. In order to align the PMA head frame with the camera frames in this nominal orientation, it has to be rotated by +90 degrees about Y, then about X by non-zero "toe"-ins for the PANCAM (-1 degree for the left camera and +1 degree for the right camera) and by zero "toe"-ins for NAVCAM, and finally by -90 degrees about Z (to line up the Y axis with the vertical direction). The following sets of keywords should be included in the frame definitions to provide this nominal orientation (provided for reference only):

TKFRAME_-254121_AXES   = ( 2, 1, 3 )
TKFRAME_-254121_ANGLES = ( -90.000, 1.000, 90.000 )
...etc.

So that makes it fairly clear that both boresights are toed in, for a total of 2 degrees (1 degree left and right) by design. The PANCAM X axis is the axis looking out from the Pancams (e.g., towards the horizon), the Y axis is the left/right direction, and the Z axis is the vertical. The SPICE kernel developers have taken ASCII art to a wonderful level; their diagram of the PANCAM frame [an ASCII-art side view of the rover showing the UHF antenna, HGA, PMA and IDD, with rover axes Xr forward, Yr to the side and Zr down - mangled beyond recovery in this copy] is worth a look. Clear now?
The 1-degree toe-in per camera is by design, but evidently, if I read the frames kernel right, reality is quite a bit different. For the left PANCAM we have:

QUOTE
The actual MER-2_PANCAM_LEFT_F1 frame orientation provided in the frame definition below was computed using the CAHVOR(E) camera model file, 'MER_CAL_176_SN_104_F_1.cahvor'. According to this model the reference frame, MER-2_PMA_HEAD, can be transformed into the camera frame, MER-2_PANCAM_LEFT_F1, by the following sequence of rotations: first by 90.05088205 degrees about Y, then by -0.65914547 degrees about X, and finally by -90.31522256 degrees about Z. The frame definition below contains the opposite of this transformation because the Euler angles specified in it define rotations from the "destination" frame to the "reference" frame.

\begindata

FRAME_MER-2_PANCAM_LEFT_F1 = -254121
FRAME_-254121_NAME         = 'MER-2_PANCAM_LEFT_F1'
FRAME_-254121_CLASS        = 4
FRAME_-254121_CLASS_ID     = -254121
FRAME_-254121_CENTER       = -254
TKFRAME_-254121_RELATIVE   = 'MER-2_PMA_HEAD'
TKFRAME_-254121_SPEC       = 'ANGLES'
TKFRAME_-254121_UNITS      = 'DEGREES'
TKFRAME_-254121_AXES       = ( 2, 1, 3 )
TKFRAME_-254121_ANGLES     = ( -90.051, 0.659, 90.315 )

\begintext

So the toe-in rotation about the X axis is only about 0.66 degree rather than 1.0 degree; moreover, there are fractional-degree deviations of the Y and Z axes. Then there are the CAHVOR complications. The CAHVOR model is specified in the instrument kernels (the left and right PANCAM instrument kernels). At this point I throw up my hands!
Post #10 - Feb 11 2006, 04:02 PM
Member | Group: Members | Posts: 656 | Joined: 20-April 05 | From: League City, Texas | Member No.: 285
QUOTE (jmknapp @ Dec 6 2005, 06:57 AM): ... Then there are the CAHVOR complications. ... At this point I throw up my hands!

Perhaps I can shed a little light on this. I pulled up the ftp://naif.jpl.nasa.gov/pub/naif/MER/kernels/fk/mer2_v09.tf file and looked at the Pancam reference frame orientations:

PancamLeft:
TKFRAME_-254128_AXES   = ( 2, 1, 3 )
TKFRAME_-254128_ANGLES = ( -90.051, 0.659, 90.315 )

PancamRight:
TKFRAME_-254131_AXES   = ( 2, 1, 3 )
TKFRAME_-254131_ANGLES = ( -89.946, -1.376, 90.400 )

I worked out the combined rotations, rotating (for PancamL) as specified: first -90.051 degrees about the Y axis, then 0.659 degrees about the X axis, then 90.315 degrees about the Z axis, in that order. I then did the same for PancamR and worked out the relative orientation between them (I did all this using quaternions, as they're easier and more accurate than 3x3 matrices). The net result is that the relative orientation between the left and right Pancams is 2.039451 degrees about the unit axis vector (-0.045257, -0.998119, 0.041349). Not exactly the expected 2 degrees, but darn close.

Incidentally, in the CAHVOR data from the instrument kernels there is a value referred to as the CAHVOR_QUAT. From this quaternion I extract a rotation of 181.908954 degrees about axis vector (0.046757, 0.000780, 0.998906), which is surprisingly close to 2 degrees as well (if you subtract off 180 degrees). However, both cameras have EXACTLY the same CAHVOR_QUAT value, so I'm not clear on what it refers to.

I pulled up a paper discussing the CAHVOR camera model, but it made no mention of CAHVOR_QUAT, so I don't know what rotation that parameter refers to. I'm still not sure how to go from this to photogrammetry, or I'd whip up some software.
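The quaternion result can be spot-checked with plain rotation matrices and no external libraries. A sketch assuming the "first Y, then X, then Z" order described above (the exact rotation axis depends on convention choices, but the relative angle between the two frames does not):

```python
import math

def rx(a):
    """Rotation matrix about X by a degrees."""
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    """Rotation matrix about Y by a degrees."""
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    """Rotation matrix about Z by a degrees."""
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def frame(ay, ax, az):
    """Compose: first rotate about Y, then X, then Z (axes order 2, 1, 3)."""
    return matmul(rz(az), matmul(rx(ax), ry(ay)))

left = frame(-90.051, 0.659, 90.315)    # PancamLeft angles from the kernel
right = frame(-89.946, -1.376, 90.400)  # PancamRight angles from the kernel

# Relative rotation between the frames; transpose == inverse for rotation
# matrices, and trace(R) = 1 + 2*cos(theta) recovers the rotation angle.
right_t = [[right[j][i] for j in range(3)] for i in range(3)]
rel = matmul(left, right_t)
theta = math.degrees(math.acos((rel[0][0] + rel[1][1] + rel[2][2] - 1) / 2))
print(theta)  # ~2.04 degrees, matching the quaternion result above
```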
Post #11 - Feb 14 2006, 09:49 PM
Senior Member | Group: Members | Posts: 1465 | Joined: 9-February 04 | From: Columbus OH USA | Member No.: 13
Interesting that taking all the rotations into account gives the correct figure of 2 degrees. Kudos for the quaternion math!
Post #12 - Mar 2 2006, 01:51 AM
Member | Group: Members | Posts: 656 | Joined: 20-April 05 | From: League City, Texas | Member No.: 285
I have just completed a new 3D MER RangeFinder:
http://www.clarkandersen.com/RangeFinder.htm

The new RangeFinder uses the CAHVOR camera models for the Pancams and Navcams, takes 2D coordinates from the image pairs, and uses photogrammetry to calculate a 3D distance, an error estimate, and even the coordinates of the point, albeit in the camera reference frame. If there is sufficient interest I may add a batch processing utility at some point.
Post #13 - Mar 2 2006, 12:21 PM
Senior Member | Group: Members | Posts: 1465 | Joined: 9-February 04 | From: Columbus OH USA | Member No.: 13
QUOTE: I have just completed a new 3D MER RangeFinder:

Very nice, and a lot of work I bet! I notice it works out to 100 m or so; my calculator really went haywire at the longer ranges.
Post #14 - Mar 2 2006, 01:31 PM
Member | Group: Members | Posts: 656 | Joined: 20-April 05 | From: League City, Texas | Member No.: 285
QUOTE: Very nice, and a lot of work I bet! I notice it works out to 100 m or so; my calculator really went haywire at the longer ranges.

Thank you. A surprising amount of the work involved in creating this application was simply tracking down the details of the CAHVOR camera model; the vast majority of the relevant papers and web pages were more concerned with generating the model parameters than with using them. Frustratingly, rather a lot of the needed papers (found via Google) were dead links on the JPL robotics site; it seems they've decided to isolate a lot of material from the web lately.
Post #15 - Mar 2 2006, 01:50 PM
Founder | Group: Chairman | Posts: 14457 | Joined: 8-February 04 | Member No.: 1
I know JB's out of the office at the moment, but I'm sure he'd help you out if you needed more info - his email address is easily googlable.
Doug |
Post #16 - Mar 2 2006, 03:29 PM
Member | Group: Members | Posts: 656 | Joined: 20-April 05 | From: League City, Texas | Member No.: 285
QUOTE: I know JB's out of the office at the moment, but I'm sure he'd help you out if you needed more info - his email address is easily googlable. Doug

I was actually on the verge of resorting to emailing questions yesterday, about the time I stumbled upon a paper which provided an algorithm for transforming an image coordinate into a vector directed out from the camera's principal point (using the CAHVOR model), which was all I needed to get this thing working. Otherwise I'd found lots of references to the inverse transformation. I was also giving Mathematica a workout trying to figure it out myself.

At this point the camera model part of the application seems to be working well, and the photogrammetry results speak for themselves. It might be nice to spot-check it with some calibrated images, but there's not a lot of room for error in the approach I've used. I'm pretty jazzed about adding more capabilities... I see what people are doing in terms of projecting images using the camera pointing angles, and it seems that photogrammetry capability would be complementary. I'm looking forward to seeing what transpires.
Post #17 - Mar 2 2006, 10:14 PM
XYL Code Genius | Group: Members | Posts: 138 | Joined: 23-November 05 | Member No.: 566
Thanks, algorimancer.
I was using these simple formulas for the Navcam (results are very close to Joe Knapp's calculator):

CODE
//NAVCAM
object_distance = 0.2*1024*14.672/(12.29*(Xl-Xr));
object_dimension = 0.2*size/(Xl-Xr);

But I noticed some distortions, so I was looking for the CAHVOR model data too... so thanks for the links.
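For reference, these constants reproduce Joe Knapp's sample value from the opening post (12-pixel Navcam disparity, roughly 20 m). A Python transcription, with the 14.672 and 12.29 constants taken from the post as-is (presumably focal length and detector width in mm, though that is my guess):

```python
def navcam_distance_m(xl, xr):
    """MaxSt's Navcam range formula: 0.2 m baseline; 14.672 and 12.29
    are quoted constants (assumed focal length and CCD width in mm)."""
    return 0.2 * 1024 * 14.672 / (12.29 * (xl - xr))

def object_dimension_m(size_px, xl, xr):
    """Physical size of a feature spanning size_px pixels at that range."""
    return 0.2 * size_px / (xl - xr)

print(navcam_distance_m(512, 500))  # ~20.4 m for the 12-pixel sample case
```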
Post #18 - Mar 3 2006, 01:48 AM
Member | Group: Members | Posts: 656 | Joined: 20-April 05 | From: League City, Texas | Member No.: 285
QUOTE: Thanks, algorimancer. I was using these simple formulas for the Navcam (results are very close to Joe Knapp's calculator)... so thanks for the links.

You're welcome.
Post #19 - Mar 3 2006, 02:02 AM
Member | Group: Members | Posts: 656 | Joined: 20-April 05 | From: League City, Texas | Member No.: 285
While it's only a day since the initial announcement, I've just implemented a few improvements to my MER RangeFinder application. It's now version 1.1
1. Integrated the CAHVOR parameters for Opportunity in addition to Spirit. It turns out to be noticeably more accurate with the correct rover selected.
2. Added a batch processing option, to permit processing files of rows of pairs of pixel coordinates.

It occurs to me that the ImageJ application might be very helpful in quickly acquiring lots of pixel coordinates, perhaps combined with Excel to organize the data files. I looked into providing separate CAHVOR calibrations for each Pancam filter, but there doesn't seem to be any difference in the parameters between the filters, so I won't bother.

Here's the link: http://www.clarkandersen.com/RangeFinder.htm

Enjoy. Please let me know of any problems or suggestions; I've done a fair amount of testing, but it hasn't been exhaustive.
Post #20 - Mar 3 2006, 03:59 AM
XYL Code Genius | Group: Members | Posts: 138 | Joined: 23-November 05 | Member No.: 566
Batch Process seems to work fine, that's very useful.
But could you explain "position", please? Where is the center? |
Post #21 - Mar 3 2006, 08:36 AM
Senior Member | Group: Moderator | Posts: 4280 | Joined: 19-April 05 | From: .br at .es | Member No.: 253
Thanks for this new tool, algorimancer.
QUOTE: It occurs to me that the ImageJ application might be very helpful in quickly acquiring lots of pixel coordinates, perhaps combined with Excel to organize the data files.

Do you mean (semi-)automatically picking pairs of pixel coordinates from both the L & R images? That would be great! I'm not familiar with ImageJ, and its home page gives me no hint about such a capability. Any help?
Post #22 - Mar 3 2006, 08:40 AM
Founder | Group: Chairman | Posts: 14457 | Joined: 8-February 04 | Member No.: 1
Then once you have a fairly populated array of values, you can generate a mesh... and put the image back on it... and bingo, the full-on 3D navigable environment we've been dying for.
Doug |
Post #23 - Mar 3 2006, 10:23 AM
XYL Code Genius | Group: Members | Posts: 138 | Joined: 23-November 05 | Member No.: 566
Actually, that's exactly what I'm playing with right now...

I found this nice program for automatic point selection. It uses the same method as Autostitch, so it's quite good: http://www.cs.ubc.ca/~lowe/keypoints/

I've already got some quick-and-dirty 3D models from the Navcams, but the model from one pair of images is just not enough; it looks like a small patch. What I really want is to stitch these slices into the whole pizza... but that's kinda hard.
Post #24 - Mar 3 2006, 11:58 AM
Founder | Group: Chairman | Posts: 14457 | Joined: 8-February 04 | Member No.: 1
Alternatively, just write a converter for the released data which includes meshes.

Oh yeah - hint hint.

Doug
Post #25 - Mar 3 2006, 12:15 PM
Member | Group: Members | Posts: 656 | Joined: 20-April 05 | From: League City, Texas | Member No.: 285
QUOTE: Batch Process seems to work fine, that's very useful. But could you explain "position", please? Where is the center?

The position is given in the Pancam masthead reference frame (feeding in a pair of image-center coordinates, (512.5, 512.5) for each camera, is illustrative), and the center/origin is set by the application as the midpoint between the two Pancams' principal points (essentially the focal points). This means that you can generate a set of points in a consistent reference frame for one stereo pair of images. If you do this for multiple pairs of images you'll have to apply a rotation to the resulting points (and possibly a translation if the origin doesn't coincide with the masthead's center of rotation). One solution would be to have at least 3 overlapping points between them, figure out the absolute orientation transformation, and apply it as needed. Or perhaps it's adequate to get the Pancam orientation corresponding to the images and apply that rotation. I can provide a capability to transform coordinates sometime in the next couple of days, I think. Meanwhile, there's an open-source (free) application out there called Blender, which I haven't used, but which apparently is popular among the 3D animation crowd and may or may not be helpful.

To answer Tesheiner's question ("I'm not familiar with ImageJ, and its home page gives me no hint about such a capability. Any help?"): I believe that once you get ImageJ running you'll find a "measure" option under one of the menus. My recollection is that measure allows you to capture pixel coordinates, in addition to measuring distances. On the other hand, the application MaxSt mentioned above may be more targeted at capturing coordinates from image pairs and is worth a look.

Generating meshes... possibly Blender would be helpful. Other ideas?
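A minimal sketch of the "apply the Pancam orientation rotation" idea for merging point sets from two pointings, assuming the mast rotates about the vertical z axis (the function name and axis convention are mine, not from the application):

```python
import math

def rotate_about_z(point, azimuth_deg):
    """Rotate a masthead-frame (x, y, z) point about the vertical axis.

    If two stereo pairs were taken with the masthead at different
    azimuths, rotating each pair's points by that pair's azimuth puts
    them all in one common frame (ignoring any translation offset).
    """
    a = math.radians(azimuth_deg)
    x, y, z = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# Example: a point from a pair shot 30 degrees off from a reference
# azimuth, rotated back into the reference frame
p_common = rotate_about_z((-5.0, 0.0, -0.3), -30.0)
print(p_common)
```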
Post #26 - Mar 3 2006, 02:42 PM
Senior Member | Group: Members | Posts: 1465 | Joined: 9-February 04 | From: Columbus OH USA | Member No.: 13
QUOTE: If you do this for multiple pairs of images you'll have to apply a rotation to the resulting points (and possibly a translation if the origin doesn't coincide with the masthead's center of rotation). One solution would be to have at least 3 overlapping points between them, figure out the absolute orientation transformation, and apply it as needed. Or perhaps it's adequate to get the Pancam orientation corresponding to the images and apply that rotation.

I think that, at least conceptually, once you have the position of a point in the masthead frame as your app provides, the SPICE kernels released to the NAIF website can be used to transform it to whatever coordinate system is desired. In this case I guess one would want the Mars body-fixed frame IAU_MARS (essentially latitude-longitude-altitude). So if one had a point in the masthead frame MER-1_PMA_HEAD stored in a vector pmaxyz, these would be the SPICE calls:

QUOTE
// load SPICE kernels for MER1
furnsh_c("mer1_surf_roverrl.bsp");  // rover position
furnsh_c("mer1_struct_ver10.bsp");  // rover structures position
furnsh_c("mer1_surf_pma.bc");       // rover PMA pointing

// et is ephemeris time
spkezr_c("MER-1_PMA_HEAD", et, "IAU_MARS", "LT+S", "MARS", headbf, &lt);  // get PMA head position headbf, Mars body-fixed
pxform_c("MER-1_PMA_HEAD", "IAU_MARS", et, pma2bf);  // generate pma -> body-fixed rotation matrix pma2bf
mxv_c(pma2bf, pmaxyz, marsiauxyz);  // multiply rotation matrix by the pma vector to get the body-fixed pma vector
vadd_c(headbf, marsiauxyz, pointbf);  // add to the head position to get the point position, body-fixed
reclat_c(pointbf, &radius, &longitude, &latitude);  // convert to radius, lon, lat

So most of the automagic would be provided by NASA. The SPICE kernels must be a lot of work for them to generate, with rover slippage complicating matters, but I think they may do a lot of the eyeball work you refer to and release updated SPICE kernels daily.

To wit, this is from the SPK (position kernels) README:

QUOTE
mer[1,2]_surf_roverrl_YYMMDDHRMN - MER-1/2 rover and site position SPK file generated daily from rover TLM combined with the latest bundle-adjustment position input from Dr. Ron Li. The latest of these files supersedes all previous files.

The latest such file for MER1 is mer1_surf_roverrl.bsp (currently 54 MB), released yesterday, covering the rover position from 25JAN2004 through 02MAR2006. The positions of various fixed points on the rover (such as the PMA head) are given in another file, mer1_struct_ver10.bsp. The file mer1_surf_pma.bc (currently 21 MB) gives the pointing info for the PMA, also updated daily.

That said, I haven't actually tried this for MER...
Post #27 - Mar 3 2006, 04:53 PM
Member | Group: Members | Posts: 656 | Joined: 20-April 05 | From: League City, Texas | Member No.: 285
QUOTE: ... masthead frame MER-1_PMA_HEAD stored in a vector pmaxyz, these would be the SPICE calls ... That said, I haven't actually tried this for MER...

Is SPICE available via the web? I was reading through the user guide a few days ago and it had some interesting options, but I saw no mechanism for remote access. Fundamentally, simply knowing the transformation from the Pancams' principal points to the masthead's center of rotation would be of tremendous assistance when combined with the camera orientation parameters.
|
|
|
Mar 3 2006, 07:10 PM
Post
#28
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
The position is oriented in the pancam masthead reference frame (using a pair of center coords is illustrative, (512.5,512.5) for each camera), and the center/origin is set by the application as the midpoint between the two pancams' principal points (essentially the focal points). That's what I was hoping for... but I'm still a bit confused. For example, I give coordinates (465,402),(453,402), and I get position (-19.7276,-0.0000,-0.3330). What does the second number mean? Why is it zero? The original point is not on center or anything... |
|
|
|
Mar 3 2006, 08:44 PM
Post
#29
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
That's what I was hoping for... but I'm still a bit confused. For example, I give coordinates (465,402),(453,402), and I get position (-19.7276,-0.0000,-0.3330). What does the second number mean? Why is it zero? The original point is not on center or anything... Well, it's an (x,y,z) coordinate describing a position in space. In this case the x axis is positive in the direction behind the camera, the y axis is positive to the left, and the z axis is positive towards the ground. This forms a right-handed coordinate system. Personally I wouldn't worry too much about what the numbers in the coordinates mean in and of themselves, but rather focus on how the vertices relate to each other. For example, I see a rock in a pair of Spirit navcam images, find that the position of a point on its left side is about (-2.133,-0.056,-0.468) and that of a point on its right is about (-2.097,-0.204,-0.418), subtract those and calculate the magnitude of the resulting vector, and I find that the rock is about 16 centimeters across (a calculator that handles vectors makes this a lot easier). Better yet, sample a variety of points in a scene, graph the resulting vertices in a 3D grapher, and get a better sense of the topography and how it varies from different perspectives. Eventually it would be nice to be able to do this with meshes and overlay the images on the meshes (Photomodeler can do this, but is beyond my price range). Having said all that and re-reading your question, I see that I've gone off on a tangent a bit; sorry about that. Why is the middle coordinate 0? I notice that the x-coordinate of each of your pixels is near the center of the images. Bear in mind that the x-coordinate of the pixel is an indicator of the y axis in the coordinate system, and it appears that you have a point whose y-coordinate just happens to be aligned with the midpoint of the two cameras' principal points.
EDITED: After further consideration, it appears that I muddied the waters a bit when I said that the origin was the midpoint between the cameras' principal points. The cameras' principal points are given by the CAHVOR parameter C, which in one paper is described as "the camera center vector C from the origin of the ground coordinate system ... to the camera perspective center". In other words, neither camera's C is at coordinate (0,0,0), nor is the midpoint between them. For Spirit, the left and right C's are (0.382152,0.149178,-1.246381) and (0.443429,-0.142099,-1.246638), the midpoint between them (which is what my application measures the distance to) is at (0.412791,0.00354,-1.24651), and the distance between them is 0.297653 [30 centimeters]. This is not very useful as an origin, and I have the impression that the coordinate frame is different from that resulting from the image-to-world transformation, so I'm going to make a modification to the application to make the origin coincide with the midpoint between the cameras' perspective centers. I'm also inclined to modify the frame of the resultant coordinates to a more user-friendly system in which the x axis is positive to the right, the y axis is positive towards the sky, and the z axis is positive towards the rear of the camera. Yes, this implies that the z coordinate will be negative in the viewing direction, but that is necessary to preserve a right-handed coordinate system. None of this will affect the distances measured, but it will be easier to understand in terms of the images and the x-y-z coordinate system we all learned in school. [Done. Version 1.2 has these updates; the origin is the midpoint between the camera perspective centers, new x is old -y, new y is old -z, new z is old x. Ranges are unchanged, as is the relative orientation between points.] If anyone disapproves, we can discuss it. This is a quick and easy change to make to the application. |
|
|
|
Mar 3 2006, 08:55 PM
Post
#30
|
|
|
Senior Member ![]() ![]() ![]() ![]() Group: Moderator Posts: 4280 Joined: 19-April 05 From: .br at .es Member No.: 253 |
To answer Tesheiner's question, "I'm not familiar with ImageJ, and its home page gives me no hint about such capability. Any help?", I believe that once you get ImageJ running you'll find a "measure" option under one of the menus. My recollection is that measure allows you to capture pixel coordinates, in addition to measuring distances. On the other hand, the application that MaxSt mentioned above may be more targeted to capturing coords from image pairs and worth a look. I had a look at the application mentioned by MaxSt, but it doesn't seem to solve my "problem". I was thinking of something like what is available in PTGui to match the same pixels/features in two images. The idea is to automate a bit the currently manual process of identifying the (x,y) coordinates of e.g. a rock in both L and R images. |
|
|
|
Mar 3 2006, 10:23 PM
Post
#31
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
so I'm going to make a modification to the application to make the origin coincide with the midpoint between the cameras' perspective centers. I'm also inclined to modify the frame of the resultant coordinates to a more user-friendly system in which the x axis is positive to the right, the y axis is positive towards the sky, and the z axis is positive towards the rear of the camera. That would be great! By the way, I forgot to mention that I'm using settings "Spirit" and "Navcam". Eventually it would be nice to be able to do this with meshes and overlay the images on the meshes. I already got that. Maybe I'll post a couple of my 3d models... |
|
|
|
Mar 3 2006, 10:55 PM
Post
#32
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
That would be great! By the way, I forgot to mention that I'm using settings "Spirit" and "Navcam". I already got that. Maybe I'll post a couple of my 3d models... I would enjoy seeing your 3d models Also, the change is updated (version 1.2), as edited in my prior post. One thing still perplexes me a bit, but I probably just need to think on it a little more. |
|
|
|
Mar 3 2006, 11:12 PM
Post
#33
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
I had a look at the application mentioned by MaxSt, but it doesn't seem to solve my "problem". That's strange... It gives me like 3000-4000 points after matching left and right, with only 2-3 false positives. --- OK, here are a couple of the models I promised, attached. There are many free viewers available for the VRML format. I use an IE plug-in called Cortona. You'll also need a couple of textures from Spirit's NAVCAM (sol 751): 2N193038341EFFAOA0P0615L0M1.JPG 2N193038393EFFAOA0P0615L0M1.JPG
Attached File(s)
|
|
|
|
Mar 4 2006, 12:31 AM
Post
#34
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
That's strange... It gives me like 3000-4000 points after matching left and right, with only 2-3 false positives. --- OK, here are a couple of the models I promised, attached. There are many free viewers available for the VRML format. I use an IE plug-in called Cortona. You'll also need a couple of textures from Spirit's NAVCAM (sol 751): 2N193038341EFFAOA0P0615L0M1.JPG 2N193038393EFFAOA0P0615L0M1.JPG Wow, the second one is particularly good (I used Flux Player). You used my application to get the points for these? Those are pretty impressive results (I hadn't expected anything like this so quickly). That Keypoints application (http://www.cs.ubc.ca/~lowe/keypoints/ for those who missed it) is pretty nice. How did you go from vertices to a triangulated surface? Personally I'd enjoy results in the STL file format, but it's easy to convert, I think. |
|
|
|
Mar 4 2006, 03:01 AM
Post
#35
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
No, I'm still using my simple formulas, I've been working on this for some time...
I'd like to switch to your method, but I'm still having problems with "position". Can you check it so that (524,512)-(500,512) would return x=0, y=0? There are many utilities for Delaunay triangulation. I found one called "Triangle". It seems to work fine: http://www.cs.cmu.edu/~quake/triangle.html |
|
|
|
Mar 4 2006, 06:58 AM
Post
#36
|
|
![]() Senior Member ![]() ![]() ![]() ![]() Group: Members Posts: 2228 Joined: 1-December 04 From: Marble Falls, Texas, USA Member No.: 116 |
Holy Moses! I wasn't expecting the synergy here to develop so quickly. It took me a little while to realize how I could drop the texture onto the scene, but it was worth the effort.
-------------------- ...Tom
I'm not a Space Fan, I'm a Space Exploration Enthusiast. |
|
|
|
Mar 4 2006, 07:49 AM
Post
#37
|
|
|
Founder ![]() ![]() ![]() ![]() Group: Chairman Posts: 14457 Joined: 8-February 04 Member No.: 1 |
I've been waiting 2 years for files like that.
A little animation is on the way. The ideal scenario would be the conversion of the PDS-released wedges and textures ( I desperately want to combine complete wedge-sets from multiple Navcam panoramas into a 3D terrain from the landing site up to Bonneville crater and animate the complete traverse that way ) BUT - these are the first MER wedges I've seen - and as a result, they are beautiful http://www.unmannedspaceflight.com/doug_im...it_751_2nav.mov (720p WMV-HD version on the way, hopefully - that QT is H264, but approx 420 x 270) Doug |
|
|
|
Mar 4 2006, 08:35 AM
Post
#38
|
|
![]() Dublin Correspondent ![]() ![]() ![]() ![]() Group: Admin Posts: 1799 Joined: 28-March 05 From: Celbridge, Ireland Member No.: 220 |
I started this thread just looking to get a slightly better understanding of the basic geometry of the stereo imaging - I'm amazed at the technical capabilities that you folks have.
Fantastic stuff. |
|
|
|
Mar 4 2006, 11:40 AM
Post
#39
|
|
|
Founder ![]() ![]() ![]() ![]() Group: Chairman Posts: 14457 Joined: 8-February 04 Member No.: 1 |
|
|
|
|
Mar 4 2006, 11:54 AM
Post
#40
|
|
![]() Dublin Correspondent ![]() ![]() ![]() ![]() Group: Admin Posts: 1799 Joined: 28-March 05 From: Celbridge, Ireland Member No.: 220 |
15MB 720p WMV-HD movie of it http://www.unmannedspaceflight.com/doug_im...751_3d_720p.wmv Jaw drops to floor. This is really stunning. |
|
|
|
Mar 4 2006, 12:00 PM
Post
#41
|
|
|
Founder ![]() ![]() ![]() ![]() Group: Chairman Posts: 14457 Joined: 8-February 04 Member No.: 1 |
Now extrapolate to the complete traverse from the lander to Bonne
All I need is the wedges - and I have no idea how to get them out of the PDS data sets. Doug |
|
|
|
Mar 4 2006, 02:05 PM
Post
#42
|
|
![]() Senior Member ![]() ![]() ![]() ![]() Group: Members Posts: 1465 Joined: 9-February 04 From: Columbus OH USA Member No.: 13 |
Is SPICE available via the web? I was reading through the user guide a few days ago and it had some interesting options, but I saw no mechanism for remote access. Fundamentally, simply knowing the transformation from the pancams' principal points to the mast head's center of rotation would be of tremendous assistance when combined with camera orientation parameters. It's available at the NAIF website, both C and FORTRAN libraries, & the data files are there as well. It's difficult to use though because so many elements (specifically the SPICE kernel files) have to be in place or it throws an exception and quits. I tried yesterday to code up something to see if it would work & reached a stumbling block. I can get the position of the rover itself on the surface (MER-1_ROVER -> IAU_MARS), but trying to get the pointing of the mast assembly head fails: QUOTE Toolkit version: N0058 SPICE(NOFRAMECONNECT) -- There is insufficient information available to transform from -253110 (MER-1_PMA_HEAD) to frame 10014 (IAU_MARS). Frame -253110 could be transformed to -253110 (MER-1_PMA_HEAD). Frame 10014 could be transformed to 1 (J2000).' So there's a disconnect in my code. Getting the position of the rover is one thing, but the exact pointing requires knowing the local level of the rover and the instrument pointing. Must not have the right kernels loaded--If I can figure it out I'll post. -------------------- |
|
|
|
Mar 4 2006, 03:40 PM
Post
#43
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
What I can see of it is very impressive, but what I see is maybe 5 near-static perspectives, with a hint of movement at each translation (no luck at all with the earlier .mov file; Quicktime and WMP both don't recognize it). I'm fairly sure I have up-to-date codecs, but apparently I'm lacking something. |
|
|
|
Mar 4 2006, 04:02 PM
Post
#44
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
I'd like to switch to your method, but I'm still having problems with "position". Can you check it so that (524,512)-(500,512) would return x=0, y=0? Honestly, I'm not entirely clear as to just why it doesn't return something like that. Bear in mind that, at least for the pancams, the CCDs are not horizontally parallel. Quoting from my post #10 in this thread: [QUOTE...] PancamLeft TKFRAME_-254128_AXES = ( 2, 1, 3 ) TKFRAME_-254128_ANGLES = ( -90.051, 0.659, 90.315 ) PancamRight TKFRAME_-254131_AXES = ( 2, 1, 3 ) TKFRAME_-254131_ANGLES = ( -89.946, -1.376, 90.400 ) I worked out the combined rotations, rotating (for PancamL) as specified first -90.051 degrees about the Y axis, then 0.659 degrees about the X axis, and 90.315 degrees about the Z axis (in that order), then did the same for PancamR, and worked out the relative orientation between them (I did all this using quaternions, as they're easier and more accurate than 3X3 matrices). The net result is that the relative orientation between the left and right pancams is 2.039451 degrees about the unit axis vector (-0.045257,-0.998119,0.041349). Not exactly the expected 2 degrees, but darn close. [...QUOTE] My application depends entirely on the provided CAHVOR parameters for each camera, which thankfully were calibrated in the same frame (I don't have to apply that 2-degree toe-in transformation, for instance, nor the 30 cm translation between cameras). It's one thing to set the origin to a handy location like the midpoint between the cameras' principal points, but I'd hesitate to force it into an arbitrary location. There is nothing preventing you from finding a translation vector that achieves that effect and applying it to all the vertices. I'm intending to review some of the details of my coding today and verify that it is doing what I think it is.
The precision of the pixel-vector intersection suggests that it is correct, but I'm wondering a bit about the behavior of the vertical dimension (the zero seems to correspond with the top of the image rather than the center); the other dimensions seem fine. Due to the non-ideal nature of the lenses and orientations of the cameras I'm not surprised that putting in (511.5,511.5) and (511.5,511.5) (the true center of the CCD arrays) does not return exactly x=0 (it's close), but it bothers me a bit that y is not similarly close to 0. |
|
|
|
Mar 4 2006, 04:11 PM
Post
#45
|
|
|
Founder ![]() ![]() ![]() ![]() Group: Chairman Posts: 14457 Joined: 8-February 04 Member No.: 1 |
|
|
|
|
Mar 4 2006, 04:45 PM
Post
#46
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Soliciting opinions for additional development ...
It will be relatively simple to provide a command line interface to the MER photogrammetry portion of my application. I'll likely get this out this weekend, perhaps in conjunction with a C++ class library & DLL. The notion of providing transformations to align one stereo image pair's dataset (produced 3D coordinates) with that of other datasets brings up a number of questions and approaches regarding the desired user interface and operations. I would appreciate it if potential users of the application(s) would consider how they might like to interact with the data, and how I can create software to facilitate that. Given pancam tilt angles, along with center(s) of rotation in the masthead frame, it would be easy to apply those transformations to the vertices. I'm not sure where this orientation information would come from, nor what format it is in, but I'm envisioning a gui interface where the user pastes in this information and the gui applies it to a file of vertices. This gui could be the same as the current application, or a separate one dedicated to the purpose. Alternately, a command line interface could do the same thing. Incidentally, when I go about transforming vertices I'm typically doing something like 1) translate to the center of rotation, 2) rotate a given angle about an axis vector, and 3) translate back, and possibly 4) apply some additional translation. The axis vector to rotate about could be one of the standard x, y, z axes (usually in sequence), or (my preference) some arbitrary axis in space. I handle all of this in terms of 3D direction vectors (for position, translation, and axis direction) and a 4D quaternion (for rotation) or 4X4 matrix (combining translation and rotation).
The information describing the masthead position/orientation for a particular stereo image pair might be given simply by a 3D center of rotation and a 4D quaternion, or it may be described in terms of a sequence of 3 rotations, each given by an angle, axis vector, and center of rotation, or it may very simply be described as a 4X4 matrix. I'll need to know which format to use prior to implementing the application (currently I'd be guessing). Alternately, if the pancam angles are not available (and perhaps even if they are), another option for aligning data sets captured from multiple stereo pairs of images would be to find a triad of vertices in common between two data sets (sets of 3D vertices resulting from my application's photogrammetry of the original stereo image coordinates). It is simple (as in I already have the code) to calculate the transformation between one triad and the other (it's called absolute orientation), and then apply it to the desired data set (vertices). The problem with working with the masthead orientations is that the resulting aligned data sets will only be aligned for that particular position of the rover. If we wish to combine position data from multiple rover positions I see no alternative to using the absolute orientation approach. My inclination at the moment is to provide a standalone gui application to handle transforming text files containing rows of 3D vertices, much as the current application does with batch files of image coordinates. Your thoughts & opinions would be appreciated. |
|
|
|
Mar 4 2006, 08:49 PM
Post
#47
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
Nice video, Doug. Gives me confidence that automatic stitching should be possible, after I figure out all the angles. Creating mesh for the combined set of points is going to be tricky. But that application for triangulation seems to be pretty powerful, I just need to figure out what combination of the switches I should use.
algorimancer - since you're asking for opinions on development... You know what I'd like to see - what if your program could actually display both images and let the user select the points? And when you load the list of coordinates for batch processing, you could actually display all of them on top of the images? That would be extremely helpful. That keypoint application sometimes creates false positives, and finding/eliminating them manually is a tedious process. Visual inspection would help a lot. |
|
|
|
Mar 4 2006, 11:43 PM
Post
#48
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
algorimancer - since you're asking for opinions on development... You know what I'd like to see - what if your program could actually display both images and let the user select the points? And when you load the list of coordinates for batch processing, you could actually display all of them on top of the images? That would be extremely helpful. That keypoint application sometimes creates false positives, and finding/eliminating them manually is a tedious process. Visual inspection would help a lot. I like that idea a lot. However... adding image handling adds rather a lot of complexity to the application; it's not something I could whip up over a weekend. It's relatively easy to display the images and overlay the connections; where it gets tricky is that it will pretty much require a zoom/pan option. Sounds simple enough, but that sort of thing can be really complex to get right. I'll cogitate on it for a while and see what I can think up. I think the source code for the keypoint application is available, so it could be directly integrated with the photogrammetry application. This weekend I've encountered a development dead end. My compiler has acquired a weird bug that prevents me from compiling. I'll have to re-install next week, I think. I have an upgrade to MSVC2005 anyway. I have the command line version of the photogrammetry tool essentially complete, but can't compile it. I've also thought of a way to achieve a good validation of my CAHVOR model, but again am unable to compile it. I'll work on some other projects. |
|
|
|
Mar 8 2006, 03:18 AM
Post
#49
|
||
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
For those of you into that whole command line thing (scripting and programming and so on), I have the command line version of my photogrammetry and rangefinding tool compiled and posted:
http://www.clarkandersen.com/RangeFinder.htm Screenshot: For the rest of us, I'd recommend sticking with the Windows gui version on that same page. Currently I'm beginning work on an application which will integrate an image viewer with interactive point selection and editing to expedite the photogrammetry. At some time I'm sure we'll want to drape image segments over reconstructed triangle meshes, but just at the moment I'm not sure how to do that. There's also still the notion of aligning one 3D dataset (from one image pair) with another, but there hasn't been any feedback on that issue so I'll assume it isn't an immediate need, but it's still in the works. |
|
|
|
||
Mar 8 2006, 02:24 PM
Post
#50
|
|
|
Senior Member ![]() ![]() ![]() ![]() Group: Moderator Posts: 4280 Joined: 19-April 05 From: .br at .es Member No.: 253 |
Great tool, algorimancer.
One feature I would really appreciate would be the option to measure an object's size, similar to jmknapp's tool's "Dimension of object (pixels)". When using your or jmknapp's tools to measure driving distances, you must find reference rocks/features in both pre-drive and post-drive images and calculate the distance to them in both sets of pictures. And the size of those rocks, measured in both pre-drive and post-drive images, is a good way to double-check that they are actually the same ones in both sets of pics. |
|
|
|
Mar 8 2006, 03:01 PM
Post
#51
|
|
|
Founder ![]() ![]() ![]() ![]() Group: Chairman Posts: 14457 Joined: 8-February 04 Member No.: 1 |
|
|
|
|
Mar 8 2006, 08:21 PM
Post
#52
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Great tool, algorimancer. One feature I would really appreciate would be the option to measure an object's size, similar to jmknapp's tool's "Dimension of object (pixels)". When using your or jmknapp's tools to measure driving distances, you must find reference rocks/features in both pre-drive and post-drive images and calculate the distance to them in both sets of pictures. And the size of those rocks, measured in both pre-drive and post-drive images, is a good way to double-check that they are actually the same ones in both sets of pics. I may be able to add that capability this evening... I'm envisioning giving the application a memory of the last point found, and automatically showing the distance between that point and the current point. I may also see about providing an estimate of pixel size at that range, per Doug's suggestion. Both of these are easy additions. |
|
|
|
Mar 9 2006, 02:35 AM
Post
#53
|
||
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Okay, the MER RangeFinder and Photogrammetry application has been updated (version 1.3). Per request, it now displays the distance from the current position to the previous position (assuming both positions were found with the same rover and camera), plus it displays the pixel size at the current range.
http://www.clarkandersen.com/RangeFinder.htm Be careful with the "distance to last position" data... bear in mind that this is only correct for consecutive positions in the same stereo image pair, factoring in the relevant error. It should be a simple matter now to figure the size of rocks and other features. Enjoy |
|
|
|
||
Mar 15 2006, 03:12 AM
Post
#54
|
||
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Here is a significant new version of the MER Photogrammetry and RangeFinding utility.
Download it here: http://www.clarkandersen.com/RangeFinder.htm Quoting from the download site: AlgorimancerPG Photogrammetry and RangeFinder Application (Version 2.0). This version integrates stereo image pairs with the capabilities of the prior version (1.3). Treat this as something of a beta version. It works pretty well as-is, but there's a lot of room for improvement. To use it, start the application, open a stereo pair of images from either of the MER rovers (use shift or control to select both images when opening), and ensure that the correct rover and camera is checked. Like Version 1.3, you can choose to manually enter pixel coordinates in the Analysis control and calculate results from those. The major improvement is that the Analysis control is linked to the images, so you can get the coordinates directly by simply clicking on the target location in each of the images using the LEFT mouse button. When you click on the image a red cross will mark the location, and the pixel coordinates will appear in the correct location in the Analysis control. Click left image, then right image, and click the Calculate button to find the resulting range and position information, just as with the old version. After clicking calculate, the red crosses will turn green, and these will accumulate in the images as you perform additional calculations. Eventually the intent is that I'll add a selection capability, so that you can edit prior positions, and even save the cumulative calculated results to a file; but that will be a later version. The images initially appear with the upper left corners of each image synchronized with the upper left corner of the window region where it will display, with the image pixels the same size as the window pixels. You can zoom-in and out of the images (together) with the numeric keypad's +/- keys, and you can click and drag with the RIGHT mouse button to pan around the images (separately). 
If you need to revert to the starting condition there is a reset option under the View menu, along with an option to swap the left image with the right image in case they're reversed. There are a number of cosmetic issues to improve upon, as well as lots of additional capabilities to be added, but the current version should be a great improvement over the last. Enjoy. Please let me know about any problems encountered. |
|
|
|
||
Mar 16 2006, 05:29 PM
Post
#55
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Updated AlgorimancerPG photogrammetry and rangefinding application to version 2.1. The main change is the ability to save accumulated calculated results to a text file, similar to the way this was handled by the batch processing utility before. I still haven't added a selection utility; I'm still deciding how best to handle that.
http://www.clarkandersen.com/RangeFinder.htm |
|
|
|
Mar 16 2006, 11:10 PM
Post
#56
|
|
![]() Senior Member ![]() ![]() ![]() ![]() Group: Members Posts: 2492 Joined: 15-January 05 From: center Italy Member No.: 150 |
Nice software. However, I have the impression that the error on distance is greatly underestimated (using the Parallax Calculator I obtained higher, more reasonable numbers)... I would like to know more about the coordinate system (xyz) you are using, and I would like to precisely select points through a cross-hair instead of the Windows pointer... Thanks.
I would like to better know the coordinate system (xyz) you are using and I would like to precisely select points throug cross-air, instead the Windows pointer... Thanks. -------------------- I always think before posting! - Marco -
|
|
|
|
Mar 16 2006, 11:47 PM
Post
#57
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Nice software. However, I have the impression that the error on distance is greatly underestimated (using the Parallax Calculator I obtained higher, more reasonable numbers)... I would like to know more about the coordinate system (xyz) you are using, and I would like to precisely select points through a cross-hair instead of the Windows pointer... Thanks. Thank you. The error is indeed underestimated. The way this works is, based upon the camera models I am able to extract a unique sight-vector normal to the center of the selected pixels and originating from a known point for each camera. These vectors and the originating points are in the coordinate system in which the masthead was calibrated; there is no consistent correspondence to the rest of the rover or the ground, since the camera moves. The results are self-consistent for a particular stereo pair, but not with other stereo pairs (additional transformations of the coordinates would be required). The resulting coordinates I have (by request) transformed so that the x & y axes are generally aligned with those of the images as viewed on a monitor (x is + right, y is + up), and the origin is the midpoint between the cameras' principal points. Since the cameras are not oriented in parallel (there are rotations about 3 axes relative to each other), there are some aspects of the coordinates derived from the images which are less than intuitive; however, everything seems self-consistent. As to the error... I find the photogrammetric position from the input pixels as the approximate intersection of the two resulting lines based upon the cameras' principal points and the pixel-intersecting vectors. I say approximate intersection because a true intersection will never actually happen; what I find is the midpoint between those lines, and the error reported is the actual minimum distance between the lines (I state all this somewhere in the documentation).
Something closer to the true error would be to take this minimum distance and add to it the size of 1 pixel at that range (also given in the results). And of course it is only as accurate as the pixels chosen by the user, so multiply that pixel size by however many pixels of precision are managed. I'm sure someone who does photogrammetry professionally could come up with a better measure of accuracy, I'm just going on geometry. As to the difference in error between this application and the parallax finder application, the major difference is that I'm calculating in 3D and the parallax finder app is working in 2D, so the majority of the accuracy difference would lie there. I'm also using the very good camera calibration data acquired in the lab prior to the launch. Taking all of that into account, the given errors (adding in pixel size at range) seem reasonable, as is the difference in results between the two applications. I might add that all the position, range, distance, and error results given by my app are in meters, which may make the error seem smaller than it is. An error of 0.5 sounds good until you think in terms of measuring the size of a meter scale object. A cross-hair point selector sounds like a good idea, I'll look into that. |
|
|
|
Mar 17 2006, 01:34 AM
Post
#58
|
|
![]() Dublin Correspondent ![]() ![]() ![]() ![]() Group: Admin Posts: 1799 Joined: 28-March 05 From: Celbridge, Ireland Member No.: 220 |
Thank you. .... An error of 0.5 sounds good until you think in terms of measuring the size of a meter scale object. A cross-hair point selector sounds like a good idea, I'll look into that. Algorimancer - I don't know about anyone else but I'm helluva impressed by this. |
|
|
|
Mar 17 2006, 06:56 AM
Post
#59
|
||
![]() Senior Member ![]() ![]() ![]() ![]() Group: Members Posts: 2492 Joined: 15-January 05 From: center Italy Member No.: 150 |
Thanks for the info, algorimancer.
About the error estimation: now I understand your method and the difficulty of working in 3D instead of 2D, with never-intersecting lines (!). Your considerations are valid in the plane perpendicular to the line of sight (the XY plane, approximately). But in the Z (distance) direction you are forgetting to add the uncertainty arising from small parallax-angle errors, which is by far the main error factor for distant objects! You will agree that a very small azimuth error must introduce a huge distance error for a far, small-parallax object close to the horizon. In the limit, when the parallax angle is comparable to or smaller than the angle covered by a pixel, this effect becomes dramatic! This should happen at ranges of about 300 m (Navcam) or 1 km (Pancam). Now look at this plot of the ranging error as a function of distance for the MER stereo cameras, assuming a 0.25-pixel stereo correlation accuracy (from Maki et al., "Mars Exploration Rover engineering cameras"): To estimate this z-direction error, you can start from this plot but multiply the error by at least 2... Or you can use an empirical calculation method, considering many alternative rays located 0.5 pixels around the original ones and then reporting the range for the X, Y, Z positions separately (a little bit complicated, but I like it). -------------------- I always think before posting! - Marco -
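The curve in that plot follows the standard first-order relation for stereo ranging error: roughly range squared, times the per-pixel angle, times the correlation accuracy, divided by the baseline. A sketch using the baselines and fields of view quoted earlier in this thread (assumed round numbers, not official calibration values):

```python
import math

def range_error(r, baseline_m, fov_deg, pixels=1024, subpix=0.25):
    """First-order stereo ranging error:
    delta_r ~= r^2 * (per-pixel angle) * subpix / baseline."""
    ifov = math.radians(fov_deg) / pixels   # radians per pixel
    return r * r * ifov * subpix / baseline_m

# Navcam: 200 mm baseline, 45 deg FOV; Pancam: 300 mm baseline, 16 deg FOV
for r in (10, 50, 100, 300):
    print(r, range_error(r, 0.20, 45.0), range_error(r, 0.30, 16.0))
```

The quadratic growth is the point dilo is making: at 100 m the Navcam range error is already around 10 m, while the narrower, wider-baseline Pancam is several times better at the same distance.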
|
|
|
|
||
Mar 17 2006, 01:58 PM
Post
#60
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Your considerations are valid in the plane perpendicular to the line of sight (the XY plane, approximately). But in the Z (distance) direction you are forgetting to add the uncertainty arising from small parallax-angle errors, which is by far the main error factor for distant objects! You will agree that a very small azimuth error must introduce a huge distance error for a far, small-parallax object close to the horizon. In the limit, when the parallax angle is comparable to or smaller than the angle covered by a pixel, this effect becomes dramatic! This should happen at ranges of about 300 m (Navcam) or 1 km (Pancam). Now look at this plot of the ranging error as a function of distance for the MER stereo cameras, assuming a 0.25-pixel stereo correlation accuracy (from Maki et al., "Mars Exploration Rover engineering cameras"): ... To estimate this z-direction error, you can start from this plot but multiply the error by at least 2... Or you can use an empirical calculation method, considering many alternative rays located 0.5 pixels around the original ones and then reporting the range for the X, Y, Z positions separately (a little bit complicated, but I like it). I like the graph, very helpful. Incidentally, considering the zoom capability, and that internally the application records positions as floating-point variables rather than integers, sub-pixel measuring precision is (in principle) quite feasible, but a great deal depends upon the user's ability to select the same position in each image. Ultimately I would like to implement some sort of local object recognition, probably gradient-based. Yes, I had given some thought to the z-direction error estimation problem; I just had not yet decided how to address it, though something similar to the pixel-offset estimation technique you mentioned was one consideration.
Initially I felt more comfortable simply defining what I was referring to as the error (the distance between two non-intersecting lines) and providing the pixel size at that range, both well-understood numbers, rather than using an ad-hoc error estimate. This is an interim solution, and perhaps I could have chosen a better term than "error", although ideally it is indeed a type of error in achieving perfect intersection of the lines; I'm sure there's a better term for it. I'll come up with something more substantial in a later version. It may be helpful to review the source of that chart, thanks for finding it. |
|
|
|
Mar 18 2006, 03:22 PM
Post
#61
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Okay, per dilo's suggestions, here is an updated version of the AlgorimancerPG photogrammetry and rangefinding utility. A cross pointer has been provided for more precise picking, and much more extensive error information has been added.
Dilo, the way I handled the error was to exhaustively explore +/- 0.25-pixel offsets away from the points picked by the user, in each dimension and for each image, and from this determine the error ranges (measured with respect to the reported values). If you prefer, I can make that +/- 0.5 or more pixels (just one number to change in the code). I gave some thought to calculating an error volume as the intersection of two directed cones, but decided that was just masochistic. AlgorimancerPG Version 2.2 can be found here: http://www.clarkandersen.com/RangeFinder.htm |
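The exhaustive offset search described in the post can be sketched like this (a toy illustration only: `toy` below is a made-up one-dimensional stand-in for the application's CAHVOR triangulation, not the real solver):

```python
import itertools

def error_ranges(triangulate, lpix, rpix, offset=0.25):
    """Explore +/- offset-pixel shifts of both picks, in both axes and
    both images, and report the signed min/max deviation of each output
    coordinate from the nominal solution."""
    nominal = triangulate(lpix, rpix)
    lo, hi = list(nominal), list(nominal)
    for dlx, dly, drx, dry in itertools.product((-offset, 0.0, offset), repeat=4):
        p = triangulate((lpix[0] + dlx, lpix[1] + dly),
                        (rpix[0] + drx, rpix[1] + dry))
        lo = [min(a, b) for a, b in zip(lo, p)]
        hi = [max(a, b) for a, b in zip(hi, p)]
    return (nominal,
            [a - n for a, n in zip(lo, nominal)],   # negative-side errors
            [a - n for a, n in zip(hi, nominal)])   # positive-side errors

# Toy solver standing in for the CAHVOR triangulation:
# range shrinks as the left/right x-disparity grows.
def toy(lp, rp):
    return (0.0, 0.0, 100.0 / (lp[0] - rp[0]))

nom, neg, pos = error_ranges(toy, (512.0, 100.0), (500.0, 100.0))
```

Even this toy reproduces the asymmetry dilo noted for distant objects: shrinking the disparity stretches the range more than growing it compresses it, so the positive range error exceeds the negative one.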
|
|
|
Mar 18 2006, 07:16 PM
Post
#62
|
||
![]() Senior Member ![]() ![]() ![]() ![]() Group: Members Posts: 2492 Joined: 15-January 05 From: center Italy Member No.: 150 |
Very good job, algorimancer!
The cross pointer is great, a lot more precise! Error estimation is now given in great detail; you also introduced separate positive/negative values, which could appear a little bit technical, but I like it because it is very useful, at least for the range values of distant objects, where we cannot ignore the error asymmetry! About the 0.25-pixel estimate: it is probably a little optimistic; I think 0.5 pixel is more reasonable, while 1 pixel is too much, considering error propagation theory (it is improbable to get the worst combination from two coordinate measurements!). But I would like to hear other evaluations on this point before you modify it. A small suggestion for the next release: to make the data more readable, you could report the errors to the right of each register, something like this: (this would also make the dialog box less tall, avoiding covering part of the images in full-screen mode). Anyway, the program is now a really powerful instrument, and I want to thank you very much for the effort! PS: a little curiosity - is the need to specify the rover related to some small geometry differences you are taking into account? -------------------- I always think before posting! - Marco -
|
|
|
|
||
Mar 18 2006, 08:01 PM
Post
#63
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
Very very nice program. And very easy to use.
One more small suggestion - it should be possible to set checkboxes automatically, based on filenames. |
|
|
|
Mar 18 2006, 08:49 PM
Post
#64
|
||
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
A small suggestion for the next release: to make the data more readable, you could report the errors to the right of each register, something like this: (this would also make the dialog box less tall, avoiding covering part of the images in full-screen mode). PS: a little curiosity - is the need to specify the rover related to some small geometry differences you are taking into account? Glad you like it. Wow, you really went to a lot of effort to lay out just what you want in terms of display. I appreciate that. And yes indeed, the reason for specifying the rover has to do with slight differences in the camera calibrations, which seem to add up to some pretty noticeable differences in the results. Try doing a separate calculation for the same pixel coordinates on one rover and then the other, and compare the errors. My initial version just used the Spirit calibration, but it was easy enough to add Opportunity, and the results suggest that was wise. Incidentally, it will also be very easy to add camera calibrations for other landers/rovers, like Phoenix and MSL, once those calibrations become available. I'd add in Pathfinder (assuming it uses the CAHVOR model), except there doesn't seem to be much interest in revisiting it at the moment. At some point I'll likely add a manual camera-model import option, so you can load your own calibrations. |
|
|
|
||
Mar 18 2006, 08:59 PM
Post
#65
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Very very nice program. And very easy to use. One more small suggestion - it should be possible to set the checkboxes automatically, based on filenames. I'm glad you like it. You know, I've thought several times about setting the checkboxes automatically based upon the filenames. I have thus far resisted doing that because I want to leave open the option of using this application with other missions, and too much entanglement with this mission's naming scheme may cause problems down the line, although honestly the main issue is finite time (I have a day job, etcetera). Now that you've gotten me thinking about it, it ought to be pretty easy to do - just check the filename to see whether the format agrees with the standard, set the checkboxes if it does, and otherwise give a message stating that it will need to be done manually (as with color composite images). |
|
|
|
Mar 18 2006, 11:35 PM
Post
#66
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Here it is, per MaxSt's and Dilo's suggestions: version 2.3 of AlgorimancerPG. I did a little experimenting myself and decided that 0.5 pixels was indeed a better measure of likely picking error, so I went ahead and changed that. I re-arranged the Analyse dialog per Dilo's example, with additional mods to make the more immediately interesting results easier to find. Per MaxSt's suggestion, the rover and camera selections are checked against the file title upon loading, along with some left-right validation.
http://www.clarkandersen.com/RangeFinder.htm |
|
|
|
Mar 19 2006, 07:30 AM
Post
#67
|
|
![]() Senior Member ![]() ![]() ![]() ![]() Group: Members Posts: 2492 Joined: 15-January 05 From: center Italy Member No.: 150 |
GREAT!
You do not waste time... I was reading your answers to me and MaxSt and discovered you had already released this new, fantastic version! Glad to see you increased the error to 0.5 pixels based on your experimentation... it sounds reasonable! I have some other ideas, but I prefer to tell you later (I do not want to stress you too much!). Thanks again!!! -------------------- I always think before posting! - Marco -
|
|
|
|
Mar 19 2006, 11:25 AM
Post
#68
|
|
|
Senior Member ![]() ![]() ![]() ![]() Group: Moderator Posts: 4280 Joined: 19-April 05 From: .br at .es Member No.: 253 |
Thanks a lot, algorimancer!
I didn't have time to try v2.2 before I found that a new v2.3 was available; I was planning to ask for the capability to automatically recognise the rover/camera, but I see it was already asked for AND implemented. Thanks again. |
|
|
|
Mar 21 2006, 06:21 AM
Post
#69
|
|
![]() Senior Member ![]() ![]() ![]() ![]() Group: Members Posts: 2228 Joined: 1-December 04 From: Marble Falls, Texas, USA Member No.: 116 |
This really is a sweet application. I had always saved a link to the MER Parallax site because measurements from MER imagery were always something that would come up, but this makes it so easy. I have been trying to use the application to measure its accuracy using objects of known size in the images, like features on the rovers and features in the rover tracks. That has turned out to be more of a challenge than I initially thought it would be, but the preliminary numbers are all falling within the ballpark. Probably one of the most surprising results for me has been the realization that objects in the pancams are often farther from the cameras than I would have thought. This is yet another example of the kinds of collaborations that are possible in this truly amazing forum.
-------------------- ...Tom
I'm not a Space Fan, I'm a Space Exploration Enthusiast. |
|
|
|
Mar 24 2006, 03:45 AM
Post
#70
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
I just posted the next AlgorimancerPG update (version 2.4) at the usual location,
http://www.clarkandersen.com/RangeFinder.htm. This version will mostly be of interest to those who want the ability to edit their point selections and to align one image pair's calculated vertices with the coordinate system of another image pair's. The basic position-calculating and range-finding capabilities haven't been updated, though a couple of potential interface bugs have been fixed. See the included User Guide document for more details. As usual, enjoy and let me know of any problems or suggestions. CosmicRocker, glad you like it, and the small scale of the features in the pancam images caught me a little by surprise as well. 16 degrees just seems like it ought to be bigger :/ |
|
|
|
Mar 24 2006, 09:17 AM
Post
#71
|
|
|
Senior Member ![]() ![]() ![]() ![]() Group: Moderator Posts: 4280 Joined: 19-April 05 From: .br at .es Member No.: 253 |
Thanks for this new update!
And here is something for the "wish list": it would be nice to have dynamic image panning (click-and-drag with the right button), i.e. the image updates while the mouse is moved, not only after releasing the button. |
|
|
|
Mar 24 2006, 01:44 PM
Post
#72
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
And here is something for the "wish list": it would be nice to have a dynamic image panning (click&drag right-button). I.e. image updates while the mouse is moved and not only after releasing the button. That would be relatively easy to do, but the images seem to update a little slowly for that to work well. I'll try it and see how it looks... may need to do some double buffering to get it to work nicely. |
|
|
|
Mar 25 2006, 06:12 AM
Post
#73
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
I understand that the left and right rays should intersect at some distance only in theory.
But I wonder what the usual error is in practice? If you shift the left and right pointers vertically just a little bit, you should be able to ensure the rays intersect. Just how much shift is needed, in terms of pixels or maybe degrees? |
|
|
|
Mar 25 2006, 02:57 PM
Post
#74
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
I understand that the left and right rays should intersect at some distance only in theory. But I wonder what the usual error is in practice? If you shift the left and right pointers vertically just a little bit, you should be able to ensure the rays intersect. Just how much shift is needed, in terms of pixels or maybe degrees? That's one of those problems which is going to have infinitely many solutions. If you held the ray from one camera constant, you could adjust the other camera's ray to achieve a perfect intersection. But you could also move that ray about the plane spanned by the first ray and the origin of the second ray and find infinitely many solutions along it. And of course there's no reason to assume that the first ray is the one to be fixed, which adds additional dimensions to the problem. It's basically a matter of constraining the error of the rays and saying that, if each ray is allowed so much error in orientation, then the solution space is constrained to something like the intersection of two cones (with the apexes of the cones at the cameras' principal points). In addition to the Analyse dialog giving the error allowing for +/- 0.5 pixels, the provided figure for the intersection gap is a measure of the distance between the nearly-intersecting rays, so to figure how many degrees that corresponds to, simple trigonometry says to take the inverse tangent of (half the gap distance divided by the range distance). A few checks with Opportunity's navcam (pixel size 0.04686770 degrees) give offsets of between 0.002 and 0.04 degrees. Likewise, checks with Spirit's pancam (pixel size 0.01600068 degrees) yield offsets of between 0.003 and 0.011 degrees. This seems to further confirm that sub-pixel precision is being achieved. Someday it might be nice to get a bunch of users to measure a bunch of objects, then combine the results and get a good sense of the true statistical error involved. |
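The inverse-tangent conversion the post describes is a one-liner; the 7 mm gap below is a made-up example, while the per-pixel angle is the Navcam figure quoted in the post:

```python
import math

def gap_to_degrees(gap_m, range_m):
    """Angular offset implied by an intersection gap at a given range:
    inverse tangent of (half the gap) over the range."""
    return math.degrees(math.atan2(gap_m / 2.0, range_m))

# A 7 mm gap at 10 m range comes out near 0.02 degrees, comfortably
# below the ~0.0469 deg/pixel Navcam scale, i.e. sub-pixel agreement.
angle = gap_to_degrees(0.007, 10.0)
```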
|
|
|
Mar 25 2006, 09:56 PM
Post
#75
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
But you could move that ray about the plane spanned by the first ray and the origin of the second ray, and find infinite solutions along it. Well, I mean shifting only vertically... there should be one solution. This seems to further confirm that sub-pixel precision is being achieved. Oh, that's good. I tried to do something like that myself, but I'm getting much bigger vertical errors... |
|
|
|
Mar 26 2006, 12:59 AM
Post
#76
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Oh, that's good. I tried to do something like that myself, but I'm getting much bigger vertical errors... Try zooming in before selecting points. I find this can get me down to better than half a pixel of error most of the time (optimistically half that). I tend to zoom in about ten times prior to selecting important points. Lots of the time no zoom at all yields adequate results; it just depends what you're up to. |
|
|
|
Mar 26 2006, 02:31 AM
Post
#77
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
Try zooming in before selecting points. No, I mean bigger errors with my own implementation... Something is wrong with my CAHVOR model... I wonder how they do the "linearization" of the pictures? Their results are very good, but when I tried it myself, I couldn't figure out how to select the new CAHV vectors. |
|
|
|
Mar 26 2006, 04:11 PM
Post
#78
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
No, I mean bigger errors with my own implementation... Something is wrong with my CAHVOR model... I wonder how they do the "linearization" of the pictures? Their results are very good, but when I tried it myself, I couldn't figure out how to select the new CAHV vectors. If you get stuck on your CAHVOR model I'd be happy to share some tips or code with you, but I know it's more satisfying to figure it out yourself. I have little experience with transforming images (other than using existing software), so I'm not sure how I'd go about linearizing one. It seems like you might be able to use the existing CAHVOR model and just choose to project to a sphere rather than a plane, or something to that effect. I don't have a good mental model of what needs to happen. |
|
|
|
Mar 26 2006, 08:21 PM
Post
#79
|
|
|
XYL Code Genius ![]() ![]() ![]() Group: Members Posts: 138 Joined: 23-November 05 Member No.: 566 |
If you get stuck on your CAHVOR model I'd be happy to share some tips or code with you, but I know it's more satisfying to figure it out yourself. True, true... Besides, you probably implemented a photogrammetric model, and I'd like to work with the actual CAHVOR vectors. I have little experience with transforming images (other than using existing software), so I'm not sure how I'd go about linearizing one. Seems like you might be able to use the existing CAHVOR model and just choose to project to a sphere rather than a plane, or something to that effect. I don't have a good mental model of what needs to happen. Well, I think they project the object rays back onto a plane defined by new CAHV vectors - the same plane for both cameras (in the case of the navcams). So when I look at a pair of their linearized images, they fit each other very well: objects far, far away have the same coordinates in both images, and objects close to the rover are always on the same row. I guess their error is much less than a pixel. |
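The same-row property MaxSt describes falls out of the linearized geometry: once both cameras share the A and V vectors and differ only by a baseline offset, the V-projection is independent of that offset. A toy check, assuming an idealized rectified pair with made-up numbers (not an actual linearized MER calibration):

```python
import numpy as np

def cahv_project(C, A, H, V, P):
    """Linear CAHV projection of world point P to pixel (x, y)."""
    d = P - C
    return (d @ H) / (d @ A), (d @ V) / (d @ A)

# Idealized rectified pair: shared boresight A and vertical V, centers
# offset 0.2 m along x; ~1000-pixel focal length, center pixel (512, 512).
A = np.array([0.0, 0.0, 1.0])
H = 1000.0 * np.array([1.0, 0.0, 0.0]) + 512.0 * A
V = 1000.0 * np.array([0.0, 1.0, 0.0]) + 512.0 * A
CL, CR = np.array([-0.1, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])

P = np.array([0.4, -0.3, 2.5])   # a nearby rock
(xl, yl) = cahv_project(CL, A, H, V, P)
(xr, yr) = cahv_project(CR, A, H, V, P)
# yl == yr (same row for both cameras); the x-disparity encodes range,
# here baseline * focal / disparity recovers the 2.5 m distance
```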
|
|
|
Mar 26 2006, 09:00 PM
Post
#80
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
True, true... Besides, you probably implemented a photogrammetric model, and I'd like to work with the actual CAHVOR vectors. Actually my approach was indeed via the CAHVOR vectors, using that information combined with some simple vector analysis. I plug in a pixel coordinate, and out comes a corresponding real-space vector emerging from the camera. I define a line originating at the coordinates of the C in the CAHVOR model, with the vector specifying the direction. Same for the other camera; then I find the approximate intersection of the two lines (the point on each line nearest the other line - where the connecting segment is orthogonal to both - and I settle for the midpoint between those points as the photogrammetric point). I don't do any conversion to a separate photogrammetric model; I just work with what CAHVOR gives me. I have a copy of a paper somewhere which describes converting from CAHVOR to a specific photogrammetric model, but my sense is that its only purpose was to validate their equivalence. |
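The pixel-to-ray step can be sketched for the linear CAHV part of the model (the O and R distortion terms are ignored here, and the camera vectors below are invented for illustration, not an actual MER calibration): since CAHV projects as x = ((P-C)·H)/((P-C)·A) and y = ((P-C)·V)/((P-C)·A), the sight-ray direction must be orthogonal to both (H - xA) and (V - yA).

```python
import numpy as np

def cahv_pixel_to_ray(C, A, H, V, x, y):
    """Sight ray through pixel (x, y) under the linear CAHV model: the
    direction r satisfies (H - x*A).r = 0 and (V - y*A).r = 0, so it
    lies along their cross product; the line originates at C."""
    r = np.cross(H - x * A, V - y * A)
    if r @ A < 0:                       # make the ray point forward
        r = -r
    return C, r / np.linalg.norm(r)

# Made-up camera: boresight +z, ~1000-pixel focal length, center (512, 512)
A = np.array([0.0, 0.0, 1.0])
H = 1000.0 * np.array([1.0, 0.0, 0.0]) + 512.0 * A
V = 1000.0 * np.array([0.0, 1.0, 0.0]) + 512.0 * A
C = np.zeros(3)

origin, ray = cahv_pixel_to_ray(C, A, H, V, 612.0, 512.0)
# Reprojecting a point along the ray recovers the original pixel:
P = origin + 5.0 * ray
x_back = ((P - C) @ H) / ((P - C) @ A)   # ~612
```

Feeding one such ray per camera into the midpoint-of-closest-approach step gives the photogrammetric point the post describes.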
|
|
|
Apr 22 2006, 07:55 PM
Post
#81
|
|
|
Senior Member ![]() ![]() ![]() ![]() Group: Moderator Posts: 4280 Joined: 19-April 05 From: .br at .es Member No.: 253 |
Algorimancer,
Speaking of "improvements"/changes to the tool, I'm wondering if you would add another way to zoom in/out in addition to the "+" and "-" keys. You know, it works on machines with a numeric keypad, but since last Christmas I have a brand new laptop and the zooming feature doesn't work on it. Maybe there is a workaround, but if not it would be nice to have zooming via other keys. Could you please add that to the "wish list"? |
|
|
|
Apr 22 2006, 10:31 PM
Post
#82
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 267 Joined: 5-February 06 Member No.: 675 |
... I have a copy of a paper somewhere which describes converting from CAHVOR to a specific photogrammetric model, but my sense is that the only purpose was to validate their equivalence. I suspect you mean Di and Li, "CAHVOR camera model and its photogrammetric conversion for planetary applications," Journal of Geophysical Research, Vol. 109. |
|
|
|
Apr 24 2006, 01:48 PM
Post
#83
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
I suspect you mean Di and Li, "CAHVOR camera model and its photogrammetric conversion for planetary applications," Journal of Geophysical Research, Vol. 109. Yes, I believe that's the one. Algorimancer, Speaking of "improvements"/changes to the tool, I'm wondering if you would add another way to zoom in/out in addition to the "+" and "-" keys. You know, it works on machines with a numeric keypad, but since last Christmas I have a brand new laptop and the zooming feature doesn't work on it. Maybe there is a workaround, but if not it would be nice to have zooming via other keys. Could you please add that to the "wish list"? Sure, that would be easy to do. I'm pretty sure there's a means of enabling the numeric keypad overlaid on the standard keyboard (perhaps an Fn switch or something like that), but it's no big deal to add alternate keys. Consider it on the list... it may take me a couple of days to get around to it. |
|
|
|
Apr 26 2006, 12:44 AM
Post
#84
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
Algorimancer, Speaking of "improvements"/changes to the tool, I'm wondering if you would add another way to zoom in/out in addition to the "+" and "-" keys. You know, it works on machines with a numeric keypad, but since last Christmas I have a brand new laptop and the zooming feature doesn't work on it. Maybe there is a workaround, but if not it would be nice to have zooming via other keys. Could you please add that to the "wish list"? Okay, here you go, AlgorimancerPG version 2.5: http://www.clarkandersen.com/RangeFinder.htm Now the PageUp/PageDown keys zoom in and out just like the numeric keypad +/- keys. Incidentally, I'm told that setting Num Lock will let you use the numeric keypad that's overlaid on a laptop's keyboard. The next thing I'd like to add to AlgorimancerPG is the capability to query the SQL server that holds the camera orientation information and use that data to transform the calculated coordinates accordingly. Unfortunately I have zero experience with SQL, so it may be a while before I get to it. If anyone has a C/C++ function lying around (ideally VC++ 2005) which can handle this, I'd be happy to drop it into my code. |
|
|
|
Apr 26 2006, 01:43 AM
Post
#85
|
|
![]() Senior Member ![]() ![]() ![]() ![]() Group: Moderator Posts: 3431 Joined: 11-August 04 From: USA Member No.: 98 |
Actually MMB doesn't access the SQL server directly. I have to do that manually and save the results to a file. Then MMB parses the saved file to get the data and generate its own metadata. The generated metadata is the only part of this that the end user gets, and even that they're usually not aware of. I wouldn't add the ability to automatically hit the SQL server, because we really really want that server to stay around, if you know what I mean.
|
|
|
|
Apr 26 2006, 01:49 AM
Post
#86
|
|
![]() Member ![]() ![]() ![]() Group: Members Posts: 656 Joined: 20-April 05 From: League City, Texas Member No.: 285 |
|
|
|
|
Apr 26 2006, 02:47 PM
Post
#87
|
|
![]() Dublin Correspondent ![]() ![]() ![]() ![]() Group: Admin Posts: 1799 Joined: 28-March 05 From: Celbridge, Ireland Member No.: 220 |
|
|
|
|
Apr 26 2006, 08:58 PM
Post
#88
|
|
|
Senior Member ![]() ![]() ![]() ![]() Group: Moderator Posts: 4280 Joined: 19-April 05 From: .br at .es Member No.: 253 |
Okay, here you go, AlgorimancerPG version 2.5: http://www.clarkandersen.com/RangeFinder.htm Now the PageUp/PageDown keys zoom-in and out just like the numeric keypad +/- keys. It works! Thanks a lot |
|
|
|
![]() ![]() |
|
|
|