This article is an excerpt from Gregory Hallock Smith’s book, Camera Lenses: From Box Camera to Digital, published by SPIE Press in 2006. In this excerpt, Dr. Smith discusses the different camera types used on the Mars Exploration Rovers, detailing their purpose as well as their unique specifications.
Authored By Dr. Gregory Hallock Smith
Gregory Hallock Smith received his Ph.D. in 1972 from the Optical Sciences Center, University of Arizona. He specializes in lens design and designed all of the camera lenses for the Mars Exploration Rovers.
His book, Camera Lenses, covers a wide range of topics, from basic optical geometry to the manufacture of cameras through the years. In each section, the author’s personal favorite images taken with the rover cameras are shown. All photos are courtesy of NASA/JPL-Caltech.
For more information on the book, or to purchase the full text, please visit SPIE Publications.
Chapter 31: The Mars Rover Camera Lenses
The exploration of Mars has become a high-priority goal for the U.S. space agency, NASA. Recent high-resolution images from spacecraft orbiting the planet indicate that liquid water may once have flowed there, and perhaps it still occasionally does. The presence of liquid water suggests the possibility of life, either now or in the past. This exciting prospect has spurred great interest in taking a closer look directly on the Martian surface itself.
Thus, on June 10 and July 7, 2003, NASA launched on separate Delta II rockets two identical Mars Exploration Rovers (MER). For several months they coasted nearly halfway around the Solar System while orbiting outward. On January 3 and 24, 2004, they landed using airbags at two different locations on opposite sides of the planet. MER-A (named Spirit) landed in Gusev Crater, thought to have once been a large Martian lake. MER-B (named Opportunity) landed on Meridiani Planum, thought to have once been a Martian sea. After emerging from their landing enclosures (which were of no further use), these six-wheeled robotic geologists embarked on an extended period of exploring and data gathering. The results so far at both locations indicate that Mars indeed once did have large amounts of liquid water. Early in its history, Mars was a much warmer and wetter place than it is today. At the time of this writing, after more than an Earth-year on Mars, both rovers are still healthy and active. Their mission has been very successful and is still continuing.
A major part of a rover's scientific and engineering equipment is its set of cameras (its electronic eyes). Each rover carries a total of nine cameras, each having one of four different lens types. The cameras serve two functions. First, they allow people on Earth to see what is out there for scientific evaluation and discovery. Second, the operation of the rovers combines Earth-based control and target selection with on-board autonomous navigation. The rovers must be smart enough to avoid getting into trouble. Crucial for success (even survival) is their being able to see what is around them as they roam. These spacecraft are prime examples of machine vision plus state-of-the-art artificial intelligence, coordinated with a little help from Mission Control back home on Earth.
The author was privileged to have been asked to design all four lens types for these cameras. They are described here to illustrate how the classic camera lenses can be successfully adapted to new and different applications in sometimes very exotic situations.
31.1: The Cameras
The four types of cameras on board the Mars Exploration Rovers are called the Panoramic Camera (PanCam), the Navigation Camera (NavCam), the Hazard Avoidance Camera (HazCam), and the Microscopic Imager (for close-up views). The lenses for these cameras have to be small, simple, and effective, and they must survive both the trip to Mars and conditions on Mars. The most has to be done with the least. Thus the fewest possible lens elements are used. Only two types of optical glass (one crown and one flint) are used throughout (with one exception). All optical surfaces are spherical or flat (no aspheres). And no cemented surfaces are allowed (these could break apart in the Martian cold). There can be no mechanical shutters, variable aperture stops, or refocusing mechanisms that could malfunction. Short focal lengths (to match the CCD format), small apertures (to allow great depth of field), and electronic shutters (frame-transfer CCDs) make these restrictions realistic. To facilitate polishing, anti-reflection coating, and mechanical mounting, the lens elements have been made somewhat oversized. In no case is any mechanical vignetting (off-axis beam clipping) allowed.
A temperature of 23°C and an air pressure of 0.95 atmosphere were adopted during the optical design process on the assumption that these lenses have to be made, tested, and calibrated here on Earth. However, image quality was reevaluated analytically for the Martian environment of 10°C, -22.5°C, and -55°C, all at 0.01 atmosphere. In no case was a significant change found when going to Mars. Nighttime temperatures on Mars can fall to -120°C, but the cameras need only survive, not operate, in those conditions.
All the cameras use the same type of solid-state, silicon-based CCD image sensor and associated electronics. This flat CCD is a 1024 x 2048 array of 12 x 12 µm pixels. It is a frame-transfer type, as described earlier in Section A3.2. The top 1024 x 1024 pixels form the photosensitive area, and the bottom 1024 x 1024 pixels form a covered frame-transfer buffer. Thus the active imaging area is 12.29 mm square with a diagonal of 17.38 mm. (Relatively speaking, in photography this is not very small. The diagonal of a 16 mm movie camera frame is 12.63 mm, and the original 8 mm movie frame had a diagonal of only 5.95 mm.) The CCDs themselves are sensitive to wavelengths ranging from about 0.40 µm to 1.1 µm. Filters plus the CCD response determine the specific wavebands.
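The quoted sensor dimensions follow directly from the pixel count and pitch; a quick check:

```python
# Sanity check of the MER CCD geometry quoted in the text.
# Pixel pitch and active array size are taken from the text; the rest is derived.
import math

pixel_pitch_um = 12.0   # 12 x 12 um pixels
active_pixels = 1024    # 1024 x 1024 photosensitive region

side_mm = active_pixels * pixel_pitch_um / 1000.0   # 12.288 mm
diag_mm = side_mm * math.sqrt(2.0)                  # ~17.377 mm

print(f"active area: {side_mm:.2f} mm square, diagonal {diag_mm:.2f} mm")
# -> active area: 12.29 mm square, diagonal 17.38 mm
```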
Each rover has a stereo pair of PanCams. These are the narrow-angle high-resolution science cameras for remote sensing. To make their panoramas, many individual pictures are joined together to form a mosaic.
To get a good view, the PanCams are mounted at about human eye level on top of a mast. Full 360 degree azimuth and ±90 degree elevation pointing is provided by a two-axis turret on the mast. Inside the mast are mirrors to send light down to the infrared Mini-Thermal-Emission Spectrometer, which is protected in the warmer interior of the rover.
The distance between the two PanCams is 280 mm. This separation will yield hyper-stereo images for better depth perception (the stereo separation or interpupillary distance between human eyes ranges between about 54 mm and 72 mm).
The two PanCams are also the color cameras; each has a filter wheel in front. Each filter wheel has 8 positions for filters of various wavebands covering the sensitivity range of the CCD. These allow multi-spectral imaging for quantitative geological and atmospheric studies. The filter selection is different on the two cameras, although there is some duplication to make the color stereo images. A sapphire window seals against dust and protects the filter wheel mechanism.
Figure 31.1 PanCam lens: 42.97 mm, f/20, +/- 11.25 deg; (a) layout and (b) spot diagram. Filter wheel wavelength coverage 0.40 to 1.1 µm. Spot size measured in microns.
Figure 31.1(b) is the polychromatic spot diagram. The scale bar length is 24 µm, which is the dimension of one side of a 2 x 2 matrix of pixels that defines one limiting-resolution detector element. The circles indicate the diameter of the Airy diffraction disk for a wavelength of 0.43 µm. The main on-axis aberration is secondary longitudinal color. Off-axis, there is added astigmatism and secondary lateral color. But even at the shortest wavelength, these lenses are diffraction limited. Distortion is less than 0.01% all across the field. Image illumination at the edge of the field is 93% of the central value (due to cosine-fourth fall-off).
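Two of the numbers quoted above can be reproduced from first-order optics: the Airy-disk diameter from the standard 2.44λN formula, and the corner illumination from the cosine-fourth law at the 11.25-degree semi-diagonal field angle. A minimal check:

```python
# Rough check of two figures in the text: the Airy-disk diameter at f/20
# for the shortest wavelength, and the cos^4 illumination fall-off at the
# corner of the PanCam field. Both formulas are standard first-order optics.
import math

wavelength_um = 0.43     # shortest design wavelength
f_number = 20.0          # PanCam working f-number
half_field_deg = 11.25   # semi-diagonal field angle

airy_diam_um = 2.44 * wavelength_um * f_number
falloff = math.cos(math.radians(half_field_deg)) ** 4

print(f"Airy disk: {airy_diam_um:.1f} um")     # -> Airy disk: 21.0 um
print(f"corner illumination: {falloff:.0%}")   # -> corner illumination: 93%
```

The ~21 µm Airy diameter is comparable to the 24 µm two-pixel resolution element, which is why the lens can be diffraction limited and still match the detector.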
The PanCam lenses are Cooke Triplets as shown in the layout in Figure 31.1(a). Focal length is 42.97 mm giving a field of view of 16 x 16 degrees, or 22.5 degrees on the diagonal (the equivalent of a 109 mm lens on a 35 mm camera). They operate at a constant opening of f/20, view objects in focus whose distances range from infinity to 1500 mm (the depth of field), and are fixed-focused at 3000 mm (the hyperfocal distance). The two plane-parallel plates in front of the lens in the layout represent the filter and sapphire window.
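The stated fixed-focus distance (3000 mm) is the hyperfocal distance, which is consistent with the quoted near limit of 1500 mm, since focusing at the hyperfocal distance H places the near limit of the depth of field at H/2. Working backwards through the usual relation H = f²/(Nc) + f also reveals the circle of confusion c implied by these numbers; that value is an inference, not something stated in the text.

```python
# Consistency check of the PanCam focus numbers. The circle of confusion c
# is inferred from the stated hyperfocal distance, not given in the text.
f_mm = 42.97    # PanCam focal length
N = 20.0        # f-number
H_mm = 3000.0   # stated hyperfocal / fixed-focus distance

near_limit_mm = H_mm / 2.0             # focusing at H puts the near limit at H/2
c_mm = f_mm**2 / (N * (H_mm - f_mm))   # implied circle of confusion

print(f"near limit: {near_limit_mm:.0f} mm, implied c: {c_mm * 1000:.0f} um")
# -> near limit: 1500 mm, implied c: 31 um
```

The implied circle of confusion of roughly 31 µm is about two and a half pixels, a plausible blur criterion for a two-pixel resolution element.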
Figure 31.2: This is the famous "Everest Panorama" taken with the PanCam on the Mars Rover.
Located next to the PanCams in the turret on top of the mast is a stereo pair of moderately wide-angle NavCams. These are engineering cameras intended primarily to help direct the rover as it drives across the Martian surface. Nevertheless, they have been giving much science data too.
The NavCams point in the same direction as the PanCams and have a stereo separation of 200 mm. Lens focal length is 14.67 mm giving a field of view of 45 x 45 degrees, or 60.7 degrees on the diagonal (the equivalent of a 37 mm lens on a 35 mm camera). They operate at f/12, view objects whose distances range from infinity to 500 mm, and are fixed-focused at 1000 mm. They are monochrome (black-and-white) systems; a pair of absorption filters (Schott OG590 and KG5) working in series gives a reddish waveband extending from roughly 0.60 µm to 0.80 µm. This matches the predominant color of the Red Planet (although, technically speaking, the color is a light-to-moderate yellowish-brown).
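The "35 mm equivalent" focal lengths quoted for the PanCam and NavCam can be reproduced by assuming the equivalence is taken on the diagonal field of view, matched against the 43.3 mm diagonal of a 36 x 24 mm film frame:

```python
# Reproducing the 35 mm-equivalent focal lengths quoted in the text,
# assuming equivalence is defined by matching the diagonal field of view.
import math

FULL_FRAME_HALF_DIAG_MM = math.hypot(36.0, 24.0) / 2.0   # ~21.63 mm

def equiv_focal_mm(diag_fov_deg: float) -> float:
    """35 mm-equivalent focal length for a given diagonal field of view."""
    return FULL_FRAME_HALF_DIAG_MM / math.tan(math.radians(diag_fov_deg / 2.0))

print(f"PanCam (22.5 deg diag): {equiv_focal_mm(22.5):.0f} mm")   # -> 109 mm
print(f"NavCam (60.7 deg diag): {equiv_focal_mm(60.7):.0f} mm")   # -> 37 mm
```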
The lens design is a cross between a Hologon and a Biogon and is shown in the layout in Figure 31.3(a). A true Hologon with a deep groove around the center element was rejected for this application because the narrow waist might break apart during the vibrations of launch and shocks of landing. For safety, the center element was divided, and a conventional fixed aperture stop was placed between the two halves.
Figure 31.3(b) is the polychromatic spot diagram. The scale bar length is again 24 µm. The circles indicate the diameter of the Airy diffraction disk for a wavelength of 0.70 µm. On-axis, the main aberration is primary longitudinal color. There are virtually no additional aberrations off-axis. Color is not corrected because the lens has insufficient degrees of freedom. To do so would require more elements (some typically cemented together) in the center group. But that is not allowed here. It is also unnecessary because image quality is already easily diffraction limited, given the slow f-number and short focal length. Distortion is less than 0.03%. Image illumination at the edge of the field is 63% of the central value.
Figure 31.3 NavCam lens: 14.67 mm, f/12, +/- 30.35 deg; (a) layout and (b) spot diagram. Wavelength passband 0.60 to 0.80 µm. Spot size measured in microns.
Figure 31.4: (a) A shot of Burns Cliff, taken with the NavCam aboard the rover Opportunity. (b) Breathtaking view of Mars from the top of Husband Hill taken by the NavCam aboard the rover Spirit.
In the lower carriage at both ends of each rover, there are stereo pairs of HazCams. One pair faces forward, and one pair faces rearward. These four engineering cameras are another part of the onboard autonomous navigation system. Their primary purpose is to reveal dangerous objects to be avoided in the path of a rover as it drives in either direction. But like the NavCams, the HazCams have been giving much science data too.
The HazCam optics are full-frame fisheye lenses. Their field coverage is 124 degrees from side to side and 180 degrees across the diagonal of the square CCD format, that is, the diagonal field of view is a full hemisphere. For such an extremely wide field, conventional distortion correction is impossible. Instead, equal meridional angle increments in the object scene are mapped (ideally) as equal linear increments on the image plane. When looking at the whole scene, a fisheye lens unavoidably produces a great amount of barrel distortion. This distortion is actually just the consequence of projecting a hemisphere onto a plane. Of course, the mapping function can be calibrated. Of more practical importance, small parts of the scene, even those near the field edge, are imaged quite accurately without needing a lot of subsequent image processing. But the horizon in most pictures appears strongly curved.
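The mapping described above is the equidistant (or "f-theta") fisheye projection, in which the image radius grows linearly with field angle. A minimal sketch, using the HazCam focal length quoted in the next paragraph:

```python
# Sketch of the ideal equidistant fisheye mapping described in the text:
# equal angle increments in the scene map to equal radial increments on the CCD.
import math

f_mm = 5.58   # HazCam focal length at the center of the field

def image_radius_mm(field_angle_deg: float) -> float:
    """Ideal equidistant mapping: r = f * theta (theta in radians)."""
    return f_mm * math.radians(field_angle_deg)

# At the 90 deg edge of the hemispherical diagonal field:
r_edge = image_radius_mm(90.0)
print(f"r at 90 deg: {r_edge:.2f} mm")   # -> r at 90 deg: 8.77 mm
```

The ideal 8.77 mm radius at 90 degrees is within about 1% of the 8.69 mm half-diagonal of the CCD, consistent with the stated departure from perfect fisheye mapping of less than 3%.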
The HazCam lens design is shown in the layout in Figure 31.6(a). Focal length is 5.58 mm in the center of the field. They operate at f/15, view objects whose distances range from infinity to 200 mm, and are fixed-focused at 400 mm. Stereo separation is 100 mm. They are monochrome systems; as with the NavCams, a pair of absorption filters (Schott OG590 and KG5) working in series gives a reddish waveband extending from roughly 0.60 µm to 0.80 µm. Absorption filters are used; quasi-reflective interference filters might produce too much stray light.
Figure 31.6(b) is the polychromatic spot diagram. The scale bar length is 36 µm, the size of a 3 x 3 matrix of pixels. The circles (which are radially stretched into ellipses off-axis) indicate the size of the Airy diffraction disk for a wavelength of 0.70 µm. As with the NavCams, primary longitudinal color has not been corrected, but it is small enough to not matter. At the edge of the field there is a complex mix of several aberrations, but lateral color is virtually zero. Amazingly, the lens is diffraction limited all across its field. Departures from perfect fisheye mapping are less than 3%. Image illumination at the edge of the field is 36% of the central value (off-axis pupil growth, shift, and tilt, plus the large barrel distortion, prevent it from going to zero).
Figure 31.5: This photo was taken with the HazCam and shows the strong barrel distortion of a fisheye lens. Note the instrument arm in action.
31.5: Microscopic Imager
The last of the four camera types is another science camera. It is the Microscopic Imager, one of which is located on the end of each rover's arm (Instrument Deployment Device). The camera can be maneuvered to get a detailed look at rocks and other nearby objects of interest. Also on the arm are a Mössbauer Spectrometer, Alpha-Particle-X-Ray Spectrometer, and Rock Abrasion Tool. The four instruments are mounted on a rotating turret so one or another can be brought into place as needed. The front pair of HazCams is particularly useful in verifying how the arm is positioned.
The term Microscopic Imager may be a misnomer; it is not a compound microscope. The object being viewed is larger than its image, and thus it is what photographers would call a macro close-up lens. It serves the same function as a geologist's hand-magnifier loupe.
The lens design is a Cooke Triplet and is shown in the layout in Figure 31.7(a). Focal length is 20.14 mm. Optical magnification is 2.5-to-1 (or 1-to-0.4). Thus a 30.7 mm square object area is imaged onto the 12.29 mm square CCD, and a 30 µm object is imaged onto each 12 µm pixel (giving 60 µm limiting object resolution on a pair of adjacent pixels). At its finite working distance, the lens gathers an f/37.5 cone of light and forms an image having an f/15 cone of light. Stereo imaging is possible by taking one picture, shifting the camera sideways a bit, and taking a second picture. A sapphire window protects the lens in case it accidentally bumps against anything during its Martian adventures. The in-focus working clearance between object and window is 63 mm, and the total length between object and image is 100 mm.
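The close-up numbers in this paragraph are all tied together by the 2.5:1 (object-to-image) magnification; a quick consistency check:

```python
# Consistency check of the Microscopic Imager close-up numbers.
# Everything scales by the 2.5:1 (object:image) magnification, m = 0.4.
m = 0.4                 # image size / object size
ccd_side_mm = 12.29     # active CCD area (square)
pixel_um = 12.0         # pixel pitch
image_f_number = 15.0   # cone of light forming the image

object_side_mm = ccd_side_mm / m          # square object area covered
object_pixel_um = pixel_um / m            # object footprint of one pixel
object_f_number = image_f_number / m      # cone of light gathered at the object

print(f"{object_side_mm:.1f} mm field, {object_pixel_um:.0f} um/pixel, "
      f"f/{object_f_number:.1f} gathered")
# -> 30.7 mm field, 30 um/pixel, f/37.5 gathered
```

The object-side and image-side cone ratios follow from the Lagrange invariant: reducing the image by a factor of 2.5 widens the image-side cone by the same factor.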
Figure 31.6 HazCam lens: 5.58 mm, f/15, +/- 90 deg; (a) layout and (b) spot diagram. Wavelength passband 0.60 to 0.80 µm. Spot size measured in microns.
The system is monochrome with an absorption filter (Schott BG40) giving a waveband (width and shape) similar to the human photopic visual response. Nevertheless, some color information can be obtained. To protect the Microscopic Imager when not in use, there is a cover that can be closed. To avoid the chance of a single-point failure by a stuck cover, the cover is transparent. The lens can image right through the cover if need be. The cover is also tinted orange. Combining images made with the cover open and closed allows a quasi-color image to be synthesized (somewhat like the two-color Technicolor system of the 1920s described in Section A2.7).
Figure 31.7 Microscopic Imager lens: 20.14 mm, f/15 (when working at 2.5:1 demagnification), 30.7 mm square object area; (a) layout and (b) spot diagram. Photopic wavelength response. Spot size measured in microns.
Figure 31.8 (a) Mars Rover diagram. (b) Artist’s rendition of the Mars Rover.
Figure 31.7(b) is the polychromatic spot diagram. Again the scale bar length is 24 µm. The circles indicate the diameter of the Airy diffraction disk for a wavelength of 0.54 µm. On-axis, the main aberration is secondary longitudinal color. Off-axis, there is added secondary lateral color and astigmatism. Nevertheless, all the images are diffraction limited at best focus. Depth of field is estimated to be at least 3 mm on each side of best focus. In practice, the usable field depth may be considerably greater. When in use on Mars, multiple images of rough surfaces are sometimes taken, with the camera distance incrementally stepped, to ensure that all parts are recorded in good focus. Distortion is less than 0.01%. Image illumination at the edge of the field is 89% of the central value.
Figure 31.9: This is a close-up photo of the Mars terrain taken with the Microscopic Imager aboard the robotic rover.
31.6: SunCam and Descent Camera
It is necessary that the absolute three-axis orientation of a rover be known to enable its high-gain radio antenna dish to be accurately pointed at Earth. The north heading must also be known while driving on the Martian surface. With no global magnetic field on Mars to activate a compass, this information is derived from the apparent angular direction of the Sun relative to the rover's body (with help from a vertical-direction sensor and knowledge of the rover's location on the Martian surface and the time of day). Shortly before a radio transmission, the Sun's position is observed and used to recalibrate the fiber-optic laser gyro in the inertial navigation system.
Originally, the direction of the Sun was to be measured by a single special camera on each rover called the SunCam. It was to be mounted next to the antenna dish. It was to have the same type of lens as the NavCam, except an extremely strong neutral-density filter would be placed in front. However, as so often happens when building spacecraft, the rovers were exceeding their weight budget. Although the SunCams had already been built, to reduce weight they were deleted. Their function was taken over by the PanCams. On each filter wheel, one of the 8 positions now has a strong neutral-density filter for viewing the Sun. These solar observations are also useful in determining changes in atmospheric transparency, such as from dust clouds.
However, the SunCams did not go to waste; they later became the Descent Cameras. The neutral-density filters were removed and one camera was remounted on the bottom of each landing module facing down. Shortly before bounce-down on Mars, the Descent Camera took three images of the ground in quick succession and immediately compared them on-board. If a significant sideways motion of the spacecraft was detected that might tear the airbags on landing, the maneuvering rockets could take it out. This capability was indeed needed for the first rover, Spirit, although not for the second, Opportunity.
The optical prescriptions for the four Mars rover lenses are in the Appendix of the book, along with the prescriptions for all the other lens examples in this book.
The work described here on the design of the camera lenses for the Mars Exploration Rovers was funded by U.S. taxpayers and carried out at the Jet Propulsion Laboratory (JPL), California Institute of Technology, under a contract with the National Aeronautics and Space Administration (NASA). The author wishes to thank the many people at JPL and elsewhere who worked on this project and made it a huge success. In addition to the author, four of the lead people for the lenses were:
Edward C. Hagerott and Lawrence M. Scherr
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109
Kenneth E. Herkenhoff
U.S. Geological Survey, Flagstaff, AZ 86001
James F. Bell III
Department of Astronomy, Cornell University, Ithaca, NY 14853
Gregory Hallock Smith et al., "Optical Designs for the Mars '03 Rover Cameras," Proceedings of SPIE, Vol. 4441, pp. 118-131, July 2001.
 Gregory Hallock Smith, Practical Computer-Aided Lens Design, Willmann-Bell, Richmond, 1998.
 Rudolf Kingslake, A History of the Photographic Lens, Academic Press, San Diego, 1989.