Worldwide space fans are starting to play with the data!
Since Perseverance landed successfully on Thursday, 18 February, I’ve been enjoying watching the space fan community come up with ways to download, display, and process those pictures. There’s been a pause in the downlink of new images during sols 5 through 8, as Perseverance has upgraded its flight software. But we should start seeing new pictures late Saturday night my time (early Sunday morning for most of the world). Image processors, start your engines!
Following are some places to check out Perseverance pics, and then I’ll share a quick guide to the engineering cameras. I'm going to have a chance to interview JPL's Justin Maki on Monday evening to ask some questions about the engineering cameras and the raw images website -- feel free to ask your own questions below and if I can't answer them, I'll ask Justin.
Where to Find the Pictures:
· Raw images (official NASA website)
· Perseverance Image Explorer by mickmis, which colorizes and displays images either by deBayering them or by producing RGB combos
· Perseverance Mars Rover Raw Image Playground by Robert Cadena
· Captioned, press-released images (official NASA/JPL website)
· Check my Twitter list of awesome amateurs for their processed pics!
Where to Learn Stuff About the Cameras and Image Processing:
· The paper describing the engineering cameras (Maki et al 2020)
· Blog by me and Melissa Rice about Mastcam-Z’s color vision
· Planetary Report article about The Mastcam-Z calibration target
· Blog by Jim Bell about the Mastcam-Z file naming convention
· The paper describing the Mastcam-Z cameras (Bell et al 2021)
· The paper describing the calibration of the Mastcam-Z cameras (Hayes et al 2021)
· A paper about the CAHVOR(E) camera model, a bunch of numbers that describe pointing and geometric distortion in cameras (Di and Li 2004)
Software tools for image processing (other than Photoshop and GIMP):
· Bryce by Million Concepts: compose and decompose RGB combos
A Bit of Info About the Engineering Cameras:
The mission has (had) 25 cameras. Of these, 4 were on landing hardware (3 cameras pointed up at the parachute from the backshell and 1 pointed down from the descent stage at the rover), 3 are landing cameras mounted to the rover (1 upward-looking and 2 downward-looking), 9 are surface-operations engineering cameras (2 Navcams, 6 Hazcams, and 1 Cachecam), 2 are on the helicopter, and 7 are rover science cameras (two Mastcam-Zs, one SuperCam Remote Micro Imager…I'll delve into these one by one as images start landing on Earth).
I explained the engineering cameras in this Twitter thread. Most important details: all use the same detector, a CMV-20000 CMOS detector with a Bayer filter, 5120 x 3840 pixels, digitized at 12 bits/pixel, with exposure times from 411 to 3277 milliseconds.
If you’ve never heard of a Bayer filter, check out this post I wrote about the way it worked on Curiosity Mastcam.
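To make the Bayer idea concrete, here's a minimal sketch of how a Bayer color filter array divides up a detector. It assumes an RGGB layout (the actual filter ordering on the CMV-20000 may differ); the point is just that every 2×2 cell of pixels contains two green-filtered pixels and one each of red and blue:

```python
import numpy as np

# Sketch of a Bayer color filter array, RGGB layout assumed:
# each 2x2 cell of the detector sees  R G
#                                     G B
def bayer_channel_masks(height, width):
    """Return boolean masks marking which pixels sense red, green, or blue."""
    rows = np.arange(height)[:, None]
    cols = np.arange(width)[None, :]
    red = (rows % 2 == 0) & (cols % 2 == 0)
    blue = (rows % 2 == 1) & (cols % 2 == 1)
    green = ~(red | blue)
    return red, green, blue

r, g, b = bayer_channel_masks(4, 4)
# In any even-sized patch, half the pixels are green and a quarter each are red and blue.
assert g.sum() == 8 and r.sum() == 4 and b.sum() == 4
```

That 2:1:1 green:red:blue ratio is why the downsampling modes described below treat green differently from red and blue.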
In the Twitter thread, I talk about how these images are usually downsampled before return to Earth, and that the details are complex and I will blog about it later. So, here we go.
Making Big Images Smaller
5120 by 3840 pixels is not only a lot of pixels to transfer between Mars and Earth, it’s a lot of pixels to transmit between camera and electronics. A particular problem is that the camera data interface in the rover computer is a reuse of Curiosity’s design, which itself reused the Spirit and Opportunity computer design. None of the earlier rovers had any cameras that produced images more than 1 megapixel in size. So the 20-megapixel images of the Perseverance engineering cameras actually have to be cut into tiles to be transferred to the rover computer. The interface can manage tiles of 1280 by 960 pixels; 16 of them cover a single full-resolution engineering camera image. Most of the time, none of the engineering cameras transmits a whole 20-megapixel image. Instead, they either send a downsampled image (that is, one that’s been resized to 1280 by 960), or just one full-resolution tile covering part of the scene.
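The tile arithmetic above is easy to check yourself. This back-of-the-envelope snippet (not flight code, just the numbers from the text) works out how many 1280 × 960 tiles cover the full frame:

```python
# How many 1280x960 tiles cover the full 5120x3840 engineering-camera frame?
FULL_W, FULL_H = 5120, 3840   # CMV-20000 detector size in pixels
TILE_W, TILE_H = 1280, 960    # size the camera data interface can handle

tiles_across = FULL_W // TILE_W   # 4 tiles across the width
tiles_down = FULL_H // TILE_H     # 4 tiles down the height
total_tiles = tiles_across * tiles_down

print(tiles_across, tiles_down, total_tiles)  # 4 4 16
```

So a 4 × 4 grid of tiles is exactly the 16 mentioned above, with no leftover slivers, since both dimensions divide evenly.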
There are 10 different ways that the cameras can pass information to the rover: 10 camera operation modes. None of them produces a picture that arrives in color straight from the camera; all of them show up as black and white in rover memory.
· Mode 0: Full-scale, color (Bayer pattern visible)
· Mode 1: Half-scale, monochrome, using only green pixels from Bayer pattern (each pixel is an average of 2 original green pixels)
· Modes 2 and 3: Half-scale, monochrome, using only red or blue pixels from Bayer pattern, respectively (each 2 x 2 cell has just 1 red and 1 blue pixel, so values are the original pixel values, not averages)
· Mode 4: Half-scale, panchromatic, each pixel an average of 4 original pixels (2 G, 1 R, 1 B)
· Modes 5 through 7: Quarter-scale, monochrome, using only green, red, or blue pixels (each pixel is an average of 8 original green or 4 original red or blue pixels)
· Mode 8: Quarter-scale, panchromatic, each pixel an average of 16 original pixels
· Mode 9: Eighth-scale, panchromatic, each pixel an average of 64 original pixels
Note that Modes 0 through 8 all produce data in pieces that are at most 1280 by 960 in size (the full-scale and half-scale modes require multiple tiles if they’re going to show the whole detector), while Mode 9 produces 640 by 480 thumbnails.
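As an illustration of what two of those modes are doing to the pixels, here is a sketch of Mode 1 (half-scale green) and Mode 4 (half-scale panchromatic) applied to a synthetic Bayer mosaic. An RGGB layout is assumed, and the real onboard arithmetic may differ; this only shows the averaging scheme described above:

```python
import numpy as np

def mode1_half_scale_green(mosaic):
    """Mode 1 sketch: average the two green pixels in each 2x2 Bayer cell."""
    g1 = mosaic[0::2, 1::2].astype(float)  # green pixels on red rows
    g2 = mosaic[1::2, 0::2].astype(float)  # green pixels on blue rows
    return (g1 + g2) / 2

def mode4_half_scale_pan(mosaic):
    """Mode 4 sketch: average all 4 pixels (2 G, 1 R, 1 B) in each 2x2 cell."""
    h, w = mosaic.shape
    cells = mosaic.reshape(h // 2, 2, w // 2, 2)
    return cells.astype(float).mean(axis=(1, 3))

# A flat gray scene: every pixel reads 100 regardless of filter color.
mosaic = np.full((3840, 5120), 100, dtype=np.uint16)

# Both modes halve each dimension: 5120x3840 becomes 2560x1920.
assert mode1_half_scale_green(mosaic).shape == (1920, 2560)
assert mode4_half_scale_pan(mosaic).shape == (1920, 2560)
```

The quarter-scale modes (5 through 8) are the same idea applied over 4 × 4 blocks instead of 2 × 2, and Mode 9 averages 8 × 8 blocks.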
Finally, the cameras are capable of one more trick: in-camera co-adding (that is, stacking) of 2, 4, 8, or 16 images. Co-adding is useful for low-light imaging (like in deep shadows or at night) because it allows the camera to improve the signal-to-noise ratio of an image.
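The benefit of co-adding is easy to demonstrate: averaging N frames of the same scene cuts random noise by roughly the square root of N. This toy simulation (my own illustration, not a model of the actual camera electronics) stacks 16 noisy frames:

```python
import numpy as np

# Toy demonstration of why co-adding helps in low light:
# averaging N frames reduces random noise by about sqrt(N).
rng = np.random.default_rng(0)
true_signal = 50.0    # the "real" brightness of a flat scene
n_frames = 16         # maximum in-camera stack depth mentioned above
noise_sigma = 10.0    # per-frame random noise

frames = true_signal + rng.normal(0.0, noise_sigma, size=(n_frames, 256, 256))
single = frames[0]
stacked = frames.mean(axis=0)  # co-add (stack) 16 exposures

print(round(float(single.std()), 1))   # noise of one frame, ~10
print(round(float(stacked.std()), 1))  # after stacking 16, ~10/4 = 2.5
```

With 16 frames the noise drops by about a factor of 4, which is why stacking is so valuable in deep shadows or at night.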
One funky implication of the way that the engineering cameras transmit data to the rover is that a single exposure can be returned as a nested set of images at different resolutions, which the team calls a “context/targeted strategy”. It reproduces something that Curiosity’s mismatched Mastcams did: take a high-resolution (right-eye) image and a lower-resolution (left-eye) image simultaneously, putting a detailed image in a wider context. But there's a crucial difference: on Perseverance, both such images come from the same eye, and it’s easy to shoot a stereo pair (that is, two nested image pairs, one from each eye). That nested approach will be used in typical operations-support imaging tasks like taking drive-direction and survey panoramas.
Stay tuned for data!