Creating Video From Space Data
Tuesday, March 27th, 2012
Young fans of the Academy Award-winning movie “Hugo” may have been surprised to learn that movies once had to be projected onto a screen using a hand-cranked machine, and that color had to be added to each film by hand, frame by frame.
Today video comes to your computer or monitor with the click of a mouse. Or so it seems.
While the process is much more automated today, a lot of work still has to happen between capturing a moving image and displaying it to an audience. And doing that from space in near real-time is even more complicated.
It all begins and ends with data. First, you need a few things: a way to capture the image data (say, a camera); a way to transform that data into digital form; and a place to put it (say, a computer). That computer, in turn, will need a high-speed processor, lots of storage space and lots of memory, because it will also run the software used to process and edit the image data.
Finally, one of the most important components needed is context for the end user (say, a human). To be useful, data needs to have the how, the when and the where of its capture so that whoever uses it can understand it and compare it to information from other data.
From Pixels to Image
The quality of the images recorded begins with the quality of the digital camera, sensors and lenses used to capture the images. In space, the camera needs to be pretty tough to survive the harsh environment. (For more on that, see “Space Tests: UrtheCast Cameras Prepare for Flight.”)
In general, the images can be captured as raw data or as video. Raw retains all the data that is captured, but takes much more storage and much longer to process. Video discards some of the data so it is easier to process into usable form, but sacrifices some of the available nuance in light and darkness, color and detail. One significant difference is that once discarded, that data is gone forever.
How much data a camera captures depends on the equipment’s ability to discern variations in light. Those variations are recorded as picture elements, or pixels. The more pixels, the more data, and essentially the higher the resolution of the resulting image. Of course, the more data captured, the more storage is needed.
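As a rough illustration of that storage tradeoff, here is a back-of-the-envelope sketch. The sensor size, bit depth and frame rate below are illustrative assumptions, not the specifications of any actual space camera:

```python
# Back-of-the-envelope estimate of raw image storage needs.
# All numbers here are illustrative assumptions, not real camera specs.

def raw_frame_bytes(width, height, bits_per_pixel):
    """Uncompressed size of one frame, in bytes."""
    return width * height * bits_per_pixel // 8

# A hypothetical 12-megapixel sensor recording 12 bits per pixel:
frame = raw_frame_bytes(4000, 3000, 12)
print(frame / 1e6, "MB per raw frame")        # 18.0 MB per raw frame

# At 30 frames per second, one minute of uncompressed video:
minute = frame * 30 * 60
print(minute / 1e9, "GB per minute")          # 32.4 GB per minute
```

Numbers like these are why raw capture demands so much storage, and why compressed video, which throws data away, is often the practical choice.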
So, the image is captured as data and digitized by the camera, stored, and transmitted to a computer on Earth. There, software is used to manipulate the data to remove or enhance the bits of light and dark that make up the image, such as sharpening contrast or filtering out unnecessary data to make colors more vivid.
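One of the simplest such manipulations is a linear contrast stretch, which rescales pixel brightness so the darkest recorded value becomes black and the brightest becomes white. A minimal sketch in plain Python (real pipelines work on full two-dimensional images, usually with specialized libraries):

```python
def stretch_contrast(pixels):
    """Linearly rescale 8-bit pixel values to span the full 0-255 range.

    `pixels` is a flat list of brightness values; a real image would be
    a 2-D array, but the arithmetic is the same.
    """
    lo, hi = min(pixels), max(pixels)
    if lo == hi:                      # flat image: nothing to stretch
        return list(pixels)
    scale = 255 / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

# A dull, low-contrast strip of pixels spreads out to the full range:
dull = [100, 110, 120, 130, 140]
print(stretch_contrast(dull))   # -> [0, 64, 128, 191, 255]
```

The same idea, applied per color channel, is what makes hazy satellite imagery look crisp and vivid after processing.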
The edited data is usually stored in a format that can be universally understood by computers elsewhere – .jpg and .tiff files for still images and .mpeg or .mpv for video. The data can then be sent out, or streamed, to the end user. And depending on how powerful and efficient all that equipment is, it can be done in near real-time.
How beneficial that data is depends on the end user. And, of course, the imagination.
By AJ Plunkett
AJ Plunkett is a freelance writer in Virginia with experience in covering defense and aerospace industries as well as the military. AJ blogs via Contently.com.