I’ve known since RAW-format files were introduced that we could go from RAW to video, but not video to RAW.
How come?
In short, because video does not have as much color and grayscale information as a RAW file, so it would make no sense to up-convert.
NOTE: RAW is not an acronym. It’s not a noun. It should not be capitalized. But it is. Just another weirdness we get to live with.
A good analogy is that we can take a variety of ingredients and bake a cake. But, once that cake is baked, we can’t convert it back into the original ingredients. (Smile… This is an analogy. No cakes are actually harmed in the making of video.)
This is a digital image sensor: a photosensitive array, signal amplifier and computer in one. The shiny rectangle in the middle of the chip is the photosensitive part. It is made up of millions of individual photosites: minute, light-sensitive points that collect the data needed to make the image file.
Each photosite captures exactly one color. Photosites are arrayed in a “Bayer pattern” (left image above). This means it takes a 2×2 grid of photosites to create a single pixel containing all three colors (right image).
The process of converting tens of millions of photosites into millions of pixels is called “demosaicing,” or, sometimes, “deBayering.” In other words, this process converts the mosaic of a camera’s Bayer pattern into video pixels.
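To make this concrete, here is a toy sketch of that conversion in Python. It collapses each 2×2 Bayer cell (one red, two green, one blue photosite) into a single RGB pixel by averaging the two green values. This is only an illustration of the idea; real demosaicing algorithms interpolate far more cleverly, and no camera works exactly this way.

```python
# Toy demosaic: collapse each 2x2 Bayer cell into one RGB pixel.
# A simplified illustration, not any camera's actual algorithm.

def demosaic_2x2(bayer):
    """bayer is a list of rows; each entry is (channel_letter, intensity)."""
    rgb = []
    for y in range(0, len(bayer), 2):
        row = []
        for x in range(0, len(bayer[0]), 2):
            cell = [bayer[y][x], bayer[y][x + 1],
                    bayer[y + 1][x], bayer[y + 1][x + 1]]
            r = sum(v for c, v in cell if c == "R")
            b = sum(v for c, v in cell if c == "B")
            g = sum(v for c, v in cell if c == "G") / 2  # two green sites per cell
            row.append((r, g, b))
        rgb.append(row)
    return rgb

# One 2x2 Bayer cell: red and green on top, green and blue below.
bayer = [
    [("R", 200), ("G", 120)],
    [("G", 140), ("B", 80)],
]
print(demosaic_2x2(bayer))  # → [[(200, 130.0, 80)]]
```

Notice the ratio: four photosite values go in, one three-value pixel comes out, which is why “tens of millions of photosites” become “millions of pixels.”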
Where this demosaicing process occurs varies based upon the RAW format. Most RAW conversions are done after the image is captured by the camera sensor but before the file is recorded inside the camera.
Apple’s ProRes RAW saves the Bayer pattern directly to disk and defers demosaicing to the application software (i.e. Final Cut Pro) at the time of playback.
There are advantages to both approaches. Demosaicing in the camera means that high-end computers are not needed to process the image, while deferring it until playback provides the greatest flexibility in processing the image, but requires a powerful computer to do so smoothly.
BAKING THE CAKE
Theoretically, an image recorded using 4:4:4:4 sampling (meaning no chroma subsampling: full data in the red, green, blue and alpha channels) could be deconstructed back into a Bayer pattern.
But almost no video is recorded this way. Instead, to reduce file size, color values are dropped from most pixels (that’s the “sub-sampling” part). Once that happens, just as with an over-exposed image, there’s no way to reconstitute color data after it is removed.
Here’s the problem. The left image represents pixel data after demosaicing is complete. Every pixel contains a single red, green and blue value.
But most video is recorded using 4:2:2 sub-sampling; that’s the middle image. Here, two pixels share one red and one blue value, while each pixel retains a unique green (grayscale) value.
4:2:0 (or 4:1:1) sub-sampling is worse. This is represented by the right image. Here, four pixels share one complete color value. Because we don’t know which colors were lost, we can’t reconstruct the RAW source file from any of these sub-sampled video files.
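The data loss described above can be sketched in a few lines of Python. The example below uses a (luma, cb, cr) triple per pixel (the names are illustrative); after 4:2:2-style subsampling, each horizontal pair of pixels carries the first pixel’s color values, and the second pixel’s original color is gone for good.

```python
# Toy illustration of 4:2:2-style chroma subsampling: every pixel keeps
# its own luma (brightness), but each horizontal pair shares one set of
# color values. The discarded color values cannot be recovered afterward.

def subsample_422(pixels):
    """pixels is a list of (luma, cb, cr) triples for one row of the image."""
    out = []
    for i in range(0, len(pixels), 2):
        y0, cb, cr = pixels[i]          # color taken from the first pixel
        out.append((y0, cb, cr))
        if i + 1 < len(pixels):
            y1, _, _ = pixels[i + 1]    # this pixel's own color is discarded
            out.append((y1, cb, cr))
    return out

row = [(100, 30, 40), (110, 35, 45), (120, 50, 60), (130, 55, 65)]
print(subsample_422(row))
# → [(100, 30, 40), (110, 30, 40), (120, 50, 60), (130, 50, 60)]
```

Once the 35/45 and 55/65 values are thrown away, nothing in the file tells us what they were, which is exactly why we can’t work backward from sub-sampled video to a RAW file.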
SUMMARY
The easiest way to think of a RAW file is as a camera-native source file whose composition varies depending upon the image sensor used by that camera and the firmware that processes the data inside the camera.
Technically, we could recreate a RAW file; it’s just data, after all. But because of all the color information missing from the video file, we could not fill it with accurate data. In general, you get almost the same exposure control with a log file as with RAW, without all the intermediate hassles.
RAW images give us the greatest flexibility in determining the look of an image. But once that image is created, we can’t go back.