The "Three Formats" of Filmmaking

Posted by Larry

Easily a third of all the emails I get every day deal, in one form or another, with video formats, image quality, and output. Whether you edit with Final Cut, Premiere, Avid, or any other video editing software, the issues are the same.

In the “olde days” of standard definition, life was easy. We shot DV or DigiBeta, edited in the format we shot, and output that same format to videotape.

In these new days of HD, life isn’t so easy. The number of video formats (also called codecs) that we need to deal with is beyond the ability of one person to track. There seem to be infinite combinations of codecs, frame sizes, frame rates, resolutions, and aspect ratios. What should we shoot? What should we edit? What should we export? It is very confusing.

As videographers, filmmakers and editors, we want that elusive “perfect mix” of high quality, high speed and small file sizes. Sadly, like the famous triangle of “Good – Fast – Cheap,” we can only pick two. The good news, though, is that our choices change as we move through the editing process.

I call this “The Three Formats of Filmmaking,” which is best explained with an analogy.


Imagine that I’m standing next to a bubbling mountain brook. In my hand is a plastic, one-cup measuring cup. I dip that measuring cup into the bubbling mountain brook and fill it to the brim with bubbling mountain brook water.

Now, into that cup I add food coloring, spices, sugar and stir rapidly to mix it all together. Well, the measuring cup is totally full, so as I add ingredients water sloshes over the sides of the cup.

I keep adding and mixing until I’m done. Except now, my measuring cup is no longer full. Adding and mixing all those ingredients sloshed a lot of what used-to-be-bubbling mountain brook water over the sides.

So, when I’m ready to share my mix with others, I don’t have nearly as much as I started with. The cup was full when I started, but the process of creating my final mix lost a lot of the original material.

There are two possible solutions to this problem of the “lost water.” Let’s look at both of them.


Again, I’m standing next to the bubbling mountain brook, with a one-cup measuring cup. I dip it into the water until the cup is totally full.

However, before I start adding ingredients, I pour the entire contents of that measuring cup into a five-gallon wooden bucket.

Now, when I add and mix ingredients, I can add and mix as much as I want, because no matter how hard I stir or how many ingredients I add, the five-gallon wooden bucket is plenty big enough to contain it all.

When it comes time to share my final results with others, I have everything I originally captured, plus everything I added, all stored in one large bucket.

However, while my final mix has the form of a five-gallon bucket, it only has the contents of what I originally captured from the brook: one cup of water.


Again, I’m standing next to a bubbling mountain brook. But this time, I’m holding the five-gallon bucket. I dip the entire bucket into the stream and capture a full five gallons of water.

As you would expect, when I add ingredients and mix them together, water sloshes out of the bucket. However, even if I mix with wild abandon, I know that I only need one or two cups of water for the final project, so I’m not worried about any losses, because I started with far more than I need.


Reality, like a bubbling mountain brook, is essentially infinite. When we capture audio or video, we are taking only a small slice from that infinite supply.

For a variety of technical reasons, many cameras can only take very small slices of reality – the one-cup measuring cup in my example.

However, the process of editing video is not the same as the process of capturing it. Editing means making changes: adding filters, transitions, and multiple images. We are mixing a wide variety of elements together to form our final project.

Software is designed to minimize quality loss, but some loss is inevitable, even if we are careful.
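To see why some loss is inevitable in a low-precision pipeline, here is a toy demonstration (mine, not from the article): darken every possible 8-bit pixel value by 30%, then brighten it back. When each step rounds to 8-bit integers, many values never recover; when the math stays in floating point until the final step, every value survives. The 0.7 gain is an arbitrary illustrative value.

```python
# Toy demonstration of generational loss (illustrative only; the 0.7 gain
# is an arbitrary value). Darken every possible 8-bit pixel value by 30%,
# then brighten it back. An 8-bit pipeline rounds to integers after each
# step; a float pipeline rounds only once, at the very end.

levels = range(256)  # every possible 8-bit pixel value

def eight_bit_round_trip(v):
    darker = round(v * 0.7)               # quantized: information lost here
    return min(255, round(darker / 0.7))  # restore, clamp to legal range

def float_round_trip(v):
    return min(255, round((v * 0.7) / 0.7))  # full precision until the end

lost_8bit = sum(1 for v in levels if eight_bit_round_trip(v) != v)
lost_float = sum(1 for v in levels if float_round_trip(v) != v)

print("values damaged by 8-bit round trip:", lost_8bit)
print("values damaged by float round trip:", lost_float)  # prints 0
```

This is the “sloshing water” in miniature: every time an operation forces the result back into a small container (8-bit integers), a little of the original is gone for good.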

To minimize this loss, we have two options: convert our original media into a format better designed for editing, or capture at a higher quality to start with.

Video formats designed for editing are often called “mezzanine” or “intermediate” formats. These include ProRes 422, AVC-Intra, DNxHD 200, and others. Like a five-gallon bucket, these formats are much bigger than the original capture file, because they are designed to retain the maximum amount of quality no matter how much adding and mixing you do.
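Some rough storage arithmetic makes the “bigger bucket” concrete. The data rates below are approximate, illustrative figures for 1080p material (check your codec’s published specifications); the point is the size difference between a camera original and an intermediate format.

```python
# Rough storage arithmetic for one hour of 1080p footage. The data rates
# below are approximate, illustrative figures (check your codec's
# published specifications); the point is the size difference between a
# camera original and a "bigger bucket" intermediate format.

rates_mbps = {
    "AVCHD (camera original)": 24,     # typical consumer camera setting
    "ProRes 422 (intermediate)": 147,  # approximate rate at 1080p29.97
    "DNxHD 220 (intermediate)": 220,   # approximate rate at 1080p29.97
}

for name, mbps in rates_mbps.items():
    gb_per_hour = mbps * 3600 / 8 / 1000  # megabits/sec -> gigabytes/hour
    print(f"{name}: ~{gb_per_hour:.0f} GB per hour")
```

One hour of camera-original AVCHD is roughly 11 GB, while the same hour as an intermediate format runs 66–99 GB: you are trading storage for editing quality and speed.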

Converting an original capture file into this bigger space (a process also called “optimizing” or “transcoding”) allows for more manipulation of the image without a significant loss in quality.

If you only have one cup of water, dumping it into a larger bucket allows you to keep all of it. If you start with a full five-gallon bucket, even if you lose a little, you still have more than you need.

Converting H.264, MPEG-4, or AVCHD files into a higher-quality intermediate format means you retain as much image and audio quality as possible, while speeding the entire process of editing, rendering and export. It is my recommended way of working with these formats.
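As a sketch of what this conversion can look like in practice, the following builds an ffmpeg transcode command (assuming ffmpeg is installed; the filenames are hypothetical examples). It only prints the command; pass the list to subprocess.run() to actually transcode.

```python
# Sketch of a transcode step using ffmpeg (assumes ffmpeg is installed;
# the filenames here are hypothetical examples). This only builds and
# prints the command; pass the list to subprocess.run() to run it.
import shlex

source = "clip_from_camera.mp4"  # hypothetical H.264 camera original
target = "clip_for_editing.mov"  # ProRes 422 intermediate for editing

cmd = [
    "ffmpeg",
    "-i", source,         # input: the camera original
    "-c:v", "prores_ks",  # encode video with ffmpeg's prores_ks encoder
    "-profile:v", "2",    # profile 2 = standard ProRes 422
    "-c:a", "pcm_s16le",  # uncompressed 16-bit PCM audio
    target,
]

print(shlex.join(cmd))
```

Dedicated tools (Compressor, Media Encoder, EditReady) do the same job with a friendlier interface; the underlying idea is identical.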

Capturing original images at a higher quality is what all external digital recorders are designed to do. By bypassing the step that reduces the bubbling mountain brook to a single cup of water, and capturing it instead into a five-gallon bucket, you begin your edit with an extremely high-quality intermediate codec. This means you have all the quality you need, and more, for speed, precision, and a high-quality final output.

Then, after that master file is output, you compress it into the final format you need for distribution. By starting compression with a higher quality master file, you end with a higher quality finished project.


All too often, editors are misled into believing they need to edit in the file size and format of their final distribution. This is thinking about it backwards.

Use the idea of the “Three Formats” to shape a better approach:

1. Shoot the highest-quality format your camera (or an external recorder) can capture.
2. Edit using a high-quality intermediate format, such as ProRes 422 or DNxHD.
3. Compress your finished master file into whatever format your distribution requires.

Regardless of which software you use for editing, when you follow this approach, you’ll be pleased at how much faster the editing process runs and how much better your final results look.

As always, I’m interested in your comments.



4 Responses to The "Three Formats" of Filmmaking

  1. I use both Final Cut Pro X and Premiere Pro CC, so this is not meant as a critique of either program (not that I don’t have an arsenal of critiques).

    One of the biggest (and erroneous) complaints I hear about Final Cut Pro X is that one must transcode media in order to begin editing. On the flip side, one of the loudest remarks about Premiere Pro is that it can directly edit any native media. I always transcode when using FCPX because a) it makes the editing process smoother and b) it shortens final export times. I never transcode when using Premiere Pro and, admittedly, the editing experience is not all that smooth. Based on this alone I would argue for transcoding one’s media before editing.

    I’ve heard that Adobe automatically converts your media on the fly to work with 32-bit float (I cannot confirm this). However, you recommend transcoding your media first. Does either approach have an advantage over the other? I’ll use whatever program works best for the job at hand, but given the information in your post, I’d definitely want the approach that provides the cleanest and best result.


  2. Thomas Smet says:

    I’m not entirely sure the simple answer works for every situation or project. For example, when I worked on TV spots, I found it kind of insane to transcode hundreds of shots to save time on a 30-second render. Even with multiple revisions for the client, I still sometimes found native to be faster.

    I have also found complexity of compositing to factor in with this as well. When I work with dynamic link between After Effects and Premiere I find the type of format is so minor in terms of render time.

    To back up what Gabriel said, I do have a concern about quality. To follow your analogy: sometimes pouring that one cup of water into a five-gallon bucket will create a small loss of water. Some of the drops stay in the cup. Some splash out if you are not careful. The point is that the water isn’t 100% the exact volume it was in the cup. There was a bit of loss. There is also a bit of loss dumping back into a small cup, especially if great care isn’t taken.

    Premiere and After Effects do have an interesting effect on native footage. At least with AVCHD, Adobe applications will convert the raw material to a 32-bit floating-point color space and 4:4:4 color on the fly. In terms of quality, this is converting to a water tower and bypassing the five-gallon bucket. The color space is superior to 10-bit formats, and you get up-sampled 4:4:4 color, which can look better than using a 4:2:2-based intermediate format in some situations.

    I know some FCP users who like to use 3rd party tools to convert footage to 4:4:4 filtered color at 10bits but that isn’t as good as the native Adobe approach. I also find QuickTime formats in Adobe applications to not be as smooth as they should be. Now in FCPX it does make a difference but when it comes to VFX heavy projects I typically prefer using Adobe products with native material.

  3. This raises some more questions. Does using Final Cut Pro X to transcode footage to ProRes result in an image quality that matches what Premiere (or After Effects) does under the hood, i.e. converting the media to a 32-bit float color space with 4:4:4 color on the fly? When working with highly compressed footage (AVCHD for example), is there a noticeable difference in the end result produced by each of the two programs? Does transcoding or under the hood converting magically create bit depth and color information that wasn’t there to begin with, and if so, then how? I know that one can set up a color profile in Adobe Bridge that (apparently) creates a consistent color space between Adobe applications; how does this compare to the color sync technology that (may or may not) function under the hood in Final Cut Pro X (see:

    It seems to me that (aside from all other factors, judging on image quality alone) there are a great many things to consider when determining not only which NLE to use but also which approach to use within that particular NLE.

    I like both Final Cut Pro X and Premiere for different reasons. Because the media organization in FCPX is so much faster and more efficient than that of Premiere, however, I find myself doing nearly everything in FCPX. That said, if FCPX produced noticeably inferior image quality results (I don’t think it does), then I would turn more frequently to Premiere, which certainly has its own set of strengths.

    Lots of technical considerations that make one’s emotional attachment to a particular NLE seem a bit less important (though no less fun to discuss!).

  4. Hmmmm. The philiphodgetts link I provided in my recent post suddenly does not work (seems to have grabbed that end parenthesis), but if you copy/paste into a browser it works just fine. Very interesting read.
