The Three Formats of Filmmaking

Posted by Larry


Easily a third of all the emails I get every day deal, in one form or another, with video formats, image quality, and output. Whether you edit with Final Cut, Premiere, Avid, or any other video editing software, the issues are the same.

In the “olde days” of standard-definition video, life was easy. We shot DV or DigiBeta, edited in the format we shot, and output that same format to videotape.

In these new days of HD, life isn’t so easy. The number of video formats (also called codecs) that we need to deal with is beyond the ability of one person to track. There seem to be infinite combinations of codecs, frame sizes, frame rates, resolutions, and aspect ratios. What should we shoot? What should we edit? What should we export? It is very confusing.

As videographers, filmmakers and editors, we want that elusive “perfect mix” of high quality, high speed, and small file sizes. Sadly, like the famous triangle of “Good – Fast – Cheap,” we can only pick two. The good news, though, is that our choices change as we move through the editing process.

I call this the “Three Formats” of Filmmaking, which is best explained with an analogy.


Imagine that I’m standing next to a bubbling mountain brook. In my hand is a plastic, one-cup measuring cup. I dip that measuring cup into the bubbling mountain brook and fill it to the brim with bubbling mountain brook water.

Now, into that cup I add food coloring, spices, sugar and stir rapidly to mix it all together. Well, the measuring cup is totally full, so as I add ingredients water sloshes over the sides of the cup.

I keep adding and mixing until I’m done. Except now, my measuring cup is no longer full. Adding and mixing all those ingredients sloshed a lot of what used-to-be-bubbling mountain brook water over the sides.

So, when I’m ready to share my mix with others, I don’t have nearly as much as I started with. The cup was full when I started, but the process of creating my final mix lost a lot of the original material.

There are two possible solutions to this problem of the “lost water.” Let’s look at both of them.


Again, I’m standing next to the bubbling mountain brook, with a one-cup measuring cup. I dip it into the water until the cup is totally full.

However, before I start adding ingredients, I pour the entire contents of that measuring cup into a five gallon wooden bucket.

Now, when I add and mix ingredients, I can add and mix as much as I want, because no matter how hard I stir or how many ingredients I add, the five gallon wooden bucket is plenty big enough to contain it all.

When it comes time to share my final results with others, I have everything I originally captured, plus everything I added, all stored in one large bucket.

However, while my final mix has the form of a five gallon bucket, it only has the contents of what I originally captured from the brook – one cup of water.


Again, I’m standing next to a bubbling mountain brook. But this time, I’m holding the five gallon bucket. I dip the entire bucket into the stream and capture a full five gallons of water.

As you would expect, when I add ingredients and mix them together, water sloshes out of the bucket. However, even if I mix with wild abandon, I know that I only need one or two cups of water for the final project, so I’m not worried about any losses, because I started with far more than I need.


Reality, like a bubbling mountain brook, is essentially infinite. When we capture audio or video, we are taking only a small slice from that infinite supply.

For a variety of technical reasons, many cameras can only take very small slices of reality – the one-cup measuring cup in my example.

However, the process of editing video is not the same as the process of capturing it. Editing means making changes, adding filters, transitions, multiple images. We are mixing a wide variety of elements together to form our final project.

Software is designed to minimize quality loss, but some loss is inevitable, even if we are careful.

To prevent this loss, we have two options: 1. Convert our original media into something better designed for editing, or 2. Capture at a higher quality to start with.

Video formats designed for editing are often called “mezzanine” or “intermediate” formats. These include ProRes 422, AVC-Intra, DNxHD 220, and others. Like a five gallon bucket, these formats are much bigger than the original capture file, because they are designed to retain the maximum amount of quality no matter how much adding and mixing you do.
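To put rough numbers on the bucket analogy, here is a quick sketch of the storage difference. The bitrates are illustrative approximations of typical published data rates for 1080p footage, not exact figures for any particular camera or profile:

```python
# Rough file-size math for one hour of 1080p footage, using
# approximate target bitrates (assumptions, not exact figures):
#   camera H.264/AVCHD ~ 24 Mb/s, ProRes 422 ~ 147 Mb/s
SECONDS_PER_HOUR = 3600

def gigabytes_per_hour(megabits_per_second: float) -> float:
    """Convert a video bitrate to storage needed per hour of footage."""
    # megabits -> megabytes (divide by 8) -> gigabytes (divide by 1000)
    return megabits_per_second * SECONDS_PER_HOUR / 8 / 1000

h264_gb = gigabytes_per_hour(24)     # roughly 10.8 GB per hour
prores_gb = gigabytes_per_hour(147)  # roughly 66 GB per hour

print(f"H.264 camera original: {h264_gb:.1f} GB/hour")
print(f"ProRes 422 mezzanine:  {prores_gb:.1f} GB/hour")
```

The mezzanine file is roughly six times larger for the same hour of footage; that extra size is the “bigger bucket” that absorbs all the mixing without spilling quality.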

Converting (also called “optimizing” or “transcoding”) an original capture file into this bigger space allows for more manipulation of the image without a significant loss in quality.

If you only have one cup of water, dumping it into a larger bucket allows you to keep all of it. If you start with a full five gallon bucket, even if you lose a little, you still have more than you need.

Converting H.264, MPEG-4, or AVCHD files into a higher-quality intermediate format means you retain as much image and audio quality as possible, while speeding the entire process of editing, rendering and export. It is my recommended way of working with these formats.
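As one concrete way to do this conversion outside your NLE, here is a sketch that builds an ffmpeg transcode command. ffmpeg and its `prores_ks` encoder are real tools, but the file names, helper function, and profile choice are illustrative assumptions, not part of the article’s workflow:

```python
def prores_transcode_cmd(src: str, dst: str, profile: int = 2) -> list[str]:
    """Build an ffmpeg command to transcode a camera original (H.264,
    AVCHD, etc.) into ProRes 422 with uncompressed PCM audio.
    prores_ks profiles: 0=Proxy, 1=LT, 2=422, 3=HQ."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", str(profile),
        "-c:a", "pcm_s16le",   # uncompressed PCM: no added audio loss
        dst,
    ]

# Hypothetical file names; run the result with subprocess.run(cmd, check=True)
cmd = prores_transcode_cmd("clip.mp4", "clip.mov")
```

The same pattern applies whether you transcode one clip or batch an entire card of footage before the edit begins.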

Capturing original images at a higher quality is what all external digital recorders are designed to do. By bypassing the step that reduces the bubbling mountain brook to a single cup of water, and capturing it instead into a five gallon bucket, you begin your edit already using an extremely high-quality intermediate codec. This means you have all the quality you need – and more – for speed, precision, and quality in your final output.

Then, after that master file is output, you compress it into the final format you need for distribution. By starting compression with a higher quality master file, you end with a higher quality finished project.
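To illustrate that last step, here is a sketch of a distribution encode with ffmpeg. The `libx264` and `aac` encoders are real; the bitrates and file names are illustrative assumptions, since every distribution outlet publishes its own specs:

```python
def distribution_cmd(master: str, dst: str, video_mbps: float = 8.0) -> list[str]:
    """Build an ffmpeg command that compresses a high-quality master
    file into an H.264 file for web delivery. The 8 Mb/s video and
    192 kb/s audio defaults are illustrative, not a standard."""
    return [
        "ffmpeg", "-i", master,
        "-c:v", "libx264", "-b:v", f"{video_mbps:g}M",
        "-c:a", "aac", "-b:a", "192k",
        dst,
    ]

# Hypothetical file names: compress the exported master for the web
cmd = distribution_cmd("master.mov", "web.mp4")
```

Because the compressor starts from the high-quality master rather than an already-compressed file, the only significant compression artifacts in the finished project come from this one final encode.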


All too often, editors are misled into believing they need to edit in the file size and format of their final distribution. This is thinking about it backwards.

Use the idea of the “Three Formats” of filmmaking to achieve a better approach:

1. Shoot the highest-quality format your camera, or an external recorder, can capture.
2. Edit in a high-quality intermediate format, such as ProRes 422, either by transcoding your camera files or by capturing to it directly.
3. Export a high-quality master file, then compress that master into whatever format each distribution outlet requires.

Regardless of which software you use for editing, when you follow this approach, you’ll be pleased at how much faster the editing process runs and how much better your final results look.



6 Responses to The Three Formats of Filmmaking

  1. DebG. says:

    OK, I’m a little dense here, Larry. 🙂
    Would you please do a step-by-step list of this workflow? I’m still capturing HDV tape via FireWire, but am now moving to DSLR and am a bit confused as to the best way to import and export in FCPX.
    I know HDV is already optimized for FCPX, but what about DSLR footage (Canon T4i)? It’s H.264, so do you import it as ProRes (422 or LT or what)? Then do you export a Master File and THEN bring it into Compressor? Or can you skip Compressor and export a certain way just in FCPX? Lastly, after you’re done exporting and done with the project, can you safely delete render files in order to free up some hard disk space?

    Thanks, Larry!

  2. “Always, ALWAYS shoot progressive; progressive is very easy to convert to interlaced. Converting interlaced to progressive is a mess”….

    That sentence makes me worry, dear Larry…

    I have all my footage recorded in 1080i, and I’m editing right now for export in 1080p…
    Am I making a mistake?
    Should I export in 1080i like the original footage?

    Thanks, Master

  3. Rick says:

    Can FCPX use the “original media” rather than the transcoded high-quality ProRes to make the final output?

    FC7, I believe, would go back to the original media rather than the render files or re-rendering, if you used Export Using Compressor option. Can FCPX do the same thing, thereby avoiding any transcoding effects, even if minor?

    The advantage would be that the only compression artifacts would be those from the final compression format(s).

    I think the “using compressor” option in FCP7 worked something like this: it would go back to the source media, uncompress a frame, apply the effects to the uncompressed frame, then compress that frame in the final format. It would then repeat the process for each frame. Consequently, an uncompressed version was created and discarded for each frame. Is that correct?

    Can FCPX do that? If so it would mean that the high quality format used for editing was only used for editing and generating temporary render files.


  4. Yay, you just gave me a huge missing piece of my editing knowledge. I will try out this approach in my editing. I didn’t know what DNxHD was when I saw it a few months ago. Now I know. Thanks!

  5. Mia says:

    Hi Larry
    Scenario: a multi-camera shoot where the highest resolutions differ, say one is 1920×1080 and the other 1280×720, or you are given footage with different resolutions and can’t reshoot…
    If your final distribution is 1280×720 or less, would it be best to
    a) transcode the 1920×1080 footage to 1280×720, so that you are editing with a uniform resolution? or
    b) transcode the 1920×1080 footage to 1920×1080, keeping it at its highest quality, and editing with it in a 1280×720 timeline?


    • Egon Freeman says:

      Let me try answering that question from my perspective…

      I think it boils down to what software you have converting your lower-res footage, and how you go about it. It is also heavily dependent on your tastes.

      Up-converted video oftentimes looks fuzzy, or otherwise a bit softer than the original. This is by far the worst problem of up-sampling, I think (the artifacts can be minimized). This lost sharpness isn’t always regained when converting back down. If your software returns that sharpness to you, then by all means up-sample and cut in 1080.

      If you find that softness unacceptable, then see what happens to your footage when you down-sample from 1080 to 720. If it looks okay and filters work all right, edit in 720. This has the “hidden advantage” that you cut in the delivery size, which gives you an idea of how the final product will behave.

      If you’re really bent on “quality no matter what”, then there is a middle ground… cut the 1080 in a 1080 environment, and the 720 in a 720 environment. When you’re done, export them at their best settings and see which one you like better – 1080 downsampled to 720, or 720 upsampled to 1080 – and do your final cut in that format. Keep in mind, though, that this is the most time-consuming process of all. And not that many projects can be spliced separately like that.

      There is no one sure-fire way to do editing, I guess, and “to each his/her own” holds true here as well. I, for one, prefer to work as much as I can on the original media. I have this belief that even if it’s not the fastest, it’s always better to just fetch the source directly, with as little going-around as possible. If I have mixed 1080/720 footage, I primarily cut in 720, but with the original 1080 footage intact! – for two reasons:

      1) I don’t force the decision – the 1080 will be downsampled to 720 at render/export time, and
      2) the filters/effects in the NLE of my choice (FCPX in this case) render at full source frame*, IIRC, so I’m getting the best of both worlds, in a sense.

      It DOES come with a burden of longer render times, but don’t we all expect renders to take forever anyway? The only real problem with this approach is that working with non-conforming footage is always slower and more computer-intensive (especially with sources unoptimized for editing, like H.264). But with FCPX’s render times, I don’t believe I’d get more speed overall by pre-optimizing the video upfront (and my boss tends to agree).

      There’s a second take-away here… If your delivery format is some form of SD (analog TV?) or similar, then there’s very little you can do to your HD or Full-HD footage to damage it badly enough for it to show in the final. In those cases, you’ll find, things that work well in the HD realm just don’t work well in the SD realm (think: text, any sort of titles). You’ll be looking for sharpness and detail in wholly different areas. Ultra-sharp HD footage just craps itself if ported straight to SD – all sorts of weird interlace-vs-sharpness problems show up, for example. Ultra-fine detail just LOVES to flicker on interlaced SD, especially formats like PAL and NTSC – in those cases less quality is actually better. So I think what you really need to do is figure out what sort of delivery you need to make, and work backwards from there.

      Anyway, sorry for the lengthy brain dump. I hope it makes sense.

      * Larry? I could very well be mistaken here; you’re the FCPX/Motion expert…

