[Updated May 21, 2018, with more details about specifying hard drives and links to other servers.]
If you are a one-person shop, the best storage system to use for audio or video editing is a RAID that’s directly connected to your computer; this is called “Direct-Attached Storage.” The benefits of direct-attached storage are, generally, that it’s the fastest, cheapest and easiest to use.
If you are an editor in a large shop, the IT department has already configured both hardware and software to save and access media and projects on the corporate server. The benefit of the corporate server is that all you need to do is edit; it's someone else's job to keep the system running.
However, if you are part of a small- to medium-sized workgroup that needs to share media between multiple editors, there has never been a better time to migrate to a server. The purpose of this article is to showcase some best practices for integrating shared storage with Final Cut.
NOTE: Here’s an article that covers how to integrate a server with Adobe Premiere Pro CC.
There are two types of servers: SAN and NAS. “Storage area networks (SANs) and network attached storage (NAS) both provide networked storage solutions. A NAS is a single storage device that operates on data files, while a SAN is a local network of multiple devices.” (Lifewire.com) SAN devices tend to be found in the enterprise, while NAS devices tend to be found in smaller workgroups. Also, in general, NAS devices are much less expensive than SAN systems and easier to set up.
NOTE: Servers today can include spinning hard disks, SSDs or a combination of both. For what we do, spinning hard disks (called “spinning media”) offer the best performance with the best capacity at a reasonable price. Network speeds are so slow, compared to the speed of an SSD, that we aren’t able to take advantage of the speed SSDs provide. SSDs are best used in direct-attached storage.
When you start integrating a server into your editing workflow, you need to be concerned about four things:
Storage capacity is the number we are most familiar with. It measures how much data the server can hold, in terabytes (TB).
Bandwidth is the speed that data transfers between the computer and server. This is measured in megabytes per second.
Latency is the amount of delay between the time you press the spacebar inside your NLE and when the clip starts playing. Less latency is better, and, in general, we want it to be less than a quarter-second. (While I can’t measure the precise latency on my server, I have not found it objectionable during editing.)
The fourth point is one we’ll discuss more during this article.
NOTE: One other point: when you invest in a server, be sure to also get hard drives that are rated for NAS or server use. These tend to be 5400 RPM units, which is fine for a server. Slower drives still deliver great performance, and they last longer than 7200 or 10,000 RPM drives.
CONNECTIVITY AND BANDWIDTH
How you connect to the server has a significant impact on the bandwidth. Here are some examples:
To attain these speeds, three key pieces of hardware must all support the same bandwidth:
As with all things, the faster the speed, the greater the cost. Most buildings are wired with Cat 5e cables, which makes 1 Gigabit Ethernet the default network speed for many of us.
DRIVES ARE IMPORTANT [Update]
It wasn’t until I published this report that I realized I left out a critical step in any server: hard drives. Most of the servers on the market ship without drives, which means we need to add them ourselves. And determining which drives to buy, I discovered, can be very confusing.
Here are some suggestions:
NOTE: Several readers took issue with my recommending 5400 RPM drives, feeling that these were too slow for media work. Instead, they recommend 7200 RPM drives, especially as the number of users on the server increases. The difference in price is minor. If I were to do this again, I’d probably get 7200 RPM drives.
These are the criteria I used to determine which drives to buy:
I ended up buying five Western Digital 8 TB RED NAS drives, which spin at 5400 RPM. The Western Digital 8 TB RED Pro NAS versions spin at 7200 RPM. In either case, I formatted these into a RAID 5 to provide 32 TB of online storage. They’ve been running continuously for seven months, so far, with no problems. And, I haven’t noticed any issues with latency.
NOTE: Servers should always be formatted as RAID 5 or 6, not RAID 0 or 1. Here’s an article that explains RAID levels.
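To sanity-check those capacity numbers, here is a minimal sketch of the arithmetic behind RAID 5 and RAID 6 usable space (the function name is my own; actual formatted capacity will come out slightly lower than this estimate):

```python
def usable_capacity_tb(num_drives: int, drive_tb: float, raid_level: int) -> float:
    """Estimate usable capacity in TB for common parity RAID levels.

    RAID 5 sacrifices one drive's worth of space for parity;
    RAID 6 sacrifices two, surviving two simultaneous drive failures.
    """
    if raid_level == 5:
        return (num_drives - 1) * drive_tb
    if raid_level == 6:
        return (num_drives - 2) * drive_tb
    raise ValueError("only RAID 5 and 6 are handled in this sketch")

# Five 8 TB drives in RAID 5, as described above:
print(usable_capacity_tb(5, 8, 5))  # -> 32.0 TB of online storage
```

The same five drives in RAID 6 would yield 24 TB, trading capacity for the ability to survive a second drive failure.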
A SPECIAL CONNECTION
Bandwidth is fixed. For example, if I have a single Ethernet cable between the server and a 1 Gigabit switch, that means that the maximum data transfer rate is about 120 MB/second. If I have two users accessing the server at the same time, each user gets 60 MB/second (120 / 2 = 60). If three users access the server at the same time, each user gets 40 MB/second (120 / 3 = 40).
Suddenly, that single Ethernet cable becomes a serious bottleneck. To avoid this, many servers provide multiple Ethernet connections on the back of the server. Each connection acts as a separate “port,” each with its own IP address and providing the full bandwidth for that port. This allows different computers to access different ports on the server, avoiding the bottleneck of trying to squeeze all those data requests through a single Ethernet cable. Spreading the load across ports reduces these bottlenecks.
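The arithmetic above is worth making explicit, because it scales linearly with editors. A tiny sketch:

```python
def per_user_mb_s(link_mb_s: float, users: int) -> float:
    """Bandwidth each editor gets when all share one Ethernet link equally."""
    return link_mb_s / users

# A single 1 Gigabit link delivers roughly 120 MB/second in practice:
for editors in (1, 2, 3):
    print(f"{editors} editor(s): {per_user_mb_s(120, editors):.0f} MB/second each")
```

Three editors on one cable each get only 40 MB/second, which is why separate ports per editor matter.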
NOTE: While I could connect the server to the switch using a 10-Gigabit connection, that would require getting a new switch and additional ports on the server. When budgets are tight, that may not be a good option. Separate ports are cheaper and achieve similar results.
For example, here at the office, I’m using a NAS server from Synology. The back of the Synology has four Ethernet ports. I connect each of these to the switch; then, using the switch control software, I assign a different IP address to each port. Now, when editor 1 needs to access the server, they use a different IP address than editor 2.
The internal bandwidth of the server is FAR faster than a single Ethernet connection, so this provides maximum performance to each member of the editorial team.
NOTE: Even though computers connect through different ports, they all have access to the same data. This server provides file-level sharing, which is what you want for video editing, not separate volumes for each editor.
We can take this one step further using “port aggregation,” also called “port bonding” or “link aggregation.” Rather than limit myself to the speed of a single Ethernet connection, I can “tie” or “bond” two of the ports together to improve the file transfer speed between the server and the switch. This means, under a heavy load, I’m using two connections to completely fill the Ethernet “pipe” between the server, the switch and my computer.
NOTE: The specific switch configuration settings vary by manufacturer and switch. Consult the user manual for guidance.
Even with this setup, I still can’t exceed the speed of 1 Gigabit Ethernet, but I can make sure it goes as fast as possible. Port aggregation combined with a server that provides multiple Ethernet ports is a very effective way to make sure your editors have the bandwidth they need.
NOTE: WiFi speeds are improving, but for video editing, I don’t recommend using a WiFi connection. Speeds fluctuate based upon the load through the wireless receiver, and interference can also slow things down. If you need to edit, it is much faster and more reliable to connect a wire between the server and your computer.
HOW MUCH BANDWIDTH?
Different codecs require different amounts of bandwidth. For example:
NOTE: Here’s a table that goes into bandwidth requirements for a variety of codecs.
The best way to determine how much bandwidth you need is to measure it. And Activity Monitor (Utilities > Activity Monitor) is a great tool for doing exactly that.
Open Activity Monitor, then open Final Cut and play a typical project. Click the Disk tab at the top of Activity Monitor and watch the graph at the bottom. Data received (in blue) shows the amount of data playing from the server to the computer. Data sent (in red) shows the amount of data being sent from the computer to the server.
In this screen shot, I’m measuring the bandwidth while playing a four image split screen in camera native format without first rendering the scene. While the bandwidth fluctuates, at its most intense, FCP only needs 28 MB/second of data in this example. However, I’ve done other projects that need close to 80 MB/second. Every project is different and some video codecs require hundreds of megabytes per second!
These stats are from my current network, as measured using AJA System Test Lite. Given my setup of multiple server ports and port bonding, I can fully “saturate,” or fill, a 1 Gigabit Ethernet connection. While the theoretical maximum bandwidth is 125 MB/second, we can only expect about 108 – 110 MB/second in real life, due to overhead in the Ethernet protocol.
As you can see from the screen shot above, my network, switch and server support both reads and writes close to that practical maximum of 110 MB/second.
So, what video formats will this bandwidth support? A lot, actually, as you can see from this table from Blackmagic Disk Speed test. A properly configured 1-Gigabit Ethernet network can support virtually all camera native formats, frame sizes and frame rates – including all ProRes variations – up to 2K frame sizes.
NOTE: The “How Fast?” column describes the maximum frame rate supported for different frame sizes and codecs at this network bandwidth.
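To see why 1 Gigabit Ethernet handles so many formats, it helps to compare codec data rates against the practical link speed. The rates below are ballpark figures for 1080p29.97 drawn from Apple’s published ProRes numbers; treat them as approximations:

```python
# Approximate data rates (MB/second) for 1920x1080 at 29.97 fps,
# based on Apple's published ProRes target rates (assumed, ballpark):
CODEC_MB_S = {
    "ProRes 422 Proxy": 5.7,
    "ProRes 422 LT": 12.8,
    "ProRes 422": 18.4,
    "ProRes 422 HQ": 27.5,
}

PRACTICAL_LINK_MB_S = 110  # real-world 1 Gigabit Ethernet throughput

for codec, rate in CODEC_MB_S.items():
    streams = int(PRACTICAL_LINK_MB_S // rate)
    print(f"{codec}: about {streams} simultaneous 1080p streams")
```

Even ProRes 422 HQ leaves room for about four simultaneous HD streams on a single saturated gigabit link, which matches the experience of small workgroups staying comfortably within this bandwidth.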
For frame sizes larger than HD, you will need to configure your computers and network to support 10-Gigabit Ethernet. While there are excellent 10-gig converters that connect to the Thunderbolt port on both current and older MacBook Pros and iMacs, you’ll also need to change your network cabling, switch and connections on the back of the server to support this faster protocol.
Still, though more expensive, 10-Gigabit Ethernet provides 10 times the bandwidth of 1-Gigabit Ethernet. This allows you to support more editors from that single server or work with more complex video formats.
SHARING IN FINAL CUT PRO X
My workgroups tend to be small – two to three editors with a fourth computer system reserved solely for video compression. Given that, let me set expectations.
The current version of Final Cut Pro X (10.4.2) does not allow two editors to work in the same library or project at the same time. However, FCP X DOES allow multiple editors to share the same media at the same time; up to the bandwidth limit of your storage system and network.
Final Cut supports editing libraries directly from a server IF the server supports the SMB3 protocol and is configured as an Xsan. This is not an easy hurdle to clear. I have been able to configure my Synology to support SMB3, mostly, but not Xsan. So, I can’t edit libraries directly on the server.
Media, on the other hand, can be shared between editors for any storage system that can be mounted to the Mac desktop. This is VERY easy to achieve with virtually all servers.
For example, here in the Media Import window, you see four devices:
Selecting a server and importing media is as easy as working with a local hard disk.
HOW THIS WORKS IN PRACTICE
Here are some more things you need to know about Final Cut:
NOTE: On a 1-Gigabit network, copying a 10 gigabyte file takes less than two minutes.
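That two-minute estimate follows directly from the practical link speed. A quick sketch of the math (treating 1 GB as 1,000 MB):

```python
def transfer_seconds(file_gb: float, throughput_mb_s: float) -> float:
    """Rough copy time for a file, treating 1 GB as 1,000 MB."""
    return file_gb * 1000 / throughput_mb_s

# 10 GB file over real-world 1 Gigabit Ethernet (~110 MB/second):
print(round(transfer_seconds(10, 110)))  # ~91 seconds, under two minutes
```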
So, here’s my workflow:
This allows me to keep the library small, while maximizing my use of the server.
By default, FCP X stores all generated media in the Library. You can change this by selecting the library, then, in the Inspector, click Modify Settings for Storage Locations.
On the server, create a folder that you want to use to hold all generated media; this means optimized and proxy files. Then, change the Media setting from In Library to the folder you just created. (You can name the folder anything that makes sense to you and your project.) As long as you import media into Final Cut with Leave files in place checked on, the only thing this folder will store is generated media from Final Cut.
Because these generated files are referenced in the library and stored on the server, all other editors can use these same files, without having to re-create them or copy them to their local storage.
NOTE: If you want render files stored on the server, as well, set the Cache to the same server folder. (Not to worry, FCP X will keep all these different formats safely separate.) If you have existing render files, FCP X will move them to the new location. This option is a good idea if you have enough bandwidth, as it saves other editors from having to re-render the same footage.
I’ve found this provides excellent performance, while maximizing what both Final Cut and the server do best.
Setting up a server for the first time is VERY intimidating. I know; it took me a long while to figure this out. But the benefits of sharing media between multiple editors make the work worthwhile. And once a server is set up and mounted to the desktop, using it is as easy as using any “normal hard disk.” Even better, once you understand how this system works, creating a new library takes just a few seconds.
As with all things in tech, experiment with this new workflow to see how it works before jumping into a deadline-driven paying project. And let me know what you discover or if I left anything out. There’s still a lot here for all of us to learn.
LARRY’S SERVER SYSTEM
Server: Synology 1517+ (32 TB)
Drives: Western Digital 8 TB RED (a set of 5)
Switch: Cisco SG200-18 18-port Gigabit Smart Switch