Challenges Facing Content Creation

By T.V. Burns, CTO, Media & Entertainment at Dell EMC

How UHD, Tiered Storage and Search Are Transforming Media and Entertainment



The broadcast industry is facing a revolution, and I don't use that word lightly. We're looking at a fundamental change in the model for content delivery. The last time we went through a transformation on a similar scale was during the digital transition. Digital television required sweeping changes in infrastructure, workflow, and thought processes that made the move to HD possible. Similarly, ATSC 3.0 will move the broadcast industry to an IP multicast protocol, requiring reconfigured infrastructure and delivery pipelines that will make way for the future of what we used to call television.

UHD: Pixels Are Not the Most Important Part


Let's be clear about something: UHD is a Trojan horse. The future of our industry is not hanging on the delivery of more pixels in every picture. What's important is the incredible array of new variables being introduced to the production process. That includes new deliverables and, along with them, new details that content creators have to track and manage. It's not just about PAL or NTSC, letterbox or pillarbox anymore. It's about the total matrix of dubs and subs, aspect ratio versions and permutations, and of course the marquee features that come along with UHD — high dynamic range, wide color gamut and high frame rate. Did I mention immersive audio? These are highly fluid content types that will be freely mixed and matched depending, of course, on what consumers ultimately want to watch and listen to.

Fortunately, the UHD standards bodies have selected a flexible, highly efficient codec, HEVC. A tremendous amount of work has gone into allowing supplemental enhancement information (SEI) to be carried in the HEVC bitstream. That may seem like a small thing, but it's actually a big deal. A large amount of future functionality will be embedded inside that highly compressed HEVC bitstream. Think of features like timecode or dynamic range, which used to require out-of-band metadata to work properly. With the new ability to carry in-band metadata through SEI messaging, we will have new ways to monetize content, including the ability to adapt to future devices with special characteristics, such as an extended color gamut, that need to be addressed. On-the-fly rendering by a display device, based on flags set in the content, is already happening in cinemas and is being adopted by CE manufacturers. The Interoperable Master Format (IMF) is like a DCP for broadcast: it enables automated mastering and distribution, and that trend is going to continue, with IMF-like capabilities extending all the way down to mobile devices. Automation is being adopted because nobody can afford the investment in time, manpower or infrastructure to build 1,000 deliverables by hand.
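To make the in-band metadata idea concrete, here is a minimal sketch that walks an HEVC Annex B elementary stream and counts the prefix and suffix SEI NAL units (types 39 and 40 in the H.265 spec). It is illustrative only; a real tool would go on to decode the SEI payload types (timecode, mastering display color volume, content light level, and so on) rather than just count them.

```python
# Minimal sketch: count SEI NAL units in an HEVC Annex B elementary stream.
# Illustrative only -- a real parser would also decode the SEI payloads
# (timecode, mastering display colour volume, content light level, ...).

import sys

PREFIX_SEI = 39   # nal_unit_type for prefix SEI (H.265 Table 7-1)
SUFFIX_SEI = 40   # nal_unit_type for suffix SEI

def iter_nal_units(data: bytes):
    """Yield NAL unit payloads by splitting on Annex B start codes."""
    starts, i = [], 0
    while True:
        j = data.find(b"\x00\x00\x01", i)
        if j < 0:
            break
        starts.append(j + 3)
        i = j + 3
    for k, start in enumerate(starts):
        end = starts[k + 1] - 3 if k + 1 < len(starts) else len(data)
        yield data[start:end]

def count_sei(path: str) -> int:
    data = open(path, "rb").read()
    count = 0
    for nal in iter_nal_units(data):
        if len(nal) < 2:
            continue
        nal_type = (nal[0] >> 1) & 0x3F   # 6-bit nal_unit_type in the 2-byte NAL header
        if nal_type in (PREFIX_SEI, SUFFIX_SEI):
            count += 1
    return count

if __name__ == "__main__":
    print(count_sei(sys.argv[1]), "SEI NAL units found")
```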


4K Post-Production: Cinema Craves Performance, But Netflix Demands Capacity


What does all this mean for post-production? Fortunately, most of the post world already works in file-based workflows over IP networks. Almost everyone has 10 gigabit Ethernet, and most have implemented 40 GbE backbones. The thing to remember about 4K post-production is that a lot of 4K is compressed, and that helps a lot. Yes, if you're working on cinematic Hollywood tentpoles, you may need to work in 4K uncompressed, and that puts a great deal of strain on your infrastructure. Not so long ago, you needed a SAN to do that kind of post work -- two streams of 24fps 4K, not UHD, in a 16-bit color space. That's more than a gigabyte per second, sustained, per stream. That's heavy on anyone's infrastructure, but scale-out NAS and client NICs have gotten good enough that you can do it on IP networks instead of Fibre Channel or serial digital. At the very high end, the number of terabytes flying around during a film production is incredible. Ben Gervais, the technical supervisor on Billy Lynn's Long Halftime Walk, said that because of that film's HFR 4K stereo 3D format, they were dealing with 40x the usual amount of data on set. That's where the latest technology, like Isilon's new all-flash array, comes in handy. All-flash arrays become a necessity for that much on-set data because their transfer rates far outstrip spinning disk.
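For a sense of scale, here is the back-of-the-envelope arithmetic behind that figure. It's a rough sketch that assumes three 16-bit channels per pixel; real file formats (DPX, OpenEXR) pack bits differently, so treat the numbers as order-of-magnitude.

```python
# Rough bandwidth estimate for uncompressed DCI 4K post work.
# Assumes 3 x 16-bit channels per pixel; real formats (DPX, EXR) vary.

width, height = 4096, 2160          # DCI 4K, not UHD (3840 x 2160)
bytes_per_pixel = 3 * 2             # RGB, 16 bits per channel
fps = 24
streams = 2                         # e.g., two layers playing back at once

frame_bytes = width * height * bytes_per_pixel
per_stream = frame_bytes * fps                  # bytes per second, one stream
total = per_stream * streams

print(f"one frame:  {frame_bytes / 1e6:.1f} MB")
print(f"one stream: {per_stream / 1e9:.2f} GB/s sustained")
print(f"{streams} streams:  {total / 1e9:.2f} GB/s sustained")
# -> roughly 53 MB per frame, ~1.3 GB/s per stream, ~2.5 GB/s for two streams
```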

But there is not much uncompressed 4K work going on outside the DI and mastering suites, and cinema is not the driving force for post. Only 158 features from major studios saw wide release in 2016. Those are the tentpole pictures that might have been done in 4K. At roughly two hours apiece, that's just under 20,000 minutes of programming in 2016. Compare that to the world of peak TV. There were more than 455 scripted original shows produced in 2016. Do some very rough calculations and you'll see that we were looking at almost 160,000 minutes of TV programming in 2016. And a lot of good-quality original TV is posted using a clip-based workflow rather than a file-per-frame workflow. So there may be bulges in your pipeline where you have to deal with some uncompressed material, but the problem isn't typically the real-time performance of the network. The issue is binge-viewing.
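Here is the kind of very rough calculation I mean. The per-show episode count and running time below are placeholder assumptions chosen purely to illustrate the order of magnitude, not industry statistics.

```python
# Very rough comparison of feature-film vs. scripted-TV minutes in 2016.
# Episode count and running time per show are placeholder assumptions.

features = 158
feature_minutes = 120                      # ~2 hours apiece
film_total = features * feature_minutes    # just under 20,000 minutes

shows = 455
episodes_per_show = 8                      # assumption
minutes_per_episode = 44                   # assumption
tv_total = shows * episodes_per_show * minutes_per_episode

print(f"wide-release features: ~{film_total:,} minutes")
print(f"scripted originals:    ~{tv_total:,} minutes")
print(f"TV-to-film ratio:      ~{tv_total / film_total:.0f}x")
```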

Because Netflix started the trend of binge-viewing, which means releasing every episode of a show at the same time, the post facility has to keep all of those episodes online and unlocked in case changes are required. A script change in episode eight can mean reshoots that unlock episode two. That's really hard on the post facility. Who can afford to capitalize that quantity of storage for the entire show? Most of them are addressing the problem through smart tiering to less-expensive tiers of disk, either on- or off-premises, or to a hybrid cloud tier. That hybrid cloud is likely a mixture of private cloud -- on-premises object storage -- and public cloud for bursting. The key consideration is that you never want to use public cloud when as much data is coming back down as is going up, because the data egress fees will kill you. You have to carefully consider the cost of each leg of the pipeline.
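To see why egress dominates that decision, here is a toy cost model. Every price and volume in it is a hypothetical placeholder, not a quote from any provider; the point is only the shape of the comparison.

```python
# Toy cost model: keeping a season "warm" on an on-prem nearline tier
# vs. a public cloud tier with egress. All numbers are hypothetical.

def monthly_cost(tb_stored, tb_pulled_back, store_per_tb, egress_per_tb=0.0):
    return tb_stored * store_per_tb + tb_pulled_back * egress_per_tb

season_tb = 300          # every episode kept online and unlocked
reshoot_pull_tb = 60     # material pulled back for changes each month

on_prem = monthly_cost(season_tb, reshoot_pull_tb, store_per_tb=10.0)
cloud   = monthly_cost(season_tb, reshoot_pull_tb,
                       store_per_tb=4.0, egress_per_tb=90.0)

print(f"on-prem nearline: ${on_prem:,.0f}/month")
print(f"public cloud:     ${cloud:,.0f}/month")
# The cheap-looking cloud tier loses as soon as data keeps coming back down.
```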


Data Gravity: A Heavy Concept for Heady Times


A good metaphor for this kind of meticulous planning is data gravity. The term originated in the world of big data, and video is the biggest data of them all. It's a useful way to think about the financial side of modeling workflows for content creation, content management, and content distribution. The concept of data gravity says that as the mass of your data increases, the number of services and applications attracted to that data increases in direct proportion. It really is like gravity, with planets and suns and black holes. And data, especially video data, is so massive that you need to keep it in one place -- at Isilon we call it the Data Lake -- and bring the applications to the data. Structure your workflow that way and it works; do it the other way around and it's financially ruinous.

That's the kind of thing you absolutely have to be aware of. Your financial analysis will determine how you handle data tiering. It will strongly influence the timing of your facility infrastructure upgrades. And it will come into play in operations as well. You may know that you can send a given shot to India and save a certain amount of money on labor. But you also have to understand that many of the countries where labor costs are lowest are the same countries where telecommunications costs are highest. You may end up plotting those two curves against each other and picking the sweet spot where they cross, as sketched below. The same logic applies to data compression: a powerful compression appliance saves money on storage but demands a certain amount of computational power, so you trade off the cost of the compute against your savings on storage. The same goes for moving data over distance. Say you're shooting in Vancouver to get a production tax credit but posting in LA. You can light up more wavelengths on dark fiber at extra cost, or you could use lossless compression to get the most out of an existing circuit when sending shots to VFX or to the studio for review.
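Here is a sketch of that sweet-spot exercise. The sites and per-shot figures are hypothetical, chosen only to show the shape of the tradeoff between labor savings and transport cost.

```python
# Sketch: pick the remote site with the best net benefit per shot.
# All savings and transport costs below are hypothetical placeholders.

sites = {
    # site: (labor savings per shot, network/transport cost per shot)
    "LA":        (0,   0),
    "Vancouver": (120, 40),
    "Mumbai":    (300, 260),
    "Chennai":   (320, 340),
}

best = max(sites, key=lambda s: sites[s][0] - sites[s][1])
for site, (savings, telecom) in sites.items():
    print(f"{site:10s} net benefit per shot: {savings - telecom:+d}")
print("best tradeoff:", best)
```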

Even real estate becomes part of the equation. Many content creators operate out of downtown neighborhoods, in hip old buildings that used to be warehouses, and all of a sudden a 10,000-square-foot machine room full of servers is no longer an efficient use of space. You want to co-locate the render farm and maybe even the workstations, accessing them through PC-over-IP protocols, which works well as long as you manage the latency for the artists, because every square foot of your facility is now under the same financial scrutiny. Are you gaining enough from putting artists to work in your hip downtown Manhattan location to justify the cost of putting your data center across the river in New Jersey? That's also a question of data gravity, just on a smaller scale.

Another major factor influencing these decisions is the arrival of powerful virtualization technology. Media and entertainment people don't care about the infrastructure or where it's located. They just want their applications to work the way they're supposed to so they can get their jobs done. Just a few years ago, it was hard to push pixels in a virtualized, CPU-only environment. Now that VMware has virtualized GPUs, you can rack-mount a server and dedicate one GPU per thin client or share a GPU among as many as four. Part of it is companies like EMC working with ISVs to say, "Do you really want to be in the hardware business?" Selling turnkey workstations is great until you have to maintain and upgrade them. For the longest time, we thought we had to be on bare metal for performance reasons. Now virtualization has gotten good enough that we can use it for everything from lightweight desktop applications to heavy-duty pixel-pushing -- or for relocating hardware to less expensive real estate with a fast pipe between the artists and the servers.


Search and Analytics: Keys to the Kingdom


Similar decisions about data gravity affect our working lives in ways we probably don't even consider. Think about your Gmail account. Do you delete your Gmail? Do you go through your iCloud photos and clean them out? I don't. It's much cheaper, in terms of the value of my time, to keep everything online and rely on search to find things when I need them. I used to spend time maintaining good data hygiene, filing everything and checking for duplicates. But today's storage is cheap enough, and modern search tools are good enough, that you can just keep everything and search for it. It's no longer cost-effective to care about data hygiene, and our data lakes keep growing. Isilon will be addressing that trend at NAB by showing a new integration with Elasticsearch that indexes billions of objects with sub-second response time. In other words, we're addressing human nature -- the tendency to rely on more capable search tools rather than invest the time and energy in maintaining good data hygiene.
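As a concrete illustration of the keep-everything-and-search approach, here is a minimal sketch using the standard elasticsearch-py client (8.x-style calls). The index name, fields, and paths are hypothetical, and this is not the shipping Isilon integration; it simply shows how asset metadata can be indexed once and then queried instead of manually filed.

```python
# Minimal sketch: index asset metadata, then search instead of filing.
# Assumes the elasticsearch-py 8.x client; index and field names are
# hypothetical and not the actual Isilon/Elasticsearch integration.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index one asset's metadata (normally done in bulk by a filesystem crawler).
es.index(index="media-assets", document={
    "path": "/ifs/post/show/ep08/vfx_0420_v12.mov",
    "show": "untitled-drama",
    "episode": 8,
    "codec": "ProRes 4444",
    "last_access": "2017-03-02T09:14:00",
})

# Later: find everything touched for the episode-eight reshoot.
resp = es.search(index="media-assets", query={
    "bool": {
        "must": [{"term": {"episode": 8}}],
        "filter": [{"range": {"last_access": {"gte": "2017-02-01"}}}],
    }
})
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["path"])
```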

Improvements in search will go a long way toward helping us deal intelligently with data gravity. The single most valuable metric you can have as a content creator is intelligent analysis of your own usage patterns. Nobody can make you fundamentally change those patterns. Maybe you're detail-oriented and like to get every last duck in a row before moving on to the next part of a job. Maybe you like to go crazy creatively and sort out the details later. No application vendor can force you to conform to a specific way of working. That's why we want to provide metrics to users -- analysis of traffic patterns and usage patterns -- that will clue them in to the best way to organize their pipeline to cope with data gravity. That kind of analysis will tell you what data needs to sit in your hot pool, what can be parked in your nearline pool, and what can be pushed off to the cloud because you don't think it's ever coming back.
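A usage-driven tiering policy can start out as simple as the following sketch. The age thresholds and pool names are placeholder policy for illustration, not a recommendation; in practice the thresholds would come from the traffic analysis described above.

```python
# Sketch: sort assets into hot / nearline / archive pools from access data.
# Age thresholds and pool names are placeholder policy, not a recommendation.

from datetime import datetime, timedelta

NOW = datetime(2017, 4, 1)

def pick_tier(last_access: datetime) -> str:
    age = NOW - last_access
    if age <= timedelta(days=14):
        return "hot (all-flash / SAS pool)"
    if age <= timedelta(days=120):
        return "nearline (dense SATA pool)"
    return "cloud archive (probably never coming back)"

assets = {
    "ep02_conform_v07.mov":  datetime(2017, 3, 28),
    "ep08_vfx_0420_v12.mov": datetime(2017, 1, 15),
    "pilot_dailies_cam_a":   datetime(2016, 6, 3),
}

for name, last in assets.items():
    print(f"{name:24s} -> {pick_tier(last)}")
```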


ATSC 3.0: Back to the Future


Let's return to my premise at the beginning of this article -- that we're in the midst of a fundamental transformation in the media business. It's about to happen to over-the-air distribution, in the form of ATSC 3.0. When the telecommunications industry moved from a circuit-switched model to a packet-switched model 30 years ago, it was absolutely revolutionary. It paved the way for the telcos to become what they are today: they moved from a capital-intensive, technology-focused business model to a more user-focused, service-centric one. Telcos are re-architecting their core networks for 5G wireless because today more than half of the data used on mobile devices is video. Television broadcasters are doing something similar right now with ATSC 3.0.

The implications go way beyond TV. ATSC 3.0 has geotargeting and personalization. It has interactivity. It has a targeted multicast function that is, theoretically, supposed to be for emergency uses such as 911 services. But when I see features like targeted multicast personalization, it screams advertising to me, and we've seen advertisers use any and all technologies available to reach consumers. In fact, there is a proposal in front of the FCC saying that smartphones should have ATSC 3.0 tuners built into them, giving people access to emergency services when they need them. Do you suppose the telcos will be happy about that? They may see mobile video from broadcasters as a threat -- or maybe it will be the other way around. Either way, this underscores the ways in which telcos and broadcasters are going through the same stepwise evolution, but at different times and targeting different sectors of the market.

Think about it this way: there is no such thing as broadcast television anymore. The telcos, OTT providers and broadcasters are now in the same business — on-demand entertainment services delivery. It's going to be fascinating to watch the market, and the technology, evolve to reflect that revolutionary reality.