The visual cloud market comes with an interesting set of challenges. On one side there’s the passion and struggle of artists to create something—a show, a movie, a game, a live experience. On the other side there’s the work of promoting that material to a vast consumer audience inundated with options and whose tastes can change as quickly as the weather in Texas. Technologists fall right in the middle: tasked with bringing an explosion of exciting content to people in a way that sustains the economic engine behind its creation and helps customers find content they like.
There’s no doubt the market is changing. As I wrote earlier this year, we’re entering a new era of storytelling. Global online subscriptions surpassed cable subscriptions for the first time in 2018. (Though the majority of subscribers still pay for both types of services.) Video on demand (VOD) is even set to overtake box office revenue this year. As technologists, we’ll leave the production and marketing to the people of Hollywood. But we’re firmly in the middle, figuring out how to drive down costs while serving up higher-quality video, and more of it.
Preparation vs. Distribution
Amazon recently published a new breakdown of video streaming costs to build a better understanding of the total cost of ownership (TCO) for such services. Left out of their equation, and of our further analysis, were content production and acquisition. They’re not a trivial amount. Netflix alone spends billions each year to secure rights and produce new material. That said, we’ll leave those decisions to others. Our job is the nuts and bolts of getting that material to its intended audience.
Looking at the TCO for streaming video, the two areas technologists can focus on are preparation (transcoding) and distribution (bandwidth, storage, and network optimization).
Based on Amazon’s initial report, we found that the cost of preparation pales in comparison to the cost of distribution. In the example of live streaming from a public cloud, preparation was only 5% of the total cost, with distribution taking up a whopping 95% of the expense.
As with content acquisition, preparation can be a large up-front cost. It is, however, a one-time expense. Preparation is a centralized operation that, depending on scale, can take a relatively small number of servers. Distribution is an international operation involving dozens of data centers, edge and storage nodes, networks, and more. Distribution costs are ongoing, and quickly add up, especially for high-viewership, over-the-top (OTT) services.
Bit Rate Reduction
In looking at Amazon’s report, what we found surprised even us. If a video service can lower its preparation costs by 50%, but doesn’t lower its distribution costs, the savings are almost non-existent. Conversely, if a service company focuses on lowering distribution costs, the savings are instantaneous and dramatic. Bit rate* reduction is the crucial factor in lowering visual cloud TCO.
The TCO savings scale, too. For example, in our further analysis, if a video streaming service company increases its preparation costs in order to achieve 10% bit-rate efficiency, either by optimizing the current encoder or going to a higher efficiency encoder like HEVC or AV1, the tipping point for savings starts at one to two thousand simultaneous viewers. With VOD, bit-rate efficiency is even more important due to the increased storage costs.
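To make the tipping point concrete, here is a minimal back-of-the-envelope sketch of the trade-off described above. The specific numbers (preparation cost, per-viewer distribution cost, the doubling of preparation cost for a more efficient encode) are illustrative assumptions of our own, not figures from Amazon’s report; only the 5%/95% split and the 10% bit-rate reduction come from the discussion above.

```python
# Back-of-the-envelope TCO model with ILLUSTRATIVE numbers (not Amazon's
# actual figures): preparation is a one-time cost, while distribution
# scales linearly with viewer count and with the delivered bit rate.

def tco(prep_cost, dist_cost_per_viewer, viewers, bitrate_factor=1.0):
    """Total cost of ownership: one-time preparation plus per-viewer
    distribution, scaled by the relative bit rate of the encode."""
    return prep_cost + dist_cost_per_viewer * bitrate_factor * viewers

# Arbitrary units chosen so preparation is 5% of TCO at 1,000 viewers,
# matching the 5%/95% live-streaming split discussed above.
PREP = 50.0
DIST_PER_VIEWER = 0.95

for viewers in (100, 1_000, 10_000):
    base = tco(PREP, DIST_PER_VIEWER, viewers)
    # Option A: halve preparation cost; distribution is unchanged.
    cheap_prep = tco(PREP * 0.5, DIST_PER_VIEWER, viewers)
    # Option B: double preparation cost (a heavier encode) in exchange
    # for a 10% bit-rate reduction, which cuts distribution cost by 10%.
    efficient = tco(PREP * 2.0, DIST_PER_VIEWER, viewers, bitrate_factor=0.9)
    print(f"{viewers:>6} viewers: "
          f"A saves {100 * (1 - cheap_prep / base):5.1f}% of TCO, "
          f"B saves {100 * (1 - efficient / base):5.1f}%")
```

With these assumed inputs, halving preparation (option A) saves a sizable share of TCO at 100 viewers but a fraction of a percent at 10,000, while the bit-rate-efficient encode (option B) starts out as a net loss and overtakes option A as the audience grows into the hundreds-to-thousands of simultaneous viewers, in the same ballpark as the tipping point described above.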
The immediate impact bit rate reduction has on visual cloud TCO is why Intel has been so focused on Scalable Video Technology (SVT). SVT is a software-based video coding technology that allows encoders, running on Intel® Xeon® Scalable processors, to achieve the best possible tradeoffs between performance, latency, and visual quality, delivering the lowest bit rate at any given video quality. Yes, video streaming service companies need to invest in the right kind of processor-based infrastructure, leveraging technologies such as Intel® AVX-512 and VNNI, and at the same time, processors have to be paired with robust software optimizations.
In addition, Intel is investing in the Open Visual Cloud project with the intent to unleash innovation, simplify development, and accelerate time to market for visual cloud services. The Open Visual Cloud is an open source project that offers a set of pre-defined reference pipelines for various target visual cloud use cases, leveraging open source building blocks tuned for Intel® Xeon® Scalable processors. These optimized open source ingredients cover the four core building blocks—encode, decode, inference, and render—which are used to deliver visual cloud services.
By starting with these reference pipelines, developers can rapidly build and innovate on the enhanced visual experiences today’s demanding consumers expect. The reference pipelines are provided as Dockerfiles to simplify container image construction and deployment in cloud environments. Without the larger community supporting the software side of the TCO equation through open source initiatives, the visual cloud industry won’t evolve fast enough to meet consumer demands in a cost-efficient manner.
I look forward to the advancements our Intel team will continue to make with our hardware and software research with help from the larger visual cloud community. We’ll continue to invest in open source while optimizing our CPUs and other products across moving, storing, and processing large amounts of data. For more on Intel’s plans to improve efficiency and lower costs for the visual cloud market, visit intel.com/visualcloud.
*The number of bits that are conveyed or processed per unit of time.