How YouTube wins with Web video
YouTube has been so successful that it’s become one of the three applications that have fundamentally changed the infrastructure of the Internet (email and the web are the others). As the first real ‘broadband’ application, it has more than filled the pipes that were overbuilt in the boom time, and it has forced carriers to peer with a content provider for the first time (see our recent analysis of its impact here).
So, how has YouTube coped with the traffic? Below is a short case study of YouTube’s business model and its impact on others. It deals with the vital importance of aggregation and a major shift in the industry’s internal economics. [Ed - this is a short extract from our new report on ‘Fixing the broken Internet Video distribution value chain’, to be released in November. Pre-order now for a 10% discount here]
Everyone knows YouTube - home of any video clip you can imagine, the world’s richest source of stupidity, and the guys who made all videos exactly 425 by 344 pixels. But how did they get to be more than 70% of the Web video market, and one of only three applications that changed the Internet infrastructure itself?
Let’s hold that thought for a moment. In the 1980s, e-mail made it necessary to hook up enterprise systems to the Internet, or at least to wide-area data networks, for the first time. In the 1990s, the World Wide Web became the first Internet application to reach the masses, and created the dynamically addressed, asymmetric access networks we know and love (up to a point). YouTube was the next application after that to achieve sufficient success to force changes to how the Internet works.
The basis of YouTube’s success wasn’t infrastructure; loads of other companies have big data centres, and there are plenty of content-heavy services that depend on Amazon S3 and any given CDN to work. The takeover by Google, of course, gave YouTube access to unrivalled infrastructure resources - but it was YouTube’s explosive growth that forced it to look for a partner with such resources, not the other way around.
Neither was it any engineering aspect of the network layer - it’s basically a naive streaming model, and until very recently you could observe from the UK that your YouTube content came at least as often from datacentres on the US West Coast as from the East Coast, and was almost never served from anywhere in Europe. That’s something like the opposite of a CDN, even if they have now begun serving a lot of content from uk.youtube.com.
In fact, the success of YouTube has been down to two things - the first is that 425 by 344 pixel Flash object. YouTube realised that the best way to get people to look at their content was to get people to promote it themselves, for their own reasons; therefore, it was vital that it fit easily into the then-new blogging tools and social networks. It was also very important that it be as cross-platform as possible.
The second was YouTube’s concern for easy content ingestion. YouTube has value because it has liquidity; you know you can find what you’re looking for there, because everyone else goes there first. A simple (and free) upload process was crucial to building that huge honeypot hub of holiday Hollywood. Essentially, YouTube is an aggregator with a maximally promiscuous ingestion policy: it slurps up all the spare video, which draws immense amounts of traffic, which in turn makes it a valuable advertising property and hence a business.
It’s possible to imagine a YouTube using third-party cloud computing for the back end; it’s been done. It’s possible to imagine something like YouTube with a different network stage in the middle; in fact, that’s been done too. Pushing the video over the cable-TV half of a cable Internet connection, perhaps into a local cache, wouldn’t change the user experience much at all, nor the business model, because the value is concentrated in the user interface and the aggregation. It would, however, spare the ISP’s network a great deal of traffic.
This is where we get to the impact on the Internet. A fundamental concept in Internet operations, engineering, and economics is peering; the word is used in various ways, but it specifically means the process by which two networks (or groups of cooperating networks) agree to exchange traffic to and from certain prefixes on a reciprocal basis, so no cash changes hands. The alternative is transit: the relationship is one of supplier and customer, the scope is the whole Internet (minus any peering you may have), and every bit is paid for.
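To make the distinction concrete, here is a minimal sketch - ours, not anybody’s actual routing policy - of the export rules that normally go with each relationship. The networks and prefixes are entirely hypothetical:

    # Toy model of BGP export policy: what a network announces to a peer
    # versus what a transit provider announces to a paying customer.
    # All names and prefixes are hypothetical, purely for illustration.

    PEER = "peer"
    TRANSIT_CUSTOMER = "transit_customer"

    class Network:
        def __init__(self, name, own_prefixes, customer_prefixes=(), full_table=()):
            self.name = name
            self.own_prefixes = set(own_prefixes)
            self.customer_prefixes = set(customer_prefixes)
            self.full_table = set(full_table)  # only a transit provider carries this

        def announce_to(self, relationship):
            """Return the set of prefixes announced to the other party."""
            if relationship == PEER:
                # Peers swap only their own and their customers' prefixes,
                # reciprocally, with no cash changing hands.
                return self.own_prefixes | self.customer_prefixes
            if relationship == TRANSIT_CUSTOMER:
                # A transit provider announces the whole Internet to its customer
                # and bills for every bit.
                return self.full_table
            raise ValueError(relationship)

    eyeball = Network("EyeballISP", {"198.51.100.0/24"}, {"203.0.113.0/24"})
    print("Peer sees:", eyeball.announce_to(PEER))

    transit = Network("TransitCo", {"192.0.2.0/24"},
                      full_table={"192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"})
    print("Customer sees:", transit.announce_to(TRANSIT_CUSTOMER))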
Traditionally, peering was for telcos and ISPs, and the less transit you used, the more status you had. Hence the term “Tier-1” network, for a carrier that obtains all its connectivity from peering. The Tier-1s were the gods and everyone else was a customer - the advantages of this position should be clear enough. But YouTube’s historic bandwidth consumption changed all that; for the first time, carrier networks extended peering to a pure content network. The advantage is that you replace an uncertain and rising transit bill - dependent on the vagaries of the wholesale transit market and possibly subject to the power of a monopolist - with a fixed sum of CAPEX needed to bring your wires to the IX and link up directly with YouTube.
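As a rough illustration of that trade-off, here is a back-of-envelope comparison. Every figure in it is a hypothetical assumption of ours, not a number from YouTube or any carrier:

    # Back-of-envelope: paying for transit versus peering at an IX.
    # All numbers are hypothetical assumptions for illustration only.

    transit_price_per_mbps_month = 10.0   # USD per Mbps per month, wholesale rate (assumed)
    traffic_mbps = 20_000                 # sustained traffic towards one big eyeball network (assumed)
    peering_capex = 250_000.0             # fibre to the IX, router ports, cross-connects (assumed)
    peering_opex_per_month = 5_000.0      # IX membership and port fees (assumed)
    months = 24                           # planning horizon

    transit_cost = transit_price_per_mbps_month * traffic_mbps * months
    peering_cost = peering_capex + peering_opex_per_month * months

    print(f"Transit over {months} months: ${transit_cost:,.0f}")
    print(f"Peering over {months} months: ${peering_cost:,.0f}")
    # The transit bill keeps scaling with traffic growth and wholesale price swings;
    # the peering figure is largely fixed once the port is lit.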
We analysed BGP routing data from RIPE to see how much of YouTube’s connectivity comes from the peering ecosystem. You can’t directly tell the nature of the commercial relationship between two networks from the routing table, but you can infer quite a lot; for example, if you announce your prefixes to AS174 (Cogent) and import ANY from them, so that they announce everything behind them to you, you’re obviously a transit buyer. YouTube, AS36561, has some direct peers (AT&T, AS7018, looks to be one), but the bulk of the action is over at Google (AS15169). Google operates a community called AS-GOOGLE, which includes its production and corporate networks, Postini Networks (the spam filter in Gmail) and YouTube, and which peers with everyone it can at most global IXen, as you can see here; note that almost everyone is announcing a restricted set of prefixes to Google, as you’d expect of peers.
For example, if you’re a BT, KPN, DTAG, Tiscali or NTT customer, you reach YouTube through Google’s peering presence. If you’re with AT&T, you’re peering with YouTube directly. Beyond that, there are a few networks that YouTube reaches via transit from Level(3) (AS3356) and Global Crossing (AS3549), and a considerable number of small emerging-market ISPs which are scooped up by Cogent. (Mongolian State Telecoms, anyone?) Cogent accounts for 19% of the prefixes YouTube can see, Google’s peering community for some 40%; Level(3) and the rest don’t break double figures, and the Google peers include a lot of big, big networks.
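If you want to repeat this sort of check yourself, something like the sketch below will do. It queries RIPE’s RIPEstat data API rather than raw routing-table dumps, and the endpoint and field names are our assumption about that API, not anything taken from the analysis above:

    # Rough sketch: pull YouTube's (AS36561) announced prefixes and observed BGP
    # neighbours from RIPE's RIPEstat data API. Endpoint and field names are an
    # assumption about the current API; check them against the RIPEstat documentation.
    import requests

    RIPESTAT = "https://stat.ripe.net/data/{call}/data.json"

    def ripestat(call, resource):
        resp = requests.get(RIPESTAT.format(call=call), params={"resource": resource}, timeout=30)
        resp.raise_for_status()
        return resp.json()["data"]

    youtube = "AS36561"

    prefixes = ripestat("announced-prefixes", youtube)
    print(youtube, "announces", len(prefixes["prefixes"]), "prefixes")

    for n in ripestat("asn-neighbours", youtube)["neighbours"]:
        # A neighbour's 'type' only says where the AS appears in observed paths;
        # it hints at, but does not prove, who is selling transit to whom.
        print(n["asn"], n.get("type"))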
It’s precisely the sheer bulk of video pouring out of YouTube that gave them the bargaining power to achieve this. It goes to show just what you can do with good aggregation.
In the video distribution value chain, it’s easy to assume that controlling either the content or the pipes will necessarily give you a powerful position. But this is naive. Good aggregation is as good as good content - in fact, it’s a guarantee of having good content. And controlling the pipes can simply mean that distribution becomes your problem.
Don’t imagine, either, that your status as a big carrier will help you maintain margins or get you into a two-sided position; the emergence of YouTube as a peering actor demonstrates clearly that content providers, eyeball networks, and everyone else on the Internet are willing and able to disintermediate you.
[Ed - fixed telcos are already very fearful of the impact of video distribution on their costs, while mobile operators are just starting to think about the issue. We’ll be discussing how to address it and create a more viable business model at the Telco 2.0 event on 4-5 Nov.]
Comments
This may be a bit of spam, but over at Ars Technica there is an article explaining the economic games behind peering and transit.
http://arstechnica.com/guides/other/peering-and-transit.ars
Posted by: Rudolf | October 9, 2008 6:07 PM