Verizon's P4P initiative: will it support the value chain effectively?

The Telco 2.0 research team is undertaking some detailed business modelling around 'Rich Media Distribution' over the summer. We'll also be debating this with industry leaders on 4-5 November at our next event in London. More on both of these anon. In the meantime, here's some analysis of Verizon's P4P next generation file swapping initiative:

We're not sure how it happened, but Verizon appears to be turning into one of the most interesting telcos around. For a start, there's the fibre - but then again, even AT&T has an FTTH roll-out of sorts going on. Then there's ODI, their developer platform initiative. The whizzy portal-like Dashboard application Verizon Wireless is putting on its LG Chocolates has a publicly available API so people can do evil things to it. But perhaps the most significant change at Verizon is P4P, an attempt to reconcile the huge RBOC with the world of peer-to-peer applications, using a technology developed at Yale University as Haiyong Xie's PhD research project.

We'll start by noting that a lot of people read "P2P" and think copyright. Of course, the means by which you distribute something don't determine its content, still less its intellectual property status, so this is a red herring. Having noted that, we'll move on - we're interested in the telecoms aspects, not the record industry.

Why don't telcos/ISPs like P2P?

In theory, P2P should be one of the most efficient ways of delivering heavyweight content like video, music and big datasets: the more people want a particular file, the more sources of it become available, so capacity scales the way a mesh network does. In practice it doesn't work like that. The Internet looks like a random mesh to its participants, but the underlying topology is very much scale-free, with denser, mesh-like regions interconnected by unusually critical and heavily-used links.

[Image: p2p_chart2.png - OpenP4P chart showing traffic by type]

Thinking of it in economic rather than technical terms brings this out even more strongly: the cost of delivering bits varies sharply as they make their way across the Net, depending on the markets for various kinds of lower-layer connectivity, the distinction between peering and transit, regulatory issues, time, and geography. The problem with P2P clients is that they tend to hammer away exactly as if they were part of a dense mesh network, where it doesn't really matter where traffic comes from or goes to - and that behaviour can cause serious economic problems for an ISP if it means a high-cost sector of the network is heavily used.
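
To put rough numbers on that, here's a minimal sketch - the link names and per-GB costs are entirely invented - of how the same volume of swarm traffic can cost an ISP very different amounts depending on which links it lands on:

    # Illustrative only: invented link names and per-GB carriage costs,
    # showing the shape of the problem rather than anyone's real cost base.
    COST_PER_GB = {
        "metro_backhaul": 0.02,
        "domestic_peering": 0.05,
        "international_transit": 0.40,
    }

    def carriage_cost(gb_total, link_shares):
        """Cost of carrying gb_total of swarm traffic, split across link types."""
        return sum(gb_total * share * COST_PER_GB[link]
                   for link, share in link_shares.items())

    # A locality-blind client pulls from wherever peers happen to be; a
    # cost-aware one prefers whatever this particular ISP finds cheap.
    blind = {"metro_backhaul": 0.2, "domestic_peering": 0.3, "international_transit": 0.5}
    aware = {"metro_backhaul": 0.7, "domestic_peering": 0.25, "international_transit": 0.05}

    print(carriage_cost(1000, blind))   # ~219.0 per 1000 GB
    print(carriage_cost(1000, aware))   # ~46.5 per 1000 GB

The catch, as the next paragraph spells out, is that the expensive row in that table is different for every operator.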

That high-cost sector could be literally anywhere: for an Australian or New Zealand operator, a congested transpacific cable; for an emerging-market one, an expensive international satellite link; for a British operator, the BT-owned local loop under IPStream or the BT Wholesale backhaul links; for an operator in a small but highly connected country like the Netherlands, metro connectivity; and for an FTTH operator, its peering relationships at the friendly local IX. What's certain is that there is always a costly somewhere - it just isn't the same place for everyone.

It's Not That Simple

Some P2P clients now attempt to prefer local peers, but this is where the second problem comes in: what is excellent optimisation for one network is poison for another. Trying to maximise local traffic is exactly what the British or Dutch examples don't want; the British ISP would have to fork out much more to Openreach and/or BT Wholesale, while the Dutch one would much rather push traffic out to the abundant and cheap international connectivity of AMS-IX.

The problem is really that the business models of the two parties to the game don't work together. Both ISPs and P2P users are constrained by their assumptions to behave as adversaries: one desperately trying to stop the flood of traffic or sting it for more money, the other desperately trying to evade them. What they really need to do is to co...well, whatever the verb from "co-opetition" is. If there were a way for ISPs to announce details of the network's cost structure, so that P2P clients could programmatically adapt their behaviour, this problem could be overcome. Rather than imposing crude preferences on users through QoS and deep packet inspection, the ISP would play a tune for the clients to dance to.
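
To show what we mean in code rather than hand-waving, here's a minimal sketch of the idea. Everything in it - the PID names, the cost values, the rank_peers helper - is our own invention for illustration; the real P4P framework defines its own interfaces (an ISP-run "iTracker" serving what it calls pDistances), so treat this as the shape of the thing, not its specification:

    # Hypothetical sketch: an ISP publishes relative "distances" between
    # network partitions (PIDs); a P2P client uses them to rank candidate
    # peers. All names and numbers are invented for illustration.
    ISP_COST_MAP = {
        ("metro-east", "metro-east"): 1,    # same metro area: cheap
        ("metro-east", "metro-west"): 5,    # long-haul between metros
        ("metro-east", "external"): 20,     # off-net via peering/transit
    }

    def distance(my_pid, peer_pid):
        return ISP_COST_MAP.get((my_pid, peer_pid),
               ISP_COST_MAP.get((peer_pid, my_pid), 20))  # unknown: treat as external

    def rank_peers(my_pid, candidate_peers):
        """Sort candidate peers so the cheapest-to-reach ones are tried first."""
        return sorted(candidate_peers, key=lambda p: distance(my_pid, p["pid"]))

    # A client in metro-east choosing among peers its tracker advertised.
    peers = [
        {"addr": "198.51.100.7", "pid": "external"},
        {"addr": "203.0.113.12", "pid": "metro-west"},
        {"addr": "192.0.2.44",   "pid": "metro-east"},
    ]
    for p in rank_peers("metro-east", peers):
        print(p["addr"], p["pid"])   # metro-east first, external last

Note that nothing here compels the client: the map is advice, and a sensible client would still weigh the ISP's preferences against its own throughput and peer availability.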

[Image: p4p2.jpg - OpenP4P Results]

This, in a nutshell, is the aim of OpenP4P. Here are the results of Verizon, Telefonica, Pando Networks, and Yale's field trial of the system. They are impressive, on the surface at least; but we'd note that so far it has only been tested in the context of Verizon's and Telefonica's network topologies. Obviously enough.

The data showed a dramatic cut in the traffic hitting VZ's external peering links and also a big cut in traffic on their long-distance lines between metro areas; unsurprisingly, given that the project's aim was to localise traffic, the utilisation of local loops and metro backhaul increased dramatically (from 6% of total P2P traffic to 57%). We don't know how well it would work if the optimisation target were different - for example, to maximise traffic on LLU lines and lay off the backhaul, or to minimise internal traffic and maximise external.
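
In principle, if the guidance really is just a cost map of the sort sketched above, a different optimisation target is simply a different set of published numbers - for example (hypothetical figures again), the Dutch-style operator from earlier might publish the opposite weighting:

    # Hypothetical: an operator with cheap IX connectivity and expensive metro
    # links publishes weights that push swarm traffic outwards, not inwards.
    ISP_COST_MAP_NL = {
        ("metro", "metro"): 20,      # keep swarms off the expensive metro network
        ("metro", "ix_peers"): 1,    # abundant, cheap capacity at the exchange
    }

Whether the system as trialled copes equally well with that kind of inversion is exactly what the published results don't tell us.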

[Image: interval-bdp.jpeg - OpenP4P results on Verizon: a metric of hop count over time for P2P and P4P traffic]

However, Telefonica's half of the trial does suggest the approach travels beyond Verizon's topology: they saw a smaller cut in the number of hops a P2P packet traversed (57%, against a roughly fivefold reduction at Verizon), but a much greater increase in the amount of traffic served within their own network (up by a factor of 36, against a factor of 10 at VZ).

More Unanswered Questions

There are also some issues of security and trust that still need to be cleared up; the "trackers" that provide the network data to clients are in a very responsible position, which hackers would give their eye teeth for. A malicious tracker would be able to steer all the P2P traffic on the network down the most critical link, for example.
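
One obvious mitigation - entirely our own sketch, not something we've seen in the P4P documents - would be for clients to accept network maps only when they verify against key material provisioned out of band by the ISP, for instance with a simple MAC:

    # Hypothetical mitigation sketch: a client only trusts a cost map that
    # carries a valid MAC under a key shared out-of-band by the ISP.
    import hmac, hashlib, json

    SHARED_KEY = b"provisioned-out-of-band"   # placeholder key material

    def sign_map(cost_map):
        payload = json.dumps(cost_map, sort_keys=True).encode()
        return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

    def verify_map(cost_map, tag):
        return hmac.compare_digest(sign_map(cost_map), tag)

    # Tracker side
    cost_map = {"metro-east->metro-east": 1, "metro-east->external": 20}
    tag = sign_map(cost_map)

    # Client side: refuse guidance that doesn't verify.
    assert verify_map(cost_map, tag)
    assert not verify_map({"metro-east->external": 1}, tag)   # tampered map rejected

That only authenticates the tracker, of course; a tracker that has itself been compromised is the harder problem, and exactly the sort of thing that needs clearing up before anyone deploys this at scale.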

And how is this going to make money? By definition, if we're publishing this information to the Internet at large, we can't really restrict who reads it. More fundamentally, we don't want to do that - because it wouldn't serve our interests at all. Refusing some group of applications the information would just add them to the heap of undifferentiated P2P traffic that's clogging the lines.

Conclusions: Fundamentally Two-Sided

But there is an implicit two-sided market here. Users' cooperation is rewarded with improved throughput and latency; content providers' cooperation is rewarded by better quality delivery; the telco is rewarded with lower costs. Trade, in a sense, has been facilitated by the creation of a new network API. P4P is precisely what telcos and ISPs ought to be doing, faced with this coordination problem. However, we would recommend considerable caution in deploying technologies like this until the engineering and operations aspects of security have been clarified. The best way to clarify them would, of course, be to participate in the Working Group.