Re-thinking QoS: Paris Metro Pricing
We’ve been reviewing some documentation from network operators recently. A recurring theme is the need for “end-to-end QoS (Quality of Service) guarantees”. We’re not convinced this is such a compelling business requirement any more. Business is about creating or capturing a scarce resource or capability and charging for access to it. We’d urge those involved in commercial functions at operators to question the QoS orthodoxy and what the network engineers are commissioning. What in your business is really creating value that users are willing to outbid each other to access?
Is the user willing to pay for more than best-effort delivery?
One way of looking at IMS is that it is a system for assigning finite network capacity to users based on the technical and economic needs of their applications. Since those applications have radically different throughput, jitter, latency and resilience needs, the claim is that only through active management by the network operator can they be made to co-exist peacefully. Thus for a decade and more the traditional telecom outlook has been that “quality will win out” over Internet competition. Meanwhile, Internet absolutists reject any possibility that anything other than best-effort, non-discriminatory packet delivery is needed. The existence and popularity of Skype should give the former pause for thought; the latter should be concerned by the conspicuous success of services like SMS, which vertically integrate identity, payment, access and application service.
We’d like to point to a “third way” that offers a better solution than either. The particular example we will discuss is Paris Metro Pricing. It brings into question the fundamental objectives of IMS as an industry-wide platform beyond PSTN replacement.
Squeezing the user through the toll gate
Two of the essential parts of the value proposition of services delivered over any IMS architecture are reliability and quality. To differentiate themselves from pure Internet players and turn a profit, fixed and mobile operators perform two actions. First, they create circuits or channels with attached bandwidth guarantees. This creates value for end users, which can be charged for. They then try to eliminate or discourage alternative “best effort” means of communication (to maximise profit). This can be done by technical or contractual means, subject to market power and regulatory constraints. For example, Verizon reserves most of the capacity on its residential fibre network for its own video service and forces its own restrictive equipment into the user’s internal home network; and Vodafone bans VoIP from 3G services through contract terms.
IMS involves the reincarnation of circuits as sessions to be managed by the service delivery platform. (IMS enables other value-add functions such as smart call routing, but that is a separate issue to QoS. In the absence of QoS issues, other architectures become competitive.)
What gets priority? And who decides?
The problem is a deep and fundamental one. The network operator isn’t necessarily best-placed to decide what is most important to the user (and thus deserves priority). Furthermore, an inflexible vertically-integrated architecture fails to adapt to different needs, and itself becomes an artificial bottleneck whose sole purpose is the creation of billable events.
The Guardian newspaper’s report on the Mumbai bombings is all-too-typical of what happens in extremis. Just when people are willing to communicate via any means available, at almost any price, the network fails them.
Witnesses reported body parts littering the railway tracks. TV news channels broadcast footage of bystanders carrying victims in driving rain to ambulances and searching through the wreckage for survivors and bodies. Confusion and panic was compounded when the local mobile phone network collapsed.
The same happened in New York, Madrid and London. The design decision to give priority to voice traffic—due to the constraints of 1990s technology—fails to reflect user need. For every voice call that goes through, ten are turned away. No nines, not five nines.
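The arithmetic behind that failure mode is the classic Erlang B blocking formula: a circuit-style network dimensioned for normal load turns overload into outright refusal rather than degradation. A minimal sketch, with channel counts and loads invented purely for illustration:

```python
def erlang_b(offered_load, channels):
    """Erlang B blocking probability, computed via the standard recurrence."""
    b = 1.0
    for m in range(1, channels + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

# A cell dimensioned for 50 erlangs on 60 voice channels blocks few calls...
normal = erlang_b(50, 60)
# ...but a tenfold surge in offered load turns away roughly nine calls in ten:
crisis = erlang_b(500, 60)
print(f"normal blocking: {normal:.1%}, crisis blocking: {crisis:.1%}")
```

A call in this model either gets a full-quality channel or nothing at all; there is no way for the surplus demand to trade down to a cheaper, lower-grade mode of communication.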
“Intelligence at the edge” applies to people as well as machines
This idea that the edges of the network should call the shots isn’t new. Indeed, it’s the same observation that underlies the 1981 paper that introduced the end-to-end principle that forms the foundation stone of Internet design.
functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level. Examples discussed in the paper include … duplicate message suppression … and delivery acknowledgement.
… The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the end points of the communication system.
In other words, only the edge points have the context needed to implement these functions. (There are clearly some limits to the applicability of these ideas that we won’t cover here, such as security or broadcast content.) Protocols like TCP/IP were thus born, and best-effort internetworks created with no need for common agreement on flow control or handling dropped packets. (Contrast this with the reams of technological specifications and testing needed to peer two IMS networks.)
In the disaster scenario, users were unable to engage in a graceful service degradation from wideband audio, to barely comprehensible audio, to push-to-talk, through to IM and eventually to store-and-forward email. The services and their priority were set in stone. Furthermore, every user may have different preferences and needs, all of which change dynamically. Only the users knew what should be given preference, and how much they were willing to pay to get through.
The same issues apply in more mundane everyday situations, just less dramatically and quickly.
How to preserve the best of the Net - and cast off the worst?
Fortunately, we already have a technique to deal with distributed information and resource contention: the market. The issue becomes: how can we create “spot markets” in connectivity in a manner that doesn’t burden the user with decisions?
This area has been explored in depth in the past. One solution proposed and patented by former AT&T Labs researcher Andrew Odlyzko is Paris Metro Pricing (PMP). There are several other competing “less than best effort” proposals, each with its own merits. PMP deserves study because of its minimalism; it may not be the best form for a particular application or operator, but gives us the best insight into what’s wrong with the IMS approach to QoS.
The idea is very simple indeed. Partition the connectivity into multiple “dumb pipes”, each with its own price per packet. You’ve not baked in any particular assumptions about the technical nature or user value of any particular application. Nobody needs to speak IMS dialects of SIP if they don’t want to. Performance degrades gracefully, as long as the applications place their traffic down the right virtual pipe.
The name comes from the former ticketing system on the Paris Metro, where first and second class carriages and service were identical, bar the price. The double cost of a first-class ticket created a self-regulating system of congestion control.
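The self-regulating effect is easy to see in a back-of-the-envelope simulation. Everything below — the prices, the capacity, the linear congestion-cost model and the user population — is an illustrative assumption of ours, not a description of any real deployment:

```python
import random

random.seed(42)
CAPACITY = 100.0       # nominal capacity of each pipe (identical, as on the Metro)
PRICES = [1.0, 2.0]    # per-packet price: pipe 1 costs double
users = [random.uniform(0, 5) for _ in range(200)]   # each user's congestion sensitivity

loads = [200.0, 0.0]   # start with everyone crowded onto the cheap pipe
for _ in range(200):   # damped best-response dynamics
    choices = [0.0, 0.0]
    for s in users:
        # perceived cost = price + own sensitivity x congestion (load / capacity)
        costs = [PRICES[i] + s * loads[i] / CAPACITY for i in (0, 1)]
        choices[costs.index(min(costs))] += 1
    # move only part-way toward the new allocation so the dynamics settle
    loads = [0.8 * loads[i] + 0.2 * choices[i] for i in (0, 1)]

print(f"cheap pipe: {loads[0]:.0f} users, premium pipe: {loads[1]:.0f} users")
```

The congestion-sensitive users self-select into the pricier, emptier pipe; no central scheduler ever inspects a packet to decide who deserves priority.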
The challenge to IMS
PMP is implementable today with off-the-shelf open standards such as diffserv. It potentially offers a completely different future and architecture than IMS. No longer do you have to provide a “customs declaration” to the network of the traffic’s content in a telco-specific dialect of SIP. Since the edge device selects the quality needed directly, the network doesn’t need to know what the traffic is to reserve the bandwidth. That means peer-to-peer architectures, at a fraction of the cost of expensive central switches, become very attractive.
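To make this concrete: an endpoint can mark its own traffic with a Diffserv code point via a standard socket option, selecting a pipe without declaring anything about the application. A minimal sketch for a Linux or similar host (the premium/default split is our hypothetical example; real code points and their treatment are a matter of operator policy):

```python
import socket

# DSCP code points: EF (Expedited Forwarding) for a premium pipe, 0 for best effort
PREMIUM = 46
DEFAULT = 0

def udp_socket(dscp):
    """Open a UDP socket whose outgoing packets carry the given Diffserv code point."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # the IP TOS byte carries the DSCP in its top six bits
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

voice = udp_socket(PREMIUM)    # willing to pay for the fast lane
backup = udp_socket(DEFAULT)   # happy to wait
```

Note what is absent: no session description, no registration with a service delivery platform, no per-application signalling — just six bits in every packet header.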
Peer-to-peer file transfer dominates Internet traffic today. Best-effort real-time services like VoIP work well most of the time. The main user concern is security. Given the alternative of PMP (or its many brethren), you have to seriously question how much value IMS’s QoS infrastructure is really adding.
Nothing in life is free, not even bandwidth
There are numerous ways in which the simplicity of PMP would have to be tempered with the reality of actual networks and customers. For example, in mobile networks there may be additional costs to re-authentication (after long idle periods) and channel set-up and tear-down. PMP doesn’t model these. The user interface to applications may have to change to enable users to dynamically change their preferences. “The network is congested and your call cannot proceed. Are you willing to pay 20¢ per minute to complete the call?” There are implementation and accounting costs.
Any form of QoS ultimately destroys network capacity and throughput, never creates it. The compensation, we hope, is that the lost and delayed traffic is worth less than the incremental gain to the highest priority traffic.
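A toy queueing simulation makes the point. In any work-conserving scheduler, strict priority carries exactly the same number of packets as plain FIFO; it only redistributes waiting time from one class to the other. The traffic parameters below are invented for illustration:

```python
import random
from collections import deque

def simulate(strict_priority):
    """Run 10,000 ticks of a unit-rate link fed by two packet classes."""
    random.seed(1)                      # identical arrivals for both runs
    q_hi, q_lo = deque(), deque()
    served = 0
    delays = {"hi": [], "lo": []}
    for t in range(10_000):
        # each class offers a packet with probability 0.45 (total load 0.9)
        if random.random() < 0.45:
            q_hi.append(t)
        if random.random() < 0.45:
            q_lo.append(t)
        # the link serves exactly one queued packet per tick (work-conserving)
        if strict_priority:
            q = q_hi if q_hi else (q_lo if q_lo else None)
        else:
            heads = [qq for qq in (q_hi, q_lo) if qq]
            q = min(heads, key=lambda qq: qq[0]) if heads else None  # oldest head of line
        if q is not None:
            delays["hi" if q is q_hi else "lo"].append(t - q.popleft())
            served += 1
    return served, delays

served_fifo, d_fifo = simulate(False)
served_prio, d_prio = simulate(True)
print(served_fifo, served_prio)   # same packets carried either way; only who waits changes
```

The gamble of any priority scheme is precisely that the delay shifted onto the low class is worth less than the delay removed from the high class — the scheduler itself creates nothing.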
Above all, there are the human factors of marketing, customer expectation, and support to deal with.
Being a “pipe” operator is a good option for many operators, but they are afraid of the “dumb pipe” label and loss of price discrimination ability that service provision gives you. Re-thinking QoS enables you to innovate in the pipe business model. You’re selling priority, not fixed capacity; abundance rather than expensively metered scarcity. Let each user decide whether the online backup or video stream is the more important. Just make them pay for the privilege of displacing other users’ traffic.
None of this is to say that the network operator isn’t actively involved in allocation of bandwidth. As the issuer of a home hub, set top box, media server or dual-mode handset you have considerable scope for integration of these items into a packaged and integrated bundle. It’s just that you need an economist to help you do it, not just a network engineer.