
Plutonium data: prize or problem?

When the British government inconsiderately lost in the post the details of all our children, the event was rather aptly described as a “privacy Chernobyl”. Once out in the open, there’s no undoing the contamination that the release of such data can cause.

By coincidence, we’ve been privately using a metaphor of “plutonium data” for some time to describe the most sensitive data that telcos, by their very nature, must gather and store. The paradox is that this data could hold a key to escaping the “dumb pipe” fate that so many operators fear. It also provides a differentiator and advantage to a telco building a B2B services platform, at least compared to IT platforms such as Google’s. What if the telco brand could be redefined as a trust mark to mean “privacy protected”?

Separating the ordinary data from the extraordinary data

When running any business, you have to deal in what we call “potato data”. That’s the everyday staple diet stuff — name, address, products bought, support calls made. You have to store it somewhere sensible, but it doesn’t need three rows of barbed wire and armed guards. Standard IT tools and procedures will suffice. No special storage or processing facilities are needed. Furthermore, you can pass “potato data” around between enterprises fairly easily, subject to standard privacy and disclosure rules.

In contrast, “plutonium data” is the most sensitive data, such as location, call history, or creditworthiness. It requires special facilities to house it, and can cause havoc if allowed outside. The problem is that this data is potentially very useful in powering interactions between users and various third parties. Sometimes the third party needs to have a shard of radioactive material released: the taxi company needs to know where you are to dispatch the taxi to the right place. We establish special procedures to ensure that this location dip is done according to the rules.
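To make the idea of a rule-governed location dip concrete, here is a minimal sketch of a telco-side gatekeeper. Everything here — the `LocationGuard` and `Consent` names, the consent model, the coordinates — is invented for illustration, not a real operator API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Consent:
    partner: str       # e.g. a taxi dispatch firm
    purpose: str       # e.g. "dispatch" — consent is purpose-specific
    expires: datetime  # consents are time-limited

class LocationGuard:
    """A location dip succeeds only when it matches a live,
    purpose-specific consent already on file at the telco."""

    def __init__(self):
        self._consents = {}  # (subscriber, partner) -> Consent

    def grant(self, subscriber, consent):
        self._consents[(subscriber, consent.partner)] = consent

    def dip(self, subscriber, partner, purpose, now=None):
        now = now or datetime.utcnow()
        consent = self._consents.get((subscriber, partner))
        if consent and consent.purpose == purpose and consent.expires > now:
            return self._locate(subscriber)
        raise PermissionError("no valid consent for this location dip")

    def _locate(self, subscriber):
        # Stand-in for the real network location lookup.
        return {"lat": 51.5, "lon": -0.12}
```

The point of the sketch is the shape, not the detail: the radioactive material only leaves the special facility when a recorded, unexpired, purpose-matched consent says it may.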

It’s not about network APIs

Yet the operator would like to offer more “value add” than just a bulk messaging or location API. That means packaging up APIs and customer data to solve a business problem for a partner. Consider my personal example of signing up for a new electricity supplier. A month or so later, on the day the new service is activated, I receive an SMS welcoming me to Scottish Hydro, and asking me to send my initial meter reading by return SMS. Sadly, I’m over a thousand miles away from home at the time, so it’s a waste of my time and Scottish Hydro’s money.

The job of the telco platform is to optimise business processes such as these. Rather than forwarding the SMS to roaming users, it should be stored until I’m back in the country. Indeed, it should only be forwarded to me at a time when I’m likely to respond. In this case, there’s no point in sending the message when I’m not physically at home. If I get home at 1am after a long business trip (or, for those so inclined, a long pub trip), it’s probably not the ideal time to ask me to hunt in the dark recesses of the under-stair cupboard for a meter reading. Indeed, why not take the level of personalisation one step further? Don’t send the message when the user is in a call. Only send the message at a time of day when the user is normally active. You’re a shift worker who sleeps in the mornings and doesn’t make or take calls? We’ll hold the message back until you’re awake.

Get paid for outcomes, not inputs

The telco is then rewarded not for sending SMS messages, but for optimising the number of responses. You can see the same example playing out across other similar scenarios, such as marketing campaigns. In the case of meter readings, there’s margin to be made between the few cents of a bulk SMS API, and the many dollars of a personal visit by a meter reader (or the capex of replacing my meter with a connected smart one).

The catch is that I don’t want Scottish Hydro location dipping me 24 hours a day. And at standard pricing for such location capabilities, they’d go broke quickly too. The telco has to offer an interface that’s easy for the power company (or their systems integrator) to consume, and is fairly standard across all operators. We see the same issue with mobile advertising today, where media buyers find the telco an almost impossible channel to buy from due to fragmentation, lack of metrics, and low volume.

Mix and match APIs to increase value

This means that, to make this work, the telco needs some kind of combinatorial API that only forwards the message when I get home. This is a mix of location, presence, messaging and personal data and preferences. A key idea here is that we’re keeping all the “plutonium data” inside the telco. The telco knows your location anyway, and we’re not passing that to any third party. All they want are meter readings.
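A combinatorial API of this kind might look like the sketch below: the partner calls one method and gets back only an opaque ticket, while location and presence are consulted purely inside the telco. All the class and method names here are hypothetical, and the stand-in services at the end exist only so the sketch runs.

```python
class Telco2Platform:
    """Partner-facing facade: queue a message for conditional delivery.
    Location and presence never cross the boundary to the partner."""

    def __init__(self, location, presence, messaging):
        self._location = location    # internal services, never exposed
        self._presence = presence
        self._messaging = messaging
        self._queue = []

    def deliver_when_home(self, subscriber, message):
        self._queue.append((subscriber, message))
        return len(self._queue) - 1  # opaque ticket id for the partner

    def _tick(self):
        # Internal scheduler: forward messages whose recipient is now
        # at home and available; keep the rest queued.
        waiting = []
        for subscriber, message in self._queue:
            if (self._location.at_home(subscriber)
                    and self._presence.available(subscriber)):
                self._messaging.send(subscriber, message)
            else:
                waiting.append((subscriber, message))
        self._queue = waiting

# Minimal in-memory stand-ins for the internal services:
class HomeLocation:
    def __init__(self, home=False): self.home = home
    def at_home(self, subscriber): return self.home

class AlwaysPresent:
    def available(self, subscriber): return True

class OutboundSMS:
    def __init__(self): self.sent = []
    def send(self, subscriber, message): self.sent.append((subscriber, message))
```

Note what the partner sees: a ticket in, a meter reading back. The location dip that gated the delivery happened entirely inside the telco’s special processing facility.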

We can easily imagine this personalisation process happening in other ways. For instance, the taxi company I use starts off its IVR with an automated message: “if you want a taxi dispatched right away, press one”. There’s not much point in sending a taxi there if I’m not at home, and it would be equally annoying to ask me whether the taxi company can location dip me. In a future scenario, you can imagine their application passing some VoiceXML to the telco platform to drive a telco IVR. If I’m at home, the telco asks if I want that taxi sent there; otherwise, I go straight to the operator. Again, we’re keeping the “plutonium data” within the telco’s special processing facilities.

The next problem is that every vertical will have its own needs. You can’t keep on spawning APIs for every use case. The next guy will want the “forward this message when the user gets home, but only to adults” API. So just as Web 2.0 needed a new programming paradigm, so might Telco 2.0.

A new kind of computing

The Web 2.0 world rests on two technical innovations. One is web services, which make it easy to fetch data and invoke code remotely. The second is AJAX, which downloads bits of code (i.e. JavaScript) from the server to the browser to be run locally, since those user interface interactions are most sensibly done in the context of the user’s own PC. Web services enable the AJAX code to talk back to the server code.

The Telco 2.0 equivalent is a move towards a software agent model. That means the application sending a bunch of logic or code to the telco to be executed. The telco keeps all the super-private data, and aggregates the behavioural information (“does this person respond to marketing messages?”) across the different application providers. They don’t need to run the whole of their application inside the telco, just the bits that need to interact with “plutonium data”. For example, they might want to route calls based on your previous history of interactions with the callee, and the telco has the CDR history to enable this.
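The call-routing example can be sketched as follows. The agent interface, the CDR shape, and the routing rule are all invented for illustration; a real deployment would also sandbox the partner’s code, which this sketch deliberately omits.

```python
def run_agent(agent, private_records):
    """Execute partner-supplied logic next to the private data, inside
    the telco. Only the (non-sensitive) decision ever leaves; the call
    history itself does not."""
    return agent(private_records)

# Partner-supplied agent: route the call to a human operator if the
# caller has rung this callee three or more times in the last day.
def priority_routing_agent(cdrs):
    recent = [c for c in cdrs if c["age_hours"] <= 24]
    return "human_operator" if len(recent) >= 3 else "ivr"

# Invented CDR history held by the telco; only the decision string
# crosses back to the application provider.
cdr_history = [{"age_hours": 2}, {"age_hours": 5}, {"age_hours": 30}]
decision = run_agent(priority_routing_agent, cdr_history)
```

This inverts the AJAX pattern described above: instead of the server shipping code to run where the user is, the application ships code to run where the plutonium data is.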

This provides a radically different vision of the future of the telco network from the one being pushed today by vendors or IT giants. The “intelligence at the edge” model of the Internet implies user data dispersed all over, and ignores the fact that some of that data has a natural affinity with the network itself. Potentially, we’ve a “get out of jail free” card here against over-the-top threats. So we’re looking forward to sharing this concept more with everyone at the next Telco 2.0 event, but equally look forward to your feedback and opinions right here.
