" /> Telco 2.0: June 2007 Archives


June 30, 2007

eHealth in the Digital Home - Big Opportunities for ICT players

Referencing the recent Munich Economic Summit, the London Times ran a headline the other day saying “The most important person in healthcare is the IT guy”. What they really should have said is “…the ICT guy”. There are huge opportunities for telecoms and media players to join technology vendors to generate new business at the intersection of Digital Home and Digital Health.

But we need to work out what propositions and business models are appropriate. We may never achieve the end goal suggested by this image (or want to!)…
[Image: Continua Alliance telehealth] …but the IT sector (and now some telcos) are starting to salivate about the home/health market.

We’re therefore delighted to have Thierry Zylberberg join the ‘stimulus speaker’ list for the Telco 2.0 ‘Executive Brainstorm’ in October. Previously Executive Vice President in charge of new services at France Telecom Group, he was formally announced a few weeks ago as the head of FT’s new Healthcare Division. Why this focus? Well, eHealth is now the fastest growing ICT sector - 15-20% per annum growth for the next 5 years, from which FT Group is aiming to generate 500m Euros of new annual revenue by 2010.

Thierry presented the opportunity to investors back in December, and defined the opportunity for telcos here.

This is, of course, the context for the Digital Home Summit at the Telco 2.0 event. We’re looking at how TMT players can carve out new and exciting business models that not only support physical health for an ageing population, but also support a better quality of life through more appropriate and targeted entertainment, communications and independent living services.

We’re gathering the best stimulus speakers for the brainstorm to look at the issues and opportunities from a wide range of relevant angles: from a large telco perspective, local service providers, consumer electronics and telehealth experts, marketing specialists and cutting-edge examples of implementing ICT for this sector, not forgetting of course the media players who are looking at delivering laser focused content (via IPTV and internet TV).

Here are some stats and examples to whet your appetite:

Today there are 605m people aged over 60. In 2050 there will be 2 billion. In Europe close to 20% of the population (70m people) is over 60, and that figure will double in the near future. This market is the least served by marketers, brands, service providers and merchants of all kinds today, yet it represents huge and growing purchasing power.

The number of doctors and natural carers (families, friends and professionals) will not keep up with demand.

At the European Commission’s Personal Health Systems conference in Feb this year, Paul Timmers, Head of the ICT for Inclusion Unit (DG Information Society and Media) described the value chain and pointed to statistics suggesting that ICT can not only dramatically reduce the cost burden on society (note: on average, 40-70% of healthcare spending in developed countries is for care of diseases and conditions of the elderly), but also offer great market opportunities for service providers:

  • The market for smart homes applications will triple in Europe between 2005 and 2020, from 13 million people up to 37 million.
  • In the Netherlands alone, ‘seniors’ will spend €500 million per year above what they are currently spending if targeted by appropriate ICT products and services.

Another speaker of interest was the Intel representative from the Continua Health Alliance (very useful presentation here). This body was launched June 2006 as “A non-profit, international, open industry alliance of the finest healthcare and technology companies in the world joining together in collaboration to improve the quality of personal telehealth.”

“Our Mission is to establish an eco-system of interoperable personal health systems that empower people & organizations to better manage their health and wellness”.

Continua describe a move from residential care to ‘personal telehealth’, which they define as services that support Independent Healthy Living, Community Clinics, Chronic Disease Management and the virtual Doctor’s Office.

All this has a huge telecoms (and media) element, hence FT’s interest in the sector: email/chat/video, appointment scheduling, personal health records, vital sign monitoring (RPM), medication reminders and compliance, trend analysis and alerts, connecting with family care givers, etc, etc.

So, coming back to the Digital Home Summit on 18th October, we’re delighted to have Nick Augustinos, Board Member of the Continua Health Alliance, Director in Cisco’s IBSG Global HealthCare team, and “the father of the HealthPresence concept”, stimulating the event. Luis Carlos Fernandez, Head of eHealth Solutions at Telefonica, will bring his perspective, as will Daniel Heery from Yorkshire’s CyberMoor project, Caes Rovers, founder of OnsNet in Holland, a ‘grey market’ specialist from Ogilvy, and a Senior Vice President from Deutsche Telekom, among others…

We’re hoping that Adrian Flowerday, Chairman of the Telehealth Group at Intellect, will also join. He has some very practical experience of implementing personal eHealth solutions - see here for a teaser.

If you want to take part in the brainstorm, logistics details here.


June 28, 2007

Sweating the operator’s data assets

Given how we wrote about environmental causes recently, we’re going to engage in a little of our own greenery and recycle some old material on user identity whose time appears to have come. I’d like to show you a diagram I first drew up on a whiteboard over four years ago, one that prompted me to believe that the real opportunity for network operators was to exploit their data assets.

The thesis is quite simple at heart: it’s not just what you know about the customer that matters. It’s what you know about what you “know” about the customer that matters. In other words, is this really your name, your address, and are these really your friends?

We’ve not been blogging recently because we’ve filled every moment of each day with client work. (You’re welcome to join the queue, however.) One of our assignments involves understanding the real assets of operators that can be used to resist competition from Internet players, or embrace co-operation with them. Indeed, this seems to be a recurring theme, and we’ve covered it with several client engagements.

So to make up for our silence, here’s a diagram that summarises the competitive picture of identity and data assets of network operators vs. other players in the value chain:

Let’s understand the model first, and then dig into the implications. You can find some of the credits and sources in the article linked to earlier.

A taxonomy for our metadata

Firstly, we’re going to provide a classification of the data into four buckets, from highly personal and individualised to impersonal and collective:

  • tier 0, the biometric and unchangeable you. Your fingerprint, iris pattern, DNA profile, height.
  • tier 1, “mydentity”, the named person (e.g. “George W. Bush”), chosen by you. You can go to a lawyer and change your name.
  • tier 2, “ourdentity”, the assigned identity (e.g. “The current resident of 1600 Pennsylvania Avenue, Washington, DC”), with the label or name chosen by others (such as the post office and city in this example) but assigned to you. This is a form of shared identity. The canonical telco example is a phone number.
  • tier 3, “theirdentity”, the inferred person (e.g. “living US presidents”), described as a class. Other examples might be segments you’re assigned to for marketing or service purposes, or credit scores.

The lower tier data tends to be more expensive to collect or create. Some think that tier 3 should be called the “marketing identity”, but that seems too narrow and prejudicial a definition to me.

And a taxonomy for our meta-metadata (yes, it’s hard)

We then look at how much we know about the data in these four buckets, with increasing levels of certitude:

  • Level 0: Anonymous data — incomplete or partial profile of the user that can’t be used to identify you. Furthermore, multiple data points supplied by the user can’t be correlated in space or over time. That means it cannot be tracked back to me now; and I can supply different data at different times without the fact being detected. So the fact that I am 36 and male can be revealed to any 3rd party, and as long as they can’t see me in person or hear me then I can equally claim to be a 23 year old woman tomorrow. There are no consequences of lying repeatedly. Privacy is assured. Consider a location-based service which gets a user ID and returns a map, and doesn’t get to see your IP address (as you’re hiding behind a telco proxy or firewall). In this case the location data is completely anonymous.
  • Level 1: Pseudonymous data — incomplete but traceable in time. Here, you have acquired a pseudonym. Cookies in a web browser are the classic example of this. A web site like Priceline that wants to prevent you from submitting multiple identical bids (themselves level 0 data) uses a cookie to track you. You can lie, just not over and over. A location service which sees my IP address can effectively use that as a pseudonym for me and track my movements.
  • Level 2: Asserted data. This is a complete data field that is tied to a particular person, such as your name, address or government ID number. But we know nothing about the data we’ve been given. You could lie about who you are without any consequence; only if you gave a real name, address or number does subsequent fraud or abuse have consequences. You might also not be lying, but could unintentionally give inaccurate data, simply through a typographical mistake.
  • Level 3: Validated data — complete and suitable for repeatable use. Now we’re getting into the realms of “data quality”. This is subtly different from level 2, and moving identity data from level 2 to level 3 is already big business. If I store my data in a profile, and keep giving it out, then it is more likely to be correct than if I have to enter it fresh each time. If someone checks my postal code against an address database, the data may not change one iota, but the meta data now starts to take on value.
  • Level 4: Verified data — non-repudiable and the user has staked some collateral on it. This is the ultimate level, where we add in verification. A trusted third party asserts that the data is true to some degree, in the context of some liability relationship for that assertion. The American Express logo on my credit card acts as an assurance to a merchant that they will be paid in return for entering my credit card number into their system. The assertion is relatively weak, since traditional magnetic stripe cards are easily forged. The credit card number has a check digit and is printed on a card, which at least makes for strong level 3 data.
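
To make the two taxonomies easier to hold in your head together, here’s a minimal sketch (our own illustration: the class and field names are invented, and the subscriber is fictional) of how an operator might tag each identity attribute with both its tier and the level of certitude attached to it:

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    BIOMETRIC = 0     # tier 0: the unchangeable you (fingerprint, iris, DNA)
    MYDENTITY = 1     # tier 1: the named person, chosen by you
    OURDENTITY = 2    # tier 2: assigned identity (phone number, postal address)
    THEIRDENTITY = 3  # tier 3: inferred class (segment, credit score)

class Certitude(IntEnum):
    ANONYMOUS = 0     # can't be identified or correlated over time
    PSEUDONYMOUS = 1  # traceable over time via a pseudonym (cookie, IP address)
    ASSERTED = 2      # complete, but we know nothing about its accuracy
    VALIDATED = 3     # checked against a reference source (address database etc.)
    VERIFIED = 4      # a third party stakes some collateral on it being true

@dataclass
class IdentityAttribute:
    name: str
    value: str
    tier: Tier
    level: Certitude

# One operator's view of one (fictional) subscriber. The asset isn't the value
# column; it's the two metadata columns, and who can credibly populate them.
profile = [
    IdentityAttribute("msisdn", "+44 7700 900123", Tier.OURDENTITY, Certitude.VERIFIED),
    IdentityAttribute("billing_name", "A. Subscriber", Tier.MYDENTITY, Certitude.VALIDATED),
    IdentityAttribute("line_address", "1 Example Street", Tier.OURDENTITY, Certitude.VALIDATED),
    IdentityAttribute("segment", "urban_family", Tier.THEIRDENTITY, Certitude.ASSERTED),
]

for attr in profile:
    print(f"{attr.name}: tier {attr.tier.value} ({attr.tier.name}), "
          f"level {attr.level.value} ({attr.level.name})")
```

Note that moving an attribute from ASSERTED to VALIDATED changes nothing in the value field; it only upgrades the metadata, which is exactly where the value sits.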

The telcos have the most valuable data hostage

The obvious conclusion from the diagram is that different players have complementary data sets. Fixed network operators put physical lines into physical homes, and know the location of the network end points. (OK, there are some seriously embarrassing stories to be told here, but still… the big picture is right). They credit-check you, know who you call and interact with, and where you travel when mobile. With multi-person households, they will often know the family structure. They could probably even work out if you’re married or divorced with kids based on collective weekday and weekend patterns.

Now the diagram above is, inevitably, an over-simplification. Internet players expend a lot of effort validating your email address by sending you a welcome email, and verifying it by getting you to click on a unique link in the text. But their identity assets are generally very weak and narrow.

For companies like eBay who are involved in reputation and facilitating transactions, the telco or ISP at the end of the line could potentially be an ally in resisting fraud. It’s a leak in the “end to end” model which forgets about the physical anchors of the network and any form of payment.

It’s a business you’d like to be in

As a network operator, wouldn’t you rather have the profits and valuations of some of these businesses:

  • Experian, who CAPTURE credit and personal data via a unique network
  • Google, who CORRELATE data between web pages and advert keywords
  • Verisign, who DISTRIBUTE data such as digital certificates into browsers
  • Amdocs, who MANAGE data on behalf of operators
  • Oracle, who MANIPULATE data on behalf of everyone

The opportunity for operators is to be able to federate, project or exchange this data in a win-win-win for them, the partner and (most importantly) the user. This means protecting the privacy of the user to the maximum degree possible.
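
What might federating without leaking look like? Here’s one hedged sketch (entirely hypothetical: no operator exposes this API today, and the subscriber record is invented) of an operator answering a partner’s question about asserted data and returning only a verdict, never the underlying personal data:

```python
from enum import IntEnum

class MatchLevel(IntEnum):
    NO_DATA = 0    # the operator holds nothing usable for this claim
    MISMATCH = 1   # the claim contradicts what the operator holds
    VERIFIED = 2   # the claim matches data the operator will stand behind

# Illustrative operator-side subscriber store, keyed by phone number.
SUBSCRIBERS = {
    "+44 7700 900123": {"postcode": "EH1 1AA", "age_over_18": True},
}

def verify_claim(msisdn: str, field: str, claimed_value) -> MatchLevel:
    """The partner sends what the user typed in; the comparison happens inside
    the operator, and only the verdict leaves the building."""
    record = SUBSCRIBERS.get(msisdn)
    if record is None or field not in record:
        return MatchLevel.NO_DATA
    return MatchLevel.VERIFIED if record[field] == claimed_value else MatchLevel.MISMATCH

# A merchant asks: does this number really live at this postcode?
print(verify_claim("+44 7700 900123", "postcode", "EH1 1AA").name)   # VERIFIED
print(verify_claim("+44 7700 900123", "postcode", "SW1A 1AA").name)  # MISMATCH
```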

If Google can spin $165bn of market cap out of some matrix algebra from scraped hyperlinks off web pages, how much are those call detail records really worth?

We’ll be discussing how operators should go about partnering, surviving and thriving in a data-centric world at our next Telco 2.0 Executive Brainstorm in London on 16-18 October.


June 19, 2007

Converged security — Do we need a new model?

We’re running a couple of “think pieces” about the role of telcos in assuring user security and privacy in the converged world of communications and IT. For our appetiser post we noted how telcos are getting into the security business, and IT companies are struggling to keep their products secure. Then for the second course we served up some thoughts on how security was just one means of changing the basis of competition.

Now we’re onto the main course. This is going to be a high cholesterol intellectual diet full of unrefined ideas and potentially hazardous raw ingredients. Follow this recipe at your peril. The same caveats as the first article apply: we might be really, deeply mistaken and these are just provisional thoughts.

Computer science, meet social science

Today’s networked applications — email, browsers, file sharing, and so on — are based around attaching a network interface to a 1940s computer architecture. It just happens that the computers got a trillion times faster or so over the past six decades.

There are three underlying causes of difficulty in making these safe and convenient-to-use in a hostile online environment:

  • Firstly, computers have only a very primitive classification of data internally, if any. They keep the data from different concurrent programs safe from each other, but within the memory allocated to any one program all bits are equal. At best they understand the difference between “instructions (code)” and “data”. Computers of the future need to model the real complexity of the different natures of the data stored within them, in ways they don’t today. If you’re a business person reading this blog, it’s always uncomfortable to hear that there’s some nasty problem lurking down inside the basement of the data centre. We hire CIOs and CTOs to help us pretend that technology can be cordoned off as a business concern.
  • Secondly, it matters where the bits came from and go to outside the CPU — and this needs to be tracked in meta-data about machines, people and institutions which is currently discarded too readily. Today’s security model is from a pre-networked era. A document I created isn’t the same security risk as one emailed to me. But my computer can’t tell them apart, not in the processor, storage, OS or applications.
  • Finally, these two issues need to be bridged, so the different types of data can be passed around and preserve their metadata properties: not everything is a “bag of bits”. But the “human factors” side of integrating permissions, auditing and communication needs addressing.

In other words, just as enterprises grew accounting, reporting and auditing functions that became integral, similar systems of control need to be incorporated into future connected devices.

It’s the humans, stupid

A little diversionary side story on this last point. At the open day for applicants to the university I ended up going to, the tutor had a recommended reading list. So I picked the textbook for the first term — “Introduction to Functional Programming” by Bird & Wadler (there’s no chance of ever forgetting that mental image of sparrows and ducks) — and read it on the school bus. All kinds of strange mathematical ways of describing computer programs that were provably derived from their specification, as expressed in predicate calculus.

It all seemed like rather a lot of work to put together a ten-line sorting algorithm, given I was used to churning out a hundred lines of working machine code an hour. (At 36, I’ve already been playing with computers for a quarter of a century.) So on the way into the interview, a bag of nerves, I said that I’d read the book, but had a question: how exactly do you go about writing the specification for a word processor in formal, mathematical predicate logic — and ensure your specification is correct?

I never got a satisfactory answer during the whole of the three years there. Still waiting.

It’s those damned humans in the process. That’s where all the interesting problems lie. So our real objective is to relieve humans of responsibility for security, be they programmers or users. Where they are given choices, they should be meaningful ones. That means something better than “do you want to install this application and have it trash your data or capture your keystrokes?”, which is where we de facto are today.

So a confident communications system is going to pay as much attention to the user experience as to cryptography.

Bits are just bits (except when you act on them)

So, let’s tackle our three problems. Firstly, let’s get a better classification of the bits, which we can then use to create richer security policies and tracking of bits sourced from elsewhere:

  • Instructions that act as instructions (“executable data”). These are the instructions that your CPU runs through, executing code.
  • Instructions that act as data (“compiled data”). When your compiler turns program text into machine code, it produces this output as data.
  • Data that acts as data (“payload data”). The purpose of such data is solely to be stored, re-transmitted, or rendered for human consumption on some external device. The content of an MP3 music file might be a good example.
  • Data that acts as instructions (“flow control data”). This is user data that is used to drive decisions within the CPU — which branch of instructions to follow — the flow of the code.

The difference between “flow” and “payload” data isn’t in the data itself, but in whether it is transferred into memory, loaded into the CPU, and then used by an instruction that affects the flow control of the CPU. That’s because data that just “passes through” isn’t a security risk — a security problem has to do something that deviates from the intended flow. No change, no risk.
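
As a toy model (ours alone, and nothing like what real silicon would do), the classification and its one interesting rule, that only promotion to flow control triggers a policy check, might look like this:

```python
from dataclasses import dataclass
from enum import Enum, auto

class BitClass(Enum):
    EXECUTABLE = auto()    # instructions acting as instructions
    COMPILED = auto()      # instructions acting as data (compiler output)
    PAYLOAD = auto()       # data that is only stored, forwarded or rendered
    FLOW_CONTROL = auto()  # data that steers a branch decision in the CPU

@dataclass
class TaggedData:
    payload: bytes
    bit_class: BitClass
    origin: str            # where it came from; see the provenance sketch later

class AllowLocalOnly:
    """Illustrative policy: only locally sourced data may steer execution."""
    def allows(self, data: TaggedData) -> bool:
        return data.origin == "local"

def branch_on(data: TaggedData, policy: AllowLocalOnly) -> bool:
    """Using data to choose a code path promotes it to FLOW_CONTROL, which is
    the only moment a policy check is actually needed."""
    data.bit_class = BitClass.FLOW_CONTROL
    if not policy.allows(data):
        raise PermissionError(f"untrusted flow-control data from {data.origin}")
    return len(data.payload) > 0

# An MP3 passed straight to the renderer never trips the check...
song = TaggedData(b"\xff\xfb...", BitClass.PAYLOAD, origin="downloaded")
# ...but the same bytes used to steer execution would raise PermissionError:
# branch_on(song, AllowLocalOnly())
branch_on(TaggedData(b"1", BitClass.PAYLOAD, origin="local"), AllowLocalOnly())
```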

(If you’re already feeling queasy, this is a good time to skip a course and wait for dessert. Normal flavour of business commentary will resume shortly.)

As you can guess, we’re going to build a new and different security policy scheme based on this classification.

Fundamental problems already exist

Some important points about these categories:

  • Today, we’ve got the wrong granularity of permissions on “executable data”. When I install an extension into my Firefox browser, it forms part of the same executable and inherits all of Firefox’s permissions and capabilities; yet I have far less confidence in the trustworthiness of such extensions than I do in the browser itself.
  • Compiled data needs to “jump the fence” and be treated as executable code. In today’s operating systems, this might just be a permissions or extension change to a file. As you can imagine, I’m less than impressed by the ease with which such a thing can occur! All these buffer overflow problems have been given extensive prophylactic treatment in new operating systems like Vista, but I’m not convinced the OS is the place to enforce this.

Where did you get that bit, where did you get that bit?

So we’re going to assume that any data passing through the processor is classified according to the above scheme. Later we’ll look at how we attach permissions based on the real class of the data.

The second part of the puzzle is assembling some kind of social history of the data we’re dealing with. It’s time for computers to stop having unsafe sex with strange data just because the data looks pretty and innocent. At the very least we need to track whether the data has been sourced locally (typed in, captured on a webcam, etc. into some local editing tool), or imported from afar. Users tend not to be saboteurs of their own computers in the same way as the controller of a zombied PC half a world away.

Ideally, we’d like a longer and more detailed audit trail of where the data has come from. Which website did I download this file from? Who typed this data in? Which email did I paste this text from?
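
A minimal sketch of what carrying such a trail alongside the data could look like (again ours, with invented names; a real system would also need to make the trail tamper-evident):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceEvent:
    action: str   # "typed", "downloaded", "pasted_from_email", ...
    source: str   # "local_keyboard", "https://example.com/file.doc", ...
    actor: str    # "local_user", "remote_peer", ...
    when: datetime

@dataclass
class TrackedDocument:
    content: bytes
    history: list[ProvenanceEvent] = field(default_factory=list)

    def record(self, action: str, source: str, actor: str) -> None:
        self.history.append(
            ProvenanceEvent(action, source, actor, datetime.now(timezone.utc)))

    def is_locally_authored(self) -> bool:
        # A document built entirely from local keystrokes is a very different
        # security proposition from one assembled out of imported fragments.
        return all(e.actor == "local_user" for e in self.history)

doc = TrackedDocument(b"")
doc.record("typed", "local_keyboard", "local_user")
doc.record("pasted_from_email", "message-id:<abc@example.com>", "remote_peer")
print(doc.is_locally_authored())  # False: part of it came from afar
```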

Putting it all together

Now for the hard bit — putting together the metadata and the “Confident computing” environment where we’ve separated out our different security concerns.

Firstly, we’re going to take a somewhat looser view of what an “application” might be. It’s not one-to-one with an icon down on your taskbar, but a lump of executable code from a discrete source. So using the Firefox browser example, the extensions come from a different source, and should be tagged as such and subject to different security rules. We’re provisioning applications onto devices — and even pushing the granularity down to below the application. Individual bits of script in a web page, sourced from different places, are then subject to different rules. Your banking application may request to access the smartcard reader attached. If you visited this blog and it made the same request, you’d be suspicious.

At the moment we have a reactive approach to dealing with security permissions. When the application tries to do something that requires a greater security level, it trips the switch and the OS or browser pops up a request window. If the application requires a whole bunch of permissions, you get a whole bunch of requests in succession. People don’t like to be nagged, so they turn off such warnings. As soon as Vista launched, the first sets of hints and tips were on how to turn off the nagging nannying security dialogue boxes.

(Before Linux and Mac users start to gloat, really it’s no better there either. Creating faux users for different applications, and adding in granular user-based permissions to the filesystem — HP-UX, RIP — just doesn’t cut it. Either you’re designing in security and modelling it properly, or you’re not.)

Ask for permission, not forgiveness

Instead we need to move to a proactive approach. When we install an application, or first visit a website requiring above-normal system access, it should make all the security requests necessary. A website for editing pictures might request access to the My Pictures folder. A conference calling application can ask to view your webcam. Today, I download an application and install it, and it gets access to anything and everything on my PC.

This means applications need to come with a “manifest” of what they are and intend to do.

The choice would also be presented to the user in a simple and palatable manner. Rather than cryptically listing the permitted OS system calls and resources, there might be standard bundles of permissions (e.g. “File editor”, “System diagnostics”), and it might list the overall high-level permissions, any existing applications with that set of permissions, and any deviance from the standard set of permissions. It’s easy to tell if my music player should be given the same permissions as my banking application or not.
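
To make the manifest idea a little less abstract, here’s a sketch of what one could contain (an invented format, not any real packaging standard; the permission and bundle names are ours):

```python
# Standard permission bundles the user already recognises, plus one invented
# application manifest that declares everything it will ever ask for, up front.

STANDARD_BUNDLES = {
    "file_editor": {"read_user_files", "write_user_files"},
    "system_diagnostics": {"read_system_logs", "read_hardware_status"},
}

photo_site_manifest = {
    "name": "photo-editor.example.com",
    "bundle": "file_editor",
    "extra_permissions": {"access_pictures_folder"},  # the deviation to highlight
    "never_requests": {"network_upload", "smartcard_reader"},
}

def permissions_to_review(manifest: dict) -> set:
    """What the user actually has to look at: the named bundle is summarised in
    one line, and only deviations from it need a decision."""
    return set(manifest.get("extra_permissions", set()))

def prompt_needed(manifest: dict) -> bool:
    # A purely transformative module, with no bundle and no extras, never nags.
    return bool(manifest.get("bundle")) or bool(permissions_to_review(manifest))

print(permissions_to_review(photo_site_manifest))         # {'access_pictures_folder'}
print(prompt_needed({"name": "video-rotate-extension"}))  # False: install silently
```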

Because we’re tracking the nature of the data in memory and being stored, as well as its origin, we can do some interesting things. Any unit of code that doesn’t need to access a system resource (e.g. network, file system, devices) isn’t a security risk per se. For example, an extension module to rotate videos in a web-based editing suite can be installed and the user might not even be troubled with a request for any security permissions. Its function is purely transformative. I don’t have to believe anything about what it says on the tin — I know it can’t access any parts of my personal data or modify any system files. It has no “flow control” impact outside its container.

Likewise, data captured from my webcam and passed through to the video system and the screen, without being used to influence a branch inside any CPU code, isn’t a security risk. No need to ask for permission to do anything.

Make it quick, make it easy

So far we’ve managed to simplify the user’s experience by:

  • Not nagging about “transformative” actions that have no security implications. Do I want to install this extension or piece of code? Sure, if it doesn’t have any chance of side-effects.
  • Asking for permission once and only once in a meaningful manner when new code is installed or downloaded that wants to access secure resources.
  • Allowing policies that make more sense to the user in a networked world.

The next trick is to make it all happen fast. In principle all conventional digital computers can emulate each other (the Church-Turing thesis — I knew that final year computability course would come in useful some day). The only difference important to us here is in the physical security of the packaging (which is what trusted computing technology achieves) and the speed at which they execute. So my suspicion is that most of the confident communications capabilities will require support from the silicon layer. Indeed, there may come a point in the near future where most of the transistors in a processor are given over to auditing, policy and cryptographic functions. Eliminating the evil bits becomes more of an imperative than processing the good ones faster.

Practical lessons? We’re going to make you wait…

We’ll look at some consequences of these ideas and practical baby steps operators and others can take towards a more secure communications and computing environment in our next post. But as a parting thought this time, consider how difficult it remains to install, update and manage client software — be it Java, Windows Mobile or Symbian — on mobile handsets. Plenty of people want to be the “container” in which all software must run, and become an essential facility. Adobe’s Flash clients are there already, Google’s latest technology is trying to bridge the Web and PC models. But given their heterogeneous nature, can any one of these software solutions ever be the answer? Or is the passing of secure, identified, audited data and code between computers inevitably a job for some other part of the device and software stack?

In the meantime we’re sure there’s a ton of research out there on building secure networked devices that we’re not aware of. If you think there are some gems that might point the way to the future, why not share them using the comment feedback below?


June 18, 2007

Making advertising more personal and actionable

Sometimes a simple example makes things clear in your mind, particularly when it’s your own money at stake! We’ve a long description of the whole advertising value chain and the telcos’ role in it in our report, and two key operator assets are customer preferences and personal data. Here’s a real-world case study of putting them into effect.

I’m selling the flat (US readers: apartment) I bought ten years ago when young, free and single. We’re moving into somewhere twice as big now there are four of us. A nice man came to put up a for sale sign outside last week. In fact, in windy Edinburgh this is an important skill, as signs poking out of buildings are prone to fly away with the next gust. I’d show you a picture, except I’m now 2000km away.

Anyhow, there’s a bit down the bottom to send an SMS message to a short code for more details. Which I duly did, as faithfully recorded by Nokia Lifeblog:

Back come the property details:

(Sorry, no discount to Telco 2.0 readers.) Then along comes another SMS:

How many people bother at this point to triple-tap their email address? How many SMS inquiries aren’t followed up because the person forgets the message by the time they get home? An email in their inbox with the images of our immaculately presented property with stunning views (but still no discount to Telco 2.0 readers, sorry) is worth a lot to me. If only for the moral satisfaction of them seeing the place not strewn with kids’ toys for the first time in years.

The opportunity for an operator here is to capture the permission of the user to pass on returned emails when they initiate a short code message. No email address is ever given out, no spam is ever sent. This is data you already have for a large proportion of your subscribers.
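
Mechanically, the operator-side piece needn’t be complicated. A sketch of the relay (everything here is hypothetical: the short code, the campaign record and the subscriber’s email are invented, and a real deployment would sit on the operator’s messaging and mail infrastructure):

```python
# Hypothetical operator relay: the subscriber texts a short code, opts in, and
# the advertiser's follow-up is emailed on, without the email address ever
# leaving the operator.

SUBSCRIBER_EMAIL = {"+44 7700 900123": "subscriber@example.com"}  # operator CRM
CAMPAIGNS = {"84222": "brochures@estate-agent.example.com"}       # short codes

opted_in = set()

def on_inbound_sms(from_msisdn: str, short_code: str, body: str) -> str:
    """Handle the subscriber's text to the short code and capture permission."""
    if body.strip().upper() == "YES":
        opted_in.add((from_msisdn, short_code))
        return "Thanks - full details will be emailed to the address on your account."
    return "Reply YES and we'll email you the full property details."

def on_advertiser_followup(short_code: str, to_msisdn: str, message: bytes) -> None:
    """Forward the advertiser's material only where permission was captured."""
    if (to_msisdn, short_code) not in opted_in:
        return  # no opt-in: nothing is forwarded, no address is disclosed
    forward_mail(sender=CAMPAIGNS[short_code],
                 recipient=SUBSCRIBER_EMAIL[to_msisdn],
                 body=message)

def forward_mail(sender: str, recipient: str, body: bytes) -> None:
    # Stand-in for the operator's outbound mail system.
    print(f"relaying {len(body)} bytes from {sender} to {recipient}")

print(on_inbound_sms("+44 7700 900123", "84222", "YES"))
on_advertiser_followup("84222", "+44 7700 900123", b"<html>property details</html>")
```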

Also, Edinburgh is a pretty international city with a very active financial services industry. My property is central — walking distance from the registered offices of two of the world’s largest banks. A fair few of the potential buyers will be non-UK residents. Wouldn’t it be nice to address them in their own language based on known language preferences? You can kludge it if they’re roaming and there’s a country code in the caller ID, but it’s a nasty mess.

This isn’t the only such example. I was on the tube in London a few months back and one of the adverts was a promotion for pet food. (No, dearest daughter, you are NOT having a doggy, no matter how much you perfect the psychological warfare techniques.) Send us an SMS, and we’ll give you a free sample.

You can already imagine the nightmare interaction of having to text flat number, house number and postcode in some obscure format, confirming the returned address, and so on. A telco is in such a sweet position to send your address straight to the post office or shipper (not the advertiser) and get the package routed to you. Why not go one better and do a deal with the Royal Mail to give everyone their own virtual PO box, using their mobile number as a key that forwards the mail to you? Now that would be converged advertising in action.

Have a good idea about telcos and advertising? Why not share it with your industry peers at the Telco 2.0 Digital Advertising & Marketing Summit?


June 14, 2007

Real-time services: Changing the basis of competition

In our earlier article we began to muse about some of the problems and opportunities for offering security-based products and services in the communications value chain. In particular, the challenge is to go from trusted computing technologies to confident communications value propositions to end users. The idea that large numbers of computers can become zombies under the control of third parties shows something is deeply amiss.

We’ll dive into the complexities of network and processor design and float some radical ideas in a subsequent post. In the meantime, let’s look at the business issue:

Changing the basis of competition to something that favours the network owner over its direct industry competition, as well as outsiders.

In this case we’re looking for a security answer to commodity minutes and megabits. But in the meantime, let’s go for a side trip on how the basis of competition shifts around and examine a wider pool of case studies.

Competing on security and convenience, not price and features

We’ll stick with security and privacy for the first example. I recently saw this advert from Paypal in the newspaper:

It’s quite clever — read it carefully. They’re undermining the traditional payments industry, but not by competing on merchant fee levels or user kickbacks for using debit and credit cards. Instead, they’re attacking the weakest link in the chain: a human call centre operator or merchant Web server intermediating the payment and presenting a security risk. The “intelligence at the edge” idea applied to payments means connecting straight to Paypal without anyone in the middle to capture your identity or authentication credentials.

We’ll be coming back to this one later.

Is green the new way of staying in the black?

The environment is a hot issue at the moment, literally and metaphorically. Standing on the dire Waterloo & City line a few weeks ago I saw this advert for Orange broadband trampled underfoot:

An advantage of online media is that you don’t get the physical waste of paper advertising. Will you want your brand associated with street litter and waste in future? How well are operators doing at getting their own and third party marketing material off paper and onto screens? Given two price plans and handsets that you’re indifferent between, wouldn’t you select the one that promised less landfill?

Making money from hot air

BSkyB have announced their intention to become carbon neutral. They’re not alone in grasping for new ways to differentiate their offering in a noisy marketplace. The IT industry has picked up on this too, spinning a new marketing message of energy efficiency, repackaging existing government initiatives with fresh PR spin. When you come to buy a new PC, and they’re all screamingly fast, will you just choose the one with the lowest power bills?

A quarter of the opex of running a mobile network is typically power. Mobile operators are major power generators and innovators in energy delivery in emerging economies. At the other end of the radio link, people assume phone chargers are a waste of power when plugged in. As it happens, it’s rubbish — but perceptions matter. Can’t be long until telecom really catches up with the green wave.

Spin over substance?

Hot air isn’t the only eco issue for telcos. Public concern about the safety of Wi-Fi signals recycles older worries about cellular phones. (Maybe the path to a perpetual motion machine is to sit a journalist in front of a radio transmitter whilst slowly turning the power output down to zero. The output of scare story column inches appears to be independent of signal power, and surely you can capture some energy from the keystrokes.) Whilst it’s not hard to whip up such concern among the public, an operator can be sensitive to such concerns. Why don’t you sell your home gateway with a WiFi strength meter so concerned parents can check how infinitesimal it is in the kids’ nursery?

Then once you’ve bought, used and outgrown your electronic gizmo, you want to toss it away. If you’re selling twenty handsets a second then at some point there’s going to be a mountain of e-waste: twenty obsolete plastic bricks a second added. There are already initiatives to recycle or dispose of used phones, but are telcos and their retailers lagging the marketing sophistication of the IT players again? Are they stinging your conscience into bringing all those toxic, explosive old batteries into the store?

Sharing the love

Last year there was a big promotion on Red-branded phones, where part of the sales pitch is you’re helping the helpless of Africa:

A gimmick? I’d rather take a holiday there and spend some hard currency with the economically active. But it’s a clear change from buckets of minutes.

One of the patents I co-invented at Sprint was for smart interchangeable phone faceplates, where the faceplate has some affinity brand on it, like a soccer team, charity or alma mater. When you spend money on the phone with the faceplate attached, a proportion of your spending goes to the affinity partner — just like with a co-branded credit card. I’m sure Sprint will license it to you for a fee…

Getting the message to match the market

If you’re going to try to compete differently to the competition, you need to be able to communicate this to the customer. Here’s a good example of how not to do it:

The flagship Nokia-branded store on swish Regent Street in London. What’s Nokia’s hottest product right now? The N-series multimedia phones. Nokia are competing on brand and features — my phone’s (literally) bigger than yours. The message? Cheap memory cards and packaged minutes. Oops. OK, this is really a Carphone Warehouse store in disguise (I think) and Nokia plan to fix it, but you get the idea.

The shifting proposition of voice minutes

Let’s circle back to the core voice service proposition of network operators. We’re now entering a third generation of digital real-time communications. The first was the replacement of analogue systems with digital equivalents. There was minimal visible change to end users: some audio quality enhancement and a few new features. The business model remained selling minutes and messages. In this first wave, the basis of competition was service distribution: either being given a distribution monopoly as a PTT, getting your central offices upgraded pronto, or building out mobile coverage.

Then we had the initial wave of IP and Internet telephony, from the pure POTS replacement of Vonage to a more enhanced (but still stand-alone) Skype. The business model remains sale of voice minutes, although the price points differ, with lots of flat rate and free telephony thrown in. Price (and to a lesser degree network quality) became the basis of competition.

So far, incumbent operators have done well out of all of these. For every cent Skype has made, some telco has turned a buck selling broadband Internet. Voice revenues have held up as volume growth exceeds margin decline. A parallel instant messaging world grew up, but carriers were making too much money from SMS to really notice or care. The IM vendors were too greedy or short-sighted to interoperate until it was far too late to create a rival in terms of ubiquity and convenience of SMS.

Ultimately, core revenues are at stake

The third wave is slower, more subtle and ultimately more threatening:

  • The user’s social network — and the buddy list/address book — migrates to one of the social networking sites.
  • Your personal data — photos, documents, payment instruments — is sucked up by Google, who are then in a position to point you to their preferred communications channel when you want to share it around.
  • Communications are increasingly initiated from within third party applications (e.g. web pages, video games, Web TV) as “click-to-interact”, with the context being carried over and affecting the user experience. So I don’t need to tell the call centre agent I’m on page four of the mortgage application form when I call for assistance, and she can see what’s on my screen.
  • The IP-based pipe is then used to create a more secure and convenient channel between users and merchants — as we saw above with Paypal. This is the security threat: someone else builds confident communications, and grows it from the strong base of the above applications and services.

Relationships and conversations, not minutes and megabytes

It doesn’t matter how cheap your minutes are if the user feels their bank account or online identity could be compromised, or they’ll expose themselves to spam or unwanted privacy intrusions. People have demonstrated that they’re willing to pay for privacy and security. On the other hand, if it’s a lot of manual hassle to shunt data around, users won’t bother. Security has to be relatively unobtrusive. (Other Paypal adverts have also focused on their simpler user experience.)

The basis of competition in this third wave is about prompting and connecting people who need to interact — social and contextual awareness — whilst preventing unwanted interruption and connection — i.e. security and privacy. By the time the phone call starts, the money-making is over. If the call was wanted, it’s the broker who gets the payment; if it wasn’t, it’s the filter.

The telecom business model of adding value in transit is in some ways incompatible with this more secure world with value created at the “edges” of the network. It assumes a degree of control and a closed ecosystem with a few large players offering mass market services, and the use of contract and lawyers to enforce good behaviour.

So if the basis of competition shifts, you could be in trouble. Can’t happen to you? Hope you’re not in the telegram, telex or fax business. In this case, it’s not a shift in medium, but rather in message: sending rich information in a secure format. Who will be the Paypal of telecom? Probably not Skype — there are more threatening products out there.

Assume that we move towards more open ecosystems and a world of niche communications products. These still manage to interoperate via various global “backplanes”. Making the system secure and convenient is critical, and it’s not going to happen by centralised design of a telco standards committee. It’s a multi-decade problem to solve, so don’t expect a Telco 2.0 market report on telecoms and security any time soon either.

Which takes us back to our original problem. How can we build a more secure, private communications system? That’s the next stop on the journey.

If you want to meet like-minded people looking for growth opportunities and competitive advantage in delivering voice and messaging services, you need to come to our Digital Product Innovation summit. Alternatively, debate the wider issues of telco and media business models at our Executive Brainstorm plenary sessions.


3rd Telco 2.0 Executive Brainstorm - 16-18 October, London

Following our last post on the topic, we’re delighted to give our blog readers first viewing of the site for the October Telco 2.0 ‘Executive Brainstorm’, here.

You’ll hopefully see that we’ve worked long and hard on trying to put together something of real value to those trying to make a difference in the Telecoms, Media and Technology sector. So far, we’ve been rewarded with a flood of companies contacting us about sponsoring (that’s good, because it costs a very large sum to put on the event), but equally pleasing has been the interest from senior industry figures in acting as ‘stimulus presenters’ for the brainstorming:

For example, from our favourite UK ‘Telco 2.0-compliant’ company (why? see here, here, and here, with a caveat here), we’re delighted to confirm Steve Robertson, CEO of BT Openreach who’ll speak in the plenary session about the liberating effect of structural separation on business model innovation. Openreach is one of the most important stories in global telecom today.

JP Rangaswami, CIO and President of BT Global Services, is one of the most interesting people in telecoms and IT today (your brain fizzes for hours after you’ve met him). He’ll be stimulating our Digital Youth Summit. Why? Read his comments about Facebook on his blog.

Kip Meek, previously Chair of the European Regulators Group and now supporting the Broadband Stakeholder Group, is trying to wake up government to the real threats and opportunities from the rapidly converging broadband value chain. He’ll be supporting the Digital Cities Summit and the Plenary.

From across the pond, Bill Gajda, Chief Commercial Officer at the GSM Association, and a force of nature for shaking up the (still complacent) mobile industry, will be helping with the Digital Advertising and Marketing Summit.

You’ll see from the speaker invitee longlist that there will be plenty of other international stimulators. We’ll give you a preview of them on this blog as they are confirmed over the coming weeks and months.


June 13, 2007

Best of Telco 2.0 blog — One year on

Today’s our first blogiversary, so we thought we’d celebrate by pulling out a few of our gems from the last twelve months that you might have missed.

Business models

We’re proud of our consulting client work on designing new business models, but for obvious reasons you don’t get to read about that on the blog. Still, we’ve tried to share a bit about our methodology, such as defining a process for business model design, and examples of how to go about business model innovation.

We’ve also done some hypothetical case studies: designing a “blue ocean strategy” for an operator to serve the urban poor, and building a Telco 2.0 compliant MVNO.

We also look at some more theoretical concerns, such as how networks of compatibility arise.

Communications over content

We made an early foray into the opportunities for telcos to develop smarter personal communications services rather than being obsessed over media content sales. Continuing on this theme we’ve talked about how the telephony experience can be improved, as well as rethinking the phone book, and the prospect of the personalised telephony experience.

On the business side, rather than the product and user experience, we’ve covered the real sources of value in SMS, our voice and messaging survey results, and the future business model for telephony.

Advertising

Our Telco 2.0 market study on the role of the network operator in advertising is a wealth of unique information on a hot growth area, but you’ll get a lot of the key ideas by reading our blog posts. You might want to start with an introduction to the key issues. We’ve published summaries of our first and second advertising workshops at the last two Telco 2.0 events, as well as extracts from the executive summary of the report.

Delving into more detail, you can avoid doing any real work for the rest of today by learning about:

Platforms and partners

Telcos desperately need to open up their services to partners and allow in a little fresh innovation. Taking a business perspective first, we jointly reviewed with Keith McMahon the weaknesses of the Internet businesses, as well as the nature of adjacent businesses enabled by platform, pipe and identity assets.

If you build a platform, remember to address commercial needs as well as technical application integration. Finally, a strong strategy is nothing without good execution and a great user experience.

Marketing and pricing

When we’re not designing new business models and products, our attention veers towards the sale of what’s already on the truck. So in that light, we’ve pondered what telco brands really stand for, how to price for abundance rather than scarcity, and how you might rethink Friends and Family pricing plans.

Business model map

One of the highlights of our past year was the publication of our map of how valuable bits might be moved between users in different ways in the future. You might like to start with this overview before going into more depth with parts one, two, three and four of the essay that describes what we now call the Data Transport Systems Map. We’ll be doing more research on this over the summer — follow the overview link for details.

Networks

Being telecom, we can’t ignore the networks themselves and network technology, so why not burden your browser with a few more tabs and windows:

Industry comment

Finally, we’ve picked on a few well-known names to write about, with a retrospective of 3GSM, Microsoft’s telecom products, and BT’s Telco 2.0 strategy.

If you’d like to meet the Telco 2.0 team in person, why not come to our next Executive Brainstorm in London on 16-18 October?


June 12, 2007

BT’s product strategy — the Telco 2.0 version

We’ve been thinking a lot recently about BT’s product strategy (don’t ask why, we won’t tell!). Ben Verwaayen, BT’s boss, has a vision of BT’s business model supporting global, open and real-time services, for which he got some stick in the trade press.

Let’s re-interpret those words, but with the Telco 2.0 spin (and penchant for puns):

Glocal, clopen and hard-time.

What BT is trying to do is establish some guidelines/principles on how products and services access the development resources of BT. The gatekeeper to the product and project pipeline needs to test each proposal to spend money against two questions:

  • Is it protecting and extending the existing business lines? (The first Telco 2.0 strategy.)
  • Is it creating an open platform (the second Telco 2.0 strategy) for global, real-time services? (More below.)

The pipe (the third strategy) is assumed, and given the Chinese walls and equivalence rules, Openreach and the access infrastructure are a separate world we’re not considering here.

We think the global-open-real time mantra is heading the right way, but needs some refinement.

  • “Glocal” steals from destined-for-big-things danah boyd the idea of glocalisation. Every service is global, but the successful ones can adapt to local (and personal) needs. The Firefox browser might be a good example: one base product, an unbounded number of combinations of extensions and customisations within that framework. BT’s products need to be adaptable by third parties to niche/local markets and needs — “retail and OEM”.
  • “Clopen” means we’re looking for the right balance between openness to any partner and having some kind of tolled access to a scarce and valuable economic resource (the closed side). i-mode allowed anyone to become a partner, but you could only bill to a DoCoMo customer’s invoice and only for services that generated revenue-earning data traffic.
  • “Hard time” doesn’t mean telecom feels like a prison sentence, even if it feels like it to investors. Rather, you’re looking for applications that the Internet sometimes may have a hard time delivering. One such example is demanding real-time applications where you can control QoS etc.; but it’s not the only one. Another example could be heavyweight video content that works best with a local content delivery cache. We’ve come up with a long list of all the ways the IP/Internet abstraction breaks down (just don’t expect us to give it all away for free!). These gaps are profit opportunities.

What’s missing from most operators is a rigorous way of managing the product and project pipeline that aligns resources to long-term strategy. Too often it’s predicted (“fictitious”) ROI that drives investment decisions. But if that means the doughnut shop over the road is making higher returns on invested capital and better margins, you’re ordering flour and jam instead of mobile handsets and base stations. Even worse, those product ROI figures are based on all kinds of cost accounting tricks. Yes, you need to keep one eye on the cash flow. But the other needs to be on the road to glocal, clopen and hard-time services.


June 6, 2007

From trusted computing to trusted communications

We try to make our writing here at Telco 2.0 practical as well as analytical and philosophical — as commercial writing in more formal media would be. Normally we try to end articles with something that addresses the “so what?” question. We like to bring you answers. This time, we’ve just got an important question. Fortunately, a blog is also a great place to float half-formed ideas as well as force you to structure your own scattered thoughts. Blogging is a different medium from a newsletter, or a journal. As Doc Searls, editor of Linux Journal, puts it:

So I write a lot about the Net, the Web, blogging, podcasting and the rest of it. And maybe I’m wrong about a lot of it too. Hell, what does anybody know? The whole thing is still new. Everything we say about it is unavoidably provisional.

So, this is the first of a few essays with some provisional (and possibly wrong) thoughts about delivering security and privacy to users in a mass and massively networked age. These are problems and opportunities for everyone in the communications value chain. Indeed, they were sparked by a consulting assignment for someone in the silicon part of the ecosystem. Our punchline (you’ll have to wait) will be this: there’s a very deep architectural problem in how computers and networks interact, and we’ve got some ideas on how to do things differently.

The economics of security are different from the rest of the communications value chain

There’s big money in the security game. Why? Well, consider the four basic means of making money from any communications service:

  • Create or capture valuable bits. This could be shooting a movie, or giving the users a phone to talk into.
  • Add value to existing valuable bits. For example, there’s a great platform opportunity for telcos to open up their infrastructure to deliver personalisation services to third party application, portal or search providers who don’t know anything about you.
  • Move valuable bits from A to B so they can interact with the user. Enough said.
  • Stop unwanted negative-value bits from getting through to the user, or even being sent in the first place.

This last category is special. For the first three, the amount of value you create roughly corresponds to the number of bits you process. But one rotten bit can undo the value of billions of tasty ones. If only there was an easy way of identifying them… hmm, well there’s an IETF standard:

Firewalls [CBR03], packet filters, intrusion detection systems, and the like often have difficulty distinguishing between packets that have malicious intent and those that are merely unusual. The problem is that making such determinations is hard. To solve this problem, we define a security flag, known as the “evil” bit, in the IPv4 [RFC791] header. Benign packets have this bit set to 0; those that are used for an attack will have the bit set to 1.
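
In code, the whole filtering problem would then collapse to something like this sketch of ours, which checks the reserved high bit of the IPv4 flags/fragment word, the position the spec nominates:

```python
import struct

EVIL_BIT = 0x8000  # the reserved high bit of the 16-bit flags/fragment-offset word

def is_evil(ipv4_header: bytes) -> bool:
    """The entire network security industry in four lines, per the 'standard'."""
    flags_and_fragment, = struct.unpack("!H", ipv4_header[6:8])
    return bool(flags_and_fragment & EVIL_BIT)

# A minimal 20-byte IPv4 header with the evil bit left at zero.
benign = bytes([0x45, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x00,
                0x40, 0x06, 0x00, 0x00, 0x0a, 0x00, 0x00, 0x01,
                0x0a, 0x00, 0x00, 0x02])
print(is_evil(benign))  # False, so wave it through
```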

If only! Sadly, the “evil bit” specification is an April Fool. The take-away is this, however: there is a different set of actors and incentives at work; the user value proposition, and thus the supporting brand and marketing, is unlike selling a service or pipe; and the problem is acknowledged to be hard enough to warrant spoof standards to generate some humorous relief.

Telcos already have a disjointed set of security businesses

There’s been some interesting activity in this last space recently. BT bought up security specialist Counterpane. Verizon have done likewise.

Particularly in the US, managed home security and alarm services have long been a significant revenue source for local operators. The former CTO of pre-merger AT&T, Hossein Eslamblochi, was always convinced that the money was in stopping the bad bits, not sending the good ones. Deep packet inspection is useless for service price discrimination, but vital to some security processes.

If you’re running an ISP business, you’ll try to bundle anti-virus software with PCs and update subscriptions. You’ll also have a network operations centre working to keep out email spam and mitigate inbound denial of service attacks.

Then there’s the hidden iceberg in telecom: digital identity. Telcos have secretly been the big animals of the identity industry for decades, despite the high visibility of upstarts like Verisign and Neustar. Why do you so willingly pay a hefty fee each month for telephony service, and have bells and ringers in your home that could wake or interrupt you at any time? Because the telecoms industry has created the social, institutional, legal and technical framework that keeps the level of abuse at an acceptable level. Make a serious nuisance phone call from home, expect to get arrested. There’s someone to complain to.

So far, operators have resisted extending their vast store of user identity collateral to third parties through technologies such as ENUM. They see the threat (you can point your phone number at a VoIP service and bypass the operator) but not the opportunity.
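
For anyone who hasn’t met ENUM: it maps an E.164 phone number into a DNS name and hangs service records off it. A rough sketch of the mapping (the number below is fictional; a live lookup would then query NAPTR records for the resulting name):

```python
def enum_domain(e164_number: str, suffix: str = "e164.arpa") -> str:
    """Map an E.164 number to the DNS name ENUM uses: keep the digits,
    reverse them, dot-separate them, and append the suffix."""
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

print(enum_domain("+44 7700 900123"))
# -> 3.2.1.0.0.9.0.0.7.7.4.4.e164.arpa
# The NAPTR records published at that name could point callers at a SIP URI,
# an email address, or anything else. That is precisely the operator's dilemma:
# publish it openly and the call may never touch your network again.
```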

Of course, it helps that traditional telco networks are very closed when it comes to keeping out the baddies. Open directories like a public ENUM service aren’t the telco way of doing business. Which is where the other side comes in: the IT industry and the Internet.

The Internet and PC culture: open to good and bad

You can install any application you want on your PC, and connect to any address or service you like on the Internet. This creates huge option value, as you’re not locked into someone else’s idea of what the use patterns might be. There’s no shortage of innovation that exercises those options. Unfortunately, the malicious and fraudulent actors on the network show a great deal of creativity too. This openness comes with a price.

From ENIAC to Apple

Traditionally, design trade-offs in CPUs have been made in the processing of the bits; the passing of the bits between processors is an afterthought. The standard literature will teach you about the Von Neumann architecture. The chief issues were around how to wire up the processor and memory. (Storage and networking have never been treated as “first class” objects that need to be modelled in the CPU; they’re just undifferentiated bus interfaces that are driven by the OS software. The CPU can’t tell a USB mouse from an ethernet port.)

So the design issues were things like: should one instruction operate on one piece of data or many? How many instructions should run at once? Should we architecturally separate “instruction memory” from “data memory” (the Harvard architecture)?

So all bits are beautiful and good. The value comes from combining them on a processor and spitting out value-added bits into memory. Faster equals better. All applications are trusted to access all data and resources: read any file, message any process, write to any network. The default is “yes” until someone comes along and places user-based restrictions on the application or data. Those users are local to that machine or (at best) the local network. Applications have thin partitions between them so that crashes aren’t contagious and don’t bring the whole machine down. Another key innovation was privileged supervisor modes that let the operating system access things that applications can’t, stopping applications from treading on each other’s toes. But that’s all.
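To make that default-allow world concrete, here’s a minimal sketch (the file and host names are just illustrative) of what any ordinary application can do on a typical PC without ever asking for, or declaring, permission:

```python
import socket
from pathlib import Path

# Nothing below requires any declaration of intent. The OS's answer is "yes"
# unless a user- or group-level permission happens to say "no".

def read_anything(path: str) -> bytes:
    # Any file readable by the logged-in user is readable by *every* application they run.
    return Path(path).read_bytes()

def talk_to_anyone(host: str, port: int = 80) -> None:
    # Any application can open a connection to any reachable address on the Internet.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        print(sock.recv(200))

if __name__ == "__main__":
    print(len(read_anything("/etc/hosts")), "bytes read, no questions asked")
    talk_to_anyone("example.com")
```

Those same two capabilities, reading the user’s data and sending it anywhere, are all a piece of spyware needs.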

Bad bits make you bitter

There’s just an incy, wincy problem.

The security of the system is then left to the operating system. Tens of millions of lines of code. The challenge isn’t Herculean, it’s Sisyphean. You could spend forever and a day on it, and still not get it right. In fact, for all practical purposes, Microsoft has spent forever and a day on splicing DRM and security into Vista, and still some bugger simply reads out “Delete See Colon Backslash” in an audio file on a web page and BOOF! Pray you’ve got an install CD and system backup.

So the problem is that not all bits are good. As noted before, the evil bits can erase the value of the good bits. And networking a computer — or simply passing media around — gives the bad bits a medium through which to pass.

Yes. Some are red. And some are blue.
Some are old. And some are new.
Some are sad.
And some are glad.
And some are very, very bad.

Why are they
sad and glad and bad?
I do not know.
Go ask your dad

Dr Seuss: One Fish, Two Fish

Security as an afterthought

And this Dad says: we started with Trusting computing, a naive but good world, where the worst threat was a prankster in the university computing lab. Then we found out there were real baddies in town, so we built Trusted computing, retrofitting some security onto the general-purpose CPU to enable security between machines.

Avoiding the technojargon, Trusted Computing has one critical feature that lets my computer talk to your computer in a secure way, and know that certain data and software on your computer hasn’t been tampered with, and isn’t faking or spoofing the answer. You don’t need to be too smart to see that this involves a security function between two devices over a network. If you’re in the communications business, you ought to be working out how to make money from this.
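As a rough illustration of the idea (a toy sketch only; real Trusted Computing uses TPM platform registers and asymmetric “quote” signatures rather than a shared key), remote attestation boils down to one machine reporting a signed measurement of its software stack, and the other comparing it against the value it expects:

```python
import hashlib
import hmac

SHARED_KEY = b"provisioned-at-manufacture"   # stand-in for proper attestation keys

def measure(software_images: list) -> bytes:
    """Fold each component into a rolling measurement (PCR-extend style)."""
    state = b"\x00" * 32
    for image in software_images:
        state = hashlib.sha256(state + hashlib.sha256(image).digest()).digest()
    return state

def quote(measurement: bytes, nonce: bytes) -> bytes:
    # The device "signs" (here: MACs) its measurement plus a fresh challenge nonce.
    return hmac.new(SHARED_KEY, measurement + nonce, hashlib.sha256).digest()

def verify(reported: bytes, expected_measurement: bytes, nonce: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, expected_measurement + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(reported, expected)

# The verifier knows what a good software stack measures to, and challenges the device:
good_stack = [b"bootloader v1", b"kernel v2.6", b"media player v3"]
nonce = b"random-challenge-123"
print(verify(quote(measure(good_stack), nonce), measure(good_stack), nonce))   # True
tampered = good_stack[:-1] + [b"media player v3 + keylogger"]
print(verify(quote(measure(tampered), nonce), measure(good_stack), nonce))     # False
```

The interesting commercial question is who runs the verifier, and who gets paid for vouching for the answer.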

What we need is Confident computing, where the humans know the system is working for them and it’s not a zombie on someone’s spambot network (now 100% secure from penetration, thanks to Trusted Computing). Once we have that, we can move to Confident communications, where we can accept untrusted code and data from third parties, and know that they are either immune to contagion, or the effects are reversible.

On mobile, the constraint is power, not processing

There’s another problem. I suspect a large proportion of the runtime of an operating system is given over to performing authorisation and security functions. This bloats the size of the OS (more storage and processing cost) and shortens the battery life of mobile devices. We’re about to move to a world of mobile Internet appliances, and we need a transformative approach to power usage: weeks or months of battery life, not hours or days. That means putting security where it belongs: in silicon or the BIOS (the bit that mediates between the OS and hardware) — not the OS or (heaven forbid!) user applications.

There have been previous efforts to radically re-think computing. For example, rather than running the whole processor against a central clock, do everything asynchronously: only accept an input once the previous stage is done, and pass it on as soon as you’re finished. Don’t wait for the nanosecond hand to tick on to the next stop. Useful, but barely practical, and it doesn’t address the real user crisis, which is being asked to manage the security of a complex device in a hostile networked environment.

So, what’s the bottom line?

We think the whole foundation of computing and machine communications needs re-examination for the networked age.

Yup, you read it here first. Just a minor detail, that the whole system is flawed. Stop what you’re doing. Rescind that router purchase order. Delete your Dell.

This isn’t really a Telco 2.0 thing, per se. It’s a far bigger problem across a vast ecosystem. We probably can’t just turn up, do a workshop for you and unilaterally fix these issues.

Our thesis is that the value in converged IT/communications is shifting. The first move is from computing power to connectivity. We’re well into that process. You’d rather have a slow laptop with WiFi than a fast one that doesn’t connect. The next shift is from connectivity to security, and it requires a fundamentally different device and network architecture.

Which is where we’ll go next.


The Telco 2.0 Methodology - Transport Systems for Information Goods

In our previous post on the Telco 2.0 Methodology we introduced a framework for thinking about business model innovation in telecoms and media. We’ll delve deeper into it in forthcoming posts…

In the meantime, this article provides an update to our analysis of probably the most important structural change facing the industry: the fragmentation of ‘service transport systems’.

By ‘transport systems’ we mean the ways in which Information Goods (valuable bits and bytes, including telecoms services and content) will be transported over the next 5-10 years, and how money might flow differently as a result. It provides an important context for understanding the opportunities for and threats to all players in the Telecom-Media-Technology value chain, and thus for designing sustainable business models.

Since we originally published this in March we’ve talked it through many times with various audiences. This has helped us to refine the way we describe it. Here’s an updated version of the key chart. Below that we explain it in more detail:

Fragmentation.png

So, what exactly do we mean by service/content ‘Transport Systems’?

Here’s a way of thinking about it. With physical goods, it’s easy to see the difference between the goods inside the box and the “postage and packaging” distribution or ‘transport’ part of the product. You can order a toy and have it delivered by next-day courier from Amazon, slower parcel post, or collect in person from a store. Same product, many ways of getting hold of it. As well as the physical box and delivery, there are many different patterns of payment: “postage and packing included”, cash-on-delivery, pre-paid return, self-addressed stamped envelope, agents who collect and deliver on your behalf, or explicit separate payment. The combination of both the physical delivery system and the payment system gives us all the different distribution or ‘transport’ business models.

There’s a similar pattern for information goods. Computer software can be transported to users via retail stores or via download networks like Kontiki. Facsimiles were transported to users via PSTN, ISDN and now the internet (fax.com). Web pages can be transported to users via dial-up PSTN, cable modem, or broadband DSL. Digital photos can be transported via Bluetooth (device to device), the internet (Flickr, Picasa), on CDs, on printed paper, or just by handing your cameraphone to someone for on-screen viewing.

Transport_Systems.png

(We’ve picked the example of content delivery here, but the bits can equally be made valuable by the users themselves and flow the other way to a service provider like a social networking site.)

In traditional vertically integrated systems (broadcast TV and radio, and telephony) it’s much harder to see the difference between the service itself and the transport system as they’re tightly coupled together. For physical goods this is rarely the case - most are delivered over general-purpose transport. A petrol tanker which only carries petroleum is probably the nearest equivalent of vertical integration.

So, in telecoms and media we find it harder to distinguish between the service and the transport system, in part because we don’t always have separate words for them. In telephony the service is “POTS” (plain old telephone service), while the transport systems are the “PSTN”, the “mobile phone network”, broadband Internet (which supports SkypeOut) or NGN networks (such as BT’s 21CN). In messaging the service is SMS and the transport system is “the collection of SMS RAN channels and SMSCs and their interconnect” (we don’t have an equivalent word to “PSTN”), or it’s GPRS/3G Internet access plus a 3rd party (e.g. SMSbug, Vyke).

So, how should we be thinking about Transport Systems?

Firstly, as we’ve described in detail before, to understand how transport systems are evolving and how money might flow differently between value chain participants we need to map these systems in terms of their levels of:

  • technical integration (how far is the delivery of the bits tied to one application, or generic to many applications?), and
  • commercial integration (does money flow through the value chain automatically as a result of user content consumption or service interaction, or is it more manual? Does the access network payment cover the service, or vice versa, or are they separate?). A toy sketch of this mapping follows below.
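Here is that toy sketch (the 0-10 scores are our own illustrative guesses for discussion, not measured data):

```python
# Two axes: how tightly the bit transport is tied to one application (technical),
# and how automatically money flows with usage of the service (commercial).
transport_systems = {
    "PSTN / SMSC":                 {"technical": 9, "commercial": 9},
    "IPTV over telco DSL":         {"technical": 7, "commercial": 7},
    "SMS via 3rd party over GPRS": {"technical": 2, "commercial": 3},
    "Broadband Internet":          {"technical": 1, "commercial": 1},
}

def describe(name: str) -> str:
    s = transport_systems[name]
    coupling = "vertically integrated" if s["technical"] >= 6 else "generic, many applications"
    billing = ("money follows usage automatically" if s["commercial"] >= 6
               else "payment is largely decoupled from the service")
    return f"{name}: {coupling}; {billing}"

for name in transport_systems:
    print(describe(name))
```

The ‘zone of opportunity’ in the chart is everything that sits between those two extremes.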

As we’ve shown here, this analysis helps us to see the change from the bi-modal world of telecoms-related service/content transport systems today (PSTN/SMSC at one extreme, Broadband Internet at the other, with a number of much smaller hybrids in between) to a much richer one, with a new and exciting ‘zone of opportunity’ opening up for existing or new players in the ‘information goods transport business’.

What should telcos do about this?

For telco operators to realise this growth opportunity they should focus their efforts on ensuring that, in this increasingly complex world, the retail (end-user) experience is dramatically simplified. This requires making wholesale MUCH more sophisticated than it is today. The ‘Broadband Incentive Problem’ is the industry’s Sword of Damocles. To ward it off and start the process of innovation in wholesale, we must start by breaking up the broadband business model, horizontally and vertically. Horizontally, break it into tiers: free (ad-funded), subload (e.g. backups), standard best-effort, priority and full “QoS assured”. Vertically, slice it up so it can be packaged with devices or services as “postage and packing included”.
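As a minimal sketch of what such a sliced-up wholesale catalogue might look like (the tier names come from the paragraph above; the prices and attributes are invented placeholders):

```python
from dataclasses import dataclass

@dataclass
class WholesaleTier:
    name: str
    delivery: str          # how the bits are treated in the network
    funded_by: str         # who pays, i.e. the "postage and packing" model
    price_per_gb: float    # placeholder numbers, purely illustrative

CATALOGUE = [
    WholesaleTier("Free (ad-funded)", "best effort, capped",          "advertiser",       0.00),
    WholesaleTier("Subload",          "background / off-peak only",   "service provider", 0.02),
    WholesaleTier("Standard",         "best effort",                  "end user",         0.05),
    WholesaleTier("Priority",         "prioritised under congestion", "service provider", 0.10),
    WholesaleTier("QoS assured",      "guaranteed latency and loss",  "service provider", 0.25),
]

def quote(tier_name: str, gigabytes: float) -> float:
    """Price a 'postage and packing included' bundle for an upstream partner."""
    tier = next(t for t in CATALOGUE if t.name.startswith(tier_name))
    return round(tier.price_per_gb * gigabytes, 2)

# e.g. an online backup provider bundling 40 GB/month of subload delivery with its service:
print(quote("Subload", 40))   # 0.8
```

The point is not the numbers but the interface: upstream parties can buy delivery in the flavour, and with the billing relationship, that suits their own business model.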

Slice_Dice.png

Of all the structural and operational changes we see that are required to create sustainable growth, this rich wholesale environment is the most important one. Sure, you can create a value-added personalisation platform appealing to Internet portals, or diversify a little into mobile payments, or eliminate capex by sharing open-access infrastructure. These buy you time. But you’ll still be caught between the Broadband Incentive Problem for Internet access, and the stagnant, closed environments of IPTV, IMS, PSTN and broadcast media.

Over the summer we will be conducting more detailed analysis to create more thorough and tangible descriptions of the scenarios described here, and attempt to provide some market sizing forecasts. If you’re interested in getting involved please contact us.


June 4, 2007

Mobile Advertising: Is the Future Black for Blyk?

We like the chutzpah of the Blyk team. “A new pan-European free mobile operator for young people funded by advertising” sounds good in theory, and we applaud the bravery and boldness of its ambitions. But it breaks the principles for success in this market that we identified in our recent Market Study, and we believe its business model - in its current form - has little hope of success.

Here’s why:

As the company gears up for its summer launch in the UK, the PR machine has kicked into action and management have announced a rash of recent deals:

  1. Advertising partnerships with Coca Cola, Buena Vista (Walt Disney), I-Play Mobile Gaming, L’Oreal, Stepstone (on-line recruitment), Yell (directory services) and others
  2. A Billing, Customer Care and Partner Management platform agreement using MetraNet from MetraTech
  3. An equipment deal with Nokia Siemens
  4. An Ad-serving and Customer Profiling platform agreement with First Hop
  5. And, of course, a network agreement with Orange

But Do Customers REALLY Want What Blyk Offers?

Blyk has been at pains to point out that research shows that its target customers (the youth segment) are ready and willing to receive targeted advertising. This is part of the reason why the company is investing heavily in CRM, ad-serving and customer profiling systems.

This seems like sensible stuff. Google has shown us all the value of targeted on-line advertising. But one of the key differences here is that the user experience for Google is incredibly simple and friendly:

  • Type a search term into a beautifully clean Google homepage, receive a list of highly relevant web pages and highly relevant advertiser pages. Click through to get the information/product/service. Done.

Blyk is proposing a much more convoluted process in which customers must complete a form (or forms) with their user preferences to ensure they receive relevant ads. It’s not clear whether the system then refines these preferences based on usage or whether the user needs to continuously update them but, either way, this is an annoying intrusion. The mobile is an incredibly personal device and we are not convinced that Blyk visibly imposing itself between the user and their desire to communicate with their contacts is a smart move.

As with Google, the power should be in a system which makes this all happen seamlessly and invisibly: I pull the advertising/promotions I want, OR Blyk magically pushes relevant stuff to me. The magical bit could be done via a partnership with a loyalty scheme provider (such as Nectar), where customer buying behaviour is automatically tracked, enabling Blyk to push relevant ads to users. Alternatively, let the user pull ads, for example by asking for deals on Coca Cola from local stores. Just don’t ask me questions about what I want; get out of my face!
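To illustrate the difference (a toy sketch with invented advertisers, offers and purchase history; nothing here is anything Blyk has announced):

```python
from collections import Counter

# Relevance is inferred from tracked buying behaviour (e.g. a loyalty-scheme feed),
# not from preference forms the user has to fill in and keep up to date.
PURCHASE_HISTORY = ["cola", "crisps", "cinema", "cola", "hair gel", "cola"]

OFFERS = [
    {"advertiser": "Corner Shop", "category": "cola",    "offer": "2-for-1 on 500ml bottles"},
    {"advertiser": "MegaPlex",    "category": "cinema",  "offer": "half-price student tickets"},
    {"advertiser": "StyleCo",     "category": "fashion", "offer": "10% off this weekend"},
]

def push_relevant(history, offers, top_n=2):
    """'Magic push': rank offers by how often the user actually buys that category."""
    freq = Counter(history)
    return sorted(offers, key=lambda o: freq[o["category"]], reverse=True)[:top_n]

def pull_deals(category, offers):
    """'Pull': the user explicitly asks, e.g. texts 'deals on cola' to a shortcode."""
    return [o for o in offers if o["category"] == category]

print(push_relevant(PURCHASE_HISTORY, OFFERS))
print(pull_deals("cola", OFFERS))
```

Either way, the user never fills in a form.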

Sure, Yoof don’t like to pay for things but I think the above issue represents a serious demand-side risk for Blyk. In addition, as we have already pointed out in this blog, the inventory available for operators like Blyk is limited - people will only consume a finite amount of advertising associated with telephony services.

Mitigating Demand-side Risks

The real opportunity is to become an advertising enabler when customers head off-portal to enjoy others services. In the case of the youth segment, this is likely to be to music-oriented places like iTunes, Pandora, Sony BMG’s MusicBox, Last.fm etc. etc. We believe that Blyk would do better to focus on helping partners like this deliver more appropriate services AND advertising (for fashion, games, video, etc) to their customers. The social networking sites all do this quite well already, but there are opportunities to improve the community experience of these sites. Google’s interactive TV concept demonstrates how contextual information (which operators have in abundance) can add to the social networking experience enjoyed by the youth segment:

Google%20model.png

Blyk will also have location, (micro-) payment services, customer history and customer demographic information with which to further improve the relevance of services and advertising for these sites. Critically, unlike Google, Blyk knows who else the user calls or interacts with. This latent social information is a potential goldmine. After all, Google has made hundreds of billions of dollars of shareholder value out of scraping other people’s hyperlinks.

Surely, this would be a better long-term bet than merely offering voice and texts in return for ads? Blyk’s management team may be planning this, but all the current noise is around ad-funded telco services.

Supply-Side Risks are Also Substantial

Now, we’re sure that Blyk’s management team have done the numbers. But we struggle to see how advertising alone is a viable revenue model for an operator. Internet-based businesses can achieve exceptionally low distribution costs for their services. The same is not true for operators like Blyk. Network costs will always remain high. It is interesting that Blyk has teamed up with Orange in the UK. Orange has traditionally eschewed MVNO deals (unlike T-Mobile, O2 and, more recently, Vodafone).

So why are they bothering with Blyk, which will never be anywhere near the size of MVNOs such as Tesco Mobile, BT or Virgin? We reckon this is a neat hedge: staying close to a potential competitor which is moving in on a market where Orange has traditionally been strong. Orange has a larger proportion of pre-pay customers than its rivals because it has a greater number of young customers (a legacy of its optimistic ‘Future’s bright’ message). By being Blyk’s network provider, Orange can learn about Blyk’s services, branding and operational processes. But Orange is unlikely to be motivated to trade margin for revenue, since the risk of cannibalising existing customer revenue is high if it drops its wholesale prices too much.

Bottom line: we think Orange will not offer Blyk a supportive enough pricing deal, and Blyk will struggle to cover its network costs with ad-funded telco services alone. On the off-chance that Blyk takes off (with its current model), you can bet that Orange will itself fight back - initially with price-led offers to its youthful pre-pay base, and later by copying Blyk’s ad-funded model.

Better Opportunity: Improving Youth-oriented Voice & Messaging Services

We can’t help feeling that Blyk should mitigate the supply-side profit risk by having at least some services for which it charges users. Young people do have disposable income. They may not have as much dosh as their earning adult counterparts, but parental allowances, student grants and dole money can all be spent on Telco services as well as booze, drugs, fags and music. This would have the added benefit of positioning the company as a value-added service provider rather than a ‘cheap-as-chips’ player. Helio and Amp’d (notwithstanding the latter’s current financial difficulties) have been successful in the US and generate mouth-watering ARPUs.

The key thing for Blyk is to focus on making their voice, messaging and music services, and their devices, BETTER than the competition not cheaper than them. Don’t just think about targeted advertising: deliver targeted Telco services and targeted devices.

Some examples might include:

  • Partner-branded devices - Paul Smith (fashion); Coca Cola; Cobra (beer); etc.. Nokia have produced interchangeable fascias for their phones for years. Why not extend this idea to produce limited edition fascias or phones featuring favourite bands? Young people would pay for a Snow Patrol limited edition phone. Blyk could make money from sales of such phones and accessories on their website particularly if they also considered…
  • Youth-oriented device features. We believe that Blyk would benefit from partnering with handset manufacturers or OS players to integrate a unique youth feature-set into the devices. They could consider such things as: an iTunes button to connect to and pay for songs from the site; a ‘Call Me’ text button to send a FREE text to request inbound calls from (affluent) parents/friends (Blyk would enjoy inbound termination revenues); and/or sharing of inbound termination revenue with Blyk users to grow this lucrative revenue stream. We will be exploring these concepts and others in detail at the Telco 2.0 Digital Youth Summit in October.
  • Youth-friendly voice and messaging services. Make your CORE service more attractive to your customers with a mix of free and paid-for content and services: Offer a selection of FREE and premium rate tailored ringback tones featuring favourite bands/soundtracks. Let users send a Blyk Emoticon to each other for free (on-net only) but pay for a selection of premium emoticons and animated messages. For example, users could send mobile Flash-powered interactive greetings (that work between Blyk users and internet users). Non-Blyk mobile users can download a client (so Blyk starts to parasitise other operators’ customer bases).

Create an eco-system of developers for services and open up the Blyk platform for them to create bespoke applications and content. Offer generous revenue share options for them - other operators do this badly because they see developers as a distraction or threat. Copy Microsoft’s approach with Windows and see them as the lifeblood of the Blyk value proposition. Give them an easy to use Blyk developers kit and allow the most popular services to appear automatically on the device menu or at the user’s request (via over-the-air updates). (In fact, Microsoft can even sell you the tools for this.)

Blyk%20Evaluation.png

