Telco 2.0: June 2009 Archives


June 29, 2009

Ring! Ring! Hot News, 29th June, 2009

In Today’s Issue: Vodafone after T-Mobile UK? Femtocells ready to launch; voiceprint ID at Vodafone Turkey; moving to the Smoke; HOWTO install unauthorised software to a Palm Pre; Intel-Nokia strategic alliance; Moto Karma launches on AT&T; AT&T femtocells coming; subscription navigation application; TV to go; Comcast, TimeWarner break with Hulu; Virgin Media to bother filesharers; ARCEP says yes to urban overbuild, divides France into three parts; NSN gets optical kit from Juniper, i.e. Ericsson; the long death of Nortel; new Ericsson CEO talks to the FT; Indians alarmed by exploding Chinese gadgets; GVoice hype and cold water; AdSense for mobile launches; native SDK out for Android; BREW’s future; iPhone 3GS costimates; EBay “bought Skype but not the code”; Genachowski’s in at the FCC; Entanet shoots back in the UK wholesale wars

Consolidation watch: the Financial Times claims there is an offer on the table for T-Mobile UK, from Vodafone. How will OFCOM respond to that? A combined company would have no less than 40% of total spending on mobile service in the UK, and this would trigger yet more repercussions for the spectrum situation. Perhaps part of the regulatory solution would be to trade off a tranche of 900MHz to 3UK, in exchange for T-Mobile’s 1800MHz holdings?

Meanwhile, Vodafone UK is about to launch a femtocell product. It’s going to be called “Access Gateway” - thrilling - and it looks like Vodafone is marketing it on the basis of better in-building coverage rather than any new features it might have. There is some discussion here of what you can and can’t do with them; notably, it will be interesting to see how easy it is to circumvent the IP-address geolocation and take it abroad with you.

If you read the whole thing, down-ticket, there is an interesting development at Vodafone in Turkey; they’re working on a voiceprint-based authentication/identification service, presumably in order to compete with Turkcell’s Mobile Signature. It’s not obvious that voiceprints are particularly solid verification, but it’s certainly an original and innovative idea.

And there’s a major moment in the history of the world’s biggest carrier by revenues; they’re moving HQ from Newbury into the capital, apparently to be “nearer shareholders and business partners”. Which is ironic; the original reason for locating what was then a skunkworks inside Racal Electronics another half an hour down the railway in Newbury, rather than at Racal HQ in Reading, was precisely to keep the nascent operator well away from the owners so they didn’t ask too many questions.

If you’re bored waiting for a femtocell to show up so you can take it to Australia and convince it that it’s still in Surrey, you could try…installing unauthorised applications on a Palm Pre. The exploit is simple; send yourself a link to the application as an e-mail message.

In other technology news, Intel and Nokia have agreed to cooperate on a range of things, including embedded 3G chips, mobile Linux, and devices in the ever-shrinking gap between smartphones and netbooks. This is probably very bad news for WiMAX. However, it’s something of a relief to see that they aren’t breaching tradition by reducing the number of Linux variants in the world; Maemo and Moblin will continue to exist separately.

Motorola, meanwhile, has a new, social networking-targeted gadget out with AT&T, which also announced a coming femtocell deployment this week. Note that unlike Vodafone, AT&T was making noises about integrating other features onto the STB. And they have just launched a subscription-based navigation app and a mobile client for their U-verse IPTV service.

Comcast and Time Warner don’t seem convinced about Hulu - they’re launching their own subscription-based Internet TV service, which will be available on mobile (at least in the end). Does this mean that Hulu is going to be left short of content?

In other cable news, Virgin Media wants to start disrupting service to people it thinks are illegal file-sharers; they deny that this involves spying on their customers’ traffic, and square the circle by saying that the information comes from Universal Music. But where do they get it from? Bound to be highly controversial, whatever the answer is.

ARCEP has decided that France should be divided into three parts: dense urban areas, where fibre overbuild down to the level of individual buildings will be permitted; less dense ones, where multiple operators will share fibre right to the wall socket; and rural ones, where the whole fibre loop will be FTel wholesale.

Regarding fibre, Nokia Siemens Networks recently announced it was partnering with Juniper (an Ericsson division, you’ll remember) for its IP optical networking products. So we presume that means they don’t want the Nortel metro-Ethernet division when it comes up on the block. Telephony Online looks back at the long slow death of Nortel.

There’s an interview with Ericsson’s new CEO here, after Carl-Henric Svanberg left to chair BP this week. The major challenges are Huawei and the continuing mid-market squeeze on Sony Ericsson. Meanwhile, the Indian industry is concerned about the quality of so-called “whitebox” imports from China.

Hype has been building up fast about Google Voice, Grandcentral as was; TelecomTV provides a valuable corrective, pointing out that it has no clear role or route to market in the enterprise. We’d also point out that when you can do essentially anything in enterprise voice with Asterisk, just being free may not be enough, and certainly doesn’t constitute a business model.

More interestingly, Google has launched the private beta of AdSense for mobile applications. Everyone knows AdSense; Google’s real core business, the huge targeted-ad serving system. AdSense for mobile apps is what it says on the tin - a version of it that fires targeted ads inside your mobile application, and kicks back some revenue share if a user clicks through. This is an important step in the mobile apps economy. Google also released the native C SDK for Android 1.5. Meanwhile, there’s a good post on the future of BREW here.

The iPhone 3GS costs $172.46 in parts and $6.50 in labour to make, iSuppli’s analysts conclude after pulling one apart and playing with the bits. It’s not that long ago that the same firm costed out the first iPhone at over $300 a go…

Trouble between EBay and the Skype founders. Rather than sell Skype back to Zennstrom and Friis for a chunk of the money EBay paid them in the first place, EBay wants to find a trade buyer or perhaps IPO the p2p icon. But it turns out that the dynamic duo didn’t sell the intellectual property rights to Skype along with the company; in fact, EBay paid $2.6bn for Skype the company without getting Skype the application. So now, everyone’s off to the courthouse; the founders claim EBay has breached the terms of the software license, and threaten to turn off the system.

Julius Genachowski has been confirmed by the US Senate as Chairman of the FCC.

And the British ISPs shoot back at BT’s proposed extra charges for content providers.


June 24, 2009

Digital Britain: Too Large a Scope

Any UK citizen would applaud the ambition of the Digital Britain Report:
“to secure the UK’s position as one of the world’s leading digital knowledge economies”

We believe that the sheer breadth of this ambition is the real problem with the report: too many intertwined companies are affected, especially when the government is pursuing a policy of Industrial Activism, which is longhand for intervention. Only time will tell whether the benefits outweigh the costs of this intervention, but our initial impressions are not good. In this note, we examine the three key findings which have the largest impact on telcos.

The Universal Service Commitment (USC)

“11% (2.75m) of households today cannot enjoy a 2Mbps connection. We will correct this by providing universal service by 2012.”

kmm-digibrit.png

People have criticized the headline speed, questioning whether 2Mbps is sufficient to future-proof for coming applications. We are more concerned about the mechanism for delivering 2Mbps to each home. The diagnosis of the problem is as follows:

  1. Problematic home wiring (c.1.9m homes);
  2. Telephone line too long (c.550k homes);
  3. Random network effects (c.300k homes).

Home wiring is expected to be solved by self-help in 800k homes; public funds will be required for the remaining 1.1m. Obviously, the big question here is why some people should get their home wiring problems solved at public expense and not others. Even in the days of voice-only service, homeowners were responsible for their own home wiring and paid for it.

Line length is planned to be solved for 420k homes with FTTC investment, presumably from USC funds. Random network effects for 100k homes are to be resolved by “special investigation”. We would have thought that random network effects should be fixed as a normal part of business by whoever provides the network - either Openreach or Virgin Media. That leaves around 330k homes which will have to be served via wireless or satellite technology.

In summary, the Universal Service Commitment is of unknown cost/benefit. Nevertheless, the report recommends establishing a new quango - the grandly titled “Network Design and Procurement Group” - with initial funding from the Digital Switchover Help Scheme, which the BBC will fight tooth and nail not to give up, as it comes directly from its budget.

Even worse, the government hasn’t factored in how many homes will not even want broadband once the lines are upgraded. A recent OFCOM survey showed that a significant percentage of home-owners are self-excluded because they are not interested in the internet. In addition, a significant proportion of UK homes are mobile-only and don’t have a fixed line.

kmm-digibrit1.png

Next Generation Broadband

A key finding of the report is that the market-led deployment of Next Generation networks, whether by BT OpenReach or Virgin Media, will only lead to 60-70% of homes being covered. The report then makes the massive jump to saying that funding will be provided to extend this coverage to 90%. Note, not 100%, but 90%. The funding is to be provided by a new tax on fixed lines of 50p/month, which would raise approximately £150m/year or £1bn to 2017. Again, the “Network Design and Procurement Group” would be the gatekeeper to this set of funds.

kmm-digibrit2.png
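The levy arithmetic is easy to sanity-check. A minimal sketch, where the figure of roughly 25m levied fixed lines is our own assumption, chosen to be consistent with the report’s £150m/year:

```python
# Sanity-check on the proposed 50p/month fixed-line levy.
# The ~25m UK fixed-line count is an illustrative assumption,
# chosen to be consistent with the report's GBP150m/year figure.
fixed_lines = 25_000_000
levy_per_month_gbp = 0.50

annual_yield = fixed_lines * levy_per_month_gbp * 12  # pounds per year
yield_to_2017 = annual_yield * 7                      # roughly 2010-2017

print(f"annual: £{annual_yield / 1e6:.0f}m")    # annual: £150m
print(f"to 2017: £{yield_to_2017 / 1e9:.2f}bn") # to 2017: £1.05bn
```

On those assumptions the levy does indeed yield £150m a year, and slightly over £1bn by 2017.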

Again, the cost/benefit equation is not clearly articulated. The first question that springs to mind is why the “Final Third” should be subsidized by the 10% who will get no next-generation network and the 60-70% who live in a market-financed area. The alternative, of course, is that the households who live in rural areas pay more. After all, is this not the current scenario with broadband? People who live in areas where there is limited unbundling or no cable network pay more for their service.

The next question is why a quango is needed to control the funds. An alternative would be to give the money directly to BT Openreach. For sure, the hand-over would have to be structured carefully to avoid an EU State Aid investigation, but as long as the network was open to all communications providers and didn’t favour BT Retail, there is a strong case that there is a market failure in rural areas. Further, it seems strange to be debating the last 10%’s position in 20 years’ time when so little progress is being made delivering the first 10% now, or even the first 1%.

Perhaps the most important question is: why the rush? Why not let the Universal Service Commitment give everyone 2Mbps, let the market cover 60-70% of homes, and review the situation in 2012? In any case, we suspect there is little chance of any changes being enshrined in law before the end of the current political cycle.

Further, if the last few years have taught us anything, it’s that the regulatory environment is hugely important for the business models of the two infrastructure operators (BT and Virgin) and even more so for the independents using their networks. But the report doesn’t settle what kind of wholesaling, unbundling, or network-sharing arrangements there will be for the NGN.

Also, Telco 2.0 analysis of the iPlayer boom demonstrated that the question of backhaul is critically important to the survival of a competitive telco/ISP market. The report doesn’t move it forward. Neither has it tackled the question of access to civil works - ducts and poles - effectively. Across the Channel, ARCEP chose the same week to reiterate its commitment to regulated open access at Layer 0. Without it, the whole issue is punted back to the question of regulated wholesale, unbundling, or network sharing.

Spectrum Still Going

With the parlous state of its finances, the UK government would love to raise some money by selling some spectrum to the mobile operators. Unfortunately for them, the MNOs can’t seem to agree on the terms of any auction. The problem is twofold: firstly, the current imbalance of spectrum holdings amongst MNOs, and secondly, that at present it is not in the MNOs’ interest to agree to an auction.

kmm-digibrit3.png

As can be seen from the above chart, only the original mobile licensees, Vodafone and O2, hold the most valuable 900MHz spectrum, while Three is a 3G-only licensee. It is a challenge to appease all five MNOs given their radically different spectrum holdings and market shares.

The government has been trying to license the 2.6GHz spectrum that has already been awarded in some European countries. The auction has been continually delayed since summer 2008 because of legal challenges by the MNOs. Under the harmonised CEPT band plan it is divided into a paired component (2 × 70MHz) suitable for LTE and an unpaired component (50MHz) suitable for WiMAX.

The 800MHz band will largely share the propagation characteristics of 900MHz spectrum - in other words it is much more valuable than the 2.6GHz. The common European band plan envisages paired spectrum (2 × 30MHz) which is being standardised for use with LTE. This spectrum needs to be cleared from its current primary use for analogue television.

The Digital Britain report basically proposes a spectrum cap of 2×65MHz per operator, with both Vodafone and O2 having to give up some of their 900MHz (GSM) spectrum to gain 800MHz (LTE) spectrum. Even if there is light at the end of the tunnel, with an auction expected in mid-2010, there are still plenty of hurdles to overcome, such as yet another OFCOM consultation with potential for further legal challenges.
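A quick way to see how such a cap would bite is to total each operator’s paired holdings across bands. The holdings below are hypothetical illustrations, not the actual UK allocations:

```python
# Illustrative check of holdings against a 2x65MHz-style cap.
# Figures are hypothetical examples, NOT actual UK operator allocations.
CAP_MHZ = 65  # paired MHz per direction (i.e. 2x65MHz in total)

holdings_mhz = {  # paired MHz per direction, by band (hypothetical)
    "OperatorA": {"800": 10, "900": 17.4, "1800": 5.8, "2100": 14.8, "2600": 20},
    "OperatorB": {"1800": 30, "2100": 15},
}

for operator, bands in holdings_mhz.items():
    total = sum(bands.values())
    verdict = "over cap" if total > CAP_MHZ else "within cap"
    print(f"{operator}: {total:.1f}MHz per direction ({verdict})")
```

In this sketch OperatorA would have to divest spectrum before the auction, which is exactly the kind of horse-trading the report’s proposal forces on the 900MHz incumbents.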

The elephant in the room is whether, by the time of the auction, there will still be five MNOs left in the UK. Rumours of consolidation abound and, for a variety of reasons, the UK has the lowest return on capital of all the main European markets. Any consolidation will probably entail a complete revamping of the spectrum holdings issue.

Conclusion

All three of the proposed recommendations discussed above have little chance of being implemented in their present form. We believe the bickering, both behind closed doors and in public, is likely to continue for the foreseeable future. The Digital Britain report was a wasted opportunity to change the digital landscape - it would have been much better to come out firmly on the side of one of the players: the content companies, the communications providers, or the consumer. By trying to please everyone, the report has achieved the worst outcome of all - pleasing no-one.

The government comes out of this in the position of wanting various things, but not very much. They want to have competition, fibre deployment, and digital inclusion, and they would like to impress the content rightsholders - but they don’t want any of these things enough to take decisions that might spoil any of the others.

By contrast, the various lobbies involved want their interests very much. It’s been said that the US government’s Pakistan desk will always struggle, because it doesn’t deal with the Pakistani government’s US desk, but with its Pakistan desk. The record of offering subsidies and regulatory easements to incumbent telcos in exchange for fibre deployment is not promising; the incumbents usually win.

All told, if a lesson is to be learnt from the whole of the Digital Britain process, it is that consultations should be narrower in scope, more focused, and more action-oriented.


7 Strategic Priority Areas for new Telecoms Business Models

In the last 12 months the fundamental question related to the ‘2-sided’ telecoms market opportunity has changed from “what is it?” to “how do we do it?” A new report describes the key priority areas.

The 2-Sided telecoms business model theory has now been articulated thoroughly (see “The 2-Sided Telecoms Market Opportunity” report), and the theoretical growth opportunity is largely accepted by senior strategists, as evidenced by the votes at the November 2008 Telco 2.0™ Executive Brainstorm and again at the May 2009 event in Nice, in the South of France.

The key focus of debate is now on ‘how to move forward?’ Based on the output from the May brainstorm (a gathering of over 200 senior strategists from the telecoms, media and technology sectors) the Telco 2.0 Initiative has released a new ‘Executive Briefing’ report that describes the key areas to focus on, the priorities within them and the key next steps required (short term and longer term).

The following is an extract describing some of the key strategic issues.

Can telcos develop a successful platform strategy?

In Nice, almost half (48%) of the respondents agreed that it will be possible, but very challenging, for telcos to develop a successful platform strategy, owing to the need for the industry to agree appropriate standards and develop new skills. However, one in five were more optimistic, agreeing that “operators have to do this and are in a unique position to seize the opportunity and add value to the wider digital economy.” The remaining 32% felt that “the retail and wholesale businesses are fundamentally in conflict and service providers will be unwilling to move from a one-sided to two-sided market.”

eventexecsummary.png

The Telco 2.0 team’s view is that the way forward is to understand and specify an end-to-end commercial framework for telcos within the two-sided ecosystem. The diagram below summarises this at a high level.

eventexecsummary1.png

The brainstorming sessions in Nice focused on 7 important aspects of building a two-sided platform strategy:

  1. Open APIs - how to open telco networks to reduce access and/or transaction costs for other retailers.
  2. Retail Services 2.0 - how telcos can provide both their own retail services and the B2B platform services that will enable other retailers to sell products and services successfully through their networks.
  3. Devices 2.0 - how telcos need to access more of the intelligence in devices and exploit it for their own retail services and two-sided business model strategies.
  4. Enterprise Services 2.0 - how telcos’ assets can be used to remove or reduce the barriers other service providers face in interacting with end users.
  5. Content (esp. Video) Distribution 2.0 - how telcos are in a position to make money by helping to restore rational behaviour to the market.
  6. Technical Architecture 2.0 - how telcos need to be able to easily access a key untapped asset - customer data.
  7. Piloting 2.0 - how to succeed and learn quickly.

The key issues and action points in each of these areas are described in the Executive Briefing Report, including an analysis of how current industry initiatives (e.g. those of the GSMA, TMF, NGMN, MMA, OMTP and OMA) map against the schematic architecture above.

Big things to focus on

In terms of strategic industry-wide issues to focus on, the delegates in Nice were asked to choose their top two priorities for the year ahead from a list of options. Below are the results of that vote:

eventexecsummary2.png
Source: 6th Telco 2.0™ Brainstorm, Nice, May 2009

Collaboration between service providers, and an understanding of the needs of upstream industries and end-users, are absolutely key to the creation of two-sided business models:

  1. Collaboration is necessary to create new markets for upstream and downstream customers. This is because an application that addresses only a proportion of customers belonging to only one network is significantly less likely to be attractive and successful than one that can be used by or address anyone (e.g. the impact of SMS interoperability on usage).
  2. Understanding the needs of “upstream” industries is critical - to understand their value chains and the pain points that new services can valuably address. This requires insight which is outside most traditional telcos’ expertise - deep into the processes of other businesses that are not necessarily obviously related to communications.
  3. The needs of end-users are always important in service design, but in this instance there are two further twists. First, the needs again are not purely about communication, and some are therefore outside the traditional Telco world view. Second, some complex issues of security and privacy can be invoked by new uses of customer data, e.g. Google Street View, Phorm. These issues can be addressed, but not without a clear understanding of the users’ needs and wishes and a product/service design that addresses those needs effectively.
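The interoperability argument in point 1 can be put in rough numbers. The subscriber figures below are hypothetical, purely to show the shape of the effect:

```python
# Why interoperability matters (point 1): the number of user pairs an
# application can address. Subscriber figures are hypothetical.
def reachable_pairs(n: int) -> int:
    """Distinct pairs of users who could interact with each other."""
    return n * (n - 1) // 2

whole_market = 60_000_000   # all subscribers, every network (assumed)
one_network = 20_000_000    # one operator's base, a third of the market

share_of_pairs = reachable_pairs(one_network) / reachable_pairs(whole_market)
print(f"single-network app addresses {share_of_pairs:.1%} of possible pairs")
```

With these numbers, an application confined to one network holding a third of the subscribers can address only about a ninth of the possible user pairs - which is why SMS usage took off only after interoperability.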

To read the full 40-page report, which includes:

  • A description of the key priority actions in each of the 7 focus areas
  • Summarised analysis of the key points and debate in each focus area
  • How this relates to 2-Sided Telecoms Business Model theory

You can buy the report here, priced at £1,450 (plus VAT for UK buyers), or join our Subscription Service here. Existing Subscription members can access the full article here.


Google: The Internet Behemoth and how it profits from YouTube

There is an ongoing debate about the size of the losses at YouTube and for how much longer the parent, Google, can afford to fund its errant child’s excessive lifestyle. Credit Suisse put a high price on it; Brough Turner criticised their analysis; RampRate decisively debunked it.

The debate has focused upon YouTube as a standalone service, and little attention has been given to the spin-off benefits accruing to the parent. Google controls a significant, and growing, share of the means of production of the entire Internet industry. We argue that ownership of YouTube is a crucial ingredient in Google’s control of the economic rent it extracts from the whole of the Internet value chain.

We believe that YouTube is used indirectly to drive profits at the parent, and that Google is currently incentivized to keep these profits hidden from prying eyes. The key indirect benefits accruing to Google of owning YouTube are as follows:

i) YouTube gains Google a critical slice of growing online video eyeballs, which will attract more marketing dollars to the Internet as a whole. This matters much more in the USA, where the main competitor, Hulu, is ad-funded, than in the UK, where the BBC iPlayer is taxpayer-funded;
ii) YouTube gains Google yet more important meta-data which can be cross-pollinated with data from other Google services;
iii) YouTube traffic strengthens Google specifically in peering negotiations and generally in network design;
iv) YouTube is probably only a small fraction of Google’s overall cost base, and its scale helps drive down Google’s overall unit costs; and
v) YouTube positions Google very powerfully for a key role as a gatekeeper in the copyright world.

This article examines these indirect benefits in detail and outlines a strategy for telcos to adopt in the online video world.

A rising tide raises all boats

On a macro level, the more ‘eyeballs’ and time spent on the internet, the bigger the percentage of advertising budgets that advertisers will allocate to internet marketing. Compelling content is an essential attractor of ‘eyeballs’. For Google, with its massive share of the internet market, it doesn’t matter quite so much whether the video stream itself is monetized today, as long as it picks up a share of the increase in the overall digital advertising budget.

What is more uncertain is whether online viewing would have grown regardless of Google owning YouTube. Nonetheless, Google’s ownership ensures YouTube’s survival and gives Google a significant foothold in this growing and potentially important area.

kmm-google.png

Source: OFCOM - UK Communications Market Review 2008.

In 2008, overall UK advertising revenue was £17.5bn with TV advertising £3.8bn and Internet advertising representing £3.3bn. The overall market declined by 3.5%, whereas Internet advertising grew by 17.5% - not bad in a recession.

We are not saying that video accounts for the whole of the increase in Internet advertising. Far from it. A significant part of the rise will be accounted for by other factors: the overall increase in the Internet population, the time spent on social networking and other services, the increase in e-commerce spending, and the relative effectiveness of Internet advertising as a channel. However, video viewing is becoming significant, as the latest US comScore Video Metrix data for April illustrates:

  1. 78.6 percent of the total U.S. Internet audience viewed online video.
  2. The average online video viewer watched 385 minutes of video, or 6.4 hours.
  3. 107.1 million viewers watched 6.8 billion videos on YouTube.com (63.5 videos per viewer).
  4. 49 million viewers watched 387 million videos on MySpace.com (7.9 videos per viewer).
  5. Hulu accounted for 2.4 percent of videos viewed, but 4.2 percent of all minutes spent watching online video.
  6. The duration of the average online video was 3.5 minutes.
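The per-viewer averages in the list can be re-derived from comScore’s raw totals:

```python
# Re-deriving the per-viewer averages from the comScore April totals above.
youtube_viewers, youtube_videos = 107.1e6, 6.8e9
myspace_viewers, myspace_videos = 49e6, 387e6

print(round(youtube_videos / youtube_viewers, 1))  # 63.5 videos per viewer
print(round(myspace_videos / myspace_viewers, 1))  # 7.9 videos per viewer
```

The gap of nearly an order of magnitude in per-viewer engagement is the interesting number: YouTube viewers don’t just outnumber MySpace’s, they watch far more heavily.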


Google’s Share of the Advertising Pie keeps on Rising

As well as Internet advertising growing faster than overall advertising, Google’s share of that advertising is growing as well.

kmm-google1.png

Source: IAB/PWC Data and Google SEC filings

In 2008, Google’s share of the overall US Internet advertising market reached 45%, up from 36% in 2006, before the YouTube acquisition. Again, not all of this growth can be attributed to YouTube, either directly or indirectly.

In terms of direct YouTube advertising revenues, Google does not publish a figure. Credit Suisse’s infamous analyst note estimates YouTube’s 2008 advertising revenue at only US$200m (against overall losses of US$470m), and the IAB/PWC data puts video advertising at only 3% of the overall Internet advertising market.

Show me the Meta-Data

One of the little-discussed benefits of Google owning YouTube is that YouTube is more than a pure video-streaming network: it is a social network in its own right. Accounts are needed to create, comment on and rate content - something the core search service does not require. It is hard to envisage that YouTube is directly monetizing this meta-data at this stage. Undoubtedly, however, it contributes to Google’s understanding of user behaviour, and the data is probably already feeding the optimization of the Google advertising engine.

Google’s worldwide revenues have grown by US$11.2bn to US$21.8bn in the two years since it purchased YouTube in 2006, for an all-share consideration of US$1.8bn. In the same period, operating cashflow has grown by US$4.3bn to US$7.9bn.

The indirect contribution of YouTube to Google’s revenues is uncertain; what is more certain is that Google can afford YouTube.

Marginal Cost = Zero

One of the most important disciplines of Internet economics is to keep the marginal cost of delivering another page or another video stream as close as possible to zero. This doesn’t mean that total costs are zero, but that costs become predictable and are not subject to the vagaries of any explosive growth in demand.

To be successful on the Internet, then, costs must either be turned into fixed costs or else made success-based. With YouTube, the key variables are computing and bandwidth costs. Google’s tactic is to turn the majority of them into capital costs: it is deploying more and more capital building out infrastructure, whether vast datacenters or its own network - effectively keeping an ever-increasing proportion of its traffic on the Googlenet.

The other key variable cost for YouTube is the content itself. Here Google attempts to minimize its risk by making any payout to third parties dependent upon the advertising revenue actually achieved. This is the cornerstone of the AdSense program for publishers. Unfortunately for Google, this is not the traditional way the video content industry works, and it is therefore a great source of friction between YouTube and video rights holders.

The Bandwidth Equation

A case can easily be made that Google could reduce its cost of delivering video to zero. Every global IP transit provider would love to be the exclusive deliverer of such a significant portion of the world’s Internet traffic, and could recoup the cost by squeezing the downstream ISPs on delivery. Such an extreme network design would bear a heavy political cost for Google and would obviously be unpalatable, but it illustrates the power that Google has accumulated through YouTube traffic.

Instead of focusing on IP transit, Google makes extensive use of peering, delivering its own traffic to the major peering points around the world, as we made clear in this post, which the people at RampRate used in their critique of Credit Suisse. Peering is not free: it involves buying expensive dark fibre linking the Google data centres to the peering exchanges, renting space in those exchanges, equipment to light up the fibre, and a team of network engineers to manage the peering relationships. However, most of these costs are fixed. Just as important for Google, peering lets it control the reliability of its own traffic delivery.
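The fixed-versus-variable trade-off can be sketched with a toy model. Every figure below is an illustrative assumption, not a real Google cost or transit-market price:

```python
# Toy model of the peering-vs-transit trade-off described above.
# All prices are illustrative assumptions, not real market figures.
TRANSIT_USD_PER_MBPS = 10.0    # per-Mbps monthly transit price (assumed)
PEERING_FIXED_USD = 400_000.0  # fibre, colo, kit, engineers (assumed)

def transit_cost(mbps: float) -> float:
    """Transit scales linearly with traffic."""
    return mbps * TRANSIT_USD_PER_MBPS

def peering_cost(mbps: float) -> float:
    """Peering is, to a first approximation, fixed whatever the traffic."""
    return PEERING_FIXED_USD

breakeven = PEERING_FIXED_USD / TRANSIT_USD_PER_MBPS
print(f"break-even at {breakeven:,.0f} Mbps")

for mbps in (10_000, 40_000, 400_000):
    cheaper = "peering" if peering_cost(mbps) < transit_cost(mbps) else "transit"
    print(f"{mbps:>7} Mbps: transit ${transit_cost(mbps):,.0f} vs "
          f"peering ${peering_cost(mbps):,.0f} -> {cheaper}")
```

Past the break-even point, every additional Mbps is effectively free for the peering party - which is why scale, and hence YouTube traffic, matters so much in these negotiations.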

A more recent tactic is to place edge-caching servers inside ISP networks, bringing content closer to the end-user and thereby improving delivery speed. The early indications are that Google is building its own content delivery network in much the same way as Akamai has - except that the Google one is, for now, for internal use only. Most media companies have to use third-party content delivery networks, as they don’t enjoy Google’s economies of scale. The BBC, for instance, uses Akamai to deliver a significant proportion of its traffic. The Google costs are fixed, whereas the BBC’s are variable - it is paying a third-party supplier.

The more difficult question is how much the sheer volume of YouTube traffic helps the other Google services, especially the highly profitable search business. We would argue significantly - in terms of cost, reliability and speed. Furthermore, the overall bandwidth economics that Google enjoys are extremely difficult for competitors to replicate and represent a significant barrier to entry.

The Google strategy in distribution seems to be to keep control and only outsource the minimum. Logically, Google would only adopt this strategy if it felt it could gain a competitive edge through distribution. Scale matters in distribution and YouTube brings that scale to Google.

The Compute Equation

In a recent paper about its datacentre operations, Google argued:

“As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC).”

In determining the computing cost for the YouTube service, it is impossible to look at the cost of the service in isolation or to approximate it with traditional server and storage costs. The Google strategy is to build vast datacentres and to control the elements within them as much as possible. Google does not buy standard off-the-shelf systems - it builds its own servers from commodity hardware, designs the whole of the software stack itself, and innovates on power and cooling systems.

With the capital costs of a large datacentre running as high as US$250m, the cost is more dependent upon overall utilization of the datacentre than individual units such as storage or CPU costs. In fact, as the chart below shows, power costs are as important as server costs.

kmm-google2.png

Source: The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines - Luiz André Barroso and Urs Hölzle - Morganclaypool
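To make the utilisation point concrete, here is a deliberately rough, illustrative cost model - every figure below is our assumption for the sake of argument, not Google data - comparing monthly server depreciation with the power and cooling bill for a large facility:

```python
# Illustrative monthly TCO sketch for a warehouse-scale facility.
# Every constant here is an assumption for the sake of argument.
SERVERS = 50_000
SERVER_PRICE = 2_000        # USD per commodity server
SERVER_LIFE_MONTHS = 48     # four-year straight-line depreciation
WATTS_PER_SERVER = 250      # average draw per server
PUE = 2.0                   # power usage effectiveness (facility overhead)
USD_PER_KWH = 0.10

server_depreciation = SERVERS * SERVER_PRICE / SERVER_LIFE_MONTHS
kwh_per_month = SERVERS * WATTS_PER_SERVER * PUE * 24 * 30 / 1000
power_bill = kwh_per_month * USD_PER_KWH

print(f"server depreciation: ${server_depreciation:,.0f}/month")
print(f"power and cooling:   ${power_bill:,.0f}/month")
```

On these invented numbers the power bill is of the same order as the hardware depreciation, which is why utilisation and energy efficiency, rather than per-unit storage or CPU prices, dominate the economics.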

YouTube undoubtedly consumes a lot of compute resources. Most analysts have focused upon storage, but ingesting and encoding all the video is very expensive as well. Still, we doubt that YouTube represents as high a proportion of overall Google services’ demand for computing as it does for bandwidth. Either way, we are sure that Google has a significant cost advantage over its competitors.

Google builds its own factories for the digital age. Whilst individual datacentres of a similar scale are being built elsewhere, it is doubtful that any other company operates them at Google’s aggregate scale.

YouTube and Negotiating Power

Video on the internet is currently extremely difficult to monetize - as we say in the Online Video market study, we are currently in a “Pirate World” where most content is available for “free” from a variety of sources. Old media are seeing their revenues cannibalized as content moves online into the “Pirate World”. Eventually, either through legislation or sheer volume of eyeballs, New Players will emerge whose revenues and profits will be significant. Arguably, the effort to increase the percentage of YouTube videos that carry ads is a key indicator of this. The business model for this New World is still extremely uncertain. However, through the sheer volume of its traffic alone, Google will have a key seat at the value chain negotiating table. Google will also almost certainly be the lowest-cost player, in both aggregation and distribution.

What would make the negotiating position of Google even stronger is control of the method of licensing content. The recent Google Books deal shows that Google is starting to get extremely interested not only in the meta-data around copyright, but also in distributing payments directly to content creators and not through traditional aggregators. In this case, the creators are the authors and the aggregators are the publishers. It does not take a major leap in thinking to see how Google could disintermediate the traditional aggregators of video and music with YouTube.

YouTube Overall Profitability

It is meaningless to assess the standalone profitability of YouTube without considering the spin-off benefits to its parent, Google. These spin-offs are currently considerable but difficult to quantify: YouTube attracts eyeballs to the Internet, and therefore more overall advertising spend, and it provides the traffic volume which improves the overall economics of the Google Cloud Computing Platform.

In the future, YouTube will be the key for Google establishing a strong position in the licensing of media content and thus not only controlling its own costs, but being a critical aggregator and distributor for content owners in its own right. Google will try to turn its current Achilles heel, copyright, into a future strength.

In the Telcos Eye

The critical message for telcos is in the scale of investment that Google is making in distribution. Despite BT claiming that the BBC and YouTube are enjoying a free ride, the exact opposite is true. Google is building out both compute and bandwidth infrastructure for delivery. Other video services, for example the BBC, are paying third parties such as Akamai for distribution.

The real battleground is the share of the value chain. BT is really saying that it is not earning enough from its access fees and that the economic table is tilted in favour of “free-at-the-point-of-consumption” video services such as YouTube, Hulu and the iPlayer.

But all is not lost. The growing payTV market shows that advertising alone need not fund all video services: consumers will pay for premium content. The core strategy for telcos is not to replicate YouTube, but to provide tools for the myriad of content owners who are unhappy with their current payback from the online world - tools not limited to billing & payment services, but also including copyright protection.

The war over the online value chain is not yet lost: we are still in a pirate world, and Google is one of a very small club who can afford to distribute all types of content across the globe.

To share this article easily, please click:

EVE Online comes to Telco 2.0

Telco 2.0 is delighted to announce that Nathan Richardson, Executive Producer for EVE Online at CCP Games, will be speaking at November’s EMEA Telco 2.0 event, to help us understand the ‘digital generation’.

EVE Online is one of the world’s biggest massively-multiplayer virtual worlds, a community which incorporates as many as 300,000 subscribers and 45,000 others on free trial accounts. Within its world, players organise themselves in alliances, guilds, and commercial corporations and compete to dominate the trade of the Universe, whilst of course looking out for space pirates, or perhaps dabbling in piracy themselves.

The game is implemented…

…in the open-source Stackless Python programming language, a fork of Python designed for very high-performance concurrent applications. It provides a rich Python API for upstream customers to develop their own skins, levels, tools and related applications, and engages with the community to the extent that the game’s government, the Council of Stellar Management, is elected by users. (An impressive list of community sites is here.)

Since 2005, the game has integrated voice-over-IP so that users can chat, conspire, and taunt each other in real time; it is a classic example of the trend towards voice fragmentation, where more and more of a growing total voice market will be taken by many new applications that have developed a voice capability, rather than by carriers or even traditional VoIP applications like Skype. Sociability - among alliances and corporations, among enemies, and among developers - is the key.

eve-voice.png

CCP Games monetises EVE through a combination of subscriptions and an in-game economy, and is therefore something of a two-sided hub as well; it uses a network of resellers to distribute pre-paid game time, much as a mobile operator distributes pre-paid airtime. It even sells subscriptions to a magazine. All things considered, its success has been built on all the elements of Telco 2.0 thinking, and hits the targets laid out in the Customer Participation Framework with uncanny accuracy; we can’t wait to hear what they have to say…

To share this article easily, please click:

Wimbledon 09 - Where is the telco in this picture?

We mentioned that IBM Research is charting a Telco 2.0 agenda, concentrating on mobile enterprise applications, emerging-market mobility (especially applications for SMBs), and enterprise-to-end user applications. Here’s a video demonstration of their Seer augmented-reality application, which is being trialled at Wimbledon this week.

A couple of points come to mind. The first is that Anssi Vanjoki’s remarks at this spring’s Telco 2.0 event about the future of the Web being contextual, rather than semantic, and that this would be driven by the proliferation of new sensors (orientation, machine vision, location, etc) on mobile devices, are entirely right. The second is that this shows the power of an open development platform. It may look like science fiction, but there are quite a lot of similar projects going on, working primarily in Android or Symbian S60, both inside Nokia R&D and independently.

The third is that device API standardisation is important, and the BONDI, JIL, and related projects are crucial for the future of the industry. The fourth is that the IBM developers didn’t involve a telco in any way, except to provide data transfer. Can anyone spell “dumb pipe”?

To share this article easily, please click:

June 23, 2009

Developers - That’s where Telco 2.0 comes in…

Ericsson is promoting its Java SDK on YouTube:

Well, it’s good to be reminded of the fundamental need for communication. Netscape legend Jamie Zawinski said something similar in a now-classic blog post about groupware, social networks, and contacts management:

But with a groupware product, nobody would ever work on it unless they were getting paid to, because it’s just fundamentally not interesting to individuals. So I said, narrow the focus. Your “use case” should be, there’s a 22 year old college student living in the dorms. How will this software get him laid? That got me a look like I had just sprouted a third head, but bear with me, because I think that it’s not only crude but insightful. “How will this software get my users laid” should be on the minds of anyone writing social software (and these days, almost all software is social software). “Social software” is about making it easy for people to do other things that make them happy: meeting, communicating, and hooking up.

More seriously, we noticed that the voice-over mentions Eric interfacing with different telecoms operators - “how hard can it be?” Anyone with experience of these things would confidently respond: “very hard indeed, and frequently impossible”.

We think that bit of the video ought to say “and that’s where Telco 2.0 comes in…”

To share this article easily, please click:

The Toolkit of Voice 2.0 - Linux, Asterisk, OpenSER…

A new open-source technical toolkit (based on Linux, Asterisk, OpenSER and perhaps OpenSS7) is emerging for Voice 2.0 applications. In this note we will discuss the elements that go into it, the possibilities and problems involved, and the consequences of these developments for existing operators.

Two kinds of technical change: conservative…

Technological change is happening in several forms in the industry. One of these mainly affects how existing network designs and business models are executed. For example, switches that were once dedicated hardware systems running extremely specialised software are being replaced with softswitches which run on standard bladeservers or even PC-servers. OSS-BSS installations, similarly, are migrating from being monolithic enterprise systems based on mini- or mainframe technology into standard data-centres. Across all of these, the underlying operating system is increasingly likely to be Linux or another of the open-source Unix variants. CAMEL and IN are being replaced with application servers, frequently implemented in Java, which provide (usually) XML interfaces for further development.

But none of this, by itself, changes the fundamental business model. It still implies a world of big, closed telcos who get their equipment from a small number of large vendors, whose business model and general philosophy is essentially that of a hardware company. And it often implies a centralised IMS or hybrid SS7/IP architecture.

So far, this has mostly been a question of cost reduction. Google, Sun Microsystems, and the Internet Archive have shown the way in terms of providing high-availability, high-performance IT using large numbers of low-cost PC servers, virtualisation, and open-source operating systems, and no doubt the telcos are moving that way as well, at least the less hidebound ones. Cost reduction is an honest goal, after all. It’s no revolution, though.

For that, you’ve got to look at the wing of the open-source community that wants to replace you entirely.

…and revolutionary

The core of the new voice community is Asterisk, the free PBX which has grown into a general-purpose telephony solution. It is no overstatement to say that every Voice 2.0 player we’ve interviewed turns out to be using Asterisk somewhere in their system. The key advantages of Asterisk are the classic ones of open-source software - you can do what you want with it, and as a result it offers great feature richness. For example, it supports unified messaging out of the box, using the standard IMAP e-mail protocol, and it handles a wide range of network protocols, both telco and IETF.

Where it genuinely excels is in the scope it provides for systems integration and applications development. The Asterisk dialplan language permits fairly extensive customisation and some application logic to be built directly in a configuration file, but things get genuinely interesting through its AGI and AMI features - Asterisk Gateway Interface and Asterisk Manager Interface, which permit Asterisk to execute other programs from within the dialplan and permit other programs to control Asterisk from either a local or a network connection respectively. Further, Asterisk’s version of a BSS/OSS interface is a database connector that can be invoked in any of these options to write CDRs into a database, to populate the dialplan with users dynamically, or to store users, configuration data, and the like so that many Asterisk instances can be cloned, as in this presentation by Serge Kruppa at ASTRICON on building a carrier-grade call centre with it.
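To illustrate how lightweight that integration model is, here is a minimal AGI script sketch in Python - the file name in the dialplan comment and the caller handling are our invention for illustration, not taken from the presentation cited above:

```python
#!/usr/bin/env python
# Minimal AGI sketch: Asterisk launches a script like this from the
# dialplan (e.g. `exten => 1234,1,AGI(hello-agi.py)` - the file name is
# hypothetical), passes call variables on stdin, and accepts AGI
# commands on stdout.
import sys

def read_agi_environment(stream):
    """Parse the 'agi_variable: value' header block Asterisk sends first."""
    env = {}
    for line in stream:
        line = line.strip()
        if not line:                      # a blank line ends the headers
            break
        key, _, value = line.partition(": ")
        env[key] = value
    return env

def agi_command(cmd, out=sys.stdout):
    """Send one AGI command; Asterisk replies with '200 result=...'."""
    out.write(cmd + "\n")
    out.flush()

if __name__ == "__main__":
    env = read_agi_environment(sys.stdin)
    caller = env.get("agi_callerid", "unknown")
    # Real application logic would go here: look the caller up in a
    # database, choose a route, write a CDR, and so on.
    agi_command('VERBOSE "AGI handling call from %s" 1' % caller)
```

The whole protocol is just line-oriented text over stdin/stdout, which is exactly why any language - and any back-end system - can be wired into the call path this way.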

asterisk-kruppa-1.png

Many, many small applications

Performance is not, of course, in the AXE10 class. One of the downsides of the feature-richness is that in many modes of operation, the audio stream must pass through the Asterisk server, which makes it a highly-concurrent application and therefore causes scaling issues. The canonical approach to scaling Asterisk deployments is to multiply and distribute the nodes, thus keeping the maximum concurrent calls on any given instance down to the low hundreds. Sam Houston State University, for example, has 6,000 users with six servers, but only two of them are used for call processing rather than voicemail or PSTN interconnection. The world’s biggest Asterisk installation, an enterprise project carried out by Synetrics, has over 100,000 lines and 6,500 concurrent calls. Pennsylvania University’s Asterisks are serving 30,000 end points.

This is, of course, a typical post-Google IT architecture, which would imply a similar approach to the back-end databases. Indeed, this is roughly what Mapesbury Comms’ UK01 Mobile service is doing to minimise its costs. There’s a fascinating talk on running a really big Asterisk deployment on Amazon EC2 by Nir Simionovich here, from this year’s AMOOCON.

asterisk-simionovich.png

Even if running an entire service provider this way is unrealistic, however, handling voice and messaging within a specific application, enterprise, or call-centre is precisely what Asterisk is designed to do. At the other end of the scaling spectrum, it has been successfully compiled and used on a bewildering variety of embedded platforms, including set-top devices and SOHO WLAN routers. We recently saw a handheld WLAN router with cellular backhaul that one UK mobile operator will soon be pushing; imagine if you shipped them with an Asterisk node installed with its native Web GUI, preconfigured to talk to your infrastructure. It’s an office in your pocket.

OpenSER and OpenSS7 - aiming for scale

If you need really high performance, you ought to be looking at the OpenSER and related projects. This is a long-running open-source project to develop both a SIP media server and a simpler, higher-performance SIP router (Open SIP Express Router). It’s being used by several of the Voice 2.0 companies we mentioned above in conjunction with Asterisk, as what the IP world would think of as a gateway router or load-balancing proxy and what the telco world would think of as a tandem switch, in order to terminate incoming bulk SIP connections and distribute them among the Asterisk nodes. This makes it possible to push towards telco scale, whilst keeping the complexity of the call-processing abstracted from the networks outside.
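To see the principle - though not the implementation, which in a real deployment would live in the SIP router’s own configuration rather than Python - here is an illustrative sketch of the front-end’s core decision: deterministically mapping each call to one Asterisk node, so that every message of a dialogue lands on the same instance:

```python
# Illustrative only - a real deployment would use the SIP router's own
# dispatching facilities rather than Python. The point: hash the SIP
# Call-ID so every message of a dialogue reaches the same back-end node.
import hashlib

ASTERISK_NODES = [          # hypothetical back-end addresses
    "10.0.0.11:5060",
    "10.0.0.12:5060",
    "10.0.0.13:5060",
]

def pick_node(call_id, nodes=ASTERISK_NODES):
    """Deterministically map a SIP Call-ID to one Asterisk node."""
    digest = hashlib.sha1(call_id.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# The same Call-ID always lands on the same instance, so mid-call
# signalling (re-INVITEs, BYE) reaches the server holding the call state.
assert pick_node("a84b4c76e66710@host.example.com") == \
       pick_node("a84b4c76e66710@host.example.com")
```

Because the mapping is stateless, the front-end itself stays simple and fast, which is what lets the architecture scale towards telco volumes while the complexity stays in the Asterisk nodes behind it.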

And if that isn’t enough for you, there’s also the OpenSS7 project, which aims to provide an open-source SS7 implementation for use either with classic transport types or with SIGTRAN over an IP network. Their performance figures are impressive - back in this post, we pointed out that their HLR implementation is designed for 4,000 transactions a second.
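A quick, assumption-laden back-of-envelope calculation puts that 4,000-transactions-a-second figure in context (the per-subscriber transaction rate is our guess, not a figure from the OpenSS7 project):

```python
# If each subscriber generates one HLR transaction (location update,
# authentication, call-setup lookup...) every five minutes at the busy
# hour - our assumption - a 4,000 TPS HLR supports over a million users.
TPS = 4_000
TXN_PER_SUB_PER_HOUR = 12   # one every five minutes (assumed)

subs_supported = TPS * 3600 / TXN_PER_SUB_PER_HOUR
print(f"{subs_supported:,.0f} subscribers")   # 1,200,000
```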

asterisk-simionovich1.png

Conclusion: enterprises, MVNOs, and Voice 2.0 are driven by open-source

So, the take-away lessons are that if you have a major enterprise business, and even more if you do IT/systems integration, you will soon be asked by your customers for Asterisk. And if you aren’t, perhaps you should consider offering it to them. Beyond that, even if you are unlikely to substitute the core infrastructure for a swarm of Asterisk boxes, you should be aware of it as an option for any new voice or messaging application you want to build, as a strong option for voice-centric services for SMBs, and as something developers you work with will almost certainly want to use with your services.

Further, other open-source communities are working hard to reduce the scaling gap between their products and the traditional vendors’. This is likely to be a major empowering factor for MVNOs, MVNEs, enterprise networks, independent developers, and all kinds of disruptors, but it can also do the same for your own new ideas. Small operators may yet benefit from looking at a total overhaul.

[Ed. - more on this topic at our research database, and at the next Telco 2.0 event in November (EMEA) and Dec (Americas).]

To share this article easily, please click:

June 22, 2009

Ring! Ring! Hot News, 22nd June 2009

In Today’s Issue: Nortel - the final curtain; NSN vultures up CDMA, LTE assets; CSL CEO: Chinese vendors are our rightful masters - submit!; Iran: HOWTO strangle the Internet; actually, asking NSN seems to be the staff solution; MTN chafes at the bit; more censorship data; Uncle Sam’s monster e-mail database; Novarra: half the data traffic is from basic phones, and we know because we read it!; Indian 3G auction set for September; WiMAX to follow; the great British spectrum sale?; AT&T/Slingbox/Baseball neutrality row; cablecos can community on Canoe; AT&T gives away MMS; 80% of growth at RIM now consumer; IBM R&D to spend $100m on Telco 2.0-ish agenda; Intel wants to put your phone in a cloud; Free lobbies on, self-funds fibre rollout; Wind faces cash call; BT will deploy more fibre, one day, perhaps; the call centre that can’t; augmented reality, without pills, with Gphones; new Android gadgets at T-Mobile; Palm SDK - still waiting; security alert from OpenBTS; Apple launches some sort of mobile device, apparently

It has come to this: Nortel throws in the towel. The vendor, once valued at $250bn, is to be broken up in an effort to recover some cash; the first vulture is already in, and it’s Nokia Siemens Networks. They’ve acquired the CDMA and LTE infrastructure operation for $650m, which gives them a considerably boosted presence in North America and control of important patents over LTE technologies. Other plums are likely to include the optical networking and carrier-Ethernet businesses.

CSL CEO Tarek Robbiati, proud owner of an HSPA network based on software-defined radio, reckons that in the future there will only be three vendors and two will be Chinese.

Meanwhile in Iran, the government successfully cut Internet traffic in half, imposing a squeeze on transit availability for local ISPs through the wholesale monopoly, DCI. For some reason, as Arbor Networks’ blog points out, AS1299/TeliaNet is now carrying much more of Iran’s upstream traffic than before the disputed elections, while Telecom Italia/Seabone are down to zero.

Fortunately, they had help: NSN, for it is they, sold DCI a complete lawful-intercept monitoring centre for their GSM network. (From our perspective, of course, the big loser here is MTN, which has been investing heavily in its Irancell unit and driving a boom in prepaid GSM subscribers. Now their network is semi-permanently down and the chances of getting their profits out of the country are slim.) There’s much more here, and here.

But who can really complain? Certainly not the US, after the discovery of a huge and possibly illegal e-mail monitoring database.

In other IP-spookery news, Novarra, purveyors of snooping gear to the GSM fraternity, reckon that half of all the IP data calls they monitored came from devices other than the premier-league smartphones. This perhaps shouldn’t be that surprising, as there are an awful lot of non-smart phones out there and most of them now have a Web browser of some form. Interestingly, the traffic is highly diverse, with the top 500 URIs accounting for only 30% of the total. Perhaps the iPhone hype has helped to normalise mobile Web use?

Meanwhile, the Indian 3G auction is a go at last, planned for August-September. No fewer than six national licences are going to be offered, with a reserve price of 40.4bn rupees each; a wave of WiMAX auctions is going to follow. Vendors, start your engines.

In the UK, there’s some actual upshot from the Digital Britain report - we may see a giant spectrum sale next year, finally putting the 2.5-2.69GHz band on the block after extensive delays and also chucking in the 800MHz, with the possibility of some deal that would see the Original Two GSM operators hand back some 900MHz spectrum in exchange for 800MHz frequencies.

There’s a row going on; AT&T is accused of banning the Slingbox client on the grounds that its terms of service forbid streaming video, but still promoting an app that streams live baseball. Presumably this is because they have some sort of sender-pays deal with the Major League; but you have to wonder whether they are doing the right thing in offering free MMS messages if network capacity is a problem.

The US cablecos’ Canoe advanced-ads platform takes a step back: its community-targeted advertising features have been canned, after it turned out it would take at least a week to finalise each ad.

Meanwhile, numbers time at RIM. Some people may have lost sight of the fact that they hold 55% of the North American smartphone market; the main news, though, is that 80% of their net adds are now coming from the consumer sector, where the competition from Nokia, Apple, Samsung, and Palm is mightiest and spending is in general flat or falling.

Interestingly, IBM Research is setting a Telco 2.0-tinged agenda for its work. They have $100m to spend on mobile R&D over the next five years, and they have defined three target areas - “mobile enterprise enablement, emerging markets mobility, and enterprise-to-end-user mobile experience”. There is nothing there we’d disagree with.

Intel Research, meanwhile, want a clone of your mobile to run in the cloud, with applications working seamlessly across the two, so the remote server could provide extra processing wallop when needed and hold onto persistent data whilst you’re offline. Given the quality of many mobile data links, they better be good.

As well as emerging markets and enterprises, we’re famously keen on fibre and on integrated video delivery. And the best people to learn from are Free.fr. Xavier Niel argues in an interview with the Financial Times that they could cut the average monthly mobile bill in half, with an all-IP network, if the French government would only ignore the three existing GSM networks’ whining and give them a licence. Very significantly, he also says that his planned €1bn investment in fibre overbuild between now and 2012 is “self-financing” out of OPEX savings alone.

The news is not so good at Italian mobile operator Wind, which is trying to renegotiate its debts in order to find €500m…for its eventual owner, Naguib “Orascom” Sawiris. Nor at BT, where they’re looking at perhaps increasing their FTTC rollout, maybe, sometime, if the Government gives them a ton of money.

Also in the UK, an already controversial directory inquiries service has managed to invent the call centre that can’t take calls.

We’re living in the future; here’s the latest augmented reality application, which uses Android devices’ accelerometer, GPS, and virtual compass to orientate itself and shows an overlay of relevant data when you point the camera at something. They’ve signed up various “local information providers”, but how soon will it be that someone does a service to advise you which building to set on fire? T-Mobile, meanwhile, has a new Android gadget.

Palm says that the WebOS SDK won’t be ready for a while yet; if this post on Jamie Zawinski’s blog is anything to go by, it can’t be ready soon enough.

David Burgess of the OpenBTS Project has a grave security warning.

And finally, Apple launched a device of some sort. Apparently it’s faster. Or more expensive. Or something. Carriers should watch out, though; the FCC doesn’t like exclusivity deals.

To share this article easily, please click:

June 19, 2009

Zoompass: Implementing Mobile Money, Right

A major milestone has been achieved; the first inter-carrier mobile payments system in a developed market since PayForIt. We were impressed when we first met Zoompass at this year’s Mobile World Congress (MWC); we’re more impressed now. It will offer Canadian subscribers a comprehensive transfer, transaction, and account management service with encryption, whichever of the three main mobile operators they use.

In so doing it has adopted key Telco 2.0 concepts - trusted agent networks, inter-carrier cooperation, and extending telco assets and capabilities into thousands of other business processes.

Building a trusted network

There are several features of this we would like to draw attention to. First, they’ve integrated it with a number of other financial and payments systems - a Zoompass account comes with an RFID/NFC payments card, which is automatically topped up from funds in the account, and also a pre-paid MasterCard. Further, it’s possible to use the service to withdraw cash from ATMs.

We’ve said before that much of the value in a Mobile Money Transfer (MMT) system is concentrated in the network of agents which actually deals with the users; for example, using the same street vendors who sell pre-paid GSM airtime to ingest or pay out cash. The situation in Canada is obviously very different, but the problems are fundamentally the same - you need to beat the first-fax problem by making it as easy as possible to get cash in and out of the system.

Zoompass has solved this by integrating with the most common and (reasonably…) trusted financial networks operating in Canada.

Banks, the obvious partners for…money

Secondly, it is always a major question in MMT as to who mobile operators should partner with - banks, “remittance service providers”, or retail-focused companies like bus operators in some parts of Africa. Orange’s activities in West Africa and the Middle East, for example, have always involved a local bank as a partner, which holds the subscriber funds, deals with other banks and financial institutions, and acts as the bank for regulatory purposes. This has the advantage that it maximises the strengths of both partners, avoids the prospect of the same company being faced with simultaneous banking and telecoms regulation, and makes use of existing financial systems whilst avoiding high-margin wire service firms.

Zoompass has chosen to tie up with a bank; after all, as Orange VP of Payments, Mung Ki-Woo, said a couple of MWCs ago, the bank knows what to do when a customer dies and both his wives try to claim his outstanding balance, and mobile operators usually don’t. As we’ve pointed out before, the bank partnering option is common to all successful MMT projects.

Inter-carrier cooperation considered crucial

Thirdly, we’re facing the prospect of many mutually incompatible payments systems. This has already become a reality in parts of East Africa. For example, in Uganda, Valuable Bits reports that there are no less than three competing and balkanised mobile payments networks, and even though two of them both use the same M-PESA technology there is no interconnection between them. Shopkeepers and tradesmen have to maintain accounts with all three to be sure of accepting payments, and keep a balance on all three to be sure of making them. This is clearly far from optimal, especially in an industry that prides itself on providing global interconnection between hundreds of operators in several dozen jurisdictions for billions of subscribers.

And the company behind Zoompass has tackled that. Zoompass is a customer-facing brand for Enstream LLP, a company which operates the common infrastructure, manages the relationship with the banks, and which is jointly owned by the three Canadian GSM operators - Bell Canada Mobility, Rogers Wireless, and Telus. This is the really impressive bit - we’ve always said that carriers will have to cooperate to deliver transactional VAS, either through standardisation or through the creation of a common platform, rather like a roaming hub.

enstream.png

Lessons from finance - notably the VISA credit card network and the British LINK network of ATMs - from the Internet - with the importance of Internet exchanges - and from many other industries - notably shipping, with the development of the load-centre ports - support this view. Our containerisation case study is here, with more here; we discussed LINK in this post on Amazon.com and transactions.
The LINK network of ATM operators in Britain originated with the UK’s small mutually-owned banks (called “building societies”). As their competitors, the national commercial banks (the clearing banks, in British parlance), installed more and more ATMs hoping to achieve nationwide coverage, the building societies were faced with a serious problem. As (mostly) small local institutions, no single society could hope to offer national card service, and it could be expected that the clearing banks would exact a high price to let a small competitor use their system. However, precisely because their territories rarely overlapped, a society that joined LINK hugely increased its coverage at once. More members meant more value, and also helped spread the costs of the shared infrastructure. Eventually, it was the clearing banks who had to swallow their pride and join LINK.

The GSMA’s OneAPI project is also an effort to implement inter-carrier cooperation. And the example of the Nordic countries’ approach to mobile number portability (MNP) shows that this is the way to go - by putting their mobile numbering plan under the control of an independent company jointly owned by the operators, they were able to deliver faster and cheaper portability earlier than anywhere else.

In fact, the UK’s solution to MNP, where numbers remain with the original carrier they were assigned to, which then refers traffic to the subscriber’s current operator, is likely to add considerable complexity to the challenge of inter-carrier API interworking.
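To see where that complexity comes from, here is a minimal, hypothetical sketch of the two resolution models (the operator names and number are invented): onward routing requires knowing the original assignee and consulting its ported-out records, while a Nordic-style central registry answers in a single lookup.

```python
# Hypothetical sketch of the two MNP routing models.

# UK-style onward routing: each donor operator keeps its own ported-out
# records, so resolution depends on first knowing the original assignee.
donor_records = {
    "OpA": {"+447700900001": "OpB"},  # number ported from OpA to OpB
}

def resolve_onward(number, original_operator):
    ported = donor_records.get(original_operator, {})
    return ported.get(number, original_operator)

# Nordic-style central database: one jointly owned registry answers directly,
# with no need to know the number's history.
central_db = {"+447700900001": "OpB"}

def resolve_central(number, default_operator):
    return central_db.get(number, default_operator)

print(resolve_onward("+447700900001", "OpA"))   # OpB
print(resolve_central("+447700900001", "OpA"))  # OpB
```

An API gateway trying to route a request to "the subscriber's operator" inherits whichever of these models the numbering plan uses - with onward routing, every interworking partner needs the donor's cooperation first.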

Om Malik’s remark that getting the US operators to agree on anything was like herding cats needs to be revised in the light of this; famously, the way to herd cats is to move the food. The Canadian operators have recognised that the food is being moved - that their voice & messaging staple business is being eroded - and they are moving to catch up with it.

Conclusions

Arguably, one of the crucial lessons from the success of M-PESA and friends on one hand, and the failure of PayForIt on the other - which, after all, had the backing of multiple carriers and an interoperability solution - is that it doesn’t do to become obsessed by “content”. In most of the developed world, mobile operators have treated mobile payments and mobile money transfer as an adjunct or afterthought to what they imagine is their content business.

This misses the overriding truth that what the public has always wanted from the industry is communication; just as they value the ability to communicate ad hoc with anyone by voice and messaging, and increasingly the ability to interact with arbitrary Web services, they value the ability to transfer funds ad hoc much more than they value being able to pay for “content”. And who could disagree? The possible market for transaction services is as big as the entire economy, which puts any conceivable content play in the shade.

Making this a reality requires inter-carrier cooperation, partnership with a financial sector now increasingly keen to rebuild its base in retail banking, a strategy to build a trusted distribution network, and urgent attention to the technical challenges of numbering and API interworking. But above all, it requires us to see mobile money in the broader context of the fundamental demand for communication, as expressed in the Customer Participation Framework.

To share this article easily, please click:

June 18, 2009

Pilot 2.0 (Part 2): Using Old Systems for New Business Models

Sometimes new business models feel too complicated to undertake. However, new methods and technologies are enabling operators to trial new business models without having to change their existing systems or processes. Some people are calling this “BSS 2.0”.
 
Building on our last post on how to pilot Telco 2.0 ideas, this guest article from Andrew Thomson, VP Solutions at Infonova, expands on his stimulus presentation from May’s Telco 2.0 Executive Brainstorm and provides some examples of where and how “BSS 2.0” could add value.

PS: Scroll down to see an interview with Simon Torrance and Andrew Thomson for TelecomTV on the same topic

Telecommunications - An industry undergoing massive change

As we know, the industry is changing rapidly, with new revenue models being invented and delivered by dot-com-style companies. To compete, telcos and service providers will have to transform. But most telcos and service providers are trapped by their legacy systems. In many cases, internal culture and conventional suppliers still advise CXOs to plough further investment into legacy systems that have already cost millions.

At the same time, network transformation is driving massive investment, and many advisers recommend integration at the OSS layer to deliver transformation for the operator and the end customer. Integrating legacy systems may deliver new functionality for both transformation and complex two-sided business models - but it will cost millions and take years. Executives need to move faster in evaluating new technology options that overlay and orchestrate existing systems, or they may find that the market has moved on and left their company behind.

The transformation dilemma

Most telcos are facing a key challenge: network transformation. The move from physical networks to logical IP services on physical networks is forcing most telcos to re-think their investment and market approach.

Figure 1: Technology Assets & Business Transformation

BSS 2.0 needs to deliver an abstracted BSS capability that supports NextGen services, legacy services and new business models simultaneously, so that a telco can migrate its customers from legacy to NextGen at the appropriate time - be it through geographically specific or event-driven migration.

New business models

The market is evolving rapidly, with new entrants like Google, Skype or Freeserve offering new ways for consumers to subscribe to services. One thing is clear … a broad array of new business models will need to be supported by telcos’ BSS systems so that telcos can stay relevant.

Figure 2: Fixed and Mobile Bundling - a first step

Bundling capabilities are a key facet of new business model success. If a telco cannot offer a simple level of bundling where, for example, pooled minutes can be shared between fixed and mobile, then its systems will probably not have the capabilities to support more complex business models. Again, many advisers will propose that this can be resolved with integration at the OSS layer - proceed down that route and be prepared to spend millions and wait several years!
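As a rough illustration of what such a bundling capability has to do at rating time (the pool size and overage rate below are invented), a shared fixed/mobile minute pool might be modelled like this:

```python
# Hypothetical sketch of a shared fixed/mobile minute pool: calls from any
# line in the bundle draw on one pool; overflow is rated per minute.

POOL_MINUTES = 500
OVERAGE_RATE = 0.10  # currency units per minute (illustrative)

def rate_bundle(calls):
    """calls: list of (line_id, minutes) across fixed and mobile lines."""
    used = sum(minutes for _, minutes in calls)
    overflow = max(0, used - POOL_MINUTES)
    return {
        "pooled_used": min(used, POOL_MINUTES),
        "overflow_minutes": overflow,
        "overage_charge": round(overflow * OVERAGE_RATE, 2),
    }

bill = rate_bundle([("fixed-01", 320), ("mobile-01", 250)])
print(bill)  # 500 pooled minutes, 70 overflow, 7.0 overage charge
```

The point of the sketch is that the rating engine must see usage from both networks in one place - exactly the capability siloed fixed and mobile stacks lack.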

Telco Apps stores

Order-to-cash BSS platforms should be technology neutral and able to support the latest business models simultaneously with existing legacy business models. App Stores are expected to be a major cash generator in the next decade… the capability to support complex business models, letting the multiple parties involved in App Stores readily execute and benefit from revenue-sharing models, is a key requirement.

infonova-3.png

Figure 3: Apps Store - Monetisation

For example, multiple operators can offer their APIs for App Developers to incorporate within applications. When an application has been developed, the developer lodges his or her App in both the Primary Operating Entity catalogue and the relevant Application Store catalogue. This enables the automated disbursement of royalty revenue to all the respective parties when an end-user consumes an App. A key issue solved is the ability to support true partitioning of multiple catalogues, so that Apps can be marketed under various brands to the appropriate segments.
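The disbursement step itself can be sketched as a simple revenue split (the share percentages below are purely illustrative, not taken from Infonova's model):

```python
# Hypothetical sketch of automated royalty disbursement for one app sale.
# Share percentages are illustrative only.

SHARES = {"developer": 0.70, "app_store": 0.20, "operator": 0.10}

def disburse(sale_price):
    """Split one end-user app purchase across the parties in the chain."""
    assert abs(sum(SHARES.values()) - 1.0) < 1e-9  # shares must cover 100%
    return {party: round(sale_price * share, 2)
            for party, share in SHARES.items()}

print(disburse(2.99))
# {'developer': 2.09, 'app_store': 0.6, 'operator': 0.3}
```

The hard part in practice is not the arithmetic but doing this automatically, per consumption event, across partitioned catalogues and multiple settlement parties - which is what the BSS has to support.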

Monetising the OTT Hype

A number of new players have entered the market operating an approach that has been termed “Over The Top”. Apple is one of the most significant players with its iPhone product. Other device vendors have been trying to determine their approach to achieve a similar type of relationship with end-customers - while at the same time trying to avoid offending key customers … the operators.

infonova-4.png

Figure 4: OTT Emulation & Monetisation

Using a multi-layer business model, both operators and device manufacturers can benefit. The operator wins a new channel to market and is able to participate and share in the revenue opportunities.

This enables manufacturers to sell their devices with a service/carriage relationship direct to the end customer. The package can include other bundled services such as device insurance, fixed-line services, content and other accessories. This business model enables the device manufacturer to use the monthly billing relationship to capture micro-charges and to maintain a regular marketing feedback loop with the customer. It also provides the capability to share and allocate new revenue streams with other third parties.

Enabling Channels to Market

There are increasing opportunities to support Business-to-Business-to-Consumer models. One example, illustrated below, enables an automobile manufacturer to turn its dealers into channels to market.

Figure 5: Tenancy & Dealer Operations for the End-User

This will enable the automobile manufacturer to commission the dealers and easily support end customers’ use of mobile and navigation devices, as well as enhanced end-user services such as directed ads, campaigns and service details.

Infonova BSS Release 6

Infonova BSS R6 is a latest-generation solution that can sustain and deliver transformation scenarios and complex business models at a fraction of the cost of traditional integration scenarios.

Infonova BSS R6 is specifically designed to support Telco 1.0 and Telco 2.0 business models simultaneously: triple & quad play bundling, white labeling, rich wholesale products and innovative partner models. Infonova’s platform helps optimize existing business models and supports flexible intercommunication between legacy and next generation systems. The proven Infonova BSS product family modules cover the enhanced ‘Order-to-Cash’ lifecycle: Platform and Business Management, Product Management, Customer Management, Order Management, Billing and Finance.

Piloting New Business Models

Older technologies can take years to configure. Encapsulating decades of knowledge, Infonova’s latest BSS product, Release 6, is a J2EE order-to-cash platform designed for true multi-tenant order-to-cash operations and fully convergent, complex business models. Piloting new models can be undertaken in weeks… Infonova BSS Release 6 is already available in the “cloud”.

Telco 2.0 CEO Simon Torrance discusses the challenges of new business models and network transformation with Andrew Thomson of Infonova in a TelecomTV interview…

About Infonova

Infonova was founded in 1989 and delivers highly automated IT solutions for Telco & Media companies. Infonova’s BSS solutions have been implemented for incumbent, attacker and cable operators supporting triple & quadruple play service portfolios. For more information, please visit www.infonova.com.


June 15, 2009

Ring! Ring! Hot News, 15th June, 2009

In Today’s Issue: Sprint sells iDEN assets; US mobile data price wars; value heads for the edge and for key infrastructure; T-Mobile denies Data Thieves of 2009 caper; epic net neutrality row in the UK as politicos play to the whistle on Digital Britain; C&W in fatcat punchup; Australian NBN news shows Telstra and Optus making nice; a hundred flowers blossom, a thousand schools of thought contend, and they all want a broadband stimulus cheque; universal GSM for the poor - the US poor; Telekom Austria looks at separation; Qwest - customers don’t care about speed but do want everything now; Sprint-L(3) tie-up to buy up Qwest; Kenya’s submarine cable comes ashore; China Unicom “buys 125,000 Node-Bs”; Qualcomm sees recovery; Dell claims to monetise Twitter; dismantle your Palm Pre; Palm hires Apple iPod chief; Nokia coming for Adobe and MS developers; two Nokia howtos; sue your way to popularity; HP mobile social network; pitfalls of the smart grid; Cisco California comms considered costly; analogue switchoff, MediaFLO on the air; Iranian BGP admins working for the clampdown

So it finally happened: Sprint-Nextel is selling a chunk of iDEN assets in the Midwest to settle with one of its many, many angry affiliates. This sounds like an opportunity for someone innovative to make use of the system’s special powers in enterprise Voice 2.0. Meanwhile, price war rages; Sprint again slashed its data tariff this week after Verizon did likewise. With 500MB/month for $40, they’re yet to get close to the sort of prices 3UK offers.

After all, the value is moving to the edge, as a report quoted on David Isenberg’s blog says. Interestingly, it argues that the money will end up with those parties who either provide hyper-specialised edge applications or else with those who control key infrastructure enablers, like wholesale backhaul. (Or customer data; T-Mobile USA, for their part, are vigorously denying last week’s report of a giant data theft, which is a sort of an edge application.)

The big question this week in the UK is whether there is just too much video to fit through the pipes, or whether the Big Expensive Phone Company is doing rather too well at making money from its wholesale backhaul operation. TelecomTV attacks the proposal to tap content providers for cash if they don’t want their traffic throttled; it’s hard to say what is going to happen except for a hell of a row, and that it wasn’t the best time for another BT division to say its capacity problem was “solved”. Very probably, the real motive here is to position BT politically for the home straight of the Digital Britain report - whose author is quitting the government.

Cable & Wireless, for its part, is heading for an epic row with its shareholders over executive pay.

In Australian NBN news, the regulator is arguing that structural separation at Telstra is vital to the success of NBN. Telstra argues that there is no need because it’s wholesale or dark fibre only. Which in itself confirms our view that they are preparing to pull a KPN and make nice with the fibre deployers. Relatedly, Optus is planning to swap its cable network and layer zero footprint for a stake in the NBN. Meanwhile, Wired is amused by the sheer number of interest groups sticking their oar into the US national broadband plan.

One of the least egregious would be this effort to make GSM a universal service, like PSTN service already is.

Telekom Austria is looking seriously at the idea of structurally separating its Austrian fixed network, notably because all the employees are civil servants. They haven’t apparently considered the logical next step…

Qwest, meanwhile, argues that the customer doesn’t care about speed but does want “what they want, when they want it, delivered seamlessly”. Sprint and Level(3) are apparently considering a joint bid for their long distance backbone.

Speaking of long distance, Kenya is about to get its submarine link, as the TEAMS cable backed by Vodafone and France Telecom lands at Mombasa, to be “welcomed” by the president. You’d think it was going to swim up onto the beach all by itself.

And China Unicom is reckoned to be in the market for 125,000 Node-Bs. Those Alcatel-Lucent and Ericsson deals sound better and better with every passing day.

Qualcomm has increased its profits guidance for this year, in a sign of returning economic confidence. As TTV points out, being a chip maker means that you are a leading indicator for the whole value chain. Hey, Dell even claims to be making money out of Twitter.

If that has fired you with enthusiasm, why not try dismantling a Palm Pre into its component parts? Here’s how to do it. However rich, smart, and pretty iPhone owners supposedly are, you’ve got to assume that the geek status boost from stripping a Pre will blow them out of the water. Palm, meanwhile, have hired the former head of the Apple iPod division to follow up on their Pre saving throw.

In the Nokiasphere, they are trying to woo Adobe and MS Windows developers. You can now develop for Nokia’s Web Runtime widgetry platform in Adobe’s Creative Suite or MS’s Visual Studio IDEs, ooh, and in a version of the Eclipse open-source IDE too. There’s a nice post here about in-line updates for WRT widgets, and a HOWTO on using Eclipse with Maemo Linux here.

That’s not quite the way to respect data sovereignty; mobile directory firm Connectivity demanded O2 hand over all its customers’ numbers, and sued when they wouldn’t. As O2 rightly point out, people treat mobile numbers as something much closer to plutonium than potatoes. If anyone wants a good idea, why not try one of the anonymous calling services we’ve blogged about in the past? Or you could have a look at this HP research project, which does a social network analysis of your call logs.
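We don't know the details of HP's method, but the basic idea of mining a call log for social structure can be sketched in a few lines (the log entries below are invented):

```python
from collections import Counter

# Hedged sketch of call-log social network analysis: build a contact graph
# from (caller, callee) pairs and rank the user's strongest ties by
# interaction count. All entries are invented for illustration.

call_log = [
    ("me", "alice"), ("me", "bob"), ("alice", "me"),
    ("me", "alice"), ("bob", "carol"),
]

def strongest_ties(log, user):
    """Rank the contacts a given user interacts with most, in either direction."""
    ties = Counter()
    for a, b in log:
        if user in (a, b):
            ties[b if a == user else a] += 1
    return ties.most_common()

print(strongest_ties(call_log, "me"))  # [('alice', 3), ('bob', 1)]
```

Even this trivial version shows why call records are sensitive: the graph of who calls whom, and how often, is exactly the kind of data people treat "closer to plutonium than potatoes".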

Smart grids are widely considered to be a major growth area for Telco 2.0. Of course, hooking the power grid and the Internet together raises some serious security issues; which is one reason why using reasonably trustworthy systems like SMS, USSD, and the SIM is a good idea.

Prices are out for Cisco’s California unicomms/Voice 2.0 products, and it’s fair to say they are aiming at a traditional Cisco big iron, bespoke network strategy.

The first wave of US analogue shutdown is here; Qualcomm’s MediaFLO went on the air as planned. The EFF is celebrating their win; there will be no broadcast flag.

And the Iranian government’s censors and thugs appear, as with most things in the dictatorship line from Iran, to be considerably more subtle than their rivals. Where Burma simply pulled the plug and vanished from the Internet routing table, although they briefly re-appeared for reasons still unknown, and Pakistan accidentally blackholed YouTube for the whole world, Iran has decided to reduce the availability of Internet transit and route its traffic through Turkey. Why Turkey? We don’t know. But Renesys does know what happened.


June 12, 2009

Technical Architecture 2.0 - Good Start, but Significant Gaps

Below is a summary analysis of the Technical Architecture 2.0 session at the May 2009 Telco 2.0 Executive Brainstorm.

The premise we explored was this:

The implementation of new ‘Two-Sided’ Telecoms Business Models has major consequences for telco network architecture. Perhaps most importantly, data from separate internal silos needs to be aggregated and synthesised to provide valuable information on a real-time basis. Key process interfaces that enable new services must be made available to external parties securely and on demand. Network and IT functions must start collaborating and function as a single entity. Operators need to migrate to a workable architecture quickly and efficiently; vendors have to support this direction with relevant new product offerings and strategies.

Participants’ “Next Steps” Vote

Participants were asked how well current industry activity around technical architectures supports the development of Telco 2.0 business models.

  • Very well - the industry is shaping up well to deliver new business models;
  • Good start, but more needs to be done - major building blocks are missing;
  • Lost cause - the industry will never deploy the capabilities for new business models.

Lessons learnt & next steps

Since the development of broadband access, the Internet world has recognised that customers can have many, dramatically different roles and attributes, needing specific functionality, preferences, and user profiles. Operators are in a unique position in that they have a fuller picture of customers than any single website or retailer or service provider. Several have already recognised this, and a number of vendors are offering scalable platforms which claim to be in line with the current EU legislation on data protection.

Marc Davis, Chief Scientist, Yahoo! Mobile: “Data is to the information economy as money is to the economy. But there is a missing infrastructure - because there’s no user interface for this data, and what is the equivalent of a bank for this data - who looks after it?”

But as well as user profile data, the 2-sided business model requires on-demand response from the network infrastructure. It will not matter whether it is the network or OSS/BSS/IT element that is breaking down - customers won’t care, they will just find the situation unacceptable. Both the network and IT elements must work together to deliver this. Operators are moving in that direction organisationally and structurally.

Telco 2.0 expects that this will result in new implementations of control and monitoring systems such as Resource & Service Control (RSC) systems. As services are the key business drivers, the opening up of the walled gardens is changing service delivery platforms quite rapidly: most new applications are centred on app stores, mash-up environments, XaaS environments, smartphone Web browsers and the like, which do not demand a traditional SDP or SDF. In addition, enabling services are becoming an essential element of operators’ core products. These enabling services will, in the future, allow operators to monetize their network assets.

These enabling services need a framework which is highly flexible, agile and responsive, and integrated with the features defined by the NGMN. While not all these points are implemented yet, there is increasing understanding among operators, upstream service providers and regulators that this new phase, opened up by the two-sided business model, represents a historic opportunity for all members of the ecosystem.

Marc Davis, Chief Scientist, Yahoo! Mobile: “What if we had new, industry-standard terms of service under which users owned their data?”

Before the technical details can be finalised, of course, business models need to be scoped. However, the major technical areas discussed above are focal points for technology development. In the short term, Telcos should:

  1. Build up a logical semantic database as preparation for database integration;
  2. Include migration from 2G and 3G and backwards compatibility in LTE tenders;
  3. Prepare a user profile database;
  4. Reduce the number of OSS/BSS systems;
  5. Develop real-time responsiveness in OSS/BSS systems;
  6. Separate the control and data planes, separate services from transport;
  7. Implement and deploy an RSC system as a multivendor abstraction layer.
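Point 7 is the most architectural of these. A minimal sketch of what a "multivendor abstraction layer" means in practice (all class and method names below are invented for illustration) is a thin facade that routes requests to per-vendor adapters implementing one common interface, so OSS/BSS code never touches vendor-specific APIs directly:

```python
# Illustrative sketch of an RSC as a multivendor abstraction layer.
# Each vendor adapter implements the same interface; callers use the facade.

class VendorAdapter:
    def set_qos(self, subscriber, profile):
        raise NotImplementedError

class VendorA(VendorAdapter):
    def set_qos(self, subscriber, profile):
        # In reality this would call vendor A's provisioning API.
        return f"vendorA: {subscriber} -> {profile}"

class VendorB(VendorAdapter):
    def set_qos(self, subscriber, profile):
        return f"vendorB: {subscriber} -> {profile}"

class ResourceServiceControl:
    """Routes each request to the adapter for whichever vendor serves it."""
    def __init__(self, adapters):
        self.adapters = adapters  # region -> adapter

    def set_qos(self, region, subscriber, profile):
        return self.adapters[region].set_qos(subscriber, profile)

rsc = ResourceServiceControl({"north": VendorA(), "south": VendorB()})
print(rsc.set_qos("north", "sub-42", "gold"))  # vendorA: sub-42 -> gold
```

Swapping a vendor then means writing one new adapter, not rewiring every system that touches the network.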

Event Participants: can access the full presentation transcripts, delegate feedback, and long-term recommendations for action at the event download page (NB. the URL has been emailed to you directly as part of your package).

Executive Briefing Service Members: can do so in a special Executive Briefing here. Non-Members: see here to subscribe.


Online Video 2.0 - Time to Re-think the Fundamentals

Below is a summary analysis of the Video Distribution 2.0 session at the May 2009 Telco 2.0 Executive Brainstorm.

The premise we explored was this:

The demand for internet video is exploding. This is putting significant stress on the current fixed and mobile distribution business model. Infrastructure investments and operating costs required to meet demand are growing faster than revenues. The strategic choices facing operators are to charge consumers more when they expect to pay less, to risk upsetting content providers and users by throttling bandwidth, or to unlock new revenues to support investment and cover operating costs by creating new valuable digital distribution services for the video content industry.

Participants’ ‘Next Steps’ Vote

Participants were asked: Which of the following do we need to understand better in the next 6 months?

  • Is there really a capacity problem, and what is the nature of it?
  • How to tackle the net neutrality debate and develop an acceptable QOS solution for video?
  • Is there a long term future for IPTV?
  • How to take on the iPhone regarding mobile video?
  • More aggressive piloting / roll-out of sender party pays data?


Lessons learnt & next steps

The vote itself reflects the nature of the discussions and debates at the event: there are many issues on which the industry is not yet clear and which need to be ironed out. The world is changing fast, and how we overcome issues and exploit opportunities is still hazy. And all the time, there is a concern that the speed of change could overtake existing players (including telcos and ISPs)!

However, there does now seem to be greater clarity on several issues, with participants increasingly keen to see the industry tackle the business-model problem of flat-rate pricing to consumers, with little revenue attached to the distribution of content (particularly bandwidth-hungry video). Overall, most seem to agree that:

  1. End users like simple pricing models (hence the success of flat rate), but some ‘heavy users’ will require a variable-rate pricing scheme to cover the demands they make;
  2. Bandwidth is not free and costs to Telcos and ISPs will continue to rise as video traffic grows;
  3. Asking those sending digital goods to pay for the distribution cost is sensible…
  4. but plenty of work needs to be done on the practicalities of the sender-pays model before it can be widely adopted across fixed and mobile;
  5. Operators need to develop a suite of value-added products and services for those sending digital goods over their networks so they can charge incremental revenues that will enable continued network investment;
  6. Those pushing the ‘network neutrality’ issue are (deliberately or otherwise) causing confusion over such differential pricing which creates PR and regulatory risks for operators that need to be addressed.
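To make the sender-pays argument concrete, here is a toy comparison of who carries the distribution cost for one video delivery under each model (all volumes and prices below are invented):

```python
# Illustrative comparison of flat-rate vs sender-pays for one video delivery.
# All figures are hypothetical.

GB_DELIVERED = 1.5
COST_PER_GB = 0.05        # distributor's marginal delivery cost
SENDER_RATE_PER_GB = 0.08 # what the sender would pay per GB

def flat_rate_bill():
    # Consumer pays the same regardless of volume; sender pays nothing;
    # the distributor absorbs the delivery cost as negative margin.
    return {"consumer_extra": 0.0, "sender": 0.0,
            "distributor_margin": round(-GB_DELIVERED * COST_PER_GB, 3)}

def sender_pays_bill():
    # The sender covers delivery cost plus a margin for the distributor.
    sender = round(GB_DELIVERED * SENDER_RATE_PER_GB, 3)
    return {"consumer_extra": 0.0, "sender": sender,
            "distributor_margin": round(sender - GB_DELIVERED * COST_PER_GB, 3)}

print(flat_rate_bill())    # distributor absorbs the cost
print(sender_pays_bill())  # sender covers cost plus a margin
```

The toy numbers make the structural point: under flat rate the margin on incremental video traffic is negative, which is why the incentive to throttle exists at all.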

There are clearly details to be ironed out - and probably experiments in pricing and charging to be done. Andrew Bud’s sending-party-pays model (many others, it must be added, have suggested similar) may work, or it may not - but this is an area where experiments need to be tried. The idea of “educating” upstream users is euphemistic - they are well aware of the benefits they currently accrue, which is why the Net Neutrality debate is being deliberately muddied. Distributors need to be working on disentangling bits that can travel free from those that pay to ride, not letting anyone get a free ride.

As can be seen in the responses, there is also a growing realisation that the telco has to understand and deal with the issues of the overall value chain, end-to-end, not just the section under its direct control, if it wishes to add value over and above being a bit pipe. This essentially means moving towards a solution of the “Quality of Service” issue: deciding how much of the solution is capacity increase, how much is traffic management, and how much is customer expectation management.

Alan Patrick, Telco 2.0: “98.7% of users don’t have an iPhone, but 98% of mobile developers code for it because it has an integrated end-to-end experience, rather than a content model based on starving in a garage.”

The “Tempus Fugit” point is well made too - the Telco 2.0 participants are moving towards an answer, but it is not clear that the same urgency is being seen among wider Telco management.



June 11, 2009

Pilot 2.0 - How to trial new business models

Below is a summary analysis of the Pilot 2.0 session at the May 2009 Telco 2.0 Executive Brainstorm.

One of the recurring themes at the event was ‘where to start?’ with Telco 2.0 business models. Although many participants could perceive where operators would like to be eventually, there was much less belief or consistency in working out how to get there.

Most recognise the need for caution. C-level executives will, quite rightly, take time to buy into the idea, as will investors: proof points will be needed. And Telco 2.0 projects will need to be aligned with various other transformation initiatives, such as moves to new OSS/BSS stacks or the outsourcing of important functions. In addition, any major new programme of investment (for example in new hardware platforms, or extensive developer-centric marketing and support) is likely to be burdened by delays and much closer business-case scrutiny in the current economic climate.

So, Telco 2.0 believes that quick and influential wins might be achieved via pilot projects - illustrating the power and vision of two-sided models without needing complete reinvention of overall company strategy first. As the economy picks up and executives are more inclined to take risks again, these proof points can then be used to accelerate much larger programmes of change. Clearly, the appetite for risk will vary by operator - as will the most accessible ‘low-hanging fruit’.

Audience input around piloting spanned a wide range of themes – from the technical to the organisational, and from attitudinal shifts to more specific early niches. Overall, all these elements will be important to align.

Lessons learnt & next steps

The main take-out from this session is that there is no single clear path. The feedback yielded dozens of suggestions, many of which make sense on a standalone basis. The appropriate options for any given operator will clearly depend on its specific circumstances - fixed vs. mobile, tier 1 vs. tier 2, national vs. international, age & capability of OSS, maturity of existing API and Telco 2.0 programmes, and numerous other criteria.

However, one theme came out strongly throughout the event: do something quickly. There is insufficient time to pursue the usual protracted Telco timescales for research and deliberation. This means that areas with long lead times - such as government projects - are typically unsuitable. Some target industries are also experiencing lengthening sales/decision cycles in the recession - these are also not optimal for pilots.

Instead, focusing on sectors or groups capable of quick turnarounds - with easy measurement of success or failure - is paramount. Web-based companies are often the most flexible, as are some academic institutions. There may also be a geographic dimension: countries with low regulatory burdens, or where projects are rarely stuck for months with lawyers, are attractive for piloting purposes.

Working alone may be fastest, but collaborating with other operators is likely to be more effective in demonstrating the validity of the Telco 2.0 concept. Balancing this natural tension will be important in the near term. Gathering a small group of operators together to work on tightly defined projects seems sensible, as these can morph, over time, into larger-scale activities with a larger ecosystem.

The Telco 2.0 Initiative is happy to work with any individual operators looking to identify early options. But some general short-term guidelines include:

  1. Get a credible senior executive (a board member, preferably the CEO) to sponsor activities in this area. Don’t try to build something without this support, as a new business model will never succeed without the will to change at the top;
  2. Realistically assess the likelihood that the corporate culture and systems will sustain ‘maverick’ Telco 2.0 operations. If it can, it is probably worth setting up an in-house group to work closely with relevant IT and operational units to select pilot areas and capabilities. But be honest with yourselves - if this will get mired in bureaucracy and politics, first seek an alternative approach outside the main business;
  3. Where possible, avoid trials which need software or devices to be ‘hard-coded’ as making changes to beta versions is difficult and distribution issues will limit adoption. Instead, using the web or browsers as an interface enables any changes to be made on the server-side, on an ongoing basis;
  4. Web-based trials have another advantage - multiple versions of the same underlying service can be developed in parallel, enabling project managers to see immediately what works and what doesn’t, by comparing feedback from separate groups of customers;
  5. Perform an audit of current Telco 2.0-type initiatives across the whole company. Highlight any apparent duplication of effort, and predict any likely areas of tension or internal competition as early as possible.  This is not trivial - in-fighting can kill projects quickly;
  6. Assess and contribute to relevant industry-wide collaboration projects - GSMA OneAPI, OMTP BONDI, etc. Send representatives to developer meetings of competitors or peers elsewhere in the world, or in adjacent technology markets;
  7. Look for any internal groups that could themselves act as early clients for new service propositions.  It is easy to be blind to the obvious:  if communications-enabled business processes are valuable, why not communications-enable your own processes first?
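Point 4 above is particularly cheap to implement. A common sketch (the trial name and customer IDs below are invented) assigns each customer to a variant deterministically by hashing, so assignment is stable across visits without storing any per-customer state:

```python
import hashlib

# Sketch of running several versions of a web-based trial in parallel.
# Each customer ID is hashed into one variant; the same ID always lands
# in the same bucket, so no assignment table is needed.

VARIANTS = ["A", "B", "C"]

def assign_variant(customer_id, trial_name="pilot-2.0"):
    digest = hashlib.sha256(f"{trial_name}:{customer_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Same customer always gets the same variant:
print(assign_variant("cust-1001") == assign_variant("cust-1001"))  # True
```

Because the trial name is part of the hash input, running a second trial reshuffles customers across buckets independently of the first - and comparing feedback between buckets is then a straightforward server-side exercise.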

The Telco 2.0 team is creating a database of pilot work, some of which will be presented at the next Telco 2.0 event on 4-5 November in London.

Event Participants: can access the full presentation transcripts, delegate feedback, and long-term recommendations for action at the event download page (NB. the URL has been emailed to you directly as part of your package).

Executive Briefing Service Members: can do so in a special Executive Briefing here. Non-Members: see here to subscribe.

To share this article easily, please click:

BT Tries To Fix Global Services with Open Source

BT has frequently been in the news here; innovation chief Matt Bross presented about his aim to change the company from one defined by infrastructure to one defined by software at the November 2008 Telco 2.0 event. More recently, its systems-integration and IT consulting wing, BT Global Services, has run into trouble, notably in its involvement with the UK’s controversial giant healthcare IT programme.

A software-defined telco - like a software-defined radio - sounds a great idea. But how will BT execute on it - and how will they stabilise Global Services, which is surely a crucial element in such a strategy? It’s an important question for anyone trying to implement Telco 2.0. It seems that BT is hoping that it can achieve this by embracing open source software (OSS) and the habits and methods that go with it.

An anonymous delegate at the last Telco 2.0 event sent us this Mindshare feedback message:

We need more outside the ecosystem players like Apple coming in to cross pollinate with our gene pool. My guess is that apple doesn’t attend Telco events because they are worried about damaging their own gene pool with our status quo. Give some kids full artistic license at a reference acct/operator to build their playground. Lock them in a room with caffeine and pizza and a big pipe. Carriers have great toys they would like to build with. Output = Telco 2.0

As it happened, we had the opportunity to see BT’s efforts to do just that recently. As well as its well-known investment in Voice 2.0 start-up Ribbit, and its previous Web21C API suite, BT’s efforts to initiate transformation towards Telco 2.0 saw it acquire a small UK software house called Osmosoft, which specialised in open-source development. They’re now installed in the spooky and very, very Bellheaded confines of the Westminster ATE - you know you’re in a telco when half the building is devoted to Ministry of Defence networks.

But what are they doing in there?

The short answer is that they are trying to infect the rest of BT.

J. P. Rangaswami, old friend of Telco 2.0 and MD of BT Design, the cross-cutting engineering function of BT, described the main purpose of Osmosoft as being to inject open source culture and methods into the BT mothership. He’s selected a dozen people as “evangelists” - oddly enough, the same number as the original evangelists. One hopes no-one will have to be crucified. He remarked that BT doesn’t want to be a consumer of open source, but wants to be a contributor, both to support the movement and also to influence the direction of the projects it uses. Further, once you’ve recruited open-source developers, they need to keep contributing to maintain their skills and stay current with progress in the broader community. Open partnering, he said, is vital. (You might almost think ex-Telco 2.0 people were involved).

So Osmosoft’s, and to a lesser extent Ribbit’s, real role is threefold - as a brain farm for recruiting talent that wouldn’t necessarily fit into a traditional telco culture like BT’s, as a terrorist cell spreading subversive ideas inside BT, and as an exercise in costly signalling. Biologists who study signalling in nature have long theorised that the reason why so many natural signals are disadvantageous to the creatures that produce them is that signals have to represent a real investment to be credible, as otherwise they would be universally faked. Osmosoft and Ribbit cost money - so they are a credible demonstration by top management that innovation is not a crime.

Osmosoft Towers

Jeremy Ruston, the head of Osmosoft, said that the decision to buy it boils down to the fact that BT knew there was innovation in open-source software and needed help to understand it. He was convinced by the fact that they didn’t mention lower costs, but did mention community: you can’t say anything sensible about OSS without talking about community. This helped overcome his suspicion about moving to a company where the finance department had twice turned down the funds for the deal.

BT, he said, seems like a hostile place for OSS; but as soon as he got there he found it was under every stone.

This is an important point for Telco 2.0 implementers - open innovation can come in through the back door as malware and semi-official, undocumented work-arounds, or it can come in the front door as solutions. But it will get you one way or the other, so it’s best to accept it on your terms. And there is often a surprising amount of innovation going on from below that might flourish in a more welcoming culture. BT engineers frequently used their own home-made or OSS solutions even when something expensive in the way of proprietary software had been purchased - management knew nothing of this until the creation of an OSS business unit helped to make it visible.

A major strength of OSS is that it is a low-friction way to solve problems in the enterprise. However, management suffered from a widespread belief that OSS was incompatible, illegal, or fattening. BT being an engineering-led culture, full of experts, the team felt they needed something that people could touch and feel. They decided to use Osmosoft’s TiddlyWiki as an example. This is a set of collaboration tools based on the wiki paradigm, but implemented as a single file and a back-end server that permits multiple users to synchronise their versions of the document as they add to it.
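The synchronisation idea described here can be sketched in a few lines. This is a hypothetical scheme with per-tiddler revision counters and conflict detection - the names and mechanics are illustrative assumptions, not TiddlyWiki’s actual API:

```python
# Illustrative sketch of single-document, multi-user synchronisation in the
# TiddlyWiki style: the document is a set of named "tiddlers", each carrying
# a revision counter, and a back-end merges each client's edits.
# (Hypothetical names and protocol - not TiddlyWiki's real implementation.)

class SyncServer:
    def __init__(self):
        self.tiddlers = {}  # title -> (revision, text)

    def pull(self):
        """Return the current state of the whole document."""
        return dict(self.tiddlers)

    def push(self, title, text, base_revision):
        """Accept an edit only if the client saw the latest revision;
        otherwise signal a conflict so the client can re-merge."""
        current_rev, _ = self.tiddlers.get(title, (0, ""))
        if base_revision != current_rev:
            return ("conflict", current_rev)
        self.tiddlers[title] = (current_rev + 1, text)
        return ("ok", current_rev + 1)

server = SyncServer()
status, rev = server.push("Tariffs", "v1 of the tariff page", 0)
# A second client editing from a stale revision is told to re-sync:
status2, _ = server.push("Tariffs", "a conflicting edit", 0)
```

The server-side merge is what lets many users add to one shared document without clobbering each other’s work.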

In Osmosoft’s own right, they’ve integrated it with the conference resources website Confabb. As a BT project, it’s being used to visualise and document BT infrastructure. J. P. Rangaswami remarked that “as we solve problems with OSS, we find other problems we could solve. There’s much more richness and value in doing it this way as opposed to buying in monolithic systems.”

BT Wholesale’s Web site includes 4,000 pages and 10,000 documents - not surprisingly, feedback from customers suggested that distributing documents and collaborating on them was a serious problem. Tracking changes to hundreds of pages of detailed and legally sensitive tariffs is hard. So they introduced TiddlyDocs, based on TiddlyWiki, to help document it all. Wholesale customers are now using this for their own information and also to help sell to their downstream customers. Sky Broadband is the first to trial it.

That sounds fun, but we couldn’t help feeling that it wasn’t particularly telco-specific. However, it’s worth pointing out that some of the use-cases we expect to pay off soonest are ones in the carriers’ own businesses. For example, BT is now using Twitter to get real-time feedback from its customers. But Twitter is not very different from SMS; why the fuss?

“Social networks have certain common features - a directory, relationships, communication, scheduling, and a record of changes, which is what the status-updating function is,” Rangaswami said. “Telcos, in the past, had the address book, especially when they were Post and Telegraphs as well as just Telephone. What Google, Microsoft, Yahoo!, and Facebook have done is add the scheduling and the record of changes, which no telco has ever done.”
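Rangaswami’s list of common features maps naturally onto a small data model: a directory, a set of relationships, and a time-ordered record of status changes. The sketch below is purely illustrative - the class and field names are invented, not any operator’s or social network’s real schema:

```python
# A minimal data model for the common social-network features listed above.
# All names are invented for illustration.

class SocialGraph:
    def __init__(self):
        self.directory = {}         # user_id -> display name (the "address book")
        self.relationships = set()  # unordered (user_a, user_b) pairs
        self.changelog = []         # time-ordered record of status updates

    def add_user(self, user_id, name):
        self.directory[user_id] = name

    def connect(self, a, b):
        # Store each relationship once, regardless of direction.
        self.relationships.add(tuple(sorted((a, b))))

    def update_status(self, user_id, status):
        # The "record of changes" - the piece telcos never built.
        self.changelog.append((user_id, status))

g = SocialGraph()
g.add_user("alice", "Alice")
g.add_user("bob", "Bob")
g.connect("alice", "bob")
g.update_status("alice", "reading about Telco 2.0")
```

The point of the sketch is how little of this is exotic: the directory is the address book telcos already own; the changelog is the part the web players added.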

But it’s not really connected to the infrastructure, it doesn’t touch on voice or VAS. BT has, of course, been trying to do better on these for several years, first with Web21C and then with Ribbit. As J. P. Rangaswami says, the decision to do this was taken as long ago as 2006. But Web21C didn’t take off. He has an interesting take on why.

“Communities don’t do what you expect, though,” he said. A lot of early usage of Web21C’s call control API was simply “virtual calling cards - arbitraging between the voice tariff and the service we made available through Web21C.” It’s a classic Telco 2.0 moment; the primacy of voice demonstrated. People will do almost anything for cheaper calls; the flipside of this is that voice is the only form of interpersonal electronic communication that the public are willing to pay for.

Rangaswami also argues that the Web21C project overstated the relative importance of server-side solutions as opposed to client-side ones. As a result, the decision was taken to revise the business model, accelerate opening the network, and acquire Ribbit and its team of Adobe Flash developers.

Developers are offered a choice of routes to market and business models - you can choose to “pay per drink”, or “eat all you can within limits” - and a range of levels of engagement. “Have you got your own termination, billing, data centre? We can accommodate users with all of these, none of these, or any combination,” says Rangaswami.

He argued that a major role for Osmosoft, and for OSS in general at BT, was recruiting; “It’s a talent war - we’re primarily concerned in acquiring talent. Most businesses aren’t expanding, but we’re always trying to gain new capabilities. Osmosoft is our most effective acquisition tool; much better than M&A.” Jeremy Ruston said that hires made from the community are much cheaper, because you can see what they’ve done in terms of actual code. Also, as one of the developers pointed out, OSS means that you don’t have to abandon your work on a project when you change jobs.

However, open source is becoming deeply embedded in the infrastructure. According to J.P. Rangaswami, “I can’t think of any BT product that doesn’t include Linux in some way.” Rangaswami argues that there is an interesting dynamic at work, which is likely to transform the IT industry if it keeps up. “If a problem is general, you should go to the OSS community; if it is specific, you should go to commercial, because someone will have the commercial imperative to solve it. This is why OSS moves steadily up the generic IT stack - as soon as it becomes necessary to scale up and provide for diverse user needs, it becomes more efficient to become an open source community.”

Telco 2.0 couldn’t agree more. As Thomas Howe, CEO of Jaduka, said at this spring’s Telco 2.0 event, there are a few basic capabilities in the Voice 2.0 field which are applied to tens of thousands of highly specific business processes. Value is created at the touch-point between the two, which requires very specific knowledge of the target business process. This is why the upstream customers are important!

In the end, he pointed out, it’s the cost of change that becomes ludicrous, not the cost of acquisition. In India, he said, he could ask someone to come to dinner and his mother would fix it - the cost of adding some more vegetables was trivial; it’s different in the UK where you have lamb chops and count out the potatoes. “There is a lot of OSS within our stack, but we want it in our culture. We use software to deliver services - when it gets to the enterprise customers it’s a service. The focus is on moving their processes into our systems.”

Doc Searls described this as the “because” effect: Google makes money because of Linux, not with it. “This transition from the economics of scarcity to those of abundance,” Rangaswami said, “is necessarily destructive of value to at least one actor but creative of it to many others.” Similarly, the migration from Telco 1.0 is painful to the old voice monopoly, but it’s hugely beneficial to the wider economy, which represents far more money. The clever bit is to make sure a thin slice of the productivity gain is captured for the telco.

So, Rangaswami was asked, is open source going to save BT Global Services? “Platforms, reuse, and collaboration are going to save BTGS; and OSS underlies that,” he said.

To share this article easily, please click:

June 10, 2009

RIM: APIs are crucial, enterprises are the target

Blogging from the Open Mobile Summit, we were interested by some of RIM Senior Vice President Alan Brenner’s remarks about developing for mobile devices.

Brenner argues that the fundamental use cases for mobile applications are very different to those for desktop apps; whereas most desktop applications are “sovereign”, demanding your full attention as you analyse data, process e-mail, edit documents, work with graphics, play games, write code, etc, mobile applications are usually transitory. You’re doing something else when you briefly stop to check e-mail, send a text message, or look up information from the Web.

Secondly, mobile applications are usually an adjunct to some network-based resource - rather than doing the processing themselves, they send or receive data to and from a remote machine and provide the user interface. This only makes sense, given the form factor, intermittent connectivity, and battery restrictions - which will of course never go away, being based on physics and human factors. After all, a telephone is meaningless without a softswitch to talk to; computer deployment caused computer networking, but the process was reversed in mobile.

This is where Telco 2.0 comes in; if mobile applications are fated to be dominated by a mash-up paradigm, driven by the fundamental constraints of the medium, it’s going to be crucially important to provide the richest possible set of APIs for them to consume. Mobile applications are increasingly intended to lash together several Web services and wrap them in a mobile-expedient user interface. If Brenner is right, they’ll always be like that; systems like Palm’s WebOS, which makes the whole user interface a Web browser interacting with both local HTML objects and remote Web sites, will only make this more so, as the user experience blurs between local and remote elements.
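The mash-up pattern described above can be shown with a minimal sketch: the mobile client does almost no local work, simply calling two network services and combining the results into something fit for a small screen. Both service functions here are stand-ins, not real telco or web APIs:

```python
# Sketch of a mobile "mash-up" application: thin client, heavy network.
# Both backend functions are illustrative stand-ins for remote web services.

def locate_subscriber(msisdn):
    # Stand-in for an operator location API.
    return {"lat": 51.5, "lon": -0.1}

def nearby_stores(lat, lon):
    # Stand-in for a third-party points-of-interest web service.
    return ["Oxford Street", "Covent Garden"]

def store_finder(msisdn):
    """The whole 'application': lash two services together and return
    a result ready for a mobile-expedient user interface."""
    pos = locate_subscriber(msisdn)
    stores = nearby_stores(pos["lat"], pos["lon"])
    return {"msisdn": msisdn, "stores": stores}

result = store_finder("+447700900123")
```

The value of the application lives almost entirely in the APIs it consumes - which is exactly why the richness of the API set on offer matters so much.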

He’s also concerned that a large percentage of “thousands of tiny applications” on the various app stores are used once and never again, which must surely have implications for the possible user value over the long term. Thousands of small applications isn’t a problem; that’s essentially how all UNIX systems work, and they are the great survivors of IT. But a lack of real user value is.

This suggests to us that value exists primarily when highly specific problems are solved by adapting general-purpose tools - like the famous quickly-reconfigurable machine tools of Toyota. To put it another way, those tiny applications will be used and paid for in one particular field - the enterprise. Everything is a communications-enabled business process.

But one thing hardly anyone has discussed all morning is voice, the original and best of CEBPs.

To share this article easily, please click:

June 8, 2009

Ring! Ring! Hot News, 8th June, 2009

In Today’s Issue: Verizon launches cloud computing offering; APIs coming up this year; the problems of being cloudy; beware potential plutonium privacy problems; hackers claim to steal entire T-Mobile USA billing database; SingTel on Bharti/MTN: “In for a billion!”; Apple App Store to start subscriptions, volume pricing; Google’s cash for developers scheme; MIDs fail; WhyMAX femtocell; satellite TV for your car; problems of network-based DVRs; Carphone Warehouse demerger on the way; “Twitter phone” = SMS; Pre SDK coming right up - eerily similar to JIL; has the Pre chased O2 off the N97?; DARPA mesh networks; Sarin’s exes are a gas; wave of missed call fraud; Pirate Party gets elected; Angola lays fibre, but not that sort yet; BSNL reissues WiMAX tenders after corruption panic

Verizon launches the first telco cloud-computing service; at the moment their “Computing as a Service” offering is enterprise-focused, letting you run your applications or their applications in the cloud, but what’s this? Open APIs are promised some time this year, to match the existing device-centred ODI program.

“The API is available today, and we’re using it for our own user interface. But we’re looking for the right use cases; we’re still doing due diligence to make sure we understand what customers are looking for when they want to interface with the environment in that type of manner. I’d say by the end of this year we’d have a published API that customers could leverage.”

Speaking of clouds, here’s an interesting discussion of the problems of working in the cloud at Ned Batchelder’s blog.

We’ve said before that Verizon is proving to be remarkably permeable to Telco 2.0 ideas; with the cloud computing, open APIs, and open device projects pulling away from the station, the P4P research project in the works, and significant fibre deployment going on, that only leaves the frontier of subscriber data to check off the list. Wired has a good cautionary tale about increasing regulatory interest in behavioural ad programs at Google and elsewhere; remember not to mix up the potatoes and the plutonium.

And look what just happened; hackers claim to have stolen “everything” from T-Mobile USA. This could be the biggest carrier data loss in history, right up there with the Vodafone Greece incident in the annals of telco security disasters. Or it could be a hoax; read the blackmailers’ note here and make your own mind up. Of course, subscribers can take heart from the thought that it’s very unlikely that the data is well enough organised for them to do anything criminal with it without lots of time and effort.

The Bharti-MTN deal has become more likely; SingTel says it’s in for 30 per cent of Bharti, having gone down to 19 per cent in the past.

Meanwhile, in the Apple world, they’re gathering for a shindig; but the serious people want to know what is happening with the new pricing options for the App Store. Apple is expected to announce that they will soon provide the ability to charge subscriptions or usage-based fees as well as one-off sales; it’s almost like a telco billing system, in a way.

Google, meanwhile, is taking the shortest way to attract developers to code for Android; give them some money, as Milton Friedman said about the poor. Specifically, a rich set of prizes are offered for the best applications. Relatedly, it’s not looking good for so-called MIDs (Mobile Internet Devices); these are a device class that Intel essentially invented a few years ago, at least in part as a target market for WiMAX connectivity, which fit in somewhere between a smartphone and a laptop. Unfortunately, fitting in between a 2005 smartphone and a laptop of the same vintage is a strategy that makes less sense in 2009, with much less space between new smartphones and netbooks.

So we’re more than a little sceptical of a plan to make a WiMAX femtocell. Well, we suppose it might come in handy…perhaps. Similarly, is there really a demand for this new product from AT&T? “CruiseCast” is essentially satellite TV for your car, and although we have to recognise that nothing could be more American than the combination of a car and a television, it all sounds far too much like something that escaped from the bubble years. Especially as it costs $1300 to install and $28 a month.

A lot of carriers are interested in implementing DVR/STB/whatever functions in the network, thus saving on all those boxes and the problem of deploying them to the users. Telephony Online makes the very good point that, although this is a quick win and relatively cheap, it doesn’t do anything to solve the video problem; if anything it means even more video hammering the wires. And you don’t even get to offer the subscribers a shiny gadget.

Speaking of video and the broadband incentive problem, Carphone Warehouse held its dividend for this year, but it can’t get the TalkTalk DSL and multi-MVNO business demerged fast enough. No wonder.

Meanwhile, INQ plans to launch a “Twitter phone”; the obvious comment on this is that it’s not going to be hard, as Twitter messages are essentially SMS that gets logged on a Web site, so anything with basic GSM functionality and a Web browser can do it, which means essentially every device on the market. Sounds like a great way to sell really cheap devices; probably better than SUV TV in today’s economic climate.

According to Palm’s VP of sales, developers won’t have to wait much longer for the Pre SDK. It sounds remarkably similar to Vodafone, China Mobile, and Softbank’s JIL: the whole user interface is a Web browser and everything is an HTML/CSS/Javascript entity. Everyone seems to like the gadget. Rumours suggest that the lack of Nokia N97s at O2 is a signal that they have signed up the GSM/UMTS version of the Pre.

DARPA is investing in a new approach to military radio - they want to use large numbers of cheap radio nodes working together in a mesh network, rather than a few heavily engineered ones. This may well tell us quite a bit about future radio networks - especially as Mapesbury’s UKO1 is apparently using a mesh GSM network.

Special mention for Arun Sarin; in his last year at Vodafone he made £7.5m, but still needed £500,000 of expenses to relocate back to the US.

There’s been a wave of phone-related fraud in the UK, says regulator PhonePayPlus; apparently deliberately generating missed calls from a super-premium rate number is a common practice.

This weekend saw the European elections; and the Pirate Party responded to the Pirate Bay convictions by getting elected. Apparently they took over one-fifth of the youth vote; they are planning to join the Green caucus in Brussels (and of course Strasbourg).

Angola is laying fibre; international, submarine fibre, that is. And the story of the week: Indian state carrier BSNL is re-tendering for its planned WiMAX network, after the first lot of contracts went to “family members and confidants” of the telecoms minister. Whoops.

To share this article easily, please click:

June 4, 2009

“Social News” = “New Players Emerge”?

Here’s another attempt to map the future of media: How Our News Sources Changed in the Last 200 Years.

News sources changing over time

Predictions, as they say, are especially difficult about the future, and it’s certainly a brave one to forecast a future dominated by “social” and “targeted” news - two forms of media that don’t exist yet and that we don’t really know how to describe or define. But one of the interesting things here is how closely this maps to the progression through our scenarios in the Online Video Distribution market study.

We see the video market progressing rapidly away from its traditional set-up, into a chaotic period we called Pirate World, during which we expected a wave of creative destruction and wild experimentation, and eventually stabilising as powerful new players emerge. Here’s the original presentation that we introduced in this post:

Whatever “social news” might turn out to be, it sounds a lot like one of our “new players”; driven by aggregation, metadata, user experience and integrated delivery rather than by content or infrastructure ownership.

To share this article easily, please click:

Open APIs 2.0 - Unifying Commercial Framework Needed

Below is a summary analysis of the Open APIs 2.0 session at the May 2009 Telco 2.0 Executive Brainstorm which, for the first time, gathered together leaders of the major telco API programmes - GSMA, TM Forum, MEF Smart Pipes, OMTP BONDI, Orange Partners, Alcatel-Lucent - and some of their potential users (BBC, Yahoo, Amazon etc).

The premise we explored was this:

Platform-based 2-sided business models need APIs to enable upstream customers to use telco assets and processes. There are a huge variety of APIs and enablers being developed in the market by industry bodies (GSMA, TM Forum, OMTP, MEF), by individual operators (Vodafone Betavine, Orange Partner, O2 Litmus), and by ad hoc consortia (such as Vodafone, China Mobile, and Softbank’s JIL). But what is the commercial strategy that underpins these programmes? What needs to be done to ensure that APIs are valuable for upstream customers (developers, merchants, advertisers, and government) and profitable for operators?

Vote

Participants were asked near the end of the panel: Which of the following statements best reflects your views on the API efforts of the Telco industry?

  1. Individual operators and cross-industry bodies are getting things about right and current API programmes will yield significant value to the Telco industry in the next 3 - 5 years.
  2. Individual operators and cross industry bodies have made a good start with their developer and API programmes but more needs to be done to standardise approaches and to bring commercial thinking to the fore if APIs are going to generate significant value to the Telco industry in the next 3 - 5 years.
  3. The current developer and API activity by individual operators and cross industry bodies are totally inadequate and are unlikely to create value in the next 3 - 5 years.
Votes at Telco 2.0

Lessons learnt & next steps

APIs are a hot topic in the industry at present, and this lively session highlighted several things very clearly:

  1. There is a great deal of work being done on APIs by the operator and vendor community. There is a real sense of urgency in the industry to make a set of cross-operator/platform/bearer/device APIs available to developers quickly;
  2. There is a real risk of this API activity being derailed by the emergence of numerous independent “islands” of APIs and developer programmes. It is not uncommon for operators to have three or more separate initiatives around “openness” - in the network, on handsets or on home fixed-broadband devices, in the billing system and so on. Various industry bodies have taken prominent roles, usually at the level of setting requirements rather than developing detailed standards;
  3. As Thomas Howe, CEO of Jaduka, put it: “Standards aren’t something we have to wait for! In the web sphere standards were something we did which worked so well that everyone said ‘that’s the standard’ and started using it. This is what happened with AJAX”;
  4. It is still extremely early days for the commercial model for APIs. This is an area that the Telco 2.0 Initiative is concentrating hard on at present. It is already becoming apparent that a one-size-fits-all solution will be difficult. In line with the previous discussion about piloting Telco 2.0 services, it is important for operators to ensure that API platforms (and the associated revenue mechanisms) can serve two distinct classes of user/customer:
      • Broad adoption by thousands/millions of developers via automated web interfaces (similar to signing up for Google AdWords or Amazon’s cloud storage & computing services);
      • Large-scale one-off projects and collaborations, which may require custom or bespoke capabilities (e.g. linked to subscriber data management systems or “semi-closed” / “private” APIs), for example with governments or major media companies;
  5. It seems that certain sets of APIs are quite standalone and perhaps have simpler monetisation models - e.g. location lookups or well-defined authentication tasks. Others, such as granting third-party access to specific “cuts” of subscriber data, may be more difficult to automate.
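The simpler “standalone” monetisation model for capabilities like location lookup can be sketched as a pay-per-call meter wrapped around the API. The price, keys, and names here are illustrative assumptions, not any operator’s actual tariff or platform:

```python
# Sketch of pay-per-use monetisation for a standalone telco API.
# Prices, keys, and the backend are invented for illustration.

PRICE_PER_LOOKUP = 0.01  # assumed tariff, currency units per call

class MeteredApi:
    def __init__(self, backend):
        self.backend = backend
        self.usage = {}  # developer_key -> number of calls made

    def lookup(self, developer_key, msisdn):
        # Meter the call, then delegate to the underlying capability.
        self.usage[developer_key] = self.usage.get(developer_key, 0) + 1
        return self.backend(msisdn)

    def invoice(self, developer_key):
        # Usage-based billing: calls made times the per-call price.
        return self.usage.get(developer_key, 0) * PRICE_PER_LOOKUP

api = MeteredApi(lambda msisdn: {"cell": "12345"})  # dummy location backend
for _ in range(3):
    api.lookup("dev-abc", "+447700900123")
bill = api.invoice("dev-abc")  # three calls at the per-call price
```

Note how little commercial machinery a well-defined capability needs, compared with negotiating access to “cuts” of subscriber data, where per-call pricing would be much harder to automate.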

The fireworks between various panellists also illustrated an important point - there remains considerable tension between those advocating business models which are ‘content’-driven, involving the delivery of packaged entertainment and information to consumers and enterprise customers, and those which are aimed at facilitating large numbers of new and (mostly) unknown developers who may use the platform to create ‘the next big thing’. Both business models have merit - while there is certainly value in using packaged approaches like “sender-pays data” for well-defined content types, there is also huge potential in becoming the platform of choice for unexpected ‘viral’ web applications that exploit unique Telco assets.

Dean Bubley, Telco 2.0: “I can’t believe that people in this room are still referring to their future customers as ‘OTT Players’ which is as derogatory as calling Telcos ‘pipe salesmen’, or ‘under the floor players’. Unless you show some respect to these companies, do you really think they will prefer to do business with you, rather than destroy you?”

In the short term, work needs to continue on developing the API platform, but also on evolving the attitudes and processes within the operator to support successful future business models:

• Avoid pre-conceptions about the commercial model for APIs. In particular, revenue shares and flat-rate percentage commissions are extremely difficult to justify, except for the most commoditised capabilities like payment, or for large-scale individually-negotiated contracts;
• Develop thinking around the commercial model for APIs, as getting this right will drive the success of existing industry-wide API initiatives - these technical programmes will fail without input from strategists and marketers on the required frameworks;
• Most operators are undergoing major programmes of transformation - e.g. around outsourcing or IP network deployment. It is critical that these programmes are constantly reviewed for fit against API-type initiatives, to ensure they ease the creation of APIs rather than create new bottlenecks or structural silos;
• Recognise that individual propositions about openness often make sense when viewed in isolation - but they need to be seen in a wider strategic context, including all interface points between the operator domain and the Internet/apps world;
• Non-handset specialists should make an effort to understand the implications of OMTP’s BONDI, as it can support a broad set of innovative applications and business models - and may well also appeal to third-party developers;
• Be aware that many developers will not want to have dozens of separate relationships with individual operators - do not force them to duplicate effort. Instead, work with industry-wide groups to address their core needs;
• Develop a checklist of open API “hygiene factors” that are critical for developers, such as easy app-testing mechanisms, transparency in application approval/signing, clear API pricing and so forth;
• Consider “eating your own dogfood” and use elements of third-party web services and APIs as part of your own offering, at least in the early stages. In particular, this could reduce time-to-market and enhance flexibility.

Here is one of the stimulus presentations, from Karl Bream of Alcatel-Lucent:

Event Participants: can access the full presentation transcripts, delegate feedback, and long-term recommendations for action at the event download page (NB. the URL has been emailed to you directly as part of your package).

Executive Briefing Service Members: can do so in a special Executive Briefing here. Non-Members: see here to subscribe.

To share this article easily, please click:

Videos from Telco 2.0 event

The good people from TelecomTV have now published the videos from this spring’s Telco 2.0 Executive Brainstorm. Below is the opening interview with the Chairman of the TM Forum and the CEO of the Telco 2.0 Initiative:

To share this article easily, please click:

June 3, 2009

Subscriber Data: The smarter way to use it

[Ed - This is a guest article by Telco 2.0 ally Paul Magelli, Head of Subscriber Data Management at Nokia Siemens Networks, on customer data - a topic we’ve been investigating since 2006, notably here, on the difference between potatoes and plutonium, on Move Networks, OpenID, Skydeck and Yahoo!.]

By 2015, something like 5 billion people will be connected around the world, applications will predominate on the internet, broadband will be everywhere, and a multitude of business models will exist. For communications service providers (CSPs) to survive and thrive in this environment, a sharp focus on the individual needs of subscribers will be vital.

CSPs, particularly mobile operators, already know a lot about their subscribers. They know not just who their customers are and what their service profiles are, but also how much they spend on services, when and where they spend, and much, much more. Few other industries have such detailed insight into their customers’ behavior.

Using this information efficiently and effectively will be the ultimate enabler of success.

Breaking down the legacy structure

The trouble is, all this data is often locked away in fragmented databases and on legacy data management platforms. Because of the way networks have evolved over the years, subscriber data is held across many disparate applications, each storing its own copy. A typical CSP today can have a hundred or more separate databases running its network, and the number is growing at 10 to 20 percent a year.

Continuing with such a silo structure is unsustainable because adding applications simply escalates data duplication and network complexity until the whole thing becomes practically unmanageable. Not only is the cost of acquiring the data and managing it high, but the sheer complexity inhibits the CSP from making proper use of this superb asset.

A recent survey of senior industry managers by Nokia Siemens Networks showed that over half the respondents cannot analyze customer behavior using their existing customer data infrastructure, while almost as many complained that data is not analyzed fast enough. 83 percent of CSPs say that real-time subscriber data is critical to improving the subscriber experience; yet only 14 percent of CSPs have real-time data analysis available to them.

If only CSPs could consolidate all the rich subscriber profile data they have, they would be well placed to take full advantage of their trusted relationships with subscribers.

That’s where a unified real-time Subscriber Data Management (SDM) solution, with totally open interfaces and complete reliability, comes in.

The benefits of such an SDM solution, where subscriber data is consolidated into a single database and accessed in real time by multiple data-less applications, translate into reduced costs, both operational and capital, better consistency of customer experience, and faster time to market for new services.
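The consolidation idea above can be sketched in a few lines. This is a purely illustrative toy, not any real SDM product API: several application "silos", each holding its own partial copy of subscriber data, are merged into one authoritative store that stateless ("data-less") front-end applications query through a single read path. All class, silo and field names here are invented for the example.

```python
class UnifiedSubscriberStore:
    """Toy model of a consolidated subscriber database."""

    def __init__(self):
        self._profiles = {}  # subscriber_id -> consolidated profile

    def consolidate(self, silos):
        """Merge per-application databases into one profile per subscriber."""
        for silo_name, records in silos.items():
            for sub_id, fields in records.items():
                profile = self._profiles.setdefault(sub_id, {})
                profile.update(fields)  # later silos win on field conflicts

    def lookup(self, sub_id):
        """Single real-time read path shared by all data-less applications."""
        return self._profiles.get(sub_id, {})


# Three legacy silos, each storing its own fragment of the same subscriber.
silos = {
    "hlr":     {"447700900001": {"imsi": "234150000000001", "roaming": True}},
    "billing": {"447700900001": {"tariff": "prepaid", "monthly_spend": 22.50}},
    "crm":     {"447700900001": {"name": "A. Subscriber", "segment": "youth"}},
}

store = UnifiedSubscriberStore()
store.consolidate(silos)
profile = store.lookup("447700900001")  # one profile, six fields, one query
```

The point of the sketch is the shape, not the code: every application reads the same consolidated profile instead of maintaining its own fragment.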

Supporting the two-sided business model

Better still, if CSPs can leverage this subscriber data in real time, they can not only improve their own service offerings but also create completely new revenue streams, supporting third parties in numerous industry sectors as they improve their offerings to their customers. With such a business transformation, more value-added services can be provided to banks, call centers and others, on top of packaging minutes, messages and megabytes with their own products.

[Slide from Paul Magelli’s Telco 2.0 Nice presentation]

Imagine a bank customer who is travelling abroad and attempts to use his credit card to make a purchase, but finds it is blocked. In these days of frequent fraud, banks are rightly sensitive about protecting their customers. But had the bank been able to access the subscriber’s roaming data, it would have known where he was and that it really was him attempting to use the card, and it would not have cancelled the card. Not only would the customer enjoy a better service experience, but the bank would save the cost of issuing a new card and a month’s lost income.
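The card-blocking example above boils down to a one-line decision rule. Here is a minimal sketch, assuming the subscriber has opted in to sharing roaming location with his bank; the function and field names are invented for illustration, and a real deployment would of course involve a proper API between bank and CSP.

```python
def authorise_transaction(txn_country, roaming_record):
    """Approve a foreign card transaction if the cardholder's phone is
    roaming in the same country; otherwise flag it for manual review."""
    if roaming_record is None:
        return "review"   # no location signal - fall back to manual review
    if roaming_record["country"] == txn_country:
        return "approve"  # phone and card are in the same place
    return "review"


# The subscriber is roaming in Spain and tries to pay in Spain.
roaming = {"subscriber": "447700900001", "country": "ES"}
print(authorise_transaction("ES", roaming))  # approve
print(authorise_transaction("FR", roaming))  # review
```

Note the design choice: a location mismatch triggers review rather than outright refusal, since the roaming signal is corroborating evidence, not proof.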

Or consider the case of a roaming mobile subscriber also travelling abroad. He gets a sales call from his mobile CSP offering him a new data tariff. He is not happy because he is paying roaming charges to receive the call. Even the CSP seems unable to use its own subscriber data to recognize what a bad idea such a call is.

Such examples are happening all the time today - in fact, they demonstrate that some of the first opportunities that validate the business case for Telco 2.0 are right here inside the telco itself, in improving its own operations, product development, marketing, and customer care. This can help to reduce costs, optimise CRM, and enable rapid service creation and deployment, even before we begin to draw in upstream customers, integrate new forms of content delivery, and extend our activities into the enterprise.

With this consolidated overview of their subscriber profile data, plus real-time insight into customer behavior, CSPs can bring value to their subscribers, their own business and third party companies.

Pushing data use even further

More business potential lies in identity management. The Nokia Siemens Networks survey reveals that 64 percent of CSPs see identity management and managing multiple subscriber identities as a key issue. Identity management controls the provisioning, sign-on, linking, and federation of subscriber identity data to Internet services.

As online business continues to grow, the issue of identity becomes increasingly thorny. Passwords proliferate and security details are requested by an ever growing number of service providers and content owners. New services, such as cloud computing, make the protection of personal details even more complex.

The issue can be difficult for consumers, but it is a superb business opportunity for CSPs. And with many subscribers continuing to access Web 2.0 applications via PCs over fixed lines, this is one area that can benefit fixed-line providers as much as mobile CSPs.

For subscribers, identity management means an end to the ever-growing numbers of passwords and logins, since they can be identified by their mobile device or IP address. With the CSP acting as a trusted partner and gateway to online services, all their online activities can be authenticated by a single device certificate or password.
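The CSP-as-identity-gateway idea can be sketched as follows. This is a deliberately naive toy, with invented names throughout: a third-party service delegates login to the CSP, which vouches for the subscriber on the strength of a device certificate it already holds. A real federation would use a protocol such as OpenID or SAML rather than the bare hash comparison shown here.

```python
import hashlib

# The CSP's record of device-certificate fingerprints, one per subscriber.
CSP_DEVICE_CERTS = {
    "447700900001": hashlib.sha256(b"device-cert-blob").hexdigest(),
}


def csp_authenticate(sub_id, presented_cert):
    """CSP-side check: does the presented certificate match the one on file?"""
    expected = CSP_DEVICE_CERTS.get(sub_id)
    if expected is None:
        return False
    return hashlib.sha256(presented_cert).hexdigest() == expected


def service_login(sub_id, presented_cert):
    """A third-party service delegates login to the CSP instead of
    keeping its own password database."""
    return "welcome" if csp_authenticate(sub_id, presented_cert) else "denied"


print(service_login("447700900001", b"device-cert-blob"))  # welcome
print(service_login("447700900001", b"stolen-cert"))       # denied
```

The subscriber authenticates once, to the CSP; every federated service then inherits that authentication instead of issuing yet another password.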

Another key to becoming truly subscriber-centric is to ensure data privacy. Users should be able to control their own data exposure, and permissions-based use of data should be rigorously adhered to. If users properly understand the benefits to them - a better service experience - and are given appropriate incentives, they can choose with whom to share their data and for what purposes.
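Permissions-based use of data amounts to a simple gate in front of every release. A minimal sketch, with hypothetical names: each third-party request for a subscriber attribute is checked against that subscriber’s own opt-in choices, and anything not explicitly permitted is refused by default.

```python
consents = {
    # subscriber -> {data category: set of parties allowed to see it}
    "447700900001": {
        "location": {"bank"},  # opted in for fraud checks only
        "spend":    set(),     # shared with no one
    },
}


def release(sub_id, category, requester):
    """Release a data category only if the subscriber has opted in for
    this requester; default to refusal."""
    allowed = consents.get(sub_id, {}).get(category, set())
    return requester in allowed


print(release("447700900001", "location", "bank"))        # True
print(release("447700900001", "location", "advertiser"))  # False
print(release("447700900001", "spend", "bank"))           # False
```

The deny-by-default shape is the point: an unknown subscriber, category or requester yields a refusal rather than an exception or an accidental disclosure.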

The CSP must also take into account local regulatory constraints as well as local cultural influences. Different populations accept different levels of exposure of their personal data. For example, in some countries salaries are made available on websites as a matter of course, something which would be completely unacceptable in other countries.

The possibilities for growth and efficiency that unlocking the potential of subscriber data enables are almost limitless. So, let’s get working on it.

Find out more about Subscriber Data Management in this podcast at Nokia Siemens Networks, in which Paul Magelli, head of the Subscriber Data Management unit, discusses how to make smarter use of subscriber data.


June 1, 2009

Ring! Ring! Hot News, 1st June, 2009

In Today’s Issue: MTN-Bharti back on in complex merger; Spanish regulators see 45% FTTH; layer-zero openness is key; fibre diet very good for Xfone; NBN numbers; Aussies row back on filtering; KCOM - managed services as a managed service; Time Warner gets out of AOL; Nortel gets out of LG-Nortel; Vodafone offers 25MB for £5 but not 5MB for £1; not the best start for Ovi Store; data pressure on at AT&T; AT&T to offer Android, Pre; Spotify for mobile; virtualisation for mobile; Facebook “valued” at “$10bn”; Orascom profits slide; Telfort fined; Vimpelcom loses money, gains subscribers; Zain Iraq fined; Wataniya Palestine appeals to Blair; Viettel buys 2,000 cellsites from Huawei; Renesys on the cybersecurity report; MetaSwitch claims softswitch leadership; decisions coming on UK universal service

The MTN-Bharti emerging-market supermerger is back on, after the two carriers agreed to negotiate exclusively with each other until the end of the year. The combined beast would be the third-biggest operator in the world after Vodafone and China Mobile; in a deal which could be fairly described as complicated, Bharti would increase its stake in MTN to 49%, while MTN took a 36% stake in Bharti.

However, just in case this might be suspected of simplicity, 11% of Bharti would then be passed on to MTN shareholders as part of the payment, in a mixture of cash and shares, for Bharti’s new stake in MTN. And MTN, meanwhile, would pay for its stake in Bharti with a mixture of cash and MTN shares. Clearly.

No wonder the big question in the talks will be who gets to control the combined company.

In Spain, and in fixed, meanwhile, the regulator, CMT, reckons almost half of Spanish subscribers will be on FTTH by 2023 - to be precise, 43 to 46 per cent of subscribers. There is interesting stuff in the report; CMT estimates that there may be as many as three alternative fibre operators competing with Telefonica by then, probably concentrated in Barcelona and Madrid, with at least one alternative operator pushing into smaller towns. They reckon that such operators would recoup their investment in 9 years in the cities.

The secret sauce is layer zero openness; the report’s authors worked from the assumption that all the alternative operators would use Telefonica’s ducts and poles. Accordingly, the CMT has decided to keep an existing obligation to provide duct access at a price equal to that charged for Telefonica’s own fibre deployment in place.

Fibre in your diet is good for you; Tim Poulus points to Xfone’s results, which show monthly ARPU of $75 for DSL and $175 for FTTH customers, with a churn rate of 1.7%.

In Australia, there are numbers from telecoms minister Stephen Conroy on the National Broadband Network; in a parliamentary committee hearing, he said that the government expects the NBN to have a 50/50 split between debt and equity, that the central government would hold 51% of the equity, and that this would mean an upfront investment of A$11bn - so the total bill would come to about A$40bn. He also rowed back on plans to institute mandatory filtering of the Internet.
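Conroy’s figures hang together, as a quick sanity check shows: a roughly A$40bn project funded 50/50 with debt and equity, with the central government holding 51% of the equity, implies an upfront government stake of about A$10bn - consistent with the A$11bn quoted.

```python
total_cost = 40e9           # A$, approximate total build cost
equity = total_cost * 0.5   # 50/50 debt/equity split
govt_stake = equity * 0.51  # central government holds 51% of the equity

print(round(govt_stake / 1e9, 1))  # 10.2 (A$bn)
```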

KCOM, the operator of Hull’s municipal phone network and a one-time dotcom darling, is outsourcing its national managed-services infrastructure to BT. Is this the first known case of managed services as a managed service?

In other .com nostalgia news, Time Warner is getting rid of AOL, spinning off the once-famous dialup operator into a standalone company again. It’s probably telling that they hired someone from Google to run the division shortly beforehand, although what you would actually do with it is another question. Nortel is also getting out of an old joint venture this week, putting its interest in LG-Nortel on the block.

Vodafone trumpeted its suspension of roaming charges in Europe this summer, but said remarkably little about data roaming. But then, who would? It seems, however, that there is a new tariff for data as well, which divides the world into three parts and superimposes a distinction between phones and dongles/laptops/everything else. £5 gets you 25MB of data; why they didn’t say that 5MB costs £1, we don’t know…

Anyway, app stores. Again. This week saw the launch of Nokia’s Ovi Store, and whilst it didn’t quite topple sideways from the pad and explode in a giant cloud of red smoke, it wasn’t Apollo 11 either. Reviewers were pleased by the range of software - 3G Skype clients, for example - that Apple would never have accepted, but disappointed by the user experience on some devices and the worryingly frequent website outages. Whoops.

AT&T CEO Randall Stephenson says: the mobile backhaul capacity crunch is coming. In response to the iPhone’s demands, they are planning to roll out HSPA rapidly; they are also planning to ship Android devices and the Palm Pre, in order to keep those pipes filled. It’s a curious feature of the industry; on the one hand, everyone talks about measuring and throttling and value-based pricing, but on the other, as soon as a new link is installed, the business imperative is to price the service to go.

Here’s something to make the backhaul links run hot; mobile Spotify, for Google Android devices. An interesting detail; rather than stream music all the time over a mobile network, this version of the service can cache a playlist locally. Which is sensible, but rather defeats the idea that they could sell you music without letting you have a copy…

Scary tech development: virtualisation for mobile phones. VMWare are working on it; they like the idea of the same gadget being able to run Android, LiMo, S60, or whatever, and corporate sysadmins being able to have their own secret way in to their fleet of devices in order to check the doors are locked.

Meanwhile, Facebook valued at $10bn. Well, that’s more like “Facebook” “valued” at “$10bn”; what has actually happened is that a Russian investor has offered $200m for 1.96% of the company. Hence, some quick multiplication sums, and you have your headline.

Telco people who remember the late 90s and early 00s will of course recognise an old trick; you swap a given amount of capacity on your empty or unbuilt cable with a competitor, at a price which is silly but doesn’t matter because no cash changes hands. This allows you to value the project based on that price, and why not all the others too? Hallelujah! Riches!

Then, of course, you end up in jail like Bernie Ebbers and the Indian PTT ends up owning your network. It’s not exactly the same, but it is the same sort of idea - get a silly valuation for an insignificant percentage of the company and apply it to the whole thing. What’s in it for the Russians? Well, Fbook shares aren’t easy to come by, unless you do what they are doing and buy them from employees. The company doesn’t have to be worth $10bn, after all; they just have to be able to sell the 1.96% for more than $200m to turn a profit.
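The “quick multiplication sums” behind the headline are simply scaling the price paid for a 1.96% stake up to 100% of the company:

```python
stake_price = 200e6      # $200m paid by the investor
stake_fraction = 0.0196  # for 1.96% of the company

implied_valuation = stake_price / stake_fraction
print(round(implied_valuation / 1e9, 1))  # 10.2 ($bn)
```

Hence a $200m cheque becomes a “$10bn valuation”, without anyone ever having to find a buyer for the other 98.04%.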

On that grim note, Orascom saw its profits slide by 66%. They are blaming the dollar exchange rate. KPN’s MVNO-farm, Telfort, has been hit with a regulatory fine for not fully using its spectrum allocation. Vimpelcom lost 8.5 billion roubles despite adding 1.7 million subs.

And Iraq has slapped Zain with an $18m fine for what is simply described as “bad service”. The other two networks also copped smaller penalties. Imagine that - not only do you have to contend with suicide bombers, day-long power cuts, US electronic warfare, and gun battles between rival gangs of policemen, but you’re still subject to ill-defined regulatory fines.

It could be worse, though: you could be Wataniya Palestine, whose financial close is held up by a delay in getting their spectrum allocation sorted out with the Israeli government. Unfortunately, they are appealing to none other than Tony Blair to intervene. Good luck with that one.

In case you were wondering where all the Huawei network rollouts were, here’s one: Viettel has ordered a 2,000 Node-B UMTS/HSPA network from Huawei - though perhaps that comes naturally when the owner is the Vietnamese Army.

Renesys comments on the US Government’s new cybersecurity white paper; MetaSwitch claims primacy in IP softswitch deployments, mentions that it provides “core VoIP infrastructure” and “innovative IMS solutions”, doesn’t say how those numbers break down between them; and a decision is expected on the spectrum implications of universal broadband in the UK.
