How do we ensure that exponential increases in demand for bandwidth continue to be met, both today and tomorrow? What hurdles must be overcome in the race to deploy ultra-high-speed networks in the face of a less than favorable economic climate? Reflection on Europe provides fertile ground for debate over some of the more delicate issues, which, at their heart, revolve around new approaches to network management and a more pragmatic reading of the network neutrality principle.
Global Internet traffic is well on track to quintuple over the next four years, a surge that could see monthly volumes approaching 60 exabytes (one exabyte being one million terabytes). At the same time, the mobile Internet is expanding at an even faster rate: a proliferation of 4G-equipped devices and smartphones ensures no slowdown in the already exponential growth of wireless traffic. Even so, predictions are that by 2015, 90% of all traffic will still be carried over traditional wires.
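The arithmetic behind these projections can be checked directly. A fivefold rise over four years implies a compound annual growth rate of roughly 50%; the baseline of about 12 exabytes per month below is not stated in the text but is inferred from the cited 60-exabyte endpoint.

```python
# Implied compound annual growth rate (CAGR) for traffic that
# quintuples over four years. The ~12 EB/month baseline is an
# assumption inferred from the ~60 EB/month figure cited above.
baseline_eb_per_month = 60 / 5            # ~12 EB/month today (inferred)
cagr = 5 ** (1 / 4) - 1                   # fivefold growth over 4 years

print(f"Implied annual growth: {cagr:.1%}")
for year in range(5):
    traffic = baseline_eb_per_month * (1 + cagr) ** year
    print(f"Year {year}: {traffic:.1f} EB/month")
```

Run as written, the sketch shows an implied growth rate of roughly 49.5% per year, with traffic reaching the projected 60 EB/month in year four.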
A flood of multimedia content can claim a large share of the responsibility for this acceleration of traffic: it represents 40% of household consumption today, with predictions of a rise to around 60% by 2015.
As daily usage habits change, so does the role of streaming sites such as YouTube and of the ever increasing range of video "on demand" options, whether for purchase, for rental, or designed to be shared within a regulated framework. Smart TVs have transformed viewers into participants, while webcams have already become an integral part of the Internet experience.
Additional factors will also contribute to growth. At the cutting edge of health care and education, new network applications follow one after another. Governments can now provide public services through online portals, and site security is increasingly dependent on reliable networks.
The introduction of new applications and services is placing increased pressure on already scarce bandwidth resources. The widespread adoption of HDTV, with ever sharper image quality, along with more recent developments in 3D technology, has made the equation even more complex. Cloud computing services have been rolled out to store information in remote server farms and rely on high-speed broadband to provide instant worldwide access to data (music and video included) across a range of devices.
Networks are reaching saturation as they are asked to perform feats for which they were never intended, and our creaking system of copper wires can deliver speeds of little more than 1Mbps (megabit per second), or at best around 15Mbps. One solution to the problem is the fiber-to-the-home approach: laying new networks of optical fiber to assure speeds that range from 100Mbps up to, for some applications, 250Mbps.
Without forward-thinking solutions to network infrastructure, economic growth will be stifled and competitiveness threatened. Moreover, consumers will find their choices limited and could be cut off from a wide range of digital entertainment options: television, online gaming, video access, etc.
Market forces are evolving inexorably, creating considerable potential for application and content providers; but to ensure continued growth, some delicate trade-offs need to be made, with stakes that are not only financial, as the example of Europe so aptly illustrates.
While across the EU mobile networks are experiencing phenomenal growth, the same cannot be said for their poor relations operating over a fixed-line architecture. Elsewhere progress has been more tangible. According to an OECD report published in December 2010, Japan and South Korea lead the way, with fiber-optic broadband penetration levels approaching one in every two households, while in Europe the average has stalled at a mere 5%. Within the region, differences are also significant: while the Netherlands boasts figures of 7-8%, laggards such as France remain below 1%.
In a targeted effort to address these gaps, the European Commission launched a digital agenda strategy to create a more integrated market for broadband and ultra-high-speed networks, along with improved standards for interoperability between a range of layers and applications. The Commission aims to guarantee minimum bandwidth of at least 30Mbps for all Europeans, and to provide more than 50% of households with connections above 100Mbps, by 2020.
As part of the Commission's commitment to achieving such ambitious targets, Neelie Kroes, European Commission Vice-President for the Digital Agenda, invited me, along with other stakeholders such as René Obermann, CEO of Deutsche Telekom, and Ben Verwaayen, CEO of Alcatel-Lucent, to contribute to the formulation of a coherent strategy. We actively sought and achieved consensus across all layers of the industry, from the major telecoms companies and those charged with building the infrastructure to the application and content providers that will play such an integral role in future development. In July 2011 we were able to present a list of 11 proposals intended to create a blueprint for Europe's digital future.
As of today, the proposals have been received with less enthusiasm than might reasonably have been expected. Yet time marches on and the industry continues to innovate. A point of contention exists between competing business models and over how to raise the levels of capital investment that large network infrastructure projects require. How do we ensure that exponential increases in the demand for ever more bandwidth will continue to be met? More broadly, how does Europe ensure that new ultra-high-speed networks are deployed at a rate capable of meeting the targets set by the digital agenda initiative?
It has been my privilege to occupy a central role at the heart of discussions among colleagues across the industry, most notably on questions of investment and the need for a shift in received wisdom on the subject. Current networks are largely the offspring of what were once state-dominated monopolies, and their economic underpinnings are more the result of public policy decisions than of market reality. In today's climate the logic has changed significantly, and the inherent tension between a receding state and increased competition is being exposed. Cracks are made more visible by the ongoing European debt crisis, and clearly the regulatory framework needs to change to create incentives and a level playing field for all. Solutions include, in particular, co-investment schemes and the pooling of efforts.
An agreement signed between France's two leading telecoms groups in October 2011 set the stage for a clear demonstration of the new model. France Telecom-Orange and Vivendi's SFR announced a partnership to deploy fiber broadband networks in less densely populated regions. Of the 11 million households to be covered by the two operators, 9.8 million had featured in both companies' separate deployment plans; the joint investment will enable fiber broadband to be delivered to these households over shared infrastructure, with SFR responsible for deployment to 2.3 million households and France Telecom-Orange to 7.5 million.
The agreement promises to adhere to existing regulations and is designed to ensure maximum consumer choice in terms of service. In accordance with the rules of France's telecoms regulator, ARCEP, other service providers must be granted equal access to the infrastructure, which can be accomplished either through leasing arrangements or through co-investment with either operator, ensuring long-term access to the completed network.
Elsewhere in Europe, a variety of ad hoc approaches have emerged to encourage the participation of local authorities in the creation of public-private partnerships. Telecom Italia signed a deal with the province of Trentino to provide a solution to the perennial problem of capital investment. Each side stands to benefit: the operator can make use of existing infrastructure to improve service with funds from local authorities, who retain a majority stake in the resulting company. At the end of a fixed term, the operator has the option to buy back shares from the provincial government and become the major shareholder. Amsterdam provides another illustration with the Citynet project, which has brought together local authorities, a specialist operator (Reggefiber), and five housing corporations to bring fiber into the homes of the city's residents.
Of course partnerships must be formed with an eye to local conditions. What works in an urban context of high population density such as Amsterdam may not be applicable in a rural setting such as Trentino. Nevertheless, all examples clearly demonstrate the power and flexibility of partnerships and their ability to create an array of solutions.
Another approach exists and could well represent a more significant break with the past: the development of new business models to draw value from the interaction between telecoms and service providers. The rise of a new range of services and applications has been made possible by, and owes a large part of its success to, massive capital investment in infrastructure by the telecoms. The demand for more "intelligent networks" promises to place even further strain on a system that is already feeling the pressure.
In the face of exponential increases in traffic something has to give and the telecoms are being forced to reevaluate their role as an essentially unpaid link between end users and service and application providers. The time has come for the latter to make a greater financial commitment to the new ecosystem of which they are both the primary beneficiaries as well as users.
One solution might be the introduction of a "two-sided" business model to create a virtuous circle, as it has done in other sectors of the economy where it has been applied for some time, credit cards being a case in point. Today, the market is a one-way street, with consumers paying the ISP. The "two-sided" model would create new clients in the form of content and application providers, through commercial agreements that leverage the telecom's existing assets, to provide higher levels of service through network optimization and an improved experience for the end user.
More concretely, content or application providers who would like to ensure that their customers receive the best possible service can improve quality through new "managed service" agreements being rolled out by the telecoms. These services, based on respect for openness and non-discrimination, would likely be of most benefit to specific niche markets or to those who must operate on a particularly grand scale.
These proposals, supported by industry players, are not contrary to the core principles of "network neutrality", so long as data-flow management takes place in a transparent and non-discriminatory way.
By no means is it our intention to question the freedom and extraordinary dynamism of the Internet. A more pragmatic approach to network management need not necessarily be viewed as a road to ruin or an obstacle to free expression but merely a way of ensuring the continued flow of data in an era where much of what we commonly call Web 2.0 relies on viral growth and content accumulated through user participation.
The basic "best effort" approach makes no explicit guarantees and insists that all traffic be treated equally. It is related to the notion of "dumb wires" and of an open Internet in which packets of data are transmitted from origin to endpoint without differentiation. This is not to say that interventions never occur: bandwidth settings are often adjusted to account for congestion at times of peak usage, providing optimized performance and stability across the entire network.
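The contrast between "best effort" delivery and a managed-service layer can be sketched in a few lines of code. This is a toy illustration only: the two-level priority scheme and the class names are assumptions for the sake of the example, not any operator's actual traffic-management policy.

```python
import heapq
from collections import deque

# Best effort: a single FIFO queue; every packet is treated equally,
# regardless of what it carries or who sent it.
def best_effort(payloads):
    q = deque(payloads)
    return [q.popleft() for _ in range(len(q))]

# Managed service (hypothetical): packets covered by a commercial
# agreement are dequeued ahead of best-effort traffic. The insertion
# index preserves FIFO order within each class.
PRIORITY = {"managed": 0, "best_effort": 1}

def managed(packets):
    heap = [(PRIORITY[cls], i, payload)
            for i, (cls, payload) in enumerate(packets)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

packets = [("best_effort", "email"),
           ("managed", "video-frame"),
           ("best_effort", "web-page")]
print(best_effort([p for _, p in packets]))  # arrival order preserved
print(managed(packets))                      # managed traffic served first
```

The point of the sketch is that the default queue need not disappear: the managed layer sits on top of it, which is why such agreements can remain compatible with a transparent, non-discriminatory baseline.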
In many ways the “best effort” approach may be viewed as a default setting that could be complemented by additional layers of managed services for application and content providers. The current crop of extraordinarily profitable Web giants would no doubt figure among the first to express interest in such added services.
These proposals would give network operators more room for maneuver and the flexibility to devise new solutions that correspond to prevailing needs. The process would remain transparent and non-discriminatory and would provide a wider range of options to consumers, who are demanding increasingly sophisticated digital services. The net-neutrality issue should be considered first and above all in the light of economic dynamism, rather than, as some argue, as a question of public freedom. Discrimination against particular content would, of course, be no more acceptable than a two-speed Internet that led to de facto discrimination based on the economic value of one set of data over another.

Let's trust actors on the Web to explore new methods of value creation through social network effects; and let's not forget that additional benefits could accrue as operators invest more in the underlying infrastructure, creating positive externalities for all. To conclude, if the argument over net neutrality is to reach any serious resolution, it must serve less as an obstacle to rational debate and more as a means by which we expand the horizons of what is possible.