AT&T recently launched the next phase of its Cloud Networking architecture — the AT&T User-Defined Network Cloud (UDNC) — at Mobile World Congress. Having often written about AT&T’s cloud initiatives, I thought I’d provide some context for the latest announcement and strategy, which builds upon our ongoing virtualization journey.
Where we have been
The first step in the journey was the virtualization of the Wide Area Network (WAN). AT&T introduced our MPLS Virtual Private Network (VPN) services years ago, and customers worldwide have adopted them to run their enterprise networks. We think of MPLS as the first wave of virtualized services. In essence, we virtualized the wide area network, creating a flexible, multi-tenant platform with such enterprise features as logical separation of customer traffic, class of service, and network security.
Where we are now
The IT industry is also well along the journey toward virtualization of computing. Whether you’re an enterprise with a private cloud, a service provider like AT&T with its Synaptic family of services, or among the many others in the larger Infrastructure-as-a-Service (IaaS) industry, virtualized computing has become an accepted practice. Why? One reason is that it drives efficiency in computing resources for a significant set of applications today. Much as MPLS virtualized the WAN, virtualized computing using public and/or private cloud solutions enables businesses to maximize their computing efficiency and realize more bang for the buck in their computing investments.
AT&T has been a leader in integrating these two areas of virtualization. In doing so, we have delivered initial proof points of our larger Networked Cloud vision. AT&T NetBond service is an area I have written about frequently, detailing how NetBond marries VPN services with cloud computing platforms to create an enterprise-class Virtual Private Cloud solution. The technology behind AT&T NetBond delivers upon the UDNC vision in several important ways:
- It opens the control plane of our network to computing providers, giving them management control of the network in conjunction with the computing resource to deliver integrated network solutions.
- It virtualizes the connectivity between the WAN and computing resources, dynamically assigning and flexing resources as a software-based service as user needs dictate.
- It is wholly API-driven, which has let us leverage the solution in a modular, open fashion to interoperate seamlessly with a multitude of cloud platforms (for example, AT&T, IBM, CSC, and Microsoft).
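To make the API-driven idea above concrete, here is a minimal sketch of what dynamically attaching a cloud platform to a VPN and flexing its bandwidth might look like in software. All names here (VpcConnector, attach, flex_bandwidth) are illustrative assumptions for this post, not AT&T's actual NetBond API.

```python
# Illustrative sketch only: a toy control-plane facade in the spirit of the
# NetBond description above. Names and methods are hypothetical.
from dataclasses import dataclass


@dataclass
class VpcConnection:
    """One virtualized link between an MPLS VPN and a cloud platform."""
    vpn_id: str
    cloud_provider: str
    bandwidth_mbps: int
    state: str = "active"


class VpcConnector:
    """Attaches cloud platforms to a VPN and flexes resources on demand,
    entirely through API calls rather than hardware changes."""

    def __init__(self):
        self._connections = {}

    def attach(self, vpn_id, cloud_provider, bandwidth_mbps):
        # Provision a new virtual connection as a software-based service.
        conn = VpcConnection(vpn_id, cloud_provider, bandwidth_mbps)
        self._connections[(vpn_id, cloud_provider)] = conn
        return conn

    def flex_bandwidth(self, vpn_id, cloud_provider, bandwidth_mbps):
        # Resize the connection dynamically as user needs dictate.
        conn = self._connections[(vpn_id, cloud_provider)]
        conn.bandwidth_mbps = bandwidth_mbps
        return conn


connector = VpcConnector()
conn = connector.attach("vpn-1001", "example-cloud", bandwidth_mbps=100)
conn = connector.flex_bandwidth("vpn-1001", "example-cloud", bandwidth_mbps=500)
print(conn.bandwidth_mbps)  # the connection now runs at 500 Mbps
```

The point of the sketch is the modularity: because everything is behind an API, the same calls could front any cloud platform.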
Where we are heading
And now, the next phase of cloud evolution kicks in as we create an application-aware network cloud comprising a rich set of virtualized services that can be flexibly applied on demand to meet user needs. This evolution will migrate functions that previously were delivered with “dedicated” hardware, like routers and firewalls, into network-resident software services. These services can be utilized dynamically at a customer level and function as network-based utilities that are scalable, on-demand, and manageable through APIs.
This conceptually extends what has been happening in computing for many years into the network fabric, creating similar advantages of speed, scale, and agility. Just as today’s businesses wonder whether they should buy that next dedicated computer or storage device, in the not-too-distant future they will ask why they wouldn’t procure on-demand network software services instead of a dedicated appliance that is neither as nimble nor as economical.
Ironically, these network-based software services will run on a cloud computing platform that is the future infrastructure of the network. In that way, they will leverage the benefits of cloud computing “pods” to flexibly deliver on-demand services in an agile, cost-effective manner, and will be offered as services of the global network.
This network function virtualization (NFV) moves network functions from hardware-based appliances into software platforms running in virtual machines. As a result, we can update network functions from almost anywhere and do it quickly without having to redeploy new hardware and software. We can dynamically reroute traffic, add capacity, and introduce new features through programmable, policy-based controllers. Much like the phases that came before it, this third wave of virtualization will evolve over a number of years. However, because this wave builds directly on the waves that preceded it, we expect it to advance quickly.
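A programmable, policy-based controller of the kind described above can be sketched in a few lines. This is a hedged illustration, not a real NFV platform: the class names, the single scaling policy, and the load threshold are all assumptions made for the example.

```python
# Illustrative sketch: a network function realized as software instances,
# scaled up or down by a declarative policy instead of new hardware.
# All names and thresholds here are hypothetical.

class VirtualFunction:
    """A network function (e.g. a firewall or router) running as software."""

    def __init__(self, kind):
        self.kind = kind
        self.instances = 1  # virtual machines currently running this function


class PolicyController:
    """Applies a simple capacity policy: no instance should carry more than
    max_load_per_instance units of offered load."""

    def __init__(self, max_load_per_instance=100):
        self.max_load = max_load_per_instance
        self.functions = {}

    def deploy(self, kind):
        # Instantiating a function is a software operation, not a truck roll.
        self.functions[kind] = VirtualFunction(kind)

    def apply_policy(self, kind, offered_load):
        # Scale to the smallest instance count that satisfies the policy
        # (ceiling division), never dropping below one instance.
        vf = self.functions[kind]
        vf.instances = max(1, -(-offered_load // self.max_load))
        return vf.instances


ctrl = PolicyController(max_load_per_instance=100)
ctrl.deploy("firewall")
print(ctrl.apply_policy("firewall", offered_load=250))  # 3 instances
print(ctrl.apply_policy("firewall", offered_load=80))   # back down to 1
```

The design choice worth noting is that capacity changes are expressed as policy evaluated in software, which is what lets the network flex in minutes rather than procurement cycles.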
We are excited about the possibilities this presents and the many leading-edge partners and customers who will be part of this next journey.
I welcome your thoughts and input on this next wave of virtualization.