10 Reasons Why Telcos Will Dominate Enterprise Cloud Computing

By Joe Weinman

November 3, 2008

New-economy icons like Google and Amazon, with Internet-speed innovation in their DNA, have announced a dizzying array of cloud computing services, and InformationWeek quoted Google CEO Eric Schmidt as saying that with the exception of security requirements, “there’s not that much difference between the enterprise cloud and the consumer cloud.” If that’s true, it shouldn’t be too difficult for a Google or Amazon to leverage a strong consumer franchise and initial success servicing, say, Facebook application start-ups such as Animoto Productions, and rapidly penetrate blue chip Fortune 500 enterprises.

But old economy stalwarts like telcos have made cloud computing announcements, too. Consider, for example, AT&T’s recently announced Synaptic Hosting service, utilizing its 38 global Internet datacenters.

However, in a battle between a company born in the 19th century versus a nimble new-millennium innovator at the top of its game, is there really any question as to who ultimately will service the enterprise market’s cloud needs?

Well, actually, there is. Because while there may not be much architectural difference between enterprise cloud services and consumer cloud services, there are dramatic differences between them in every other respect. These include not just security, but sales, service, support, scale, solutions, SLAs and so on. In fact, because the enterprise is so different, companies like Google have been trying a variety of approaches to make headway, such as pursuing and extending partnerships with Salesforce.com and IBM. As a predictor of likely success, one need look no further than studies like Fingerprint, which shows that Gmail — even after several years on the market, the $625 million acquisition of Postini and a value price of $50 per user per year (or free, for some institutions and small businesses) — has only a minor share of the enterprise e-mail market.

To understand why telecommunications companies have such a strong franchise in this market space, it will be helpful to define what a cloud service is. I define it as a CLOUD: Common, Location-independent, Online Utility provisioned on-Demand. Common (i.e., shared) in that it multiplexes demand from multiple customers and applications into a common pool of resources. Location-independent, because it shouldn’t matter where you are or where the service is. Online, in the sense that it is accessible over a network, as well as “not down.” A utility because it provides value and offers usage-sensitive pricing. And on-demand in that the ability to provision capacity or service should be as fast as possible to meet variable demand requirements, enhancing business agility and providing capacity at the lowest total cost.
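The "utility" and "on-demand" aspects of this definition have a simple economic consequence that a back-of-envelope sketch can make concrete: when demand is spiky, paying per unit of usage can cost less than owning enough capacity for the peak. The following Python sketch uses entirely hypothetical prices and demand figures, purely for illustration.

```python
# Illustrative sketch: with variable demand, usage-sensitive (utility)
# pricing can beat owning peak capacity. All figures are hypothetical.

def owned_cost(peak_demand_units, unit_cost_per_month):
    # Owned infrastructure must be sized for peak demand and is paid for
    # whether or not it is used.
    return peak_demand_units * unit_cost_per_month

def cloud_cost(hourly_demand, unit_cost_per_hour):
    # Utility pricing charges only for capacity actually consumed.
    return sum(hourly_demand) * unit_cost_per_hour

# A spiky workload: 10 units most of a 720-hour month, 100 units for one day.
hours_in_month = 720
demand = [10] * (hours_in_month - 24) + [100] * 24

owned = owned_cost(peak_demand_units=100, unit_cost_per_month=50)
cloud = cloud_cost(demand, unit_cost_per_hour=0.10)

print(f"owned: ${owned:,.0f}  cloud: ${cloud:,.0f}")
# With these invented numbers, owning peak capacity costs $5,000/month
# while utility pricing costs $936 — the gap grows with demand variability.
```

The flatter the demand curve, the smaller the advantage; for perfectly steady demand, dedicated capacity can win. That sensitivity to variability is exactly why on-demand provisioning is central to the definition.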

Under this definition, not only can computing be cloud-based, but so can be storage, security, audio conferencing, video conferencing, Web conferencing, messaging, collaboration, software as a service and so forth. In fact, cloud services have been around since well before today’s latest networked IT architectures and business models. Hotel chains are cloud services: they time- and space-division multiplex guests traveling as individuals and in groups, on vacation or business, into dynamically allocated units of capacity (rooms). They are location-independent, in that no matter what city you are in, you are likely to find a service node (a local hotel from the chain). They are online, accessible over wide-area highways and local-area hallways. They also are utilities (pay per room per night). And they are available on-demand (although reservations are recommended during peak season).

Large, global, integrated service providers (aka “telcos”), leaders in global networking and hosting, have a compelling value proposition to enterprise customers for such services, which inherently are net-sourced IT. Not only can such providers offer networking, hosting and application management services, they also can take advantage of the evolution of cloud services, creating an interoperable, integrated and “platformized” set of capabilities: compute and storage infrastructure; voice, data and video conferencing; and horizontal productivity-, enterprise- and vertical-focused applications.

In fact, such providers have 10 major strategic advantages in this market:

(1) Enterprise sales capability — Telcos have a long history of selling to enterprises as well as consumers. For example, AT&T had annual revenues of $119 billion in 2007 — more than either IBM or HP — and roughly half of those revenues came from businesses. Unlike consumers or start-ups, enterprise CIOs do not want to go online to initiate and manage a relationship. They want dedicated account teams collaborating closely with them and their teams for the long term, in many cases with a permanent on-site presence. Some might argue that a major business model transformation is underway. After all, who needs an enterprise sales force when employees can just use their credit cards to provision services?

This is unlikely to happen in the enterprise for three reasons. First, most enterprises have tight controls on purchasing that extend to $10 worth of business cards, much less buying online computing and storage capacity. Second, no corporate information security officer is likely to appreciate the idea of tens of thousands of employees purchasing cloud services and placing proprietary corporate data willy-nilly across providers and platforms. Third, enterprise IT shops already have experienced the chaos and hidden costs associated with loss of control of applications, desktop images, and foundation architecture in departmental computing and rich desktop environments, and thus are not likely to support a model of individual purchases of cloud capacity and services. If the enterprise wants to avail itself of the benefits of the cloud, credit card purchasing is not the way to go.

(2) Lifecycle service and support — It’s not just sales, but also after-sales service and support, including: lifecycle management teams ensuring successful service delivery 24/7; advanced tooling for service monitoring and management; portals for network and application performance, usage monitoring and configuration and provisioning changes; and even e-bonding between enterprise systems and service provider systems.

(3) Reliable operations at scale — Rather than leaving services in "Preview Release" or permanent "beta" purgatory for years to avoid any implied commitment to service reliability or feature stability, service providers put offerings through a comprehensive suite of pre-launch interoperability, certification, and scalability engineering and testing. In fact, telcos are used to engineering services for four or five nines of availability, even as they scale up to tens of millions of customers. This reliability at scale is in telcos' DNA and service culture, as well as in regulatory requirements. Imagine a trauma victim calling 911 and getting a pre-recorded message saying, "Your call did not go through — but, hey, we're still in beta." It isn't clear that a new economy culture of random innovation is compatible with a culture of continuous delivery of the same service to tens of millions of customers day after day.
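It is worth pausing on what "four or five nines" actually permits. The arithmetic is simple enough to sketch: four nines allows roughly 53 minutes of downtime per year, five nines only about 5 minutes.

```python
# Back-of-envelope: allowable downtime per year at a given availability.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability):
    """Minutes of permitted downtime per year at a given availability level."""
    return (1 - availability) * MINUTES_PER_YEAR

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines: {downtime_minutes_per_year(availability):.1f} min/year")
# 3 nines: 525.6 min/year  (~8.8 hours)
# 4 nines: 52.6 min/year
# 5 nines: 5.3 min/year
```

A permanent-beta service that takes an afternoon outage once a year has already blown through a four-nines budget several times over.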

(4) SLAs with financial penalties — Not only won’t enterprises accept “Well, after all, it’s still in beta” as an excuse for service outages, they demand meaningful SLAs (service level agreements) with clear metrics for evaluating achievement of those SLAs, backed up by monitoring and management systems, and financial penalties such as credits or refunds if service levels aren’t met. A “free” or low-cost service with questionable delivery quality is about as attractive to a CIO as an offer of free neurosurgery from someone who just skimmed a blog on how to do it in three easy steps.
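SLA penalty clauses typically map measured availability to a service credit. The tiers and percentages below are invented for illustration — real contracts define their own schedules — but the structure is representative.

```python
# Hypothetical SLA credit schedule. Thresholds and credit percentages
# are illustrative only; actual agreements vary by provider and contract.

def sla_credit(measured_availability, monthly_charge):
    """Return the service credit owed for a month of measured availability."""
    if measured_availability >= 0.9999:
        credit_pct = 0.0      # SLA met: no credit owed
    elif measured_availability >= 0.999:
        credit_pct = 0.10     # minor miss: 10% credit
    elif measured_availability >= 0.99:
        credit_pct = 0.25     # significant miss: 25% credit
    else:
        credit_pct = 1.00     # severe outage: full refund
    return monthly_charge * credit_pct

# A month at 99.95% against a 99.99% target, on a $10,000 monthly charge:
print(sla_credit(0.9995, 10_000))  # 1000.0
```

The point of such schedules is less the money than the alignment of incentives: a provider that must issue credits has a direct financial reason to invest in the monitoring and management systems that prevent misses in the first place.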

(5) Full enterprise solutions portfolio — Cloud computing services don’t exist in a vacuum; many other services may be procured in conjunction with them, either due to technical architecture requirements or due to contracting benefits, such as discounts for total spend. Related services such as network access and transport, MPLS VPNs for backhauling to the enterprise datacenter, application management, global load balancing, asymmetric Web acceleration, network-based firewalls and other network-based security services, content delivery, Voice over IP, Video over IP, managed messaging, Web conferencing and remote access can offer synergies when combined with cloud computing and storage.

(6) Integrated hosting and network services — This has real benefits in terms of cost and performance. It generates cost advantages in a number of ways. First, having hosting facilities on net — that is, in the same locations as core network backbone switching and routing facilities — eliminates expenses associated with building additional access facilities to reach a third-party datacenter. Integrated providers also can access network facilities at cost, rather than at market prices. And larger providers should be able to achieve more compelling economies of scale. Having hosting facilities on net also means better performance by reducing router hops and associated physical propagation delays.
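The performance claim here is grounded in physics: light in fiber travels at roughly 200,000 km/s (about two-thirds of its speed in vacuum), and every router hop adds processing and queuing delay. The sketch below uses hypothetical distances, hop counts and a nominal per-hop delay to show how shorter on-net paths translate into lower round-trip times.

```python
# Rough latency model: propagation delay plus a nominal per-hop delay.
# Light in fiber covers roughly 200 km per millisecond; the per-hop
# figure and the path lengths below are illustrative assumptions.

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km, router_hops, per_hop_ms=0.5):
    """Estimated round-trip time over a path of given length and hop count."""
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    hop_delay = 2 * router_hops * per_hop_ms
    return propagation + hop_delay

# Hosting on-net (shorter path, fewer hops) vs. reaching a third-party
# datacenter over extra access and transit legs (hypothetical numbers):
print(round_trip_ms(distance_km=500, router_hops=4))    # 9.0 ms
print(round_trip_ms(distance_km=2000, router_hops=12))  # 32.0 ms
```

Neither queuing under load nor serialization delay is modeled here, but both grow with hop count, so the real gap between on-net and off-net paths is usually larger than propagation alone suggests.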

(7) Vendor independence — Service providers tend to be software and hardware vendor-agnostic. The reason for this is that their broad customer bases have wide ranges of requirements and preferences, and service providers are strategically intent on reaching as wide a market as possible. Consequently, lock-in to a specific storage, server, operating system, hypervisor, middleware, database or application vendor would be self-defeating by limiting market penetration. This contrasts with some of the existing players, who mostly seem to have at least some proprietary elements to their platforms.

(8) Global footprint — It’s not news that today’s enterprises have gone global. Whether it’s a global base of employees, customers, supply chain partners, offshore contact centers or skill base for innovation, reach and footprint are critical. Large, integrated global service providers have the capability to provide services locally and consistently virtually anywhere in the world to support today’s increasingly interactive applications with proximate infrastructures that reduce response time — and with the sales and support resources to directly engage with regional or local leadership, or corporate executives headquartered anywhere from Shanghai to Dubai, Bangalore to Brussels, or Sydney to São Paulo.

(9) Financial stability and market commitment — In today’s tumultuous economic environment, enterprises are more focused than ever on the financial stability, brand and business viability of service providers providing key parts of their infrastructures. Commitment to hosting and cloud computing as part of their provider’s core business is important, as opposed to cloud services being a potentially temporary excursion from different core businesses such as online retailing or advertising. Over the last few years, high and rising stock prices have permitted some new economy players substantial flexibility in capital investments, but recent drops of fifty or sixty percent may slow such adventurism for the foreseeable future.

(10) Technologies are easier to replicate than relationships and operations — Don’t the famously highly paid developers at the new economy companies have an edge in creating new technologies such as automated provisioning that enable cloud services to rapidly scale up and down? If they do — which is arguable — it isn’t sustainable. Such technologies have been around for years from companies as small as BladeLogic and as large as IBM (e.g., Tivoli Provisioning Manager), with variations such as VMware’s vCenter and VMotion fitting into the mix. For every highly paid developer at an online bookseller, there is a highly motivated developer at a start-up or large global software firm, developing software tools for others, like integrated service providers, to incorporate into their tooling and management platforms. Even Animoto, the poster child for non-consumer use of cloud computing services, leveraged a third party, RightScale, to manage dynamic allocation of these services. Service providers also can choose best-in-class capabilities and focus on integration. Much harder to replicate are global networks that have been built for literally hundreds of billions of dollars of investment, and the experienced skill base, long-term enterprise customer relationships, management tools, support organizations, service culture, and local access and regulatory relationships that enable services to be delivered successfully at scale.

The different players in the emerging cloud computing market have different starting points, different current strategic advantages and different challenges. The trick to handicapping this race is to focus on fundamentals: which advantages are easily duplicated, and which are sustainable. Ultimately, the winners in selling to the enterprise will have to address the enterprise requirements and competitive strategy issues identified above. Large, global, integrated service providers, which are not just telecommunications companies but also hosting and applications management companies, just might have the edge in selling and delivering cloud services to the demanding enterprise CIO.


Joe Weinman is Strategic Solutions Sales VP at AT&T. The views expressed herein are his own and do not necessarily reflect the views of AT&T.
