December 01, 2008
PALO ALTO, Calif., Dec. 1 -- HP today outlined the results of its three-year IT transformation and laid out the company's IT strategy to support future growth for fiscal year 2009 and beyond.
As a result of the effort, HP has cut its IT operating costs roughly in half, provided more reliable information for executives to make better business decisions, and established a simpler, more dependable IT infrastructure that improves business continuity and supports the company's future growth.
"HP's IT transformation was not just a technology initiative within the IT organization, it was a business strategy adopted throughout the company," said Randy Mott, HP executive vice president and chief information officer. "We commend our IT team for building a world-class infrastructure and organization, but we're just getting started. We're now in a great position to enable future business growth."
The initiative began shortly after Mott joined HP in July 2005. Starting in fiscal year 2009, the transformation will lower IT costs by more than $1 billion per year from fiscal year 2005 levels. This cost reduction is even more impressive considering HP added more than $25 billion in revenue during the three years since the transformation began.
The transformation focused on five major initiatives: next-generation global datacenters, portfolio management, workforce effectiveness, building a world-class technology organization and a true enterprise data warehouse. Through aligning its entire global organization on these five initiatives, HP has reduced complexity and added significant capability and quality of service.
"For the transformation to work, we had to invest money to save money," said Mark Hurd, HP chairman and chief executive officer. "With a lower IT cost structure we are able to reinvest dollars into go-to-market efforts. This challenge isn't unique to HP. Most companies have the opportunity to create an IT cost structure that is at least half of today's average for their industry. We can, and do, share our experiences with our customers."
The transformation is expected to enable HP to:
The HP IT organization now operates under a strategic framework in which teams are deployed to deliver more business innovation through a smaller number of global and common applications. These applications are running in the next-generation datacenters, where the technology is constantly refreshed in modular-designed white space.
By creating global and common applications, HP IT is able to focus on new capabilities and devote 80 percent of IT employees to innovation that is aligned with business strategies and future growth opportunities.
The company's own HP Neoview implementation is the single enterprise data warehouse with current users exceeding 32,000 HP employees – a number that is expected to top 50,000 next year. HP believes its Neoview installation is one of the largest enterprise data warehouses in the market today.
HP, the world's largest technology company, provides printing and personal computing products and IT services, software and solutions that simplify the technology experience for consumers and businesses. HP completed its acquisition of EDS on Aug. 26, 2008. More information about HP is available at www.hp.com/.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational demand at peak times that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls 'Climate in a Box,' a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might affect a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.