December 04, 2006
United Devices (UD) has announced general availability of solutions focused on virtualizing mission-critical business applications for enterprise data centers and outsourced IT service providers.
UD's data center virtualization solution, Data Center One, enables automatic provisioning of business applications across a shared pool of physical and virtual IT assets within an enterprise data center. Initial implementations have demonstrated lower total cost of ownership (TCO), improved productivity and responsiveness, and reliable service level agreement (SLA)-driven performance.
The company's managed services solution, Service One, uses the same core technology to enable IT service providers to offer application delivery services to multiple customers. Service One provides automated, SLA-driven provisioning of applications across pools of heterogeneous IT assets that may span data centers, business units, and companies, with no restrictions based on the location or ownership of those assets.
"These solutions represent two years of intense product development funded by United Devices, our customers and our business partners," said Ben Rouse, chief executive officer. "The new capabilities are built on UD's proven technology that powers the industry's largest production Grid implementations. As a result of our customer collaborations and documented success in driving data center efficiencies, we are in a unique position today to unveil Data Center One and Service One to the market at large."
Both the data center and managed services solutions take an application-centric approach to virtualization, where the needs of the application are paramount and infrastructure is managed as a shared pool of capacity. Each solution includes an analytics component to help IT organizations capture information about applications and associated infrastructure, including real-time capacity and utilization data.
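As a rough illustration of the kind of real-time capacity and utilization data such an analytics component might capture, here is a minimal Python sketch; the class, field names, and pool contents are assumptions for illustration only, not UD's actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssetUtilization:
    """Hypothetical record for one asset in a shared capacity pool (illustrative only)."""
    asset_id: str               # illustrative identifier, not a UD field name
    asset_type: str             # e.g. "bare-metal" or "virtual-machine"
    cpu_capacity: float         # total CPU units on the asset
    cpu_used: float             # CPU units currently consumed
    application: Optional[str]  # application currently provisioned, if any

    @property
    def headroom(self) -> float:
        """Unused capacity that a provisioning layer could allocate."""
        return self.cpu_capacity - self.cpu_used

# Example real-time snapshot an analytics component might expose.
pool = [
    AssetUtilization("dc1-node01", "bare-metal", 8.0, 6.5, "SAP"),
    AssetUtilization("dc1-vm042", "virtual-machine", 4.0, 1.0, None),
]
print(f"Pool headroom: {sum(a.headroom for a in pool):.1f} CPU units")
```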
Data Center One applies UD's Grid technology to automatically provision applications on bare metal, to manage virtual machines and third-party provisioning tools, and to deploy and manage large-scale enterprise software implementations such as SAP, Siebel or Oracle. The result is an agile infrastructure that lets businesses respond more quickly to changing needs, with less manual intervention and shorter lead times to bring new resources online.
In fact, UD's data center technology has been shown to reduce SAP-related infrastructure costs by 35 percent, as documented in the recently published white paper, "Grid-Enabled SAP: Solution Blueprint and ROI Analysis."
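As a rough sketch of what SLA-driven provisioning over a shared pool can look like in principle, the following self-contained Python example reacts to an assumed response-time SLA breach by assigning an idle asset to the application; the threshold, pool entries, and greedy placement policy are all illustrative assumptions, not UD's product behavior or APIs.

```python
from typing import Dict, List, Optional

SLA_RESPONSE_THRESHOLD_MS = 500.0  # assumed SLA target, purely illustrative

def needs_more_capacity(observed_response_ms: float) -> bool:
    """Return True when the observed response time breaches the assumed SLA."""
    return observed_response_ms > SLA_RESPONSE_THRESHOLD_MS

def provision(application: str, pool: List[Dict]) -> Optional[Dict]:
    """Assign the idle asset with the most free CPU to the application.

    The pool entries and the greedy 'most headroom first' policy are
    illustrative assumptions, not UD's actual provisioning logic.
    """
    idle = [a for a in pool if a["application"] is None]
    if not idle:
        return None  # a real system might queue the request or reclaim capacity
    target = max(idle, key=lambda a: a["cpu_capacity"] - a["cpu_used"])
    target["application"] = application
    return target

# Hypothetical control loop: react to an SLA breach by adding capacity.
pool = [
    {"id": "dc1-node01", "type": "bare-metal", "cpu_capacity": 8.0,
     "cpu_used": 6.5, "application": "SAP"},
    {"id": "dc1-vm042", "type": "virtual-machine", "cpu_capacity": 4.0,
     "cpu_used": 1.0, "application": None},
]
if needs_more_capacity(observed_response_ms=720.0):
    chosen = provision("SAP", pool)
    print("Provisioned on", chosen["id"] if chosen else "nothing (pool exhausted)")
```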
Service One applies UD's core technologies to serve outsourcing companies, allowing applications to be automatically provisioned to multiple clients over a much broader set of assets and networks. Service One combines Grid capabilities with application portals and services, a custom application performance management engine, and a set of professional services to help companies build and market their own offerings.
"The ultimate destination for IT services outsourcers and large data centers alike lies in automatically provisioning resources and networks according to the demands of the applications being run on them," said UD CTO Jikku Venkat. "That is where customers can expect to see a great leap in productivity and efficiency, and that is the arena in which United Devices alone is offering a proven set of application-focused solutions."
United Devices said that in addition to Data Center One and Service One, it will continue to provide its high performance computing (HPC) solutions and its Internet Grid offerings that let companies develop and build large-scale, geographically distributed grids of non-dedicated devices.
All of UD's announced solutions are available immediately.