November 10, 2010
AUSTIN, Texas, November 10, 2010 -- Virtualization and cloud computing have dramatically reduced traditional capacity management planning cycles in today’s enterprises while increasing the volume and complexity of management data generated by these highly distributed, dynamic IT environments. To address these challenges, Hyper9, Inc. today released its Hyper9 3.0 solution, with new capabilities for cloud-speed and cloud-scale capacity management that have helped customers break through the 10,000-VM (virtual machine) barrier.
“Where capacity was once a planning function managed on a scale of months or years, with virtualization it has now transitioned to an operations and optimization issue managed in terms of minutes, hours and days,” said Dave Bartoletti, senior analyst at Taneja Group. “Because of this, we’re seeing virtualized environment capacity management evolve along three critical, time-sensitive functions: capacity operations (minutes, hours); capacity optimization (days, weeks); and capacity planning (months, years). With this latest release, Hyper9 has upped the ante for virtualization management solutions aiming to keep pace with the acceleration of cloud computing adoption.”
Hyper9 3.0 enables users to address each phase of cloud-speed capacity management with the following capabilities:
Phase 1 - Capacity Operations: new Capacity Operations Dashboard provides real-time views of resource consumption across CPU, memory and storage.
Phase 2 - Capacity Optimization: new advanced analytics extend industry-leading, search-driven capacity management intelligence.
N-1 redundancy analytics help users identify the impact of capacity shortfalls if a cluster loses a server or if a server is taken down for maintenance.
Resource ceiling analytics allow planners to analyze capacity at variable CPU, memory or storage resource buffer limits.
Analytics transparency enables full data traceability for verification and validation of results and recommendations.
Advanced storage analytics support critical virtual storage features including thin provisioning and linked clones.
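The N-1 redundancy analytics above answer a concrete question: can a cluster still host its workload after losing one server? A minimal sketch of that worst-case check is below; the function name, the memory-only resource model, and all capacity figures are illustrative assumptions, not Hyper9's actual algorithm.

```python
# Hypothetical sketch of an N-1 redundancy check: does total VM demand still
# fit on the cluster after its single largest host fails? Memory-only model
# for brevity; a real tool would check CPU and storage the same way.

def n_minus_1_ok(host_capacities_gb, vm_demands_gb):
    """Return True if all VMs still fit after losing the largest host."""
    if not host_capacities_gb:
        return False
    surviving = sum(host_capacities_gb) - max(host_capacities_gb)
    return sum(vm_demands_gb) <= surviving

# Three 256 GB hosts: losing one leaves 512 GB of capacity.
print(n_minus_1_ok([256, 256, 256], [200, 150, 150]))  # True  (500 <= 512)
print(n_minus_1_ok([256, 256, 256], [300, 150, 150]))  # False (600 > 512)
```

Removing the largest host models the worst single failure; maintenance analysis is the same check with the host being serviced removed instead.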
Phase 3 - Capacity Planning: provides new support for resource purchase and public cloud deployment analysis.
Capacity purchase recommendations: How much more (CPU, memory, storage) do I need to buy?
Cloud capacity planning: How much would it cost to run our VMs on public cloud providers (e.g., Amazon EC2)?
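The cloud cost question above amounts to mapping each VM's resource profile to a cloud instance size and summing the hourly rates. A back-of-the-envelope sketch follows; the size names, thresholds, and hourly rates are made-up placeholders, not real Amazon EC2 pricing or Hyper9's model.

```python
# Hypothetical cloud cost estimate: map each VM to the smallest fitting
# instance size, then sum hourly rates over a month. Rates are illustrative.

HOURLY_RATES = {"small": 0.05, "medium": 0.10, "large": 0.40}  # $/hour, assumed

def size_for(vcpus, mem_gb):
    """Pick the smallest instance size that fits a VM's CPU and memory."""
    if vcpus <= 1 and mem_gb <= 2:
        return "small"
    if vcpus <= 2 and mem_gb <= 8:
        return "medium"
    return "large"

def monthly_cost(vms, hours_per_month=730):
    """Estimated monthly cost for a list of (vcpus, mem_gb) VMs."""
    return sum(HOURLY_RATES[size_for(v, m)] for v, m in vms) * hours_per_month

fleet = [(1, 2), (2, 4), (8, 32)]  # one small, one medium, one large VM
print(round(monthly_cost(fleet), 2))  # (0.05 + 0.10 + 0.40) * 730 = 401.5
```

At 10,000+ VMs the same loop runs over the whole inventory, which is why automated per-VM sizing matters more than the arithmetic itself.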
Unlike existing capacity management solutions that were designed for either older, physical environments or for lab and small company deployments, Hyper9 3.0 provides the application-awareness, cloud-scale, unified capacity and performance management, and business-level integrations required by enterprises with rapidly expanding virtualization and cloud environments. For example, Hyper9 has unified and automated capacity management for a large enterprise software provider across more than 13,000 VMs, 450 ESX hosts and six global datacenters.
Hyper9 3.0 has been architected to scale from smaller virtualization deployments to large-scale virtualization and cloud environments by delivering both cloud-scale analytics and cloud-scale deployment. Hyper9’s cloud-scale analytics leverage a unique search-driven approach that is the only way to operate across the cloud’s rapidly exploding data set. By capturing unique interrelationships and application dependencies, Hyper9’s analytics also provide multi-dimensional insight across a hyper-connected data model that mirrors virtualization and cloud topologies.
Hyper9 3.0 also expands its enterprise-scalability with new virtual appliance packaging that extends the distributed, federated architecture at both the remote data collector level and the centralized “brain.” The user interface has been optimized for cloud-scale and is now completely Flex-based, enabling collaboration between stakeholders, supporting parallel tasks via a tab-based UI, and offering a customizable query builder. Hyper9’s open APIs also allow for enterprise integration with existing management consoles, CMDBs and portals to share the right insights with the right business stakeholders.
“The migration of enterprise infrastructures to the cloud introduces incredible opportunities for efficiency and flexibility, but also significant management challenges,” said Bob Quillin, chief marketing officer at Hyper9. “With Hyper9 3.0, our goal is to help organizations understand and anticipate their capacity management needs in real-time so they can fully leverage the advantages of virtualization without jeopardizing the speed or performance of their business.”
About Hyper9, Inc.
Hyper9 is a privately held company backed by Venrock, Matrix Partners, Silverton Partners and Maples Investments. Based in Austin, Texas, the company was founded in 2007 by enterprise systems management experts and virtualization visionaries. Since then, Hyper9 has collaborated with virtualization administrators as well as systems and virtualization management experts to develop a new breed of virtualization management products that leverage cloud computing technologies like search, collaboration, analytics and social networking. The end result is a product that helps administrators discover, organize and make use of information in their virtual environment, yet is as easy to use as a consumer application. For more information about Hyper9, visit www.hyper9.com.
Source: Hyper9, Inc.
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
Financial institutions are the private industry least likely to adopt public cloud services for data storage. Holding the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are mostly moving toward private cloud services, and doing so at great cost.
In this week's hand-picked assortment, researchers explore the path to more energy-efficient cloud datacenters, investigate new frameworks and runtime environments that are compatible with Windows Azure, and design a unified programming model for diverse data-intensive cloud computing paradigms.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD application on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company Animal Logic is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
May 08, 2013
For engineers looking to leverage high-performance computing, the accessibility of a cloud-based approach is a powerful draw, but there are costs that may not be readily apparent.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.