June 25, 2007
Organizations today not only need to focus on their day-to-day operations, but also to be prepared for every “what if” scenario that might occur, 24/7/365. Organizations that work with the federal government to provide key data on the general public -- as during disasters like Rita, Katrina and 9/11 -- must ensure that the data they collect (names, addresses, Social Security numbers and so on) is protected, so that no person is “lost” during a disaster, especially when funds are needed to survive.
While this can be achieved effectively, today’s enterprise faces a new problem. To assure optimal processing and efficiency with no downtime or service delay, the applications that run the business must be able to shift resources adaptively to a given application when a disaster occurs. If a disaster happened this moment, today’s organization would experience crippling interruptions, because application servers would need to be manually reconfigured and provisioned for peaks in capacity. Companies need a solution that offers capacity on demand, assuring seamless mobility across compute resources for optimal service quality, while reducing the human resources needed to manage the environment.
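To make the idea concrete, capacity on demand can be reduced to a simple control loop: watch each application’s load, reclaim servers from lightly loaded applications, and hand spare capacity to whichever critical application is breaching its target. The sketch below is purely illustrative -- the pool, application names and utilization thresholds are hypothetical and not tied to any specific product.

```python
# Illustrative only: a toy capacity-on-demand control loop.
# All names (ServerPool, App, utilization targets) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    priority: int              # lower number = more critical
    servers: list = field(default_factory=list)
    utilization: float = 0.0   # 0.0 - 1.0, averaged across its servers

@dataclass
class ServerPool:
    spare: list = field(default_factory=list)

def rebalance(apps, pool, high=0.85, low=0.30):
    """Shift servers toward overloaded, high-priority applications."""
    for app in sorted(apps, key=lambda a: a.priority):
        # Reclaim capacity from lightly loaded applications first.
        if app.utilization < low and len(app.servers) > 1:
            pool.spare.append(app.servers.pop())
        # Grant spare capacity to applications breaching their target.
        elif app.utilization > high and pool.spare:
            app.servers.append(pool.spare.pop())

# Example: during a disaster, the claims system spikes while reporting idles.
claims = App("claims", priority=1, servers=["s1", "s2"], utilization=0.95)
reporting = App("reporting", priority=3, servers=["s3", "s4"], utilization=0.10)
pool = ServerPool(spare=["s5"])
rebalance([claims, reporting], pool)
print(claims.servers, reporting.servers, pool.spare)
```

In practice this loop would be driven by monitoring data and run continuously, but even this toy version shows the contrast with manual reconfiguration: the reassignment happens without a person re-provisioning servers by hand.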
Does this problem sound familiar? If so, you are not alone: this scenario, unfortunately, is not an isolated case. More and more companies are striving to move away from time-intensive, manual and error-prone provisioning of resources toward a more dynamic IT infrastructure that can cope with the demands of today’s business environment.
Our world has changed in many ways. Customer demand is increasingly dynamic, and IT needs to be just as dynamic -- able to shift resources and applications adaptively to meet that ever-growing demand. Because of this rise in demand, coupled with their dependence on business processes and the underlying applications, many companies need protection from loss, waste and downtime. As a result, disaster recovery and business continuity strategies have become a large part of IT planning, and achieving high availability through reduced planned and unplanned downtime has become an IT imperative.
However, imagine a world where every processor could back up every other processor; where processing power was a single pool from which the business could draw on demand; where the compartmentalization between service levels, unplanned downtime, geographic processing windows and disaster recovery disappeared.
It wasn’t long ago that disaster recovery and business continuity technologies were mostly focused on providing backup and off-site standby. Then, business processes did not depend on technology to the degree they do now. If access to applications was lost, most business units could revert to manual processes while data was being restored from tape or hardware, and applications would be rebuilt and redeployed by hand. Most organizations had neither the need nor the budget for costly business continuity technologies such as long-distance replication and application failover.
While we must plan for recovering from major disasters, we must also plan for day-to-day disruptions. Prevalent business strategies such as online trading, online purchasing, customer support and just-in-time inventory are not possible without technology, and they are key to maintaining a competitive advantage. In addition, new government regulations make advanced levels of protection mandatory for businesses of all sizes. Consequently, for more and more business units, functions and applications, even a minimal service interruption has a dramatic financial impact. Manual processes simply are not an option anymore.
In global financial enterprises, M&A activity is just a part of everyday business. Consolidating servers and applications to reduce duplication of effort and contain costs is a priority. As companies grow, IT organizations must leverage existing resources more efficiently and at times rapidly add capacity, often in the form of new applications running across a variety of operating systems. The resulting application and server sprawl is costly in terms of technical, financial and human capital. If these costs can be offset by improvements in business continuity, the exercise is more worthwhile.
Virtualization technology is enjoying a period of explosive growth, and an increasing number of enterprises are becoming virtualization converts. Research firm IDC estimates about 750,000 virtual servers were in operation in 2004, and it expects this to rise to more than 5 million by 2009 -- a compound annual growth rate of almost 50 percent. Why the surge of interest? Virtualization as a concept has been around for years, if not decades, but only recently has its potential for business continuity been truly understood.
By virtualizing application platforms and services (based on business-driven policies and real-time service levels), it is now possible to centralize the command and control of application deployment and execution, thereby guaranteeing that capacity is available on demand. Virtualization can eliminate downtime and service interruption by providing application failover both locally and remotely, while also enabling organizations to run production applications at hot sites during non-emergency times. For risk managers this is powerful and compelling: it provides high availability for optimal SLA management, increased operational efficiency and flexibility, and higher application and server utilization, while lowering the cost and complexity of IT.
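As a rough illustration of what “business-driven policies” can look like in practice, the sketch below encodes per-application service levels and picks a failover target when the primary site stops meeting them. The policy fields, site names and thresholds are hypothetical examples under stated assumptions, not a description of any particular virtualization product.

```python
# Illustrative only: policy-driven failover selection.
# Policy fields, site names and health metrics are hypothetical.

POLICIES = {
    # app name: availability target and allowed failover sites (in order)
    "trading":   {"min_availability": 0.999, "failover_sites": ["local-dr", "remote-hot"]},
    "reporting": {"min_availability": 0.95,  "failover_sites": ["remote-hot"]},
}

def choose_site(app, current_availability, site_health):
    """Return the site an app should run on, given the measured availability
    of its primary site and the health of candidate failover sites."""
    policy = POLICIES[app]
    if current_availability >= policy["min_availability"]:
        return "primary"                      # SLA is met, stay put
    for site in policy["failover_sites"]:
        if site_health.get(site, 0.0) >= policy["min_availability"]:
            return site                       # first healthy site wins
    return "primary"                          # nowhere better to go

# Example: the primary datacenter degrades during a regional outage.
health = {"local-dr": 0.9995, "remote-hot": 0.9999}
print(choose_site("trading", current_availability=0.97, site_health=health))
# -> "local-dr"
```

The point of the example is the separation of concerns the article describes: the business expresses a policy (availability targets, acceptable failover locations), and the infrastructure enforces it automatically rather than through manual reconfiguration.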
With a potentially worldwide processing pool under the control of business performance policies, a whole new approach to business continuity is possible. A recent Gartner report that asked CIOs about their biggest datacenter concerns for the year ahead found "Business Continuity" and "Disaster Recovery" topping the list, followed closely by "Virtualization Directions" and "Technology." Ironically, in 2007 it can be argued that these two fundamental concerns will give rise to a new approach for dealing with today’s unpredictable marketplace. Despite the unpredictable demand for IT services, application virtualization heralds the answer to effective business continuity planning by driving dynamic, automatic allocation and optimization of IT resources so that service levels are predictable and consistent.