April 27, 2012
Test equipment vendors gain from providers' service level agreements offering high uptime
MOUNTAIN VIEW, Calif., April 26 — As a new technology, cloud computing poses many technical and operational challenges. Nonetheless, large enterprises invest heavily in the technology for its scalability and cost efficiency. The uptake of cloud computing creates ample opportunities for test equipment vendors to develop solutions specifically for it.
New analysis from Frost & Sullivan's Cloud Infrastructure Testing and Cloud-based Application Monitoring Markets research (http://www.testandmeasurement.frost.com) finds that these markets earned revenues of $68.0 million and $168.0 million, respectively, in 2010. The analysis projects them to reach $320.2 million and $556.2 million, respectively, by 2017.
If you are interested in more information on this research, please send an email to Jeannette Garcia, Corporate Communications, at firstname.lastname@example.org, with your full name, company name, job title, telephone number, company email address, company website, city, state and country.
The two most important factors in deploying cloud services are availability and security. Although hosted at a remote location, the application or data must be accessible at all times. Security is also an emerging concern in the cloud market, with many related aspects such as data protection, application security, privacy and standards compliance.
"Besides testing the cloud infrastructure for security, scalability and performance, enterprises also seek insights into the performance of applications hosted in the cloud environment," said Frost & Sullivan Senior Research Analyst Srihari Padmanabhan. "Service providers, enterprise organizations and network engineers need to understand the root cause of faults in the network by gaining end-to-end visibility across the cloud, giving numerous opportunities for application monitoring as well."
Cloud computing is groundbreaking in the way scalable applications are deployed and delivered. The technology has attracted major end-user segments such as IT organizations, enterprises and governments. To tap this lucrative market, many leading test and monitoring companies have invested in developing solutions that help their customers test and validate a cloud infrastructure.
A cloud service provider normally signs a service level agreement (SLA) with each customer to establish the level of service the provider will furnish. Most service providers guarantee an uptime of 99.99 percent.
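An uptime guarantee translates directly into an annual downtime budget, which is what SLA monitoring actually tracks. The back-of-the-envelope Python sketch below does that arithmetic; the 99.99 percent tier is the figure cited above, and the neighboring tiers are included only for comparison.

```python
# Convert an SLA uptime guarantee into its annual downtime budget.
# The 99.99 percent tier is the figure cited above; the others are
# shown only for comparison.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_budget_minutes(uptime_percent: float) -> float:
    """Maximum downtime per year, in minutes, that an SLA tier allows."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for uptime in (99.9, 99.99, 99.999):
    print(f"{uptime}% uptime allows {downtime_budget_minutes(uptime):.1f} min/year of downtime")
# 99.9%   -> 525.6 min/year (~8.8 hours)
# 99.99%  ->  52.6 min/year (under an hour)
# 99.999% ->   5.3 min/year
```

In other words, a 99.99 percent guarantee leaves a provider less than an hour of total downtime per year, which is why continuous monitoring matters.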
"Every service provider is expected to meet the quality of service as defined by the SLA," said Padmanabhan. "To do this, service providers need to constantly monitor their cloud infrastructure and allocate resources so that applications can respond properly to peaks in a load."
Cloud-based application monitoring has witnessed the emergence of open-source solutions that can monitor the performance of cloud applications. Even though these solutions do not provide insights into the actual performance of the application, they have been widely accepted as an alternative to application performance monitoring solutions.
With the increase in demand for monitoring solutions, several other commercially backed open-source solutions will emerge in the market, posing a threat to companies that sell application performance monitoring products.
"To reach success in such a market scenario, participants have to perform a careful analysis of the expected ROI and leverage their market intelligence information to determine growth opportunities," said Padmanabhan.
Cloud Infrastructure Testing and Cloud-based Application Monitoring Markets is part of the Test & Measurement Growth Partnership Services program, which also includes research in the following markets: Global Triple Play and Next-generation Services Test and Monitoring Markets, Global Gigabit Ethernet Test Equipment Market, Wireless Test Equipment Markets, and World xDSL Test Equipment Market. All research services included in subscriptions provide detailed market opportunities and industry trends evaluated following extensive interviews with market participants.
About Frost & Sullivan
Frost & Sullivan, the Growth Partnership Company, enables clients to accelerate growth and achieve best-in-class positions in growth, innovation and leadership. The company's Growth Partnership Service provides the CEO and the CEO's Growth Team with disciplined research and best-practice models to drive the generation, evaluation, and implementation of powerful growth strategies. Frost & Sullivan leverages 50 years of experience in partnering with Global 1000 companies, emerging businesses and the investment community from more than 40 offices on six continents. To join our Growth Partnership, visit http://www.frost.com.
Source: Frost & Sullivan
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and tackle large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of them.
May 23, 2013 | The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it notes acts as a desktop supercomputer.
May 16, 2013 | When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.