November 27, 2012
SAN FRANCISCO, Calif., Nov. 27 — Boundary has released critical new application monitoring capabilities for companies running on Amazon Web Services (AWS) and other public and private cloud infrastructure. These new capabilities enable companies, for the first time, to get early warnings of impending application infrastructure issues that, left unchecked, would affect customer experience. The enhanced solution will be on display this week at AWS re:Invent, Amazon’s global conference for AWS customers and partners.
Boundary’s updated service includes a proactive alerting capability that understands normal application behavior and, using advanced analytics, warns users at the earliest sign of potential problems. Boundary has also added a Big Data store that enables customers to retain detailed performance data for long periods, as well as a reporting component that automatically compares historical and current performance metrics and emails the summaries to customers.
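Boundary has not published the internals of its analytics engine, but the general idea of baseline-driven alerting can be sketched briefly. The following Python sketch is an illustrative assumption, not Boundary’s implementation: it keeps a short rolling window of per-second samples for each metric and raises a warning when a new sample deviates sharply from the learned norm. The class name, window size, and threshold are all hypothetical.

```python
import math
import random
from collections import defaultdict, deque

class BaselineAlerter:
    """Learn a rolling per-metric baseline and flag samples that
    deviate sharply from recent normal behavior."""

    def __init__(self, window=300, threshold=4.0, min_samples=30):
        self.threshold = threshold      # alert when |z-score| exceeds this
        self.min_samples = min_samples  # require a baseline before alerting
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, metric, value):
        samples = self.history[metric]
        if len(samples) >= self.min_samples:
            mean = sum(samples) / len(samples)
            std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
            if std > 0 and abs(value - mean) / std > self.threshold:
                print(f"ALERT: {metric}={value:.1f} deviates from baseline "
                      f"(mean={mean:.1f}, std={std:.1f})")
        samples.append(value)

# Feed the alerter one sample per second; a congestion event starts at t=500.
alerter = BaselineAlerter()
for second in range(600):
    rtt = random.gauss(20.0, 2.0)       # normal behavior: ~20 ms round trips
    if second >= 500:
        rtt += 30.0                     # simulated network degradation
    alerter.observe("rtt_ms", rtt)
```

In this scheme the baseline adapts continuously, so sustained shifts eventually become the new normal; a production system would also need per-time-of-day baselines and alert deduplication.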
“Applications hosted in the public cloud – even more than traditional infrastructures – require constant and vigilant monitoring,” said Gary Read, CEO at Boundary. “But because the public cloud is dynamic in nature and does not expose critical items such as topology, traditional solutions are typically out of date and too late in reporting problems.”
The new version of Boundary addresses this challenge by collecting previously unexposed data every single second, understanding the dynamic application topology, learning the normal behavior of applications on a minute-by-minute basis, and providing real-time, analytics-driven warnings on performance abnormalities. Using the reporting capability and long-term data store, customers can examine all the metrics for prior periods to help in problem diagnosis. This way, users can resolve potential issues before customers are impacted.
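The reporting component is described only at a high level. As a rough sketch of what a historical comparison could look like, the snippet below summarizes percent changes between two periods; the metric names, data shapes, and output format are assumptions for illustration, not Boundary’s actual report format.

```python
from statistics import mean

def compare_periods(current, historical):
    """Summarize how current metrics differ from a stored historical period.

    `current` and `historical` map metric names to lists of samples,
    e.g. pulled from a long-term store for two comparable windows.
    """
    lines = []
    for name in sorted(current):
        cur, hist = mean(current[name]), mean(historical[name])
        change = 100.0 * (cur - hist) / hist if hist else float("inf")
        lines.append(f"{name}: {hist:.1f} -> {cur:.1f} ({change:+.1f}%)")
    return "\n".join(lines)

report = compare_periods(
    current={"rtt_ms": [24.0, 25.5, 23.8], "throughput_mbps": [88.0, 91.2]},
    historical={"rtt_ms": [20.1, 19.8, 20.4], "throughput_mbps": [94.5, 93.0]},
)
print(report)   # a summary like this could then be emailed on a schedule
```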
“This is really important for EC2 customers, because when applications are running on a shared infrastructure, companies need to understand the impact of other users on the response time and the network,” said Read. “Early knowledge of network congestion or poor performance can help IT managers make quick decisions to move applications to other instances or availability zones on Amazon, or to a secondary cloud provider.”
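As a purely hypothetical illustration of the kind of automated response Read describes, the loop below watches a latency budget and shifts traffic after repeated breaches. The `measure_latency_ms` and `shift_traffic_to` functions are placeholders for environment-specific probing and traffic steering, not Boundary or AWS APIs, and the thresholds and hostnames are invented for the example.

```python
import random
import time

LATENCY_BUDGET_MS = 50      # illustrative SLO for acceptable response time
CONSECUTIVE_BREACHES = 3    # avoid reacting to a single noisy sample

def measure_latency_ms(endpoint):
    # Placeholder probe: in practice, a timed request to the application.
    return random.gauss(40.0, 15.0)

def shift_traffic_to(target):
    # Placeholder action: repoint a load balancer or DNS record.
    print(f"shifting traffic to {target}")

def watch(primary, fallback, interval=1.0):
    breaches = 0
    while True:
        if measure_latency_ms(primary) > LATENCY_BUDGET_MS:
            breaches += 1
            if breaches >= CONSECUTIVE_BREACHES:
                shift_traffic_to(fallback)   # e.g. another availability zone
                return
        else:
            breaches = 0
        time.sleep(interval)                 # per-second cadence, as above

watch("app.us-east-1a.example.com", "app.us-east-1b.example.com", interval=0.1)
```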
Boundary launched in April and now has over 60 paying customers and 500 businesses using its free version. All the announced new features are available to free users, apart from the long-term historical data store.
“Boundary allows us to confirm, in real-time, that deployed application changes are performing as they were designed and keep an eye on our surrounding environment,” said Michael De Lorenzo, CTO, CMP.LY. “The combination of real-time and historical data has allowed us to more accurately identify alerting/monitoring thresholds that afford us the ability to act quicker in diagnosing and fixing potential issues.”
“Boundary recently detected the AWS outage over two full hours before Amazon announced it and a customer of ours detected the Azure outage 15 hours before it was announced by Microsoft,” said Read. “Now we’re putting even more advanced analytic and reporting capabilities in the hands of our customers. Before traditional monitoring tools have even processed their next set of samples, Boundary has identified abnormalities in cloud infrastructure and alerted users to potential problems.”
Boundary provides a new kind of application monitoring for new IT architectures: one-second application visualization, cloud compatibility, and only minutes from setup to results. Boundary is a privately held company based in San Francisco, California, with venture funding from Lightspeed Venture Partners and Scale Venture Partners.