November 27, 2012
SAN FRANCISCO, Calif., Nov. 27 — Boundary has released critical new application monitoring capabilities for companies running on Amazon Web Services (AWS) and other public and private cloud infrastructure. These new capabilities enable companies, for the first time, to get early warnings of impending application infrastructure issues that, left unchecked, would affect customer experience. The enhanced solution will be on display this week at AWS re:Invent, Amazon’s global conference for AWS customers and partners.
Boundary’s updated service includes a proactive alerting capability that understands normal application behavior and, using advanced analytics, warns users at the earliest sign of potential problems. Boundary has also added a Big Data store that lets customers retain detailed performance data for long periods, as well as a reporting component that automatically compares historical and current performance metrics and emails the summaries to customers.
“Applications hosted in the public cloud – even more than traditional infrastructures – require constant and vigilant monitoring,” said Gary Read, CEO at Boundary. “But because the public cloud is dynamic in nature and does not expose critical items such as topology, traditional solutions are typically out of date and too late in reporting problems.”
The new version of Boundary addresses this challenge by collecting previously unexposed data every second, understanding the dynamic application topology, learning the normal behavior of applications minute by minute, and providing real-time, analytics-driven warnings on performance abnormalities. Using the reporting capability and long-term data store, customers can examine all the metrics for prior periods to aid problem diagnosis. This way, users can resolve potential issues before customers are impacted.
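Boundary has not published its analytics, but a minimal sketch of this style of baseline-deviation alerting might look like the following Python: a rolling window learns “normal” per-second behavior, and samples that stray several standard deviations from it trigger a warning. The window size, warm-up length, and three-sigma threshold are illustrative assumptions, not Boundary’s actual parameters.

from collections import deque
from statistics import mean, stdev

class BaselineAlerter:
    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)  # rolling baseline, one sample per second
        self.threshold = threshold           # alert at this many standard deviations

    def observe(self, value):
        """Return True if value deviates abnormally from the learned baseline."""
        alert = False
        if len(self.samples) >= 30:  # wait for enough history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                alert = True
        self.samples.append(value)
        return alert

# Example: feed per-second throughput readings to the alerter.
alerter = BaselineAlerter()
for reading in [100, 102, 98, 101, 99] * 10 + [400]:
    if alerter.observe(reading):
        print(f"abnormal sample: {reading}")

Checking each sample before appending it keeps a single anomalous reading from skewing the baseline it is judged against.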
“This is really important for EC2 customers, because when applications are running on a shared infrastructure, companies need to understand the impact of other users on the response time and the network,” said Read. “Early knowledge of network congestion or poor performance can help IT managers make quick decisions to move applications to other instances or availability zones on Amazon, or to a secondary cloud provider.”
Boundary launched in April and now has over 60 paying customers and 500 businesses using its free version. All the announced new features are available to free users, apart from the long-term historical data store.
“Boundary allows us to confirm, in real time, that deployed application changes are performing as they were designed and to keep an eye on our surrounding environment,” said Michael De Lorenzo, CTO, CMP.LY. “The combination of real-time and historical data has allowed us to more accurately identify alerting and monitoring thresholds, so we can act more quickly in diagnosing and fixing potential issues.”
“Boundary recently detected the AWS outage over two full hours before Amazon announced it, and a customer of ours detected the Azure outage 15 hours before it was announced by Microsoft,” said Read. “Now we’re putting even more advanced analytic and reporting capabilities in the hands of our customers. Before traditional monitoring tools have even processed their next set of samples, Boundary has identified abnormalities in cloud infrastructure and alerted users to potential problems.”
Boundary provides a new kind of application monitoring for new IT architectures: one-second application visualization, cloud compatibility, and setup to results in minutes. Boundary is a privately held company based in San Francisco, California, with venture funding from Lightspeed Venture Partners and Scale Venture Partners.
Researchers from the Suddhananda Engineering and Research Centre in Bhubaneswar, India, developed a job scheduling system, which they call Service Level Agreement (SLA) scheduling, that aims to provision resources at service levels comparable to those of in-house systems. They combined it with an on-demand resource provisioner to optimize virtual machine utilization.
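The paper’s code is not reproduced here, but as a rough Python illustration of the idea, the sketch below pairs earliest-deadline-first dispatch with on-demand virtual machine provisioning. The per-VM capacity, the least-loaded placement rule, and all names are assumptions made for the example, not the authors’ implementation.

import heapq

class SlaScheduler:
    def __init__(self, vm_capacity=4):
        self.vm_capacity = vm_capacity  # concurrent jobs each VM can hold
        self.vms = []                   # running-job count per provisioned VM
        self.queue = []                 # (deadline, job_id) min-heap

    def submit(self, job_id, deadline):
        heapq.heappush(self.queue, (deadline, job_id))

    def dispatch(self):
        """Place queued jobs earliest-deadline-first, provisioning VMs on demand."""
        placements = {}
        while self.queue:
            _, job_id = heapq.heappop(self.queue)
            # Reuse the least-loaded VM that still has spare capacity, if any.
            open_vms = [i for i, load in enumerate(self.vms) if load < self.vm_capacity]
            if open_vms:
                vm = min(open_vms, key=lambda i: self.vms[i])
            else:
                self.vms.append(0)      # no capacity left: provision a new VM
                vm = len(self.vms) - 1
            self.vms[vm] += 1
            placements[job_id] = vm
        return placements

sched = SlaScheduler()
sched.submit("render", deadline=30)
sched.submit("etl", deadline=10)
print(sched.dispatch())  # the tighter-deadline job ("etl") is placed first

Provisioning only when no running VM has spare capacity is what keeps utilization high: jobs pack onto existing instances before new ones are started.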
Experimental scientific HPC applications continue to move to the cloud, as covered here several times over the last couple of weeks. Among that coverage, CloudSigma co-founder and CEO Robert Jenkins penned an article for HPC in the Cloud discussing the emergence of cloud technologies to supplement the research capabilities of big scientific initiatives like CERN and ESA (the European Space Agency)...
When moving excess or experimental HPC applications to a cloud environment, there will always be obstacles. Were that not the case, the cost effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stewart Ward and Adam Barker of the University of St Andrews produced an intriguing report on the state of cloud computing, devoting significant attention to the problems the technology still faces.
Jun 19, 2013 | Ruan Pethiyagoda, Cameron Boehmer, John S. Dvorak, and Tim Sze, trained at San Francisco’s Hack Reactor, an institute for intensive, fast-paced programming instruction, took a program based on the N-Queens algorithm designed by the University of Cambridge’s Martin Richards and modified it to run in parallel across multiple machines.
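Their code is not reproduced here, but the standard way to parallelize N-Queens is to fix the first queen’s column and count each resulting subtree independently. In the Python sketch below, a multiprocessing pool stands in for multiple machines; the bitmask recursion is the classic technique, and the board size is a choice made for the example.

from multiprocessing import Pool

N = 12

def solve(cols, left_diag, right_diag):
    """Count completions given occupied columns and diagonals as bitmasks."""
    if cols == (1 << N) - 1:  # every column holds a queen: one solution
        return 1
    count = 0
    free = ~(cols | left_diag | right_diag) & ((1 << N) - 1)
    while free:
        bit = free & -free    # lowest free square in the current row
        free -= bit
        count += solve(cols | bit,
                       (left_diag | bit) << 1 & ((1 << N) - 1),
                       (right_diag | bit) >> 1)
    return count

def subtree(first_col):
    """Count all solutions whose first-row queen sits in first_col."""
    bit = 1 << first_col
    return solve(bit, bit << 1 & ((1 << N) - 1), bit >> 1)

if __name__ == "__main__":
    with Pool() as pool:      # one independent task per first-row placement
        print(sum(pool.map(subtree, range(N))))  # 14200 solutions for N = 12

Because each first-column subtree shares no state with the others, the same split works across separate machines with any job-distribution mechanism in place of the local pool.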
Jun 17, 2013 | With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service, partnering with Verne Global, whose Icelandic datacenter is known for its green computing credentials.
Jun 12, 2013 | Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. Accordingly, IBM has released Redbooks, in part to assist institutions in moving high performance computing applications to the cloud.
Jun 06, 2013 | The San Diego Supercomputer Center launched a public cloud system for area universities, designed specifically to run on commodity hardware with high-performance solid-state drives. The system, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that deliver affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.