December 12, 2012
RESTON, Va., Dec. 12 – ScienceLogic Inc. today announced enhancements to the company's award-winning data center and cloud management platform to give enterprise IT managers real-time, multi-tenant insight, visibility, and control over private cloud environments. The ScienceLogic platform provides unified monitoring out of the box for integrated converged compute stacks like FlexPod and VCE Vblock, and for the underlying technologies -- Cisco UCS, NetApp, EMC, VMware, Xen, Microsoft Hyper-V -- designed specifically for deploying agile private clouds.
"As more enterprises deploy private and hybrid cloud environments, they must deal with levels of complexity, scale, and speed unlike anything they have faced previously," said Antonio Piraino, CTO, ScienceLogic. "Enterprise clouds require a highly integrated approach to monitoring and management, across a heterogeneous set of vendors and technologies. Traditional IT management provides only point solutions that were never engineered or architected to work together -- per vendor, per technology, per data silo. Our customers are able to leapfrog the time and investment needed to integrate these disparate solutions, launch private and hybrid cloud environments confidently, and get to market faster for true competitive advantage."
Enterprise IT teams that want to succeed at deploying and operating private clouds, launching IT services as rapidly as the business requires, will increasingly model their operations on leading cloud service providers and become a "service provider" to the rest of their organizations. Architected from the ground up to handle dynamic, heterogeneous computing environments at scale, the ScienceLogic Smart IT solutions offer the best ROI in the industry and are the platform of choice for leading global service providers such as Equinix, Fasthosts, and Dimension Data.
The ScienceLogic platform integrates the core IT infrastructure management functions needed to run today's complex, distributed computing infrastructures -- including physical and virtual systems, performance, network, service, event and asset management capabilities, as well as multi-tenancy, runbook automation and service desk -- all in one product with one unified code base, user interface and centralized data repository.
The standard for centralized, "Smart IT" operations and dynamic cloud management across any mix of data center and cloud environments, ScienceLogic enables service providers and enterprises to improve IT efficiency as well as deliver differentiated service offerings. The ScienceLogic platform unites and correlates critical IT functions and data to provide a constantly updated, actionable view of business service delivery. In contrast to legacy tools, ScienceLogic pre-integrates event, fault, availability, performance and asset management, as well as service desk and runbook automation, in a single product. The platform is easily extended to manage both existing and emerging technologies and applications, making it a solid foundation for the fluidity of modern IT environments.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational demand that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
The private industry least likely to adopt public cloud services for data storage is financial services. Holding the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are mostly moving toward private cloud services, and doing so at great cost.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD application on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.