March 12, 2013
COSTA MESA, Calif., March 12 — Emulex Corporation today announced broad partner adoption of its LightPulse 16Gb Fibre Channel (16GFC) Host Bus Adapters (HBAs) worldwide, for joint virtualization, flash storage and data archiving and backup solutions. DataCore Software, GreenBytes, Pure Storage, Quantum, and X-IO have certified Emulex 16GFC HBAs with their solutions, enabling the most scalable solutions for databases, virtualized and transaction-intensive environments in today's demanding data centers. Emulex 16GFC HBAs are the most widely deployed 16GFC HBAs by OEMs, with more than 70 percent of the overall revenue market share for 2012.
"Partners and appliance integrators continue to certify and adopt Emulex 16GFC I/O connectivity solutions because we offer the best performing HBA available today, enabling the most compelling application throughput and I/O scalability," said Shaun Walsh, senior vice president of marketing and corporate development, Emulex. "In addition, our superior reliability and extensive interoperability testing allows for seamless deployment across multiple platforms."
Three key technologies that require the performance and low latency benefits of Emulex 16GFC adapters include virtualization, flash storage and data backup and recovery. Partners have certified Emulex 16GFC HBAs in the following ways:
Virtualization
"Our longstanding partnership with Emulex has benefited thousands of customers around the world. The Emulex 16GFC HBAs deliver very high throughput and low latency, key to enabling deployments of densely virtualized servers," said Carlos M. Carreras, vice president of alliances and business development at DataCore Software. "Combined with Emulex 16GFC technology, DataCore's SANsymphony-V storage hypervisor addresses the performance, availability and scalability needs of today's mission-critical applications such as Oracle, SAP and SQL in virtual environments."
"Delivering desktop applications to hundreds or thousands of end users as a managed centralized service places heavy demands on centralized storage, a problem that is exceptionally well addressed with 16GFC connectivity for the back-end servers," said Bob Petrocelli, founder and CTO, GreenBytes. "Emulex 16GFC HBAs are an ideal complement to GreenBytes' IO Offload Engine because of their efficiency and performance per watt, which is necessary for the toughest desktop virtualization environments today."
"X-IO's ISE-2 and Hyper ISE storage systems are geared towards a performance and capacity balance for server and desktop virtualization, cloud computing and I/O-intensive DBMS applications, where maximum performance and low TCO are required," said Blair Parkhill, vice president of marketing, X-IO. "Together with Emulex 16GFC HBAs, we can rapidly deliver critical information across the enterprise, support larger server virtualization deployments and scalable cloud initiatives, and offer the performance to match new multi-core processors, and faster server host bus architectures."
Flash Storage
"Flash memory is on pace to replace the spinning hard drive in performance storage, but is too expensive to deploy broadly. Pure Storage's all-flash storage array overcomes this price challenge, delivering hundreds of terabytes of high-performance flash in a highly-available, plug-compatible array form-factor, all for less than the cost of spinning disk," said Matt Kixmoeller, vice president, products, Pure Storage. "By certifying Emulex's LightPulse 16GFC HBAs with our array, we enable faster and more flexible connectivity solutions for our customers."
Data Archiving and Backup
"Quantum's Scalar i500 and i6000 libraries are ideal for long-term data storage and archiving for enterprises faced with massive data growth that needs to be stored for longer periods of time for compliance reasons or because it is business-critical," said Eric Bassier, director of product marketing, Quantum. "These companies need a cost-effective, reliable, and easy-to-manage solution with options to scale from terabytes up to many petabytes, and when coupled with Emulex 16GFC HBAs, Quantum's Scalar libraries deliver improved backup and recovery performance to address customers' data storage requirements, now and in the future."
Emulex, the leader in network connectivity, monitoring and management, provides hardware and software solutions for global networks that support enterprise, cloud, government and telecommunications. Emulex's products enable unrivaled end-to-end application visibility, optimization and acceleration. The Company's I/O connectivity offerings, including its line of ultra high-performance Ethernet and Fibre Channel-based connectivity products, have been designed into server and storage solutions from leading OEMs, including Cisco, Dell, EMC, Fujitsu, Hitachi, HP, Huawei, IBM, NetApp and Oracle, and can be found in the data centers of nearly all of the Fortune 1000. Emulex's monitoring and management solutions, including its portfolio of network visibility and recording products, provide organizations with complete network performance management at speeds up to 100Gb Ethernet. Emulex is headquartered in Costa Mesa, Calif., and has offices and research facilities in North America, Asia and Europe.
Source: Emulex Corp.
Researchers from the Suddhananda Engineering and Research Centre in Bhubaneswar, India developed a job scheduling system, which they call Service Level Agreement (SLA) scheduling, intended to provide resource provisioning comparable to that of in-house systems. They combined it with an on-demand resource provisioner to optimize the utilization of virtual machines.
Experimental scientific HPC applications are continually being moved to the cloud, a trend covered here in several capacities over the last couple of weeks. Among that coverage, CloudSigma co-founder and CEO Robert Jenkins penned an article for HPC in the Cloud discussing the emergence of cloud technologies to supplement the research capabilities of big scientific initiatives like CERN and the European Space Agency (ESA)...
When moving excess or experimental HPC applications to a cloud environment, there will always be obstacles. Were that not the case, the cost effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stewart Ward and Adam Barker of the University of St. Andrews produced an intriguing report on the state of cloud computing, paying particular attention to the problems facing it.
Jun 17, 2013
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service, and in doing so has partnered with Verne Global, whose Icelandic datacenter is known for its focus on green computing.
Jun 12, 2013
Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. To that end, IBM has released a set of its Redbooks publications, in part to assist institutions in moving high performance computing applications to the cloud.
Jun 06, 2013
The San Diego Supercomputer Center launched a public cloud system for area universities, designed specifically to run on commodity hardware with high performance solid-state drives. The system, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.