July 19, 2011
IBM is expected to reveal its next-generation grid-based XIV Storage System this week, which it hopes will appeal to performance-conscious customers running virtualized server environments, big data analytics operations and cloud computing setups.
The new generation of XIV won't arrive until September for customers ready to fork over what ComputerWorld reported as $2 million, but news of the improvements is already circulating, especially among the growing numbers of virtualized, big data and cloud customers looking for enhanced options.
According to an advance report, IBM will tout the XIV Storage System as four times faster than the previous model, with enhanced features that ease management and allow the system to support more workloads, making it a better fit for enterprise users.
As Lucas Mearian stated in ComputerWorld today, “with this release, IBM moves from Intel Nehalem processors to the latest Westmere chips. It also upgraded from a gigabit Ethernet backbone to an InfiniBand interconnect and moved from 4Gbit/sec Fibre Channel to 8Gbit/sec Fibre Channel front-end ports.”
Each of the new-generation models will come standard with two InfiniBand switches with redundant inter-module connectivity for up to 600Gbit/sec of total internal bandwidth, along with an increase in the number of iSCSI ports, going from a mere six to 22.
Cancilla, IBM’s VP of enterprise disk storage, told ComputerWorld that “we’re starting to see demand pick up for IP connectivity, though I’d still say it’s slower than what we in the industry predicted it would be by this point in time…this will help customers get prepared for that future transition into greater workload for IP connectivity.”
In addition, Cancilla said that IBM plans to put an SSD into every disk drive drawer to act as a caching layer sitting between the controller and the spinning disk: “This will have all the performance benefits of SSD but it doesn’t complicate management from a data tiering standpoint.”
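To make the distinction Cancilla is drawing more concrete, here is a minimal, hypothetical sketch of a read-through SSD cache sitting in front of spinning disk. It is not IBM’s implementation; it simply illustrates caching (one authoritative copy on disk, with hot blocks transparently copied to faster media) as opposed to tiering, where data must be migrated and placed per tier.

```python
# Hypothetical sketch only: a read-through cache standing in for an SSD layer
# in front of spinning disk. Hot blocks are served from the cache; the
# authoritative copy always stays on the backing (disk) store, so there is
# no tier-placement decision for an administrator to manage.

class ReadThroughCache:
    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store     # dict: block_id -> data ("spinning disk")
        self.capacity = capacity_blocks  # how many blocks the "SSD" can hold
        self.cache = {}                  # the fast layer
        self.lru = []                    # least-recently-used bookkeeping

    def read(self, block_id):
        if block_id in self.cache:       # hit: served at SSD speed
            self._touch(block_id)
            return self.cache[block_id]
        data = self.backing[block_id]    # miss: fall through to disk
        self._admit(block_id, data)
        return data

    def write(self, block_id, data):
        self.backing[block_id] = data    # disk copy stays authoritative
        self._admit(block_id, data)      # warm the cache with the new data

    def _touch(self, block_id):
        self.lru.remove(block_id)
        self.lru.append(block_id)

    def _admit(self, block_id, data):
        if block_id in self.cache:
            self._touch(block_id)
        else:
            if len(self.cache) >= self.capacity:
                victim = self.lru.pop(0)  # evict the coldest block
                del self.cache[victim]
            self.lru.append(block_id)
        self.cache[block_id] = data


# Example: two reads of the same block; the second is served from the cache.
disk = {n: f"block-{n}" for n in range(1000)}
cache = ReadThroughCache(disk, capacity_blocks=64)
cache.read(42)
cache.read(42)
```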
According to Cancilla, the grid architecture of the XIV line allows performance to grow with the addition of new disks, and the simple configuration of the array was planned so that setup and management wouldn’t be a hassle.
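As a rough illustration of why a grid design scales this way, the sketch below (an assumption for illustration, not XIV’s actual placement algorithm) hashes fixed-size logical partitions across however many drives the grid has, so a volume’s I/O naturally fans out over more spindles as drives are added.

```python
import hashlib

def place_partition(volume_id: str, partition_index: int, num_drives: int) -> int:
    """Hypothetical placement: hash each logical partition to a drive so data
    spreads evenly across the whole grid (not IBM's actual algorithm)."""
    key = f"{volume_id}:{partition_index}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_drives

# With more drives in the grid, the same volume's partitions land on more
# spindles, so aggregate throughput grows roughly with drive count.
for drives in (90, 180):
    placements = [place_partition("vol-7", i, drives) for i in range(10_000)]
    busiest = max(placements.count(d) for d in set(placements))
    print(f"{drives} drives: busiest drive holds {busiest} of 10,000 partitions")
```

A production array would rebalance data incrementally when drives are added rather than recomputing a simple modulo hash, but the even spread of load is the point of the illustration.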
Mearian added that while the array still doesn’t migrate data across multiple disk types depending on performance needs, it can scale from 27 terabytes of capacity up to 161 terabytes. He also noted that IBM has added non-disruptive code updates, data snapshots, and both synchronous and asynchronous replication.
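For readers less familiar with the replication terms, the following toy sketch (illustrative only, not XIV’s code) contrasts the two semantics: a synchronous write is acknowledged only after the remote copy is safe, while an asynchronous write is acknowledged immediately and shipped to the replica later.

```python
import time

class Replica:
    """Stand-in for a remote array; apply() models network plus remote write latency."""
    def __init__(self):
        self.blocks = {}

    def apply(self, block_id, data):
        time.sleep(0.001)                # simulated round trip to the remote site
        self.blocks[block_id] = data

def write_sync(primary, replica, block_id, data):
    """Synchronous replication: no data loss on failover, but every write
    waits for the remote copy before it is acknowledged."""
    primary[block_id] = data
    replica.apply(block_id, data)
    return "ack"

def write_async(primary, replay_log, block_id, data):
    """Asynchronous replication: acknowledge right away and ship the update
    later, so writes are fast but the replica can lag the primary."""
    primary[block_id] = data
    replay_log.append((block_id, data))  # drained to the replica in the background
    return "ack"
```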
Full story at ComputerWorld