January 30, 2013
SUNNYVALE, Calif., Jan. 30 – AMD today announced that its SeaMicro SM15000 server is certified for the Rackspace Private Cloud. "Nova in a Box" and "Swift in a Rack" are, respectively, the most efficient compute and highest storage capacity solutions validated for OpenStack. The certification for mass compute and object storage ensures that enterprise deployments of Rackspace Private Cloud on AMD's SeaMicro SM15000 servers rest on a proven, rigorously tested solution, giving enterprises around the world peace of mind. Rackspace launched the Rackspace Private Cloud Software, part of the Rackspace Open Cloud platform, in August 2012; since then, thousands of organizations in more than 125 countries spanning every continent have downloaded the product.
The Rackspace Private Cloud simplifies the management of large pools of compute, storage and networking resources using OpenStack. As cloud-based services grow, data centers must scale their services to new and ever-growing requirements. Reference architectures and test criteria for OpenStack solutions help to ensure consistent performance, supportability and compatibility. With the certification for mass compute and object storage, AMD is at the forefront of providing a thoroughly tested private cloud solution using OpenStack that is simple for enterprises to deploy.
"We are excited to be at the forefront of OpenStack technology and proud to team up with Rackspace," said Andrew Feldman, corporate vice president and general manager, AMD Data Center Server Solutions. "The combination of Rackspace Private Cloud and AMD SeaMicro servers will change the way the industry deploys and manages large pools of compute."
The AMD SeaMicro SM15000 server has been certified for the following Rackspace Private Cloud reference architectures:
- "Nova in a Box" for mass compute
- "Swift in a Rack" for object storage
"We are seeing rapid adoption of Rackspace Private Cloud Software powered by OpenStack," said Paul Rad, vice president, Private Cloud, Rackspace. "The AMD SeaMicro SM15000 system offers Rackspace Private Cloud customers unprecedented density, storage capacity and performance, bringing enterprises one step closer to running the cloud in their own data centers."
AMD's SeaMicro SM15000 system is the highest-density, most energy-efficient server in the market. In 10 rack units, it links 512 compute cores, 160 gigabits per second of I/O networking and up to five petabytes of storage with a 1.28 terabyte-per-second high-performance supercompute fabric, called Freedom Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simple operational environment.
AMD's SeaMicro server product family currently supports the next generation AMD Opteron 4300 Series processor, the Intel Xeon processors E3-1260L ("Sandy Bridge") and E3-1265Lv2 ("Ivy Bridge"), as well as the Intel Atom processor N570. The SeaMicro SM15000 also supports the Freedom Fabric storage products, enabling a single system to connect with up to five petabytes of storage capacity. This approach delivers the benefits of expensive and complex solutions such as network attached storage (NAS) and storage area networking (SAN) with the simplicity and low cost of direct attached storage.
AMD is a semiconductor design innovator leading the next era of vivid digital experiences with its ground-breaking AMD Accelerated Processing Units (APUs) that power a wide range of computing devices. AMD's server computing products are focused on driving industry-leading cloud computing and virtualization environments. AMD's superior graphics technologies are found in a variety of solutions, ranging from game consoles and PCs to supercomputers.
Researchers from the Suddhananda Engineering and Research Centre in Bhubaneswar, India, developed a job scheduling system, which they call Service Level Agreement (SLA) scheduling, intended to deliver resource provisioning comparable to that of in-house systems. They combined it with an on-demand resource provisioner to optimize the utilization of virtual machines.
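The general idea described above — place jobs under SLA constraints, and provision new virtual machines on demand only when existing capacity cannot satisfy a job — can be sketched as follows. This is a minimal illustrative sketch, not the researchers' actual implementation; the `Job` and `VM` classes, the core counts, and the earliest-deadline-first ordering are all assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int          # cores the job requires
    sla_deadline: float # latest acceptable start time, in seconds (illustrative SLA term)

@dataclass
class VM:
    cores: int
    used: int = 0

    def can_host(self, job: Job) -> bool:
        return self.cores - self.used >= job.cores

def schedule(jobs, vms, vm_cores=4):
    """Place each job on an existing VM when possible; otherwise
    provision a new VM on demand so capacity never blocks the SLA."""
    placements = []
    # Serve the tightest SLA first (earliest-deadline-first, an assumed policy).
    for job in sorted(jobs, key=lambda j: j.sla_deadline):
        target = next((vm for vm in vms if vm.can_host(job)), None)
        if target is None:
            target = VM(cores=vm_cores)  # on-demand provisioning step
            vms.append(target)
        target.used += job.cores
        placements.append((job.name, target))
    return placements, vms

if __name__ == "__main__":
    jobs = [Job("a", 2, 10.0), Job("b", 3, 5.0), Job("c", 2, 20.0)]
    placements, vms = schedule(jobs, [])
    print(len(vms), "VMs provisioned for", len(placements), "jobs")
```

Packing jobs onto existing VMs before provisioning new ones is what drives up utilization: capacity grows only when the SLA would otherwise be at risk.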
Experimental scientific HPC applications continue to move to the cloud, as covered here in several capacities over the last couple of weeks. Among those stories, CloudSigma co-founder and CEO Robert Jenkins penned an article for HPC in the Cloud discussing the emergence of cloud technologies to supplement the research capabilities of big scientific initiatives such as CERN and the European Space Agency (ESA)...
When considering moving excess or experimental HPC applications to a cloud environment, there will always be obstacles. Were that not the case, the cost effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stewart Ward and Adam Barker of the University of St. Andrews produced an intriguing report on the state of cloud computing, paying a significant amount of attention to the problems facing cloud computing.
Jun 17, 2013 |
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service, partnering with Verne Global and its Icelandic datacenter, which is known for its green computing credentials.
Jun 12, 2013 |
Cloud computing is gaining ground among mid-sized institutions that are looking to expand their experimental high performance computing resources. Accordingly, IBM released a publication in its Redbooks series, in part to assist institutions in moving high performance computing applications to the cloud.
Jun 06, 2013 |
The San Diego Supercomputer Center launched a public cloud system for universities in the area, designed specifically to run on commodity hardware with high performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.