January 30, 2013
SUNNYVALE, Calif., Jan. 30 – AMD today announced that its SeaMicro SM15000 server is certified for the Rackspace Private Cloud. "Nova in a Box" and "Swift in a Rack" are, respectively, the most efficient compute and highest-capacity storage solutions validated for OpenStack. The certification for mass compute and object storage ensures that enterprise deployments of Rackspace Private Cloud on AMD's SeaMicro SM15000 servers are a proven, rigorously tested solution, giving enterprises around the world peace of mind. Rackspace launched the Rackspace Private Cloud Software, part of the Rackspace Open Cloud platform, in August 2012; since then, thousands of organizations in more than 125 countries across all continents have downloaded the product.
The Rackspace Private Cloud simplifies the management of large pools of compute, storage and networking resources using OpenStack. As cloud-based services grow, data centers must scale their services to new and ever-growing requirements. Reference architectures and test criteria for OpenStack solutions help to ensure consistent performance, supportability and compatibility. With the certification for mass compute and object storage, AMD is at the forefront of providing a thoroughly tested private cloud solution using OpenStack that is simple for enterprises to deploy.
"We are excited to be at the forefront of OpenStack technology and proud to team up with Rackspace," said Andrew Feldman, corporate vice president and general manager, AMD Data Center Server Solutions. "The combination of Rackspace Private Cloud and AMD SeaMicro servers will change the way the industry deploys and manages large pools of compute."
The AMD SeaMicro SM15000 server has been certified for the Rackspace Private Cloud reference architectures "Nova in a Box" (mass compute) and "Swift in a Rack" (object storage).
"We are seeing rapid adoption of Rackspace Private Cloud Software powered by OpenStack," said Paul Rad, vice president, Private Cloud, Rackspace. "The AMD SeaMicro SM15000 system offers Rackspace Private Cloud customers unprecedented density, storage capacity and performance, bringing enterprises one step closer to running the cloud in their own data centers."
AMD's SeaMicro SM15000 system is the highest-density, most energy-efficient server on the market. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking and up to five petabytes of storage with a 1.28 terabit-per-second high-performance supercompute fabric, called Freedom Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components, yielding a simpler, more efficient operational environment.
AMD's SeaMicro server product family currently supports the next-generation AMD Opteron 4300 Series processor, the Intel Xeon processors E3-1260L ("Sandy Bridge") and E3-1265Lv2 ("Ivy Bridge"), as well as the Intel Atom processor N570. The SeaMicro SM15000 also supports the Freedom Fabric storage products, enabling a single system to connect to up to five petabytes of storage capacity. This approach delivers the benefits of expensive and complex solutions such as network attached storage (NAS) and storage area networks (SAN) with the simplicity and low cost of direct attached storage.
AMD is a semiconductor design innovator leading the next era of vivid digital experiences with its ground-breaking AMD Accelerated Processing Units (APUs), which power a wide range of computing devices. AMD's server computing products are focused on driving industry-leading cloud computing and virtualization environments. AMD's graphics technologies are found in solutions ranging from game consoles and PCs to supercomputers.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational loads that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
The private industry least likely to adopt public cloud services for data storage is financial services. Because they hold the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are instead moving toward private cloud services, and doing so at great cost.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company Animal Logic is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can put these extraordinary data repositories to use, however, organizations must first harness and manage their data stores using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.