June 23, 2010
Tilera, a designer of multicore and manycore processors, announced today that it plans to double the number of cores on a chip, and thus the compute per rack, every two years, with a three-year projection landing at 40,000 cores per rack in 2013. As the company noted in its release, "Cloud computing is restricted by legacy x86 processors that consume a lot of power while providing minimal performance improvements across generations. Tilera, not restricted by legacy cores, is working in collaboration with top cloud vendors and providing the highest performance-per-watt processors that dramatically reduce operating expenses for datacenters and cloud computing operators."
In an interview this morning with HPC in the Cloud, Ihab Bishara, Director of Cloud Computing Products at Tilera, stated, "We currently have a 64-core processor and will have a 100-core processor out next year. The modular nature of our TILE architecture enables us to easily scale to higher and higher core counts, unlike other processor vendors. Because of this, we are planning a 200-core processor for 2013. With eight of these in a server and 20 servers in a rack, that will enable 40,000 cores."
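Taken at face value, the quoted figures are easy to sanity-check. The variable names below are just labels for a back-of-the-envelope calculation, not anything from Tilera:

```python
# Back-of-the-envelope check of the rack-density figures quoted above.
cores_per_chip = 200      # projected 2013 part
chips_per_server = 8      # "eight of these in a server"
servers_per_rack = 20     # "20 servers in a rack"

cores_per_rack = cores_per_chip * chips_per_server * servers_per_rack
print(cores_per_rack)  # 32000
```

Note that the quoted figures multiply to 32,000 cores per rack; reaching the 40,000-core headline number would require a denser configuration, for example 25 servers per rack.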
Granted, Tilera's chips lack x86 compatibility, but if the company can demonstrate solid power and performance figures, the offering will certainly be attractive to many, especially as large-scale datacenters struggle to add servers at the rate they would like. Intel and AMD are making the same play, but they are lagging well behind Tilera's far more aggressive manycore roadmap. It is worth noting that SeaMicro, which emerged from the ether last week with its announcement of a low-power server sporting 512 Intel Atom cores, is another player here, as are those working on ARM chips for similar classes of servers. SeaMicro is pursuing the same general idea, however, and will likely not deliver the same power-performance punch, since it remains tied inextricably to the legacy x86 architecture.
Bishara offered some insight into the company's plans to meet its projected goal, as well as into its current cloud-optimized servers. As Bishara put it, "x86 processor technologies are built to solve the problem of how to run a single-threaded application, like Windows, using frequency and acceleration. Over the years, to accomplish that goal, the cores grew to be very large and power hungry. Tilera provides very high-performance, small and power-efficient cores that fit well with parallel, cloud-type applications."
The iMesh Architecture
Bishara described the iMesh architecture and what it brings to the goal of 40,000 cores by 2013: "Tilera's architecture eliminates the dependence on a bus, and instead puts a non-blocking, cut-through switch on each processor core, which connects it to a two-dimensional on-chip mesh network called iMesh (Intelligent Mesh). The iMesh provides each tile with more than a terabit per second of interconnect bandwidth, creating a more efficient distributed architecture and eliminating any on-chip data congestion -- a problem other processor companies have not tackled. In addition to providing so much bandwidth for communication, the iMesh has specific networks to manage coherency between the cores."
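To see why a switched mesh scales where a shared bus does not, consider a toy hop-count model. This is purely illustrative and assumes simple XY (dimension-ordered) routing on an 8x8 grid matching the current 64-core part; it is not a description of Tilera's actual routing logic:

```python
# Toy illustration (not Tilera's implementation): on a 2D mesh, a
# message between two tiles crosses a Manhattan-distance number of
# per-tile switches, rather than every core contending for one bus.
def mesh_hops(src, dst, width=8):
    """Hop count under simple XY routing on a width x width mesh."""
    sx, sy = src % width, src // width
    dx, dy = dst % width, dst // width
    return abs(dx - sx) + abs(dy - sy)

# Worst case on an 8x8 (64-tile) mesh: opposite corners.
print(mesh_hops(0, 63))  # 14
```

Even the worst-case path on a 64-tile mesh is only 14 switch hops, and disjoint paths can carry traffic concurrently, which is the source of the aggregate bandwidth figure quoted above.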
Who's on Board with Tilera?
It is clear that SGI has been working with Tilera's offerings for some time, as a statement from Mark Barrenechea, CEO of SGI, is included in the release. He stated that his company is "working with Tilera to bring this compelling technology to our customers across HPC, cloud, and government market segments." Bishara also confirmed there has been significant collaboration but was unwilling to elaborate, saying only that news from the partnership is coming in the near future, particularly in areas of overlapping benefit in finance, government, and cloud more generally.
In Bishara's words, "We are working with many names you would recognize, but besides SGI and Quanta Computer, we are not able to name any names at this time. In terms of OPEX, our new server enables companies to replace eight state-of-the-art Intel Xeon 5500-based servers with one Tilera server -- at the same performance and a fraction of the power. This is key for datacenters that have maxed out the number of systems they can pack in. So this system will enable 8X more performance in the same amount of space, or the same performance at one-sixth the power. This means huge savings in both power consumption and real estate."
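Taking the quoted consolidation ratio at face value, the OPEX claim can be sketched numerically. The per-server wattage below is a placeholder assumption for illustration only, not a figure from Tilera or Intel:

```python
# Rough OPEX sketch using the ratios quoted above. The absolute
# wattage is an assumed placeholder, not a vendor specification.
xeon_servers = 8
watts_per_xeon_server = 400   # assumed placeholder figure

old_power = xeon_servers * watts_per_xeon_server  # 3200 W for 8 servers
# Quoted claim: same performance at roughly 1/6 the power draw.
new_power = old_power / 6
print(round(new_power))  # 533
```

Under these assumptions, an eight-server Xeon footprint drawing 3,200 W would be replaced by a single chassis drawing roughly 530 W, which is where both the power and the rack-space savings come from.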
Posted by Nicole Hemsoth - June 22, 2010 @ 9:05 PM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.