March 12, 2007
Although the company is still in stealth mode, we were able to conduct a brief Q&A with two members of the Grid-X team: Bill Poires, board member and acting director of business relations; and Mike, a source close to the company who wished to remain anonymous. The two shed some light on the company's 100 GbE offload technology, its target markets and what led to its formation.
GRIDtoday: First off, what does Grid-X do? What is the technology behind the company?
MIKE: Grid-X designs IP-level and board-level ultra high-speed offload engines for Grid computing applications. The technology is based on proprietary coding and algorithms. This is necessary because as network speeds approach and exceed 10 Gbps, standard TCP offload engines fail.
Gt: The company is called "Grid"-X. How does the Grid-X offload engine work within a Grid computing (or similar) environment?
MIKE: Grid-X offload engines are being designed to accommodate the standard interfaces and commands used in Globus, for example. The Grid-X engine is inserted into a host backplane, and the network side connects either in 10 GbE aggregates or directly to 100 GbE.
Gt: What makes the Grid-X offload engine different and/or better than competing solutions?
BILL POIRES: There is no similar equipment out there. It runs at faster speeds and on tomorrow's networks. Our technology is advanced because of both the people and the product: the architects designing it are world-renowned, and the product uses unique, patent-pending offload approaches.
Gt: 100 GbE is a lot of bandwidth. What is the market for such high performance? What kinds of customers has this high performance promise attracted thus far?
POIRES: We are targeting government-based supercomputer facilities; one of the national labs is a beta site tester. Commercially, we have one big defense firm that plans on using it in aerospace. We have three beta sites in total.
Gt: Are these the markets in which you will focus future sales efforts?
POIRES: Yes, although the present market is limited. We're banking on 100 GbE taking off.
Gt: What do you see as the future of the datacenter, and how has this affected the development of the Grid-X offload engine?
MIKE: Based on our discussions with supercomputing laboratories, we know it's InfiniBand versus 10 GbE. Since Ethernet is so standard and can be found everywhere, we assume 10-100 GbE will win as the intra-cluster fabric, the inter-cluster fabric and, at the very least, the fabric for Grid cluster WANs.
Gt: Can you speak a little about the formation of Grid-X? Without giving away any competitive secrets, who (or from what backgrounds) are the founders/stakeholders, what was the impetus for forming the company, etc.?
POIRES: Grid-X is a spin-out of an existing offload engine firm. It was founded by seven seed investors, including business people from supercomputing, hardware engineering and manufacturing companies. I'm a former oil trader and invested a lot of my savings into the company. We started in a garage and are now in an office park in Wakefield, Mass.
Gt: When will Grid-X emerge from stealth mode and take its technology public?
POIRES: In six to 12 months.
Gt: What can we expect to see from the company in the meantime, and what can we expect to see at that time?
POIRES: You will see occasional press releases tracking our progress. When we emerge from stealth, you can expect to see the full management team bios ... and the world's fastest offload engine.
Jun 17, 2013 |
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service, partnering with Verne Global and its Icelandic datacenter, which is known for its green computing credentials.
Jun 12, 2013 |
Cloud computing is gaining ground among mid-sized institutions that are looking to expand their experimental high performance computing resources. In response, IBM released a set of what it calls Redbooks, in part to help institutions move high performance computing applications to the cloud.
Jun 06, 2013 |
The San Diego Supercomputer Center launched a public cloud system for universities in the area, designed specifically to run on commodity hardware with high performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California system.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.