May 05, 2008
With IT expenditures, both capital (CAPEX) and operational (OPEX), spiraling out of control, businesses have swarmed to virtualization to help consolidate servers, cut energy costs and maximize utilization, among other benefits. But the server virtualization being offered by vendors like VMware is just the tip of the clichéd iceberg.
With their collective eyes set on drastically cutting CAPEX and OPEX while significantly increasing flexibility and scalability, a new breed of vendors is targeting the rest of the datacenter for virtualization. One of these companies is 3Leaf Systems, a Silicon Valley startup with a simple vision: to take a pool of x86 servers and decompose them into I/O, compute and memory, and make each available on demand. 3Leaf has a two-phase Virtual Compute Environment approach, says Senior Director of Marketing Rob Reiner, starting with I/O virtualization (currently available) and culminating with compute and memory virtualization spanning across physical machines (available beginning in 2009).
It Begins with I/O
The company’s virtual I/O software strips out local storage, NICs and HBAs, replacing them with virtual versions. The servers are then connected via InfiniBand (although Ethernet functionality is on the way) to a commodity switch fabric, which is connected to a pair of 3Leaf’s V8000 virtual I/O servers. The V8000s connect directly to the SAN and LAN, and the entire setup requires approximately 80 percent fewer standby servers and 80 percent fewer SAN and LAN ports. According to Reiner, it also has saved proof-of-concept customers an average of 50 percent in CAPEX and OPEX.
Speaking specifically about OPEX, Reiner says the big savings come because server configurations are saved as profiles, so provisioning is as easy as pointing and clicking. And because the profiles are portable, a failed server’s workload can be moved to any available spare, which means users won’t need as many spare servers. In addition, the redundant switch and V8000 architectures add resiliency, which saves time and money in case of a hardware failure.
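To make the profile idea concrete, here is a minimal sketch, in Python, of what such a portable server profile might look like. 3Leaf has not published a programming interface, so every name here (ServerProfile, VirtualNIC, apply_profile and so on) is hypothetical; the point is simply that a server’s I/O identity becomes data that can be re-pointed at a spare machine.

```python
# Hypothetical sketch of 3Leaf-style server profiles -- 3Leaf has not
# published an API, so all names and fields here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VirtualNIC:
    name: str
    mac: str            # virtual MAC, decoupled from physical hardware
    vlan: int

@dataclass
class VirtualHBA:
    name: str
    wwn: str            # virtual World Wide Name presented to the SAN

@dataclass
class ServerProfile:
    """Everything defining a server's I/O identity, kept separate from
    the physical box so it can be moved to a spare on failure."""
    name: str
    nics: list[VirtualNIC] = field(default_factory=list)
    hbas: list[VirtualHBA] = field(default_factory=list)
    boot_lun: str = ""

def apply_profile(profile: ServerProfile, physical_server: str) -> None:
    """Bind a profile to a physical server: point-and-click provisioning
    reduces to re-targeting this call at a different machine."""
    print(f"Provisioning {physical_server} as '{profile.name}': "
          f"{len(profile.nics)} vNICs, {len(profile.hbas)} vHBAs, "
          f"boot from {profile.boot_lun}")

# Failover is just re-applying the same profile to a spare:
web01 = ServerProfile("web01",
                      nics=[VirtualNIC("eth0", "02:00:00:00:00:01", vlan=10)],
                      hbas=[VirtualHBA("hba0", "50:00:3a:00:00:00:00:01")],
                      boot_lun="san-lun-42")
apply_profile(web01, "rack1-blade07")   # normal operation
apply_profile(web01, "rack2-blade03")   # after a hardware failure
```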
3Leaf’s approach differs from that of other I/O virtualization providers, says Reiner, because 3Leaf creates an open architecture leveraging commodity parts, including QLogic and Emulex HBAs, to ease integration and certification. Plug-in cards and drivers also are commodity, which lets the company enable new features as they become available simply by updating the drivers. By using commodity processors in the V8000 and supporting commercial switches, 3Leaf can follow the technology curves in these areas, as well, says Reiner. The real added value comes from the software, so 3Leaf can pick up hardware advances without heavy investment. 3Leaf’s virtual HBAs and NICs also have adjustable QoS built in, so users get guaranteed bandwidth service levels.
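As a rough illustration of what adjustable QoS amounts to in practice (again with invented names, since the real enforcement lives in 3Leaf’s software and fabric rather than in user code), the idea is that the system will not promise more guaranteed bandwidth than the shared uplink can carry:

```python
# Hypothetical illustration of per-vNIC bandwidth guarantees; the real
# enforcement happens in the I/O virtualization layer, not user code.
LINK_CAPACITY_MBPS = 10_000   # e.g., one 10 GbE uplink shared by vNICs

guarantees = {"web01-eth0": 2_000, "db01-eth0": 4_000}  # guaranteed Mbps

def set_guarantee(vnic: str, mbps: int) -> None:
    """Adjust a vNIC's guaranteed bandwidth, refusing to oversubscribe
    the guaranteed portion of the shared uplink."""
    committed = sum(v for k, v in guarantees.items() if k != vnic)
    if committed + mbps > LINK_CAPACITY_MBPS:
        raise ValueError(f"only {LINK_CAPACITY_MBPS - committed} Mbps left")
    guarantees[vnic] = mbps

set_guarantee("app01-eth0", 3_000)   # fits: 2000 + 4000 + 3000 <= 10000
```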
Summing up I/O virtualization, George Crump, founder and president of analyst firm Storage Switzerland, said, “What it does today is allow you to solve that same sort of virtualization theme. It allows you to aggregate and better utilize existing network pipes for both storage and messaging ... so you can reduce overall port count in the enterprise and reduce, in some cases, HBA count and things of that nature.” Crump added that while I/O virtualization appeals mainly to large enterprises, it will not approach the appeal of 3Leaf’s virtual compute and memory solution once that becomes available.
Completing the Virtual Compute Environment
Crump says the advanced virtualization adopters with whom he has spoken are crying out for something like 3Leaf’s Virtual Compute Environment (VCE), often telling him they “need to get out of the box. VMware was great, it gave me a lot of good ideas, but I’m essentially landlocked in the sheet metal [of the individual server].” They want greater flexibility, the ability to scale when necessary, and the ability to pull in idle processors at peak times, said Crump, and these capabilities are not too common outside of grid environments.
Announced in early April, 3Leaf’s “game-changing” virtual compute and memory server will complete the VCE strategy and form truly dynamic datacenters. The solution enables “expandable” servers that can share memory and compute resources across physical boundaries thanks to 3Leaf’s one-of-a-kind hardware-based scheme, says Reiner. “It’s as if this is one large expandable [and] contractable server,” he added.
The solution allows for QoS management via built-in policies for failover, time-based events and resource monitoring, and it supports parallel processing, as well. For this reason, Reiner noted that 3Leaf expandable servers could be useful in some grid or cloud computing environments. However, understanding that some businesses require real-time performance management beyond 3Leaf’s scope, Reiner says the VCE solution is designed to work with external policy engines, and actually includes a third-party interface.
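The following sketch illustrates the three policy types Reiner describes (failover, time-based events and resource monitoring). 3Leaf’s actual policy format is not public, so the structure and every field name here are invented for illustration.

```python
# Hypothetical sketch of VCE-style policies; 3Leaf's actual policy
# format is not public, so this structure is illustrative only.
from datetime import time

policies = [
    # failover: if a node dies, re-home its resources to the pool
    {"trigger": "node_failure", "action": "reassign_to_pool"},
    # time-based: grow the batch server before the nightly run
    {"trigger": "clock", "at": time(22, 0),
     "action": "expand", "server": "batch01", "add_cores": 16},
    # resource monitoring: shrink anything persistently under-utilized
    {"trigger": "cpu_util_below", "threshold": 0.20, "for_minutes": 30,
     "action": "contract", "release_cores": 8},
]

def evaluate(event: dict) -> list[dict]:
    """Return the actions fired by an event; an external (third-party)
    policy engine could be consulted here instead, via the interface
    Reiner mentions."""
    return [p for p in policies if p["trigger"] == event["type"]]

print(evaluate({"type": "node_failure", "node": "rack1-blade07"}))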
Reiner says 3Leaf’s expandable server technology, which takes advantage of AMD’s coherent HyperTransport and Intel’s QuickPath Interconnect technologies paired with 3Leaf’s proprietary ASIC, will address the entire x86 market. Processors are connected to the 3Leaf ASIC, which then connects to a commodity 10 GbE or InfiniBand switch fabric. The silicon-based approach, says Reiner, will offer higher agility, scalability and utilization than software-based approaches, while -- like the V8000 -- drastically cutting CAPEX and OPEX.
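To see what “expandable” means in bookkeeping terms, here is a toy sketch of a logical server claiming cores and memory from a pool of physical nodes. The cache-coherent plumbing is done by 3Leaf’s ASIC and the switch fabric; none of these function or field names come from 3Leaf, and the sketch only models the resource accounting.

```python
# Hypothetical sketch of composing an "expandable server" from a pool
# of physical nodes; only the bookkeeping is modeled here, since the
# coherence itself is handled by 3Leaf's ASIC over the fabric.
pool = {  # free resources per physical node
    "node1": {"cores": 8, "mem_gb": 32},
    "node2": {"cores": 8, "mem_gb": 32},
    "node3": {"cores": 8, "mem_gb": 32},
}

def expand(server: dict, cores: int, mem_gb: int) -> None:
    """Grow a logical server by claiming resources from however many
    physical nodes it takes -- crossing the "sheet metal" boundary
    Crump describes."""
    for node, free in pool.items():
        take_c = min(cores, free["cores"])
        take_m = min(mem_gb, free["mem_gb"])
        if take_c or take_m:
            free["cores"] -= take_c
            free["mem_gb"] -= take_m
            server.setdefault(node, {"cores": 0, "mem_gb": 0})
            server[node]["cores"] += take_c
            server[node]["mem_gb"] += take_m
            cores -= take_c
            mem_gb -= take_m
        if cores == 0 and mem_gb == 0:
            return
    raise RuntimeError("pool exhausted")

big_job = {}
expand(big_job, cores=20, mem_gb=80)   # spans three physical nodes
print(big_job)
```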
“We’re not trying to make big SMP systems,” explained Reiner. “What we’re trying to do is deliver scalability and flexibility and agility into the x86 market. So within the enterprise datacenter, you can add resources and de-allocate resources as required to address changing business needs.”
But won’t 3Leaf’s solution conflict with current VMware or XenServer environments? Not so, says Crump, who believes 3Leaf’s hardware-based, cross-machine VCE is more complementary to existing server virtualization technologies than anything else -- especially in datacenters with high numbers of both physical and virtual servers. “If you have a few ESX servers and, for the most part, they’re solving all your problems, it’s doubtful you’ll need virtual compute,” he explained. “But if you [have] 50 to 100 physical servers with a fair amount [of them] that aren’t virtualized, that’s where that will appeal more.”
3Leaf’s virtual compute and memory solution will be available for AMD environments in the first half of 2009, and the Intel version will hit the market in 2010.
It Pays to Virtualize Everything
According to Reiner, 3Leaf’s goal is to tackle head-on the pressures to bring skyrocketing operational expenditures under control while still addressing the ever-more-prevalent demands for increased flexibility and agility. He cites an IDC statistic that overprovisioning wastes $130 billion per year to justify 3Leaf’s focus on increasing utilization. “On the operation side,” he says, “there is a clear need to bring ... expenditures under control.”
That 3Leaf meets its goal to cut costs is evidenced by its approximately one dozen proof-of-concept evaluations with Fortune 100 customers. Ranging from e-commerce to social networking to financial services, these evaluations, Reiner says (as noted above), have saved customers an average of 50 percent in both CAPEX and OPEX. “We are coming in with something that is very different and unique, and really solves a lot of the problems that have been around for a long time and never really have been addressed,” says Reiner.
As pressures to drive down costs while still increasing performance continue to mount, the temptation to “virtualize everything” will grow correspondingly. According to Crump, this trend probably will reach its pinnacle in two to four years, after a gradual, slow roll. Server virtualization of the VMware sort represents only the first “baby steps,” he explained, and the next wave will encompass infrastructure and I/O virtualization, with compute and memory following behind. The reason for the gradual approach, he believes, is that IT departments are always busy with a laundry list of tasks, so problems generally are not addressed until they absolutely have to be. For example, he noted, the “green IT” craze did not start because everyone suddenly became environmentalists, but rather because they couldn’t get any more power. The same thing is happening with virtualization.
“You can only ask IT to do more with less so many times before it becomes abominable,” says Crump. “Really, the only way to get there now in some of these environments is to virtualize everything, because the needs of the business have not slowed down – they’ve probably increased. The only way to be able to be responsive to those needs is to have a virtual environment at all layers – server, compute, storage, backup, network infrastructure [and] I/O infrastructure can all be virtualized and moved relatively quickly depending on today’s needs of the business.”