April 14, 2008
Imagine you’re the New York Mets. You’ve got a string of high-performance sluggers in your lineup, but every time you’re behind in the game, they fail to connect -- whiffing, leaving men on base and missing opportunities. All that power in the dugout, but where is it when you really need it?
A good baseball manager can make a multitude of adjustments in the course of a game, attempting to solve performance problems in real time. It’s not that easy for the datacenter manager. Marshaling resources or redeploying them can be an arduous process, especially if you don’t have the resources you need.
To extend the baseball analogy, this is what Liquid Computing says it is doing: providing a bench of power hitters, along with the ability to add them to the lineup quickly and easily. The company’s LiquidIQ system, based on a fabric computing architecture, combines processing, networking and I/O modules in a chassis. “Think of LiquidIQ as providing flexible logical pools of computing, networking and I/O that an organization can shape to meet its needs on demand,” says Liquid Computing CEO Greg McElheran. “We provide the building blocks for a complete infrastructure.”
Liquid says one of its chief innovations is its Software-Defined Real-Time Infrastructure (SD-RTI). “With SD-RTI, the entire datacenter can be defined in software,” says Keith Millar, Liquid’s vice president of product management. “Standard physical resources are treated as software-defined infrastructure, including servers, VLANs, gateways, routers, load balancers, firewalls, everything. All configurations are defined in software and can be modified easily to create clusters of varying sizes based on the needs of the organization at that time.”
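Liquid's actual interfaces are proprietary, but the core idea -- physical resources treated as software-defined objects that can be composed into clusters of varying sizes -- can be sketched in a few lines of Python. All class and field names below are invented for illustration; they are not LiquidIQ's API:

```python
from dataclasses import dataclass, field

@dataclass
class LogicalServer:
    """A server defined in software rather than bolted to a specific box."""
    name: str
    cores: int
    memory_gb: int

@dataclass
class Vlan:
    vlan_id: int
    name: str

@dataclass
class Cluster:
    """A cluster is just a software definition binding pooled resources together."""
    name: str
    servers: list = field(default_factory=list)
    vlans: list = field(default_factory=list)

    def resize(self, pool, count):
        """Grow or shrink by drawing from, or returning servers to, a shared pool."""
        while len(self.servers) < count and pool:
            self.servers.append(pool.pop())
        while len(self.servers) > count:
            pool.append(self.servers.pop())

# A shared pool of logical servers carved out of the fabric.
pool = [LogicalServer(f"ls{i}", cores=4, memory_gb=32) for i in range(8)]

web = Cluster("web-tier", vlans=[Vlan(10, "front-end")])
web.resize(pool, 3)   # grow to three servers entirely in software
print(len(web.servers), len(pool))   # 3 5
web.resize(pool, 1)   # shrink back; the servers return to the pool
print(len(web.servers), len(pool))   # 1 7
```

The point of the toy model is that growing or shrinking a cluster is a data-structure operation, not a cabling job -- which is the claim Millar is making about the real system.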
“Everything is virtualized, all the physical switches, servers, pathways and so on,” Millar adds. “We’re essentially virtualizing all the IT infrastructure stuff that VMware doesn’t.”
This virtualization of physical resources in LiquidIQ allows for complete software control, Millar says. Enterprise applications such as SAP, VMware, and custom apps can make requests of the LiquidIQ system to add capacity. Policies and service-level agreements (SLAs) usually are housed at the application level, and LiquidIQ is programmatically driven by these applications through standard scripting or CLI calls, Millar explains.
“LiquidIQ can be set up as a programmatic slave to management systems that sit on top of these applications,” he says. “LiquidIQ will respond to requests from these systems to load up a new server, or reconfigure networks, and so on.” By way of illustration, he describes a situation where Oracle Enterprise Manager was being used to handle two Oracle RAC clusters for two different customers in a hosted setting. The host wanted to swap the server hardware running for Customer A over to Customer B to satisfy SLAs. “The swap was made without either customer experiencing downtime,” Millar says.
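The hardware swap Millar describes can be modeled abstractly: a fabric controller holds the mapping from physical modules to logical clusters, and a management system drives reassignment through programmatic requests. The sketch below is purely illustrative (no LiquidIQ code; the `Fabric` class and its methods are invented for this example):

```python
class Fabric:
    """Toy model of a fabric controller mapping physical modules to clusters."""

    def __init__(self, modules):
        self.free = set(modules)   # unassigned physical processor modules
        self.assignments = {}      # cluster name -> set of module IDs

    def provision(self, cluster, count):
        """Handle a request to load up `count` modules for a cluster."""
        grant = {self.free.pop() for _ in range(min(count, len(self.free)))}
        self.assignments.setdefault(cluster, set()).update(grant)
        return grant

    def reassign(self, src, dst, count):
        """Move modules between clusters without touching the free pool,
        the way the Oracle RAC hardware was swapped from one customer to another."""
        moved = set()
        for _ in range(count):
            if not self.assignments.get(src):
                break
            moved.add(self.assignments[src].pop())
        self.assignments.setdefault(dst, set()).update(moved)
        return moved

fabric = Fabric(["m1", "m2", "m3", "m4"])
fabric.provision("customer-a", 3)
fabric.provision("customer-b", 1)
moved = fabric.reassign("customer-a", "customer-b", 2)
print(len(fabric.assignments["customer-a"]),
      len(fabric.assignments["customer-b"]))   # 1 3
```

In the real system the interesting part is what this sketch omits: re-plumbing networks, addresses and storage paths for the moved modules so neither customer sees downtime.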
According to company officials, LiquidIQ’s fabric architecture and software-defined infrastructure save time and capital in several ways: you have to buy fewer servers because the system provides logical servers; there’s less networking gear because the system includes virtualized switching and storage networking; and the IT staff doesn’t have to deal as much with provisioning and maintenance of servers, not to mention cabling.
SD-RTI automatically manages the complexity of IP addresses, MAC addresses, hypervisors and block storage devices. Because an administrator can add more resources or control their use with the system’s software, the different IT teams don’t have to convene a meeting to discuss making changes happen. New applications don’t require cadres of IT workers and can be set up in hours, with all the required computing and networking resources, Liquid says.
The LiquidIQ System
“The system combines computing plus switching plus storage networking,” Millar says. Hardware components come in chassis modules that provide computation and memory, switches, interconnects, and I/O. Up to 12 chassis can be linked. When you add a new module, the system recognizes it and includes it in the inventory as part of the application infrastructure, Millar adds. A system can have as many as 20 processor modules. The LiquidIQ system currently is based on AMD quad-core Opteron processors, but the company plans to offer Intel silicon soon. The 300 Gbps I/O modules (up to five per chassis) support Gigabit Ethernet, 10 GbE and Fibre Channel. Current systems run Linux, but Windows will be part of the mix in the next few months, officials say.
Liquid execs describe the system’s peak performance as “Cray-like,” but say the cost is comparable to buying commodity blade servers. Applications, which do not have to be rewritten, can run up to three times faster on the Liquid chassis, the company says.
With the LiquidIQ system, Liquid is providing convergence: computing and communications capabilities in the same fabric. FabricBoss, the proprietary control system that spans all the physical, logical, and virtual resources across one or more chassis, is “telecom-grade,” Millar says. FabricBoss orchestrates all the different physical components, checks their health, and automates many of the management tasks that would otherwise require human intervention. “But its main task,” he says, “is virtualizing hardware and making it software-defined.”
“You can define any network component. For example, an Oracle RAC installation can be set up with specific MAC or IP addresses. You don’t have to know all these interdependencies. We present one graphical map of everything that fits under the Oracle RAC. All the connections still have to be made, but they can be turned into a template and re-used, instead of sending in the troops to recable and do resetup.” According to Millar, “template stuff gets people excited.”
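The template idea Millar describes -- capture a working configuration once, then stamp out copies instead of recabling by hand -- can be sketched as a small Python example. The schema below is invented for illustration and is not LiquidIQ's format:

```python
import copy

# A hypothetical captured configuration for an Oracle RAC-style deployment.
rac_template = {
    "nodes": 2,
    "interconnect_vlan": 200,
    "nics": [{"role": "public"}, {"role": "private"}],
}

def instantiate(template, name, base_ip):
    """Stamp out a concrete deployment from a template, filling in addresses
    so operators never manage IPs or interdependencies by hand."""
    inst = copy.deepcopy(template)   # never mutate the reusable template
    inst["name"] = name
    inst["addresses"] = [f"{base_ip}.{i + 10}" for i in range(inst["nodes"])]
    return inst

dev = instantiate(rac_template, "rac-dev", "10.0.1")
prod = instantiate(rac_template, "rac-prod", "10.0.2")
print(dev["addresses"])   # ['10.0.1.10', '10.0.1.11']
print(prod["name"])       # rac-prod
```

Each instantiation yields a complete, internally consistent configuration from the same template, which is the reuse Millar says "gets people excited."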
Much of Liquid’s startup and management team comes from the telecom business. “We’re used to building systems that let a few people manage a big infrastructure,” Millar explains.
Who Can Benefit?
The company believes any industry or market where people need scalability, like in cluster or grid situations, would make sense for Liquid’s system. “Folks doing transactions, database stuff, with peak loads at just a certain time of the year, for example, could build their IT infrastructure to meet that peak load only when they need it, but doing it via software rather than humans doing it,” says Millar. “We’d also make sense for folks with heavy reliance on applications that are very dependent upon OS and network setup, such as Exchange clusters or SQL clusters … business applications that require all that OS configuration to be sustained.”
A new target market the company is eyeing is disaster recovery, a fit that follows from the system’s architecture. “We don’t have any disks onboard,” Millar explains. “It’s all stored externally.”
Company officials would not disclose the number of current customers, but a 2006 news story reported systems in trial at 15 customer sites. Last week, Liquid announced its latest deal: a broadband services provider, Abacus Data Exchange (Lafayette, La.), will use LiquidIQ to host and deliver enterprise-type applications over a municipal fiber network.
Industry observers have recognized Liquid Computing for its take on fabric computing and its convergence philosophy. By all accounts, its hardware is no slouch either: an earlier version appears on the list of Top 500 supercomputers, yet Liquid’s use of x86 processors and other standard technologies makes the system less exotic and disruptive. However, it’s hard to say how many organizations would be willing to divert from current suppliers and put their money on a different approach, from a vendor that is not a common datacenter name. Still, the idea of fabric computing and simple scalability should prove attractive to organizations looking for a different way to reach new performance levels.