Liquid Server Architecture Scales to Workload Demand

Liquid Computing's new server architecture combines industry-standard components with proprietary technology to create a pool of computing resources that scales up and down with workload demand.

As major computer makers push a variety of utility computing offerings, one small startup is hoping to attract attention with a new server architecture that leverages industry-standard components and proprietary technology to create a pool of computing resources that can scale up and down depending on workload demands.

Liquid Computing Corp.'s Liquid Server systems, which officials said will at first be able to scale to 1,920 processors, initially will run on Advanced Micro Devices Inc.'s 64-bit Opteron chips and support Linux. For data coming into and out of the boxes, Liquid Computing will support such standard I/O protocols as Fibre Channel, Gigabit Ethernet and 10 Gigabit Ethernet, according to CEO Brian Hurley.

However, inside the boxes will be Liquid Computing's proprietary interconnect—built with a combination of the Los Altos, Calif., company's own intellectual property and commercial components—and management software designed to automate server provisioning, fault detection and repair, Chief Technology Officer Mike Kemp said. The devices offer throughput speeds of up to 6Gbps, with a latency of less than 2 microseconds, Kemp said.

Liquid Computing's air-cooled servers will offer better price/performance than large, expensive symmetric multiprocessing systems from OEMs such as IBM and Sun Microsystems Inc. and the scale-out architecture favored by the likes of Dell Inc., Hurley and Kemp said. "As you scale out the systems today, you also are scaling the complexity and scaling the cost," Hurley said.

Charles King, an analyst with Pund-IT Research Inc., said that Liquid Computing seems to be taking the right approach but that the company will be challenged to carve out a space for itself in a highly competitive field in which top-tier vendors Dell, IBM and Hewlett-Packard Co. are pushing the utility computing model through various hardware and software offerings.

"The idea is to offer highly flexible infrastructure that can be adapted to different kinds and styles of workloads through a combination of maximum performance and maximum server utilization," said King in Hayward, Calif. "I'm not sure how they will differentiate themselves. I don't know what their secret sauce is."

Liquid Computing will begin trial tests with customers in August and then conduct another round of trials later this year, Hurley said. Beta testing will begin early next year; general release is slated for later next year.

Hurley said the company initially will target the high-performance technical computing space, traditionally an early adopter of new technology and a proponent of Linux. However, as commercial software support grows and the systems' capabilities are enhanced—Liquid Computing eventually will add support for Windows applications as well as for other processors—Hurley expects larger enterprises will also adopt the platform.

Other smaller vendors are trying to gain a foothold in the space as well. For example, Azul Systems Inc., of Mountain View, Calif., last month unveiled the first of its appliances designed to offer application servers access to a massive pool of network-attached processing power.

Azul's appliances run up to 16 of the company's "Vega" processors, each of which can hold up to 24 cores, giving enterprises access to up to 384 cores of compute power.


Check out eWEEK.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.