Fresh from ISC’08 and the associated petaflop-mania, I noticed that the latest issue of Wired magazine has a series of articles on the ramifications of petabyte data. The issue is titled “The End of Science,” and the main thesis is that these enormous data sets are forcing us to rethink the way traditional science is performed.
While petabyte-sized data may be relatively new to the HPC world, Google, Amazon and eBay have been wrestling with this for some time. Rather than trying to model the data, these companies use heuristic-based methods to generate useful information — or at least useful enough so that you can sell products or ads around it. The theory is that, given enough data, heuristics offer the most practical path to the best results.
In the Wired piece titled “The Data Deluge Makes the Scientific Method Obsolete,” the author posits that when data reaches petabyte size, it’s not just more of the same. With such a quantity of data to draw from, researchers no longer need to bother with hypotheses to be tested; in fact, it’s often not practical to do so. Instead, statistical magic can be applied so that the data itself shapes the solution. For example, Google doesn’t “know” why one Web page is better than another; it just exposes the usage patterns. In a nutshell: correlation is in, models are out.
From the article:
At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later. For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn’t pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right.
A practical example in science is the gene sequencing work Craig Venter is doing in marine microbiology. Instead of separating the individual organisms and sequencing them one by one, he employs “shotgun sequencing” and a supercomputer to derive the likely species based on statistical analysis of the gene fragments collected in a given saltwater sample. This approach doesn’t produce a definitive list of species, but does yield a tremendous amount of information about all the possible species encountered and the genetic parameters of the ecosystem.
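To make the statistical idea a bit more concrete, here is a toy sketch of how anonymous sequence fragments can be assigned to likely species by comparing k-mer profiles. This is not Venter’s actual pipeline — the reference sequences, fragment strings, species names and the k-mer length are all invented for illustration — but it captures the flavor of letting overlap statistics, rather than individual isolation and sequencing, suggest what organisms a sample contains.

```python
from collections import Counter

K = 4  # k-mer length (chosen arbitrarily for this toy example)

def kmer_profile(seq, k=K):
    """Count every overlapping substring of length k in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Hypothetical reference sequences for two candidate species.
references = {
    "species_A": "ATGCGTACGTTAGCATGCGTACGTTAGC",
    "species_B": "TTAACCGGTTAACCGGTTAACCGGTTAA",
}
ref_profiles = {name: kmer_profile(seq) for name, seq in references.items()}

# Fragments as they might come off a shotgun run, origin unknown.
fragments = ["GCGTACGTTAG", "ACCGGTTAACC", "TAGCATGCGTA"]

for frag in fragments:
    frag_profile = kmer_profile(frag)
    # Score = number of k-mer occurrences shared with each reference profile.
    scores = {name: sum((frag_profile & prof).values())
              for name, prof in ref_profiles.items()}
    likely = max(scores, key=scores.get)
    print(f"{frag}: best match {likely}  scores={scores}")
```

Scaled up to millions of fragments and thousands of candidate genomes, this kind of scoring is what demands a supercomputer — and what produces a statistical portrait of the ecosystem rather than a definitive species list.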
This new computational approach was also reflected in Dan Reed’s presentation at the recent TeraGrid ’08 conference, which we report on this week. One area he talked about is the way these big data sets are challenging conventional thinking:
Data models, noted Reed, are in rapid flux because of larger and larger data volumes. This is especially pronounced in some fields, such as biomedical research, where large databases are subject to distributed analysis. A big challenge, probably underappreciated, says Reed, is the scale of the data deluge. “We will be running queries on 100,000 servers,” said Reed. “And research is moving from being hypothesis driven (‘I have an idea, let me verify it.’) to exploratory (‘What correlations can I glean from everyone’s data?’). This kind of exploratory analysis will rely on tools for deep data mining.” Massive, multi-disciplinary data, said Reed, is rising rapidly and at unprecedented scale.
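In concrete terms, the exploratory mode Reed describes can be as simple as scanning a dataset for its strongest pairwise correlations and letting a human decide which ones are worth pursuing. The sketch below is a single-machine toy with synthetic data — the variable names and the one injected relationship are invented for illustration, and a real deployment would distribute the scan across many servers — but it shows the hypothesis-free pattern: compute everything, rank it, interpret later.

```python
import itertools
import random
import statistics

random.seed(0)

# Synthetic dataset: eight variables, one of which secretly tracks another.
n = 1000
data = {f"var_{i}": [random.gauss(0, 1) for _ in range(n)] for i in range(8)}
data["var_7"] = [x + random.gauss(0, 0.3) for x in data["var_0"]]  # injected relationship

# Exploratory scan: compute Pearson r for every pair of variables.
results = []
for a, b in itertools.combinations(data, 2):
    r = statistics.correlation(data[a], data[b])  # requires Python 3.10+
    results.append((abs(r), a, b, r))

# Surface the strongest correlations for a human to interpret.
for _, a, b, r in sorted(results, reverse=True)[:3]:
    print(f"{a} vs {b}: r = {r:+.2f}")
```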
These heuristic computational methods are not exactly new. One that’s been around awhile is the genetic algorithm — a technique that mimics biological evolution as a problem-solving strategy. To make it work, you have to be able to define the general shape of the solution, so it’s useless if you don’t have some idea of what you’re looking for. Like Darwinian evolution, a genetic algorithm makes random changes in the candidate solution and lets the “fitness” of the result determine if it’s on the right track.
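For readers who have never seen one in the flesh, here is a minimal genetic-algorithm sketch in Python. Everything in it is invented for illustration — the bit-string encoding, the count-the-ones fitness function, and the parameter values — whereas a real application would encode domain parameters (say, turbine geometry) and score candidates with a proper engineering model. It does, however, show the loop the paragraph above describes: random variation, fitness-based selection, repeat.

```python
import random

TARGET_LEN = 20          # length of each candidate solution
POP_SIZE = 50            # number of candidates per generation
MUTATION_RATE = 0.02     # per-bit probability of a random flip
GENERATIONS = 100

def fitness(candidate):
    """Score a candidate; here, simply count the 1 bits."""
    return sum(candidate)

def crossover(a, b):
    """Combine two parents at a random cut point."""
    cut = random.randint(1, TARGET_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(candidate):
    """Randomly flip bits, mimicking Darwinian variation."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in candidate]

# Start from a random population of bit strings.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: keep the fitter half as parents for the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Reproduction: breed and mutate until the population is refilled.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{TARGET_LEN}")
```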
A 2004 article on genetic algorithms and evolutionary computation describes a real-life example:
[A] genetic algorithm developed jointly by engineers from General Electric and Rensselaer Polytechnic Institute produced a high-performance jet engine turbine design that was three times better than a human-designed configuration and 50% better than a configuration designed by an expert system by successfully navigating a solution space containing more than 10^387 possibilities. Conventional methods for designing such turbines are a central part of engineering projects that can take up to five years and cost over $2 billion; the genetic algorithm discovered this solution after two days on a typical engineering desktop workstation [Holland, John. “Genetic algorithms.” Scientific American, July 1992, p. 66-72].
I suppose taking the human element out of problem-solving is the logical endpoint of all science becoming computer science. And it certainly is a capitalist-friendly way of doing business. After all, why bother employing dozens of domain experts when you can just buy or rent some software in the cloud? But even if the petabyte age brings an end to theories and models, humans aren’t completely expendable. We still get to ask the interesting questions.