February 04, 2013
Feb. 4 – In this week's Nature Methods, Salk researchers share a how-to secret for biologists: code for Amazon Cloud that significantly reduces the time necessary to process data-intensive microscopic images.
The method promises to speed research into the underlying causes of disease by making single-molecule microscopy of practical use for more laboratories.
"This is an extremely cost-effective way for labs to process super-resolution images," says Hu Cang, Salk assistant professor in the Waitt Advanced Biophotonics Center and coauthor of the paper. "Depending on the size of the data set, it can save over a week's worth of time."
The latest frontier in basic biomedical research is to better understand the "molecular machines" called proteins and enzymes. Determining how they interact is key to discovering cures for diseases. Simply put, finding new therapies is akin to troubleshooting a broken mechanical assembly line: if you know all the steps in the manufacturing process, it's much easier to identify the step where something went wrong. In the case of human cells, some of the parts of the assembly line can be as small as single molecules.
Unfortunately, in the past, conventional light microscopes could not clearly show objects as small as single molecules. The available alternatives, such as electron microscopy, could not be effectively used with living cells.
In 1873, German physicist Ernst Abbe worked out the mathematics to improve resolution in light microscopes. But Abbe's calculations also established the optical version of the sound barrier: the diffraction limit, an unavoidable spreading of light. Think of how light fans out from a flashlight.
According to the Abbe limit, it is impossible to see the difference between any two objects if they are smaller than half the wavelength of the imaging light. Since the shortest wavelength we can see is around 400 nanometers (nm), that means anything 200 nm or below appears as a blurry spot. The challenge for biologists is that the molecules they want to see are often only a few tens of nanometers in size.
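The rule of thumb above can be sketched in a few lines. This is an illustrative calculation, not code from the paper; the function name and the default numerical aperture of 1.0 (which yields the simple "half the wavelength" rule) are assumptions for the example.

```python
# The Abbe diffraction limit: the smallest resolvable separation is
# d = wavelength / (2 * NA), where NA is the numerical aperture of the
# objective. With NA = 1, this reduces to half the wavelength.

def abbe_limit_nm(wavelength_nm, numerical_aperture=1.0):
    """Smallest resolvable distance, in nanometers."""
    return wavelength_nm / (2.0 * numerical_aperture)

# 400 nm violet light, the shortest visible wavelength -> 200 nm limit
print(abbe_limit_nm(400))
```

Anything separated by less than this distance, such as molecules a few tens of nanometers apart, blurs into a single spot.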
"You have no idea how many single molecules are distributed within that blurry spot, so essential features and ideas remain obscure to you," says Jennifer Lippincott-Schwartz, a Salk non-resident fellow and coauthor on the paper.
In the early 2000s, several techniques were developed to break through the Abbe limit, launching the new field of super-resolution microscopy. Among them was a method developed by Lippincott-Schwartz and her colleagues called Photoactivated Localization Microscopy, or PALM.
PALM, and its sister techniques, work because mathematics can see what the eye cannot: within the blurry spot, there are concentrations of photons that form bright peaks, which represent single molecules. The downside to these approaches is that it can take several hours to several days to crunch all the numbers required just to produce one usable image.
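The localization idea can be illustrated with a minimal sketch. This is not the authors' code: the simulation parameters, function names, and the use of a simple photon-weighted centroid (real PALM pipelines typically fit a Gaussian model to each peak) are all assumptions made for the example.

```python
import numpy as np

def simulate_spot(center, sigma=2.0, size=15, photons=500, rng=None):
    """Render the diffraction-blurred image of one emitter: a 2-D
    Gaussian spot sampled with Poisson photon noise."""
    rng = np.random.default_rng(rng)
    y, x = np.mgrid[0:size, 0:size]
    psf = np.exp(-((x - center[0])**2 + (y - center[1])**2) / (2 * sigma**2))
    psf /= psf.sum()
    return rng.poisson(photons * psf)

def localize(spot):
    """Estimate the emitter position as the photon-weighted centroid.
    With enough photons, this pins the position down far more precisely
    than the width of the blurry spot itself."""
    y, x = np.mgrid[0:spot.shape[0], 0:spot.shape[1]]
    total = spot.sum()
    return (spot * x).sum() / total, (spot * y).sum() / total
```

Repeating this fit for every bright peak in every frame of a long movie is what makes the number-crunching so expensive.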
"It's like taking a movie, then you go through some very complex math, so what you see is the end result of processing, which is extremely slow because there's so many parameters," Cang says. "When I first saw PALM, I was shocked by how good it was. I wanted to use it right away, but when I actually tried to use it, I found its usefulness was limited by computing speed."
Even using statistical shortcuts, processing these images was still so intense that a supercomputer was required to reduce the time to a practical level. "Calculating an area of 50 pixels can take nearly a full day on a state-of-the-art desktop computer," says Lippincott-Schwartz. "But what you'll have achieved is the difference between a guess and a definitive answer."
In their Nature Methods paper, the researchers offer other scientists the tools they need to use an easier alternative: the Amazon Elastic Compute Cloud (Amazon EC2), a service that provides access to supercomputing via the Internet, allowing massive computing tasks to be distributed over banks of computers.
To make PALM more practical for use in biomedical research, the team wrote a computer script that allows any biologist to upload and process PALM images using Amazon Cloud.
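The reason this works well in the cloud is that the workload is embarrassingly parallel: each movie frame can be analyzed independently. The sketch below illustrates that idea with a local thread pool standing in for cloud workers; the frame format and the placeholder `process_frame` function are illustrative assumptions, not the published script.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    # Placeholder per-frame analysis; in real PALM processing this step
    # would detect and fit every bright peak in the frame.
    return sum(frame)

def process_movie(frames, workers=4):
    # Frames are independent, so they can be farmed out to any pool of
    # workers: local threads here, banks of EC2 instances in the paper's
    # Amazon setup. Total time shrinks roughly with the number of workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, frames))

print(process_movie([[1, 2], [3, 4], [5, 6]]))
```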
As a demonstration, Cang, Lippincott-Schwartz and post-doctoral researcher Ying Hu reconstructed the images of podosomes, which are molecular machines that appear to encourage cancer cells to spread. In one instance, they dropped the time needed to process an image from a whole day to 72 minutes. They also imaged tubulin, a protein essential for building various structures within cells. In that case, they were able to drop the time from nine days to under three and a half hours.
Their new paper provides a how-to tutorial for using the code to process PALM images through Amazon Cloud, helping other labs achieve similar increases in speed.
Other researchers on the study were: Xiaolin Nan, of Oregon Health and Science University School of Medicine, and Prabuddha Sengupta, of The Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health.
Source: Salk Institute