January 30, 2013
Web-based technologies, such as Software-as-a-Service (SaaS), were slow to catch on in the science world, but that's about to change. As this O'Reilly Radar article points out, there is growing momentum for this new approach, called Science-as-a-Service (aka SciAAS).
All the benefits of SaaS enjoyed by business users and consumers – like reduced cost and increased flexibility – are just as attractive to researchers. What's more, by injecting IT best practices into the scientific process, researchers are freed to spend their time on more "mission-critical" endeavors.
As one researcher from the Texas Advanced Computing Center put it, Science-as-a-Service "takes the spotlight off of technology and puts it back onto science."
This may be a relatively new approach, but there's already quite the ecosystem forming. As O'Reilly Associate Renee DiResta observes, there are a large number of enterprising startups aiming to "disrupt the slow-moving pace and high cost of research."
Oftentimes, the founders were themselves researchers, motivated out of frustration to create better solutions. "To do this, they're applying innovative business models traditionally used by B2B and B2C startups – everything from the principles of collaborative consumption to decoupling service workers from their traditional places of employment," writes DiResta.
This kind of outsourcing is not completely new – contract research organizations (CROs) have been around since the 1980s – but it's occurring on a never-before-seen scale. The list of firms that want to help "make science easier" is longer and more diverse than you might expect. Note the following sampling:
Science-as-a-Service is still an emerging paradigm, but the pace of growth and innovation suggests that this is just the tip of a much larger iceberg.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that provides end-users with the ability to aggregate heterogeneous resources to address large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate their work and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of those obstacles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, and do so using technologies that prioritize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.