August 06, 2012
Last month, the U.S. Government Accountability Office (GAO) released a report detailing the progress of cloud service adoption by federal entities. The agency was tasked with assessing the state of a "cloud first" mandate, originally introduced by the Office of Management and Budget (OMB) in December 2010. While positive changes have taken place, a number of agencies mentioned in the study have fallen short of their benchmarks.
The "cloud first" mandate was part of a 25-point plan developed to reform federal IT management. The plan's author, then U.S. Chief Information Officer Vivek Kundra, explained that government IT productivity had seen little improvement over the previous decade, despite the roughly $600 billion spent over that period.
"Too often," Kundra wrote, "Federal IT projects run over budget, behind schedule, or fail to deliver promised functionality."
As part of the push for cloud adoption, Kundra reminded readers about the rebate processing system set up for the 2009 "Cash-for-Clunkers" program. To prepare for an estimated 250,000 transactions, the National Highway Traffic Safety Administration (NHTSA) launched the Car Allowance Rebate System, a customized application hosted in a traditional datacenter. Within three days of launch, the rebate system had crashed repeatedly under higher-than-expected demand, and because it lacked the ability to scale rapidly, it took over a month to achieve stability.
The cloud-first strategy aims to alleviate pain points like the "Cash-for-Clunkers" processing system failure while delivering benefits such as reduced capital expenses and increased flexibility.
But there are also challenges associated with moving to the cloud. The GAO report lists seven: meeting security requirements; obtaining guidance; acquiring expert knowledge; certifying vendors; ensuring data portability and avoiding lock-in; overcoming cultural resistance; and procuring services on a consumption basis. The OMB's Federal Cloud Computing Strategy, several NIST publications, and the Federal Risk and Authorization Management Program (FedRAMP) offer guidance on managing the transition.
So far, each of the seven agencies followed by the GAO has incorporated cloud computing requirements into its policies. The agencies have also identified at least three services appropriate for migration to the cloud before a February 2011 deadline. Five have also reported deployment of more than one cloud service by December of 2011.
While there are visible signs of progress, two agencies did not expect to meet a June 2012 deadline to have three cloud applications in service. The USDA expects to have a document management and correspondence tracking system running by September, and the Small Business Administration (SBA) has two applications coming online in August and December. The GAO also found that only one of the 20 migration plans submitted to the OMB included all of the required elements.
These delays may stem from IT managers' concerns about the security or cost-effectiveness of cloud implementations. Back in January, Government Computer News reported on a survey conducted on behalf of Safegov.org by the Ponemon Institute. Of 432 respondents representing more than 20 federal agencies, 25 percent believed that cloud applications would increase IT costs. Furthermore, 70 percent of the participants wanted all cloud provider personnel with access to agency servers and data to undergo rigorous background checks.
Jeff Gould, CEO and director of research at Peerstone Research, summed up the situation: "We know the transition to the cloud is going to happen," he said. "But this survey's findings show that agencies are still in need of education on the cloud and how they will transition effectively."
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and tackle large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational demand that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.