September 28, 2010
Application security and risk analysis firm Veracode announced that over the past 18 months more than half of the applications submitted to it for testing failed to meet minimum security standards, even after the company relaxed some of those standards for applications that did not warrant exhaustive security requirements.
According to Veracode's leadership, this is partly because building web or cloud-based applications demands an additional set of skills and a new range of expertise, not to mention far more time than some in-house developers want to commit to retooling applications that have been running without fail on dedicated servers.
As Samskriti King of Veracode noted, “Unfortunately, developers trained with software that’s generated and used in one location with a single set of servers often don’t understand the precautions needed for Web applications that take code, data, and elements of the interface from many servers.”
Thomas Kilbin, CEO of Virtacore Systems, told Network World that his customers are moving back-office apps to private clouds without rearchitecting them for a cloud-based model. This exposes them to a far greater number of threats, in part because many developers do not want to rewrite applications that run in the cloud only during “bursty” periods and were moved there chiefly to save money.
Kilbin also noted that the cloud “is more threat-rich than the shared hosting model, mainly because in shared hosting the core OS and apps—php, perl, mysql—are kept updated by the service provider. In the cloud, the customer has to keep the core OS updated, along with the application stacks, in addition to their code,” which can be a major undertaking that some teams don’t have the time or expertise to handle.
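In practice, the chore Kilbin describes is routine patch management that falls to the customer rather than the provider. The sketch below is illustrative only, assuming a Debian/Ubuntu-style cloud instance managed through apt; the package list and log path are hypothetical and not drawn from Virtacore's environment.

#!/usr/bin/env python3
# Illustrative patch-management sketch for a self-managed cloud instance:
# the customer, not the provider, keeps the core OS and application stacks
# (PHP, Perl, MySQL in Kilbin's example) updated. Assumes a Debian/Ubuntu
# image with apt; package names and log path are hypothetical.
import datetime
import subprocess

LOG_PATH = "/var/log/nightly-updates.log"          # hypothetical log location
STACK_PACKAGES = ["php", "perl", "mysql-server"]   # illustrative stack packages

def run(cmd):
    # Run a command and return its combined output for logging.
    result = subprocess.run(cmd, capture_output=True, text=True)
    return "$ " + " ".join(cmd) + "\n" + result.stdout + result.stderr

def nightly_update():
    entries = ["--- " + datetime.datetime.now().isoformat() + " ---"]
    entries.append(run(["apt-get", "update"]))          # refresh package metadata
    entries.append(run(["apt-get", "-y", "upgrade"]))   # apply pending OS updates
    # Make sure the application stack packages are upgraded as well.
    entries.append(run(["apt-get", "-y", "install", "--only-upgrade"] + STACK_PACKAGES))
    with open(LOG_PATH, "a") as log:
        log.write("\n".join(entries) + "\n")

if __name__ == "__main__":
    nightly_update()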
Full story at NetworkWorld
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. We therefore present a novel federation model that enables end users to aggregate heterogeneous resources to tackle large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate their efforts and to handle peak computational demand that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of those obstacles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
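To make the paradigm concrete, here is a minimal, illustrative sketch – not taken from the AMD article – of offloading a simple data-parallel task to whichever OpenCL device (CPU, GPU, or APU) happens to be available. It assumes the third-party numpy and pyopencl packages; the kernel and array size are arbitrary.

# Heterogeneous-computing hello-world: a vector add dispatched to whatever
# OpenCL device is available. Assumes numpy and pyopencl are installed;
# the kernel and sizes are illustrative only.
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()        # picks an available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# OpenCL C kernel: one work-item per array element.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)        # matches the host-side computation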