In this entry Bruce Maches takes a high-level look at SaaS and how research organizations can leverage it to meet their objectives in a more cost-effective and timely manner.
The High-Tech Farm Report: Cloud Computing Comes to Agriculture
Post Date: February 21, 2011 @ 9:35 AM, Pacific Standard Time
Blog: Behind the Cloud
Both agricultural research and practice are seeing benefits from advancements in cloud-based tools and platforms. From hosting agricultural GPS data to aid conservation efforts to supporting commercial farming operations, cloud computing and agriculture are merging.
Upcoming Event to Explore the Role of Clouds in HPC
Post Date: February 15, 2011 @ 9:43 AM, Pacific Standard Time
Blog: Behind the Cloud
The remarkable success of the first international ISC Cloud’10 Conference held last October in Frankfurt, Germany, has motivated ISC Events to continue this series and organize a similar cloud computing conference this year, with an even more profound focus on the use of clouds for High Performance Computing (HPC).
The cloud rush is on, and it’s no longer a one- or two-team race to the finish line to see who can claim the number one spot. The traditional “big boys” have some new players to contend with, ones with deep pockets, strong, trusted brands, and massive embedded customer bases.
Higher Ed Budget Cuts Spur Cloud Services Scramble
Post Date: February 09, 2011 @ 8:04 AM, Pacific Standard Time
Blog: Behind the Cloud
As the cloud becomes a more commonplace, trusted way to handle needs on a university or university system-wide scale, new ways to manage demand, priority, access and concerns about (de)centralization must be developed.
It is only the beginning of the week and we have already seen two major telcos emerge from the ether to snatch up cloud computing and datacenter companies Terremark and NaviSite. This trend will continue as these companies realize the true potential of solid infrastructure and the systems needed to support clouds...
Although interoperability remains a hot topic in debates about cloud providers and end user needs, there is little hope on the horizon for true standards to emerge anytime in the near future. We discussed this issue with John Considine, CTO and founder of CloudSwitch.
Biotech firms require vast computational resources, and for smaller companies this means maximizing current infrastructure and processes as much as possible. Bruce Maches weighs in on a recent implementation, looking at the challenges and possibilities for these companies.
This week IBM announced another addition to its string of cloud computing data center initiatives rooted in the Asia-Pacific region. This brings the company to over $100 million in APAC investment, as analyst projections continue to reflect optimism about the region's vast market.
It’s important to acknowledge that all of this hullabaloo around clouds may be an issue of semantics. Ultimately, it doesn’t much matter if you call this internal elastic infrastructure a cloud, a grid or some other such thing...
Earl J. Dodd, President of Ideas And Machines, Inc. and i3D Inc.
Independent HPC consultant for cluster, grid, and cloud computing, and for data- and compute-intensive applications, and General Chair of the ISC Cloud Conference.
Dr. Jose Luis Vazquez-Poletti is Assistant Professor in Computer Architecture at Complutense University of Madrid (Spain), and a Cloud Computing Researcher at the Distributed Systems Architecture Research Group. He is directly involved in EU funded projects, such as EGEE (Grid Computing) and 4CaaSt (PaaS Cloud), as well as many Spanish national initiatives.
An HPC industry consultant and cloud evangelist, Steve Campbell is a seasoned senior HPC executive.
Former Director of Information Technology for Pfizer's R&D division, current CIO for BRMaches & Associates.
Sue Korn is a Senior Analyst at Intersect360 Research specializing in Edge HPC applications, and a 20-year veteran of the Financial Services Industry. In her role at Intersect360 Research, Korn spearheads the company's analysis of the drivers and barriers of HPC adoption in business environments and the growing role of Edge HPC applications.
Scott Clark has been an infrastructure solution provider in the EDA/Semiconductor industry for almost 20 years.
Ignacio M. Llorente, Ph.D in Computer Science (UCM) and Executive MBA (IE Business School), is a Full Professor in Computer Architecture and Technology, and the Head of the Distributed Systems Architecture Research Group at Complutense University of Madrid.
Joshua Geist is the founder and CEO of Geminare Incorporated, an innovator in cloud-based enablement technologies for the Recovery as a Service market. Combining a degree in Physics with over 20 years of technology experience, Joshua's passion lies in solving technology challenges for the mid-sized business market.
Miha Ahronovitz specializes in cloud software, products and business models and led product and business strategy for Sun Microsystems' HPC, Grid and Cloud division. Following Sun's merger, Miha is now the Principal of Ahrono Associates.
Edward J. Lucente is V.P. of Business Development at Data Center Rebates, Inc., an IT efficiency consultancy based in Carlsbad, CA, whose professional services focus on data center energy efficiency (DCEE), leasing integrated with technology refreshes, and negotiation of IT energy rebates. (Ed is a rabid Red Sox fan also.)
Craig Lund is a consultant focused on specialized markets for High Performance Computing. He is best known from his many years as CTO of Mercury Computer Systems.
Jake is a software executive, writer and blogger. Based in Raleigh, North Carolina, he is currently the chief marketing officer for rPath. Feel free to contact Jake via email at firstname.lastname@example.org
Tom is the publisher of HPC in the Cloud. He has over 30 years of experience in business-to-business publishing, with the last 22 years focused primarily on High Productivity Computing (HPC) technologies.
Researchers from the Suddhananda Engineering and Research Centre in Bhubaneswar, India, developed a job scheduling system, which they call Service Level Agreement (SLA) scheduling, intended to deliver resource provisioning comparable to that of in-house systems. They paired it with an on-demand resource provisioner to optimize virtual machine utilization.
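The summary above describes the general pattern rather than the researchers' exact algorithm. As a toy illustration of that pattern (all names and parameters here are my own illustrative assumptions, not from the paper), an SLA-aware scheduler can reuse an idle virtual machine when a job's deadline still permits it, and provision a fresh VM on demand otherwise:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    runtime: float   # hours of compute the job needs
    deadline: float  # hours from now; this is the job's SLA

@dataclass
class VM:
    free_at: float = 0.0  # hour at which this VM becomes idle

def schedule(jobs):
    """Greedy SLA-aware scheduling: serve jobs earliest-deadline-first,
    reuse the soonest-idle VM if the SLA still holds, else provision
    a new VM on demand."""
    vms = []
    placement = {}
    for job in sorted(jobs, key=lambda j: j.deadline):
        best = min(vms, key=lambda v: v.free_at, default=None)
        if best is None or best.free_at + job.runtime > job.deadline:
            best = VM()       # on-demand provisioning: SLA can't be met otherwise
            vms.append(best)
        placement[job.name] = best
        best.free_at += job.runtime
    return placement, vms

jobs = [Job("a", 2, 2), Job("b", 2, 2), Job("c", 2, 6)]
placement, vms = schedule(jobs)
print(len(vms))  # 2: jobs a and b each force a new VM, c reuses one
```

Reusing an already-running VM whenever the deadline allows is what keeps utilization high; provisioning only on SLA pressure is the on-demand half of the design.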
Experimental scientific HPC applications are continually being moved to the cloud, as covered here in several capacities over the last couple of weeks. Included in that rundown, Co-founder and CEO of CloudSigma Robert Jenkins penned an article for HPC in the Cloud where he discussed the emergence of cloud technologies to supplement research capabilities of big scientific initiatives like CERN and ESA (the European Space Agency)...
When considering moving excess or experimental HPC applications to a cloud environment, there will always be obstacles. Were that not the case, the cost effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stewart Ward and Adam Barker of the University of St Andrews produced an intriguing report on the state of cloud computing, paying significant attention to the problems it faces.
Jun 19, 2013
Ruan Pethiyagoda, Cameron Boehmer, John S. Dvorak, and Tim Sze, trained at San Francisco’s Hack Reactor, an institute designed for intensive, fast-paced programming instruction, built a program based on the N-Queens algorithm designed by the University of Cambridge’s Martin Richards, and modified it to run in parallel across multiple machines.
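Richards's implementation isn't reproduced in the article, but the core idea of parallelizing N-Queens — partition the search by the first queen's column so each piece can run independently — can be sketched as follows (a minimal Python illustration using process-level parallelism on one machine; function names are my own):

```python
from multiprocessing import Pool

def count_from_first_column(args):
    """Count N-Queens solutions whose row-0 queen sits in column `col`.
    Each such subtree is independent, so subtrees parallelize trivially."""
    n, col = args

    def solve(row, cols, diag1, diag2):
        if row == n:
            return 1
        total = 0
        for c in range(n):
            # A queen at (row, c) is attacked via its column or a diagonal.
            if c in cols or (row - c) in diag1 or (row + c) in diag2:
                continue
            total += solve(row + 1, cols | {c},
                           diag1 | {row - c}, diag2 | {row + c})
        return total

    # Seed the search with the fixed row-0 placement.
    return solve(1, {col}, {-col}, {col})

def parallel_nqueens(n, workers=4):
    """Fan the n independent first-column subtrees out across workers."""
    with Pool(workers) as pool:
        return sum(pool.map(count_from_first_column,
                            [(n, c) for c in range(n)]))

if __name__ == "__main__":
    print(parallel_nqueens(8))  # 92, the classic 8-queens solution count
```

The same split generalizes to multiple machines: each host takes a subset of first-row columns and the partial counts are summed, which is presumably the shape of the distributed version described above.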
Jun 17, 2013
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service, partnering with Verne Global, whose Icelandic datacenter is known for its green computing credentials.
Jun 12, 2013
Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. In response, IBM released a Redbooks publication, in part to help institutions move high performance computing applications to the cloud.
Jun 06, 2013
The San Diego Supercomputer Center launched a public cloud system for area universities, designed specifically to run on commodity hardware with high performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California system.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.