February 09, 2011
The Higher Education Funding Council for England (HEFCE) announced today that it has set aside £12.5 million for a broad shared services initiative, geared toward cloud computing, for England’s university system. The effort is set to go far beyond simple email and student services (the usual targets of university cloud implementations) as it works to create a network of cross-college shared services.
The bulk of the resources (around £10 million) will go toward the construction of shared storage and data management services, with significant effort and funding devoted to building the basic cloud infrastructure needed to support the diverse user base.
The remaining funding (approximately £2.5 million) will be focused on the creation and licensing of a wide array of applications used in higher education.
Recall that this investment hinges on the idea of “shared” services across a university system, not within the confines of a single campus or one-campus network. Shared storage and shared computational resources will have to serve many constituencies at once, including researchers, students, faculty and beyond.
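To make the pooling idea concrete, here is a minimal sketch, in Python, of how a shared storage pool might be apportioned across participating institutions using a max-min fair-share rule. The function, campus names and capacity figures are all hypothetical illustrations; HEFCE has not published an allocation mechanism.

```python
# Hypothetical sketch: apportioning a shared storage pool across
# institutions by max-min fair share. Names and numbers are
# illustrative only and do not reflect HEFCE's actual design.

def fair_share(pool_tb, demands_tb):
    """Split pool_tb of capacity across institutions, capping each
    institution's share at what it actually requested."""
    allocations = {}
    remaining = pool_tb
    # Satisfy the smallest requests first so that any unused headroom
    # rolls over to the larger consumers.
    for name, demand in sorted(demands_tb.items(), key=lambda kv: kv[1]):
        claimants = len(demands_tb) - len(allocations)
        granted = min(demand, remaining / claimants)
        allocations[name] = granted
        remaining -= granted
    return allocations

# Example: three campuses drawing on a 500 TB shared pool.
print(fair_share(500, {"campus_a": 120, "campus_b": 300, "campus_c": 400}))
# -> {'campus_a': 120, 'campus_b': 190.0, 'campus_c': 190.0}
```

A rule of this shape satisfies small requests in full and splits the remainder among heavier consumers, one common way shared infrastructure keeps a single large tenant from starving the rest.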
While on the surface this might sound like the dawn of an innovation-led push to enhance and extend university infrastructure, the switch to the cloud IT model came about due to massive budget cuts that are set to affect research, buildings and instruction as well as technology.
David Sweeney, Director of Research, Innovation and Skills at the Higher Education Funding Council for England, noted that during times of economic pressure, “it is critical that technology is used in a collaborative and cost-effective way to deliver services…cloud computing has the potential to do this in ways that will serve the academic community.”
Some might contend, however, that this is not the best way to serve a community of users with such a vast range of computational needs. Even if such an effort helps solve some cost issues, Peter Tinson, executive secretary of the Universities and Colleges Information Systems Association, argued that a shared services model across universities comes with problems of its own: “there is a degree of cultural resistance towards shared services among universities…there’s a fear of loss of control.”
In an article for EDUCAUSE Weekly, Shelton Waggener, Associate Vice Chancellor and CIO at the University of California, Berkeley, proposed that shared services for university systems are a positive move, despite concerns about losing control over one’s infrastructure. He stated, “Providing a single approach to IT solutions is very difficult given the diverse constituencies of a higher education campus. Discussions about solving the economy of scale issue frequently lead to complaints about the dreaded ‘centralization’ of IT services and the perceived loss of features and autonomy.”
Waggener continued that the time has come to “move away from the siloed, local delivery model” and that universities should consider reorganizing around the idea of IT supply and demand. “Rather than centralized versus decentralized, the real discussion for higher education IT needs to be about demand planning and service delivery and where those two activities most appropriately belong to achieve maximum benefit at both the institutional and local levels.”
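As a rough illustration of that supply-and-demand framing, the sketch below (again hypothetical, with invented service names, user counts and threshold) aggregates per-campus demand for each service and marks for central delivery only those services whose system-wide demand clears an assumed economy-of-scale break-even point.

```python
# Hypothetical sketch of demand planning vs. service delivery:
# centralize a service only where aggregate demand justifies the
# scale. The threshold and figures are invented for illustration.

CENTRALIZE_THRESHOLD = 10_000  # users; assumed break-even scale

def plan_delivery(demand_by_service):
    """Return 'central' or 'local' per service, based on whether
    system-wide demand justifies shared delivery."""
    plan = {}
    for service, campus_demand in demand_by_service.items():
        total = sum(campus_demand.values())
        plan[service] = "central" if total >= CENTRALIZE_THRESHOLD else "local"
    return plan

demand = {
    "email":       {"campus_a": 9_000, "campus_b": 7_500},
    "hpc_cluster": {"campus_a": 400,   "campus_b": 250},
}
print(plan_delivery(demand))
# -> {'email': 'central', 'hpc_cluster': 'local'}
```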
As universities see funding cuts scrape away at their IT resources, the dueling issues of cultural resistance and fear of losing control versus cost savings in the face of rising demand for IT services will create an interesting dynamic. As the cloud becomes a more commonplace, trusted way to handle needs at a university or system-wide scale, new ways to manage demand, priority, access and concerns about (de)centralization must be developed.
Posted by Nicole Hemsoth - February 09, 2011 @ 8:04 AM, Pacific Standard Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and discusses a range of overarching issues related to HPC-specific cloud topics in her posts.