March 14, 2011
Following the first hints of news about the tragedy in Japan, people around the world turned to the Internet for information in many formats: not just mass media coverage, but also firsthand impressions left on personal websites, blogs and social media outlets. During the Japanese disaster, a combination of social networks and the principles of cloud computing became the primary vehicle for gathering and sharing information.
In recent years, the number of individuals investing a great amount of their time in social networks has increased. The statistics speak volumes: Facebook weighed in at 600 million active users in January 2011, Twitter tallied 190 million users posting 65 million tweets a day in July 2010, and LinkedIn reported a figure of 90 million members in January 2011.
Depending on what the user wants to obtain, socially speaking, the chosen provider will differ. For example, LinkedIn would be selected for maintaining professional records of contacts the user knows personally and for sharing one's latest work achievements, Twitter would be a way of exchanging instant information with peers the user does not need to know personally, and Facebook would be a great way to reconnect with school buddies and share persistent information such as vacation pictures.
In fact, this last Facebook example is the one I use when giving general talks on cloud computing. I pose the following question to the audience: "Do you save locally the pictures in which you have been tagged?". The answer is always negative, as everybody understands that the pictures are already there in the social network, always available when needed, without any concern for what lies underneath. Does this last statement ring a bell?
So basically, more people than expected are using cloud computing without even noticing it.
With the advent of information technology, we have been able to use the Internet to get the latest news on the Japanese disaster. But more importantly, many of us who knew somebody living in the land of the rising sun needed a way to contact them. This is how news of the person finder service provided by Google spread instantly across Twitter. Once a user learned this service existed, a new tweet made its way to his or her contacts, putting this important information in front of almost every user of the network. The important fact here is that this valuable information arrived, no matter who stood between the source and me. I also find it interesting that a cloud computing service was being announced through a social cloud, a perfect integration between clouds.
The importance of social clouds has not been lost on the mass media, and all the big players now maintain a Twitter account for their last-minute announcements. During the Japanese disaster, companies and agencies such as Reuters, the BBC (which operates three different accounts depending on the type of information) and Al Jazeera saw their follower numbers increase. At the other end, each user curates a unique set of information providers that keeps his or her timeline updated with the news, no matter what the source.
But we cannot underestimate the power of individuals. Many users came to social clouds like Twitter to gather firsthand impressions instantly and to provide feedback in the form of vital information. This is the case of one of my Japanese contacts, a cloud computing expert, who wrote a brilliant blog post entitled "What should be tweeted in the disaster" containing basic guidelines for coping with the situation (conserving Internet resources, retweeting government announcements, …), which served as an important inspiration for this article.
As with public clouds, users during the Japanese disaster chose among different providers depending on their needs. The Twitter example of instant information has been explained above. However, many users sought tools for interacting in a more persistent way. Here, the provider was Facebook, by means of user groups or even the fundraising project operated by the American Red Cross within the "Causes" application.
In the end, identifying social networks as social clouds is an exercise in comparing philosophies, given that the examples shown in this article match the definition of a cloud. These social clouds offer "Information as a Service", allowing users to dynamically choose their sources and to correlate and expand the content at will. Users do not need to understand what or who is bringing the desired information or providing the tools for exchanging it: the social cloud provides a unique interface to its services.
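This "Information as a Service" idea can be sketched as a toy aggregator. The sketch below is illustrative only: the provider names and feed contents are hypothetical, and a real social cloud would naturally sit behind network APIs rather than in-process callables.

```python
# A minimal sketch of "Information as a Service": the user dynamically
# subscribes to sources, while the social cloud exposes one uniform
# interface that hides who or what actually delivers each item.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set


@dataclass
class SocialCloud:
    """Single entry point over many information providers."""
    providers: Dict[str, Callable[[], List[str]]] = field(default_factory=dict)
    subscriptions: Set[str] = field(default_factory=set)

    def register(self, name: str, feed: Callable[[], List[str]]) -> None:
        # A provider is anything that can produce a list of items.
        self.providers[name] = feed

    def subscribe(self, name: str) -> None:
        # The user chooses sources at will.
        self.subscriptions.add(name)

    def timeline(self) -> List[str]:
        # One interface, any source: the consumer never sees the provider.
        items: List[str] = []
        for name in sorted(self.subscriptions):
            items.extend(self.providers[name]())
        return items


cloud = SocialCloud()
cloud.register("agency_news", lambda: ["Agency: evacuation routes updated"])
cloud.register("personal_blog", lambda: ["Blog: firsthand report from Tokyo"])
cloud.subscribe("agency_news")
cloud.subscribe("personal_blog")
print(cloud.timeline())
```

The point of the sketch is the shape of the abstraction, not the plumbing: subscribers consume a single timeline without knowing which provider, or which path through the network, delivered each item.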
Finally, I would like to end this article by expressing my condolences to all of those who lost someone in Japan, and by offering my support to the whole country in these days of sorrow. And if you are considering donating money to a relief fund, I suggest you read this important article at CNNMoney.
About the Author
Dr. Jose Luis Vazquez-Poletti is Assistant Professor in Computer Architecture at Complutense University of Madrid (Spain), and a Cloud Computing Researcher at the Distributed Systems Architecture Research Group (http://dsa-research.org/). He is directly involved in EU funded projects, such as EGEE (Grid Computing) and 4CaaSt (PaaS Cloud), as well as many Spanish national initiatives.
From 2005 to 2009 his research focused on porting applications onto Grid Computing infrastructures, an activity that let him be "where the real action was". These applications spanned a wide range of areas, from Fusion Physics to Bioinformatics. During this period he acquired the skills needed to profile applications and make them benefit from distributed computing infrastructures. He also shared these skills in many training events organized within the EGEE Project and similar initiatives.
Since 2010 his research interests lie in different aspects of Cloud Computing, but always with real-life applications in mind, especially those pertaining to the High Performance Computing domain.
Posted by Jose Luis Vazquez-Poletti - March 14, 2011 @ 11:43 AM, Pacific Daylight Time