April 09, 2012
Before EC2, there was S3. On March 14, 2006, Amazon launched its first utility computing service, Simple Storage Service (S3), and within weeks there were 200 million objects packed onto its disk arrays. Now, on the heels of its sixth anniversary, the service is about to hit a major milestone: one trillion objects stored.
In a blog post, Amazon Evangelist Jeff Barr wrote that the storage service contained 905 billion objects at the end of Q1 2012. He also revealed that Amazon S3 routinely handles over 650,000 requests per second, up from 500,000 requests per second just three months earlier.
The cloud service has more than tripled over the last two years. A chart provided by the company depicts year-end totals as well as the most recent quarter (the latter not drawn to scale).
According to Barr:
The S3 object count continued to grow at a rapid clip even after we added object expiration and multi-object deletion at the end of the year. Every day, well over a billion objects are added via the S3 APIs, AWS Import/Export, the AWS Storage Gateway, all sorts of backup tools, and through Direct Connect pipes.
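For readers unfamiliar with the multi-object deletion feature Barr mentions, the sketch below shows roughly what a batch delete looks like through the S3 API. The bucket and key names are hypothetical, and the boto3 SDK used here is just one of many client libraries that expose the same operation.

```python
# Hypothetical sketch: removing several objects in a single request via
# S3's multi-object delete, using the boto3 SDK.
import boto3

s3 = boto3.client("s3")

# Bucket and key names below are made up for illustration.
response = s3.delete_objects(
    Bucket="example-log-archive",
    Delete={
        "Objects": [
            {"Key": "logs/2012-04-01.gz"},
            {"Key": "logs/2012-04-02.gz"},
            {"Key": "logs/2012-04-03.gz"},
        ],
        "Quiet": True,  # suppress per-object results in the response
    },
)
```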
As for what constitutes an object, Amazon's S3 FAQ explains:
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
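To make those size limits concrete, here is a minimal sketch of a large upload that follows the FAQ's Multipart Upload advice, again using boto3. The file, bucket, and key names are assumptions; the transfer configuration simply tells the SDK to split anything over the threshold into parts behind the scenes.

```python
# Hypothetical sketch: uploading a large file with S3 Multipart Upload,
# letting boto3's transfer manager handle the part-splitting automatically.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split anything larger than 100 MB into 100 MB parts, per the FAQ's advice.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
)

# File, bucket, and key names are made up for illustration.
s3.upload_file(
    Filename="backup-2012-04-09.tar",
    Bucket="example-backups",
    Key="nightly/backup-2012-04-09.tar",
    Config=config,
)
```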
Note to job seekers: with the growth of the storage service, it's only natural that Amazon needs more team members to support the effort. The company lists a number of open positions on both the business and technical sides.