Before EC2, there was S3. On March 14, 2006, Amazon launched its first utility computing service, Simple Storage Service (S3), and within weeks there were 200 million objects packed onto its disk arrays. Now, on the heels of its sixth anniversary, the service is about to hit a major milestone: one trillion objects stored.
In a blog post, Amazon Evangelist Jeff Barr wrote that the storage service contained 905 billion objects at the end of Q1 2012. He also revealed that Amazon S3 routinely handles over 650,000 requests per second, up from 500,000 requests per second just three months earlier.
The service's object count has more than tripled over the last two years. This chart, provided by the company, depicts year-end totals as well as the most recent quarter (which is not shown to scale):
According to Barr:
The S3 object count continued to grow at a rapid clip even after we added object expiration and multi-object deletion at the end of the year. Every day, well over a billion objects are added via the S3 APIs, AWS Import/Export, the AWS Storage Gateway, all sorts of backup tools, and through Direct Connect pipes.
As for what constitutes an object, Amazon's S3 FAQ explains:
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
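Those limits imply a simple decision rule for uploads: small objects go up in a single PUT, while anything past the 100-megabyte mark is a candidate for Multipart Upload, which splits the object into numbered parts. As a rough sketch (the 100 MB default part size and the `plan_upload` helper here are illustrative choices, not part of the S3 API; the 5 TB and 5 GB figures come from the FAQ above):

```python
import math

# Limits quoted in Amazon's S3 FAQ.
MAX_OBJECT_SIZE = 5 * 1024**4      # 5 TB: largest individual object
MAX_SINGLE_PUT = 5 * 1024**3       # 5 GB: largest single-PUT upload
MULTIPART_HINT = 100 * 1024**2     # 100 MB: consider multipart above this

def plan_upload(size_bytes, part_size=100 * 1024**2):
    """Return ('put', 1) or ('multipart', n_parts) for an object.

    A hypothetical planning helper: it only decides the strategy and
    part count; it does not perform any actual S3 calls.
    """
    if size_bytes > MAX_OBJECT_SIZE:
        raise ValueError("object exceeds the 5 TB S3 object limit")
    if size_bytes <= MULTIPART_HINT:
        return ("put", 1)          # small enough for one PUT
    # Split into fixed-size parts, rounding up for the final partial part.
    return ("multipart", math.ceil(size_bytes / part_size))

# A 50 MB object fits in one PUT; a 1 GB object splits into 11 parts
# at the 100 MB default part size.
print(plan_upload(50 * 1024**2))   # ('put', 1)
print(plan_upload(1024**3))        # ('multipart', 11)
```

In practice the AWS SDKs handle this split automatically once an upload crosses a configurable multipart threshold, so application code rarely computes part counts by hand.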
Note to job seekers: with the growth of the storage service, it's only natural that Amazon needs more team members to support the effort. The company lists a number of open positions on both the business and technical sides.