November 28, 2012
SEATTLE, Nov. 28 – Amazon Web Services Inc., an Amazon.com company, today announced the limited preview of Amazon Redshift, a fast and powerful, fully managed, petabyte-scale data warehouse service in the cloud. Amazon Redshift enables customers to dramatically increase the speed of query performance when analyzing virtually any size data set, using the same SQL-based business intelligence tools they use today. With a few clicks in the AWS Management Console, customers can launch a Redshift cluster, starting with a few hundred gigabytes and scaling to a petabyte or more, for under $1,000 per terabyte per year – one tenth the price of most data warehousing solutions available to customers today. To learn more about Amazon Redshift, visit http://aws.amazon.com/redshift.
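Beyond the console workflow described above, a cluster can also be provisioned programmatically through the Amazon Redshift API. The following is a minimal, illustrative sketch using the AWS SDK for Python (boto3); the cluster identifier, credentials, and the dw.hs1.xlarge node type are assumptions for illustration, not details taken from this announcement.

# Minimal sketch: provisioning an Amazon Redshift cluster via the API with boto3.
# All identifiers and credentials below are illustrative placeholders.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

response = redshift.create_cluster(
    ClusterIdentifier="example-warehouse",       # hypothetical cluster name
    ClusterType="multi-node",
    NodeType="dw.hs1.xlarge",                    # assumed 2-terabyte node type
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",  # placeholder; use real secrets management
    DBName="analytics",
)
print(response["Cluster"]["ClusterStatus"])      # typically "creating" right after the call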
Self-managed, on-premises data warehouses require significant time and resources to administer, especially for large datasets. Loading, monitoring, tuning, taking backups, and recovering from faults are complex and time-consuming tasks. On top of that, building, maintaining, and growing a traditional data warehouse is flat-out expensive. Larger companies have resigned themselves to paying this high cost for data warehousing, while smaller companies often find the hardware and software costs prohibitively expensive, leaving most of these organizations without a data warehousing capability. Amazon Redshift aims to change this. Amazon Redshift manages all of the work needed to set up, operate, and scale a data warehouse, from provisioning capacity to monitoring and backing up the cluster, to applying patches and upgrades. Scaling an Amazon Redshift cluster to improve performance or increase capacity is simple and incurs no downtime, and the service continuously monitors the health of the cluster and automatically replaces failed components as needed. Amazon Redshift is also priced at a fraction of the cost of existing data warehouses, enabling larger companies to substantially reduce their costs and smaller companies to take advantage of the analytic insights that come from using a powerful data warehouse.
"Over the past two years, one of the most frequent requests we've heard from customers is for AWS to build a data warehouse service," said Raju Gulabani, Vice President of Database Services, AWS. "Enterprises are tired of paying such high prices for their data warehouses and smaller companies can't afford to analyze the vast amount of data they collect (often throwing away 95% of their data). This frustrates customers as they know the cloud has made it easier and less expensive than ever to collect, store, and analyze data. Amazon Redshift not only significantly lowers the cost of a data warehouse, but also makes it easy to analyze large amounts of data very quickly. While actual performance will vary based on each customers' specific query requirements, our internal tests have shown over 10 times performance improvement when compared to standard relational data warehouses. Having the ability to quickly analyze petabytes of data at a low cost changes the game for our customers."
Amazon Redshift uses a number of techniques, including columnar data storage, advanced compression, and high-performance I/O and networking, to achieve significantly higher performance than traditional databases for data warehousing and analytics workloads. By distributing and parallelizing queries across a cluster of inexpensive nodes, Amazon Redshift makes it easy to obtain high performance without requiring customers to hand-tune queries, maintain indices, or pre-compute results. Amazon Redshift is certified by popular business intelligence tools, including Jaspersoft and MicroStrategy. Over twenty customers, including Flipboard, NASA/JPL, Netflix, and Schumacher Group, are in the Amazon Redshift private beta program.
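To illustrate what this means in practice, the sketch below defines a table with explicit per-column compression encodings and distribution and sort keys, connecting over the standard PostgreSQL wire protocol that Amazon Redshift supports; the table, endpoint, and credentials are hypothetical examples, not part of this announcement.

# Minimal sketch (hypothetical table, endpoint, and credentials): because Amazon Redshift
# speaks the PostgreSQL wire protocol, a standard driver such as psycopg2 can issue DDL
# that declares column compression and data distribution explicitly; no indexes are
# created or maintained.
import psycopg2

ddl = """
CREATE TABLE IF NOT EXISTS sales (
    sale_id   BIGINT        ENCODE delta,
    buyer_id  INTEGER       ENCODE lzo,
    sale_time TIMESTAMP,
    amount    DECIMAL(12,2)
)
DISTKEY (buyer_id)    -- rows are spread across nodes by buyer_id
SORTKEY (sale_time);  -- rows are stored sorted by sale_time for fast range scans
"""

conn = psycopg2.connect(
    host="example-warehouse.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439,
    dbname="analytics",
    user="admin",
    password="REPLACE_WITH_A_SECRET",
)
with conn, conn.cursor() as cur:
    cur.execute(ddl)
conn.close()

In a typical workflow, data would then be bulk-loaded in parallel from Amazon S3 with the COPY command rather than inserted row by row.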
"At Netflix, we deliver personalized recommendations for our millions of subscribers by analyzing large volumes of data, and are always looking for ways to improve our service," said Kurt Brown, Director, Data Science & Engineering Platform at Netflix. "We're very excited about the cost-disruptive and cloud-based model of Amazon Redshift. It's sure to shake up the data warehousing industry."
"We are excited about being able to use this new service to take our cloud usage even farther and run a large scale data warehouse in the cloud for our engineering, science, and IT data," said Tom Soderstrom, Chief Technology Officer, Office of the CIO, NASA/JPL. "We're delighted to have a new, fast and low-cost option for analyzing massive amounts of data. This new service will also allow us to create new types of Big Data analytics that will lead to new discoveries."
"The Amazon Enterprise Data Warehouse manages petabytes of data for every group at Amazon. We are seeing significant performance improvements leveraging Amazon Redshift over our current multi-million dollar data warehouse," said Erik Selberg, Manager of the Amazon.com Data Warehouse team. "Some multi-hour queries finish in under an hour, and some queries that took 5-10 minutes on our current data warehouse are now returning in seconds with Amazon Redshift. Early estimates show the cost of Amazon Redshift will be well under 1/10th the cost of our existing solution. Amazon Redshift is providing us with a cost-effective way to scale with our growing data analysis needs."
Amazon Redshift includes technology components licensed from ParAccel and is available with two underlying node types, holding either 2 terabytes or 16 terabytes of compressed customer data per node. A single cluster can scale up to 100 nodes, and on-demand pricing starts at just $0.85 per hour for a 2-terabyte data warehouse, scaling linearly up to a petabyte and more. Reserved instance pricing lowers the effective price to $0.228 per hour, or under $1,000 per terabyte per year – less than one tenth the price of comparable technology available to customers today.
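As a quick back-of-the-envelope check of those figures (assuming the $0.228 per hour effective rate applies to a single 2-terabyte node, which is what the per-terabyte number implies):

# Back-of-the-envelope check of the reserved pricing quoted above.
HOURS_PER_YEAR = 24 * 365                 # 8,760 hours
effective_hourly_usd = 0.228              # effective reserved price per node-hour
node_capacity_tb = 2                      # compressed customer data per 2 TB node

annual_cost_per_node = effective_hourly_usd * HOURS_PER_YEAR     # about $1,997 per node-year
annual_cost_per_tb = annual_cost_per_node / node_capacity_tb     # about $999 per TB-year

print(f"${annual_cost_per_node:,.0f} per node per year")
print(f"${annual_cost_per_tb:,.0f} per terabyte per year")

The result lands just under the $1,000 per terabyte per year figure quoted above.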
To learn more and sign up for the limited preview of Amazon Redshift, visit http://aws.amazon.com/redshift.
About Amazon Web Services
Launched in 2006, Amazon Web Services (AWS) began exposing key infrastructure services to businesses in the form of web services – now widely known as cloud computing. The ultimate benefit of cloud computing, and AWS, is the ability to leverage a new business model and turn capital infrastructure expenses into variable costs. Businesses no longer need to plan and procure servers and other IT resources weeks or months in advance. Using AWS, businesses can take advantage of Amazon's expertise and economies of scale to access resources when their business needs them, delivering results faster and at a lower cost. Today, Amazon Web Services provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers the businesses of hundreds of thousands of enterprise, government and startup customers in 190 countries around the world. AWS offers over 30 different services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3) and Amazon Relational Database Service (Amazon RDS). AWS services are available to customers from data center locations in the U.S., Brazil, Europe, Japan, Singapore and Australia.
About Amazon.com
Amazon.com, Inc., a Fortune 500 company based in Seattle, opened on the World Wide Web in July 1995 and today offers Earth's Biggest Selection. Amazon.com, Inc. seeks to be Earth's most customer-centric company, where customers can find and discover anything they might want to buy online, and endeavors to offer its customers the lowest possible prices. Amazon.com and other sellers offer millions of unique new, refurbished and used items in categories such as Books; Movies, Music & Games; Digital Downloads; Electronics & Computers; Home & Garden; Toys, Kids & Baby; Grocery; Apparel, Shoes & Jewelry; Health & Beauty; Sports & Outdoors; and Tools, Auto & Industrial. Amazon Web Services provides Amazon's developer customers with access to in-the-cloud infrastructure services based on Amazon's own back-end technology platform, which developers can use to enable virtually any type of business. Kindle Paperwhite is the most advanced e-reader ever constructed, with 62% more pixels and 25% increased contrast, a patented built-in front light for reading in all lighting conditions, extra-long battery life, and a thin and light design. The latest generation Kindle, the lightest and smallest Kindle, now features new, improved fonts and faster page turns. Kindle Fire HD features a stunning custom high-definition display, exclusive Dolby audio with dual stereo speakers, high-end, laptop-grade Wi-Fi with dual-band support, dual antennas and MIMO for faster streaming and downloads, enough storage for HD content, and the latest generation processor and graphics engine, and it is available in two display sizes – 7" and 8.9". The large-screen Kindle Fire HD is also available with 4G wireless, and comes with a groundbreaking $49.99 introductory 4G LTE data package. The all-new Kindle Fire features a 20% faster processor, 40% faster performance, twice the memory, and longer battery life.