March 18, 2013
REDWOOD CITY, Calif., March 18 — With the rise of big data has come the development of modern technologies capable of storing massive and growing amounts of multi-structured data. Hadoop and, increasingly, NoSQL databases have emerged as widely adopted platforms for wrangling semi-structured big data into contained, organized formats. Unlike traditional relational databases, which impose flat, rigid schemas across entire tables, NoSQL database systems accommodate large quantities of semi-structured data whose shape varies from record to record.
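To make that contrast concrete, here is a minimal sketch, with invented database, collection, and field names, of how two differently shaped records can sit side by side in a schemaless store, using MongoDB's pymongo driver:

```python
# Hypothetical example: documents with different shapes in one collection.
# Assumes a MongoDB instance on localhost and the pymongo driver installed.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
articles = client["demo"]["articles"]  # invented database/collection names

# A relational table would force both rows into one rigid schema;
# here each document simply carries whatever fields it has.
articles.insert_one({"title": "Big data primer", "tags": ["hadoop", "nosql"]})
articles.insert_one({"title": "Q1 earnings call", "speaker": "CFO",
                     "transcript_words": 5400})
```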
LucidWorks, the company transforming the way people access information, today announced the integration between LucidWorks Search and MongoDB. The combined solution brings search and analysis capabilities to MongoDB so organizations can easily search their MongoDB NoSQL databases and uncover actionable insights in reams of semi-structured data. Together, LucidWorks and MongoDB extend the security and scalability benefits that LucidWorks Search brings to enterprises, driving innovation and enabling new ways to search and analyze big data.
LucidWorks Search is the development platform that accelerates and simplifies building highly secure, scalable and cost-effective search-based applications. The platform provides deep insights into both the data and the way users interact with that data.
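LucidWorks Search ships its own connector framework, and the announcement does not detail its API; purely as a sketch of the underlying idea, the snippet below reads documents out of MongoDB and pushes them to a Solr-style JSON update endpoint, since LucidWorks Search is built on Apache Lucene/Solr. The URL, core name, and field suffixes are assumptions, not the product's actual interface.

```python
# Sketch only: mirror MongoDB documents into a Solr-style search index.
# Endpoint URL, core name, and field mapping are hypothetical.
import requests
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")
docs = mongo["demo"]["articles"].find()

payload = [
    {
        "id": str(d["_id"]),            # Solr requires a unique key field
        "title_t": d.get("title", ""),  # *_t: common dynamic text field
        "tags_ss": d.get("tags", []),   # *_ss: multi-valued string field
    }
    for d in docs
]

# Post to Solr's JSON update handler; commit=true makes docs searchable.
resp = requests.post(
    "http://localhost:8983/solr/collection1/update?commit=true",
    json=payload,
)
resp.raise_for_status()
```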
MongoDB is a scalable, high-performance, open source NoSQL database that allows schemas to vary across documents and to change quickly as applications evolve, while still providing the functionality developers expect from relational databases. The technology has emerged as the NoSQL database of choice, with more than 3.8 million downloads.
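As a concrete illustration of that relational-style functionality, the hypothetical snippet below (all names invented) adds a secondary index and runs a filtered, sorted query through pymongo:

```python
# Hypothetical example: a secondary index plus a filtered, sorted query.
from pymongo import ASCENDING, DESCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
articles = client["demo"]["articles"]

# Secondary indexes behave much as they do in a relational database.
articles.create_index([("tags", ASCENDING)])

# Filter by field value, sort, and limit, with no fixed schema required.
for doc in (articles.find({"tags": "nosql"})
                    .sort("title", DESCENDING)
                    .limit(10)):
    print(doc["title"])
```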
"Our mission is to help companies quickly and effectively innovate by transforming the way they access and analyze the massive amounts of unstructured data they are accumulating. LucidWorks Search speeds that time to discovery by making it easy for any business user, not just heavily trained data scientists, to dig through unstructured content in MongoDB and apply analytics to uncover new insights." - Grant Ingersoll , CTO and Co-founder, LucidWorks
LucidWorks transforms the way people access information to enable data-driven decisions. LucidWorks is the only company that delivers enterprise-grade search development platforms built on the power of Apache Lucene/Solr open source search. Employing one quarter of the core committers to the Apache Lucene/Solr project, LucidWorks is the largest supporter of open source search in the industry. LucidWorks Search delivers unmatched scalability to billions of documents, with sub-second query and faceting response time. LucidWorks Big Data tightly integrates key Apache projects needed to build and deploy applications requiring access to multi-structured data. Customers include AT&T, ADP, Sears, Ford, Verizon, Cisco, Zappos, Raytheon, The Guardian, The Smithsonian Institution, The Motley Fool, Qualcomm, Taser, eHarmony and many other household names around the world. LucidWorks' investors include Shasta Ventures, Granite Ventures, Walden International and In-Q-Tel.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and apply them to large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 | The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system that, it notes, acts as a desktop supercomputer.
May 16, 2013 | When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges and opportunities afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
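For readers new to the model, here is a minimal vector-add sketch using the pyopencl bindings; device selection is deliberately left to create_some_context, since the point of OpenCL is that the same kernel can run on any CPU or GPU with an OpenCL driver. The array sizes and names are illustrative only.

```python
# Minimal OpenCL vector add via pyopencl; runs on any OpenCL device.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()   # picks an available CPU or GPU device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel itself is plain OpenCL C, portable across device types.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```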