December 11, 2006
TIBCO Software Inc. announced the
availability of a distributed service container -- TIBCO ActiveMatrix
Service Grid -- for creating and deploying services in a heterogeneous
environment. This offering represents one in a family of
service-oriented architecture (SOA) products designed to help reduce
SOA complexity and simplify the deployment of new business services
through service virtualization and governance.
TIBCO claims that traditional approaches to SOA are not designed to deal with the inherent heterogeneity of most IT environments. The reality is that most companies have a variety of technology platforms in place, including Java, Java EE, .NET, C++, and many more. This heterogeneity greatly increases the time and effort involved in service creation and deployment, and as a result, efforts to realize the full benefits of SOA have been stymied. TIBCO ActiveMatrix Service Grid addresses this complexity by providing distributed service containers for the creation and deployment of reusable services in a virtualized manner.
"As SOA initiatives evolve, organizations are looking to develop new services," said Neil Macehiter, research director at Macehiter Ward-Dutton. "These efforts are highlighting a number of technology gaps in areas such as service deployment and lifecycle management, which have to be plugged manually leading to inconsistency and additional cost and risk. Service infrastructure must evolve to address these limitations if organisations are to maximise the return from their investments in SOA."
TIBCO ActiveMatrix Service Grid is a unified service container that supports service run-time environments including Java and .NET, with plans to support C++, Perl, Ruby, and COBOL. Using this product, developers can write services from scratch, or take an existing service such as an Enterprise JavaBean on IBM WebSphere or BEA WebLogic and expose or deploy it as a managed service within the ActiveMatrix Service Grid.
ActiveMatrix Service Grid, based on the Java Specification Request (JSR) 208 and Service Component Architecture (SCA) specifications, also allows organizations to add their own service run-time environments. The product enables SOA-specific functionality such as policy management, service deployment, and service management to be configured at runtime by administrators.
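To illustrate the SCA model the product builds on, a composite descriptor declares components, their implementations, and the wiring between them as configuration rather than code. The sketch below is a generic SCA 1.0-style composite; all names (CustomerComposite, CustomerServiceImpl, and so on) are hypothetical and not taken from TIBCO's documentation.

```xml
<!-- Hypothetical SCA composite; names are illustrative only -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="CustomerComposite">

  <!-- Promote a component's service to external consumers over SOAP -->
  <service name="CustomerService" promote="CustomerComponent">
    <binding.ws uri="http://localhost:8080/CustomerService"/>
  </service>

  <!-- The component: here a Java implementation, though SCA allows
       other run-time types behind the same service contract -->
  <component name="CustomerComponent">
    <implementation.java class="example.CustomerServiceImpl"/>
    <!-- Wiring to a backend component is configuration, not code -->
    <reference name="accounts" target="AccountComponent"/>
  </component>

  <component name="AccountComponent">
    <implementation.java class="example.AccountServiceImpl"/>
  </component>
</composite>
```

Because bindings and wiring live in the descriptor, an administrator can, in principle, change protocols or rewire components without touching implementation code, which is the container-managed flexibility the article describes.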
ActiveMatrix Service Grid enables companies to achieve extreme scalability by providing an open, extensible service container based on the proven foundation of TIBCO's messaging and Enterprise Service Bus technology. With ActiveMatrix Service Grid, companies can automatically deploy services across machines or co-locate them within an operating system process, dynamically move services to different machines, and add distributed load balancing or fault tolerance. Administrators can add protocols such as SOAP over JMS, HTTP, WS-ReliableMessaging, and TIBCO Rendezvous at runtime through configuration without requiring redeployment.
"An organization's competitiveness and survival hinges on the free flow of information and services and how quickly they can react to change," said Jeff Kristick, senior director, product marketing, TIBCO. "Allowing functionality to be container-managed and not hard-coded reduces integration burdens, increases reuse and ultimately improves business responsiveness by ensuring developers can easily create and deploy interoperable services."
TIBCO ActiveMatrix Service Grid is now available to the public.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources to tackle large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational demands that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
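The latency penalty is easy to quantify with a back-of-the-envelope model. The sketch below is illustrative only; the latency and iteration figures are assumptions, not measurements from the Bonn study. It estimates what fraction of wall time a tightly coupled solver spends waiting on the network when each iteration performs many small, latency-bound message exchanges.

```python
def comm_fraction(latency_s, exchanges_per_iter, compute_s_per_iter):
    """Fraction of wall time spent on network latency, assuming
    exchanges are latency-bound (message bandwidth is ignored)."""
    comm = latency_s * exchanges_per_iter
    return comm / (comm + compute_s_per_iter)

# Illustrative numbers: an InfiniBand-class cluster (~2 microseconds
# round trip) vs. commodity cloud networking (~200 microseconds),
# with 100 exchanges and 50 ms of computation per solver iteration.
cluster = comm_fraction(2e-6, 100, 0.050)
cloud = comm_fraction(200e-6, 100, 0.050)
print(f"cluster: {cluster:.1%} of time in communication")
print(f"cloud:   {cloud:.1%} of time in communication")
```

Under these assumed numbers the cluster loses well under one percent of its time to latency while the cloud setup loses more than a quarter, which is why interconnect latency, rather than raw core count, often dominates cloud CFD performance.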
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.