March 12, 2007
SANTA CLARA, Calif., March 6 -- At the EclipseCon conference,
Paremus today announced the release of several new Infiniflow products
that will transform the way the world runs composite applications.
Infiniflow Enterprise Service Fabric (ESF) is a robust, lightweight,
standards-based SOA platform capable of supporting the simplest
application and the most sophisticated composite business system.
Infiniflow increases business agility while simultaneously reducing
cost and complexity.
The Infiniflow ESF suite consists of a core framework -- Infiniflow DSF, the Distributed Services Framework -- and a number of optional Fabric Processing Patterns and modules. The new Fabric Processing Pattern for Enterprise Service Grids is available today with Complex Event Processing (CEP) and Distributed Transaction Processing (DTP) to follow shortly. Paremus also announced the availability of Utility Service Modules for fabric management functions including Vision, Audit and Chargeback together with an SCA Assembly Tool.
Infiniflow is Java-based and leverages two industry standards that are set to transform the enterprise application world, namely OSGi and SCA (Service Component Architecture).
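To make the SCA side of this concrete, here is a minimal sketch of an SCA 1.0 assembly descriptor, the kind of artifact an SCA assembly tool produces. All names and packages here (OrderProcessing, com.example.*) are hypothetical illustrations, not taken from Infiniflow itself:

```xml
<!-- example.composite: a minimal, hypothetical SCA 1.0 assembly descriptor -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="OrderProcessing">
  <!-- promote the component's service so callers outside the composite can use it -->
  <service name="OrderService" promote="OrderComponent"/>
  <!-- a component backed by a plain Java class -->
  <component name="OrderComponent">
    <implementation.java class="com.example.orders.OrderServiceImpl"/>
    <!-- wire a dependency onto another component in the composite -->
    <reference name="billing" target="BillingComponent"/>
  </component>
  <component name="BillingComponent">
    <implementation.java class="com.example.billing.BillingServiceImpl"/>
  </component>
</composite>
```

At runtime an SCA container resolves these wires; on an OSGi-based platform the Java implementations would typically be packaged as OSGi bundles, so individual components can be deployed and replaced dynamically.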
"Spring, the Service Component Architecture (SCA) standard, and the OSGi component model are fundamentally redefining the way that enterprise and SaaS applications are developed, deployed and managed," said Dr. Richard Nicholson, CEO of Paremus. "Collectively, these technologies promise true component re-use and the dynamic assembly and maintenance of the most sophisticated composite business services. Infiniflow ESF delivers on this promise, providing a state-of-the-art autonomic runtime platform for the next generation of parallel, high-throughput and transactional composite business services."
Among its features and benefits, the Infiniflow Enterprise Service Fabric suite provides a distributed, component-based, service-oriented platform for a broad range of IT solutions, serving organizations of all sizes across many industries.
"Despite the wide applicability of Infiniflow, there is commonality in a number of fundamental requirements," said Mike Francis, sales and marketing director at Paremus. "Organizations deploying Infiniflow are able to reduce complexity and increase business agility, improve productivity, reduce operational and capital costs and gain competitive advantage."
A commercial, time-limited evaluation download of Infiniflow DSF is available from www.paremus.com, and a GPL open-source developer release is available as the Newton project at www.codecauldron.org.
Paremus offers Infiniflow -- the Enterprise Service Fabric -- a family of lightweight, distributed, autonomic SOA platforms for highly dynamic composite business applications. Leveraging the OSGi and Service Component Architecture standards, Infiniflow allows users to realize the full potential of distributed computing for their re-usable, composite, service-oriented applications. Infiniflow's distributed autonomic runtime environment offers maximum IT agility, while its advanced resource management technology optimizes resources automatically to dramatically reduce datacenter operating costs. Infiniflow provides transparent support for composite POJO and Spring-based business applications and makes it simple to add resilience to, distribute, scale and manage these applications at runtime. Identified by Gartner as a Visionary in the Enterprise Application Server marketplace, Paremus positions Infiniflow as the ideal next-generation solution to deliver competitive advantage for your enterprise today.
The Cauldron community is a directed open source community interested in addressing the fundamental problems of designing the next generation of massively distributed adaptive systems. Rather than starting with a particular architectural bias or allegiance, Cauldron looks for guidance from the many complex adaptive systems with which we interact on a daily basis. Some 20-plus years of academic research into biological and, more generally, Complex Adaptive Systems (CAS) provide the foundations and context for the community's endeavors.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational demand that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
Financial institutions are the private-sector industry least likely to adopt public cloud services for data storage. Holding the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are mostly moving toward private cloud services, and doing so at great cost.
May 16, 2013 |
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud, benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013 |
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013 |
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.