February 25, 2013
BARCELONA, Spain and SAN FRANCISCO, Calif., Feb. 25 — Procera Networks, Inc., the global intelligent policy enforcement company, and Tilera Corporation, the leader in 64-bit TILE-Gx manycore general purpose processors, today announced they have achieved 200Gbps of Deep Packet Inspection (DPI) performance by deploying the Procera Network Application Visibility Library (NAVL) on the TILExtreme-Gx Duo platform, utilizing approximately 70 percent of the TILE-Gx cores.
This industry-best performance addresses the growing requirements of telecommunications and enterprise security providers, who need application-aware policies to manage their networks efficiently without sacrificing user quality of experience (QoE). The solution can be deployed in a variety of networking scenarios including Network Security (IDS/IPS, DPI, DLP), Cyber Security, Network Monitoring, Data Forensics, Analytics and Big Data processing.
"With traffic volume and application types exploding in service provider and enterprise networks, there is a growing demand for high-throughput, accurate and real-time DPI to enable network optimization and tiered billing," said Jason Richards, senior vice president of business development for Procera Networks. "The accuracy and breadth of applications and protocols supported by the Procera NAVL DPI engine, coupled with the 200Gbps throughput we have achieved on the TILE-Gx, is the perfect solution to meet this rapidly growing and complex demand."
Tilera's TILExtreme-Gx Duo platform packs 288 cores, with eight TILE-Gx36 processors in a compact 1U rack-mountable device. The TILE-Gx manycore processor ranges from 9 to 72 cores and is available in various form factors, from half-length PCIe cards to 288 cores in a 1U chassis. This enables customers to choose 5Gbps, 10Gbps, 40Gbps, 100Gbps or 200Gbps DPI solutions depending on overall solution requirements.
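To put those figures in rough perspective, the headline numbers work out to about 1Gbps of DPI throughput per active core. The back-of-the-envelope sketch below assumes the stated 288 cores, 70 percent utilization, and linear scaling across cores; the announcement does not detail how performance actually scales, so treat this as illustrative arithmetic only.

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
# Assumption (not from the announcement): throughput scales linearly with cores.
TOTAL_CORES = 288    # eight TILE-Gx36 processors in the 1U TILExtreme-Gx Duo
UTILIZATION = 0.70   # ~70 percent of cores used for DPI
TOTAL_GBPS = 200     # headline DPI throughput

active_cores = TOTAL_CORES * UTILIZATION       # ~201.6 cores doing DPI
gbps_per_core = TOTAL_GBPS / active_cores      # ~0.99 Gbps per active core

# Rough core budgets for the solution tiers mentioned above,
# keeping 30 percent of cores free for other functions.
for target_gbps in (5, 10, 40, 100, 200):
    cores_needed = target_gbps / gbps_per_core / UTILIZATION
    print(f"{target_gbps:>4} Gbps -> roughly {cores_needed:.0f} cores total")
```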
"We have once again shattered the record books by delivering the industry's highest performance per watt DPI solution with our TILE-Gx 64-bit processor," said Michael Zimmerman, vice president of marketing for Tilera Corporation. "Our focus on standard software programmability, maximizing performance per watt and scalability enabled us to deliver such high performance. Further, this solution utilizes about 70 percent of the available processing on the platform, leaving the rest for other functions -- such as billing, policy enforcement and end user customization."
Procera's NAVL is an easily integrated DPI engine that provides real-time, Layer-7 classification of network application traffic. NAVL gives telecommunications providers a variety of application-aware functions to ensure equitable access to resources for all users and to create tiered classes of service for billing. Procera's NAVL DPI engine and OEM products are the result of its recent acquisition of Vineyard Networks.
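For readers unfamiliar with how an embeddable DPI engine is typically consumed, the minimal sketch below shows the general shape of such an integration: packets are fed to a classifier, a Layer-7 application label comes back, and the verdict is cached per flow. The names here (NavlEngine, classify, on_packet) are hypothetical stand-ins for illustration, not Procera's actual NAVL API.

```python
# Hypothetical sketch of driving an embeddable Layer-7 DPI engine.
# NavlEngine and its methods are illustrative stand-ins, not the real NAVL API.

class NavlEngine:
    """Toy classifier: matches leading payload bytes to an application label."""
    SIGNATURES = {
        b"GET ": "http",          # HTTP request line
        b"\x16\x03": "tls",       # TLS handshake record header
        b"BitTorrent": "bittorrent",
    }

    def classify(self, payload):
        for magic, app in self.SIGNATURES.items():
            if payload.startswith(magic):
                return app
        return "unknown"

engine = NavlEngine()
flows = {}  # flow_id -> application label, once classified

def on_packet(flow_id, payload):
    # Classify each flow once, then reuse the cached verdict so the
    # per-packet fast path stays cheap -- the usual pattern for inline DPI.
    if flow_id not in flows:
        flows[flow_id] = engine.classify(payload)
    return flows[flow_id]

print(on_packet(1, b"GET /index.html HTTP/1.1"))  # -> http
```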
Tilera and Procera Networks will demonstrate these capabilities at Mobile World Congress in Barcelona at the Fira Gran Via, February 25-28, 2013 in hall 5 stand 5I10 and at RSA in San Francisco at the Moscone Convention Center, February 25-March 1, 2013 in booth #2739.
About Tilera Corporation
Tilera Corporation is the developer of the highest-performance, low-power, general purpose manycore processors. Tilera is headquartered in San Jose, Calif., with additional locations worldwide.
About Procera Networks, Inc.
Procera Networks, Inc. delivers Intelligent Policy Enforcement solutions designed for carriers, service providers and enterprises worldwide. Procera's PacketLogic solutions provide actionable intelligence and policy enforcement to ensure a high-quality experience for any Internet-connected device. Network operators deploy Procera's technology to gain real-time visibility, superior performance and scalability, and to deliver personalized services for millions of enterprises and consumers.
Source: Procera Networks