December 05, 2005
Infoblox Inc., a developer of infrastructure for identity-driven networks, and Lucent Technologies announced that the two companies have teamed to deliver a highly scalable, secure and easy-to-deploy IP address management (IPAM) solution for mid-to-large enterprise customers. Under an agreement between the companies, Infoblox will resell the combined Lucent-Infoblox solution through its authorized channel partners globally.
The new IP address management solution, which includes support for the domain name system (DNS) and the dynamic host configuration protocol (DHCP), is a powerful, integrated combination of Infoblox's network appliance and distributed Grid management technologies together with Lucent's VitalQIP IP Address Management software. VitalQIP enables customers to increase their security, simplify management and reduce the costs of deploying DNS, DHCP and IP addresses in distributed networks.
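To make the IPAM concept concrete, the following toy sketch shows how an address-management tool might track allocations within a subnet using Python's standard ipaddress module. It is purely illustrative and assumes a standard IPv4 subnet; nothing here reflects how VitalQIP or the Infoblox appliances are actually implemented, and all names are hypothetical.

```python
import ipaddress

class SubnetPool:
    """Toy IPAM allocator: hands out free host addresses from one subnet."""

    def __init__(self, cidr):
        self.network = ipaddress.ip_network(cidr)
        self.allocated = set()

    def allocate(self):
        """Return the next free host address, or None if the pool is exhausted."""
        for host in self.network.hosts():
            if host not in self.allocated:
                self.allocated.add(host)
                return host
        return None

    def release(self, addr):
        """Return an address to the pool."""
        self.allocated.discard(ipaddress.ip_address(addr))

    def utilization(self):
        """Fraction of usable host addresses currently allocated
        (assumes a conventional subnet with network/broadcast addresses)."""
        usable = self.network.num_addresses - 2
        return len(self.allocated) / usable

pool = SubnetPool("192.168.1.0/30")
first = pool.allocate()   # 192.168.1.1
second = pool.allocate()  # 192.168.1.2
third = pool.allocate()   # None -- only two usable hosts in a /30
```

A real IPAM system layers DHCP lease state, DNS record updates, and audit history on top of this kind of allocation bookkeeping, which is what makes integrated DNS/DHCP/IPAM products attractive compared with spreadsheets.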
The new solution from Infoblox and Lucent supports a growing trend to deliver powerful, appliance-based IPAM solutions targeted towards large, distributed enterprises.
IPAM is increasingly recognized as a critical element for IP network infrastructure. For example, a recent industry analyst report from research firm IDC states that IT managers and CIOs must understand that the proper distribution and utilization of IP resources are the very foundation to establishing a reliable IP-centric network for the organization. Further, the report states: "As the majority of the DNS/DHCP/IPAM tools today are based on software packages on servers, IDC expects many of the solutions will migrate into pre-installed purposeful appliances specifically to address DNS/DHCP issues in the network."
In order to ensure high availability and performance for DNS and DHCP services, most organizations distribute these critical services across multiple sites using software deployed on a collection of general-purpose hardware platforms and operating systems. Installing, securing, managing and updating these distributed servers places a significant burden on IT staff and increases the operational costs for delivering DNS, DHCP and IPAM services. In contrast, Infoblox's purpose-built appliances offer a secure and easy-to-manage platform for distributed DNS and DHCP services. The hardened appliance design enables the offloading of DNS and DHCP services from general-purpose servers that typically handle multiple functions and require significant administration overhead. Instead, these services can be easily managed by the Infoblox appliances that are inherently secure, reliable and scalable, freeing network administration resources and providing increased security for these critical network identity services.
"Robust IP address management services are increasingly recognized as critical for any network, and yet many organizations still operate without these vital tools, in part because of the cost and complexity of deploying and managing remote servers," said Richard Kagan, vice president of marketing at Infoblox. "By pioneering appliance-based IPAM services, Infoblox has brought advanced network identity services to a much broader range of organizations. With today's announcement, we further extend the reach of Infoblox's advanced IP address management by bringing the operational, security and cost benefits of appliance-based deployment and distributed Grid technology together with Lucent's industry-leading VitalQIP solution."
"Infoblox has built a strong market position selling a robust and reliable DNS/DHCP appliance solution," said Rand Edwards, director of marketing for Lucent Worldwide Services. "The new VitalQIP Solution from Infoblox provides a compelling deployment option for Lucent VitalQIP software and reflects our continued commitment to providing our customers high quality network management solutions with low operational costs."
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To address them, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb computational loads at peak times that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls 'Climate in a Box,' a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to cloud computing, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types featuring both CPU and GPU cores.
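Why latency dominates tightly coupled workloads like CFD can be illustrated with a back-of-the-envelope model. The numbers below are hypothetical, not the Bonn group's measurements: if each solver iteration must complete a fixed number of synchronizing data exchanges, per-message latency adds directly to the iteration time.

```python
def iterations_per_second(compute_ms, exchanges, latency_ms):
    """Toy model: each iteration performs `exchanges` synchronous
    data exchanges, each paying one network round-trip of `latency_ms`."""
    iter_ms = compute_ms + exchanges * latency_ms
    return 1000.0 / iter_ms

# Cluster-interconnect latency (~0.05 ms) vs. wide-area cloud latency (~20 ms),
# with 10 ms of computation and 4 exchanges per iteration (illustrative values)
lan = iterations_per_second(compute_ms=10.0, exchanges=4, latency_ms=0.05)
wan = iterations_per_second(compute_ms=10.0, exchanges=4, latency_ms=20.0)
slowdown = lan / wan  # roughly 9x under these assumptions
```

Even though the compute time per iteration is unchanged, the wide-area case spends most of each iteration waiting on the network, which is why co-locating tightly coupled solvers with their data matters so much in the cloud.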
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.