March 19, 2012
PALO ALTO, Calif., March 19 — Big Switch Networks, which is bringing the benefits of virtualization and cloud architecture to enterprise networks, today unveiled the three pillars of its Open SDN architecture as it celebrates its two-year anniversary.
"SDN has the promise to change the networking landscape, helping drive innovation and increasing business agility and operational efficiency of the network," said Joe Skorupa, VP Distinguished Analyst at Gartner. "However to reach its full potential, it must integrate with the existing ecosystem and standards."
"OpenFlow and SDN have been gaining significant momentum and, as customers adopt these technologies, it is critical they look for open standards, open APIs and open source to ensure the SDN solution they implement is future-proof," said Guido Appenzeller, Co-Founder and CEO of Big Switch Networks. "Big Switch Networks has been at the forefront of the OpenFlow standard creation, offered enterprise-grade open source SDN solutions to the community and continuously expanded its ecosystem of partners. Openness will be core to SDN success and, by relentlessly delivering it in our Open SDN architecture, we plan to stay ahead of the market."
Big Switch Networks' Open SDN architecture is based on these three pillars:
1. Open Standards: Support for networking industry standards, including newer ones like OpenFlow, ensures better integration and interoperability, today and in the future, between the different players within the SDN ecosystem. Open standards allow SDN solutions to be deployed smoothly into existing physical and virtual networks, and into multi-vendor environments, without increasing complexity.
2. Open APIs: Open APIs foster the creation of a vibrant ecosystem of infrastructure, network services and orchestration applications (see the sketch after this list). Solutions delivered by this ecosystem can transform networks into a competitive business advantage by enabling dynamic coordination between networking, compute and storage resources and by aligning the network more tightly with business priorities.
3. Open Source: Successes such as Hadoop, MySQL and Linux demonstrate the importance of open source in every major software revolution of the past decades. As networking becomes more software-oriented, open source provides complete transparency into the quality of the code, lets customers benefit from contributions made by the active open source SDN community and, more importantly, prevents vendor lock-in in the new networking landscape.
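To make the first two pillars concrete, here is a minimal sketch (not part of the original announcement) of how an application might program an OpenFlow forwarding rule through an SDN controller's open REST API. It assumes a Floodlight controller reachable at localhost:8080 and uses the Static Flow Pusher endpoint as documented in Floodlight releases of that era; the endpoint, field names and switch DPID are assumptions and vary across controllers and versions.

```python
# Hedged sketch: push an OpenFlow forwarding rule to a switch through an
# SDN controller's open REST API. Assumes a Floodlight controller at
# localhost:8080 exposing the Static Flow Pusher; names and fields are
# era-specific assumptions, not a definitive interface.
import json
import urllib.request

CONTROLLER = "http://localhost:8080"

# A simple forwarding rule: traffic arriving on port 1 of the switch
# is sent out port 2. The DPID below is a placeholder.
flow = {
    "switch": "00:00:00:00:00:00:00:01",  # datapath ID of the target switch
    "name": "demo-flow-1",                # controller-side name for the entry
    "priority": "32768",
    "ingress-port": "1",
    "active": "true",
    "actions": "output=2",
}

req = urllib.request.Request(
    CONTROLLER + "/wm/staticflowentrypusher/json",
    data=json.dumps(flow).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```

Because the rule is expressed as plain JSON over HTTP, any orchestration application, regardless of vendor, can drive the network the same way; that is the interoperability argument behind the first two pillars.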
Big Switch Networks supports open standards like OpenFlow and integrates with open APIs as an OpenStack member. Earlier this year, the company made the core component of its commercial solution available to the open source community with the release of Floodlight, its open source OpenFlow controller, which has since been downloaded more than a thousand times.
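As a hedged illustration of what a first interaction with Floodlight looks like, the snippet below (controller address assumed; not from the press release) queries the controller's documented REST API for the switches it currently manages:

```python
# Hedged sketch: list the OpenFlow switches connected to a Floodlight
# controller via its REST API. The address and the "dpid" field name
# are assumptions matching Floodlight releases of this era.
import json
import urllib.request

url = "http://localhost:8080/wm/core/controller/switches/json"
with urllib.request.urlopen(url) as resp:
    switches = json.load(resp)  # JSON list, one object per connected switch

for sw in switches:
    # Each entry includes the switch's datapath ID among other fields.
    print(sw.get("dpid"))
```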
"For the last decade, most major changes in the software industry have been associated with open source: Hadoop with Big Data, Linux with Operating Systems and Netscape for the Web," said Matt Davy, Chief Network Architect and Executive Director of InCNTRE, Indiana University. "As networking is moving towards more programmability thanks to SDN, we should also expect strong open source communities to develop around solutions like the one Big Switch Networks introduced with Floodlight."
For more on Big Switch Networks' Open SDN architecture, view the SDN Coffee Talk with Guido Appenzeller as a guest speaker at: http://www.bigswitch.com/sdn-coffee-talks/.
Big Switch Networks Celebrates Two Years of Innovation
Big Switch Networks was founded in March 2010 to bring the benefits of virtualization and cloud architecture to enterprise networks. Key milestones include:
About Big Switch Networks
Big Switch Networks was founded in 2010 to deliver Open SDN solutions to cloud networks. Big Switch Networks raised $13.75 million in Series A funding led by Index Ventures and Khosla Ventures in 2011 and is headquartered in Palo Alto, California. For more information, visit http://www.bigswitch.com.
Source: Big Switch Networks