November 08, 2012
CHARLOTTE, N.C., Nov. 8 – National information technology (IT) infrastructure and cloud solutions provider Peak 10 Inc. today announced the opening of its newest data center. The 14,000-square-foot facility completes the first phase of Peak 10’s plans for a 64,000-square-foot technology campus in Charlotte, N.C. It is located in the David Taylor Corporate Center in the University Research Park area near the University of North Carolina at Charlotte. This is the company’s fourth data center in Charlotte; it also operates 22 data centers in nine other cities throughout the United States.
“We are excited to have our fourth Charlotte data center open for business,” said Pat O’Brien, vice president and general manager of Peak 10’s Charlotte market. “This facility enables businesses locating and growing in Charlotte to house and manage their important IT assets with the peace of mind that we’ll be there to keep their business up and running.”
The Charlotte technology campus brings Peak 10’s entire Charlotte footprint to more than 129,000 square feet. Subsequent expansion phases will be constructed as the customer base grows. Upon build out, Peak 10 will have invested approximately $50 million in infrastructure within the Charlotte market.
“We are proud to be part of the Charlotte region’s thriving business community. This new technology facility is a great example of our long-standing commitment to invest in high-growth markets. We continue to enhance our data center facilities and managed cloud services in support of current and new customers who require 24/7 support of their technology assets and services,” said David Jones, president and CEO of Peak 10. “The opening of this facility and the future expansions of the campus position us to continue serving as a business engine for the markets we serve throughout the United States and the high-growth companies that call the Charlotte area home.”
Like all Peak 10 facilities, the newest addition is engineered with enterprise-class infrastructure to meet its customers’ requirements for security and compliance. The facility operates with five-point physical security, uninterruptible power, HVAC systems, fire suppression and around-the-clock monitoring and management. It is SSAE 16 audited, PCI compliant and interconnected with Peak 10’s private network, which provides customers the ability to leverage its data centers in all 10 markets when implementing disaster recovery solutions.
About Peak 10 Inc.
Peak 10 provides reliable, tailored cloud computing, data center and other information technology (IT) infrastructure solutions, primarily for mid-market businesses. Customer-centric, responsive and cost-effective, Peak 10 solutions are designed to scale and adapt to customers’ changing business needs, enabling them to increase agility, lower costs, improve performance and focus internal resources on their core competencies. Peak 10 holds the Cisco Cloud Provider Certification with a Cisco Powered Cloud Infrastructure-as-a-Service (IaaS) designation. Peak 10 is SSAE 16 audited and helps companies meet the requirements of various regulatory compliance acts such as Sarbanes-Oxley (SOX), HIPAA/HITECH, PCI DSS and Gramm-Leach-Bliley (GLBA).
Source: Peak 10
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle computational workloads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company Animal Logic is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.