August 09, 2012
TOKYO, Aug. 9 — NTT Communications Corporation (NTT Com), a global cloud services provider and wholly-owned subsidiary of NTT Group, announced on August 9 that on August 20 it will start operation of the Asia Submarine-cable Express (ASE), a 40-gigabit-per-second (Gbps), ultra-low-latency undersea cable connecting major cities in Asia. ASE will enhance NTT Com's highly reliable global network services by boosting the capacity and strengthening the redundancy of its Asian cable networks.
ASE, which eventually will incorporate 100 Gbps optical technology, will launch with a total carrying capacity exceeding 15 terabits per second (Tbps), a total length of about 7,800 km, and special designs to withstand earthquake and typhoon damage. NTT Com is the major investor in the cable system, which has been constructed in cooperation with Malaysia-based Telekom Malaysia, Philippines-based PLDT and Singapore-based StarHub. The cable has landing points in Japan, the Philippines, Singapore and Malaysia, and will add Hong Kong in the first quarter of 2013. The route between Japan and Singapore not only covers the shortest distance, to maximize reliability and minimize latency, but also connects directly to the Serangoon Data Center in Singapore and, later, to the Hong Kong Financial Data Centre. The direct connection enables customers there to use NTT Com's network, data centers and cloud services on an end-to-end, one-stop basis.
On August 20, 2012, with the launch of the new subsea cable, NTT Com will begin offering an enhanced global leased line service by incorporating ASE's low-latency routes into its existing Arcstar Global Leased Line Service. The newly enhanced service leverages ASE's Japan-Singapore connection with an industry-leading latency of less than 65 milliseconds, more than 3 milliseconds faster than routes via other subsea cables. Existing US-Japan routes also will be used, including NTT Com's own PC-1 cable, which offers the lowest-latency connection between Tokyo and Chicago, home of the Chicago Mercantile Exchange. Superior service between Asia and the U.S. is especially attractive for financial enterprises, such as high-frequency trading firms that issue huge numbers of buy/sell orders for financial products and must transmit such information instantaneously.
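Why a shorter physical route translates directly into lower latency comes down to propagation delay in fiber. The sketch below is a back-of-the-envelope estimate, not from the press release: it assumes a typical silica-fiber refractive index of about 1.47 (roughly 4.9 microseconds per km), and the 5,800 km route length is a purely hypothetical figure for illustration — the release does not state the Japan-Singapore segment length.

```python
# Back-of-the-envelope fiber propagation delay.
# Assumptions (illustrative, not from the announcement):
#   - light in silica fiber travels at roughly c / 1.47
#   - the route length passed in is hypothetical

SPEED_OF_LIGHT_KM_S = 299_792        # vacuum speed of light, km/s
FIBER_REFRACTIVE_INDEX = 1.47        # typical silica fiber (assumption)

def round_trip_ms(route_km: float) -> float:
    """Theoretical minimum round-trip time over a fiber route, in ms.

    Ignores equipment, regeneration and switching delays, so real
    latencies are always somewhat higher than this floor.
    """
    speed_in_fiber = SPEED_OF_LIGHT_KM_S / FIBER_REFRACTIVE_INDEX
    one_way_seconds = route_km / speed_in_fiber
    return 2 * one_way_seconds * 1000.0

# A hypothetical ~5,800 km route gives a physical floor just under 57 ms,
# which shows why a sub-65 ms service requires the shortest available path.
print(f"{round_trip_ms(5800):.1f} ms")
```

The gap between this physical floor and the quoted service latency is consumed by amplification, regeneration and terminal equipment, which is why even a few hundred kilometers of extra cable matter to latency-sensitive customers.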
This month, NTT Com will incorporate ASE into the backbone of its Tier-1 global IP network, which directly connects the world's major internet service providers (ISPs) and content providers, and, in the near future, will do the same for Arcstar Universal One, its scalable, cloud-based network service.
About NTT Communications Corporation
NTT Communications provides consultancy, architecture, security and cloud services to optimize the information and communications technology (ICT) environments of enterprises. These offerings are backed by the company's worldwide infrastructure, including a leading global tier-1 IP network, the Arcstar Universal One(TM) VPN network reaching over 150 countries, and over 130 secure data centers. NTT Communications' solutions leverage the global resources of NTT Group companies including Dimension Data, NTT DOCOMO and NTT DATA.
Source: NTT Communications Corporation
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle computational demand at peak times that cannot be met with their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.