November 13, 2006
Asynchrony Solutions Inc., a consultancy focused on systems integration, application development and collaboration, has announced that Dennis Nadler has joined the company in the newly created position of vice president of Military Technical Solutions.
Dennis will support the rapid growth of Asynchrony's military solutions practice, which is focused on Net-Centric Enterprise Services (NCES) and other key Department of Defense (DoD) enterprise architecture and systems integration initiatives. Dennis helped establish Asynchrony's government solutions practice in 2000 before leaving in 2001 for a career opportunity with Northrop Grumman, the nation's second-largest prime government contractor. His return to Asynchrony Solutions underscores the critical contributions that smaller organizations are making to significant DoD initiatives. He is considered a leading national expert in military service-oriented architecture (SOA).
"Although large contractors play a necessary and vital role, mission-critical architecture and engineering can best be delivered by agile organizations like Asynchrony," said Nadler. "Military customers and large contractors turn to us to deliver solutions for high-end technical challenges in the same way that Special Forces are called upon to handle high-risk military challenges."
Dennis' 20-year career provides him with a rare combination of proven technical expertise and domain knowledge across a wide spectrum of Department of Defense commands and initiatives.
As the USTRANSCOM Enterprise Architect and C2 Division Manager at Scott Air Force Base, Dennis was previously responsible for the analysis, architectural design and oversight of more than 120 Defense Transportation Systems, as well as the day-to-day operations of DISA's USTRANSCOM support for the DoD's Global Command and Control System (GCCS).
As the DoD Global Combat Support System (GCSS) chief engineer and program manager, he was responsible for complete execution of the system: budget control; human resources; requirements management; system design, coding, and implementation; and joint staff support for the Combatant Commander-level GCSS. In addition, he was responsible for setting architectural guidelines and Common Operating Environment (COE) directives for DoD combat support systems. To accomplish this mission, Mr. Nadler managed an office of 65 government employees and more than 300 contractors.
As the Commander/Executive Engineering Manager for one of DISA's Software Engineering Centers, Dennis also managed the Center's budget and staff. He led the organization's engineering staff in its mission to insert and integrate emerging technologies into Intelligence and Command and Control (C2) capabilities, increasing their interoperability and effectiveness in our nation's warfighting missions.
"Many companies hire ex-military people for their Rolodex," said Bob Elfanbaum, Asynchrony CEO. "We didn't hire Dennis for who he knows, we hired him for what he knows. In today's rapidly changing and increasingly complex environment, only the most experienced and technically proficient leaders like Dennis can be counted upon to deliver the proper solutions."
Dennis is a nationally recognized expert in DoD architecture and integration. He has been a featured speaker at industry events and is often consulted by journalists covering the industry for leading publications.
Mr. Nadler is a native of Illinois. He holds a Bachelor's degree in Electrical Engineering from Southern Illinois University and a Master's degree in Computer and Resource Management from Webster University in Missouri.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To meet them, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational workloads at peak times that cannot be handled by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to cloud computing, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD models in the cloud by using a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, and from drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges, and opportunities, afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.