March 31, 2008
In this executive Q&A, Tideway Systems Founder and CEO Richard Muirhead discusses how complexity is spiraling out of control in today's datacenters and explains how his company's solutions help map datacenter interdependencies, automate processes and generally reduce the costs of datacenter management.
GRIDtoday: Tell me a little about Tideway Systems. What is the company's value proposition to customers, and what kinds of products/solutions does it offer?
RICHARD MUIRHEAD: Tideway's solutions enable business application change control, standardization and compliance in today's datacenters. We help our customers achieve cost savings faster, drive business agility and manage operational risk -- delivering true IT and business service optimization.
Software applications for both work and leisure are more critical than ever today. The hearts of these applications beat in datacenters that require more highly qualified technologists than ever before to manage them properly. The problem? Paying these technologists can now amount to more than half of a datacenter's running costs, hiring them is a brake on the business, and they are far from infallible -- as proven by the fact that most IT outages are caused by errant change (read: human error). Tideway's core invention is an automation technology that makes the change and configuration management of these applications efficient and effective.
Our flagship product, Tideway Foundation, gets control of business applications and their entire underlying IT infrastructure, including virtual components, connecting all technology layers -- from business applications to switches and all the dependencies in between. This end-to-end view gives companies the visibility they need to control change, standardize and stay compliant.
Tideway Foundation is built as a user-centric platform that pushes relevant information and knowledge to end-users according to their profile, providing them direct access to the tools and interfaces they need to act on that intelligence in a timely fashion. In this way, Tideway's user experience more closely resembles a consumer utility, like Google or Wikipedia, than a traditional enterprise IT Service Management (ITSM) tool.
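To picture what such a map looks like in practice, consider a minimal sketch of a layered dependency graph in Python. The node names and layers below are purely illustrative assumptions, not Tideway's actual data model:

```python
# Minimal sketch of a layered dependency map, from business
# applications down to network switches. Node names and layers are
# illustrative only -- not Tideway's actual data model.

# Each edge reads "X depends on Y".
DEPENDENCIES = {
    "payments-app": ["app-server-1", "oracle-db"],
    "app-server-1": ["vm-101"],
    "oracle-db":    ["vm-102"],
    "vm-101":       ["esx-host-a"],
    "vm-102":       ["esx-host-a"],
    "esx-host-a":   ["switch-7"],
    "switch-7":     [],
}

def dependency_closure(node, graph=DEPENDENCIES):
    """Return every component a node transitively depends on."""
    seen, stack = set(), [node]
    while stack:
        current = stack.pop()
        for dep in graph.get(current, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(dependency_closure("payments-app"))
# -> {'app-server-1', 'oracle-db', 'vm-101', 'vm-102',
#     'esx-host-a', 'switch-7'}  (set order may vary)
```

Foundation's value comes from building and refreshing this kind of map automatically at datacenter scale; the sketch only shows the shape of the data.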
Gt: What is going on in today's datacenters that requires these kinds of solutions? What IT practices are creating real problems in terms of complexity?
MUIRHEAD: A business' competitive advantage today is more dependent than ever on IT. IT has responded with a range of technologies and initiatives that can add multiple levels of complexity and, when improperly managed, severely exacerbate the problems they were intended to correct. Virtual machines, SOA, high availability architectures, EAI, virtualized storage, BPM, outsourcing initiatives -- there are so many from which to choose. Couple this with the fact that IT organizations need to simultaneously address the requirements of security, internal and regulatory audit, procurement, and cost transparency as part of their everyday operations, and it's easy to understand the real problems that can be created in terms of complexity.
As obvious a goal as it might seem, attempting to run IT operations from the perspective of a business service can further complicate things. Why? Because it requires the involvement of experts across multiple technology and vendor silos in a culture that's systemically averse to sharing tribal knowledge and often poorly trained and supported when it comes to effective collaboration.
One of a number of dirty little secrets in most large enterprises today is that datacenter servers run at an average utilization of 10-15 percent -- and around 5 percent of servers perform no useful function. Optimizing datacenter resources can deliver significant savings in hardware, maintenance, licensing, rack space, cooling and power costs. Believe it or not, most organizations can't get an accurate server count, let alone understand the software running on those servers or its relationship to business applications. On average, it can take around 60 man-days to conduct a single-pass audit of one thousand servers -- and that's with an inadequate level of accuracy. This makes it too expensive to find opportunities for cost savings (and risky to implement them). Once identified, the opportunities for rapid cost savings fall into a few categories -- chief among them decommissioning idle servers and reclaiming the hardware, licensing, rack space, cooling and power spend tied up in them -- as the rough sketch after this paragraph illustrates.
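To make those figures concrete, here is a back-of-the-envelope sketch using only the numbers quoted above; the annual per-server cost is an assumed placeholder, not a figure from the interview:

```python
# Back-of-the-envelope estimate using the figures quoted above.
# ANNUAL_COST_PER_SERVER is a placeholder assumption, not a quoted figure.

SERVER_COUNT = 1000
AUDIT_MAN_DAYS = 60            # single-pass manual audit, per the interview
USELESS_FRACTION = 0.05        # ~5% of servers perform no useful function
ANNUAL_COST_PER_SERVER = 4000  # assumed: hardware, power, cooling, licenses ($)

idle_servers = int(SERVER_COUNT * USELESS_FRACTION)
annual_savings = idle_servers * ANNUAL_COST_PER_SERVER

print(f"Manual audit effort: {AUDIT_MAN_DAYS} man-days per pass")
print(f"Decommissioning {idle_servers} idle servers saves ~${annual_savings:,}/year")
# -> Decommissioning 50 idle servers saves ~$200,000/year
```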
In addition to these cost savings, having effective change controls over business applications does not slow the business's ability to respond to market demands -- it actually speeds it. Just think of a car with no dashboard and no brakes versus one with the standard equipment: Which one would you feel comfortable driving faster? It's not a hard choice.
Gt: How have attempts to add flexibility to datacenter operations (e.g., virtualization, SOA, etc.) added to the fray?
MUIRHEAD: Virtualization is an interesting case in the sense that each of its benefits comes with an associated risk. We can map these to three of the most top-of-mind concerns for IT departments today: cost, agility and operational risk.
So what we have is a catch-22 in a number of areas. The key point to understand is that none of these benefits will materialize as promised without a new way of managing IT that can cope with the new challenges introduced by virtualization.
Gt: How do Tideway's solutions help to bring order back to infrastructures?
MUIRHEAD: Tideway Foundation takes the complexity out of managing virtualized server environments. Foundation's ability to map business applications to IT infrastructure extends to virtual servers. By mapping applications to virtual servers from many vendors (including VMware, Citrix (Xen), Microsoft, IBM and Sun) and then virtual servers to their physical hosts, end-to-end application dependencies remain clear. Dependency visualizations support accurate change impact analysis, reducing system outages and increasing service availability. License usage is also clear, preventing waste caused by licensing software and operating systems that aren't really needed.
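Change impact analysis over such a map amounts to walking the dependency edges in reverse: starting from the component being changed and climbing up to the business applications that sit on it. Here is a minimal sketch, again with illustrative names rather than Tideway's actual implementation:

```python
# Sketch of change impact analysis over an app -> VM -> host mapping.
# Names are illustrative; this is not Tideway's implementation.

# "X runs on Y": applications on virtual machines, VMs on physical hosts.
RUNS_ON = {
    "trading-app": ["vm-201", "vm-202"],
    "reports-app": ["vm-203"],
    "vm-201":      ["host-a"],
    "vm-202":      ["host-b"],
    "vm-203":      ["host-a"],
}

def impacted_by(component):
    """Everything that (transitively) runs on the given component."""
    # Invert the edges so we can walk upward from infrastructure to apps.
    runs_under = {}
    for upper, lowers in RUNS_ON.items():
        for lower in lowers:
            runs_under.setdefault(lower, []).append(upper)
    impacted, stack = set(), [component]
    while stack:
        for upper in runs_under.get(stack.pop(), []):
            if upper not in impacted:
                impacted.add(upper)
                stack.append(upper)
    return impacted

print(impacted_by("host-a"))
# -> {'vm-201', 'vm-203', 'trading-app', 'reports-app'}  (order may vary)
```

The same query answers both the operational question ("what breaks if host-a goes down for maintenance?") and the licensing one ("which applications actually use this host?").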
Gt: What mistakes are companies making when making changes to their infrastructure or datacenter operations? What could they do up front to avoid these issues? Can Tideway help with this transition process?
MUIRHEAD: When it comes to making changes in the datacenter or to infrastructure, inadequate change management caused by poor data quality is a major problem we see holding back businesses -- and placing great strain on the datacenter. While poor data quality is risky and expensive in its own right, existing bad data has a way of turning into very bad data fairly quickly. Why? Because an organization making changes based on bad data operates differently than one making changes based on accurate and reliable data. Users begin creating workaround solutions to push changes through, but these out-of-bounds, “invisible” changes cause the environment to drift further from its recorded state. This makes systems of record even less accurate and less relevant than they were in the first place.
Data quality will always be somewhat of a moving target, and the reliability of systems of record will plummet when approved changes are recorded inaccurately or incompletely and unapproved changes are not recorded at all. Having a system of record and implementing systematic or ad hoc audits isn't enough -- it doesn't help much to improve the quality of data only at a given point in time. If data is actionable sometimes but falls below a level of accuracy that users consider acceptable the rest of the time, the value of the system of record is negligible. After all, change requests are still coming in no matter how long ago the data was audited. Businesses find themselves spending an enormous number of hours and dollars on a system that's only useful for short periods of time. But users are very sensitive to data quality and will give a system of record a very short time to prove its value. After the first few major incidents, people will lose faith in the data and rely on the resident IT guru's knowledge instead of the system of record.
Truly effective change management cannot take place without acceptable (our customers say 97 percent accurate) data quality. Ensuring that the data available consistently hits this level requires an automated, accurate and comprehensive model of dependencies within the IT environment, which Tideway provides. This is the foundation of effective change management and data quality. Maintaining data quality needs to be a continuous process. Systems of record need to earn users' trust to be used effectively -- so building transparency into service management software and processes is essential.
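One way to picture that continuous process is as an automated reconciliation loop: discover the live environment, compare it against the system of record, and measure the match rate against the acceptability threshold. The sketch below is a hypothetical illustration; only the 97 percent figure comes from the interview:

```python
# Sketch of continuously measuring system-of-record accuracy by
# reconciling recorded configuration items against discovered reality.
# The 97% threshold comes from the interview; the records are made up.

ACCURACY_THRESHOLD = 0.97

recorded   = {"srv-1": "RHEL 5", "srv-2": "Win2003", "srv-3": "RHEL 4"}
discovered = {"srv-1": "RHEL 5", "srv-2": "Win2003", "srv-3": "RHEL 5",
              "srv-4": "Solaris 10"}  # srv-4: unapproved, unrecorded change

all_items = recorded.keys() | discovered.keys()
matches = sum(1 for item in all_items
              if recorded.get(item) == discovered.get(item))
accuracy = matches / len(all_items)

print(f"Accuracy: {accuracy:.0%}")
if accuracy < ACCURACY_THRESHOLD:
    drift = sorted(i for i in all_items
                   if recorded.get(i) != discovered.get(i))
    print("Drifted or unrecorded items:", drift)
# -> Accuracy: 50%
#    Drifted or unrecorded items: ['srv-3', 'srv-4']
```

Run on a schedule rather than as a one-off audit, a loop like this is what keeps the system of record inside the accuracy band users will actually trust.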
Gt: Do you see any on-the-cusp trends or technologies that could potentially cause problems with unnecessary complexity or inefficiency? What can companies do now (or before adopting these new technologies) to avoid such issues?
MUIRHEAD: Virtualization is only just going into production in most environments, so we are only beginning to scratch the surface of the management challenges. There are five basic steps companies need to take before adopting virtualization technologies, and they all rest on the same prerequisite: an accurate, trusted map of what is already running and how it depends on everything else.
Gt: Finally, what kinds of quantifiable results have Tideway customers seen? In what industries do most of your customers play, or are your solutions popular across the spectrum of vertical industries?
MUIRHEAD: By marrying the application infrastructure to the business applications -- and thereby making their ownership undeniable -- Tideway quickly pays for itself. We can discover stranded hardware assets, support decommissioning and enable more effective software license negotiations. Because Foundation is fast to implement, customers see a return on investment within a short timeframe, unlike most IT deployments. One investment bank saw a 900 percent ROI within 12 months by using Foundation to automate manual inventory processes. This allowed the bank to reduce manual costs by an average of 80 percent and to identify old, inefficient and unused servers and software, eliminating them to cut maintenance and licensing costs -- removing 3 percent of its servers in the process.
Tideway plays across all verticals and has particularly strong customer success stories in the financial, insurance, telecommunications, pharmaceutical and public sectors.
Gt: Is there anything else you'd like to add?
MUIRHEAD: IT management is crying out for a fresh approach to automation and needs to embrace inventions that can help solve its key business challenges. Companies now need to look to innovative solutions that address issues such as change, configuration and collaboration while offering unprecedented levels of transparency and trust in the ongoing relationships they have with their customers (i.e., the business and also their external suppliers). That's what Tideway is all about.