September 10, 2007
The High Performance on Wall Street conference takes place next Monday (Sept. 17), and -- as the title indicates -- it will be taking a look at the technologies that are allowing financial services firms to meet the incredible performance demands that have been placed at their feet.
Among the topics that will be covered this year are low latency, hardware acceleration and on-demand applications, and these will be covered from both a high-level perspective and a more granular, enabling-technology focus. Speaking and presenting on these issues and several others will be representatives from leading vendors such as DataSynapse, IBM, GigaSpaces, Appistry, AMD and Intel, as well as user organizations including Credit Suisse First Boston, UBS, Barclays Capital and Citigroup.
To get a little insight into the IT trends that helped to shape this year's conference, as well as to get a little more insight into the financial services market in general, GRIDtoday spoke with conference chair Peter Harris, who was responsible for setting the agenda. Harris is president for the Americas and editor-at-large for A-Team Group, a research, publishing and consulting company specializing in the field of information technology in financial markets.
GRIDtoday: What role did you have in planning the event?
PETER HARRIS: I really put together the conference program, so the content of the day is my responsibility. I work with Flagg Management, who is the overall event manager. I work for a company called A-Team Group, and we are a publishing and research company, and we focus on IT use by the financial markets.
Gt: There seems to be a big focus this year on grid computing and similar technologies, such as application platforms providing high availability and low latency. How important has grid computing traditionally been in financial services?
HARRIS: Grid computing over the last five years has really become very much in vogue on Wall Street; Wall Street’s a big user of compute power. A lot of the applications that are run -- analytics applications, portfolio management, risk management -- all require an awful lot of compute power, and the faster you can perform calculations, the more opportunity you have in the market. So, firms have really looked to embrace both high-performance clusters and also grids to a large extent.
Gt: How do companies like Appistry or GigaSpaces, etc., fit into the “grid” market? Are financial firms considering them grid computing, or do they look at these companies in a different light?
HARRIS: GigaSpaces is really a class of grid computing that’s focused on data rather than compute power. GigaSpaces is presenting alongside Microsoft at the event, and I think Microsoft is coming to the party with its high-performance server … which is really about compiling the compute power needed to run applications. What Wall Street has found is that having the compute power is not enough; you actually need to have the data available in a timely manner for the compute nodes to process.
So, there’s this class of applications that is sometimes called “data grids” or “data fabrics,” and they're really about making sure the data is available for the compute nodes when they need the data. GigaSpaces is in that data grid space, and they actually have a joint offering with Microsoft in that area.
Appistry is really more of a virtualization management play. It’s really more about making sure that the applications running within a datacenter -- whether they’re running on a grid or a cluster -- are actually managed correctly, that they’re provisioned to servers at the time when they’re needed with the right software stack, etc.
That’s a little bit of a different play. … DataSynapse is in that same space, in that application virtualization space. That’s another part of high-performance computing: You not only need an awful lot of compute resources, but you also need to use those resources effectively. The resources cost money to buy, but they also cost a lot of money to run.
Gt: What was your agenda in putting together this conference program? What areas and technologies were you looking to focus on, and how did you do in meeting those goals?
HARRIS: It’s certainly been a hot year for this subject, and so I’m very pleased with what we’ve been able to do. I think the area that’s become very hot this year is the area of “low latency,” and quite a lot of the conference is about that. It’s about different aspects of that, whether it be multi-core processing helping to achieve [low latency], or software engineering in terms of software stacks helping to achieve [low latency], or high-performance networking like InfiniBand helping to achieve [low latency].
They all play into the mix of a bundle of technologies that help meet this goal called low latency. Low latency really is about having systems that can cope with extremely high market data rates, which in the next year -- in certain markets, such as the options market -- are going to go over the million-messages-per-second level. You want to be able to capture that data and process it and perform analytics, and the faster you can do that, the more chance you have to react to changing market conditions. You’re literally talking about tens of microseconds in terms of processing time, which is kind of the state of the art -- it’s extremely, extremely real time.
Gt: Are there any sessions or panels that you’re looking forward to seeing, or that you think will be particularly interesting?
HARRIS: There are a couple of them, actually. There is one on hardware acceleration (“Focus on: Hardware Acceleration, FPGA Technologies, Low Latency Ticker Plants Implementing Hardware Accelerated Applications For Market Data and Financial Computations”), which is [an] emerging [area]. We actually ran the hardware acceleration panel last year, and it was a pretty new area then, and we were surprised to see it was totally packed, so we moved it to a big room this year and got lots of participants. We’ve got Intel and AMD on the same panel, which is going to be interesting, and we’ve got someone from UBS [Investment Bank], so I think that one’s going to be a blast.
Also, IBM is actually giving a wide-ranging presentation about what it’s doing. One of the things it is doing is something called “System S,” which is a very scalable supercomputer that they’re looking at for stream-processing applications, processing many messages per second. It’s still in IBM’s research labs; it’s not something you can buy right now. That’s kind of interesting.
Gt: Any other thoughts on the conference, or about high-performance IT on Wall Street, in general?
HARRIS: I’m very pleased. We’ve got a great bunch of pretty well-known IT heavyweights in, we’ve got a bundle of smaller, innovative companies in there, and we’ve got some good Wall Street users in there. It’s always really difficult to get Wall Street people to talk -- about anything -- so I think we’ve done pretty well. I think that’s a reflection that this is a pretty hot area.
Gt: On that note, how has planning for this year’s event compared to years past? Is it easier to get participants, speakers, etc., as this area becomes more mainstream?
HARRIS: It becomes easier in the way that everyone wants to talk and there’s a lot happening and it’s very exciting. It’s more difficult in that there are a lot more subjects we could cover, and we only have a day; fitting everyone in can be an issue. That was a real challenge, getting enough time for people to present. I think we’ve had a pretty good time of it, but there are probably about a dozen people who would love to speak but aren’t going to be able to.