September 3, 2002 at 5:18 am
I have been asked to provide a means of supplying near real time results to financial traders. Currently my trading system uses SQL2000, but I've been told this is not fast enough!! Can anyone advise how I can go about addressing this request?
TIA,
Simon
September 3, 2002 at 7:44 am
Can you give me a broad overview of how the data flows: where you get it, how you store it, and how it is forwarded to the customer? SQL 2000 under a light load may not be an issue at all; this also depends on several hardware factors and data size.
Now, if you are trying to track the history of a stock in addition to providing current quotes, then it may make sense to develop a server app that gets the data from your source and forwards it to currently connected subscribers, writing the data to SQL as needed based on a time criterion or a change in status. But for real time there is truly no such animal, only near time; otherwise networks would be flooded with a constant flow of data.
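That push-to-subscribers idea can be sketched very simply. The sketch below is illustrative only (the class and the `db_writer` callback are hypothetical names, not a real API): quotes are pushed to every connected subscriber immediately, but the database only sees a write when the price actually changes or enough time has elapsed.

```python
import time

class PricePublisher:
    """Minimal in-memory pub/sub sketch: forwards quotes to subscribers,
    persisting to the database only on a change in value or elapsed time."""

    def __init__(self, db_writer, min_write_interval=1.0):
        self.subscribers = []          # callables receiving (symbol, price)
        self.last_written = {}         # symbol -> (price, timestamp)
        self.db_writer = db_writer     # e.g. a function that INSERTs into SQL
        self.min_write_interval = min_write_interval

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, symbol, price, now=None):
        now = time.time() if now is None else now
        # Push to every connected subscriber immediately.
        for cb in self.subscribers:
            cb(symbol, price)
        # Persist only on change of value or elapsed time.
        prev = self.last_written.get(symbol)
        if prev is None or prev[0] != price or now - prev[1] >= self.min_write_interval:
            self.db_writer(symbol, price)
            self.last_written[symbol] = (symbol and price, now)[0:2] if False else (price, now)

# Usage: collect the pushed quotes and the (fewer) database writes.
pushed, written = [], []
pub = PricePublisher(db_writer=lambda s, p: written.append((s, p)))
pub.subscribe(lambda s, p: pushed.append((s, p)))
pub.publish("IBM", 75.10, now=0.0)   # first quote: written to DB
pub.publish("IBM", 75.10, now=0.5)   # duplicate within interval: no DB write
pub.publish("IBM", 75.15, now=0.6)   # price changed: written
```

Three quotes reach the subscribers, but only two touch the database, which is the whole point of the time/change filter.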
"Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
September 3, 2002 at 9:20 am
OK, the current system is 2-tier, with internal users connected via a LAN and web users via a web server farm. What I believe is being suggested is an intermediate layer (hence my other question about middleware). I suggest that this layer would hold a list of open trades per client, with a pushed feed of prices coming in, and revalue them in virtually real time. End users would connect to this layer and be pushed the details for a client, an instrument, or all of them, depending on their intentions. I have no experience of this: what would this layer be written in, and how would it push data to a client while continuing to accept inputs and update the outputs?
Hardware isn't really an issue, I can spend what I need to.
Regards
Simon
September 3, 2002 at 9:48 am
The speed of SS2K is plenty; it just depends on how things are implemented. There are lots of tricks you can do: use triggers to keep a summary of the data current in another table, distribute the db across multiple servers, etc.
A middle layer can also help, but keep in mind that more layers mean a more complex system, plus more points of failure. Will you try to keep sticky connections to a middle layer? If so, this can be an issue. Can you keep data correct in real time on the client? That eliminates lots of traffic and complexity.
I've worked on a trading system and seen a couple of implementations. One of the better ones kept the data in middle servers, but it was a very complex development effort.
If you keep prices in a table (small table) for the current period (hour, day, whatever) and also keep them in history, you can more quickly get them queried from the client. I'd revalue on the client if possible. Let them hold the positions.
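Revaluing on the client, as suggested above, can be as simple as marking each open position to the latest pushed price. A minimal sketch (the position layout and function name here are assumptions for illustration):

```python
def revalue(positions, prices):
    """Client-side revaluation sketch: positions is a list of
    (symbol, quantity, cost_price) tuples; prices maps symbol -> last price.
    Returns mark-to-market P&L keyed by (symbol, quantity)."""
    return {
        (symbol, qty): (prices[symbol] - cost) * qty
        for symbol, qty, cost in positions
    }

# Usage: one long position and one short, revalued against pushed prices.
positions = [("GE", 100, 30.00), ("MSFT", -50, 48.00)]
prices = {"GE": 31.50, "MSFT": 47.00}
pnl = revalue(positions, prices)
# GE long:  (31.50 - 30.00) * 100 = 150.0
# MSFT short: (47.00 - 48.00) * -50 = 50.0
```

Since the client already holds the positions, each price tick only needs to trigger this cheap local computation instead of a round trip to the server.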
If it's complex revaluations, like over time with many prices needed, then you may need to do this on the server. You could implement a server with a process to keep doing this, or even a 2nd database server that gets updates and revalues things immediately.
It's a tough question because so much depends on your implementation requirements.
Steve Jones
September 3, 2002 at 10:10 am
I have seen energy/commodity systems that use a middle tier to keep a much smaller dataset to serve out, which helps speed things up. The system had a number of custom COM+ objects distributed among several servers to insert and retrieve the info. There is a reason that trading systems cost an arm and a leg: any of them that are any good are truly complex.
Wes
September 3, 2002 at 10:15 am
When I used to work at [Large Investment Bank], there was a prejudice against SQL-based solutions for historical time series because of a "feeling" that SQL-type data structures would be poorly optimized for queries. My "feeling" now is that hardware has made such strides that this is no longer relevant.
I recently ran an experiment to collect tick data from the Chicago Mercantile Exchange for my business and had my SQL Server (clustered SQL2KEE, Dual 933 PIIIs, 1 GB RAM, RAID 5 data disks) pretty much idle after a little buffering on the client side. I was picking up around 3 million rows a day.
A key concept, though, is over-subscription. Having middle-level servers that grab more data than the client is actually requesting and buffer it locally really helps. A trader who is comparing a stock to IBM is also likely to compare it to HPQ or MSFT. A trader who asks for a day's tick history on GE is likely to ask for a year's daily bars on the same stock, etc. It's a statistical optimization thing.
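The over-subscription idea sketches out like this (the `RELATED` map and class are hypothetical; in practice the related-symbol lists would come from observed request statistics): when a client asks for one symbol, the middle tier fetches the likely follow-ups in the same pass, so later requests never reach the database.

```python
# Hypothetical correlation map: which symbols a trader looking at one
# symbol is statistically likely to request next (assumed data).
RELATED = {"IBM": ["HPQ", "MSFT"]}

class OversubscribingCache:
    """Sketch of over-subscription: fetch more than was asked for,
    so follow-up requests are served from local memory."""

    def __init__(self, fetch_from_db):
        self.cache = {}
        self.fetch = fetch_from_db   # function: symbol -> data
        self.db_hits = 0

    def get(self, symbol):
        if symbol not in self.cache:
            # Fetch the requested symbol plus likely follow-ups in one pass.
            for s in [symbol] + RELATED.get(symbol, []):
                if s not in self.cache:
                    self.cache[s] = self.fetch(s)
                    self.db_hits += 1
        return self.cache[symbol]

# Usage: the second request is served entirely from the buffer.
calls = []
def fake_db(symbol):
    calls.append(symbol)
    return f"ticks:{symbol}"

cache = OversubscribingCache(fake_db)
cache.get("IBM")     # fetches IBM, plus HPQ and MSFT speculatively
cache.get("MSFT")    # already buffered: no new database hit
```

The trade-off is wasted fetches when the statistical guess is wrong, which is why this only pays off when request patterns are genuinely predictable.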
September 3, 2002 at 10:23 am
Oh, another thing to exploit is that CPU is cheap and price data is typically *MASSIVELY* compressible. Standard compression algorithms often get you up to 95-98% compression (depending on the data source). Spending a little time on packing and unpacking the data away from the server helps with disk space, network traffic, and I/O performance.
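The compressibility claim is easy to demonstrate: tick data is mostly repeated symbols and near-identical prices, so a standard algorithm like DEFLATE crushes it. A small sketch with simulated (not real) tick text:

```python
import zlib

# Simulated tick stream: prices cycle through a few nearby values, so the
# text representation is highly repetitive (illustrative data, not real ticks).
ticks = "".join(f"IBM,{75.00 + (i % 5) * 0.01:.2f},100\n" for i in range(10000))
raw = ticks.encode("ascii")
packed = zlib.compress(raw, level=9)

ratio = 1 - len(packed) / len(raw)
# Repetitive price data compresses dramatically; the exact ratio depends
# on the source, but well over 90% is common for data shaped like this.
```

Real tick feeds are less uniform than this toy stream, but the repeated symbols, timestamps, and small price deltas give compression plenty to work with, and the pack/unpack cost sits on cheap client or middle-tier CPU rather than the database server.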