June 27, 2015 at 12:49 am
Comments posted to this topic are about the item Not a cloud to be seen
Best wishes,
Phil Factor
June 27, 2015 at 8:15 am
Nice.
June 28, 2015 at 6:41 am
It's amazing what one can do with SQL Server on bare-metal commodity hardware with a well-architected system and attention to detail. I've been a staunch advocate of the "performance is a feature" story for literally decades.
I wouldn't be quick to dismiss the viability of the cloud, though. What do you think would happen if all SQLServerCentral or StackOverflow users logged in at once and used the system heavily? It wouldn't be pretty with an on-prem solution unless the database server(s) and all other hardware and software components are designed and sized to meet the unexpected and extraordinary demand. The system is only as strong as its weakest link, and it is very expensive to maintain excess capacity that is rarely, if ever, used.
This sort of capacity need is just an academic consideration at SQLServerCentral or StackOverflow, but it was a real issue at a major US financial services firm I used to work for. On a normal trading day, active traders log in and place trades within a few minutes of the US financial markets opening each morning. When there's a major financial event, such as a high-profile IPO, that number increases by orders of magnitude. Such events are rare (years apart), but hardware and system design had to accommodate the peak nevertheless. Trust me, people are more concerned with trading securities, and doing so quickly, than with getting answers to technical questions :-)
If that same system were being designed today, I have no doubt cloud components would be an integral part of a distributed architecture. It's exorbitantly expensive to maintain such excess capacity for an on-prem solution, especially when you consider licensing, redundancy, and DR. The ability to add capacity in short order is a compelling reason to consider a cloud-friendly architecture even if the initial implementation is on-prem.
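To make the "orders of magnitude" point concrete, here's a toy calculation in Java. Every number in it is a made-up assumption for illustration, not a figure from the firm I mentioned:

    public class PeakCapacity {
        public static void main(String[] args) {
            // Assumed figures, purely hypothetical: a normal morning burst
            // versus a rare IPO-day burst two orders of magnitude larger.
            double normalLoginsPerSec = 500;
            double peakLoginsPerSec   = 50_000;

            double overProvision = peakLoginsPerSec / normalLoginsPerSec;
            double idleFraction  = 1.0 - (normalLoginsPerSec / peakLoginsPerSec);

            System.out.printf("Hardware sized for the peak: %.0fx normal capacity%n", overProvision);
            System.out.printf("Sitting idle on a normal day: %.0f%%%n", idleFraction * 100);
        }
    }

Under those assumptions you license, power, and replicate 100x the normal capacity so it can sit 99% idle for years at a time. Elastic capacity is compelling precisely because it moves that cost to the few days you actually need it.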
June 28, 2015 at 12:30 pm
Dan Guzman-481633 (6/28/2015)
I'm not sure about involving cloud components in a trading system itself, but perhaps cloud platforms.
Martin Fowler wrote an article on the LMAX architecture that's used as a trading platform.
It talks about a Business Logic Processor, Input and Output Disruptors, single-threaded processing, and avoiding message queues because of locking, etc.
When you get into the realm of 6 million transactions per second, the problems become less domain-driven and more about computer hardware and in-memory processing. In other words, solving the "computer problem".
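To illustrate the idea (and only the idea), here's a minimal single-writer ring buffer in Java in the spirit of what Fowler describes: preallocated event slots and sequence counters instead of locks or queues. This is my own sketch, not the real com.lmax.disruptor API:

    import java.util.concurrent.atomic.AtomicLong;

    public class MiniDisruptor {
        static final int SIZE = 1024;                           // power of two, cheap index masking
        static final long[] ring = new long[SIZE];              // preallocated "event" slots
        static final AtomicLong published = new AtomicLong(-1); // last sequence the producer wrote
        static volatile long consumed = -1;                     // last sequence the consumer processed

        public static void main(String[] args) throws InterruptedException {
            final long EVENTS = 1_000_000;

            // The "Business Logic Processor": one thread, no locks, just a spin on a sequence.
            Thread processor = new Thread(() -> {
                long sum = 0;
                for (long next = 0; next < EVENTS; next++) {
                    while (published.get() < next) Thread.onSpinWait(); // wait for the producer
                    sum += ring[(int) (next & (SIZE - 1))];             // the "business logic"
                    consumed = next;
                }
                System.out.println("Processed " + EVENTS + " events, sum = " + sum);
            });
            processor.start();

            // Single producer: claim the next slot, write the event, publish the sequence.
            for (long seq = 0; seq < EVENTS; seq++) {
                while (seq - consumed > SIZE) Thread.onSpinWait();      // don't lap the consumer
                ring[(int) (seq & (SIZE - 1))] = seq;
                published.set(seq);                                     // volatile write publishes the slot
            }
            processor.join();
        }
    }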
As it turns out, this type of architecture has some inherent simplicities.
The LMAX architecture may very well be the future of development, but I think the industry will have to go through many gyrations before arriving there.
So watch your backs, hipsters: your survival depends on your ability to adapt, just as it did for your predecessors.
June 29, 2015 at 12:14 pm
I think people have very little concept of how much 2TB is or how fast 2.4 GHz is. One Fusion-io card can cope with a phenomenal amount of I/O.
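A quick back-of-envelope makes the point. The 8KB page size is SQL Server's; the rest are just round nominal figures:

    public class BackOfEnvelope {
        public static void main(String[] args) {
            long ramBytes  = 2L * 1024 * 1024 * 1024 * 1024;  // 2 TB of memory
            long pageBytes = 8 * 1024;                        // SQL Server's 8KB page
            System.out.println("8KB pages that fit in 2TB: " + ramBytes / pageBytes); // ~268 million

            long cyclesPerSecond = 2_400_000_000L;            // one core at 2.4 GHz
            System.out.println("Clock cycles per second, per core: " + cyclesPerSecond);
        }
    }

That's roughly 268 million database pages held in memory and billions of cycles a second on every core. Very few systems that call themselves "web scale" ever get near that.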
They've bought the story that RDBMSs don't perform at web scale without considering whether their systems are now, or ever will be, performing at this remarkably unquantified level.
I get the developer's need for a flexible data model in a fast-evolving system, and that is fine for an extremely simplistic business model. The instant you want to make broader use of your data, convenience at the front end translates to short-term gain for long-term pain.