July 6, 2015 at 2:37 pm
We are in the process of moving existing clustered SQL Server databases to AWS. One major database has intensive read and write transactions, and historically we have had constant issues in the current environment when massive updates are running. I'm wondering what the best design is to optimize performance for both reads and writes; reads should have higher priority than writes.
Are there any special features in AWS that we can take advantage of for this purpose?
September 21, 2015 at 5:50 am
If you are migrating to AWS, I take it that your existing offering is either on your own physical tin or virtualised in your own data centre?
I'd seriously recommend that you engage one of AWS's own architects to help with this. They are a talented bunch, but be aware that they are priced to make sure you focus on using them for the important stuff.
You may also have to consider re-designing your DB and application to break it down into smaller units. There used to be a joke going around that AWS provided a hyper-mega-ultra-super-DB server, or as DBAs called it, "small".
It's not quite that bad now, with the r3.8xlarge offering 32 vCPUs, 244 GB RAM and a 10 Gbit network connection.
You can request provisioned IOPS, which I think goes up to 10,000 IOPS per volume.
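For what it's worth, here's a minimal sketch of requesting a Provisioned IOPS (io1) volume with boto3; the size, availability zone and IOPS figure are placeholders, not recommendations:

import boto3

ec2 = boto3.client("ec2")

# Placeholder values: size, AZ and the IOPS figure are illustrative only.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,           # GiB
    VolumeType="io1",   # Provisioned IOPS SSD
    Iops=10000,
)
print(volume["VolumeId"])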
Make good use of http://calculator.s3.amazonaws.com/index.html. Forget copying and pasting prices into Excel: you will miss something, and even if you don't, your spreadsheet will become a maintenance nightmare.
October 6, 2015 at 1:27 pm
Thanks for your advice and recommendations, David!
Funny you mentioned "AWS provided a hyper-mega-ultra-super-DB server" - even now they still don't have super-powerful SQL Server instance types, IMO. Luckily for us, we didn't have a massive data-crunching need for the really powerful beasts. But we are hitting the top limit of their specs, and I don't even consider ours a big SQL Server operation.
I was able to use a tool called SQLIO to benchmark the IO capacity of our existing environment and of the AWS servers. I was very happy to find out that, with drive sizes big enough, I can match our current IO capacity without paying for extra provisioned IOPS. Those IOPS get expensive really quickly.
I also found out that the IOPS figures in the AWS specs refer to random reads and writes, not sequential ones, if anybody is wondering.
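If it helps anyone, these are the kinds of SQLIO runs I mean; the file path, duration, thread count and queue depth are example values to tune for your own workload:

sqlio -kR -s120 -frandom -o8 -b8 -t4 -BN D:\testfile.dat
sqlio -kR -s120 -fsequential -o8 -b64 -t4 -BN D:\testfile.dat

The first measures random 8 KB reads, the second sequential 64 KB reads; it's the random numbers you want to compare against AWS's quoted IOPS.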
October 26, 2015 at 7:45 am
Spreading the keys across 16 hex prefixes means 16 prefixed reads get you all the logs for mydomain.com for that hour:
http://myserverlogs.s3.amazonaws.com?prefix=0/service_log.2012-02-27-23.com.mydomain
http://myserverlogs.s3.amazonaws.com?prefix=1/service_log.2012-02-27-23.com.mydomain
…
http://myserverlogs.s3.amazonaws.com?prefix=e/service_log.2012-02-27-23.com.mydomain
http://myserverlogs.s3.amazonaws.com?prefix=f/service_log.2012-02-27-23.com.mydomain
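A minimal sketch of issuing those 16 prefixed reads with boto3; the bucket and key names are taken from the URLs above, so adjust them for your own layout:

import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Bucket and hour key copied from the example URLs above.
bucket = "myserverlogs"
hour_key = "service_log.2012-02-27-23.com.mydomain"

keys = []
for shard in "0123456789abcdef":
    # One LIST per hex shard; pagination covers shards with >1,000 objects.
    for page in paginator.paginate(Bucket=bucket, Prefix=f"{shard}/{hour_key}"):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))

print(f"{len(keys)} log objects for that hour")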
March 29, 2016 at 12:22 pm
Yes, the AWS bill can add up quickly once you need anything customized.