February 20, 2017 at 11:27 am
We have SQL Server 2016 on Azure. It's running into bad I/O problems. When I looked at Resource Monitor, I was surprised to see that it's the C: drive that has the longest disk queue, while all the other drives barely show any latency. The C: drive is 126 GB with 95 GB free. All the database data is on another drive.
How could this be possible? I thought the bottleneck would be on the data drive.
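For reference, a quick way to confirm which files are actually behind the C: queue is sys.dm_io_virtual_file_stats; this is a minimal sketch, and the drive-letter filter is the only assumption:

SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.num_of_writes,
        vfs.io_stall / NULLIF(vfs.num_of_reads + vfs.num_of_writes, 0) AS avg_stall_ms  -- average stall per I/O
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
WHERE   mf.physical_name LIKE 'C:%'              -- files that live on the C: drive
ORDER BY vfs.io_stall DESC;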
February 20, 2017 at 11:45 am
Reading into the question, I'm assuming you have an Azure-based VM...
Is your pagefile there on C:? Have you set max server memory sensibly on the instance?
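A minimal sketch for checking (and, if needed, capping) max server memory - the 26624 MB value is purely illustrative, so size it to your VM:

-- Show the current setting
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Example only: cap the instance, leaving headroom for the OS and the pagefile
EXEC sp_configure 'max server memory (MB)', 26624;
RECONFIGURE;

(The pagefile location itself is a Windows setting, so check that in the VM rather than from SQL Server.)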
February 20, 2017 at 12:55 pm
Are any SQL Server components installed on C: - tempdb, the other system databases, backups, or the error logs, for example?
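A quick way to answer that is to list every database file whose physical path is on C: (a minimal sketch; only the LIKE filter is an assumption):

SELECT  DB_NAME(database_id) AS database_name,
        name                 AS logical_name,
        physical_name,
        type_desc            -- ROWS / LOG
FROM    sys.master_files
WHERE   physical_name LIKE 'C:%'
ORDER BY database_name;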
Thomas Rushton
blog: https://thelonedba.wordpress.com
February 20, 2017 at 1:42 pm
Thanks a lot, guys. I think I found out what the problem is. tempdb is currently on C:, which is where the main I/O bottleneck is. My plan is to upgrade the data drives and move all the system databases there.
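A minimal sketch of the tempdb move, assuming the default logical file names (tempdev/templog) and a hypothetical target path T:\TempDB; the files are recreated at the new location on the next service restart:

-- Point tempdb's files at the new drive (path is hypothetical)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
-- Restart the SQL Server service; tempdb is rebuilt at the new path on startup.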
While I'm here, is it advisable to consolidate several small servers onto one more powerful server? We have a bunch of these meager servers with 2-4 cores and 7-14 GB of RAM. I personally want to consolidate them, but I'm not quite sure about the performance impact of consolidation versus isolated servers. These are all servers new to us, inherited from a recent company acquisition, and I'm not sure where their performance stands.
February 23, 2017 at 6:36 am
Michelle-138172 - Monday, February 20, 2017 1:42 PM: ...These are all servers new to us, inherited from a recent company acquisition, and I'm not sure where their performance stands.
Once you get the immediate issues sorted, run some baseline tests so you will know what Good and Bad mean for these servers 🙂
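A minimal baselining sketch: snapshot wait stats into a table now, re-run it on a schedule, and compare the deltas over time (the table name dbo.WaitStatsBaseline is made up):

-- One-off baseline table (name is illustrative)
CREATE TABLE dbo.WaitStatsBaseline
(
    capture_time        datetime2    NOT NULL DEFAULT SYSDATETIME(),
    wait_type           nvarchar(60) NOT NULL,
    waiting_tasks_count bigint       NOT NULL,
    wait_time_ms        bigint       NOT NULL,
    signal_wait_time_ms bigint       NOT NULL
);

-- Capture a snapshot; schedule this (e.g. via an Agent job) and diff successive snapshots
INSERT INTO dbo.WaitStatsBaseline (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_time_ms > 0;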
------------------------------------------------------------------------------------------------
Standing in the gap between Consultant and Contractor
Kevin3NF
DallasDBAs.com/Blog
Why is my SQL Log File HUGE?!?!
The future of the DBA role...
SQL Security Model in Plain English
February 23, 2017 at 10:48 am
Thanks, Kevin.
February 23, 2017 at 11:20 am
Kevin3NF - Thursday, February 23, 2017 6:36 AM:
Michelle-138172 - Monday, February 20, 2017 1:42 PM: ...These are all servers new to us, inherited from a recent company acquisition, and I'm not sure where their performance stands.
Once you get the immediate issues sorted, run some baseline tests so you will know what Good and Bad mean for these servers 🙂
Heh... it's on the cloud. It's all bad. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.