June 10, 2018 at 6:28 am
Comments posted to this topic are about the item Improving SQL Server 2016 Large Database Performance
June 11, 2018 at 2:45 am
Great Article. Particularly loved the trace flag tips.
Wondering about two things though:
1. Can we set optimize for ad-hoc workloads simply because we have a large database? I thought this should be based on the actual "workload" ... (no pun intended)
2. Because of compatibility issues, I have left some databases at compatibility level 100, 110, or 120 (the DMA pointed out potential risks of upgrading). How bad is this really?
Br. Kenneth Igiri
https://kennethigiri.com
All nations come to my light, all kings to the brightness of my rising
June 11, 2018 at 3:51 am
Hi Kenneth.
1: The optimize for ad-hoc workloads recommendation is general; it has nothing to do with the size of the databases.
2: Nothing wrong with using older compatibility modes, as long as your policies for calculating statistics are aligned with the platform. My point is that when you upgrade to 120 or later, you cannot just trust auto update stats for large databases. You need to check your query plans and probably set up maintenance jobs to calculate stats (see the sketch below for both points).
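For illustration, something along these lines. The database object names are made up for this example, and the full scan / 10 percent split mirrors the jobs discussed in the article:

-- Point 1: optimize for ad hoc workloads is a server-wide setting,
-- enabled the same way regardless of database size.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Point 2: scheduled stats refresh for a hypothetical large table:
-- a weekly (e.g. Saturday) full-scan refresh ...
UPDATE STATISTICS dbo.BigFacts WITH FULLSCAN;
-- ... and a lighter sampled refresh on the other days.
UPDATE STATISTICS dbo.BigFacts WITH SAMPLE 10 PERCENT;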
June 11, 2018 at 7:04 am
Why would anyone run any other application on a SQL server? Especially one with large databases like this?
I've had really good results with lock pages in memory. I've never done it on SQL 2016, but running older SQL Server versions on bare-metal hardware it was almost a necessity. Otherwise all the HP crapware you have to run on HP servers takes up memory, and you'll see periodic errors about SQL Server memory being paged out, or something like that; I forget the exact error. I haven't seen it on VMware with VMware Tools, but it can still happen.
When I ran into this, we would get periodic SQL Server memory issues. Once we enabled lock pages in memory, they went away.
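For what it's worth, you can check whether the policy is actually in effect from inside SQL Server. A quick check, assuming SQL Server 2016 SP1 or later (the column was added in that release):

-- LOCK_PAGES means lock pages in memory is active;
-- CONVENTIONAL means it is not.
SELECT sql_memory_model_desc FROM sys.dm_os_sys_info;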
June 11, 2018 at 7:33 am
To be honest, I'm just passing on the recommendation about not locking pages in memory from the Microsoft black belt team.
I haven't tested performance with and without this policy. I doubt that it had any impact on performance in our setup.
I agree that no other apps should run on a VM that has SQL Server installed serving large databases.
June 11, 2018 at 9:25 am
Could you clarify the Maximum Server Memory formula?
My VMs for SQL Server run with a minimum of 48 GB.
If I follow your recommendation, I would be taking 9 GB away from SQL Server.
Usually I reserve 4-5 GB for the OS.
Also, using your recommendation, my one server that has 96 GB would need 15 GB reserved for the OS.
Why?
Bill Soranno
MCP, MCTS, MCITP DBA
Database Administrator
Winona State University
Maxwell 143
"Quality, like Success, is a Journey, not a Destination" - William Soranno '92
June 12, 2018 at 8:26 am
Again, I'm giving you the same lame answer: I'm repeating the recommendations from the Microsoft black belt team.
I have not analysed memory consumption by the OS on our servers in detail.
We used the Azure L8s VMs with 64 GB RAM, giving 4 + (64 - 16) / 8 = 10 GB to the OS and 54 GB to MSSQL.
I have not tested a setup with just 4 GB for the OS and 60 GB for MSSQL, but I'm guessing that it would work just fine. I don't see this as the root cause for the performance issues that we experienced.
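As a sketch of the arithmetic, assuming that formula on a 64 GB VM (the resulting value below is just this example's, expressed in MB):

-- OS reservation: 4 + (64 - 16) / 8 = 10 GB, so SQL Server gets 54 GB = 55296 MB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 55296;
RECONFIGURE;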
June 12, 2018 at 8:30 am
Thank you for the reply.
Bill Soranno
MCP, MCTS, MCITP DBA
Database Administrator
Winona State University
Maxwell 143
"Quality, like Success, is a Journey, not a Destination" - William Soranno '92
June 12, 2018 at 11:00 pm
Thank you for sharing this interesting article!
But there is one thing in particular that I can't understand.
Why update the statistics on Saturday with fullscan, only to destroy the fullscan stats on Monday (with sample 10 percent)?
That makes no sense to me.
Gerhard P.
Advanced BI Developer
Austria
June 13, 2018 at 12:36 am
That is an error in the article! This is wrong:
For large databases, do not use auto update statistics. Update statistics via maintenance jobs instead. I recommend the following maintenance jobs:
June 13, 2018 at 5:07 am
@henrik.thomsen.dk
This makes sense!
I use incremental stats where possible (partitioned tables), with a small script that determines the number of changed rows per partition; if there are enough changes, I update the stats of that partition (see the sketch below).
For the really big tables (> 3 TB), a 10% sample is a good measure for good plans (in our environment).
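Roughly like this. A sketch assuming SQL Server 2016 SP1 or later (for sys.dm_db_incremental_stats_properties), a made-up partitioned table dbo.BigFacts with an incremental statistic named IX_BigFacts, and 10 percent changed rows as the threshold:

DECLARE @partition int, @sql nvarchar(max);

-- Find partitions where "enough" rows changed since the last stats update.
DECLARE part_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT isp.partition_number
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_incremental_stats_properties(s.object_id, s.stats_id) AS isp
    WHERE s.object_id = OBJECT_ID(N'dbo.BigFacts')
      AND s.name = N'IX_BigFacts'
      AND isp.modification_counter > 0.10 * isp.rows;  -- threshold: 10% of the partition

OPEN part_cursor;
FETCH NEXT FROM part_cursor INTO @partition;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Refresh only that partition's portion of the incremental statistic.
    SET @sql = N'UPDATE STATISTICS dbo.BigFacts (IX_BigFacts)'
             + N' WITH RESAMPLE ON PARTITIONS(' + CAST(@partition AS nvarchar(10)) + N');';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM part_cursor INTO @partition;
END
CLOSE part_cursor;
DEALLOCATE part_cursor;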
Gerhard P.
Advanced BI Developer
Austria
August 12, 2018 at 12:04 am
This seems to be totally wrong:
Your database volumes must have at least 25% free space.
The answer is "it depends." You can have 1% free space or 50% … depending on the situation and usage, etc., there will be no difference.
Other than that, I would always love to have 99% free space … but it is not a good idea.
One of the biggest problems with SQL Server and giant database tables is that the statistics histogram is stuck at 200 steps. I hope one day Microsoft will add an index parameter, just like specifying ascending or descending, so you will be able to pick the row or page count for stats.
200 steps works for small and big tables, but when it comes to tables over 10-20 billion rows … in some cases, good luck.
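You can see the cap yourself. A sketch against the same made-up table and statistic as above:

-- The Steps column in the output never exceeds 200, no matter how big the table is.
DBCC SHOW_STATISTICS (N'dbo.BigFacts', N'IX_BigFacts') WITH STAT_HEADER;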
If anybody from Microsoft wants to contact me on that ….travisGatesMcGee @ Ho tm ail dot com
October 30, 2018 at 7:34 am
Couple of things here:
1. I would not disable auto update stats across the board. In fact, I would "almost never" disable auto update stats. It's a good idea to update stats manually as often as you can: nightly, weekly, whatever... but there's no benefit to disabling auto stats unless you experience frequent updates that impair performance. Even in that case, you are probably better off leaving auto update on and enabling the async option (see the sketch below). NOTE: This article is referring to SQL Server 2016, where the TF 2371 changes are enabled by default. https://blogs.msdn.microsoft.com/psssql/2016/10/04/default-auto-statistics-update-threshold-change-for-sql-server-2016/
2. TF 4199 should NOT be enabled by default. If the compatibility level of the database is 130, all optimizer fixes up to RTM are already enabled. If you need fixes released after RTM, use the database scoped option, not the trace flag.
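A sketch of both, against a made-up database named BigDb:

-- Point 1: keep auto update stats on, but stop it from blocking queries.
ALTER DATABASE BigDb SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Point 2: take post-RTM optimizer fixes per database instead of TF 4199.
USE BigDb;
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;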
Jared
CE - Microsoft