October 22, 2019 at 12:09 pm
Hello everyone!
I am kind of an accidental DBA with limited knowledge of SQL Server, and my English is limited too, so please bear with me. If you feel like correcting me on either SQL or English, it is appreciated.
I found an issue with a SQL Server instance I am trying to keep alive and have read a lot online hoping to solve it, but now I am stuck and hope someone can help me out.
I used the following query, which I basically found on a Microsoft page, to gather information on the plan cache:
select cht.name, cht.type, cht.buckets_count,
       cht.buckets_count * 4 as [4 x buckets_count],
       mcc.entries_count,
       cht.buckets_count * 4 - mcc.entries_count as delta,
       mcc.pages_kb
from sys.dm_os_memory_cache_hash_tables cht
join sys.dm_os_memory_cache_counters mcc
  on cht.name = mcc.name
where cht.name in ('SQL Plans', 'Object Plans', 'Bound Trees')
Of the 160,036 available SQL Plans slots, 160,033 were in use, leaving 3 free. The instance had been restarted less than 24 hours earlier; it turned out that on a work day it takes about 7 hours to completely fill up the plan cache.
The following query showed that, with 137,700 Adhoc plans, these consume by far the most buckets:
select objtype, count(*) from sys.dm_exec_cached_plans group by objtype order by count(*) desc
The instance is set to "optimize for ad hoc workloads", but of course the plan stubs still fill up the plan cache anyway (128,000 of them).
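For what it's worth, the stubs can be counted separately from the full plans with a query along these lines (a sketch against the same DMV; the "Compiled Plan Stub" rows are what "optimize for ad hoc workloads" leaves behind):
-- count and size cached objects by type; stubs show up as 'Compiled Plan Stub'
select cacheobjtype, count(*) as cached_objects,
       sum(cast(size_in_bytes as bigint)) / 1024 / 1024 as size_mb
from sys.dm_exec_cached_plans
group by cacheobjtype
order by cached_objects desc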
So I had a look at the respective query texts and found that the majority are extremely simple; most of them look like the statements below:
update qTable set ltime = 239733 where qTable.id =6094604 and wtime = 0
update qTable set ltime = 263296 where qTable.id =6094143 and wtime = 0
update qTable set ltime = 260195 where qTable.id =6094602 and wtime = 0
update qTable set ltime = 222187 where qTable.id =6094599 and wtime = 0
update qTable set ltime = 220809 where qTable.id =6094624 and wtime = 0
A few others are very similar, but with only one predicate (e.g. "update qTable set empID = 3358 where id = 6094624").
And one is a select that I am not sure is well written in terms of performance. Apparently it combines 3 tables; I didn't know this syntax before:
select customer.*
from qTable, task, customer
where customer.pid = 105
and customer.delat =0
and qTable.id = 6104254
and qTable.taskid = task.id
and task.id > 0
and task.customerid = customer.id
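As far as I can tell, this is the old comma-join syntax; rewritten with explicit joins (same tables and predicates, purely for my own readability, so take it with a grain of salt) it would look roughly like this:
select customer.*
from qTable
join task on qTable.taskid = task.id
join customer on task.customerid = customer.id
where qTable.id = 6104254
  and task.id > 0
  and customer.pid = 105
  and customer.delat = 0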
Thank you everyone for reading so far! Here is where I am stuck.
Is there something I could suggest to the developer that would allow SQL Server to recognize these queries as parameterizable? I read about parameterizing queries, but I did not really understand how it is done. What would it look like for the above examples?
Also, I am not sure whether this even is an issue. As far as I understand, it causes older plans to be flushed out. As most of them were single-use anyway, there might not be that big an impact. And the plans apparently are very simple, so while not perfect, compiling so many of them might put only a little additional stress on the instance.
Last question: from an experienced database administrator's perspective, is there a smarter approach to finding these queries?
My last step was to group by the first 30 characters of the query text and sort by count(*) descending, then have a look at the corresponding queries.
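Roughly like this (a sketch of what I ran, reconstructed from memory, so details may be off):
select left(st.text, 30) as query_start, count(*) as plan_count
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) st
where cp.objtype = 'Adhoc'
group by left(st.text, 30)
order by count(*) desc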
Thank you in advance for any hints and help!
October 22, 2019 at 1:20 pm
To get plan reuse, you need to use stored procedures or parameterized queries: https://www.red-gate.com/simple-talk/sql/t-sql-programming/performance-implications-of-parameterized-queries/
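For the update statements in the opening post, a parameterized version issued through sp_executesql would look roughly like this (a sketch only; from application code you would use your driver's parameter mechanism instead, and the data types here are guesses):
declare @ltime int = 239733, @id int = 6094604;  -- types are assumptions

exec sys.sp_executesql
    N'update qTable set ltime = @ltime where qTable.id = @id and wtime = 0',
    N'@ltime int, @id int',
    @ltime = @ltime, @id = @id;
Because the statement text is now constant, every execution can reuse the same cached plan regardless of the values.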
October 22, 2019 at 2:42 pm
in each database you can set the parameterisation options (right click on the database, go to properties and go to options)
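the T-SQL equivalent, for reference (database name is just a placeholder):
ALTER DATABASE [Mydatabase] SET PARAMETERIZATION FORCED;  -- the default is SIMPLE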
obviously stored procs with nice parameters would be the ideal solution
but I found the best way (without changing the code) is to use a plan guide
USE [Mydatabase]
GO
/****** Object: PlanGuide TemplateGuide1 Script Date: 22/10/2019 15:37:31 ******/
EXEC sp_create_plan_guide
    @name = N'[TemplateGuide1]',
    @stmt = N'select Count ( Mydatabase_user_name ) as C1 from "v_user_role" where ( Mydatabase_user_name = @0 and Mydatabase_role_name = @1 )',
    @type = N'TEMPLATE',
    @module_or_batch = NULL,
    @params = N'@0 varchar(8000),@1 varchar(8000)',
    @hints = N'OPTION(PARAMETERIZATION FORCED)'
GO
this seemed to reduce my procedure cache usage by quite a lot - I'm no expert in this, but it has helped me out
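if you want to build a template guide for one of your own statements, sp_get_query_template can generate the parameterized text and parameter list for you - a minimal sketch, using one of the updates from the opening post (the guide name is made up):
declare @stmt nvarchar(max), @params nvarchar(max);

-- let SQL Server produce the templatized statement and its parameter list
exec sp_get_query_template
    N'update qTable set ltime = 239733 where qTable.id = 6094604 and wtime = 0',
    @stmt output,
    @params output;

exec sp_create_plan_guide
    @name = N'qTableUpdateGuide',   -- hypothetical name
    @stmt = @stmt,
    @type = N'TEMPLATE',
    @module_or_batch = NULL,
    @params = @params,
    @hints = N'OPTION (PARAMETERIZATION FORCED)';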
MVDBA
October 24, 2019 at 11:38 am
Thank you very much for your answers; they are much appreciated!
I've read the article and feel I understand parameterization better now. I'm going to experiment with the examples and try a few queries of my own to learn more.
For the plan guide, I need to read a lot more; I'd never heard of it before. But it looks promising! Is there a potential risk too, or can I simply try it, and in the worst case it just isn't better than before?
October 24, 2019 at 1:14 pm
x wrote:Thank you very much for your answers; they are much appreciated!
I've read the article and feel I understand parameterization better now. I'm going to experiment with the examples and try a few queries of my own to learn more.
For the plan guide, I need to read a lot more; I'd never heard of it before. But it looks promising! Is there a potential risk too, or can I simply try it, and in the worst case it just isn't better than before?
I tried it because we had an Access client using direct links to our tables (nothing I could do about it) - it worked for us on queries like "select * from stock where stockid=11112222"
but you'll have to see for yourself - you'll be able to see the impact in the Query Store, and hopefully if you query the plan cache you will get fewer plans. I can't see any risk, but get yourself comfortable with it first
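and if a guide ever misbehaves, it can be switched off or removed again with sp_control_plan_guide - a sketch, using the guide name from my earlier example:
-- disable the guide (new compilations ignore it, but the guide itself is kept)
exec sp_control_plan_guide @operation = N'DISABLE', @name = N'[TemplateGuide1]';

-- or remove it entirely
exec sp_control_plan_guide @operation = N'DROP', @name = N'[TemplateGuide1]';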
MVDBA
October 24, 2019 at 2:06 pm
MVDBA (Mike Vessey) wrote:in each database you can set the parameterisation options (right click on the database, go to properties and go to options)
obviously stored procs with nice parameters would be the ideal solution
but I found the best way (without changing the code) is to use a plan guide
USE [Mydatabase]
GO
/****** Object: PlanGuide TemplateGuide1 Script Date: 22/10/2019 15:37:31 ******/
EXEC sp_create_plan_guide
    @name = N'[TemplateGuide1]',
    @stmt = N'select Count ( Mydatabase_user_name ) as C1 from "v_user_role" where ( Mydatabase_user_name = @0 and Mydatabase_role_name = @1 )',
    @type = N'TEMPLATE',
    @module_or_batch = NULL,
    @params = N'@0 varchar(8000),@1 varchar(8000)',
    @hints = N'OPTION(PARAMETERIZATION FORCED)'
GO
this seemed to reduce my procedure cache usage by quite a lot - I'm no expert in this, but it has helped me out
You should write an article on that one. This is especially helpful with 3rd party junk that can't be changed. A nice article on how to find problems like this and then how to fix them using this technique would likely be very popular.
I've not thought about it before seeing your example, but it seems like it would also help prevent a shedload of recompiles. We had such a problem with some code that would "only" take 100ms to execute but would take 2 to 22 SECONDS (not a misprint) to recompile. It would recompile every bloody time because of the non-parameterized literals in the code, and it was being used around 40,000 times per hour. It was some legacy code that we were able to fix (and such code should be fixed if you can... it just works better for a lot of reasons), but doing something like this is a great work-around for 3rd party junk.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 24, 2019 at 4:53 pm
I'll give it a go... It's still new to me, but I love learning.
I've only just finished my last article and I'm at SQL in the City Summit being interviewed by Steve tomorrow... He's going to get sick of me soon 🙂
MVDBA
October 25, 2019 at 9:28 pm
Question for the experts replying here: would "optimize for ad hoc workloads" help in these cases, or would it be a bad idea? I know it's a fairly big change and I don't advise that the OP do it without really getting guidance, but I know I've considered it!
October 26, 2019 at 2:38 pm
x wrote:Question for the experts replying here: would "optimize for ad hoc workloads" help in these cases, or would it be a bad idea? I know it's a fairly big change and I don't advise that the OP do it without really getting guidance, but I know I've considered it!
Your suggestion is a good one but the OP stated that he already has that enabled.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 26, 2019 at 2:40 pm
MVDBA (Mike Vessey) wrote:I'll give it a go... It's still new to me, but I love learning.
I've only just finished my last article and I'm at SQL in the City Summit being interviewed by Steve tomorrow... He's going to get sick of me soon 🙂
Interview? You interviewing to work at RedGate or is he working on an article about people?
--Jeff Moden
Change is inevitable... Change for the better is not.
October 26, 2019 at 11:04 pm
Jeff Moden wrote:x wrote:Question for the experts replying here: would "optimize for ad hoc workloads" help in these cases, or would it be a bad idea? I know it's a fairly big change and I don't advise that the OP do it without really getting guidance, but I know I've considered it!
Your suggestion is a good one but the OP stated that he already has that enabled.
Well that was inattentive of me 🙂
October 26, 2019 at 11:34 pm
Jeff Moden wrote:x wrote:Question for the experts replying here: would "optimize for ad hoc workloads" help in these cases, or would it be a bad idea? I know it's a fairly big change and I don't advise that the OP do it without really getting guidance, but I know I've considered it!
Your suggestion is a good one but the OP stated that he already has that enabled.
Well that was inattentive of me 🙂
Nah... I was going to preface that with the fact that it's difficult to find everything on these longer threads; sometimes you can miss stuff even on a shorter one. The only reason I knew is that I've taken a good bit of interest in this thread and remember what's in it. Otherwise, I'd have made the same suggestion.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 28, 2019 at 9:24 am
MVDBA (Mike Vessey) wrote:I'll give it a go... It's still new to me, but I love learning.
I've only just finished my last article and I'm at SQL in the City Summit being interviewed by Steve tomorrow... He's going to get sick of me soon 🙂
Interview? You interviewing to work at RedGate or is he working on an article about people?
ha ha - I wish - it was a panel interview at SQL in the City Summit about DevOps…
it was nice to swap stories with Grant and Steve... plus the Redgate team are awesome
MVDBA
October 29, 2019 at 7:15 pm
x wrote:Question for the experts replying here: would "optimize for ad hoc workloads" help in these cases, or would it be a bad idea? I know it's a fairly big change and I don't advise that the OP do it without really getting guidance, but I know I've considered it!
Thank you for sharing your idea!
The instance does indeed have this option enabled. As I understand it, it helps save memory because on each first execution only a plan stub is stored, which consumes far less memory than many (most?) full plans would. My wild guess is that the saved memory becomes available for the buffer pool, but I don't actually know.
My instance has a lot of memory (if I remember correctly, the maximum possible for one CPU on this architecture). The bottleneck in this case is the plan cache, which has a hard limit on the number of entries (see the first query in the opening post: 4 x buckets_count). I have read about a trace flag that quadruples the bucket count and hence the maximum number of cached plans, but in my case this would only postpone filling up the plan cache by roughly one or two days, so it would just consume more memory without being an actual remedy.
Warning to any reader: I am not very experienced and might have gotten things wrong.
Also, I am not sure if "plan cache" in the thread title is a technically accurate term.
Thank you again for your answers. I am currently reading about plan guides, and I am thinking about how I can test any change before bringing it to production. Unfortunately the database is some TB in size and I have neither the storage nor an adequate test load, and I am scared to just try it in production.
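From what I have read so far, existing guides can at least be listed from a catalog view; a sketch, in case it helps someone later:
-- list all plan guides in the current database and whether they are active
select name, scope_type_desc, is_disabled, query_text
from sys.plan_guides;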
I'll update this thread when I have implemented the change.
October 30, 2019 at 5:11 am
To be honest, I doubt that the stat you dropped had anything to do with the problem. I think that you may have just had a bad plan cached and you dropping the stat caused the proc to recompile, which gave you a better plan. This may or may not have originally been due to "bad parameter sniffing" but it's starting to sound like it.
Most will argue with me about this, but I'd first try using an index hint (which is actually a directive) in the right spot in the stored procedure before I started farting around with plan guides and all that. You could also try an "OPTIMIZE FOR" hint (disclaimer... I've never needed to use such a hint so I don't actually know what the behavior will be, but I have read that it can help with such things).
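For reference, such a hint goes at the end of the statement; a sketch reusing names from the select in the opening post (the parameter, value, and any benefit here are assumptions on my part):
-- inside a procedure with a parameter @pid, pin the plan to a representative value
select customer.*
from customer
where customer.pid = @pid
option (optimize for (@pid = 105));
-- or: option (optimize for unknown);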
--Jeff Moden
Change is inevitable... Change for the better is not.