December 8, 2009 at 10:43 am
I have a relatively simple query (from an application we purchased) that, when run, pegs a 24-core box at 100% CPU. Here is the query:
SELECT not_log.nlh_start, not_log.nlh_status, not_log.id, nlh_hdr
FROM not_log
WHERE not_log.nlh_hdr LIKE N'%survey%'
I totally understand how inefficient the wildcards are, and that the wildcard at the beginning of the string is especially horrendous. What I'm trying to understand is specifically why this pegs every processor on the box as soon as I hit execute. I have tried to write a similar query against similar datasets on a different box, but I have been unable to reproduce the problem. However, if I take this not_log table and ship it to any other server, regardless of SQL edition or patch level, I can reproduce the problem.
The not_log table has approx 13 million records. Below is the table structure and current indexes. The execution plan is also attached.
I'm just trying to understand exactly why this has the effect it does. I have set max degree of parallelism on the server to 4 to keep this single query from bringing the entire environment down. This query is generated by a search feature in the purchased application, so it would literally be possible for 20 users to kick off similar queries at the same time.
I have told my managers that a wildcard search like this is extremely CPU-intensive because of the comparisons that have to happen on literally every row in the table. Can someone please confirm this, or point me in the direction of the real cause?
Thanks in advance for any help!
CREATE TABLE [dbo].[not_log](
[id] [int] NOT NULL,
[int] NOT NULL,
[last_mod] [int] NULL,
[nlh_hdr] [nvarchar](255) NULL,
[nlh_msg] [ntext] NULL,
[nlh_msg_html] [ntext] NULL,
[nlh_c_alias] [binary](16) NULL,
[nlh_c_addressee] [binary](16) NULL,
[nlh_email] [nvarchar](50) NULL,
[nlh_cm_method] [int] NULL,
[nlh_transition] [int] NULL,
[nlh_pri] [int] NULL,
[nlh_type] [int] NULL,
[nlh_ack_time] [int] NULL,
[nlh_status] [int] NULL,
[nlh_user_ack] [nvarchar](40) NULL,
[nlh_start] [int] NOT NULL,
[nlh_ack_by] [int] NULL,
[nlh_end] [int] NULL,
[cntxt_obj] [nvarchar](30) NULL,
[cmth_used] [int] NULL,
[tenant] [binary](16) NULL,
CONSTRAINT [XPKnot_log] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
/****** Object: Index [not_log_x0] Script Date: 12/08/2009 11:35:57 ******/
CREATE NONCLUSTERED INDEX [not_log_x0] ON [dbo].[not_log]
(
[nlh_c_addressee] ASC,
[nlh_start] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [not_log_x1] Script Date: 12/08/2009 11:35:57 ******/
CREATE NONCLUSTERED INDEX [not_log_x1] ON [dbo].[not_log]
(
[nlh_status] ASC,
[nlh_c_addressee] ASC,
[cmth_used] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [not_log_x2] Script Date: 12/08/2009 11:35:57 ******/
CREATE NONCLUSTERED INDEX [not_log_x2] ON [dbo].[not_log]
(
[cntxt_obj] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [not_log_x3] Script Date: 12/08/2009 11:35:57 ******/
CREATE NONCLUSTERED INDEX [not_log_x3] ON [dbo].[not_log]
(
[tenant] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
December 8, 2009 at 12:19 pm
Your table is missing an important index; it needs the following:
CREATE NONCLUSTERED INDEX [not_log_x4] ON [dbo].[not_log]
(
[nlh_hdr] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Use the above code and try running your query one more time.
Edited: Corrected a grammatical mistake
Blog -- LearnSQLWithBru
Join on Facebook Page Facebook.com/LearnSQLWithBru
Twitter -- BruMedishetty
December 8, 2009 at 12:26 pm
That query has to run as a table or index scan, since a predicate with a leading wildcard is not SARGable. Since scans can be parallelised, SQL does so, splitting the query across all the CPUs. It wants to finish the query as fast as possible, so it runs the CPUs at full 'power', hence 100% CPU for the duration of the query.
There would be no logic in SQL taking only, say, 50% of the CPU for the query, as that would just mean the query takes twice as long.
A covering index may be useful here, but it'll still execute as an index scan over 13 million rows. That's a lot of pages to read.
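For illustration, a covering index for the posted query might look something like the sketch below (the index name is made up; the clustering key [id] is carried in the index automatically):
-- Sketch: covers the posted query so the scan reads a narrow index
-- instead of the whole clustered index. Index name is hypothetical.
CREATE NONCLUSTERED INDEX ix_not_log_hdr_covering
ON dbo.not_log (nlh_hdr)
INCLUDE (nlh_start, nlh_status);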
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 8, 2009 at 12:38 pm
Thanks GilaMonster... I was basically looking for someone to confirm what I already knew and you spelled it out perfectly.
Managers just don't like hearing that there isn't much you can do with a query like that.
Thanks again.
December 8, 2009 at 12:52 pm
Gail, I have never worked on such a query, with leading wildcards on a varchar(255) column and 13 million records.
As this is something I can't simulate and try to learn from, can you let me know whether the index I suggested would have any impact at all?
Trying to learn from this situation.
Blog -- LearnSQLWithBru
Join on Facebook Page Facebook.com/LearnSQLWithBru
Twitter -- BruMedishetty
December 8, 2009 at 1:11 pm
Bru Medishetty (12/8/2009)
Gail, I have never worked on such a query with leading wildcards on a varchar(255) column and 13 million records. As this is something I can't simulate and try to learn from, can you let me know whether the index I suggested would have any impact at all?
Why can't you simulate it? Set up a 1 million row test table, stick a char or varchar column in there, and test. The full 13 million isn't necessary:
-- Build a 1-million-row test table by cross joining a system view to itself
SELECT TOP (1000000) b.object_id, a.name
INTO TestingBigTable
FROM msdb.sys.columns a CROSS JOIN msdb.sys.columns b
GO
-- Index the string column so we can see whether the optimiser will use it
CREATE INDEX idx_isthisusable ON TestingBigTable (name)
GO
-- Leading-wildcard search: check the execution plan this produces
SELECT * FROM TestingBigTable WHERE name LIKE '%count%'
It's not very likely that the index will be used. The predicate is not SARGable (because of the leading wildcard), so at best it's an index scan. The index you suggested is not covering; hence SQL will only use it if it thinks the query will affect a small percentage of the table (small being around 0.5%). With leading and trailing wildcards it's hard to get a good cardinality estimate, so the optimiser will err on the side of caution and do a table scan.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 8, 2009 at 1:23 pm
Thanks Gail !
I appreciate your time and the clarification.
When I said I couldn't simulate it, I meant that I couldn't do so immediately at my workplace. I'll spend some time at home and try it out.
Thanks again.
Blog -- LearnSQLWithBru
Join on Facebook Page Facebook.com/LearnSQLWithBru
Twitter -- BruMedishetty
December 14, 2009 at 2:18 am
There are plenty of optimizations available here. The best solution will depend on your particular circumstances and priorities, but there's certainly no need to 'give up' and just put this in the too-hard basket.
Potential solutions include:
1. Creating a full-text index on the column concerned (see the first sketch after this list).
2. If you don't need Unicode, use VARCHAR or CHAR instead. Not only will you halve the storage size, but non-Unicode data is faster to compare, given optimal collation settings.
3. So, if Unicode can be avoided, use a SQL collation, or better yet a BIN or BIN2 collation, for the column concerned, and explicitly collate the comparison string '%Search%' to the same collation (second sketch after this list). Because of the complex linguistic rules applied when comparing strings under Windows collations (which are always used for Unicode data), comparisons typically use 20-40 times as much CPU as with a BIN or SQL collation. You will be amazed.
4. If there are a relatively small number of commonly searched strings, create a computed column to indicate whether each string exists, and index that (third sketch after this list).
5. Consider a non-clustered index on the search column - again, appropriately collated. It may be optimal to make this a non-covering index and force queries to scan it using an index hint. The optimizer all too frequently decides that table scans are preferable to key lookups, even when that is clearly not the case.
6. Again if the number of searched strings is small, consider defining indexed views based on the appropriate LIKE conditions.
7. Use READ UNCOMMITTED if you can tolerate dirty reads. It is much more CPU-efficient and avoids considerable locking overhead.
8. Your system clearly has plenty of memory and CPU - the query bottlenecks on CPU, not on I/O fetching pages, so it doesn't make much sense to talk about optimizing for I/O. This is one of the mistakes the optimizer makes when rejecting key lookups.
9. Once you have finished applying the above rules, consider reducing the degree of parallelism, or removing it entirely. Parallelism is a marvellous thing, but it is not necessarily the best thing here.
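To illustrate point 1, a minimal full-text setup might look like the sketch below. The catalog name is made up, and the existing primary key serves as the required unique key index. One caveat: CONTAINS matches whole words or prefixes, not arbitrary substrings the way LIKE '%survey%' does, so check that word-level matching is acceptable.
-- Sketch: full-text index on the searched column (catalog name is hypothetical)
CREATE FULLTEXT CATALOG ftcat_not_log AS DEFAULT;
CREATE FULLTEXT INDEX ON dbo.not_log (nlh_hdr)
KEY INDEX XPKnot_log; -- full-text requires a unique key index
-- Word search rather than a substring match:
SELECT nlh_start, nlh_status, id, nlh_hdr
FROM dbo.not_log
WHERE CONTAINS(nlh_hdr, N'survey');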
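For point 3, a sketch of collating both sides of the comparison to a binary collation. The specific collation here is an assumption; pick one appropriate to your data, and note that a BIN2 comparison is case- and accent-sensitive, so 'Survey' would no longer match 'survey'.
-- Sketch: binary-collated comparison (collation choice is an assumption)
SELECT nlh_start, nlh_status, id, nlh_hdr
FROM dbo.not_log
WHERE nlh_hdr COLLATE Latin1_General_BIN2
LIKE N'%survey%' COLLATE Latin1_General_BIN2;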
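And for point 4, a sketch of an indexed computed column flagging one commonly searched string. The column and index names are made up, and the application's query would have to reference the flag (WHERE hdr_has_survey = 1) for the index to help.
-- Sketch: flag column for one common search term (names are hypothetical)
ALTER TABLE dbo.not_log
ADD hdr_has_survey AS CASE WHEN nlh_hdr LIKE N'%survey%' THEN 1 ELSE 0 END;
CREATE NONCLUSTERED INDEX ix_not_log_hdr_has_survey
ON dbo.not_log (hdr_has_survey);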
Paul
December 15, 2009 at 9:58 am
Good stuff there Paul (as usual).
I would add that the OP can easily limit how hard this (and other parallelised queries) hits the box by setting MAXDOP, either for the query specifically or for the server in general (or both) - see the sketch below. Very few servers (typically only those dedicated to data warehousing) should have MAXDOP = 0 at the server level.
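Something like this, as a sketch (the value 4 is arbitrary - tune it for your workload):
-- Per-query cap via a hint:
SELECT nlh_start, nlh_status, id, nlh_hdr
FROM dbo.not_log
WHERE nlh_hdr LIKE N'%survey%'
OPTION (MAXDOP 4);
-- Server-wide cap:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;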
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
December 21, 2009 at 12:37 pm
I have been told by Microsoft to reduce the number of usable CPUs to 8 on a box with more than 8 physical processors. SQL 2005 is greedy when it comes to more than 8 physical CPUs and will grab all of them, draining resources. This was after opening a case with them regarding certain (well-optimized, for that matter) queries on a 16-CPU box. Try adding the following hint at the bottom of your search query: OPTION (MAXDOP 8)
Josef Richberg
2009 Exceptional DBA
http://www.josef-richberg.squarespace.com
http://twitter.com/sqlrunner
December 23, 2009 at 11:26 pm
Paul White (12/14/2009)
...Because of the complex linguistic rules applied when comparing strings using Windows collations (which are always used for unicode data), comparisons typically use 20-40 times as much CPU as a BIN or SQL collation. You will be amazed....
Really? I'm not doubting you - just after more info on this point. I knew there was a perf difference between Windows and SQL collations for comparisons, but an order of magnitude? Do you have any references with more info on the costs involved?
Cheers!
Jacob
December 24, 2009 at 8:47 am
There was a massive thread here on sqlservercentral.com recently (dealing with string splitting, IIRC) that had some amazing benchmarks involving all kinds of solutions. One of the things discovered was that collation had a very dramatic effect on performance. Actually, it was the removal of spaces that the thread concerned.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
December 27, 2009 at 3:10 am
Jacob Luebbers (12/23/2009)
Paul White (12/14/2009)
...Because of the complex linguistic rules applied when comparing strings using Windows collations (which are always used for Unicode data), comparisons typically use 20-40 times as much CPU as a BIN or SQL collation. You will be amazed....
Really? I'm not doubting you - just after more info on this point. I knew there was a perf difference between Windows and SQL collations for comparisons, but an order of magnitude? Do you have any references with more info on the costs involved?
Hey Jacob, sure... not sure why I left it out of my original post, but you can find the details here: http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=512459
December 27, 2009 at 3:18 am
TheSQLGuru (12/24/2009)
there was a massive thread here on sqlservercentral.com recently dealing with string splitting IIRC that had some amazing benchmarks involving all kinds of solutions. One of the things discovered was that collation had a very dramatic effect on performance. Actually it was removal of spaces that the thread concerned.
Yes; and for anyone interested, this was the thread: http://www.sqlservercentral.com/Forums/Topic819042-203-7.aspx
December 27, 2009 at 3:50 am
I believe that my next step would be a serious conversation with the 3rd party vendor... first, for having a design that requires such a query and, second, for apparently putting it in the master database. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.