April 17, 2008 at 8:37 am
I need to look at performance issues with a client's SQL 2005 database.
It's a reverse logistics company; they handle returns for medical institutions.
Their SCAN table has 8.5 million rows.
There are 9 indexes on this table:
create table scan(
licplate char(10) collate SQL_Latin1_General_CP1_CI_AS not null constraint PK_scan primary key nonclustered,
boxid char(10) collate SQL_Latin1_General_CP1_CI_AS not null,
din char(8) collate SQL_Latin1_General_CP1_CI_AS,
upc char(20) collate SQL_Latin1_General_CP1_CI_AS,
pkgsize int,
client char(3) collate SQL_Latin1_General_CP1_CI_AS,
lotnumber char(15) collate SQL_Latin1_General_CP1_CI_AS,
expiration datetime,
quantity int,
fullness decimal(4,2),
reason char(3) collate SQL_Latin1_General_CP1_CI_AS,
casequantity smallint,
crxnarc char(1) collate SQL_Latin1_General_CP1_CI_AS,
cage char(1) collate SQL_Latin1_General_CP1_CI_AS,
authcode char(20) collate SQL_Latin1_General_CP1_CI_AS,
sortid char(10) collate SQL_Latin1_General_CP1_CI_AS,
handling char(3) collate SQL_Latin1_General_CP1_CI_AS,
exception smallint,
dest char(3) collate SQL_Latin1_General_CP1_CI_AS,
recallnum int,
disposition char(3) collate SQL_Latin1_General_CP1_CI_AS,
status char(3) collate SQL_Latin1_General_CP1_CI_AS,
value decimal(14,2),
valuepct smallint,
process_line smallint,
process_operator char(12) collate SQL_Latin1_General_CP1_CI_AS,
process_date datetime,
process_time datetime,
sort_line smallint,
sort_operator char(12) collate SQL_Latin1_General_CP1_CI_AS,
sort_date datetime,
sort_time datetime,
credit datetime,
invoicenum int,
invoicedate datetime,
edi_notified char(1) collate SQL_Latin1_General_CP1_CI_AS,
is_invoiced char(1) collate SQL_Latin1_General_CP1_CI_AS,
bulksort char(1) collate SQL_Latin1_General_CP1_CI_AS
)
go
create clustered index scanndx6 on scan(din)
create index scanndx2 on scan(boxid,client)
create index scanndx3 on scan(sortid)
create index scanndx4 on scan(credit)
create index scanndx5 on scan(process_date)
create index scanndx7 on scan(recallnum)
create index Boxid_idx on scan(boxid)
create index scanndx8 on scan(bulksort)
create index Bulksort_IDX on scan(client,dest,disposition)
How does it look overall?
Any obvious potential issues with the indexes?
Thanks,
Robert
April 17, 2008 at 9:00 am
There is nothing glaring, but without seeing the data and the queries being run against the table, there is little that can be done.
I find it interesting that the "Bulksort_IDX" index is not the index on the bulksort column, but that is just a naming issue and would not impact performance unless someone specified the index in a hint by mistake.
create index scanndx8 on scan(bulksort)
create index Bulksort_IDX on scan(client,dest,disposition)
That index is a bit suspect on its own, though. The column sounds like it holds a boolean flag, so unless the data is heavily skewed toward one value and the queries look for the rare one, the index may not be terribly useful.
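A quick way to sanity-check that (just a sketch; I'm assuming bulksort holds something like a Y/N flag) is to look at the value distribution:
-- How selective is bulksort really?
select bulksort, count(*) as row_count
from scan
group by bulksort
order by row_count desc
-- If one value covers nearly all 8.5 million rows and that's the value
-- the queries ask for, scanndx8 probably won't get used.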
April 17, 2008 at 9:01 am
Are they reporting performance issues? If so, what?
What is the usage pattern for the table: inserts vs. updates vs. deletes?
How do they mostly pull data? Are they pulling sets or one row at a time?
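One rough way to answer the read-versus-write question yourself on 2005 (a sketch only; the counters reset when the instance restarts) is sys.dm_db_index_usage_stats:
-- Reads vs. writes per index on the scan table since the last restart
select i.name as index_name,
       s.user_seeks, s.user_scans, s.user_lookups,  -- read activity
       s.user_updates                                -- write activity (insert/update/delete)
from sys.dm_db_index_usage_stats s
join sys.indexes i
    on i.object_id = s.object_id
   and i.index_id = s.index_id
where s.database_id = db_id()
  and s.object_id = object_id('dbo.scan')
order by s.user_updates desc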
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
April 17, 2008 at 9:09 am
What we'd really need would be to know what the most common selects are for the table, and to see some execution plans for them. With that, the table definition, and the indexes, we can tell you more about good/bad/indifferent on the indexes. (Posting the table is a good start, though.)
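For example, something like this (the query itself is a made-up illustration against the scan table, not from the real app) shows the I/O and timing for a candidate select; "Include Actual Execution Plan" (Ctrl+M) in Management Studio captures the plan itself:
-- Report logical reads and elapsed time for a sample query
set statistics io on;
set statistics time on;

select boxid, client, quantity     -- hypothetical query for illustration only
from scan
where din = '12345678';

set statistics time off;
set statistics io off;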
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
April 17, 2008 at 9:59 am
Thank you all, guys.
Michael / Mat / GSquared.
I just started the investigation.
They have a VB application that generates Excel reports from this database.
I will find out soon "what the most common selects are for the table".
By the way, is there any special technique to do that quickly?
I was planning to go to syscomments to find those selects:
select name, xtype,
    case sysobjects.xtype
        when 'C'  then 'CHECK constraint'
        when 'D'  then 'Default or DEFAULT constraint'
        when 'F'  then 'Foreign Key'
        when 'FN' then 'Scalar function'
        when 'P'  then 'Stored Procedure'
        when 'PK' then 'PRIMARY KEY constraint'
        when 'S'  then 'System table'
        when 'TF' then 'Function'
        when 'TR' then 'Trigger'
        when 'U'  then 'User table'
        when 'UQ' then 'UNIQUE constraint'
        when 'V'  then 'View'
        when 'X'  then 'Extended stored procedure'
        else 'Unknown object'
    end as object_type
from sysobjects
where id in (select id from syscomments where text like '% SCAN %')
order by xtype
and then go through the stored procedures one by one to see those selects.
Is there any quicker way?
Thanks,
Robert
April 17, 2008 at 10:25 am
If everything is being run as procedures or views, that will work, but you may benefit from just running Profiler to see what people are actually running.
April 17, 2008 at 11:21 am
In SQL 2005, sys.sql_modules replaces syscomments (syscomments is kept for backwards compatibility), so you might want to use that instead. (It has some advantages over syscomments.)
Also, make sure you don't have spaces around the table name in the LIKE '%SCAN%' part of the query. It looks like your script does have them; if so, it won't match things like "dbo.SCAN" or "SCAN.Column1", which you would want to catch.
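For example, a sketch along these lines finds every module whose definition mentions the table:
-- Procedures, views, functions and triggers that reference SCAN
select object_name(m.object_id) as referencing_object,
       o.type_desc
from sys.sql_modules m
join sys.objects o
    on o.object_id = m.object_id
where m.definition like '%SCAN%'
order by o.type_desc, referencing_object;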
As far as finding the most common queries, use Profiler to find that. It will also tell you how long they ran, which is very advantageous in figuring out which, if any, need to be tuned.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
April 17, 2008 at 11:55 am
You've got one redundant index. The rest look OK. Depending on the queries, they could maybe be widened.
create index scanndx2 on scan(boxid,client)
create index Boxid_idx on scan(boxid) -- redundant
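Assuming nothing references it by name in an index hint, the narrower one can simply be dropped, e.g.:
-- Boxid_idx duplicates the leading column of scanndx2
drop index Boxid_idx on scan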
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 17, 2008 at 12:11 pm
Thank you, guys.
I will turn on SQL Profiler tomorrow morning.
It's a production database, but it's a small company; I think they might have 15-20 simultaneous connections, not more.
It's not going to cause any problems, is it?
One more question.
In SQL Profiler, should I use the templates or just select the events myself?
I've never used it heavily and still don't know if I'm doing it correctly.
I'm interested in long-running queries most of all.
There might be embedded SQL in the VB program as well; I haven't had time yet to go through the VB code.
So it's not only stored procedures.
Thanks,
Robert
April 17, 2008 at 12:34 pm
Guys,
One more SQL Profiler question.
"Duration" in the results table: is it the duration of the executed SQL in milliseconds?
I think I'll have to become a knowledgeable SQL Profiler user in the next few months. What would you recommend reading? Maybe a good book or website?
Thanks,
Robert
April 17, 2008 at 12:38 pm
One more thing - depending on what you're returning via each index, you might care to add extra columns in the INCLUDE section of your indexes... It might save you some cycles if you don't need bookmark lookups for the extra fields...
And yes - it's just a specific way to "widen" an index...
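For example (the included columns here are purely illustrative, not a recommendation):
-- Same key as scanndx2, but covering quantity and status so queries
-- returning those columns avoid the bookmark lookup back to the clustered index
create index scanndx2_covering on scan(boxid, client)
    include (quantity, status)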
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
April 17, 2008 at 12:43 pm
Shouldn't cause any problems, unless you start tracing the very frequent events (object access, lock acquired and such), but there's no need for that here.
The standard template is a good place to start, though the audit login, audit logout and existing connection events are probably not required.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 17, 2008 at 12:49 pm
riga1966 (4/17/2008)
Guys,
One more SQL Profiler question.
"Duration" in the results table: is it the duration of the executed SQL in milliseconds?
It's in microseconds in 2005, but the Profiler GUI will probably display it in milliseconds; that depends on the option set under Tools > Options.
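If you save the trace to a file, you can also read it back and convert it yourself (the file path below is just a placeholder):
-- Duration comes back in microseconds on 2005; divide for milliseconds
select TextData,
       Duration / 1000.0 as duration_ms,
       CPU, Reads, Writes
from sys.fn_trace_gettable('C:\traces\scan_trace.trc', default)
order by Duration desc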
I think I'll have to become a knowledgeable SQL Profiler user in the next few months. What would you recommend reading? Maybe a good book or website?
I like the Inside SQL Server 2005 series, but they are not beginner books.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 17, 2008 at 12:52 pm
Guys,
Please see the attached Excel file; it's an extract from my SQL Profiler output.
Question: why are CPU / Reads / Writes / Duration sometimes NULL?
Thanks,
Robert
April 17, 2008 at 12:58 pm
Are those from the statement-started events?
Those are logged when the statement begins; since it hasn't finished yet, there's no duration, reads or CPU to log. Those values are logged on the statement/RPC completed events.
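If you load the trace into a table, filtering to rows that actually carry the metrics (a rough sketch; same placeholder path as before) drops the starting events:
-- Keep only the *Completed events, which have Duration/CPU/Reads/Writes
select e.name as event_name,
       t.TextData, t.Duration, t.CPU, t.Reads, t.Writes
from sys.fn_trace_gettable('C:\traces\scan_trace.trc', default) t
join sys.trace_events e
    on e.trace_event_id = t.EventClass
where t.Duration is not null
order by t.Duration desc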
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability