July 14, 2006 at 8:18 am
Hi guys,
I've got a stored procedure I've written against a table that came with our ERP system (E-Synergy by Exact), and it is terrible. I can't figure out how to make this run any faster. There are HUGE issues with the DB design and it makes me want to cry. I'm trying to write queries for reports that end up taking several minutes... yes, MINUTES, to run a simple report. Here's the query:
SELECT
ISNULL(prProject.parentProject,prProject.projectnr) as projectNumber,
prProject.description as projectdescription,
gbkmut.datum as RealizationDate,
gbkmut.artcode as Hour_Item,
gbkmut.aantal as Actual,
Items.SalesPackagePrice as Price,
BacoDiscussions.body as notes,
humres.fullname,
CASE When prProject.parentProject is not null then
substring(prProject.ProjectNR, Len( LTrim(prProject.parentProject) )+2, (LEN(prProject.ProjectNR) - LEN(prProject.parentProject)) )
End as projectGroupCode,
CASE When prProject.parentProject is null then
prProject.Description
ELSE
(Select parPro.Description from prProject parPro where parPro.ProjectNr = prProject.ParentProject)
END as parentDescription,
prProject.CostCenter,
cicmpy.cmp_name as customerName,
humRes_1.FullName as ResourceManager,
Humres_2.fullName as projectManager,
humres.costCenter as EmployeeCostCentre,
prProject.CostCenter as projectCostCentre,
gbkmut.ID
FROM gbkmut INNER JOIN
dbo.Humres on humRes.res_id = gbkmut.res_ID INNER JOIN
dbo.prProject on prProject.ProjectNr = gbkmut.project INNER JOIN
dbo.Humres humres_1 on HumRes_1.res_id = HumRes.repto_id INNER JOIN
dbo.HumRes humres_2 on HumRes_2.res_id = prProject.responsible INNER JOIN
cicmpy on cicmpy.cmp_wwn = prProject.IDCustomer INNER JOIN
dbo.Items on Items.ItemCode = gbkmut.artcode LEFT OUTER JOIN
BACODiscussions on BACODiscussions.ID = gbkmut.DocumentID
WHERE items.type = 'L'
AND gbkmut.transtype = 'N'
AND gbkmut.transsubtype in ('B', 'L')
The biggest issue I see is the BACODiscussions.body column, which is a TEXT datatype and isn't always there (hence the outer join). Without body it's an 8-second query; with it, it's almost 6 minutes.
Any help you can give is appreciated.
July 15, 2006 at 10:38 am
I guess you have an appropriate index on the BACODiscussions.ID column, right?
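For what it's worth, a quick way to check (sp_helpindex is a standard system procedure; the table name is just taken from the query above):
-- List the indexes currently defined on the discussions table
EXEC sp_helpindex 'BACODiscussions'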
July 16, 2006 at 8:11 am
The ID has a clustered index on it.
July 17, 2006 at 3:19 am
Hi Andy,
Since, as you said, the BACODiscussions.body column may not always be there, one suggestion would be to:
1. Remove the table "BACODiscussions" from the join.
2. Insert your result set into a temp table, then update the notes column only in cases where a BACODiscussions.body exists; see the sketch below.
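A minimal sketch of the idea, with the select list trimmed right down (the varchar(8000) cast is an assumption on my part, and will truncate any note longer than 8000 characters):
-- Step 1: build the result set without touching the TEXT column
SELECT gbkmut.ID, gbkmut.DocumentID, gbkmut.datum AS RealizationDate,
       CAST(NULL AS varchar(8000)) AS notes
INTO #report
FROM gbkmut
INNER JOIN dbo.Items ON Items.ItemCode = gbkmut.artcode
WHERE Items.[type] = 'L'
AND gbkmut.transtype = 'N'
AND gbkmut.transsubtype IN ('B', 'L')
-- Step 2: fill in notes only where a discussion row actually exists
UPDATE r
SET r.notes = CAST(d.body AS varchar(8000))
FROM #report r
INNER JOIN BACODiscussions d ON d.ID = r.DocumentID
SELECT * FROM #report
DROP TABLE #report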
Regards,
Peldin Fernandes
July 17, 2006 at 3:31 am
Alternatively, you could try an inner select for the "text" field, with a where clause like "WHERE bacoDiscussions.body IS NOT NULL AND bacoDiscussions.ID = <your criteria>".
Hopefully that eliminates the NULL values that are killing your index usage.
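Something along these lines, as a sketch (note the OP's follow-up below: SQL Server may refuse the text column here depending on how the inner select is written, so treat this as untested):
SELECT g.ID, g.datum, d.body AS notes
FROM gbkmut g
LEFT OUTER JOIN (
    -- pre-filter: only discussion rows that actually have a body
    SELECT ID, body
    FROM BACODiscussions
    WHERE body IS NOT NULL
) d ON d.ID = g.DocumentID
WHERE g.transtype = 'N'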
July 17, 2006 at 6:43 am
Hi Guys,
Thanks for the replies. I tried the temp-table method, but with a table variable instead of an actual temp table, since that's supposed to be faster. It was a little faster, but not by much.
I can't do the subquery, because the text field is a 'text' datatype and that is not allowed.
I guess this is why you don't design a database like crap.
PS. If your company is thinking of buying E-Synergy by Exact... don't.
July 17, 2006 at 1:57 pm
The temp table is better; table vars are good for tiny record sets only (say, fewer than 40 records, and narrow ones at that). The reason a var is faster is because it's stored in memory... until it gets too big and SQL needs to treat it like a temp table anyway.
Also try taking out the UPDATE, which is costly, and simply return the SELECT statement (unless you have more processing to do); see the sketch below.
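Building on the temp-table sketch above (the #report table and the varchar cast are assumptions carried over from it, not the actual procedure), the UPDATE step can be dropped entirely by resolving the notes at output time, where the join now only touches the already-filtered rows:
-- Skip the UPDATE step: join for the notes in the final SELECT instead
SELECT r.*, CAST(d.body AS varchar(8000)) AS notes
FROM #report r
LEFT OUTER JOIN BACODiscussions d ON d.ID = r.DocumentID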
b
July 17, 2006 at 4:21 pm
Table vars are also faster because inserts/deletes/updates to table vars are not logged.
Lynn
July 18, 2006 at 9:10 am
Sorry, but both temp tables and table vars start in memory; depending upon resources (and data set size), each may or may not spool out into a tempdb disk object. Table variables are logged, and as said before they are not generally quicker; the two fulfil different needs and uses. If your data set is of a reasonable size, then temp tables will be quicker. BTW, make sure tempdb is on a dedicated, fast array for best performance.
The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
July 18, 2006 at 9:23 am
I guess you have an index on items.type? Restricting this selection might help... but I bet items.type on its own is a poor index, so you could try a composite index on items.itemcode and items.type, something like the sketch below. I usually try creating composite indexes as an aid to tuning. Does the query plan tell you anything?
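A sketch of what I mean (the index name is invented, and whether ItemCode or type should lead depends on your data, so check the plan afterwards):
-- Composite index covering both the join column and the WHERE filter
CREATE NONCLUSTERED INDEX IX_Items_ItemCode_Type ON dbo.Items (ItemCode, [type])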
I once had a similar problem with a text column and converted it to varchar (I discovered there wasn't actually anything over 250 chars in the text column), but I guess this option isn't open to you. It's worth checking, though; see below. You might also try creating a sub-table of just the matching items from the Items table; I suppose it depends on the number of rows.
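The feasibility check is a one-liner (DATALENGTH returns bytes, which for a text column is one per character):
-- How long is the longest body, really?
SELECT MAX(DATALENGTH(body)) AS max_bytes FROM BACODiscussions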
So many SQL Server apps are so poor, aren't they? I just broke one today by replacing the blank sa password with a proper password. Waste of time, it's back to blank again!!
The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
July 18, 2006 at 9:40 am
Colin,
Try the following; you will see that the table variable still has data after the rollback. This can only mean that the insert to the table var inside the begin transaction and rollback statements is not logged. If it were, it would have been rolled back as well.
-- Permanent table: inserts to this ARE part of the transaction
create table Table1 (
    Part char(10) constraint pk_table1 primary key,
    Descr varchar(20),
    Sales char(3),
    Prod char(3)
)
-- Table variable with the same shape
declare @vTable1 table (
    Part char(10),
    Descr varchar(20),
    Sales char(3),
    Prod char(3)
)
begin transaction
insert into Table1 values ('TEST', 'Test Part', 'SG1', 'PG2')
insert into @vTable1 values ('TEST', 'Test Part', 'SG1', 'PG2')
-- Both selects return the row at this point
select * from Table1
select * from @vTable1
rollback
-- After the rollback, Table1 is empty but the table variable still has its row
select * from Table1
select * from @vTable1
drop table Table1
Lynn
July 19, 2006 at 4:52 am
Yeah, sorry, what I intended to say was that the data is still stored the same way, i.e. if the table spills over into tempdb the data is still written to disk. I hadn't actually thought about transactions and table variables; I guess that could lead to issues.
The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
July 19, 2006 at 7:05 am
Actually, I've used this to do some Test-Driven Development in SQL Server. It's all been manual so far, but it's let me capture updates without destroying the underlying data.
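Roughly this pattern, as a sketch reusing Table1 from the script above (assume it exists and holds the 'TEST' row; the variable name is mine):
begin transaction
-- exercise the code under test
update Table1 set Descr = 'Renamed Part' where Part = 'TEST'
-- capture the 'after' state into a table variable, which survives the rollback
declare @captured table (Part char(10), Descr varchar(20))
insert into @captured select Part, Descr from Table1 where Part = 'TEST'
-- undo the data changes...
rollback
-- ...then check the captured results against expectations
select * from @captured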