October 14, 2002 at 5:40 pm
I have a SQL Server that runs a Goldmine database. It runs a process that synchronizes with the data input database located in another building. We added to this server a reporting database that tracks the complaints logged in the Goldmine database. Initially the application ran fine; now it seems that it needs "more power", though I'm not sure what..
I've added counters, and for the most part the system is fine. In fact, whenever a user opens the application it executes a procedure that brings back a listing (short summary) of the complaints assigned to that user. The first user to call the application is usually penalized with degraded performance of up to 55 seconds. I think it's actually less now, but I haven't timed it this week. I've run the Index Tuning Wizard, which tells me there will be 0% improvement by applying its suggestions; nonetheless I apply them, then trace some more and re-run the wizard, which then tells me there are no new suggestions.
Watching Performance Monitor I find that a full scan occurs every 5 seconds on average, spiking up to 300; the average for this counter is 15. Index Scans is max 31369, average 180.
The server has a RAID array set up with part of the RAID 0 drive partitioned into C: and D:, and the swap file on the D: drive. The space on D: is 29 GB, and the server has 1 GB of RAM, with 768 MB fixed-allocated to SQL Server 7 and 256 MB allocated to the OS (Win2K Server).
I've taken many of my sprocs and converted what I can into views. I'm still reading up on what an indexed view is and how to apply it, but any help will be greatly appreciated!!!
Thanks!
-Francisco
October 14, 2002 at 6:55 pm
Indexed views are a SQL 2K item, not available in SQL 7. Also, views are slower than sprocs for the most part since they cannot take advantage of stored execution plans; a view called from within a sproc, however, will.
Now, as for what you can do in SQL 7 without changing your configuration or needing new hardware:
Rebuild indexes
Run sp_updatestats
Run DBCC UPDATEUSAGE
Check file growth over a specific period of time, say a week or a month, then based on how much it changes set file autogrowth to a whole-MB value slightly larger than what it grew during that timeframe. The reason is that file growth takes time, and if it happens often you will see the server slow down; the more often, the slower. The same goes for the transaction log: you want to find a size that will handle all the work in a given period so you don't have to resize it or force-shrink it. This is a bit harder to do.
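As a sketch of the mechanics in SQL 7 (the logical file names here are assumptions; check sp_helpfile for the real ones):

```sql
-- Switch from the default percentage growth to fixed whole-MB increments,
-- sized just above the observed weekly/monthly growth, so growth events
-- are infrequent and predictable.
ALTER DATABASE HAAScc
    MODIFY FILE (NAME = HAAScc_Data, FILEGROWTH = 50MB)
ALTER DATABASE HAAScc
    MODIFY FILE (NAME = HAAScc_Log, FILEGROWTH = 20MB)
```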
Reevaluate index fill factors. If you know the data will remain in order and always insert at or near the end, consider 95% or better. However, if data may often insert into early parts of the table, consider a lower fill factor for that index (note this is set individually per index). The reason is that page splits occur when a data page becomes full and the last data is pushed out to the next page. As all pages become full the number of splits will increase, and performance will falter. Keep these documented so when you rebuild the indexes you can rebuild with the proper fill factors to limit page splits.
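In SQL 7 the fill factor is applied when the index is rebuilt; for example, with DBCC DBREINDEX (the table name and 90% value here are only illustrative):

```sql
-- Rebuild indexes on a table with a 90% fill factor.
-- An empty index name ('') means all indexes on the table;
-- choose the percentage per index based on the insert pattern described above.
DBCC DBREINDEX ('tbl_ccNotes', '', 90)
```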
Also, even though an index may have been suggested by the Tuning Wizard or have always been there, consider getting rid of some if you have a lot. The more you have, the longer INSERTs, UPDATEs, and DELETEs will take. You may only occasionally need some, and other indexes may actually take over for the missing ones, which can sometimes be worse, the same, or even better.
Evaluate your queries and their execution plans. Look for table scans and other low-performance operations. This may require some playing around to get familiar with, or just ask others for help; it took me a year to get used to it myself. Also, try alternative ways of writing the queries. You might find a better way; if so, look at its plan and compare.
Now for the hardware side. You might be experiencing bottlenecks on your hard drives, as HDs are now the slowest-performing component of a server. If you have the option to get a new server, consider it and push for it. 29 GB is quite a bit of data on a single drive.
If you have options, these are your best-performing ones. Get a multi-array RAID 10 setup with at least 10 GB per array, on separate array cards or channels. Then split the database into as many parts as you have arrays to get maximum performance from the IO subsystem. Also, separate the log file from the location of the data files. With your current configuration the data files, the log files, the swap file, and the temp database are all fighting each other, so HD contention is most likely very high on your server.
Other hardware options may be needed, such as multiple NICs, additional memory, and additional CPUs (and possibly faster ones, though remember that two CPUs of slightly lower speed can sometimes outperform a single faster one).
Hardware is a bit harder to size right but most companies have tools or will help you get what you need for your situation if you ask.
Finally, you may want to take the server offline for a bit and stop SQL. Then run the Windows 2000 DEFRAG utility to clean up the non-contiguous files that have most likely been created by size changes in the DB, log, and swap files.
Also check the OS settings to make sure Background Services is given higher priority on the server, and that unneeded memory consumers such as 32-bit color, showing icons in all possible colors, Windows effects, and other visual extras are turned off. This is a server, after all, and no one needs to care what the desktop looks like (this will squeeze a bit more out of memory, but memory is cheap, so go for it first).
Hope this helps. Others should jump in to add to this as always, so what I missed someone will add, and they'll fix anything I typed in error. Also, there have been many threads on this; use search or browse through the performance forum.
"Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
October 15, 2002 at 5:11 am
Do you have the db set to autoclose? What are your processor utilization, disk io %, disk queueing?
The first one might account for the slow first query, or it could be that all the data gets flushed out of cache. Can you post the query plan?
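For reference, autoclose can be checked (and, if on, turned off) with sp_dboption; the database name here is an assumption:

```sql
-- With two arguments, sp_dboption reports the option's current setting;
-- with a third argument it changes it. 'false' keeps the database open
-- between connections, avoiding a reopen penalty for the first user in.
EXEC sp_dboption 'HAAScc', 'autoclose'
EXEC sp_dboption 'HAAScc', 'autoclose', 'false'
```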
Andy
October 15, 2002 at 9:14 am
Thanks Antares686 and Andy Warren for your prompt replies! The server does have two CPUs (800 MHz Xeons, IIRC). I ran both sp_updatestats and DBCC UPDATEUSAGE; the output I got from UPDATEUSAGE is the following...
DBCC UPDATEUSAGE: sysindexes row updated for table 'sysobjects' (index ID 2):
USED pages: Changed from (2) to (4) pages.
RSVD pages: Changed from (2) to (4) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'sysindexes' (index ID 255):
USED pages: Changed from (21) to (22) pages.
RSVD pages: Changed from (24) to (25) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'syscolumns' (index ID 2):
USED pages: Changed from (2) to (7) pages.
RSVD pages: Changed from (2) to (7) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'tbl_ccAssigned' (index ID 1):
USED pages: Changed from (12) to (9) pages.
RSVD pages: Changed from (12) to (9) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'tbl_ccNotes' (index ID 1):
USED pages: Changed from (142) to (140) pages.
RSVD pages: Changed from (141) to (145) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'tbl_ccOriginalComplaint' (index ID 1):
USED pages: Changed from (4) to (3) pages.
RSVD pages: Changed from (4) to (3) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'tbl_ccStatus' (index ID 1):
USED pages: Changed from (26) to (18) pages.
RSVD pages: Changed from (43) to (31) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'tbl_Complaints' (index ID 3):
USED pages: Changed from (5) to (4) pages.
RSVD pages: Changed from (5) to (4) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'tbl_Complaints' (index ID 1):
USED pages: Changed from (7) to (9) pages.
RSVD pages: Changed from (7) to (9) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'tbl_ccFailCode' (index ID 1):
USED pages: Changed from (5) to (6) pages.
RSVD pages: Changed from (5) to (6) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'tbl_FailCode' (index ID 1):
USED pages: Changed from (4) to (3) pages.
RSVD pages: Changed from (4) to (3) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'tbl_Status' (index ID 1):
USED pages: Changed from (4) to (3) pages.
RSVD pages: Changed from (4) to (3) pages.
DBCC UPDATEUSAGE: sysindexes row updated for table 'dtproperties' (index ID 1):
USED pages: Changed from (4) to (3) pages.
RSVD pages: Changed from (4) to (3) pages.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
I don't know if this is useful information to you.
-Francisco
October 15, 2002 at 9:44 am
Andy,
Here are the counters you asked about. This system is not mission-critical or a 24x7 server, but it serves as a backup to the call center database because it receives all the updates they make on the other db. The database does, however, need to be up and running for our company to track the complaints of its customers and their resolution. They (Goldmine) are using their own version of replication for this. Even when all the other counters show no activity, I still see the Full Scan and Index Scan counters spiking every few seconds (about 5). Well, on to the counters...
Processor 0 = max 46, avg 15
Processor 1 = max 86, avg 17
Disk (C: D:) Read Time = max 2387, avg 725
Disk Queueing (_Total) = max 32, avg 1
(Please note that I've already done the steps mentioned by Antares686.)
I do want to state that INSERTs and UPDATEs seem to go very fast, but because the main screen for the users is the summary screen with their up-to-date listing, it has a timer on it that refreshes their screen every 60 minutes. Additionally, I had to give the screen a refresh for when they re-select it. I will gather timings today on how long the sprocs take to run. Also, all the tables in the reporting db have a fill factor of 0%; I can move this up to the suggested 95% to see if that improves performance at all.
BTW, I'm looking at the query execution plan, but it looks too convoluted. I don't know if it would be easy to decipher; all the text (when exported) looks run together... are there specific columns you want?
Thanks,
-Francisco
October 15, 2002 at 10:44 am
Antares686,
I'm not sure I know how to play with the fill factors. They are all currently defaulted to 0%. I tried moving some to 95%, but I did not see an improvement; in fact I saw a degradation, up 3 seconds. The system may have been busy rebuilding the indexes, and that may be why I had this problem... In a day my database will see about 10 to 20 entries. That is extremely low traffic. However, users may pull up the screen more than 30 times, because as they research a problem they revisit what they have already done to make sure all the bases are covered. This sounds to me like I could take advantage of a 50% fill factor. What do you think? How long does it take the server to process this (filling the index)?
quote:
Reevaluate index fill factors. If you know the data will remain in order and always insert at or near the end, consider 95% or better. However, if data may often insert into early parts of the table, consider a lower fill factor for that index (note this is set individually per index). The reason is that page splits occur when a data page becomes full and the last data is pushed out to the next page. As all pages become full the number of splits will increase, and performance will falter. Keep these documented so when you rebuild the indexes you can rebuild with the proper fill factors to limit page splits. Also, even though an index may have been suggested by the Tuning Wizard or have always been there, consider getting rid of some if you have a lot. The more you have, the longer INSERTs, UPDATEs, and DELETEs will take. You may only occasionally need some, and other indexes may actually take over for the missing ones, which can sometimes be worse, the same, or even better.
Evaluate your queries and their execution plans. Look for table scans and other low-performance operations. This may require some playing around to get familiar with, or just ask others for help; it took me a year to get used to it myself. Also, try alternative ways of writing the queries. You might find a better way; if so, look at its plan and compare.
-Francisco
October 15, 2002 at 11:30 am
Post the plan, we'll see if we can figure it out.
Andy
October 15, 2002 at 12:43 pm
quote:
Antares686, I'm not sure I know how to play with the fill factors. They are all currently defaulted to 0%. I tried moving some to 95%, but I did not see an improvement; in fact I saw a degradation, up 3 seconds. The system may have been busy rebuilding the indexes, and that may be why I had this problem... In a day my database will see about 10 to 20 entries. That is extremely low traffic. However, users may pull up the screen more than 30 times, because as they research a problem they revisit what they have already done to make sure all the bases are covered. This sounds to me like I could take advantage of a 50% fill factor. What do you think? How long does it take the server to process this (filling the index)?
quote:
Reevaluate index fill factors. If you know the data will remain in order and always insert at or near the end, consider 95% or better. However, if data may often insert into early parts of the table, consider a lower fill factor for that index (note this is set individually per index). The reason is that page splits occur when a data page becomes full and the last data is pushed out to the next page. As all pages become full the number of splits will increase, and performance will falter. Keep these documented so when you rebuild the indexes you can rebuild with the proper fill factors to limit page splits. Also, even though an index may have been suggested by the Tuning Wizard or have always been there, consider getting rid of some if you have a lot. The more you have, the longer INSERTs, UPDATEs, and DELETEs will take. You may only occasionally need some, and other indexes may actually take over for the missing ones, which can sometimes be worse, the same, or even better.
Evaluate your queries and their execution plans. Look for table scans and other low-performance operations. This may require some playing around to get familiar with, or just ask others for help; it took me a year to get used to it myself. Also, try alternative ways of writing the queries. You might find a better way; if so, look at its plan and compare.
-Francisco
Sorry; 0% sounds fine if you only have 10 to 20 rows a day. As for the DBCC UPDATEUSAGE report, it is just letting you know it is making those changes to sysindexes to reflect the actual data pages.
Use
SET SHOWPLAN_TEXT ON
GO
AQUERYHERE
GO
and post the output here so we can see what your execution plan is up to.
"Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
October 15, 2002 at 1:12 pm
I hope the format comes across well. I moved some of the index fill factors to 95% for tbl_ccStatus, tbl_ccAssigned, and tbl_ccNotes, which are where most of the data resides...
Should I move the 95% fill factor back to 0%? The longest-running sproc (plan shown below) took only 2 seconds to execute, so the time is better. It sounds as if it's the fill factor change that did this.
StmtText
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
EXEC dbo.stp_ListStatusAssigned
/****** Object: Stored Procedure dbo.stp_ListStatusAssigned Script Date: 10/9/2002 8:01:41 AM ******/
/****** Object: Stored Procedure dbo.stp_ListStatusAssigned Script Date: 10/4/2002 10:10:47 AM ******/
/****** Object: Stored Procedure
Select IWstatus.ccID, /* callComplaint.hfoid,*/ callComplaint.Company, AssignedStatus.ContactName as AssignedTo,
DateDiff(dd,DaysInCyle.ccDateTime,@CurrDate) DaysInCycle,
DateDiff (dd,AssignedStatus.ccDateTime, @currDate) as Days_INBI
StmtText
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|--Sort(ORDER BY:([].[ContactName] ASC, [CONTACT1].[Company] ASC, [Expr1027] ASC))
|--Compute Scalar(DEFINE:([Expr1026]=datediff(4, [Expr1004], [@CurrDate]), [Expr1027]=datediff(4, [Expr1008], [@CurrDate]), [Expr1028]=datediff(4, If ([Expr1011]=NULL) then [Expr1002] else [Expr1011], [@CurrDate]), [Expr1029]=Convert([Expr1008]
|--Hash Match(Left Outer Join, HASH:([tbl_ccAssigned].[ccID], [].[ContactID])=([tbl_ccNotes].[ccID], [].[ContactID]), RESIDUAL:([tbl_ccNotes].[ccID]=[tbl_ccAssigned].[ccID] AND [].[ContactID]=[].[ContactID]))
|--Nested Loops(Inner Join)
| |--Merge Join(Inner Join, MERGE:([A].[ccID])=([tbl_Complaints].[ccID]), RESIDUAL:([A].[ccID]=[tbl_Complaints].[ccID]))
| | |--Stream Aggregate(GROUP BY:([A].[ccID]) DEFINE:([Expr1004]=MIN([A].[ccDateTime])))
| | | |--Sort(ORDER BY:([A].[ccID] ASC))
| | | |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccStatus].[PK_tbl_ccStatus] AS [A]), WHERE:([A].[StatusID]=2))
| | |--Merge Join(Inner Join, MANY-TO-MANY MERGE:([A].[ccID])=([tbl_ccStatus].[ccID]), RESIDUAL:([A].[ccID]=[tbl_Complaints].[ccID] AND [Expr1008]=[A].[ccDateTime]))
| | |--Sort(ORDER BY:([A].[ccID] ASC))
| | | |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccAssigned].[PK_tbl_ccAssigned] AS [A]))
| | |--Merge Join(Inner Join, MERGE:([tbl_ccAssigned].[ccID])=([tbl_ccStatus].[ccID]), RESIDUAL:([tbl_ccAssigned].[ccID]=[tbl_ccStatus].[ccID]))
| | |--Stream Aggregate(GROUP BY:([tbl_ccAssigned].[ccID]) DEFINE:([Expr1008]=MAX([tbl_ccAssigned].[ccDateTime])))
| | | |--Sort(ORDER BY:([tbl_ccAssigned].[ccID] ASC))
| | | |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccAssigned].[PK_tbl_ccAssigned]))
| | |--Sort(ORDER BY:([tbl_ccStatus].[ccID] ASC))
| | |--Hash Match(Inner Join, HASH:([tbl_Complaints].[ccID])=([A].[ccID]))
| | |--Sort(DISTINCT ORDER BY:([tbl_Complaints].[ccID] ASC, [CONTACT1].[Company] ASC))
| | | |--Bookmark Lookup(BOOKMARK:([Bmk1024]), OBJECT:([TeleMarket].[dbo].[CONTACT1]))
| | | |--Nested Loops(Inner Join)
| | | |--Bookmark Lookup(BOOKMARK:([Bmk1022]), OBJECT:([TeleMarket].[dbo].[CONTHIST]))
| | | | |--Nested Loops(Inner Join)
| | | | |--Index Scan(OBJECT:([HAAScc].[dbo].[tbl_Complaints].[PK_tbl_Complaints]))
| | | | |--Index Seek(OBJECT:([TeleMarket].[dbo].[CONTHIST].[CNHRECID]), SEEK:([CONTHIST].[RECID]=[tbl_Complaints].[RECID]) ORDERED)
| | | |--Index Seek(OBJECT:([TeleMarket].[dbo].[CONTACT1].[CONTACC]), SEEK:([CONTACT1].[AccountNo]=[CONTHIST].[AccountNo]) ORDERED)
| | |--Nested Loops(Inner Join)
| | |--Hash Match Root(Inner Join, HASH:([tbl_ccStatus].[ccID])=([A].[ccID]), RESIDUAL:([Expr1002]=[A].[ccDateTime]))
| | | |--Hash Match Team(Aggregate, HASH:([tbl_ccStatus].[ccID]) DEFINE:([Expr1002]=MAX([tbl_ccStatus].[ccDateTime])))
| | | | |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccStatus].[PK_tbl_ccStatus]))
| | | |--Filter(WHERE:([A].[StatusID]=5 OR [A].[StatusID]=2))
| | | |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccStatus].[PK_tbl_ccStatus] AS [A]))
| | |--Index Seek(OBJECT:([HAAScc].[dbo].[tbl_Complaints].[PK_tbl_Complaints] AS [A]), SEEK:([A].[ccID]=[A].[ccID]) ORDERED)
| |--Filter(WHERE:([A].[AssignedID]=[].[ContactID]))
| |--Table Spool
| |--Remote Scan(SELECT * FROM DEV_REDe.dbo.tbl_Contacts)
|--Hash Match(Inner Join, HASH:([A].[UserID])=([].[ContactID]), RESIDUAL:([].[ContactID]=[A].[UserID]))
|--Hash Match Root(Inner Join, HASH:([tbl_ccNotes].[ccID])=([A].[ccID]), RESIDUAL:([Expr1011]=[A].[ccDateTime]))
| |--Hash Match Team(Aggregate, HASH:([tbl_ccNotes].[ccID]) DEFINE:([Expr1011]=MAX([tbl_ccNotes].[ccDateTime])))
| | |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccNotes].[PK_tbl_ccNotes]))
| |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccNotes].[PK_tbl_ccNotes] AS [A]))
|--Remote Scan(SELECT * FROM DEV_REDe.dbo.tbl_Contacts)
-Francisco
October 15, 2002 at 3:51 pm
Sorry, I feel like an idiot; can you post the query code itself? It makes it easier to understand the execution plan.
Thanks
"Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
October 15, 2002 at 5:37 pm
No problem. I've included both the sproc and a view that is used; the only other views are the ones pulling data from a remote server (SQL 2000), such as Select * From Openquery(Service, 'Select * From dev_rede.tbl_Contacts')
/* begin sproc */
ALTER Procedure stp_ListStatusAssigned as
--Date 2/15/2002
--Purpose: Pull Records that Match Criteria Status (see tbl_Status)
Declare @CurrDate as datetime
Select @CurrDate = GetDate()
Select IWstatus.ccID,
callComplaint.Company, AssignedStatus.ContactName as AssignedTo,
DateDiff(dd,DaysInCyle.ccDateTime,@CurrDate) DaysInCycle,
DateDiff (dd,AssignedStatus.ccDateTime, @currDate) as Days_INBIN,
DateDiff (dd, Case
When NotesStatus.NotesEntryTime is NULL Then IWstatus.ccDateTime
Else NotesStatus.NotesEntryTime
END, @CurrDate) InActiveDays,CONVERT(CHAR(12), AssignedStatus.ccDateTime,110) as AssignedON,
Case
When NotesStatus.NotesEntryTime is NULL Then CONVERT(CHAR(12), IWstatus.ccDateTime, 110)
Else CONVERT(CHAR(12),NotesStatus.NotesEntryTime, 110)
END as LastActionOn
From (
--HAAScc STATUS
SELECT IWStatus.CCID, A.RecID, IWStatus.ccDateTime, IWStatus.StatusID FROM
(Select B.CCid, B.ccDateTime, A.StatusID From tbl_ccStatus A
INNER JOIN (Select ccID, MAX(ccDateTime) as ccDateTime From tbl_ccStatus Group by ccID) B
ON (A.ccID = B.ccID and A.ccDateTime = B.ccDateTime)
WHERE A.StatusID IN (2,5)) IWStatus INNER JOIN
HAAScc.dbo.tbl_Complaints A ON (A.ccID = IWStatus.ccID)) AS IWstatus
/*
SELECT * From
(Select Distinct A.ccID, A.RECid, Max(B.ccDateTime) as ccDateTime, max(b.StatusID) as StatusID
From HAAScc.dbo.tbl_Complaints as A
Join HAAScc.dbo.tbl_ccStatus as B
On (A.ccID = B.ccID)
Group by A.ccID, A.RECid) as C
Where C.StatusID in (2, 5)) as IWstatus
*/
INNER JOIN
(Select Distinct A.ccID, Min(A.ccDateTime) as ccDateTime, max(A.StatusID) as StatusID
From HAAScc.dbo.tbl_ccStatus A
Where A.StatusID = 2
Group by A.ccID) as DaysInCyle
ON (DaysInCyle.ccID = IWstatus.ccid)
INNER JOIN
--********************--TELEMARKET INFO--********************************************
/*(
Select tblA.ccID, tblA.Company--, tblA.HFOid
From Telemarket.dbo.ContHist as tblB Inner Join(
Select DISTINCT T.ccID, A.RecID,
--'FO'+ Right(B.Key2,len(B.Key2) - 2) as HFOid,
--B.key2 as HFOid,
CASE
When Len(B.key2)>=5 THEN 'FO' + Right(B.Key2,len(B.Key2) - 2)
When Len(B.key2)=4 THEN 'FO' + Right(B.Key2,len(B.Key2) - 1)
Else B.Key2
END
as HFOid,
C.UHFOName as HFOName,C.UPhoneHFO as HFOPhone,A.OnDate,A.OnTime,
B.Company,B.Contact,B.Title,B.Address1,B.Address2,B.Address3,B.City,B.State,B.Zip,B.Country,
B.Phone1, B.Phone2, B.Fax,D.Email,
C.UhaasCust,C.UcncUser,C.UIndustry,C.UBusType,C.UVMC,C.UHMC,C.ULathe,C.URotary,
C.UOtherEqup
--A.Notes
From
HAAScc.dbo.tbl_Complaints as T Inner Join
Telemarket.dbo.ContHist as A ON (A.RECID = T.RECID) Inner Join
Telemarket.dbo.Contact1 as B On (A.AccountNo = B.AccountNo)Inner Join
Telemarket.dbo.Contact2 as C On (B.AccountNo = C.AccountNo)left Outer Join
(Select AccountNo,ContSupRef as Email From Telemarket.dbo.ContSupp
Where Contact ='E-mail Address' ) as D On (D.AccountNo = C.AccountNo)
) as tblA On (tblA.RecID = tblB.RecID)
)*/ vw_TelemarketView as CallComplaint
----*********************************************************************************
ON (IWstatus.ccid = CallComplaint.ccid) Inner join
--AssignedStatus
(Select B.CCid, ctc.ContactName , ctc.ContactID ,B.ccDateTime From tbl_ccAssigned A
inner Join (Select ccID, MAX(ccDateTime) as ccDateTime From
tbl_ccAssigned
Group by ccID) B On (A.ccID = B.ccID and A.ccDateTime = B.ccDateTime) Inner Join
vw_Contacts Ctc on (ctc.ContactID = A.AssignedID)) AssignedStatus
ON (AssignedStatus.ccID = IWstatus.ccID)
--NotesEntries
Left Join
(Select B.CCid, A.Notes, ctc.ContactID, B.ccDateTime as NotesEntryTime From tbl_ccNOTES A
inner Join (Select ccID, MAX(ccDateTime) as ccDateTime From
tbl_ccNOTES
Group by ccID) B On (A.ccID = B.ccID and A.ccDateTime = B.ccDateTime) Inner Join
vw_Contacts Ctc on (ctc.ContactID = A.UserID)) NotesStatus
ON (NotesStatus.ccID = AssignedStatus.ccID and NotesStatus.ContactID = AssignedStatus.ContactID )
Order by AssignedTO,Company, Days_INBIN
/* end sproc */
/* begin view */
ALTER VIEW vw_TeleMarketView AS
SELECT DISTINCT T.ccID,B.Company
FROM
HAAScc.dbo.tbl_Complaints AS T INNER JOIN
Telemarket.dbo.ContHist AS A ON (A.RECID = T.RECID) INNER JOIN
Telemarket.dbo.Contact1 AS B ON (A.AccountNo = B.AccountNo)
/* end view */
-Francisco
October 16, 2002 at 4:41 am
I'm looking at it, but I have to break it down. One thing, though.
Regarding the contact view, which you listed as coded:
Select * From Openquery(Service, "Select * From dev_rede.tbl_Contacts")
Do you actually use the *, or are there specific columns you need? If you do not need all the columns, this query is pulling them all over. You should see a bit of improvement (depending on the amount of data) by naming just the columns you need in the inside query; you can keep the * on the outside. Or, personally, I would switch to using
SELECT COLLIST FROM Service.dbname.dev_rede.tbl_Contacts
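As a sketch with OPENQUERY (the column names here are assumptions about what the view actually needs):

```sql
-- The inner string is what the remote server executes, so trimming the
-- column list there reduces what comes over the wire; the outer SELECT
-- can then simply name the same columns (or keep *).
SELECT ContactID, ContactName
FROM OPENQUERY(Service, 'SELECT ContactID, ContactName FROM DEV_REDe.dbo.tbl_Contacts')
```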
I will look at it in further detail, but if you try that, let us know what effect it has.
"Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
October 16, 2002 at 9:13 am
This is the exact code for that view... I am using OPENQUERY instead of the full linked name "SELECT COLLIST FROM Service.dbname.dev_rede.tbl_Contacts" because the front-end application (an Access ADP = Access data project) doesn't understand the linked name for some reason... colleagues tell me this has been fixed in Access 2002. Another workaround is to move the application to VB6; however, a benefit of using Access is its reporting capabilities... but I digress.
SELECT * FROM OPENQUERY(Service, 'SELECT * FROM DEV_REDe.dbo.tbl_Contacts')
Looking at the above statement, I could optimize it to bring over only ContactID and ContactName, which is what I'm after anyway. Do I need to specify ContactID and ContactName in both the OpenQuery and the outer Select statement, or only in the OpenQuery?
Thanks for catching this!
-Francisco
October 16, 2002 at 9:23 am
This is an updated execution plan after trimming the * down to specific columns...
StmtText
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
EXEC dbo.stp_ListStatusAssigned
/****** Object: Stored Procedure dbo.stp_ListStatusAssigned Script Date: 10/9/2002 8:01:41 AM ******/
/****** Object: Stored Procedure dbo.stp_ListStatusAssigned Script Date: 10/4/2002 10:10:47 AM ******/
/****** Object: Stored Procedure
Select IWstatus.ccID, /* callComplaint.hfoid,*/ callComplaint.Company, AssignedStatus.ContactName as AssignedTo,
DateDiff(dd,DaysInCyle.ccDateTime,@CurrDate) DaysInCycle,
DateDiff (dd,AssignedStatus.ccDateTime, @currDate) as Days_INBI
(3 row(s) affected)
StmtText
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|--Sort(ORDER BY:([].[ContactName] ASC, [CONTACT1].[Company] ASC, [Expr1027] ASC))
|--Compute Scalar(DEFINE:([Expr1026]=datediff(4, [Expr1004], [@CurrDate]), [Expr1027]=datediff(4, [Expr1008], [@CurrDate]), [Expr1028]=datediff(4, If ([Expr1011]=NULL) then [Expr1002] else [Expr1011], [@CurrDate]), [Expr1029]=Convert([Expr1008]
|--Hash Match(Left Outer Join, HASH:([tbl_ccAssigned].[ccID], [].[ContactID])=([tbl_ccNotes].[ccID], [].[ContactID]), RESIDUAL:([tbl_ccNotes].[ccID]=[tbl_ccAssigned].[ccID] AND [].[ContactID]=[].[ContactID]))
|--Nested Loops(Inner Join)
| |--Merge Join(Inner Join, MERGE:([A].[ccID])=([tbl_Complaints].[ccID]), RESIDUAL:([A].[ccID]=[tbl_Complaints].[ccID]))
| | |--Stream Aggregate(GROUP BY:([A].[ccID]) DEFINE:([Expr1004]=MIN([A].[ccDateTime])))
| | | |--Sort(ORDER BY:([A].[ccID] ASC))
| | | |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccStatus].[PK_tbl_ccStatus] AS [A]), WHERE:([A].[StatusID]=2))
| | |--Merge Join(Inner Join, MANY-TO-MANY MERGE:([A].[ccID])=([tbl_ccStatus].[ccID]), RESIDUAL:([A].[ccID]=[tbl_Complaints].[ccID] AND [Expr1008]=[A].[ccDateTime]))
| | |--Sort(ORDER BY:([A].[ccID] ASC))
| | | |--Table Scan(OBJECT:([HAAScc].[dbo].[tbl_ccAssigned] AS [A]))
| | |--Merge Join(Inner Join, MERGE:([tbl_ccAssigned].[ccID])=([tbl_ccStatus].[ccID]), RESIDUAL:([tbl_ccAssigned].[ccID]=[tbl_ccStatus].[ccID]))
| | |--Stream Aggregate(GROUP BY:([tbl_ccAssigned].[ccID]) DEFINE:([Expr1008]=MAX([tbl_ccAssigned].[ccDateTime])))
| | | |--Sort(ORDER BY:([tbl_ccAssigned].[ccID] ASC))
| | | |--Table Scan(OBJECT:([HAAScc].[dbo].[tbl_ccAssigned]))
| | |--Sort(ORDER BY:([tbl_ccStatus].[ccID] ASC))
| | |--Hash Match(Inner Join, HASH:([tbl_Complaints].[ccID])=([A].[ccID]))
| | |--Sort(DISTINCT ORDER BY:([tbl_Complaints].[ccID] ASC, [CONTACT1].[Company] ASC))
| | | |--Bookmark Lookup(BOOKMARK:([Bmk1024]), OBJECT:([TeleMarket].[dbo].[CONTACT1]))
| | | |--Nested Loops(Inner Join)
| | | |--Bookmark Lookup(BOOKMARK:([Bmk1022]), OBJECT:([TeleMarket].[dbo].[CONTHIST]))
| | | | |--Nested Loops(Inner Join)
| | | | |--Index Scan(OBJECT:([HAAScc].[dbo].[tbl_Complaints].[PK_tbl_Complaints]))
| | | | |--Index Seek(OBJECT:([TeleMarket].[dbo].[CONTHIST].[CNHRECID]), SEEK:([CONTHIST].[RECID]=[tbl_Complaints].[RECID]) ORDERED)
| | | |--Index Seek(OBJECT:([TeleMarket].[dbo].[CONTACT1].[CONTACC]), SEEK:([CONTACT1].[AccountNo]=[CONTHIST].[AccountNo]) ORDERED)
| | |--Nested Loops(Inner Join)
| | |--Hash Match Root(Inner Join, HASH:([tbl_ccStatus].[ccID])=([A].[ccID]), RESIDUAL:([Expr1002]=[A].[ccDateTime]))
| | | |--Hash Match Team(Aggregate, HASH:([tbl_ccStatus].[ccID]) DEFINE:([Expr1002]=MAX([tbl_ccStatus].[ccDateTime])))
| | | | |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccStatus].[PK_tbl_ccStatus]))
| | | |--Filter(WHERE:([A].[StatusID]=5 OR [A].[StatusID]=2))
| | | |--Clustered Index Scan(OBJECT:([HAAScc].[dbo].[tbl_ccStatus].[PK_tbl_ccStatus] AS [A]))
| | |--Index Seek(OBJECT:([HAAScc].[dbo].[tbl_Complaints].[PK_tbl_Complaints] AS [A]), SEEK:([A].[ccID]=[A].[ccID]) ORDERED)
| |--Table Spool
| |--Filter(WHERE:([A].[AssignedID]=[].[ContactID]))
| |--Remote Scan(SELECT ContactID, ContactName FROM DEV_REDe.dbo.tbl_Contacts)
|--Hash Match(Inner Join, HASH:([A].[UserID])=([].[ContactID]), RESIDUAL:([].[ContactID]=[A].[UserID]))
|--Hash Match Root(Inner Join, HASH:([tbl_ccNotes].[ccID])=([A].[ccID]), RESIDUAL:([Expr1011]=[A].[ccDateTime]))
| |--Hash Match Team(Aggregate, HASH:([tbl_ccNotes].[ccID]) DEFINE:([Expr1011]=MAX([tbl_ccNotes].[ccDateTime])))
| | |--Table Scan(OBJECT:([HAAScc].[dbo].[tbl_ccNotes]))
| |--Table Scan(OBJECT:([HAAScc].[dbo].[tbl_ccNotes] AS [A]))
|--Remote Scan(SELECT ContactID, ContactName FROM DEV_REDe.dbo.tbl_Contacts)
-Francisco
October 17, 2002 at 2:06 pm
I wanted to say thanks for taking the time to help me out... I am currently re-reading the performance chapter of one of my SQL books, "Using SQL Server 7.0", to figure out what I can use.
Some new information I've found out: the Telemarket database is > 4 GB, with one of the tables I access being over 230 MB. This is all in ONE filegroup (one mdf). I asked the keeper of that database if he had any performance issues on his end, and he remarked that he doesn't use it much other than for running some reports for which return time isn't critical.
I suggested perhaps investing some time to find out from the vendor (Goldmine) whether tearing a few tables out into other filegroups (ndf's) would be plausible without affecting the synchronization or the rest of the Goldmine application environment. He said he'll get back to me.
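If the vendor okays it, the SQL 7 mechanics would look roughly like this; the filegroup, file, index, and column names below are all hypothetical, and a table is moved by rebuilding its clustered index on the new filegroup:

```sql
-- Add a filegroup backed by a file on a different drive, then relocate a
-- large table into it by rebuilding its clustered index there.
ALTER DATABASE TeleMarket ADD FILEGROUP FG_History
ALTER DATABASE TeleMarket
    ADD FILE (NAME = TeleMarket_Hist, FILENAME = 'E:\data\TeleMarket_Hist.ndf')
    TO FILEGROUP FG_History

-- DROP_EXISTING rebuilds the existing clustered index in place of a
-- drop-and-recreate; the ON clause places the rebuilt data on FG_History.
CREATE UNIQUE CLUSTERED INDEX PK_SomeBigTable
    ON SomeBigTable (RecID)
    WITH DROP_EXISTING
    ON FG_History
```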
-Francisco