March 9, 2009 at 12:30 pm
Whenever I run the following query, it self-blocks and takes 50 minutes to complete. Is there anything I can do to improve the performance of this query? I am running this query on a SQL 2000 server.
Transac is a view over two tables; one table contains 136,740,760 rows and the other contains 102,090,000 rows. There is a clustered primary key index on the Transac table. The DN200 table is very small and contains 1,000 rows. I have rebuilt the index but the performance is still the same. The estimated execution plan is also attached.
SELECT 'scdc200db' AS Server
, CLIENT.CLIENT_MNEMONIC AS Client
, 'BarInvoice' AS [FileGroup]
, 'TRANSAC' AS [Table]
, COUNT(*) AS Records
, CAST(CONVERT(char(12), GETDATE(), 1) AS datetime) AS VALID_DT
FROM dbo.DN200 CLIENT INNER JOIN
dbo.TRANSAC TARGET ON CLIENT.Record_Number = TARGET.GROUP_ZZ
GROUP BY CLIENT.CLIENT_MNEMONIC
March 9, 2009 at 1:16 pm
It looks like you are doing a table scan against one of those gigantic tables.
You say you have a clustered index on the Transac table, but I am guessing you do not have one on anything else? If not, at a minimum you could use a nonclustered index on the Group_ZZ column that you are joining to the small table on, so that hopefully you can skip that table scan. (You should also consider covering the index with the other columns in the SELECT, but you should certainly consider adding an index on that one column.)
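Something along these lines, applied to whichever base table under the view carries GROUP_ZZ (the index and table names here are illustrative, not from the original schema). Note that since the query references only GROUP_ZZ from Transac, a single-column index already covers it:

```sql
-- Illustrative sketch: a nonclustered index on the join column.
-- SQL 2000 has no INCLUDE clause, so any additional covering columns
-- would have to be added as extra key columns.
CREATE NONCLUSTERED INDEX IX_TRANSAC_GROUP_ZZ
    ON dbo.TRANSAC_Base (GROUP_ZZ)
```

With this in place the optimizer can satisfy the join and the COUNT(*) from the narrow index alone instead of scanning the full clustered index.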
March 9, 2009 at 1:34 pm
There is a clustered index on the following fields, and GROUP_ZZ is part of the primary key index:
[GROUP_ZZ] ASC,
[PATIENT] ASC,
[INVOICE] ASC,
[TRANSACTION_ZZ] ASC
) ON [PRIMARY]
I also checked the fragmentation.
DBCC SHOWCONTIG scanning 'TRANSAC' table...
Table: 'TRANSAC' (1435152158); index ID: 1, database ID: 9
TABLE level scan performed.
- Pages Scanned................................: 9825519
- Extents Scanned..............................: 1240771
- Extent Switches..............................: 1790428
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 68.60% [1228190:1790429]
- Logical Scan Fragmentation ..................: 3.55%
- Extent Scan Fragmentation ...................: 0.23%
- Avg. Bytes Free per Page.....................: 542.0
- Avg. Page Density (full).....................: 93.30%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
March 9, 2009 at 1:49 pm
Could you run the query with
SET STATISTICS IO ON
That way you can see the total number of reads done for each table.
How many rows does this query return? The index itself is obviously ordered by the Group_ZZ column, since it is the leftmost key column, but it seems that is not enough in this case, since according to your query plan you are doing a table scan of Transac in the bottom portion.
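For example, wrapping the original query like this sends per-table read counts to the Messages tab:

```sql
-- Collect I/O statistics for the problem query.
SET STATISTICS IO ON

SELECT CLIENT.CLIENT_MNEMONIC, COUNT(*) AS Records
FROM dbo.DN200 CLIENT
INNER JOIN dbo.TRANSAC TARGET ON CLIENT.Record_Number = TARGET.GROUP_ZZ
GROUP BY CLIENT.CLIENT_MNEMONIC

SET STATISTICS IO OFF
-- Messages tab then shows, for each table:
-- scan count, logical reads, physical reads, read-ahead reads.
```

Logical reads are the number to watch: they tell you how much of each table the plan is actually touching, independent of what happens to be cached.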
A covering index might be useful. Partitioning each of the two tables (each over 100 million rows) and rewriting the view to join the partitions might also help. Is the view itself indexed, or just the two tables that comprise it, or perhaps just one table and not the other?
March 9, 2009 at 2:02 pm
This query returns only 23 rows. I cannot use covering indexes because I am on SQL 2000, and it does not support covering indexes.
March 9, 2009 at 2:30 pm
You can create covering indexes in 2000. See http://www.sqlusa.com/bestpractices/coveringindex/
March 9, 2009 at 2:35 pm
balbirsinghsodhi (3/9/2009)
I am running this query on SQL2000 server.
Please post SQL 2000-related questions in the SQL 2000 forums. If you post in the 2005 forums, you're very likely to get 2005-specific solutions.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 9, 2009 at 3:04 pm
As Manu said, covering indexes absolutely can be done in 2000. Partitioning can also be done, just not in the same fashion as in 2005. See this link:
http://msdn.microsoft.com/en-us/library/aa902650(SQL.80).aspx
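In SQL 2000 the technique described at that link is a partitioned view: member tables whose CHECK constraints carve up the partitioning column, combined with UNION ALL so the optimizer can eliminate partitions at query time. A minimal sketch, with purely illustrative table names, ranges, and a trimmed column list:

```sql
-- Illustrative SQL 2000-style partitioned view.
-- Each member table holds one range of GROUP_ZZ, enforced by a CHECK
-- constraint; the constraint is what lets the optimizer skip partitions.
CREATE TABLE dbo.TRANSAC_Low
(
    GROUP_ZZ int NOT NULL CHECK (GROUP_ZZ BETWEEN 1 AND 500),
    PATIENT  int NOT NULL,
    CONSTRAINT PK_TRANSAC_Low PRIMARY KEY (GROUP_ZZ, PATIENT)
)

CREATE TABLE dbo.TRANSAC_High
(
    GROUP_ZZ int NOT NULL CHECK (GROUP_ZZ BETWEEN 501 AND 1000),
    PATIENT  int NOT NULL,
    CONSTRAINT PK_TRANSAC_High PRIMARY KEY (GROUP_ZZ, PATIENT)
)
GO

CREATE VIEW dbo.TRANSAC_Partitioned
AS
SELECT GROUP_ZZ, PATIENT FROM dbo.TRANSAC_Low
UNION ALL
SELECT GROUP_ZZ, PATIENT FROM dbo.TRANSAC_High
```

A query filtering or joining on GROUP_ZZ then only touches the member tables whose CHECK ranges can match.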
The query plan is good, but I would need more information to really see why this is performing slowly; indexing (particularly a covering index in this case) seems like the most logical solution. Without seeing the logical reads, though, it is just throwing darts at a dartboard.
You also posted the SHOWCONTIG results, which is good information, but fragmentation does not appear to be your biggest issue. Obviously you are somewhat fragmented, and you will certainly be doing more reads than you would after rebuilding the indexes on each table (reindexing a 130 million row table could take some time), but my guess is that if you are indexed properly that will not matter anyway.
It may also be beneficial to see the logic within the view that joins those two tables.
Obviously the self-blocking is nothing more than parallelism, as shown by your plan, and is not a problem. The problem is accessing the data in a faster, more effective manner.
March 10, 2009 at 9:44 am
Thanks guys,
I have modified the query a little. Instead of using the view in the query, I am using the table directly. Somehow, when I use the view, the query optimizer does not use an index seek on the other table.
When I use the table directly and check the execution plan, it uses an index seek on both tables.
You are right that fragmentation is not the issue here. Let me rebuild the indexes and see how much performance I can gain.
I will let you guys know.