June 30, 2012 at 9:27 am
Hi All,
I am using SQL SERVER 2008 R2.
Query used,
select col_name1, max(col_name2),max(col_name3),max(date_time)
from tble_candt
where date_time between '2011-01-01' and '2012-06-30'
group by col_name1
order by max(date_time) desc;
The above query returns 3.5 crore (35 million) records.
It's taking nearly 9 minutes to return the results.
I have created non-clustered indexes on the col_name1 and date_time columns.
I have also tried >= and <= conditions in the WHERE clause; it behaves the same as above.
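For reference, a rough sketch of what those two non-clustered indexes might look like (the index names here are assumptions; the actual definitions are in the attached screenshots):

-- Assumed definitions; the real names and key columns are in the screenshots
CREATE NONCLUSTERED INDEX IX_tble_candt_col_name1
    ON tble_candt (col_name1);

CREATE NONCLUSTERED INDEX IX_tble_candt_date_time
    ON tble_candt (date_time);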
Please guide me in this regard.
Thanks,
Syed
June 30, 2012 at 9:37 am
Read up on the READ UNCOMMITTED isolation level. Also, do you know how many reads your query is doing?
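Roughly along these lines (a sketch only; READ UNCOMMITTED allows dirty reads, so it comes with caveats worth understanding first):

-- Show logical/physical reads per table in the Messages tab
SET STATISTICS IO ON;

-- Dirty reads are possible under this isolation level; understand the trade-off before using it
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

select col_name1, max(col_name2), max(col_name3), max(date_time)
from tble_candt
where date_time between '2011-01-01' and '2012-06-30'
group by col_name1
order by max(date_time) desc;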
***The first step is always the hardest *******
June 30, 2012 at 9:40 am
Thanks for the reply,
Could you please elaborate on this? I am not following you exactly.
Syed.
June 30, 2012 at 10:12 am
Post the Query Plan for your query please. Also, please post the script for your table and its keys and indexes.
Thanks,
-- RBarryYoung, (302)375-0451 blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
June 30, 2012 at 10:32 am
Thanks for your reply.
Please find the attached screenshots of the indexes used.
There are no keys on this tble_candt table.
Table script:
CREATE TABLE tble_candt(
[col_name1] [varchar](10) NULL,
[col_name2] [nvarchar](25) NULL,
[col_name3] [nvarchar](64) NULL,
[col_name4] [nvarchar](512) NULL,
[col_name5] [nvarchar](512) NULL,
[DATE_TIME] [datetime] NULL,
[col_name6] [nvarchar](25) NULL
) ON [PRIMARY]
GO
June 30, 2012 at 11:54 am
There are a couple of things.
Can you post the output of the following?
select * from sys.dm_db_index_physical_stats(DB_ID(),object_id('<tablename>'),null,null,null)
To me it looks like you do not have a clustered index, so this is a heap table. It is possible that the table is much bigger than it needs to be, because heaps have forwarded rows and do not reclaim space from deletes on the table. So you need to make sure the table is not using more space than it has to.
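If the output does show a lot of forwarded records and wasted space, one way to clean up a heap on SQL Server 2008 R2 is to rebuild it (a sketch only; rebuilding a large table is an expensive, logged operation, so test it first):

-- Rebuilds the heap, removing forwarded records and reclaiming space left by deletes
ALTER TABLE tble_candt REBUILD;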
Also, as you mentioned, the query returns 35 million rows (3.5 crores), which is a huge amount of data, and it is possible that a lot of those 9 minutes are spent just returning that data. I am guessing you are based in India and the server is in the US or Europe; in that case, returning 35 million rows will certainly take a long time.
Also, if you are getting 35 million rows after the GROUP BY, that means you are processing more than 35 million rows, and that explains a lot about why it is taking time.
So it would be great, as mentioned in the previous posts, if you could post the full DDL and the actual execution plan, as well as the output with STATISTICS IO and TIME on.
While executing the query, discard the result set in SSMS; that keeps the network transfer out of the picture, and hopefully you will get these outputs in well under 9 minutes.
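Something like this (a sketch; in SSMS the setting is Query > Query Options > Results > Grid > "Discard results after execution"):

-- Report reads and timings in the Messages tab
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run with results discarded so that streaming 35 million rows to the client
-- does not dominate the elapsed time
select col_name1, max(col_name2), max(col_name3), max(date_time)
from tble_candt
where date_time between '2011-01-01' and '2012-06-30'
group by col_name1
order by max(date_time) desc;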
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problems - Gail Shaw
June 30, 2012 at 1:41 pm
Hi, thanks for your reply.
Please see the attached snapshot; it will explain the non-clustered index reference.
Thanks,
Syed
June 30, 2012 at 1:56 pm
There is no record count and no forwarded row count data in this snapshot. This is indeed a heap, and fragmentation is very heavy. Can you post the remaining part of that query's output, i.e. record count, forwarded record count, page count, record size, etc.?
Also, did you run the query again with the output discarded and check the time as well as the plan? The plan is the most important thing to look at for performance issues.
Also, from this I can see that the table has just 4,000 pages (I could be wrong, but that could be confirmed from the page count), and I doubt that 4,000 pages contain 35 million rows.
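Those columns are only populated when the DMV is run with a more thorough scan mode, so something like this (a sketch; 'SAMPLED' also fills them in and is cheaper on a very large table):

select index_id, index_type_desc, page_count, record_count,
       forwarded_record_count, avg_record_size_in_bytes,
       avg_fragmentation_in_percent
from sys.dm_db_index_physical_stats(DB_ID(), object_id('tble_candt'), null, null, 'DETAILED');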
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problems - Gail Shaw
June 30, 2012 at 2:27 pm
Please post table definitions, index definitions and execution plan, as per http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 30, 2012 at 3:35 pm
SGT_squeequal (6/30/2012)
Read up on the READ UNCOMMITTED isolation level. Also, do you know how many reads your query is doing?
Since the OP didn't have that in the original code, I have to assume that you're recommending its use. I've lost track of the links I used to have on the subject, but using READ UNCOMMITTED and/or WITH (NOLOCK) is usually a really bad idea. Yes, of course you already know that they both allow dirty reads that might be rolled back, but did you know they can also cause duplicate reads?
Again, sorry I don't have the links, and I normally don't post about such things without providing them, but reading dirty data has a whole lot more wrong with it than the simple dirty read itself, and I wanted to let you know.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 1, 2012 at 5:53 am
Here's one: http://blogs.msdn.com/b/davidlean/archive/2009/04/06/sql-server-nolock-hint-other-poor-ideas.aspx
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 1, 2012 at 10:06 am
Jeff Moden (6/30/2012)
SGT_squeequal (6/30/2012)
Read up on the READ UNCOMMITTED isolation level. Also, do you know how many reads your query is doing?
Since the OP didn't have that in the original code, I have to assume that you're recommending its use. I've lost track of the links I used to have on the subject, but using READ UNCOMMITTED and/or WITH (NOLOCK)...
Well, it depends on what he wants to return, hence I mentioned reading up on it rather than recommending its use 🙂
***The first step is always the hardest *******
July 1, 2012 at 10:50 am
SGT_squeequal (7/1/2012)
Jeff Moden (6/30/2012)
SGT_squeequal (6/30/2012)
Read up on the READ UNCOMMITTED isolation level. Also, do you know how many reads your query is doing?
Since the OP didn't have that in the original code, I have to assume that you're recommending its use. I've lost track of the links I used to have on the subject, but using READ UNCOMMITTED and/or WITH (NOLOCK)...
Well, it depends on what he wants to return, hence I mentioned reading up on it rather than recommending its use 🙂
In a semantic sense, I suppose you're correct, but telling someone to read up on it without mentioning the caveats that exist will be taken by most as a recommendation.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 2, 2012 at 2:01 am
Hi,
Please find the attached file containing the table definition, index definition, and execution plan (from the development server).
Query used,
select col_name1, max(col_name2),max(col_name3),max(date_time)
from tble_candt
where date_time between '2011-01-01' and '2012-06-30'
group by col_name1
order by max(date_time) desc;
The above query returns 3.5 crore (35 million) records.
It's taking nearly 9 minutes to return the results.
I have created non-clustered indexes on the col_name1 and date_time columns.
I have also tried >= and <= conditions in the WHERE clause; it behaves the same as above.
Please guide me in this regard.
Note: the execution plan here is taken from the development server, as I do not have access to the production server.
Thanks,
Syed
July 2, 2012 at 3:46 am
Exec plan from dev is useless, unless dev is the one having performance problems. Ask someone who does have permission on prod to please run the query and save the exec plan
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability