December 12, 2013 at 8:51 pm
I have a table with 100 columns and about 1 million records.
Select Count(*) from Mytable;
COUNT(*)
----------
1001920
Time taken to return: 15 sec
select * from Mytable;
Time taken to return: 02:15 min
select * from Mytable order by some_column desc;
Time taken to return: 04:58 min
There is no primary key on the column being ordered by, and the ORDER BY is taking more time.
I need to show the client about 1000 records at a time, ordered first and then filtered.
Conditions:
Order the data by the sort condition the client specifies
Then apply the filter condition
Sample:
Select * from mytable order by some_column and Where condition like 1 to 1000 and 1001 to 2000 and so on.....
Note: there are no joins, just a straight select, and the table has one primary key.
How can I improve the query?
December 12, 2013 at 11:42 pm
You may have to go for table-level partitioning (splitting the table across different disks). Before that, check the fragmentation level of the table's indexes and defragment them.
Is this table used as an archive or for regular use? If it is in regular use, consider partitioning it, and also create a job for re-indexing during low-usage times.
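As a rough sketch of the fragmentation check and rebuild (dbo.Mytable is the table name from the original post; the 5%/30% thresholds are only common rules of thumb):

-- Check fragmentation for every index on the table
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Mytable'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id;

-- Common rule of thumb: REORGANIZE between 5% and 30% fragmentation, REBUILD above 30%
ALTER INDEX ALL ON dbo.Mytable REORGANIZE;
-- ALTER INDEX ALL ON dbo.Mytable REBUILD;  -- heavier; schedule in a low-usage window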
Thanks & regards
MJ
December 12, 2013 at 11:46 pm
By how many of those hundred columns does the client want to order? If the number is limited, then adding a clustered primary key and an index on each sort column helps.
If there are a lot of changes to the table, the additional indexes might slow those down.
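For example, assuming the client only ever sorts by a handful of columns (column10 and column11 are made-up names, since the real DDL hasn't been posted), that would be one nonclustered index per sort column:

-- One index per column the client is allowed to sort by (names are assumptions)
CREATE NONCLUSTERED INDEX IX_Mytable_column10 ON dbo.Mytable (column10 DESC);
CREATE NONCLUSTERED INDEX IX_Mytable_column11 ON dbo.Mytable (column11 DESC);
-- With SELECT * the engine still has to look up the other 99 columns,
-- so test whether the optimizer actually chooses the index.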
December 13, 2013 at 12:10 am
Does this table consist entirely of varchar/nvarchar fields, or are there multiple data types? I'm asking because if all the fields are varchar, that is an issue in itself, and if there are varchar(max)/nvarchar(max) fields then you need to rethink the design.
December 13, 2013 at 2:17 am
yuvipoy (12/12/2013)
I have a table with 100 columns and about 1 million records.
Select Count(*) from Mytable;
COUNT(*)
----------
1001920
Time taken to return: 15 sec
select * from Mytable;
Time taken to return: 02:15 min
select * from Mytable order by some_column desc;
Time taken to return: 04:58 min
There is no primary key on the column being ordered by, and the ORDER BY is taking more time.
I need to show the client about 1000 records at a time, ordered first and then filtered.
Conditions:
Order the data by the sort condition the client specifies
Then apply the filter condition
Sample:
Select * from mytable order by some_column and Where condition like 1 to 1000 and 1001 to 2000 and so on.....
Note: there are no joins, just a straight select, and the table has one primary key.
How can I improve the query?
Post the ddl for the table and the indexes. Post a few rows of sample data too if you can.
One million rows is trivial; the count would normally return within a second or so.
The timing for "select * from Mytable" is meaningless; most of the time will be network latency and screen refresh.
This doesn't make sense - "Select * from mytable order by some_column and Where condition like 1 to 1000 and 1001 to 2000 and so on....." - what are you trying to do here?
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
December 13, 2013 at 3:35 am
To understand why a SELECT statement is slow, you need to look at the execution plan to understand how the optimizer is resolving the query. Also, you need to look at the system to see if you are experiencing blocking or resource contention.
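At the most basic level, for example (dbo.Mytable and some_column are the names from the original post; the blocking check is meant to be run from a second session):

-- Measure I/O and elapsed time, and request the actual execution plan in SSMS (Ctrl+M)
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT *
FROM dbo.Mytable
ORDER BY some_column DESC;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- From another session, check for blocking and waits while the query runs
SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0
   OR wait_time > 0;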
Based on what you've posted, that's all I've got.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 13, 2013 at 6:07 am
After you address the stuff above about some of the specifics, here's a good article on catch-all queries http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/ written by Gail Shaw. However, it sounds like you don't really know what you want yet.
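The same dynamic-SQL idea from that article can be applied to a client-chosen sort column. A minimal sketch, assuming a @SortColumn parameter and the Mytable name from this thread (the column name is validated against sys.columns and wrapped in QUOTENAME so the dynamic string can't be abused):

DECLARE @SortColumn sysname = N'column10',   -- supplied by the client; column10 is a made-up example
        @sql        nvarchar(max);

-- Only build the string if the column really exists on the table
IF EXISTS (SELECT 1 FROM sys.columns
           WHERE object_id = OBJECT_ID('dbo.Mytable')
             AND name = @SortColumn)
BEGIN
    SET @sql = N'SELECT * FROM dbo.Mytable ORDER BY ' + QUOTENAME(@SortColumn) + N' DESC;';
    EXEC sys.sp_executesql @sql;
END;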
December 14, 2013 at 8:39 am
The table consists of numbers only: int, bigint and float data types.
All 100 columns are a combination of the above.
An option will be given to the user, where he can scroll through the records of the 1 million rows.
* He might order by, say, column 10 descending.
* So I need to sort the 1 million records by column 10 descending and return them in that order.
* The user may choose any column to sort by.
* There may also be multiple columns to sort the data by.
December 15, 2013 at 7:55 am
yuvipoy (12/14/2013)
...where he can scroll through the records of the 1 million rows...
[sarcasm ON]
If the company can afford to pay someone to scroll through one million rows, they'll definitely have the time to wait a few minutes each time the sort order is changed...
[sarcasm OFF]
More honestly: I'd question the business need to return a million rows in the first place. The time it takes to transfer the data will be significant.
Example: let's assume all 100 columns are defined as integer. Then one row will use 400 bytes, so you're transferring almost 400 MB each time the sort order is changed.
Assuming the hardware hosting the SQL Server provides a large enough amount of memory, the data might be cached, leading to a faster return of the rows in a different order.
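To sanity-check that estimate against the real table (dbo.Mytable is the name from the original post), the partition stats give the actual average row size:

SELECT ps.row_count,
       ps.used_page_count * 8 AS used_kb,
       (ps.used_page_count * 8.0 * 1024) / NULLIF(ps.row_count, 0) AS avg_bytes_per_row
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.Mytable')
  AND ps.index_id IN (0, 1);   -- heap or clustered index only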
But again: What's the purpose of it?
December 15, 2013 at 9:17 am
By "scroll" here I meant a WHERE condition: for every scroll I will return 1000 records.
The first time I will return records 1 to 1000, the next time 1001 to 2000, and so on, over the 1 million rows sorted by the ORDER BY.
December 16, 2013 at 2:09 am
yuvipoy (12/15/2013)
By "scroll" here I meant a WHERE condition: for every scroll I will return 1000 records. The first time I will return records 1 to 1000, the next time 1001 to 2000, and so on, over the 1 million rows sorted by the ORDER BY.
1000 rows of INT's, FLOATs and BIGINTs. Is this data processed in some way by a front end?
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
December 17, 2013 at 9:19 pm
1000 rows of INT's, FLOATs and BIGINTs. Is this data processed in some way by a front end?
For each scroll, the stored procedure will be called, passing from_no and to_no to it.
There is no processing at the front end; whatever data comes from the database will be displayed.
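A minimal sketch of such a procedure, assuming the from_no/to_no interface described above (the procedure name, and some_column as the sort column, are assumptions):

CREATE PROCEDURE dbo.GetMytablePage
    @from_no int,
    @to_no   int
AS
BEGIN
    SET NOCOUNT ON;

    -- Number the rows over the full ordered set, then return only the requested slice
    WITH numbered AS
    (
        SELECT ROW_NUMBER() OVER (ORDER BY some_column DESC) AS rn,
               *
        FROM dbo.Mytable
    )
    SELECT *
    FROM numbered
    WHERE rn BETWEEN @from_no AND @to_no
    ORDER BY rn;
END;

Note that without an index on the sort column, every call still sorts the whole table.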
Thanks!
December 17, 2013 at 10:53 pm
ChrisM@Work (12/16/2013)
yuvipoy (12/15/2013)
By "scroll" here I meant a WHERE condition: for every scroll I will return 1000 records. The first time I will return records 1 to 1000, the next time 1001 to 2000, and so on, over the 1 million rows sorted by the ORDER BY.
1000 rows of INT's, FLOATs and BIGINTs. Is this data processed in some way by a front end?
Serious language barrier here. I believe "scroll" actually means that this is a paging problem.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 17, 2013 at 11:55 pm
The page will have 1000 records (rows 1 to 1000) in the first go; for each subsequent scroll there will be another 1000 rows (1001 to 2000), and so on.
December 18, 2013 at 12:13 am
LutzM (12/15/2013)
Example: let's assume all 100 columns are defined as integer. Then one row will use 400 bytes, so you're transferring almost 400 MB each time the sort order is changed. Assuming the hardware hosting the SQL Server provides a large enough amount of memory, the data might be cached, leading to a faster return of the rows in a different order.
I will be fetching only about 0.38 MB of data (400 bytes * 1000 rows) each time, but the ORDER BY still has to be applied over the entire table.
Say mytable contains:
c1  c2
--  --
1   1
2   2
3   3
4   4
Select Row_Number() R1, c2 from mytable order by c2 desc
R1  c2
--  --
1   4
2   3
3   2
4   1
Select Row_Number() R1, c2 from mytable where R1 between 3 and 4 order by c2 desc
R1  c2
--  --
3   3
4   4
Here rows 3 and 4 are filtered first and then the ORDER BY is applied. That is not what I need.
This is what I need:
Select Row_Number() R1, c2 from mytable order by c2 desc and where R1 between 3 and 4
R1  c2
--  --
3   2
4   1
First the ORDER BY is applied and then the WHERE condition. But doing this over 1 million rows is slow.
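For what it's worth, the "order first, then filter on the row number" pattern works when the numbering and the filter are separated into a CTE or derived table, as in the procedure sketch earlier in the thread; what makes it slow over a million rows is usually the sort itself. A hedged sketch, assuming c2 is the sort column and that adding an index on it is acceptable:

-- Assumption: c2 is a column the user sorts by. An index in that order lets
-- SQL Server read the rows already sorted instead of sorting the whole table per page.
CREATE NONCLUSTERED INDEX IX_mytable_c2 ON dbo.mytable (c2 DESC);

-- ROW_NUMBER over the ordered set, filtered afterwards (order by first, then where)
WITH ordered AS
(
    SELECT c1, c2,
           ROW_NUMBER() OVER (ORDER BY c2 DESC) AS r1
    FROM dbo.mytable
)
SELECT r1, c2
FROM ordered
WHERE r1 BETWEEN 3 AND 4
ORDER BY r1;

-- On SQL Server 2012 and later, OFFSET/FETCH expresses the same page directly
SELECT c1, c2
FROM dbo.mytable
ORDER BY c2 DESC
OFFSET 2 ROWS FETCH NEXT 2 ROWS ONLY;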