August 3, 2009 at 6:44 am
Hi All
This might sound crazy, but I just wanted to know what the background process is when we write T-SQL to read records from a table. I tested a table without any where clause and with a where clause, and it seems the one with the where clause uses more time.
(The basic Idea behind this is to create a view eliminating some rows)
Say for Example
Select * from #Temp where id=1
The above query uses 1 ms to process
and the below one
Select * from #Temp where day<>3 and month<>4 and id=1
The above SQL takes 20ms,
Can anyone help me with the background process and why this takes more time? Will the time be reduced if I create a view with the day and month filters and then do a select on the view?
Thanks for all your help 🙂
August 3, 2009 at 7:08 am
CrazyMan (8/3/2009)
Hi All
This might sound crazy, but I just wanted to know what the background process is when we write T-SQL to read records from a table. I tested a table without any where clause and with a where clause, and it seems the one with the where clause uses more time.
(The basic Idea behind this is to create a view eliminating some rows)
Say for Example
Select * from #Temp where id=1
The above query uses 1 ms to process
and the below one
Select * from #Temp where day<>3 and month<>4 and id=1
The above SQL takes 20ms,
Can anyone help me with the background process and why this takes more time? Will the time be reduced if I create a view with the day and month filters and then do a select on the view?
Thanks for all your help 🙂
Filtering data takes time. The filter criteria in your where clause, the indexes available on the table, the volume of data in the table, and what data you are returning from the table will all affect execution time. If you are doing table scans, or index scans and bookmark lookups, your query will most likely run longer than if it is doing an index seek.
Let's take a quick look at your two queries. The first query can use an index on id to quickly identify all the records with id = 1, probably using an index seek, which is very fast. The second query, with the two inequality operators, will probably end up doing a table scan, as it has to determine for each record whether day <> 3 and month <> 4; this will be slower in comparison to your first query.
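A minimal sketch of what that looks like in T-SQL (the temp table definition and index name here are assumptions; the plan the optimizer actually picks depends on your indexes, data volume and statistics):

-- Hypothetical table and index, for illustration only
CREATE TABLE #Temp (id INT, [day] INT, [month] INT);
CREATE INDEX IX_Temp_id ON #Temp (id);

-- Equality predicate: the index on id can satisfy this with an index seek,
-- so only the rows with id = 1 are touched.
SELECT * FROM #Temp WHERE id = 1;

-- Inequality predicates: <> is not very selective, so without a suitable index
-- the optimizer typically scans and tests every row; with the index on id it
-- may still seek on id and apply day/month as residual filters.
SELECT * FROM #Temp WHERE [day] <> 3 AND [month] <> 4 AND id = 1;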
August 3, 2009 at 7:35 am
Thanks Lynn, that explains a lot to me 🙂. I've got one more doubt: if we create a view with that filter, will the time be reduced? Since the view creates a virtual table, it shouldn't have to filter the rows out each time I run a SQL statement (maybe I am wrong in this).
And about the reads, what's the difference between a table read with and without a where clause?
August 3, 2009 at 7:45 am
First, you have a WHERE clause in both examples. The problem is the inequality, not the WHERE clause. The only way you can determine something is NOT another value is to scan through all, or almost all values.
The view is a virtual table, but it's not populated when you run the CREATE VIEW statement. The view's query gets evaluated at query time, so querying the view will not be any faster.
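A quick sketch of why the view does not help here (the view and table names are made up, and a view cannot be built on a temp table, so a permanent table is assumed):

-- CREATE VIEW only stores the query text; no rows are filtered
-- or materialized at this point.
CREATE VIEW dbo.FilteredRows
AS
SELECT id, [day], [month]
FROM dbo.SomeTable
WHERE [day] <> 3 AND [month] <> 4;
GO

-- At query time the view is expanded into the outer query, so the
-- inequality filter runs here, exactly as if it had been written inline.
SELECT * FROM dbo.FilteredRows WHERE id = 1;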
Look through the execution plan for the query. That will help you better understand what different predicates in the WHERE clause cause the optimizer to do. Note that the plans depend on your indexes, so creating different indexes will change the execution plans.
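One way to compare the two queries while you look at the plans (a sketch; in SSMS you can also switch on the actual execution plan for the session):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT * FROM #Temp WHERE id = 1;
SELECT * FROM #Temp WHERE [day] <> 3 AND [month] <> 4 AND id = 1;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- The Messages tab then shows logical reads and CPU/elapsed time per statement,
-- which you can read alongside the seek/scan operators in the plan.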
August 3, 2009 at 7:48 am
Thanks Steve, thanks a lot for your help. I will check the execution plans.
August 6, 2009 at 3:42 am
Even if a query runs quicker without a where clause, this isn't necessarily a reason to leave it out. That's because if you return more data than you need, the unnecessary data will flush out other (possibly useful) data from the cache, which could have an impact on the execution times of other queries.
John
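To see the cache effect John describes, you can check which tables are occupying the buffer pool (a sketch using standard DMVs; run it in the database you are testing in, and treat the numbers as indicative only):

-- Pages currently held in the buffer pool for the current database, by table
SELECT OBJECT_NAME(p.object_id) AS table_name,
       COUNT(*) AS pages_in_buffer_pool
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p ON p.hobt_id = au.container_id AND au.type IN (1, 3)
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY pages_in_buffer_pool DESC;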
August 10, 2009 at 3:57 am
When you are doing this type of test, it is important to simulate what you will have in Production.
If you are running your queries on a single core development server against a table with (say) 100 rows, you could reach a set of conclusions about the relative speed of using WHERE, ORDER BY, indexes, etc.
If you run your queries on an 8-core production server with the real table size of 50,000,000 rows, you are likely to reach very different conclusions about the relative speeds of these items.
Increased core counts and data volumes mean that SQL Server can use multiple threads to read your data, which means that the order in which the data is presented is not predictable. You need to have the right filters, etc, to get just the data you want.
For example, about 2 years ago there was a thread from someone who was certain that ORDER BY was not needed if you wanted to read data in the sequence of the clustered index. This person had done multiple tests with different tables, and found the order was always the same as the clustered index definition. It turned out they were using a low-spec machine with low data volumes, and when the same statements were run with production data the need for an ORDER BY became apparent.
If you are doing your tests on a low spec box you may have similar problems.
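A small sketch of that ORDER BY point (the table and column names are hypothetical): on a multi-core server a parallel scan can return rows in any order, so the clustered index alone guarantees nothing about the sequence of the result set.

-- dbo.Orders is assumed to have a clustered index on OrderID
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderID;   -- the ORDER BY clause is the only guarantee of ordering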
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara