Every so often on the online forums, the question comes up: which is better, COUNT(*) or COUNT(1)? So can it categorically be proved that one is better than the other?
What we need to do is populate a table, run both COUNT(*) and COUNT(1), and see if there is any performance difference between the two.
```sql
-- drop the table if it is left over from a previous run
if object_id('largeTable', 'U') is not null
    drop table largeTable
go
create table largeTable
(
    id integer not null
)
go
-- fill the table with 1,000,000 rows; ordering by a variable lets
-- ROW_NUMBER() generate sequential ids without a meaningful sort
declare @v integer
insert into largeTable (id)
select top 1000000 ROW_NUMBER() over(order by @v)
from sys.columns a cross join sys.columns b cross join sys.columns c
go
-- "go 100" re-runs the preceding batch 100 times (an SSMS/sqlcmd feature)
select COUNT(*) from largeTable
go 100
select COUNT(1) from largeTable
go 100
```
After executing the above code, COUNT(*) completes in an average of 107 milliseconds and COUNT(1) in an average of 108 milliseconds. So there is no measurable difference in performance.
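If you want to reproduce the timings yourself, one option beyond eyeballing the `go 100` output is a quick sketch using SET STATISTICS TIME, which reports CPU and elapsed time per statement in the Messages tab:

```sql
-- a minimal sketch for capturing per-statement timings
set statistics time on
go
select COUNT(*) from largeTable
go
select COUNT(1) from largeTable
go
set statistics time off
go
```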
What if, though, we use both COUNT(*) and COUNT(1) in the same statement? The average execution time should be in the order of 200 milliseconds, right?
```sql
select COUNT(*),COUNT(1) from largeTable
go 100
```
No, we still get the same average execution time, 100ms (ish). Why is that? Two operations at 100ms each should surely total 200ms. The answer, as so often, lies within the execution plan.
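To follow along, you need the actual execution plan. One way to capture it, as a sketch, is SET STATISTICS XML, which returns the ShowPlan XML alongside the results (or simply toggle "Include Actual Execution Plan" with Ctrl+M in SSMS before running):

```sql
-- capture the actual plan as ShowPlan XML
set statistics xml on
go
select COUNT(*),COUNT(1) from largeTable
go
set statistics xml off
go
```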
Let's look at the properties of the highlighted Compute Scalar operator.
Both of the output columns are derived from the SAME expression, Expr1006, which is the result of the Stream Aggregate. Looking at the properties of that operator, we see
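If you prefer to inspect this without the GUI, a sketch like the following (assuming the statement is still in the plan cache) pulls the ShowPlan XML via the plan-cache DMVs, so you can read the Compute Scalar's Defined Values directly from the XML:

```sql
-- fetch the cached plan XML for our test query
select st.text, qp.query_plan
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
cross apply sys.dm_exec_query_plan(qs.plan_handle) qp
where st.text like '%largeTable%'
  and st.text not like '%dm_exec_query_stats%' -- exclude this probe itself
```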
There is no mention of count(1) anywhere. Indeed, if we look at the query plan of
```sql
select COUNT(1) from largeTable
```
we still see the count(*) scalar operator used.
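This makes sense semantically too: COUNT(expression) counts the rows where the expression is non-null, and the literal 1 is never null, so COUNT(1) must match COUNT(*) on every row. A quick demonstration, using a hypothetical table variable @t for illustration:

```sql
-- COUNT(1) = COUNT(*) because the literal 1 is never null;
-- COUNT over a nullable column behaves differently
declare @t table (val integer null)
insert into @t (val) values (1), (NULL), (3)
select COUNT(*)   as count_star, -- 3: counts all rows
       COUNT(1)   as count_one,  -- 3: the literal 1 is never null
       COUNT(val) as count_col   -- 2: NULLs are not counted
from @t
```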
So not only have we now proved that there is no performance difference between the two; to the engine, they are the same operation.