April 4, 2012 at 8:37 am
Hi there,
I'm trying to check which queries are running on our server, and I'm using this query:
-- Currently executing requests, with statement text, session info and plan
SELECT
    db.name AS DB,
    sid.login_name,
    sqltext.TEXT,
    req.session_id AS SID,
    req.start_time,
    req.status,
    req.command,
    req.cpu_time,
    req.logical_reads,
    req.total_elapsed_time / 1000 AS total_elapsed_time_seconds,
    req.last_wait_type,
    req.wait_time,
    plan_used.query_plan
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS sqltext
CROSS APPLY sys.dm_exec_query_plan(req.plan_handle) AS plan_used
JOIN sys.databases db ON req.database_id = db.database_id
JOIN sys.dm_exec_sessions sid ON sid.session_id = req.session_id
ORDER BY req.total_elapsed_time DESC
One of the rows returned says that one SID has done 367,257,823 logical reads.
WHAT???? I'm very new to SQL Server DBA work, but is the right way to calculate the amount of buffer this guy is using something like this?
number of logical reads * 8 (KB per page)??
This returns a huge number...
What am I doing wrong here?
I know that logical reads are pages coming from the buffer cache, but I'm trying to understand the volume of data used by the query.
Thanks for your help.
April 4, 2012 at 8:56 am
It's read a total of 2.8 TB of data from cache since it connected. That could be a 2GB table read 1000 times, it could be a 200 MB table read 10000 times or anything like that.
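To sanity-check that figure yourself, the arithmetic can be run as a quick T-SQL one-liner (each logical read is one 8 KB page):
-- 367,257,823 logical reads * 8 KB per page, converted up to TB
-- (cast to bigint, since the intermediate result overflows int)
SELECT CAST(367257823 AS bigint) * 8 / 1024.0 / 1024.0 / 1024.0 AS total_read_TB;
-- returns ~2.74, i.e. roughly the 2.8 TB quoted above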
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 4, 2012 at 9:01 am
Those are very cool numbers 🙂 !!
If I kill his session those numbers reset, and the guy just starts the query again (this is coming from Reporting Services; someone is generating the report online).
I'm very confused about this situation; that's a lot of data that we don't really have.
In which situations could a 2GB table be read 1000 times?
Thank you for your time.
GilaMonster (4/4/2012)
It's read a total of 2.8 TB of data from cache since it connected. That could be a 2GB table read 1000 times, it could be a 200 MB table read 10000 times or anything like that.
April 4, 2012 at 9:07 am
P.S. I've seen single stored procedures with logical IO of > 100 000 000 (1.4 TB of data) in a database that is 50 GB in size.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 4, 2012 at 9:08 am
tiago.palhota (4/4/2012)
In which situations a 2GB table could be read 1000times?
Many, many situations. Usually around poor coding or poor indexing.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 4, 2012 at 9:17 am
tiago.palhota (4/4/2012)
Those are very cool numbers 🙂 !! If I kill his session those numbers reset, and the guy just starts the query again (this is coming from Reporting Services; someone is generating the report online).
I'm very confused about this situation; that's a lot of data that we don't really have.
In which situations could a 2GB table be read 1000 times?
Thank you for your time.
GilaMonster (4/4/2012)
It's read a total of 2.8 TB of data from cache since it connected. That could be a 2GB table read 1000 times, it could be a 200 MB table read 10000 times or anything like that.
Is the query written by the user, or is he using Reporting Services models (.smdl)? I've occasionally caught the queries thrown by SSRS via a model in Profiler and been completely baffled as to what they do. My favorite was a case statement (a Switch, in RS terms) that turned a simple
select field from table
query into
select field as field1, field as field2, field as field3... field as field15 from table
This was nested at the 5th level of nested subqueries in the 2nd join of different subqueries... I tried to figure out what was going on and just gave up. We've decided models are crap and are instead planning to teach SQL to the report developers (who typically are secretaries).
April 4, 2012 at 9:23 am
Hi there,
This is a user-written statement; it's a huge query dealing with a lot of transactions and grouping them. It has a huge plan, very hard for me (a newbie) to understand and act on, but I'm trying.
I had to kill the session, and the logical reads were 663,453,008.
I've never seen a number as huge as this.
Please, can you help me understand how a table can be read multiple times, like a 200MB table 10000 times?
Thanks
mtassin (4/4/2012)
tiago.palhota (4/4/2012)
Those are very cool numbers 🙂 !! If I kill his session those numbers reset, and the guy just starts the query again (this is coming from Reporting Services; someone is generating the report online).
I'm very confused about this situation; that's a lot of data that we don't really have.
In which situations could a 2GB table be read 1000 times?
Thank you for your time.
GilaMonster (4/4/2012)
It's read a total of 2.8 TB of data from cache since it connected. That could be a 2GB table read 1000 times, it could be a 200 MB table read 10000 times or anything like that.
Is the query written by the user, or is he using Reporting Services models (.smdl)? I've occasionally caught the queries thrown by SSRS via a model in Profiler and been completely baffled as to what they do. My favorite was a case statement (a Switch, in RS terms) that turned a simple
select field from table
query into
select field as field1, field as field2, field as field3... field as field15 from table
This was nested at the 5th level of nested subqueries in the 2nd join of different subqueries... I tried to figure out what was going on and just gave up. We've decided models are crap and are instead planning to teach SQL to the report developers (who typically are secretaries).
April 4, 2012 at 9:28 am
As I said, it has to do with bad coding and bad indexing.
Something like this is a trivial example:
DECLARE CURSOR FOR <select that returns 200 products>
WHILE <not end of cursor>
SELECT SUM(TotalOrdered) FROM ReallyHugeTable WHERE ProductID = <current product from cursor>
If the right indexes aren't in place, that could do 200 table scans. Even if the right indexes are in place, that's an inefficient way to write the query.
There are many, many, many other things that do the same thing.
It comes down to inefficient code and poor indexing; I can't say more without seeing the details.
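For illustration only, here is a minimal runnable version of that pattern (Products is an invented table name; the other names come from the pseudocode above), followed by the set-based rewrite that reads the big table once:
DECLARE @ProductID int, @Total int;

-- Cursor over ~200 products (Products is a hypothetical source table)
DECLARE product_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT ProductID FROM Products;

OPEN product_cur;
FETCH NEXT FROM product_cur INTO @ProductID;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Without a suitable index on ProductID, each iteration
    -- scans ReallyHugeTable again: 200 products = 200 scans
    SELECT @Total = SUM(TotalOrdered)
    FROM ReallyHugeTable
    WHERE ProductID = @ProductID;

    FETCH NEXT FROM product_cur INTO @ProductID;
END

CLOSE product_cur;
DEALLOCATE product_cur;

-- The set-based equivalent touches the big table only once:
SELECT ProductID, SUM(TotalOrdered) AS TotalOrdered
FROM ReallyHugeTable
GROUP BY ProductID;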
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 4, 2012 at 9:32 am
GilaMonster, sorry, I didn't see your reply.
I'm going to look at the statement, and I will return with some more info.
Thanks for your help.
GilaMonster (4/4/2012)
As I said, it has to do with bad coding and bad indexing. Something like this is a trivial example:
DECLARE CURSOR FOR <select that returns 200 products>
WHILE <not end of cursor>
SELECT SUM(TotalOrdered) FROM ReallyHugeTable WHERE ProductID = <current product from cursor>
If the right indexes aren't in place, that could do 200 table scans. Even if the right indexes are in place, that's an inefficient way to write the query.
There are many, many, many other things that do the same thing.
It comes down to inefficient code and poor indexing; I can't say more without seeing the details.
April 4, 2012 at 9:42 am
OK! Now I'm very, very, very surprised!
SQL Server returned all the rows in 8 SECONDS!
In Reporting Services it is taking more than 1 hour and is doing the logical reads that I said... this is really a wtf???
Total rows returned: 39,870.
Is this the problem? Too many rows? This is a transaction dump from one of our tables.
By the way, the logical reads from SQL Server Management Studio were 34,262!
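(The Management Studio number can be captured with SET STATISTICS IO, a standard technique; shown here as an illustration, not necessarily how the figure above was obtained:)
SET STATISTICS IO ON;
-- run the report's query here; the Messages tab then shows
-- logical reads per table for the statement
SET STATISTICS IO OFF;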
Edit: My boss dumped all the rows into a single table and fed the report from that single table, and voila, it was very fast.
I cannot understand the behaviour of Reporting Services in this case. Any idea?
GilaMonster (4/4/2012)
As I said, it has to do with bad coding and bad indexing. Something like this is a trivial example:
DECLARE CURSOR FOR <select that returns 200 products>
WHILE <not end of cursor>
SELECT SUM(TotalOrdered) FROM ReallyHugeTable WHERE ProductID = <current product from cursor>
If the right indexes aren't in place, that could do 200 table scans. Even if the right indexes are in place, that's an inefficient way to write the query.
There are many, many, many other things that do the same thing.
It comes down to inefficient code and poor indexing; I can't say more without seeing the details.