December 1, 2005 at 4:36 am
Dear all
I am experiencing quite a strange situation with a query.
When the query is sent locally from SQL Query Analyzer it takes a couple of seconds (I can see this in SQL Profiler), but when exactly the same query is sent from the Visual Basic application it is a lot slower.
Looking at the values in SQL Profiler
I see that the query executed directly on the database needs 65221 reads and takes 0.5 secs.
And when it is the VB application that requests the query, it also needs 65221 reads but takes 7.7 secs!!
In fact SQL Profiler shows me the exact SQL query that the application requests, and I just copy-paste it and execute it locally, so I can't really see the difference, or why it takes a different amount of time in one case and in the other.
Does this make any sense?
Thanks for your help
Eduardo
December 1, 2005 at 5:56 am
Eduardo,
Can you enlighten us a bit more please? What is the query that you are sending from your VB application? Is it just basic declarative SQL, i.e. SELECT, INSERT, UPDATE, DELETE etc. commands? Have you thought about wrapping your SQL code into stored procedures and executing them from your VB application?
December 1, 2005 at 6:59 am
Here is the query; it is a bit of a monster.
I have just created a stored procedure, to see if there will be a difference, but need to get the programmers of the application to do their VB side.
SELECT DISTINCT a.id, '$' AS '$', a.device_type, a.id_device, a.datetime, a.latitude, a.longitude,
    ISNULL((SELECT twd_gain FROM weather_calibration WHERE id = a.id_calibration), 1) * CONVERT(numeric(18,2), ISNULL(wind_direction, 1))
        + ISNULL((SELECT twd_offset FROM weather_calibration WHERE id = a.id_calibration), 0) AS wind_direction,
    ISNULL((SELECT tws_gain FROM weather_calibration WHERE id = a.id_calibration), 1) * CONVERT(numeric(18,2), ISNULL(wind_speed, 1))
        + ISNULL((SELECT tws_offset FROM weather_calibration WHERE id = a.id_calibration), 0) AS wind_speed,
    b.send_data
FROM WEATHER_DATA a
INNER JOIN DEVICES b ON a.id_device = b.id_device
WHERE a.id > 8434459
    AND CONVERT(numeric, datetime) > 20051130115707
    AND (b.send_data = 1)
ORDER BY a.id
December 1, 2005 at 7:21 am
I see that the query executed directly on the database needs 65221 reads and takes 0.5 secs
And when it is the VB application that requests the query it needs also 65221 reads but takes 7.7 secs!!
7.2 secs to send/retrieve the data across the network is plausible, since the number of reads is the same.
Is there any good reason to put subqueries in the select list instead of using joins? (Since the subqueries are fired for each record.)
December 3, 2005 at 2:23 am
I thought that SQL Profiler gave the time it took to execute the query, not how long it took to execute and send the results back. Are you sure about this? I'm also running it on the local intranet, so we've got a fast Ethernet connection.
The reason we're using subqueries is that some of the tables used are not complete and have NULL values; we're looking at improving this.
December 4, 2005 at 2:14 pm
I can't find the exact definition for duration at the moment. But it could be that the application is holding locks till all rows are fetched.
Are the results both executed (Query Analyzer & application) from the same computer under the same conditions (time, network congestion, ...)?
Is SET NOCOUNT ON specified (reducing a roundtrip)?
Is the query stored in a stored procedure?
What is CONVERT(numeric, datetime) >20051130115707 adding to your query?
December 5, 2005 at 4:56 am
- If the application is holding locks, does that mean you suspect that it is executing more than one query, apart from the one I'm analyzing, and that this is locking it?
- The results are both executed on the same computer, one just after the other, with no network congestion.
- I'm afraid I don't know what SET NOCOUNT is, or how to tell if it's specified or not (I'm not really an SQL programmer, and have no experience to speak of in databases). Is this something I should enable?
- The query is not stored in a stored procedure.
- CONVERT(numeric, datetime) > 20051130115707 doesn't add anything; it has been removed since my post, as I noticed it was an unnecessary conversion and thought it could be slowing down the process. It is now datetime > '20051130115707'. However, it has made no noticeable difference to the execution time or to the gap between the two.
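For what it's worth, the reason that conversion could matter is index use: wrapping the column in CONVERT forces SQL Server to evaluate the expression for every row, while comparing the column directly lets the optimizer seek an index on it. This is only a sketch, since I can't tell from the thread whether [datetime] is stored as char(14) or as a true datetime column, nor whether an index on it exists:

```sql
-- Non-sargable: CONVERT runs on every row, so an index on [datetime] cannot be seeked
WHERE CONVERT(numeric, [datetime]) > 20051130115707

-- Sargable: the column is compared directly, so an index on [datetime] can be used
WHERE [datetime] > '20051130115707'
```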
December 5, 2005 at 5:33 am
- If the application is holding locks, does that mean you suspect that it is executing more than one query apart from the one i'm analyzing and that is locking it?
I've seen many MS Access applications that work like this:

Set rs = qdf.OpenRecordset() ' executes the SELECT
While rs.EOF = False ' locks stay open until the last record is retrieved
    MsgBox "Waiting for user input"
    rs.MoveNext
Wend

instead of:

If rs.RecordCount > 0 Then
    rs.MoveLast ' quickly retrieve all rows so the SELECT's locks can be freed
    rs.MoveFirst ' back to the first record, no locks held
    While rs.EOF = False
        ' ... process the record ...
        rs.MoveNext
    Wend
End If
- I'm afraid i don't know what SET NOCOUNT is, or how to tell if it's specified or not (I'm not really an SQL programmer, and have no experience to speak of in databases). Is this something i should enable?
SET NOCOUNT ON is a connection setting that tells SQL Server not to send a packet with the number of rows affected by each statement.
With NOCOUNT off you would retrieve data like this:
* send a command consisting of multiple statements
* process statement 1 -> send "x records affected" to the receiving computer
* process statement 2 -> send "y records affected"
Stops the message indicating the number of rows affected by a Transact-SQL statement from being returned as part of the results.
SET NOCOUNT { ON | OFF }
When SET NOCOUNT is ON, the count (indicating the number of rows affected by a Transact-SQL statement) is not returned. When SET NOCOUNT is OFF, the count is returned.
The @@ROWCOUNT function is updated even when SET NOCOUNT is ON.
SET NOCOUNT ON eliminates the sending of DONE_IN_PROC messages to the client for each statement in a stored procedure. When using the utilities provided with Microsoft® SQL Server™ to execute queries, SET NOCOUNT ON prevents "nn rows affected" from being displayed at the end of Transact-SQL statements such as SELECT, INSERT, UPDATE, and DELETE.
For stored procedures that contain several statements that do not return much actual data, this can provide a significant performance boost because network traffic is greatly reduced.
The setting of SET NOCOUNT is set at execute or run time and not at parse time.
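As a sketch of how it's used: you either issue it once when the connection opens, or put it at the top of a stored procedure. The procedure name and parameter below are made up for illustration, not taken from your application:

```sql
-- Hypothetical wrapper for the weather query; name, parameter and column list are illustrative only.
CREATE PROCEDURE dbo.usp_GetWeatherData
    @min_id int
AS
SET NOCOUNT ON -- suppress the "nn rows affected" packet for each statement
SELECT a.id, a.datetime, a.wind_direction, a.wind_speed
FROM WEATHER_DATA a
WHERE a.id > @min_id
ORDER BY a.id
```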
*datetime
Which table does [datetime] come from? (It's a reserved word.)
December 22, 2005 at 10:37 am
Two things to add here:
1. It seems possible to me that you are seeing a benefit the second time you run it because of caching on the SQL Server. SQL Server shouldn't care what application is running the query. To test this, try running it from the VB application two times in a row. Is it faster the second time? Also try running it from SQL Query Analyzer then VB to see what happens.
2. You can make this query more efficient using left outer joins. It will also be much more readable.
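A sketch of that rewrite, assuming weather_calibration.id matches a.id_calibration at most once, and that wind_direction and wind_speed come from WEATHER_DATA (untested against the real schema):

```sql
SELECT DISTINCT a.id, '$' AS '$', a.device_type, a.id_device, a.datetime, a.latitude, a.longitude,
    -- ISNULL supplies the same defaults the subqueries used (gain 1, offset 0)
    ISNULL(c.twd_gain, 1) * CONVERT(numeric(18,2), ISNULL(a.wind_direction, 1))
        + ISNULL(c.twd_offset, 0) AS wind_direction,
    ISNULL(c.tws_gain, 1) * CONVERT(numeric(18,2), ISNULL(a.wind_speed, 1))
        + ISNULL(c.tws_offset, 0) AS wind_speed,
    b.send_data
FROM WEATHER_DATA a
INNER JOIN DEVICES b ON a.id_device = b.id_device
LEFT OUTER JOIN weather_calibration c ON c.id = a.id_calibration
WHERE a.id > 8434459
    AND a.datetime > '20051130115707'
    AND b.send_data = 1
ORDER BY a.id
```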