February 14, 2003 at 7:27 am
Executing a query is extremely slow. It works normally after I reboot the server. What else can I do on the production database?
February 17, 2003 at 8:38 am
One query, or all queries? Can you post some code?
February 17, 2003 at 10:47 am
It happens on some complicated queries that join several tables. These tables are updated frequently. The query ran fine for several months with an execution time of about 1 second, but now it takes over 30 seconds. It is called from a VB application, so VB gets a timeout error.
I know that in Oracle, dbms_utility.analyze_schema will help. Do you know of any equivalent in SQL Server?
I am posting one query that causes the problem:
select distinct
    c.id as 'File ID',
    c.recordcreatedatetime as 'FileCreateDateTime'
from webholterpath a
inner join webusers b
    on a.userid = b.userid
left outer join webholterfiles c
    on a.filefolder = c.filefolder
left outer join mailservicedetaillog d
    on a.userid = d.userid
where a.userid = 'me'
  and d.mailid = 3
  and datediff(minute,
               isnull((select max(completedatetime)
                       from mailservicedetaillog
                       where userid = 'me' and mailid = 3), getdate() - 1),
               c.recordcreatedatetime) >= 30
order by 1
February 17, 2003 at 11:23 am
My best advice is to look at the execution plan for each query and check whether indexes are being used on each of the join columns. If they are not, and you see a large number of table scans (SET STATISTICS IO ON to measure them), then redesign the indexing or the query so it can take advantage of indexes. The LEFT OUTER JOINs are certainly not helping, and neither is the aggregate subquery in the WHERE clause. My guess is that a lot of scanning is being done on the mailservicedetaillog table; it may be useful to turn that into a derived table. If you could post the STATISTICS IO output, that would help.
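A minimal sketch of what Jay describes, using the table and column names from the query posted above (adjust them to your schema). Note the derived-table version is not an exact drop-in replacement: the original WHERE d.mailid = 3 also required at least one matching log row, so verify the results against the original query.

-- Report the logical/physical reads per table for each statement that follows
set statistics io on

-- Rough rewrite with the aggregate pulled into a derived table, so
-- mailservicedetaillog is aggregated once per userid instead of being
-- re-scanned by the subquery in the WHERE clause
select distinct
    c.id as 'File ID',
    c.recordcreatedatetime as 'FileCreateDateTime'
from webholterpath a
inner join webusers b
    on a.userid = b.userid
left outer join webholterfiles c
    on a.filefolder = c.filefolder
left outer join (select userid, max(completedatetime) as lastcomplete
                 from mailservicedetaillog
                 where mailid = 3
                 group by userid) d
    on a.userid = d.userid
where a.userid = 'me'
  and datediff(minute, isnull(d.lastcomplete, getdate() - 1),
               c.recordcreatedatetime) >= 30
order by 1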
February 17, 2003 at 3:38 pm
Thanks, Jay:
It works now that I have used an inner join instead of the left outer join. I am still wondering about other queries that have to use a left outer join.
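For the queries that do need to stay as outer joins: the reason the original query behaved like an inner join anyway is that the WHERE clause filtered on a column of the outer table (d.mailid = 3), which throws away the NULL rows the outer join produces. Moving that kind of predicate into the ON clause keeps the outer-join semantics. A small sketch using the same tables (the column list is only illustrative):

select a.userid, d.completedatetime
from webholterpath a
left outer join mailservicedetaillog d
    on a.userid = d.userid
   and d.mailid = 3          -- filter on the outer table belongs in the ON clause
where a.userid = 'me'         -- filters on the preserved table stay in WHERE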
February 20, 2003 at 1:46 am
Try these solutions:
1- Statistics
Right-click your database, choose Properties, and on the Options tab turn on Auto create statistics and Auto update statistics.
2- Indexes
Try to add indexes on the columns used in your join and WHERE criteria (a T-SQL sketch of both follows below).
Hope this helps.
Bye,
Alamir Mohamed
Alamir_mohamed@yahoo.com
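If you prefer to script this instead of using the Properties dialog, a minimal T-SQL sketch follows; the database name is a placeholder, and the index names and column choices are only illustrative picks based on the query posted above.

-- Equivalent of the Options-tab checkboxes
alter database YourDatabase set auto_create_statistics on
alter database YourDatabase set auto_update_statistics on

-- Example indexes on the join/filter columns from the problem query
create index IX_webholterpath_userid
    on webholterpath (userid, filefolder)
create index IX_mailservicedetaillog_userid_mailid
    on mailservicedetaillog (userid, mailid, completedatetime)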
February 20, 2003 at 4:31 am
The SQL Server equivalent of dbms_utility.analyze_schema is sp_updatestats; it does not need any parameters. Alamir's suggestion of auto update statistics does this automatically after a certain number of modifications.
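For reference, the call is as simple as it sounds; you can also refresh a single table with UPDATE STATISTICS (the table name below is taken from the query above, purely as an example).

-- Refresh statistics on every user table in the current database
exec sp_updatestats

-- Or refresh just one table
update statistics webholterfiles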
February 24, 2003 at 1:10 am
Here is the solution as I see it:
When you run a report based on complicated queries that join several frequently updated tables, you take locks on those tables (or wait for the transactions holding locks on them to complete before you can get them), so everyone else using those tables is blocked too.
The best thing is to run the query on the server (a pass-through query) and add "with (nolock)" after every table name, so you neither wait for other transactions' locks nor take locks on the tables yourself.
You would write it like this:
select kassima.*, Situation.*
from Situation with (nolock)
inner join kassima with (nolock)
    on Situation.SituationID = kassima.Situation
I hope this helps.
Bye,
Alamir Mohamed
Alamir_mohamed@yahoo.com
February 24, 2003 at 4:25 am
I would say, given the fact you state:
quote:
It works normally after I reboot the server.
that it is most likely due to insufficient RAM on the server. The pattern is that it works fine right after a restart and slows down as the server keeps running: the data and execution plans are getting pushed out of memory by other work, so the query engine has to start from scratch instead of reading from the cache.
If you have a large amount of RAM, make sure the server is configured properly to use it. By default SQL Server can only use the first 2GB of RAM; there are other options you can enable to change that (a sketch follows below).
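A rough sketch of the relevant server options, assuming SQL Server 2000 Enterprise Edition on an OS that supports AWE; the memory value is a placeholder, and AWE also needs the Lock Pages in Memory right plus a service restart to take effect.

-- View/change memory-related options (values are placeholders)
exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'awe enabled', 1          -- lets SQL Server 2000 EE use RAM above the 2GB line
exec sp_configure 'max server memory (MB)', 6144
reconfigure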
To verify a potential memory bottleneck, open Performance Monitor and look at the following counters:
Memory: Pages/sec = the more paging that occurs, the more I/O overhead your server experiences, which in turn can decrease the performance of SQL Server.
Memory: Available Bytes = this value should be greater than 5MB. On a server dedicated to SQL Server, SQL Server attempts to maintain 4-10MB of free physical memory; if it drops below 4MB, the server will show performance lags.
SQLServer:Memory Manager: Total Server Memory (KB) and SQLServer:Memory Manager: Target Server Memory (KB) = the first counter tells you how much memory the mssqlserver service is currently using; the second tells you how much memory SQL Server would like to have in order to operate efficiently. If your SQL Server has enough memory, these two counters will be identical.
There are others you could look at, but these show most bottlenecks just fine.
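If you would rather check the last two counters from inside SQL Server instead of Performance Monitor, they are also exposed in master.dbo.sysperfinfo on SQL Server 2000; shown as a sketch, with LIKE used because the counter-name strings can carry trailing spaces.

-- Total vs. Target server memory, read from the server's own counter table
select counter_name, cntr_value
from master.dbo.sysperfinfo
where object_name like '%Memory Manager%'
  and counter_name like '%Server Memory (KB)%'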
Also, in regards to the nolock hint, you need to keep the following in mind.
quote:
Do not issue shared locks and do not honor exclusive locks. When this option is in effect, it is possible to read an uncommitted transaction or a set of pages that are rolled back in the middle of a read. Dirty reads are possible. Only applies to the SELECT statement.
Meaning: if User A is doing an insert or update while User B runs a query that includes that uncommitted value, and User A then rolls back, the values User B got back were never valid. If those queries feed reports where the values are critical, you are producing invalid reports.
Just be sure the end result is what you want it to be.
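A small two-connection illustration of the rollback scenario Antares686 describes; the table and values are made up purely for the example.

-- Connection A: starts a change but has not committed yet
begin tran
update accounts set balance = balance + 1000 where accountid = 42

-- Connection B, at the same time: NOLOCK reads the uncommitted +1000
select balance
from accounts with (nolock)
where accountid = 42

-- Connection A: changes its mind
rollback tran
-- Connection B's report now contains a value that never officially existed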
February 24, 2003 at 6:21 am
What Antares686 says is true.
Reports based on queries that use with (nolock) may return data that has not been committed yet, so they can produce invalid data.
Bye,
Alamir Mohamed
Alamir_mohamed@yahoo.com