April 4, 2013 at 3:58 am
Dear All
I am running a process (updates/deletes/inserts) of 2 million records. After some time this process makes my system very slow.
How do I find out what the issue is?
In Task Manager it shows:
CPU usage max of 15%
Memory at 4.14 GB
Regards
Kirshan1
April 4, 2013 at 7:29 am
Check wait stats while the operation is running
April 4, 2013 at 7:21 pm
How to check this?
April 4, 2013 at 9:40 pm
April 5, 2013 at 12:15 am
From this website, download a book called Troubleshooting SQL Server: A Guide for the Accidental DBA by Jonathan Kehayias and Ted Krueger
Chapter 1.
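Until you have worked through that, a quick way to see what the load is waiting on right now is to query sys.dm_exec_requests while the process is running. A minimal sketch (the filters just hide system sessions):

-- What each active request is waiting on at this moment
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,          -- ms spent in the current wait
       r.last_wait_type,
       r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50       -- skip system sessions
  AND r.session_id <> @@SPID; -- skip this query itself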
April 5, 2013 at 1:21 am
Krishna1 (4/4/2013)
Dear All
I am running a process (updates/deletes/inserts) of 2 million records. After some time this process makes my system very slow.
How do I find out what the issue is?
In Task Manager it shows:
CPU usage max of 15%
Memory at 4.14 GB
Regards
Kirshan1
Post the estimated plan for folks to have a look now, and then the actual execution plan when you can.
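If you're not sure how to capture them, roughly this works from a query window (the batch to profile goes where the comment is):

-- Estimated plan: returns the XML plan without executing the batch
SET SHOWPLAN_XML ON;
GO
-- your UPDATE/DELETE/INSERT batch here
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: executes the batch and returns the plan with runtime figures
SET STATISTICS XML ON;
GO
-- your UPDATE/DELETE/INSERT batch here
GO
SET STATISTICS XML OFF;
GO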
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
April 7, 2013 at 4:33 am
I have run the following query from the site mentioned:
WITH [Waits] AS
(SELECT
[wait_type],
[wait_time_ms] / 1000.0 AS [WaitS],
([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [ResourceS],
[signal_wait_time_ms] / 1000.0 AS [SignalS],
[waiting_tasks_count] AS [WaitCount],
100.0 * [wait_time_ms] / SUM ([wait_time_ms]) OVER() AS [Percentage],
ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum]
FROM sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (
N'CLR_SEMAPHORE', N'LAZYWRITER_SLEEP',
N'RESOURCE_QUEUE', N'SQLTRACE_BUFFER_FLUSH',
N'SLEEP_TASK', N'SLEEP_SYSTEMTASK',
N'WAITFOR', N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',
N'XE_TIMER_EVENT', N'XE_DISPATCHER_JOIN',
N'LOGMGR_QUEUE', N'FT_IFTS_SCHEDULER_IDLE_WAIT',
N'BROKER_TASK_STOP', N'CLR_MANUAL_EVENT',
N'CLR_AUTO_EVENT', N'DISPATCHER_QUEUE_SEMAPHORE',
N'TRACEWRITE', N'XE_DISPATCHER_WAIT',
N'BROKER_TO_FLUSH', N'BROKER_EVENTHANDLER',
N'FT_IFTSHC_MUTEX', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
N'DIRTY_PAGE_POLL', N'SP_SERVER_DIAGNOSTICS_SLEEP')
)
SELECT
[W1].[wait_type] AS [WaitType],
CAST ([W1].[WaitS] AS DECIMAL(14, 2)) AS [Wait_S],
CAST ([W1].[ResourceS] AS DECIMAL(14, 2)) AS [Resource_S],
CAST ([W1].[SignalS] AS DECIMAL(14, 2)) AS [Signal_S],
[W1].[WaitCount] AS [WaitCount],
CAST ([W1].[Percentage] AS DECIMAL(4, 2)) AS [Percentage],
CAST (([W1].[WaitS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgWait_S],
CAST (([W1].[ResourceS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgRes_S],
CAST (([W1].[SignalS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgSig_S]
FROM [Waits] AS [W1]
INNER JOIN [Waits] AS [W2]
ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum], [W1].[wait_type], [W1].[WaitS],
[W1].[ResourceS], [W1].[SignalS], [W1].[WaitCount], [W1].[Percentage]
HAVING SUM ([W2].[Percentage]) - [W1].[Percentage] < 95; -- percentage threshold
GO
The result is:
PAGEIOLATCH_SH 48.96%
SLEEP_BPOOL_FLUSH 30.46%
PAGEIOLATCH_EX 13.40%
IO_COMPLETION 4.05%
When my process is running, it's very difficult to open any other window/explorer ....
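Since the top waits are I/O related, one follow-up check worth running is per-file latency; a sketch using sys.dm_io_virtual_file_stats:

SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS AvgReadStallMs,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS AvgWriteStallMs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY AvgWriteStallMs DESC;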
April 8, 2013 at 1:36 am
Krishna1 (4/7/2013)
...When my process is running, it's very difficult to open any other window/explorer ....
Are you running this process on <local>?
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
April 8, 2013 at 3:27 am
Yes. SQL Server is installed on my laptop and I am doing some testing.
April 8, 2013 at 3:42 am
What are your autogrowth settings? If you're not sure about the effect autogrowth can have on performance and how and why to change the settings, this excellent article by Greg Larsen will take you a long way.
The article includes a script for looking at autogrowth events which have already occurred. Run it and check the results - you may have a quick win here.
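The script in the article reads the default trace; a minimal sketch of that kind of query (the article's version may differ in detail):

-- Auto-growth events recorded in the default trace
DECLARE @TracePath NVARCHAR(260);

SELECT @TracePath = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT te.name AS EventName,            -- Data File Auto Grow / Log File Auto Grow
       tg.DatabaseName,
       tg.FileName,
       tg.StartTime,
       tg.Duration / 1000 AS DurationMs -- Duration is reported in microseconds
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS tg
JOIN sys.trace_events AS te
  ON tg.EventClass = te.trace_event_id
WHERE tg.EventClass IN (92, 93)         -- 92 = data file growth, 93 = log file growth
ORDER BY tg.StartTime DESC;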
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
April 8, 2013 at 8:11 am
Thanks for the link, it was really informative. I changed the setting after reading the article.
My log file is increasing a lot - about 10 MB every 8-9 seconds. What may be the reason?
I am reading records from a table of a million rows and updating one column in the same table, at most 10,000 rows at a time. This update is inside BEGIN TRAN and COMMIT TRAN, so why should the log file increase?
Note: I have one index which has the updated column in its INCLUDE list.
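In other words, the loop looks roughly like this (table and column names here are placeholders, not the real ones):

DECLARE @BatchSize INT = 10000;

WHILE 1 = 1
BEGIN
    BEGIN TRAN;

    -- update at most @BatchSize rows per transaction
    UPDATE TOP (@BatchSize) dbo.SourceTable
    SET    SomeColumn = 'new value'
    WHERE  SomeColumn <> 'new value';   -- rows still to be processed

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRAN;
        BREAK;
    END;

    COMMIT TRAN;
END;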
April 8, 2013 at 9:12 am
Your transaction log is currently too small to accommodate logging of the changes you are making to your tables in this series of statements.
Check which recovery model you are using. If your local instance is for training / testing, you probably don't need logging for backup, in which case setting your recovery model to Simple is likely to help. For a detailed look at transaction logs and an explanation of why this is the case, try this article by Gail Shaw.
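Checking and changing it is a one-liner each (the database name below is a placeholder):

-- Current recovery model of every database
SELECT name, recovery_model_desc
FROM sys.databases;

-- For a local test instance where point-in-time recovery isn't needed
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;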
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
April 9, 2013 at 10:50 am
Thanks for the link. Autogrowth was 10%, so every few seconds it was increasing the log. After I changed it to a few MB the performance has improved.
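For reference, the same change can be scripted rather than done through the GUI; a sketch with placeholder database and logical file names:

-- Switch the log file from 10% growth to a fixed increment
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = N'MyDatabase_log', FILEGROWTH = 64MB);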
I have the following code:
-- table name is a placeholder; @variable is assumed to be declared and set earlier
WHILE 1 = 1
BEGIN
    BEGIN TRAN;

    DELETE TOP (1000)
    FROM dbo.SourceTable
    OUTPUT deleted.* INTO dbo.DeleteHistory
    WHERE PrimaryKeyCol = @variable;

    IF @@ROWCOUNT < 1000
    BEGIN
        COMMIT TRAN;   -- commit the final partial batch before leaving the loop
        BREAK;
    END;

    COMMIT TRAN;
END;
The output table (DeleteHistory) is also in the same database. I noticed that in this case it is now increasing the tempdb log and data files as well. If I am deleting records and simply adding the same information to another table, why should the data and log files increase?
Also, there is a COMMIT TRAN after every 1000 records; if the process breaks, then it takes a long time to recover.
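For reference, the growth can be watched while the loop runs with something like this (a sketch):

-- Log size and percent used for every database
DBCC SQLPERF(LOGSPACE);

-- What tempdb space is being used for
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS UserObjectsMB,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS InternalObjectsMB,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS VersionStoreMB,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS FreeMB
FROM tempdb.sys.dm_db_file_space_usage;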
April 10, 2013 at 12:59 am
Which recovery model is your db set to?
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
April 10, 2013 at 1:06 am
recovery model - simple