January 4, 2012 at 8:00 am
Ninja's_RGR'us (1/4/2012)
Marco V (1/4/2012)
Dave Ballantyne (1/4/2012)
Ninja's_RGR'us (1/4/2012)
Dave Ballantyne (1/4/2012)
That being the case, maybe a simple SSIS package would be better for you here?
How would this go in anything less than 6 minutes in SSIS?
How would SSIS correct the design flaw?
It's not going to help *much* and certainly not solve anything, but SSMS is slow *IMO* at consuming results and drawing the screen. If all the OP is doing afterwards is right-clicking and choosing Save As, then it may go down to 5 mins or so.
In fact, OP, as an experiment, try turning on the 'Discard query results' check box. What is the time difference?
I'm using 'results to file' so I don't have that overhead. On a side note, it won't even complete if I choose any other output method, as it will run out of memory (SSMS, that is).
Won't run out of ram if you discard the results.
If your powerful dev machine can't handle that much data, what will the users' machines do! 😉
Got 'discard query results' enabled: 29 seconds...
So can I conclude that the bottleneck is writing the results into memory and then to the output file?
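For reference, an export like this can also bypass SSMS entirely by running bcp in queryout mode, which streams rows straight to the file. A sketch with placeholder server, database, and path names (none of these are from this thread):

```
bcp "SELECT MeterID, ActualTimeStamp, value FROM MyDatabase.dbo.MeterData" queryout "C:\export\meterdata.txt" -c -t"," -S MyServer -T
```

Here -c exports character data, -t sets the field terminator, -S names the server, and -T uses a trusted connection. Because nothing is rendered in a grid, the SSMS memory overhead never enters the picture.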
January 4, 2012 at 8:03 am
Of course.
What does this return?
/*
Cost threshold for parallelism (CXPACKET) http://sqlblog.com/blogs/jonathan_kehayias/archive/2010/01/19/tuning-cost-threshold-of-parallelism-from-the-plan-cache.aspx
Paul White: Understanding parallelism http://www.simple-talk.com/sql/learn-sql-server/understanding-and-using-parallelism-in-sql-server/
Microsoft White Paper on waits http://download.microsoft.com/download/4/7/a/47a548b9-249e-484c-abd7-29f31282b04d/Performance_Tuning_Waits_Queues.doc
Next query by Paul Randal http://www.sqlskills.com/BLOGS/PAUL/post/Wait-statistics-or-please-tell-me-where-it-hurts.aspx
*/
WITH Waits AS
(SELECT
wait_type,
wait_time_ms / 1000.0 AS WaitS,
(wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
signal_wait_time_ms / 1000.0 AS SignalS,
waiting_tasks_count AS WaitCount,
100.0 * wait_time_ms / SUM (wait_time_ms) OVER() AS Percentage,
ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS RowNum
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
'CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'RESOURCE_QUEUE', 'SLEEP_TASK',
'SLEEP_SYSTEMTASK', 'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'LOGMGR_QUEUE',
'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BROKER_TO_FLUSH',
'BROKER_TASK_STOP', 'CLR_MANUAL_EVENT', 'CLR_AUTO_EVENT', 'DISPATCHER_QUEUE_SEMAPHORE',
'FT_IFTS_SCHEDULER_IDLE_WAIT', 'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'BROKER_EVENTHANDLER',
'TRACEWRITE', 'FT_IFTSHC_MUTEX', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', 'BROKER_RECEIVE_WAITFOR')
)
SELECT
W1.wait_type AS WaitType,
CAST (W1.WaitS AS DECIMAL(14, 2)) AS Wait_S,
CAST (W1.WaitS * 1000 / W1.WaitCount AS DECIMAL(14, 2)) AS AvgWait_MS,
CAST (W1.ResourceS AS DECIMAL(14, 2)) AS Resource_S,
CAST (W1.SignalS AS DECIMAL(14, 2)) AS Signal_S,
W1.WaitCount AS WaitCount,
CAST (W1.Percentage AS DECIMAL(4, 2)) AS Percentage
FROM Waits AS W1
INNER JOIN Waits AS W2
ON W2.RowNum <= W1.RowNum
GROUP BY W1.RowNum, W1.wait_type, W1.WaitS, W1.ResourceS, W1.SignalS, W1.WaitCount, W1.Percentage
HAVING SUM (W2.Percentage) - W1.Percentage < 97 -- percentage threshold
AND W1.WaitCount > 0;
GO
January 4, 2012 at 8:09 am
Results are in the attachment.
January 4, 2012 at 8:12 am
Nothing weird in there.
It just takes that long to process that much data.
January 4, 2012 at 8:30 am
So, as a summary to conclude this case for me:
Running through the data takes between 20 and 30 seconds (MAXDOP 1 is a bit faster).
I also rewrote the query a bit, but I'm not a coding hero (still learning):
SELECT MeterID, ActualTimeStamp, value
FROM MeterData
WHERE MeterID IN (SELECT ObjectID
                  FROM TreeView
                  WHERE ObjectID IN (SELECT MeterID
                                     FROM Meter
                                     WHERE MeterTypeID = 9
                                       AND SaveData = 1))
OPTION (MAXDOP 1);
I hope this is what Grant meant by avoiding outer joins. This is running the fastest of all.
The problem lies in processing the amount of data, not in the query/index itself.
If any of these conclusions are not correct, I'd like to hear it.
I learned a lot from this discussion and about where to look. Thanks, all!
January 4, 2012 at 8:34 am
Close enough. I generally don't like hints but this one is fine.
January 4, 2012 at 8:39 am
Marco V (1/4/2012)
Hello, I have a database with a table (MeterData) which has a little over 70 million rows. New rows are added often; to be precise, every 15 minutes between 500 and 800 energy readers write their latest value into that table.
There is a SELECT statement that runs once in a while over the data, which takes between 06:50 and 08:00 minutes.
...
Can the consumer of the data accept a diff rather than the whole lot? If you're only INSERTing, this would be trivial at your end.
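A minimal sketch of that diff approach, assuming new rows always arrive with an increasing ActualTimeStamp. The watermark table and its column names are invented for illustration, not from the OP's schema:

```sql
-- Hypothetical watermark table tracking the last exported timestamp
CREATE TABLE dbo.ExportWatermark (LastExported DATETIME NOT NULL);

DECLARE @From DATETIME, @To DATETIME;
SELECT @From = LastExported FROM dbo.ExportWatermark;
SET @To = GETDATE();

-- Export only the rows inserted since the last run
SELECT MeterID, ActualTimeStamp, value
FROM dbo.MeterData
WHERE ActualTimeStamp > @From
  AND ActualTimeStamp <= @To;

-- Advance the watermark for the next run
UPDATE dbo.ExportWatermark SET LastExported = @To;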
For better assistance in answering your questions, please read this.
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop / Jeff Moden
January 4, 2012 at 8:43 am
Marco V (1/4/2012)
So, as a summary to conclude this case for me: running through the data takes between 20 and 30 seconds (MAXDOP 1 is a bit faster).
I also rewrote the query a bit, but I'm not a coding hero (still learning):
SELECT MeterID, ActualTimeStamp, value
FROM MeterData
WHERE MeterID IN (SELECT ObjectID
                  FROM TreeView
                  WHERE ObjectID IN (SELECT MeterID
                                     FROM Meter
                                     WHERE MeterTypeID = 9
                                       AND SaveData = 1))
OPTION (MAXDOP 1);
I hope this is what Grant meant by avoiding outer joins. This is running the fastest of all.
The problem lies in processing the amount of data, not in the query/index itself.
If any of these conclusions are not correct, I'd like to hear it.
I learned a lot from this discussion and about where to look. Thanks, all!
Nope. Sorry.
I meant the JOIN syntax:
SELECT ...
FROM dbo.TableA AS a
JOIN dbo.TableB AS b
ON a.SomeID = b.SomeID
LEFT JOIN dbo.TableC as c
ON b.SomeOtherID = c.SomeOtherID
WHERE...
That's ANSI standard JOIN syntax. Here's the issue that you're going to run into.
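Applied to the query posted above, that syntax might look like the following. This is a sketch using the table and column names as posted, untested against the actual schema:

```sql
SELECT md.MeterID, md.ActualTimeStamp, md.value
FROM dbo.MeterData AS md
JOIN dbo.TreeView AS tv
    ON md.MeterID = tv.ObjectID
JOIN dbo.Meter AS m
    ON tv.ObjectID = m.MeterID
WHERE m.MeterTypeID = 9
  AND m.SaveData = 1;
```

One caveat: this is only equivalent to the IN form if TreeView and Meter are unique on those key columns; if they can hold duplicates, the JOIN version will multiply rows where the IN version would not.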
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 5, 2012 at 11:11 am
Side note. This analytical software sounds like something you'd usually do in SSAS if you were just using SQL Server tools. You build cubes overnight to see trends like this. I'm not sure real-time requests are the way to go here.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA