December 12, 2010 at 6:19 am
Final demo: converting PortSecAgg into a partitioned table, using fast (minimally-logged) bulk load:
(Converting to a partitioned structure in-place will be glacially slow and blow up your log)
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET NUMERIC_ROUNDABORT OFF
GO
USE master;
IF DB_ID(N'Craig') IS NOT NULL
DROP DATABASE Craig;
GO
CREATE DATABASE Craig;
ALTER DATABASE Craig
MODIFY FILE (name = 'Craig', size = 2GB, filegrowth = 256MB);
ALTER DATABASE Craig
MODIFY FILE (name = 'Craig_log', size = 128MB, filegrowth = 64MB);
GO
USE Craig;
GO
-- Partition function - just three dates for this demo
CREATE PARTITION FUNCTION PF (DATETIME)
AS RANGE
RIGHT
FOR VALUES
(
'2010-12-01', '2010-12-02', '2010-12-03'
);
GO
-- Partition scheme, all on the same file group
CREATE PARTITION SCHEME PS
AS PARTITION PF
ALL TO ([PRIMARY]);
GO
-- Main table *unpartitioned* (but on the right file group)
CREATE TABLE dbo.PortSecAgg
(
AsOfDate DATETIME NOT NULL,
PortfolioID INTEGER NOT NULL,
SecurityID INTEGER NOT NULL,
IsShortPosition BIT NOT NULL,
PortfolioCode VARCHAR(10) NULL,
Symbol VARCHAR(20) NULL,
Quantity DECIMAL(18,4) NULL,
SecTypeCode VARCHAR(5) NULL,
Cusip VARCHAR(15) NULL,
UnitCost MONEY NULL,
TotalCost MONEY NULL,
Price MONEY NULL,
MarketValue MONEY NULL,
AccruedInterest MONEY NULL,
UnrealizedGL MONEY NULL,
IsSubAccount BIT NULL,
CONSTRAINT
[PK dbo.PortSecAgg AsOfDate, PortfolioID, SecurityID, IsShortPosition]
PRIMARY KEY CLUSTERED
(
AsOfDate ASC,
PortfolioID ASC,
SecurityID ASC,
IsShortPosition ASC
)
)
ON [PRIMARY];
GO
-- 3 million rows of nonsense data (takes about 18s)
WITH Numbers (n)
AS (
SELECT TOP (3000000)
ROW_NUMBER() OVER (ORDER BY (SELECT 0))
FROM master.sys.columns C1,
master.sys.columns C2,
master.sys.columns C3
)
INSERT dbo.PortSecAgg
WITH (TABLOCKX)
(PortfolioID, SecurityID, AsOfDate, IsShortPosition)
SELECT PortfolioID = N.n,
SecurityID = N.n,
AsOfDate = DATEADD(DAY, N.n % 3, '2010-12-01'),
IsShortPosition = CONVERT(BIT, CASE WHEN RAND(CHECKSUM(NEWID())) < 0.3 THEN 1 ELSE 0 END)
FROM Numbers N;
GO
-- First step is to use bcp to export the data, one day per file, ordered by the PK
-- Run the following commands from a command window, substituting the name of your server
-- Requires a working folder c:\temp
-- Each export of 1M rows takes about 3s on my laptop
/*
bcp "SELECT * FROM Craig.dbo.PortSecAgg WHERE AsOfDate = '2010-12-01' ORDER BY PortfolioID, SecurityID, IsShortPosition" queryout c:\temp\20101201.bcp -n -S .\SQL2K8 -T
bcp "SELECT * FROM Craig.dbo.PortSecAgg WHERE AsOfDate = '2010-12-02' ORDER BY PortfolioID, SecurityID, IsShortPosition" queryout c:\temp\20101202.bcp -n -S .\SQL2K8 -T
bcp "SELECT * FROM Craig.dbo.PortSecAgg WHERE AsOfDate = '2010-12-03' ORDER BY PortfolioID, SecurityID, IsShortPosition" queryout c:\temp\20101203.bcp -n -S .\SQL2K8 -T
*/
GO
-- Throw the data away
DROP TABLE dbo.PortSecAgg;
GO
-- Create a new partitioned table
CREATE TABLE dbo.PortSecAgg
(
AsOfDate DATETIME NOT NULL,
PortfolioID INTEGER NOT NULL,
SecurityID INTEGER NOT NULL,
IsShortPosition BIT NOT NULL,
PortfolioCode VARCHAR(10) NULL,
Symbol VARCHAR(20) NULL,
Quantity DECIMAL(18,4) NULL,
SecTypeCode VARCHAR(5) NULL,
Cusip VARCHAR(15) NULL,
UnitCost MONEY NULL,
TotalCost MONEY NULL,
Price MONEY NULL,
MarketValue MONEY NULL,
AccruedInterest MONEY NULL,
UnrealizedGL MONEY NULL,
IsSubAccount BIT NULL,
CONSTRAINT
[PK dbo.PortSecAgg AsOfDate, PortfolioID, SecurityID, IsShortPosition]
PRIMARY KEY CLUSTERED
(
AsOfDate ASC,
PortfolioID ASC,
SecurityID ASC,
IsShortPosition ASC
)
)
ON PS (AsOfDate);
GO
-- Create an *unpartitioned* temporary table to hold each partition we load back in
CREATE TABLE dbo.PortSecAgg_Partition
(
AsOfDate DATETIME NOT NULL,
PortfolioID INTEGER NOT NULL,
SecurityID INTEGER NOT NULL,
IsShortPosition BIT NOT NULL,
PortfolioCode VARCHAR(10) NULL,
Symbol VARCHAR(20) NULL,
Quantity DECIMAL(18,4) NULL,
SecTypeCode VARCHAR(5) NULL,
Cusip VARCHAR(15) NULL,
UnitCost MONEY NULL,
TotalCost MONEY NULL,
Price MONEY NULL,
MarketValue MONEY NULL,
AccruedInterest MONEY NULL,
UnrealizedGL MONEY NULL,
IsSubAccount BIT NULL,
CONSTRAINT
[PK dbo.PortSecAgg_Partition AsOfDate, PortfolioID, SecurityID, IsShortPosition]
PRIMARY KEY CLUSTERED
(
AsOfDate ASC,
PortfolioID ASC,
SecurityID ASC,
IsShortPosition ASC
)
)
ON [PRIMARY];
GO
-- Add a constraint for our first date to load
ALTER TABLE dbo.PortSecAgg_Partition
WITH CHECK
ADD CONSTRAINT [CK dbo.PortSecAgg_Partition Range]
CHECK (AsOfDate = '2010-12-01');
GO
-- Bulk load 1 December (notice hints)
-- No sorts, just super-fast minimally-logged bulk load
-- Takes 9s on my laptop
BULK INSERT
Craig.dbo.PortSecAgg_Partition
FROM 'C:\temp\20101201.bcp'
WITH (
DATAFILETYPE = 'native',
TABLOCK,
CHECK_CONSTRAINTS,
ORDER (AsOfDate, PortfolioID, SecurityID, IsShortPosition)
);
GO
-- Move the 1M rows to the partitioned table (instant)
ALTER TABLE Craig.dbo.PortSecAgg_Partition
SWITCH
TO dbo.PortSecAgg
PARTITION 2;
GO
-- Drop the table constraint
ALTER TABLE dbo.PortSecAgg_Partition
DROP CONSTRAINT [CK dbo.PortSecAgg_Partition Range];
GO
-- Add a constraint for our second date to load
ALTER TABLE dbo.PortSecAgg_Partition
WITH CHECK
ADD CONSTRAINT [CK dbo.PortSecAgg_Partition Range]
CHECK (AsOfDate = '2010-12-02');
GO
-- Bulk load 2 December (notice hints)
BULK INSERT
Craig.dbo.PortSecAgg_Partition
FROM 'C:\temp\20101202.bcp'
WITH (
DATAFILETYPE = 'native',
TABLOCK,
CHECK_CONSTRAINTS,
ORDER (AsOfDate, PortfolioID, SecurityID, IsShortPosition)
);
GO
-- Move the 1M rows to the partitioned table (instant)
ALTER TABLE Craig.dbo.PortSecAgg_Partition
SWITCH
TO dbo.PortSecAgg
PARTITION 3;
GO
-- Drop the table constraint
ALTER TABLE dbo.PortSecAgg_Partition
DROP CONSTRAINT [CK dbo.PortSecAgg_Partition Range];
GO
-- Add a constraint for our final date to load
ALTER TABLE dbo.PortSecAgg_Partition
WITH CHECK
ADD CONSTRAINT [CK dbo.PortSecAgg_Partition Range]
CHECK (AsOfDate = '2010-12-03');
GO
-- Bulk load 3 December
BULK INSERT
Craig.dbo.PortSecAgg_Partition
FROM 'C:\temp\20101203.bcp'
WITH (
DATAFILETYPE = 'native',
TABLOCK,
CHECK_CONSTRAINTS,
ORDER (AsOfDate, PortfolioID, SecurityID, IsShortPosition)
);
GO
-- Move the 1M rows to the partitioned table (instant)
ALTER TABLE Craig.dbo.PortSecAgg_Partition
SWITCH
TO dbo.PortSecAgg
PARTITION 4;
GO
-- Show the final state of the partitioned table
SELECT PSA.AsOfDate,
PartitionNumber = $PARTITION.PF(PSA.AsOfDate),
RecordCount = COUNT_BIG(*)
FROM dbo.PortSecAgg PSA
GROUP BY
PSA.AsOfDate
ORDER BY
PSA.AsOfDate;
It's pretty easy to script, as you can imagine.
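For illustration, here is a rough (untested) sketch of what that driver script might look like, using the demo's object and file names; the CHECK constraint and BULK INSERT need literals, so those two statements are built with dynamic SQL:
DECLARE @d DATETIME,
        @p INTEGER,
        @sql NVARCHAR(MAX);

SET @d = '2010-12-01';

WHILE @d <= '2010-12-03'
BEGIN
    -- Build the per-day constraint and bulk load (literals only, hence dynamic SQL)
    SET @sql = N'
        ALTER TABLE dbo.PortSecAgg_Partition
        WITH CHECK
        ADD CONSTRAINT [CK dbo.PortSecAgg_Partition Range]
        CHECK (AsOfDate = ''' + CONVERT(CHAR(8), @d, 112) + N''');

        BULK INSERT dbo.PortSecAgg_Partition
        FROM ''C:\temp\' + CONVERT(CHAR(8), @d, 112) + N'.bcp''
        WITH (
            DATAFILETYPE = ''native'',
            TABLOCK,
            CHECK_CONSTRAINTS,
            ORDER (AsOfDate, PortfolioID, SecurityID, IsShortPosition)
        );';

    EXECUTE (@sql);

    -- $PARTITION tells us which partition this day belongs to
    SELECT @p = $PARTITION.PF(@d);

    ALTER TABLE dbo.PortSecAgg_Partition
    SWITCH TO dbo.PortSecAgg PARTITION @p;

    ALTER TABLE dbo.PortSecAgg_Partition
    DROP CONSTRAINT [CK dbo.PortSecAgg_Partition Range];

    SET @d = DATEADD(DAY, 1, @d);
END;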
December 12, 2010 at 11:25 am
Paul, thank you.
Please don't feel I'm ignoring your comments if I don't provide an immediate answer on this while I wrap my head around all the information and recommendations.
The first thing that's striking me is the 'every day' partition on the same filegroup. Besides the fact that PortSecAgg stores ~2 years of data, I've never seen partitioning done this way. I'm going to assume my brain is still in weekend mode and that's why I'm just not grokking the value of this modification. I don't believe this is difficult to maintain, adding a new partition as I introduce new data and removing old ones as I clear historicals, is it?
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
December 12, 2010 at 7:21 pm
No worries - the basic idea is pretty simple, but it may take a while for all the pieces to fall into place. I'd encourage you to play around with the scripts, and examine the query plans they produce. The eureka moment will come 😉
Craig Farrell (12/12/2010)
The first thing that's striking me is the 'every day' partition on the same filegroup. Besides the fact that PortSecAgg stores ~2 years of data, I've never seen partitioning done this way.
All the partitions being on the same file group is largely a convenience for the demo script - it's not at all central to the concept. You're free to place your partitions wherever you like (having them all on the same file group is a valid design though!) The important thing is that the switch-in and switch-out tables are created on the same file group as the partition to be switched. This makes the switching a metadata-only operation, and therefore instant.
I'm going to assume my brain is still in weekend mode and that's why I'm just not grokking the value of this modification.
The bulk-loading and switching-in of complete new days of data should have fairly obvious benefits, so I'm going to assume you are wondering about applying the deltas. The objective of course is to avoid all the scans, hash joins, sorts, spools, memory usage, and full logging of changes that occur with the existing script.
Working with one file at a time, we switch out a single partition (holding the day of data we need to modify) to a holding table. First benefit: we are now working with a very much smaller table.
Deleting the existing portfolio data is simple: DELETE 'HoldingTable' WHERE AsOfDate = 'value' AND PortfolioID = 'value'. This uses the clustered index (leading on AsOfDate, PortfolioID) to delete just the rows we are replacing. The INSERT from the staging table (replacement records for the single portfolio) to the holding table is also fast - no joins, scans, sorts, or spools so far! Once that's done, we just switch the finished partition back into the main table, job done.
So why is this faster than the (intuitively more set-based) idea of applying all the deltas at once? There are no joins. We're deleting a range (from a smaller table with a lower index depth) using a seek on (AsOfDate, PortfolioID). The plan will show a single iterator: a clustered index delete. With an index seek and no joins, there are no table scans, no sorts, no spools, and no memory grant.
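To make that concrete, here is a rough sketch of one cycle using the demo's table names. The staging table dbo.DeltaStaging, the date, and the PortfolioID value are placeholders, and it assumes the holding table starts empty with no leftover CHECK constraint:
DECLARE @p INTEGER;
SELECT @p = $PARTITION.PF('2010-12-02');

-- 1. Switch the day's partition out to the (empty, unpartitioned) holding table
ALTER TABLE dbo.PortSecAgg
SWITCH PARTITION @p TO dbo.PortSecAgg_Partition;

-- 2. Delete the rows being replaced: a single clustered index seek, no joins
DELETE dbo.PortSecAgg_Partition
WHERE AsOfDate = '2010-12-02'
AND PortfolioID = 1234;

-- 3. Insert the replacement rows from the staging table
--    (assumes the staging schema exactly matches the destination)
INSERT dbo.PortSecAgg_Partition
SELECT *
FROM dbo.DeltaStaging
WHERE AsOfDate = '2010-12-02'
AND PortfolioID = 1234;

-- 4. A CHECK constraint matching the partition's range is needed to switch back in
ALTER TABLE dbo.PortSecAgg_Partition
WITH CHECK
ADD CONSTRAINT [CK dbo.PortSecAgg_Partition Range]
CHECK (AsOfDate = '2010-12-02');

-- 5. Switch the finished day back into the main table, then tidy up
ALTER TABLE dbo.PortSecAgg_Partition
SWITCH TO dbo.PortSecAgg PARTITION @p;

ALTER TABLE dbo.PortSecAgg_Partition
DROP CONSTRAINT [CK dbo.PortSecAgg_Partition Range];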
I wanted to keep the demo scripts simple, but there is nothing to prevent you from loading all the deltas into the staging table at once. You could then use a cursor to process one day/portfolio delta at a time, using the same efficient process.
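The driver could be as simple as something like this - dbo.DeltaStaging and dbo.ApplyPortfolioDelta are made-up names, and the procedure would just wrap the switch/delete/insert/switch steps sketched above:
DECLARE @day DATETIME,
        @portfolio INTEGER;

DECLARE Deltas CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT AsOfDate, PortfolioID
    FROM dbo.DeltaStaging
    ORDER BY AsOfDate, PortfolioID;

OPEN Deltas;
FETCH NEXT FROM Deltas INTO @day, @portfolio;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Hypothetical wrapper: switch out @day's partition, delete/insert the
    -- rows for @portfolio, then switch the partition back in
    EXECUTE dbo.ApplyPortfolioDelta @AsOfDate = @day, @PortfolioID = @portfolio;

    FETCH NEXT FROM Deltas INTO @day, @portfolio;
END;

CLOSE Deltas;
DEALLOCATE Deltas;
In practice you would probably switch each day out only once and process all of its portfolios before switching back in, but the shape of the loop is the same.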
I don't believe this is difficult to maintain, adding a new partition as I introduce new data and removing old ones as I clear historicals, is it?
There is a limit of 1000 partitions (and a way to extend that but I do not recommend that at all). In most scenarios there is a time limit on when deltas might be received. Say, for example, you might get deltas up to 60 days ago. That means we just need 60 partitions for days that might be modified, but we can use a smaller number of partitions (perhaps even just one) for data older than that. Merging a daily partition into the archive partition(s) is a simple ALTER PARTITION FUNCTION...MERGE RANGE statement. This will be a relatively slow fully-logged operation, but it doesn't take that long, and only needs to happen once a day.
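For reference, the daily housekeeping might look something like this (the boundary dates are placeholders):
-- Add a boundary for the next new day. Do this while that range is still
-- empty so the split is metadata-only. SPLIT needs a NEXT USED file group.
ALTER PARTITION SCHEME PS
NEXT USED [PRIMARY];

ALTER PARTITION FUNCTION PF()
SPLIT RANGE ('2010-12-04');

-- Fold the oldest "modifiable" day into the archive partition
-- (fully logged and moves that day's rows, but it is only one day's worth)
ALTER PARTITION FUNCTION PF()
MERGE RANGE ('2010-12-01');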
December 12, 2010 at 7:46 pm
You may even find that the partition-switching doesn't speed up the application of the deltas over using a cursor to delete and then insert one PortfolioID/Day combination from the staging table. I don't have the data size information, or the same knowledge of the application you have, to make that determination.
It will, however, speed up the load of new daily data to a ridiculous extent :w00t:
I do think it is vital, though, to update the SSIS package so that the staging table schema exactly matches the destination table - at the moment several of the data types are different, and so is the nullability. Moving the CASE statement logic for IsSubAccount and removing the computed column is essential.
Anyway, the overall objective is to avoid the scans, joins, sorts, spools, and memory grant. It will be interesting to hear your thoughts when you've had some time to reflect on all this.
December 13, 2010 at 9:33 am
Hi Paul,
that really is a very interesting approach to deal with large volume tables. I guess the partition structure of some of our archive tables might change soon! (We're currently struggling with a performance bottleneck for some of our nightly jobs to send data to our archive DB). I guess 5-day based partition + "old data" will help us.... Thanks in advance for sharing the concept!!!
December 13, 2010 at 9:46 am
LutzM (12/13/2010)
that really is a very interesting approach to deal with large volume tables. I guess the partition structure of some of our archive tables might change soon! (We're currently struggling with a performance bottleneck for some of our nightly jobs to send data to our archive DB). I guess 5-day based partition + "old data" will help us.... Thanks in advance for sharing the concept!!!
No worries. SQL Server 2008 adds a lot of extra goodness and fixes many of the subtle limitations in 2005 (especially around parallelism). I usually point people with an interest to the totally awesome Data Loading Performance Guide White Paper.
December 13, 2010 at 9:58 am
Paul White NZ (12/13/2010)
LutzM (12/13/2010)
that really is a very interesting approach to deal with large volume tables. I guess the partition structure of some of our archive tables might change soon! (We're currently struggling with a performance bottleneck for some of our nightly jobs to send data to our archive DB). I guess 5-day based partition + "old data" will help us.... Thanks in advance for sharing the concept!!!
No worries. SQL Server 2008 adds a lot of extra goodness and fixes many of the subtle limitations in 2005 (especially around parallelism). I usually point people with an interest to the totally awesome Data Loading Performance Guide White Paper.
Paul,
As I'm wrapping my head around this, your comment above concerns me. This approach is as valid in 2k5 as well, right?
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
December 13, 2010 at 10:02 am
Craig Farrell (12/13/2010)
As I'm wrapping my head around this, your comment above concerns me. This approach is as valid in 2k5 as well, right?
Yes :laugh:
All things are better in 2008, that's all. I first used this approach in production on 2005 RTM 😉
December 13, 2010 at 10:06 am
Paul White NZ (12/13/2010)
Craig Farrell (12/13/2010)
As I'm wrapping my head around this, your comment above concerns me. This approach is as valid in 2k5 as well, right?
Yes :laugh:
All things are better in 2008, that's all. I first used this approach in production on 2005 RTM 😉
Isn't the first release supposed to be called RTFM? 😀
December 13, 2010 at 10:14 am
LutzM (12/13/2010)
Paul White NZ (12/13/2010)
Craig Farrell (12/13/2010)
As I'm wrapping my head around this, your comment above concerns me. This approach is as valid in 2k5 as well, right?
Yes :laugh:
All things are better in 2008, that's all. I first used this approach in production on 2005 RTM 😉
Isn't the first release supposed to be called RTFM? 😀
You need to put that in your signature for the next release!!!
December 13, 2010 at 11:14 am
Paul White NZ (12/13/2010)
Craig Farrell (12/13/2010)
As I'm wrapping my head around this, your comment above concerns me. This approach is as valid in 2k5 as well, right?
Yes :laugh:
All things are better in 2008, that's all. I first used this approach in production on 2005 RTM 😉
Sorry, it's been one of those mornings. 🙂
I'm working my way through what you've discussed and I'm getting to said Eureka. Bear with a couple of 'idiot' questions, if you would... I apparently was unclear on one of my requirements.
In #5, Ensure the SSIS package produces data in clustered order, you're talking about including a hard sort component at the tail of the load process in SSIS to conform to the clustered index order and reduce churn, correct?
And for #3, Modify the SSIS package to produce data of the correct type and nullability, you're discussing using the Data Conversion component to do this in midstream, and converting anything not picked up as expected? I assume this is because the Flat File definition can't handle null/not null, so it nulls everything, and it will insert/compare with fewer checks if I force non-nulls on the column definitions before inclusion?
One of the things I'm noting here is that instead of dropping all the data to a staging structure and then processing, you're looking at processing a single item at a time, all the way to the final destination. The T1 process intrigues me. I think I finally grok where you're going here.
Now, looking to see what I can make work from this functionality, I ran into this gem, which indicates I'm way behind on my partitioning usage:
ALTER TABLE dbo.PortSecAgg
SWITCH PARTITION 13
TO dbo.PortSecAgg_Partition;
GO
If we did use your method, how would I be able to work out the right partition number as I destroy older data and continuously add in new data? This doesn't seem like a simple value to find.
The BCP script out/in method is now stored in my toolbox, however. That is handy. You could almost while loop the code to make it even easier and more maintainable, but I appreciate you spelling it out like you did.
The Partition Switching is a new concept to me, and dang, that looks handy.
From another post:
So why is this faster than the (intuitively more set-based) idea of applying all the deltas at once? There are no joins. We're deleting a range (from a smaller table with a lower index depth) using a seek on (AsOfDate, PortfolioID). The plan will show a single iterator: a clustered index delete. With an index and no joins, there are no table scans, no joins, no sorts, no spools, and no memory grant.
I'm curious as to why my existing clustered index on PortSecAgg, which is based on AsOfDate/PortfolioID/hanging columns, doesn't do the same thing as a range removal. However, because each data file for historical edits holds multiple PortfolioIDs, even this way I would need to either loop-delete, or join to a derived table to find just the AsOfDate/PortfolioID combinations. I'm not sure how this avoids joining at all.
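(For clarity, the join form I mean is something like the following - dbo.DeltaStaging is a made-up name for the staging table:)
-- Delete every AsOfDate/PortfolioID combination present in the deltas
DELETE PSA
FROM dbo.PortSecAgg AS PSA
JOIN
(
    SELECT DISTINCT AsOfDate, PortfolioID
    FROM dbo.DeltaStaging
) AS D
ON D.AsOfDate = PSA.AsOfDate
AND D.PortfolioID = PSA.PortfolioID;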
I wanted to keep the demo scripts simple, but there is nothing to prevent you from loading all the deltas into the staging table at once. You could then use a cursor to process one day/portfolio delta at a time, using the same efficient process.
I believe this is going to be a required component of my process. Cursor based optimization, the SQL Gods are going to rain fire on my head... 🙂
This delta table gets processed in two locations. The association processing happens locally (secId/PortID/anything else). It gets processed locally to have a local history for other delta operations that are occurring. I also ship this delta structure, after it's been cleaned up, to the reporting server, and process it there. A per-file basis won't work for that intention.
A more explicit discussion of that component: I receive the 40 files for two different batches. Each batch holds the last 20 days, plus complementary point-in-time data files. These files are processed into the staging database, after being cleaned up, compared, and double-checked. The staging database uses these processed files to adjust other delta components I receive (audit history files and the like), appending data as required, or adjusting values.
All these delta tables, after these adjustments are made, cleaned, etc... are then shipped to the reporting database and re-processed, without all the cleanup work, making it process much faster on the target system.
There is a limit of 1000 partitions (and a way to extend that but I do not recommend that at all). In most scenarios there is a time limit on when deltas might be received. Say, for example, you might get deltas up to 60 days ago. That means we just need 60 partitions for days that might be modified, but we can use a smaller number of partitions (perhaps even just one) for data older than that. Merging a daily partition into the archive partition(s) is a simple ALTER PARTITION FUNCTION...MERGE RANGE statement. This will be a relatively slow fully-logged operation, but it doesn't take that long, and only needs to happen once a day
And 60 should be more than enough, in reality ~40 just to cya myself would work.
I think I'm going to tackle these one piece at a time. As strong as the partitioning structure sounds like it may be, I want to get baselines before I tackle that side of things, and there's plenty of other cleanup to do here. There's also the problem that I'm a contractor, and if I can't explain it, I can't hand it off when I'm done. 🙂
Paul, thanks. I have numerous takeaways at the moment to work on. Once I get them sorted out I'll come back with an updated process/sqlplans/code/etc.
I also need to do some more research on minimally logged operations. 🙂
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
December 13, 2010 at 1:22 pm
Alright, I'll bite. Anyone know how to turn a column from a stream in an SSIS package into a non-nullable version of itself? Data Conversion seemed like the right place, but you can't tell it 'not-null'.
EDIT: It's not there, at least not in 2k5. Well, I can at least get everything sorted and have _staging not null properly.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
December 13, 2010 at 4:34 pm
Craig Farrell (12/13/2010)
Alright, I'll bite. Anyone know how to turn a column from a stream in an SSIS package into a non-nullable version of itself? Data Conversion seemed like the right place, but you can't tell it 'not-null'.
EDIT: It's not there, at least not in 2k5. Well, I can at least get everything sorted and have _staging not null properly.
I see two options: a CONDITIONALSPLIT task to eliminate NULL values (even if those won't occur) or adding a DERIVEDCOLUMN task to replace NULL values using the ISNULL() function.
I'm not sure which option would be better to "convince SSIS" the column has no NULL values - if either of them will in the first place.
December 13, 2010 at 8:20 pm
Craig Farrell (12/13/2010)
Alright, I'll bite. Anyone know how to turn a column from a stream in an SSIS package into a non-nullable version of itself? Data Conversion seemed like the right place, but you can't tell it 'not-null'.
EDIT: It's not there, at least not in 2k5. Well, I can at least get everything sorted and have _staging not null properly.
The Flat File Source has a property called RetainNulls - do you have that set to true? Setting it false prevents NULLs entering the flow in the first place. Instead of a NULL, you get the default for the data type, e.g. INT = 0.
More generally, one can use an expression in a Derived Column transformation (set to replace the contents of the column, rather than adding a new one). This allows you to choose the value that replaces NULLs. An example expression would be ISNULL([col1]) ? 0 : [col1].
December 13, 2010 at 8:42 pm
Paul White NZ (12/13/2010)
Craig Farrell (12/13/2010)
Alright, I'll bite. Anyone know how to turn a column from a stream in an SSIS package into a non-nullable version of itself? Data Conversion seemed like the right place, but you can't tell it 'not-null'.
EDIT: It's not there, at least not in 2k5. Well, I can at least get everything sorted and have _staging not null properly.
The Flat File Source has a property called RetainNulls - do you have that set to true? Setting it false prevents NULLs entering the flow in the first place. Instead of a NULL, you get the default for the data type, e.g. INT = 0.
More generally, one can use an expression in a Derived Column transformation (set to replace the contents of the column, rather than adding a new one). This allows you to choose the value that replaces NULLs. An example expression would be ISNULL([col1]) ? 0 : [col1].
It's definitely set to true, but that's mostly for the secondary attribute columns. I do have a NULL converter on the primaries, just in case I get a bad quantity (which feeds into IsShortPosition), so that part was covered. I was playing hide and go seek to actually set it not-null in the stream, though... heh. Welp, this is why I clarify things. Thanks again Paul.
I've done some significant modifications, and a new rule got put into play (it doesn't affect much, but it does mean I need to do batch to batch comparisons) which doesn't take a lot of time. I'm hoping to finish the re-benchmark up tomorrow, at which point I'll come back to this with the corrections in place and I can move forward with your other recommendations from there... and hopefully you can help point out if I missed anything. 😀
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA