June 21, 2012 at 1:34 pm
Well I think I might have it conceptually. Just starting to code it right now...I'll post the final result if everything checks out.
Just create a partition function on two values, say 1 and 2 (the actual values don't really matter).
Before I load the staging table I'll check what the current value is in the main table and populate the staging table with the opposite number using a CASE statement. Then, when I partition the main table, one partition will remain empty, ready to be switched with the valid numbered range from the staging table.
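Roughly what I have in mind, sketched out below. All object names (pf_ActiveSlot, dbo.MainTable, SlotId, etc.) are just placeholders and this is untested:
-- Two-value partition function/scheme; the actual boundary value doesn't matter.
CREATE PARTITION FUNCTION pf_ActiveSlot (int)
    AS RANGE LEFT FOR VALUES (1);    -- partition 1: SlotId <= 1, partition 2: SlotId > 1
CREATE PARTITION SCHEME ps_ActiveSlot
    AS PARTITION pf_ActiveSlot ALL TO ([PRIMARY]);
-- Main and staging tables must have identical structure and sit on the same scheme.
CREATE TABLE dbo.MainTable    (SlotId int NOT NULL, SomeData varchar(100) NOT NULL) ON ps_ActiveSlot (SlotId);
CREATE TABLE dbo.StagingTable (SlotId int NOT NULL, SomeData varchar(100) NOT NULL) ON ps_ActiveSlot (SlotId);
-- Assuming the main table currently holds SlotId = 1, load the staging table with SlotId = 2,
-- then switch the freshly loaded partition in and the stale one out (metadata-only operations).
ALTER TABLE dbo.StagingTable SWITCH PARTITION 2 TO dbo.MainTable    PARTITION 2;
ALTER TABLE dbo.MainTable    SWITCH PARTITION 1 TO dbo.StagingTable PARTITION 1;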
June 21, 2012 at 1:37 pm
Lynn Pettis (6/21/2012)
Assuming that you are doing this update in a stored procedure, I would do something more along the lines of this pseudo code:... prep work
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
BEGIN TRY
    TRUNCATE TABLE destination_table;
    INSERT INTO destination_table
        ...;  -- load the refreshed data here
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
    ... other error handling as needed
END CATCH;
END -- end of update procedure.
In conjunction with this, they could have the web application query the table using the snapshot isolation level (or set the database to read committed snapshot) so that the web app will still be able to see the data that was committed before the update transaction started.
That would eliminate even momentary blocking, and be much simpler to implement than a partitioned table.
Note: You can enable SNAPSHOT isolation for the database while it is online, but READ COMMITTED SNAPSHOT cannot be enabled while other users are connected.
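For reference, the database settings involved look like this (the database name is a placeholder):
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;   -- can be done while the database is online
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;    -- requires no other active connections
                                                             -- (or append WITH ROLLBACK IMMEDIATE to force it)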
June 21, 2012 at 1:44 pm
Err....
You don't recreate the partition scheme or function...
The two things you'd do to add a new partition are to mark the next used filegroup, then add a new boundary value to the function.
Roughly (and copied from BoL)
ALTER PARTITION SCHEME partition_scheme_name
NEXT USED <file group name>
then
ALTER PARTITION FUNCTION partition_function_name ()
SPLIT RANGE (<new boundary point>)
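So, with made-up names, adding a partition for 2013 data would look something like:
ALTER PARTITION SCHEME ps_Sales
    NEXT USED [FG_2013];
ALTER PARTITION FUNCTION pf_Sales ()
    SPLIT RANGE ('2013-01-01');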
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 21, 2012 at 1:45 pm
Michael Valentine Jones (6/21/2012)
In conjunction with this, they could have the web application query the table using the snapshot isolation level (or set the database to read committed snapshot) so that the web app will still be able to see the data that was committed before the update transaction started.
That would eliminate even momentary blocking, and be much simpler to implement than a partitioned table.
Note: You can enable SNAPSHOT isolation for the database while it is online, but READ COMMITTED SNAPSHOT cannot be enabled while other users are connected.
Another good option. I'm interested in pursuing the partition method simply to give me an excuse to learn more about it. However, if it blows up in my face, I'm sure I'll run back to this. 😀
June 21, 2012 at 1:52 pm
yb751 (6/21/2012)
Michael Valentine Jones (6/21/2012)
In conjunction with this, they could have the web application query the table using the snapshot isolation level (or set the database to read committed snapshot) so that the web app will still be able to see the data that was committed before the update transaction started.
That would eliminate even momentary blocking, and be much simpler to implement than a partitioned table.
Note: You can enable SNAPSHOT isolation for the database while it is online, but READ COMMITTED SNAPSHOT cannot be enabled while other users are connected.
Another good option. I'm interested in pursuing the partition method simply to give me an excuse to learn more about it. However, if it blows up in my face, I'm sure I'll run back to this. 😀
Since one of your main concerns was "highly available", I think you might want to reconsider.
With the partition switching, there will still be unavoidable blocking, since you are making a schema change.
With snapshot isolation, the data will continue to be available without blocking.
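To be clear, the web app's read would just look like this (table name is a placeholder), assuming ALLOW_SNAPSHOT_ISOLATION is already on:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT * FROM dbo.destination_table;   -- reads the last committed version; never blocked by the reload transaction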
June 21, 2012 at 2:01 pm
That's a very good point...I appreciate all the input. I'll carefully consider both options and test them out.
Thanks
June 21, 2012 at 3:18 pm
GilaMonster (6/21/2012)
Err....You don't recreate the partition scheme or function...
The two things you'd do to add a new partition are to mark the next used filegroup, then add a new boundary value to the function.
Sorry, you're right Gail. I'd forgotten to clarify that, thinking that no one would actually rebuild the entire partition. You do (usually) need to programmatically decide on the next construction instead of simply rotating, but yeah, you don't actually completely rebuild the entire table's partitioning.
Kind of embarrassed I forgot to include that. Sorry OP.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
January 10, 2014 at 6:27 pm
Lynn Pettis (6/21/2012)
Assuming that you are doing this update in a stored procedure, I would do something more along the lines of this pseudo code:... prep work
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
BEGIN TRY
    TRUNCATE TABLE destination_table;
    INSERT INTO destination_table
        ...;  -- load the refreshed data here
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
    ... other error handling as needed
END CATCH;
END -- end of update procedure.
nice tips
January 11, 2014 at 2:12 pm
Michael Valentine Jones (6/21/2012)
In conjunction with this, they could have the web application query the table using the snapshot isolation level (or set the database to read committed snapshot) so that the web app will still be able to see the data that was committed before the update transaction started. That would eliminate even momentary blocking, and be much simpler to implement than a partitioned table.
I haven't read the rest of the posts on this thread yet but I agree with the above. Partitioning isn't as easy as some would have you believe. For example, if you have any unique indexes on the table, the partitioning key will have to be added to them, which makes them {drum roll please} non-unique based on the original unique column(s). It also makes FKs to the table damned near impossible for the same reason, unless the only UNIQUE index is also the partitioning column itself.
If the table has no foreign keys and the only UNIQUE index is based on an IDENTITY column, then partitioning gets a whole lot easier, but I still like the idea of Michael's suggestion for this particular problem much better.
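To illustrate (all names made up):
-- On a table partitioned on OrderDate, an aligned unique index must include the partitioning column:
CREATE UNIQUE NONCLUSTERED INDEX UQ_Orders_OrderNo
    ON dbo.Orders (OrderNo, OrderDate)      -- OrderDate forced into the key
    ON ps_OrderDate (OrderDate);
-- OrderNo alone is no longer guaranteed unique, so a foreign key referencing just OrderNo
-- can't be created against this table.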
--Jeff Moden
Change is inevitable... Change for the better is not.