June 20, 2014 at 10:27 am
We have a SQL 2012 database, about 6 GB in size, handling about 15K transactions per day and growing quickly. Ideally it is available 24x7, but I need to update a lot of price data once per day, and I don't want to make the database unavailable for 10-15 minutes every day while I replace/update read-only (R/O) pricing tables and then use that new price data to update a couple of other actively used tables.
We are using Entity Framework 6.0 to build the SQL for most transactional queries and updates, and we also use sprocs, bulk inserts, scripts, etc. for better performance when needed.
Currently the database is taken offline for 10-15 minutes while I do these updates; I'm looking to start doing them without locking transactions out while the bulk inserts or massive updates are going on.
Basically I need to –
•Replace two large tables (200K and 400K rows) that are currently being used R/O. All of this data needs to be loaded as part of a single transaction, before the next step, and the tables would be actively used while being replaced.
•Then update two lookup tables (20K and 200K rows), also used R/O, with data based on the just-loaded tables. These secondary tables could be updated a row at a time, but that would likely be pretty inefficient and would take an especially long time using EF.
The three strategies I can think of are –
•Take the server offline for 10-15 minutes while I update the database, which I am doing now weekly but don't want to do daily.
•Keep the data in server memory, update the database, then reload the in-memory data. It seems like a good idea to keep cloned tables so the server can restart at any time, even if the new price update is not finished.
•Clone the tables to be updated, update the clones, then run a single SQL transaction that deletes the old archive tables, renames the current tables to archive tables, and renames the new tables to current tables in one swell foop (sic), roughly as sketched below.
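Roughly what I have in mind for that third option, with made-up table names (dbo.Prices is live, dbo.Prices_New is freshly loaded, and dbo.Prices_Archive is left over from the previous run):
begin transaction;
    drop table dbo.Prices_Archive;                  -- discard the previous run's archive
    exec sp_rename 'dbo.Prices', 'Prices_Archive';  -- current table becomes the archive
    exec sp_rename 'dbo.Prices_New', 'Prices';      -- newly loaded table becomes current
commit transaction;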
What do other folks do, and are there other good strategies that would let me do these sorts of updates within a SQL transaction while keeping the database available for query and update transactions?
Thanks!
June 20, 2014 at 11:06 am
Clone the tables, adjust as necessary, and then swap them in. Seems like a no-brainer best solution for your specific set of needs.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 20, 2014 at 11:12 am
Thanks Kevin.
Since I am doing all this in code, not SSMS, is there any better SQL way I can access programmatically to clone a table, including its data, indices, and constraints, than just creating a similar new table and then doing a SELECT INTO?
I'm a developer and the de facto DBA since we are small, and I am not as experienced in SQL management and architecture as I am in using SQL.
Dave
June 20, 2014 at 11:22 am
This should probably be done with sproc(s) as much as possible, maybe in conjunction with some form of data movement into SQL Server that allows set-based DML (insert/update/delete) of the copied data. A table-valued parameter used to drive a MERGE statement comes to mind; I'm sure there are other efficient solutions too. But you definitely don't want to do onesie DML from EF to get the copied tables into the state you need them in.
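A rough sketch of the TVP/MERGE idea; the type, proc, and table names (dbo.PriceRow, dbo.MergePrices, dbo.Prices_Staging) are purely illustrative:
-- A table type to carry the new prices from the app in one structured parameter.
create type dbo.PriceRow as table (
    ItemId int           not null primary key,
    Price  numeric(10,2) not null
);
go
-- MERGE the incoming rows into the staged copy in a single set-based statement.
create procedure dbo.MergePrices
    @NewPrices dbo.PriceRow readonly
as
begin
    set nocount on;

    merge dbo.Prices_Staging as tgt
    using @NewPrices as src
        on tgt.ItemId = src.ItemId
    when matched and tgt.Price <> src.Price then
        update set tgt.Price = src.Price
    when not matched by target then
        insert (ItemId, Price) values (src.ItemId, src.Price)
    when not matched by source then
        delete;   -- rows absent from the feed are removed, so the staging table matches it exactly
end;
go
From the app side you fill the TVP by passing a DataTable (or a DbDataReader) as a structured parameter, so the whole load is one round trip and one set-based statement instead of onesie DML.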
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 20, 2014 at 11:38 am
Kevin, I had already figured out that last 'onesies with EF' part, so thanks so much for the suggestions and the alternatives to doing that. Dave
June 20, 2014 at 5:38 pm
Using TheSQLGuru's idea, create two tables in your database that mirror each of the tables (including indexes) you need to update once a day. Label one table "New" and the other "Old". Select the data from the base tables into the "New" tables, then perform all the updates you need against those "New" tables. When it is time to update the base tables, you switch the current base tables out to the "Old" tables and switch the "New" tables in as the base tables. You can then truncate the data in the "Old" tables.
To see how this works, test the following in an empty sandbox database. Run the CREATE and INSERT statements first, then run the ALTER TABLE statements separately. You will swap 1,000,000 rows out and in very quickly.
create table dbo.TestTable(
TestTableId int not null primary key,
TestData1 numeric(10,2) not null default 0.00,
TestData2 datetime not null default '19000101'
);
create table dbo.TestTableNew(
TestTableId int not null primary key,
TestData1 numeric(10,2) not null default 0.00,
TestData2 datetime not null default '19000101'
);
create table dbo.TestTableOld(
TestTableId int not null primary key,
TestData1 numeric(10,2) not null default 0.00,
TestData2 datetime not null default '19000101'
);
go
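-- The cascaded CTEs below build a one-million-row tally (numbers) list on the fly;
-- the INSERTs use it to fill the test tables with random prices and dates.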
with e1(n) as (select n from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
e2(n) as (select a.n from e1 a, e1 b),
e4(n) as (select a.n from e2 a, e2 b),
eTally(n) as (select row_number() over (order by (select null)) from e4 a, e4 b)
insert into dbo.TestTable(TestTableId,TestData1,TestData2)
select top(1000000)
n,
round(rand(checksum(newid())) * 10000, 2),
cast(rand(checksum(newid())) * 5844 + 36524 as datetime)
from
eTally;
with e1(n) as (select n from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
e2(n) as (select a.n from e1 a, e1 b),
e4(n) as (select a.n from e2 a, e2 b),
eTally(n) as (select row_number() over (order by (select null)) from e4 a, e4 b)
insert into dbo.TestTableNew(TestTableId,TestData1,TestData2)
select top(1000000)
n,
round(rand(checksum(newid())) * 10000, 2),
cast(rand(checksum(newid())) * 5844 + 36524 as datetime)
from
eTally;
select * from dbo.TestTable;
select * from dbo.TestTableNew;
select * from dbo.TestTableOld;
go
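-- The two SWITCH statements below are metadata-only operations: the rows in
-- TestTable move out to the empty TestTableOld, then the rows in TestTableNew
-- move into the now-empty TestTable. The tables must have identical structure,
-- and each SWITCH target must be empty.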
alter table dbo.TestTable switch to dbo.TestTableOld;
alter table dbo.TestTableNew switch to dbo.TestTable;
go
select * from dbo.TestTable;
select * from dbo.TestTableNew;
select * from dbo.TestTableOld;
go
drop table dbo.TestTable;
drop table dbo.TestTableOld;
drop table dbo.TestTableNew;
go
The following link provides more information:
http://technet.microsoft.com/en-us/library/ms191160(v=SQL.105).aspx
June 20, 2014 at 9:35 pm
Nice. Thank you!
June 20, 2014 at 11:02 pm
We do such a thing on a regular basis except we keep it even simpler.
Two tables almost identically named. For example, SomeTable1 and SomeTable2.
We also have a synonym called SomeTable.
When SomeTable1 is active, the synonym points to it. You can do just about anything you want with SomeTable2.
When the time comes, you can truncate, load, validate, etc, etc, SomeTable2 with relative impunity. Once finished, just drop/recreate the synonym to repoint it to SomeTable2. SomeTable1 auto-magically becomes the "old" table and if something went horribly wrong with SomeTable2, you just have to repoint the synonym back to it.
The next time, things just reverse... SomeTable2 is active and the synonym points at it. Truncate, load, validate, etc, etc, SomeTable1 and, when ready, just repoint the synonym.
It's nasty fast, and you can set up multiple tables either in series or in parallel; when all are ready, it takes milliseconds to flop the synonyms.
You can also use views in place of synonyms.
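The flop itself is just a drop and recreate (using the made-up names from above):
-- SomeTable1 is live and dbo.SomeTable currently points at it;
-- SomeTable2 has just been loaded and validated.
drop synonym dbo.SomeTable;
create synonym dbo.SomeTable for dbo.SomeTable2;
Next time around, SomeTable2 is live and the same pair of statements points the synonym back at SomeTable1.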
--Jeff Moden
Change is inevitable... Change for the better is not.
June 21, 2014 at 6:45 am
Good recommendation, Jeff! I always forget about synonyms. My clients' stuff is typically so messed up that we never get around to little things like that, which can simplify certain stuff immensely. :hehe:
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 21, 2014 at 7:15 am
I had not heard of DB synonyms before; they seem like a great feature for this problem. I guess you keep the state as to which table is current elsewhere in the DB?
For both techniques (dropping, cloning, and renaming 3 tables vs. using synonyms on 2 tables), will a client connection be maintained by SQL Server, such that the newly switched-to or synonym-redirected table is accessed on future requests without breaking an existing client connection (my server)? And are these two techniques transactional in the DB, such that a current request can finish using the table it is accessing before it is renamed or the synonym redirects?
Thanks for the great suggestions all!
June 21, 2014 at 11:28 am
I've used synonyms before as well; I should have thought of it myself. But it was a good exercise in learning to swap partitions, since I hadn't done it before.
June 21, 2014 at 1:56 pm
dave 67553 (6/21/2014)
I had not heard of DB synonyms before; they seem like a great feature for this problem. I guess you keep the state as to which table is current elsewhere in the DB?
For both techniques (dropping, cloning, and renaming 3 tables vs. using synonyms on 2 tables), will a client connection be maintained by SQL Server, such that the newly switched-to or synonym-redirected table is accessed on future requests without breaking an existing client connection (my server)? And are these two techniques transactional in the DB, such that a current request can finish using the table it is accessing before it is renamed or the synonym redirects?
Thanks for the great suggestions all!
I "keep the state" in the Synonym itself (I read it through sys.synonyms). Specifically, I lookup the Synonym name in the "name" column of sys.synonyms and get the target of the Synonym from the "base_object_name", which is a fully qualified name that uses brackets around each part of the name and will contain the same number of parts of the name as those that were contained in the CREATE SYNONYM command.
As for what it will do with current requests against the table, it does the same thing as dropping any object. If the object is in use, the DROP SYNONYM command will wait patiently until there are no requests (open transactions) against the Synonym. Once there's even a brief opening, the DROP and CREATE are both lightning quick (less than 1 ms).
I haven't tried it, but I believe that "SWITCH" will also wait until the table is no longer involved in a transaction.
Of course, if someone on a single connection did a lookup and an update as very close but still separate transactions, it doesn't matter which method you use; there could be a problem. Knowing that, you can plan for such an eventuality, either to prevent the problem or to handle it properly. Owing to the speed of "flopping" the Synonym, it would be a rare problem indeed, unless the 2 transactions were just silly long.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 21, 2014 at 2:02 pm
Lynn Pettis (6/21/2014)
I've used synonyms before as well; I should have thought of it myself. But it was a good exercise in learning to swap partitions, since I hadn't done it before.
It's a great exercise! Thanks for posting it, Lynn.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 8, 2014 at 8:51 am
I ended up cloning the tables to be updated into a different schema, then updated EF to be able to open two DbContexts for the database, one for each schema. I can now update the new tables from the old ones and then use synonyms to swap to the new tables. Thanks all! Dave
July 31, 2014 at 1:01 pm
I could not make synonyms work with Entity Framework 6, so I converted to using dated schema names (e.g., DYYYYMMDD) and now create my new tables in a new dated schema. When the new tables are completely built, I move the existing tables to a new schema called DYYYYMMDD_HHMMSS and move the DYYYYMMDD tables into the dbo schema. This works fine with EF6.
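In case it helps anyone, the swap ends up looking roughly like this (Prices is a placeholder table name, and the dates/times obviously vary):
-- The new table was built earlier in the dated schema D20140731.
create schema D20140731_130100 authorization dbo;  -- archive schema for this run
go
alter schema D20140731_130100 transfer dbo.Prices; -- live table out to the archive schema
alter schema dbo transfer D20140731.Prices;        -- freshly built table into dbo
go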