June 8, 2017 at 9:09 am
Hello everyone,
I'm having some problems with a stored procedure which is called every hour by an Agent job. In this procedure, a table is truncated and completely refilled from a view; the whole process takes about two minutes.
During these two minutes I would like all other sessions to be able to query the table without being blocked, and without the table appearing empty between the truncate and the end of the insert. The desired behaviour would be that all queries see the "old" table contents (from before the truncate) until the insert has finished, and then of course the new contents afterwards.
I've searched around and tried different approaches (reading uncommitted data, snapshot isolation, and so on), but so far nothing has worked for me :/...
Does anyone have a hint as to which technique would be suitable here, or a link to an appropriate article or forum post?
Thank you for any help!
June 8, 2017 at 9:20 am
mlueke - Thursday, June 8, 2017 9:09 AM
Sounds to me like a candidate for multiple tables and synonyms.
TableA and TableB have identical structures.
Your synonym points to TableA, which is populated with data. TableB is empty.
When the proc runs, it populates the empty table (TableB this time, TableA the next time it runs), changes the synonym to point to TableB, and truncates TableA, ready for the next run.
Users access the data via the synonym, of course.
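A minimal sketch of the swap described above; the table, synonym, and view names (dbo.ReportData, dbo.ReportData_A, dbo.ReportData_B, dbo.SourceView) are invented for illustration:

```sql
-- One-time setup: two identical tables plus a synonym pointing at the
-- currently "live" one.
CREATE TABLE dbo.ReportData_A (id int NOT NULL, data varchar(20));
CREATE TABLE dbo.ReportData_B (id int NOT NULL, data varchar(20));
CREATE SYNONYM dbo.ReportData FOR dbo.ReportData_A;

-- Hourly refresh (here assuming A is currently live and B is empty):
-- 1. Fill the empty table from the view.
INSERT INTO dbo.ReportData_B (id, data)
SELECT id, data FROM dbo.SourceView;   -- dbo.SourceView is a placeholder

-- 2. Repoint the synonym. There is no ALTER SYNONYM, so the swap is a
--    drop-and-recreate inside a transaction.
BEGIN TRANSACTION;
    DROP SYNONYM dbo.ReportData;
    CREATE SYNONYM dbo.ReportData FOR dbo.ReportData_B;
COMMIT;

-- 3. Clear the old table, ready for the next run (which swaps back to A).
TRUNCATE TABLE dbo.ReportData_A;
```

Readers query dbo.ReportData throughout; only the brief drop-and-recreate of the synonym takes a schema lock, so the two-minute load no longer blocks anyone.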
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
June 8, 2017 at 9:32 am
Phil Parkin - Thursday, June 8, 2017 9:20 AM
I was going to suggest using ALTER SCHEMA ... TRANSFER to move a table quickly between schemas. So your process would be: load the new data into a table in a staging schema, then swap the staging and live tables with a pair of transfers. These ALTER SCHEMA statements are metadata-only changes, so they should be quick operations.
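A sketch of that process, assuming invented schema and object names (live, staging, hold, Report, dbo.SourceView):

```sql
-- One-time setup: a "live" schema that users query, a "staging" schema
-- the job loads into, and a "hold" schema used as a parking spot.
CREATE SCHEMA live;
CREATE SCHEMA staging;
CREATE SCHEMA hold;
CREATE TABLE live.Report    (id int NOT NULL, data varchar(20));
CREATE TABLE staging.Report (id int NOT NULL, data varchar(20));

-- Hourly refresh:
-- 1. Load the new data into the staging table.
TRUNCATE TABLE staging.Report;
INSERT INTO staging.Report (id, data)
SELECT id, data FROM dbo.SourceView;   -- dbo.SourceView is a placeholder

-- 2. Swap the tables with metadata-only transfers. The "hold" schema is
--    needed because both tables cannot be named live.Report at once.
BEGIN TRANSACTION;
    ALTER SCHEMA hold    TRANSFER live.Report;     -- old data aside
    ALTER SCHEMA live    TRANSFER staging.Report;  -- new data goes live
    ALTER SCHEMA staging TRANSFER hold.Report;     -- old table back to staging
COMMIT;
```

Queries always target live.Report; the three transfers inside the transaction touch only metadata, so the swap is near-instant regardless of row count.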
But the synonym that Phil suggests would probably be quicker. Oh well.
Thomas Rushton
blog: https://thelonedba.wordpress.com
June 8, 2017 at 1:45 pm
An option that I like is as follows.
It requires 3 tables:
1 - the main table
2 - a staging table
3 - an intermediary table to allow for an ALTER TABLE ... SWITCH out of the main table
Table 3 can either be a permanent table or created/dropped on the fly when the process runs.
One constraint is that all 3 tables must have exactly the same columns, constraints, and indexes at the time of the ALTER TABLE ... SWITCH.
This approach is quite often used with high-volume loads onto partitioned tables.
-- create 3 tables for demo
if object_id('t_primary') is not null
drop table t_primary;
create table t_primary
(id int not null
,data varchar(20)
);
alter table t_primary add constraint pk_primary primary key clustered
(id
);
if object_id('t_primary_switch') is not null
drop table t_primary_switch;
create table t_primary_switch
(id int not null
,data varchar(20)
);
alter table t_primary_switch add constraint pk_t_primary_switch primary key clustered
(id
);
if object_id('t_staging') is not null
drop table t_staging;
create table t_staging
(id int not null
,data varchar(20)
);
alter table t_staging add constraint pk_staging primary key clustered
(id
);
-- populate primary table
insert into t_primary values (1,'abc');
insert into t_primary values (2,'abc');
insert into t_primary values (3,'abc');
insert into t_primary values (4,'abc');
select 't_primary', * from t_primary;
-- populate staging table
insert into t_staging values (1,'abc_new');
insert into t_staging values (2,'abc_new');
insert into t_staging values (3,'abc_new');
insert into t_staging values (4,'abc_new');
select 't_staging', * from t_staging;
-- "clear" primary table using alter table .. switch - metadata change only so quite fast
alter table t_primary switch to t_primary_switch;
-- "populate" primary table from staging using alter table .. switch - metadata change only so quite fast
alter table t_staging switch to t_primary;
select 't_primary', * from t_primary;
select 't_staging', * from t_staging;
select 't_primary_switch', * from t_primary_switch;
-- clear down switch table
truncate table t_primary_switch;
-- cleanup demo
if object_id('t_primary') is not null
drop table t_primary;
if object_id('t_staging') is not null
drop table t_staging;
if object_id('t_primary_switch') is not null
drop table t_primary_switch;
June 9, 2017 at 6:07 am
Thanks to all of you!
I was hoping there was another way than juggling multiple tables to transfer the data... But never mind, thanks for the very interesting solutions; they are all better than anything I would have done!
I think using a synonym would be the cleanest solution, but I've decided to use the ALTER TABLE ... SWITCH code, just because I'm not very experienced with synonyms...
Everything works now: no measurable blocking can be seen, and the data stays as current as possible the whole time.
Thanks!