
How to Backup Redshift Log Views


Recently I had a discussion with my manager about the need to save the log data stored in the STL log views, after I found the following line in the AWS Redshift documentation:

"To manage disk space, the STL log views only retain approximately two to five days of log history, depending on log usage and available disk space. If you want to retain the log data, you will need to periodically copy it to other tables or unload it to Amazon S3."

To find a solution, I searched the Internet, but apart from a few Q&As I could not find a complete solution ready to implement. I therefore decided to build a simple solution and share it with the community. My manager and I were interested in only the five main system log views/tables, copied to new backup tables using an AWS Redshift scheduled query.

A list of the views is available at this link: https://docs.aws.amazon.com/redshift/latest/dg/c_intro_STL_tables.html
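The documentation quoted above also mentions unloading the log data to Amazon S3 as an alternative. I stayed with backup tables, but for reference an UNLOAD of one of these views might look roughly like the following sketch (the bucket path and IAM role are placeholders, not values from my environment):

-- Hypothetical UNLOAD of a log view to S3; replace the bucket and role ARN with your own.
UNLOAD ('SELECT * FROM stl_query')
TO 's3://my-log-archive-bucket/stl_query/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-unload-role'
FORMAT AS PARQUET;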

My Solution

I created a new schema named "audit" using the following script:

CREATE SCHEMA audit;
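One small note: if the script might be re-run (for example as part of a deployment), Redshift's IF NOT EXISTS form avoids an error when the schema already exists:

-- Safe to run repeatedly; does nothing if the schema is already there.
CREATE SCHEMA IF NOT EXISTS audit;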

After creating the schema, I created five tables in the audit schema to back up the five SVL/STL views:

CREATE TABLE "audit"."bck_svl_stored_proc_call"
(
userid integer encode az64,
session_userid integer encode az64,
query integer encode az64,
label character(320) encode lzo,
xid bigint encode az64,
pid integer encode az64,
database character(32) encode lzo,
querytxt character(500) encode lzo,
starttime timestamp without time zone encode az64,
endtime timestamp without time zone encode az64,
aborted integer encode az64,
from_sp_call integer encode az64,
modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);
---------------------------
CREATE TABLE "audit"."bck_svl_stored_proc_messages"
(
userid integer encode az64,
session_userid integer encode az64,
pid integer encode az64,
xid bigint encode az64,
query integer encode az64,
recordtime timestamp without time zone encode az64,
loglevel integer encode az64,
loglevel_text character varying(9) encode lzo,
message character(1024) encode lzo,
linenum integer encode az64,
querytxt character(500) encode lzo,
label character(320) encode lzo,
aborted integer encode az64,
modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
)
distkey(query);
---------------------------
CREATE TABLE "audit"."bck_stl_query"
(
userid integer encode az64,
query integer encode az64,
label character(320) encode lzo,
xid bigint encode az64,
pid integer encode az64,
database character(32) encode lzo,
querytxt character(4000) encode lzo,
starttime timestamp without time zone encode az64,
endtime timestamp without time zone encode az64,
aborted integer encode az64,
insert_pristine integer encode az64,
concurrency_scaling_status integer encode az64,
modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);
---------------------------
CREATE TABLE "audit"."bck_stl_load_commits"
(
userid integer encode az64,
query integer encode az64,
slice integer encode az64,
name character(256) encode lzo,
filename character(256) encode lzo,
byte_offset integer encode az64,
lines_scanned integer encode az64,
errors integer encode az64,
status integer encode az64,
curtime timestamp without time zone encode az64,
file_format character(16) encode lzo,
is_partial integer encode az64,
start_offset bigint encode az64,
modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);
---------------------------
CREATE TABLE "audit"."bck_stl_load_errors"
(
userid integer encode az64,
slice integer encode az64,
tbl integer encode az64,
starttime timestamp without time zone encode az64,
session integer encode az64,
query integer encode az64,
filename character(256) encode lzo,
line_number bigint encode az64,
colname character(127) encode lzo,
type character(10) encode lzo,
col_length character(10) encode lzo,
position integer encode az64,
raw_line character(1024) encode lzo,
raw_field_value character(1024) encode lzo,
err_code integer encode az64,
err_reason character(100) encode lzo,
is_partial integer encode az64,
start_offset bigint encode az64,
modifieddatetime timestamp without time zone NOT NULL encode az64 default getdate()
);
---------------------------
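Once the tables exist, you can double-check that a backup table's column layout matches what you expect. One way is through the standard information_schema views, shown here for bck_stl_query:

-- List the columns of one backup table in definition order.
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'audit'
  AND table_name = 'bck_stl_query'
ORDER BY ordinal_position;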

I created the backup tables with the same column names as the system log views, plus one additional column, modifieddatetime. This column defaults to the current system datetime, so it captures the time each record was inserted into the backup table. That makes it easy to confirm the process is running correctly and to identify any issues with it.
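For example, a quick health check is to look at the most recent modifieddatetime in each backup table; if one of them stops advancing, that copy has a problem:

-- Most recent load time per backup table.
SELECT 'bck_stl_query' AS backup_table, MAX(modifieddatetime) AS last_loaded FROM audit.bck_stl_query
UNION ALL
SELECT 'bck_stl_load_commits', MAX(modifieddatetime) FROM audit.bck_stl_load_commits
UNION ALL
SELECT 'bck_stl_load_errors', MAX(modifieddatetime) FROM audit.bck_stl_load_errors
UNION ALL
SELECT 'bck_svl_stored_proc_call', MAX(modifieddatetime) FROM audit.bck_svl_stored_proc_call
UNION ALL
SELECT 'bck_svl_stored_proc_messages', MAX(modifieddatetime) FROM audit.bck_svl_stored_proc_messages;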

After these tables were created, I created the following stored procedure to periodically load them with data from the system tables/views:

CREATE OR REPLACE PROCEDURE audit.usp_copy_svl_tables() LANGUAGE plpgsql AS $$
BEGIN
    -- SVL_STORED_PROC_CALL: copy only rows not already present in the backup
    -- table, matching on userid, session_userid, xid and query.
    INSERT INTO audit.bck_svl_stored_proc_call
        (userid, session_userid, query, label, xid, pid, database, querytxt,
         starttime, endtime, aborted, from_sp_call)
    SELECT userid, session_userid, query, label, xid, pid, database, querytxt,
           starttime, endtime, aborted, from_sp_call
    FROM svl_stored_proc_call org
    WHERE NOT EXISTS
    (
        SELECT 1 FROM audit.bck_svl_stored_proc_call aud
        WHERE org.userid = aud.userid
          AND org.session_userid = aud.session_userid
          AND org.xid = aud.xid
          AND org.query = aud.query
    );

    -- SVL_STORED_PROC_MESSAGES: same pattern, with pid added to the match.
    INSERT INTO audit.bck_svl_stored_proc_messages
        (userid, session_userid, pid, xid, query, recordtime, loglevel,
         loglevel_text, message, linenum, querytxt, label, aborted)
    SELECT userid, session_userid, pid, xid, query, recordtime, loglevel,
           loglevel_text, message, linenum, querytxt, label, aborted
    FROM svl_stored_proc_messages org
    WHERE NOT EXISTS
    (
        SELECT 1 FROM audit.bck_svl_stored_proc_messages aud
        WHERE org.userid = aud.userid
          AND org.session_userid = aud.session_userid
          AND org.xid = aud.xid
          AND org.pid = aud.pid
          AND org.query = aud.query
    );

    -- STL_QUERY: copy only rows with a query id above the highest one
    -- already backed up.
    INSERT INTO audit.bck_stl_query
        (userid, query, label, xid, pid, database, querytxt, starttime, endtime,
         aborted, insert_pristine, concurrency_scaling_status)
    SELECT userid, query, label, xid, pid, database, querytxt, starttime, endtime,
           aborted, insert_pristine, concurrency_scaling_status
    FROM stl_query
    WHERE query >
    (
        SELECT NVL(MAX(query), 0) FROM audit.bck_stl_query
    );

    -- STL_LOAD_COMMITS: same high-water-mark approach on query id.
    INSERT INTO audit.bck_stl_load_commits
        (userid, query, slice, name, filename, byte_offset, lines_scanned, errors,
         status, curtime, file_format, is_partial, start_offset)
    SELECT userid, query, slice, name, filename, byte_offset, lines_scanned, errors,
           status, curtime, file_format, is_partial, start_offset
    FROM stl_load_commits
    WHERE query >
    (
        SELECT NVL(MAX(query), 0) FROM audit.bck_stl_load_commits
    );

    -- STL_LOAD_ERRORS: same high-water-mark approach on query id.
    INSERT INTO audit.bck_stl_load_errors
        (userid, slice, tbl, starttime, session, query, filename, line_number,
         colname, type, col_length, position, raw_line, raw_field_value,
         err_code, err_reason, is_partial, start_offset)
    SELECT userid, slice, tbl, starttime, session, query, filename, line_number,
           colname, type, col_length, position, raw_line, raw_field_value,
           err_code, err_reason, is_partial, start_offset
    FROM stl_load_errors
    WHERE query >
    (
        SELECT NVL(MAX(query), 0) FROM audit.bck_stl_load_errors
    );

EXCEPTION WHEN OTHERS THEN
    RAISE INFO 'An exception occurred / Error: % / Message: %', SQLSTATE, SQLERRM;
END;
$$;

After creating the stored procedure, I used the Amazon Redshift query scheduler to run the following call nightly, so all new log data is copied to the backup tables on a daily basis:

CALL audit.usp_copy_svl_tables();
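One way to confirm the nightly schedule is actually running is to group the copied rows by the day they arrived, for example:

-- Rows copied into the STL_QUERY backup per day; a missing day means a missed run.
SELECT TRUNC(modifieddatetime) AS backup_date,
       COUNT(*) AS rows_copied
FROM audit.bck_stl_query
GROUP BY TRUNC(modifieddatetime)
ORDER BY backup_date DESC;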

This is the method I used. If you think I missed something, please share it with me so I can improve my process.
