October 19, 2017 at 2:52 am
I have inherited a system that uses MS SQL Server 2014 (and 2008 R2) Express as the database server.
It's used for creating serial numbers. Each day a maximum of 10,000 serial numbers can be created.
For each serial number, about 25 MB of data is stored in the database.
There are two tables that grow, SerialNo and Metadata; Metadata contains most of the actual data (>24 MB).
This works very well, except for the fact that the system's creators chose to use the Express
version of MS SQL Server, which has a 10 GB limit on database size!
All the data is also stored on a remote server, using a "very clever" script that relies on triggers and
a permanently running stored procedure...
So the system only needs a couple of days of data in the local (Express) database. The rest can
(and has to) be deleted.
The system is used 24/7.
What would be the best way of managing the database so that it stays within the 10 GB limit?
I was thinking I could create a SQL script that deletes all records older than 5 days and run it
(via the Task Scheduler and SQLCMD) once a day. I don't want any table locks, since we are inserting data continuously.
Must the database be shrunk to make the deleted space available again?
If so, can I do it from the script (still without locking the tables)?
As you can see, I'm NOT a database administrator, just a programmer who has had this dropped in his lap...
// Anders
My attempt at a script that will delete old rows:
-- BEGIN OF SCRIPT
-- Delete all records from tables SerialNo and Metadata that are older than 5 days
USE TheDatabase;
GO
DECLARE @numberOfDays INT = 5;
DECLARE @batchSize INT = 1000;
-- Pre-compute the cutoff once, and compare the column directly against it.
-- Wrapping rowCreatedDT in DATEDIFF would prevent SQL Server from using an
-- index on that column, forcing a scan on every batch.
DECLARE @cutoff DATETIME = DATEADD(DAY, -@numberOfDays, GETDATE());

-- Delete in small batches so each transaction stays short and lock
-- escalation to a full table lock is unlikely.
WHILE 1 = 1
BEGIN
    DELETE TOP (@batchSize) FROM SerialNo WHERE rowCreatedDT < @cutoff;
    IF @@ROWCOUNT = 0 BREAK;
END;
WHILE 1 = 1
BEGIN
    DELETE TOP (@batchSize) FROM Metadata WHERE rowCreatedDT < @cutoff;
    IF @@ROWCOUNT = 0 BREAK;
END;
GO
-- END OF SCRIPT
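Before running a destructive script like this against a production database, it can help to preview how many rows the predicate will actually hit. A minimal read-only check, assuming the same five-day cutoff:

```sql
USE TheDatabase;
GO
DECLARE @cutoff DATETIME = DATEADD(DAY, -5, GETDATE());

-- Count the rows each DELETE loop would remove, without touching any data
SELECT 'SerialNo' AS TableName, COUNT(*) AS RowsToDelete
FROM SerialNo WHERE rowCreatedDT < @cutoff
UNION ALL
SELECT 'Metadata', COUNT(*)
FROM Metadata WHERE rowCreatedDT < @cutoff;
GO
```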
Metadata structure:
CREATE TABLE [dbo].[Metadata](
[ID] [int] IDENTITY(1,1) NOT NULL,
[SerialNumber] [nvarchar](32) NOT NULL,
[ScannerTypeID] [nvarchar](32) NOT NULL,
[MetadataID] [nvarchar](32) NOT NULL,
[Metadata] [nvarchar](32) NULL,
[rowCreatedDT] [datetime] NOT NULL CONSTRAINT [DF_Metadata_rowCreatedDT] DEFAULT (getdate()),
CONSTRAINT [PK_Metadata] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
SerialNo structure:
CREATE TABLE [dbo].[SerialNo](
[ID] [int] IDENTITY(1,1) NOT NULL,
[SerialNumber] [nvarchar](32) NOT NULL,
[Status] [int] NOT NULL CONSTRAINT [DF_SerialNo_Status] DEFAULT ((0)),
[MaskinID] [nvarchar](32) NOT NULL,
[MetodID] [nvarchar](32) NOT NULL,
[Artikelnummer] [nvarchar](32) NOT NULL,
[Date] [date] NOT NULL,
[Sequence] [int] NOT NULL,
[BatchID] [nvarchar](12) NULL,
[rowCreatedDT] [datetime] NOT NULL CONSTRAINT [DF_SerialNo_rowCreatedDT] DEFAULT (getdate()),
[rowChangedDT] [datetime] NOT NULL CONSTRAINT [DF_SerialNo_rowChangedDT] DEFAULT (getdate()),
[DetaljIndex] [nvarchar](32) NULL,
[NoTriggerExec] [nchar](1) NULL,
CONSTRAINT [PK_SerialNo] PRIMARY KEY CLUSTERED
(
[SerialNumber] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
October 19, 2017 at 3:14 am
Anders
If your database has a 10GB size limit and you're always hovering near that limit, there's no point in shrinking it. You won't get much space back, and it'll be needed again soon anyway.
As long as you delete old rows fairly frequently and your indexing is good, you shouldn't have any problems with blocking. You might consider putting the clustered index on rowCreatedDT and making the primary key nonclustered, so that date-range deletes touch a contiguous slice of the table. Perhaps also put a one-second (say) delay in your loop to allow other processes to access the table between batches of deletes.
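Sketched out for the Metadata table, those two suggestions could look roughly like this (PK_Metadata comes from the posted DDL; the new index name is illustrative, and dropping the clustered primary key rewrites the whole table, so this belongs in a quiet period):

```sql
-- Re-cluster Metadata on rowCreatedDT so old rows sit together on disk,
-- and keep the primary key as a nonclustered index on ID.
-- WARNING: this rebuilds the table; run it during a maintenance window.
ALTER TABLE dbo.Metadata DROP CONSTRAINT PK_Metadata;
CREATE CLUSTERED INDEX IX_Metadata_rowCreatedDT ON dbo.Metadata (rowCreatedDT);
ALTER TABLE dbo.Metadata ADD CONSTRAINT PK_Metadata PRIMARY KEY NONCLUSTERED (ID);

-- Inside the delete loop, pause between batches so other sessions can get in:
WAITFOR DELAY '00:00:01';  -- one-second pause per batch
```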
John
October 19, 2017 at 5:10 am
Hello John,
Just to make sure that I understand.
Shrink is just used to reduce the physical file size. The space the deleted rows took will be available again without my doing a shrink.
Thank you for your reply!
// Anders
October 19, 2017 at 5:17 am
Anders
Yes, shrinking a file reduces the physical space it takes on disk. Without a shrink, the space that the deleted rows took will be available for more data to be added to the database, but not available to the operating system. (That's a simplification, since it doesn't take into account free space on data pages, which won't be recovered unless you rebuild your indexes and/or tables.)
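To see how much of the allocated file space is actually unused (that is, reusable without a shrink), the built-in sp_spaceused procedure reports both figures:

```sql
USE TheDatabase;
GO
-- database_size = physical file size; unallocated space = free room inside it
EXEC sp_spaceused;

-- Per-table breakdown of reserved vs. actual data space
EXEC sp_spaceused N'dbo.Metadata';
EXEC sp_spaceused N'dbo.SerialNo';
GO
```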
John