July 3, 2008 at 10:39 am
Actually, we have some servers, and on one server there is not enough space, so jobs are failing daily. I have been deleting old backup files. In that database all of the tables have non-clustered indexes, and I have been shrinking the database daily.
On that system 90% of the memory is occupied. Can anyone suggest how I can run the jobs without adding any extra space and without taking backups to tape drives?
July 3, 2008 at 10:58 am
What kind of jobs are you talking about?
How much space does your server have?
How much available space do you have to work with?
How large are your database and log files?
July 3, 2008 at 11:27 am
Shrinking the database daily is a very bad idea - you will end up with fragmented database files very quickly.
What backup schedules are you running: Full - how often, Differentials - how often, T-Log - how often?
How long do you retain the backup files for?
What are the sizes of your databases (MDF and LDF) files and backup files?
What are the disk volume sizes on your server?
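If you're not sure where to find those numbers, these system procedures will report them - a quick sketch ('YourDB' is a placeholder; xp_fixeddrives is undocumented but widely used):
EXEC sp_helpdb 'YourDB'        -- MDF/LDF file names and sizes for the named database
EXEC master..xp_fixeddrives    -- free MB on each fixed drive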
July 3, 2008 at 12:51 pm
Which SQL Server Edition/Version are you on?
What are your processors like, and how many?
How much memory?
HDD: how much space is in each partition? List all partitions.
List your databases and their sizes.
Are there any other applications running on this box?
Maninder
www.dbanation.com
July 3, 2008 at 1:47 pm
What is the size allocated to the DB, and how much data is in the DB? Shrinking the DB daily is not a best practice. Why can't you add more disk to the server? Check for fragmented tables and rebuild the indexes on those particular tables. Always take a full backup; if possible, take two differential backups a day and at least hourly T-log backups.
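For a single table, the check-and-rebuild cycle looks something like this ('YourTable' is a placeholder name):
DBCC SHOWCONTIG ('YourTable') WITH FAST   -- report logical fragmentation
DBCC DBREINDEX ('YourTable', '', 90)      -- rebuild all of its indexes at a 90% fill factor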
MCP, MCTS (GDBA/EDA)
July 3, 2008 at 7:04 pm
Hi Sanya,
I agree with Manoj. If possible, add one more disk to your server. Take a full backup, two differential backups, and hourly transaction log backups with the TRUNCATE_ONLY option.
July 3, 2008 at 9:45 pm
Kalyan (7/3/2008)
... transaction log backups with the TRUNCATE_ONLY option
Of course, only use the TRUNCATE_ONLY option if you don't need to be able to recover the database.
TRUNCATE_ONLY will break your recovery path!
Do NOT use this in a production environment, or if you must, run a Full backup immediately after.
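For example, a sketch - 'YourDB' and the backup path are placeholders, and note that TRUNCATE_ONLY was removed in SQL Server 2008, so this applies to 2000/2005:
BACKUP LOG YourDB WITH TRUNCATE_ONLY   -- discards log records not yet backed up; the log chain is now broken
BACKUP DATABASE YourDB                 -- re-establish a recovery point straight away
   TO DISK = 'D:\Backups\YourDB.bak' WITH INIT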
July 4, 2008 at 4:30 am
Thank you for your response.
We take a scheduled T-log backup daily, and there is no free disk space left on the hard disk. Can I do reindexing, which might somewhat increase free space, or should I add an extra hard disk?
July 4, 2008 at 4:33 am
Daily scheduled jobs take the T-log and differential backups. There is no free space left on the hard disk. Is there any remedy so the jobs can run without failing?
July 4, 2008 at 4:36 am
Hi Manoj,
I'm new to this field. Can you please explain how to check for fragmented tables and rebuild the indexes on those particular tables?
July 4, 2008 at 4:42 am
The scheduled T-log and differential backup jobs run daily. The space allocated to the MDF and LDF files is completely filled, and there is no space left on the hard disk.
July 4, 2008 at 6:22 am
Use this script to check fragmentation. It will generate the DBCC commands as well, so you can copy that output and run it against the database.
-- Declare variables
SET NOCOUNT ON
DECLARE @tablename VARCHAR (128)
DECLARE @execstr VARCHAR (255)
DECLARE @objectid INT
DECLARE @indexname VARCHAR (128)
DECLARE @frag DECIMAL
DECLARE @maxfrag DECIMAL
-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 20.0
-- Declare cursor
DECLARE tables CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
-- Create the table
CREATE TABLE #fraglist (
ObjectName CHAR (255),
ObjectId INT,
IndexName CHAR (255),
IndexId INT,
Lvl INT,
CountPages INT,
CountRows INT,
MinRecSize INT,
MaxRecSize INT,
AvgRecSize INT,
ForRecCount INT,
Extents INT,
ExtentSwitches INT,
AvgFreeBytes INT,
AvgPageDensity INT,
ScanDensity DECIMAL,
BestCount INT,
ActualCount INT,
LogicalFrag DECIMAL,
ExtentFrag DECIMAL)
-- Open the cursor
OPEN tables
-- Loop through all the tables in the database
FETCH NEXT
FROM tables
INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
-- Do the showcontig of all indexes of the table
INSERT INTO #fraglist
EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
FETCH NEXT
FROM tables
INTO @tablename
END
-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables
-- Declare cursor for list of indexes to be defragged
DECLARE indexes CURSOR FOR
SELECT ObjectName, ObjectId, IndexName, LogicalFrag
FROM #fraglist
WHERE LogicalFrag >= @maxfrag
AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0
ORDER BY LogicalFrag DESC
-- Open the cursor
OPEN indexes
-- loop through the indexes
FETCH NEXT
FROM indexes
INTO @tablename, @objectid, @indexname, @frag
WHILE @@FETCH_STATUS = 0
BEGIN
-- Print a comment showing the index and its current fragmentation
PRINT '-- DBCC DBREINDEX (' + CHAR(39) + RTRIM(@tablename)
   + CHAR(39) + ', ' + CHAR(39) + RTRIM(@indexname) + CHAR(39)
   + ', 90) - fragmentation currently ' + RTRIM(CONVERT(VARCHAR(15), @frag)) + '%'
-- Build the rebuild command (90 = fill factor)
SELECT @execstr = 'DBCC DBREINDEX (' + CHAR(39) + RTRIM(@tablename)
   + CHAR(39) + ', ' + CHAR(39) + RTRIM(@indexname) + CHAR(39) + ', 90)'
--EXEC (@execstr) -- uncomment to rebuild immediately instead of just scripting
SELECT @execstr = @execstr + CHAR(13) + 'GO'
PRINT @execstr
FETCH NEXT
FROM indexes
INTO @tablename, @objectid, @indexname, @frag
END
-- Close and deallocate the cursor
CLOSE indexes
DEALLOCATE indexes
-- Show the full fragmentation report
SELECT * FROM #fraglist
-- Delete the temporary table
DROP TABLE #fraglist
GO
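For reference, each index over the threshold produces output like this (the table and index names here are made up):
-- DBCC DBREINDEX ('Orders', 'IX_Orders_CustomerID', 90) - fragmentation currently 34%
DBCC DBREINDEX ('Orders', 'IX_Orders_CustomerID', 90)
GO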
MCP, MCTS (GDBA/EDA)
July 4, 2008 at 9:11 am
Defragmenting indexes is a good idea, but it will actually make your current problem worse - reindexing is a logged operation, and it will grow your LDF file.
You need to:
(1) Delete the older BAK / TRN backup files, to clear at least enough room for a set of Full backups,
(2) Run "BACKUP LOG [your db name here] WITH TRUNCATE_ONLY", on each database,
(3) Run a Full backup on each database,
(4) Shrink the LDF files back to a reasonable size. The easiest way is from SSMS: right-click the database, select Tasks | Shrink | Files, set File Type to Log, select the middle radio button in the lower section ("Reorganize pages before releasing unused space"), enter the target size in the "Shrink file to" box, and click OK. Repeat this step to confirm the shrink worked. (A T-SQL equivalent is sketched below.)
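If you'd rather script step (4), a minimal sketch - the logical log file name and target size are placeholders; run sp_helpfile in the database to find the logical name:
USE YourDB
GO
DBCC SHRINKFILE (YourDB_Log, 500)   -- shrink the log file down to 500 MB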
Going forward, make sure that you aren't keeping backup files for too long - we operate on the requirements that (1) everything is always copied to tape before being deleted, and (2) we always have a recovery path on disk.
You are also going to need to keep track of the database growth, for both data and log files. Don't unnecessarily shrink db files (MDF, NDF or LDF) - if the files are constantly re-growing, that is a very expensive operation for SQL, and performance will be heavily impacted, both while the files are growing and later on due to fragmented db files on disk. One way to limit that churn is sketched below.
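Pre-size the files and set a fixed growth increment instead of the small default - a sketch, with placeholder database and logical file names:
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_Log, FILEGROWTH = 100MB)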
I have posted a set of scripts for standardizing backups on this site: http://www.sqlservercentral.com/scripts/Backup/62095/. I also have an updated version that includes LiteSpeed backups; if you need that, let me know. The script creates routines and jobs to (1) run the backups (Full / Differential / T-Log) with some built-in intelligence, and (2) delete the backup files after defined retention periods.
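The linked script does much more, but the core of a date-stamped backup looks something like this (the database name and folder are placeholders):
DECLARE @file VARCHAR(260)
SET @file = 'D:\Backups\YourDB_Full_'
   + CONVERT(VARCHAR(8), GETDATE(), 112) + '.bak'   -- e.g. ..._20080704.bak
BACKUP DATABASE YourDB TO DISK = @file WITH INIT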
July 8, 2008 at 3:36 pm
Are you on a production server?
If not, you could delete the log files, as these generally take up a huge amount of space over time.
July 9, 2008 at 4:00 am
Hi,
Even on a non-prod server, deleting .ldf files is a bad habit to fall into as a general practice - they provide for the consistency and recoverability of your databases. If they're out of control in dev, chances are they'll be out of control in production too, where you definitely shouldn't be deleting them. I'd follow Simon's advice - increase the frequency of your log backups and monitor file sizes. Also, try to find out what's causing the growth by looking at what the code's doing. If the code's inefficient then rewrite it. If the code's OK, buy more disk space.
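A quick way to keep an eye on log growth is this standard command (no names assumed):
DBCC SQLPERF (LOGSPACE)   -- log size and percent used for every database on the instance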
Cath