July 7, 2009 at 1:25 pm
Hi Friends,
I have a scenario and just need to understand what happens inside SQL Server.
I stopped my SQL Server on purpose and went to the physical location where the data and log files are stored.
I deleted the msdb transaction log (.LDF) and restarted SQL Server, and SQL Server rebuilt the msdb LDF file. I can clearly see that in the physical location as well as in my ERRORLOG.
In a similar way, I created a database "testdb", inserted some rows, stopped my SQL Server, deleted the .ldf of "testdb" and restarted SQL Server. This time the database was left in the Recovery Pending state (correct me if the state I mentioned is right or not), in which I cannot expand the database node. So what I did was stop my SQL Server again and create a dummy .ldf file. Before stopping SQL Server, I checked the file names using the following query.
use master
go
select name,physical_name
from sys.master_files
where name like '%testdb%'
go
I created the dummy LDF file and tried to restart SQL Server, but it was still the same.
As far as I know, I think there is an LSN mismatch between the MDF and the new LDF file (correct me if I'm wrong). If so, how was SQL Server able to rebuild the msdb LDF file?
Note: assume that I don't have any backup of my user database either.
Can anyone help me understand what is happening inside SQL Server?
testdb database creation script
---------------------------------
USE master
go
CREATE DATABASE testdb
go
USE testdb
go
CREATE TABLE t1
(id INT)
GO
INSERT INTO t1 SELECT 555
GO 1000
July 7, 2009 at 2:46 pm
Delete that 'fake' ldf that you created and then follow the advice given here.
http://sqlinthewild.co.za/index.php/2009/06/09/deleting-the-transaction-log/
The database is recovery-pending because it wasn't shut down cleanly. In that case the log file is needed to recover the database to a consistent state. You should not ever delete the transaction log. Just because SQL will sometimes recreate it doesn't mean it can always recreate it.
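A quick way to confirm what state SQL Server has left the database in (a sketch; `sys.databases` and its `state_desc` column are available from SQL Server 2005 onward):

```sql
-- state_desc will show RECOVERY_PENDING, SUSPECT, ONLINE, etc.
SELECT name, state_desc
FROM sys.databases
WHERE name = 'testdb';
```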
You should always have a backup of the database as well.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 7, 2009 at 4:31 pm
mahesh.vsp (7/7/2009): In a similar way what i tried was, created a database "testdb" inserted 100 rows and stop my sql server, deleted the .ldf of "testdb" and restarted my sql server.
:w00t: you are like Lex Luthor!
Why don't you use your time and imagination to test something productive? 😉
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
July 7, 2009 at 10:52 pm
Hi Paul,
I'm new to SQL Server administration. I was trying to learn how to break things apart and how to put them back together when something happens.
It was just out of personal interest! :-)
Thanks Guys!
July 7, 2009 at 11:08 pm
Here is your answer, my friend:
1) alter database test set emergency
2) In SQL Server 2000 we used to use this command:
dbcc rebuild_log ('test','C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\test_log.ldf')
This will give you an error because it is not supported in 2005.
3) Detach the database while it is in emergency mode.
4) Use this command:
create database test on (NAME = test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\test.mdf') for attach_rebuild_log
This may give you an error saying that the database was not cleanly shut down. If it doesn't, you have won the battle. If it does error, go to the next step.
5) create database test on (NAME = test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\test.mdf') for attach_force_rebuild_log
NOTE: attach_force_rebuild_log is not documented 🙂 ... but it works fine.
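Put together, steps 3-5 might look like the sketch below (paths are just examples from my machine; work on a copy of the MDF if you can, since ATTACH_FORCE_REBUILD_LOG is undocumented and discards the log):

```sql
ALTER DATABASE test SET EMERGENCY;
EXEC sp_detach_db 'test';

-- Try the documented rebuild first
CREATE DATABASE test
ON (NAME = test,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\test.mdf')
FOR ATTACH_REBUILD_LOG;

-- Only if the above fails with "not cleanly shut down":
-- CREATE DATABASE test
-- ON (NAME = test,
--     FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\test.mdf')
-- FOR ATTACH_FORCE_REBUILD_LOG;
```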
Regards
Abhay Chaudhary
Sr.DBA (MCITP/MCTS :SQL Server 2005/2008 ,OCP 9i)
July 7, 2009 at 11:15 pm
Note: you won't be able to detach the database without taking it to emergency mode first. It will complain about the missing log file.
Regards
Abhay Chaudhary
Sr.DBA (MCITP/MCTS :SQL Server 2005/2008 ,OCP 9i)
July 8, 2009 at 12:54 am
Hi Gail,
The Article was simply awesome. Thanks so much man!
Abhay, thanks once again my friend!
July 8, 2009 at 1:02 am
hi_abhay78 (7/7/2009)
5) create database test on( NAME = test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\test.mdf' )for attach_force_rebuild_log..
Don't forget to run CheckDB afterwards to ensure that there's no physical or logical corruption. It's definitely a possibility, especially if the DB was half way through a DDL statement when it was shut down.
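For example (WITH NO_INFOMSGS just suppresses the informational messages; both options are part of the documented DBCC CHECKDB syntax):

```sql
-- Full consistency check of the reattached database
DBCC CHECKDB ('test') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```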
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 8, 2009 at 6:31 am
Sure Gail!
Thanks once again.
July 9, 2009 at 1:44 pm
Hi Gail & Abhay,
I tested the above scenarios and they work perfectly.
While doing so, a few questions came to mind. Here is one regarding shrinking of transaction logs.
Gail, I really need your thoughts on this!
Scenario
------------
I have created a database as follows with restricted growth on my transaction log.
use master
go
create database testdb
on (name = testdb_dat_01,
filename = 'D:\SQLData\testdb_dat_01.mdf',
size = 10MB,
filegrowth = 5MB,
maxsize = unlimited -- unrestricted growth
)
log on (name = testdb_log_01,
filename = 'D:\SQLData\testdb_log_01.ldf',
size = 5MB,
maxsize = 10MB -- restricted growth
);
go
-- set full recovery model
alter database testdb
set recovery full
go
USE testdb
GO
SELECT @@SPID
GO
SET IMPLICIT_TRANSACTIONS ON;
GO
Create table t1
(id int);
go
COMMIT TRAN
go
Create table t2
(id int)
insert into t2
select 1
union all
select 2
commit tran
checkpoint
select * from t1
select * from t2
-- Now run the indefinite loop below and the log full error (Msg 9002) occurs
/* Simulating LOG FULL ERROR */
--------------
declare @i int
set @i = 1
while @i > 0
begin
    insert into t2 select @i
    set @i = @i + 1
end
Then, from the database's shrink-file dialog, I estimated the shrink size manually and gave 15 MB for the transaction log, and the shrink succeeded. By here I think you are clear with what I have done.
Here is where a question arose in my mind.
Here I am the only user who is accessing the database and doing data modifications/transactions, so I know which ones have been committed and I can predict approximately how many MB need to be shrunk. A manual checkpoint was issued to ensure that all my committed transactions are written to the MDF file. Gail, I think issuing a checkpoint manually is not good practice, but I am doing it for this example. So I am able to predict how much I can shrink my log file. Right? But in real time many users will be accessing the database, so how can I assess how many MB to release when shrinking my log file? I really need your thoughts on this!
Or else, are there any other best practices you follow when encountering such scenarios?
Seeking more inputs!
Thanks in Advance.
July 9, 2009 at 1:53 pm
Why do you want to shrink the log at all? You should manage the log so that the file size is adequate for the activity in your database and the frequency of your log backups.
If you're getting an error saying that the log is full it means that the file is too small, so you need to grow it, not shrink it. You can't shrink a log that's full because there's no space within the file to release to the OS.
Shrinking the log, especially on a regular basis is a bad idea. It's just going to have to grow again.
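To size the log sensibly, first see how full it actually gets, then grow it once to an adequate size rather than letting it autogrow in small increments (file name here matches your earlier create script; a sketch):

```sql
-- Percentage of each database's log currently in use
DBCC SQLPERF (LOGSPACE);

-- Grow the log once to a size that fits normal activity
ALTER DATABASE testdb
MODIFY FILE (NAME = testdb_log_01, SIZE = 50MB, MAXSIZE = UNLIMITED);
```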
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 9, 2009 at 2:26 pm
Hi Gail,
Now I understand the importance of the log, but then in what real-time scenarios do we go for shrinking?
July 9, 2009 at 2:36 pm
Only after an unusual activity has grown the log beyond what is needed for normal database activity.
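In that one-off case, something like this (target size in MB; in full recovery, back up the log first so the active portion moves to the start of the file; the backup path is just an example):

```sql
BACKUP LOG testdb TO DISK = 'D:\SQLBackups\testdb_log.trn';
DBCC SHRINKFILE (testdb_log_01, 15);  -- shrink back to ~15 MB
```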
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 9, 2009 at 2:38 pm
[highlight=#ffff11]You can't shrink a log that's full because there's no space within the file to release to the OS.[/highlight]
I want to add a point to the above statement.
When I run the indefinite loop given above, I believe it is modifying data in dirty pages in memory and also writing into the T-log, and that is why it throws the Msg 9002 log full error. But when I query the table, I can see only the data that was previously consistent, not the data I inserted using the loop. That means it is still an active transaction which is not yet committed but has filled the log. Right?
If that is the case, even after I got the log full error, in practice I tried to insert a few records (10-50) into the tables and it still allowed the inserts. What is happening inside my log? How is it able to allocate space inside my log?
Can you please explain?
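While testing, I also looked at what the log is waiting for before it can reuse space (`log_reuse_wait_desc` is a column of `sys.databases` in 2005):

```sql
-- Shows ACTIVE_TRANSACTION, LOG_BACKUP, NOTHING, etc.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'testdb';
```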
July 9, 2009 at 2:41 pm
Can you give me any specific examples of "unusual activities"?
I need some clarity on this!
Don't take me wrong, friend. I was just curious.