July 13, 2012 at 4:08 am
Guys, were you able to review this?
Let me know if you need anything more before I can destroy that DB.
July 13, 2012 at 8:41 am
Take a backup of it please, zip it and upload to some file share.
Also please detail step by step how you achieved that.
Right now my server is running some performance benchmarks and that's more important.
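A minimal sketch of taking that backup before zipping it; the database name and destination path are placeholders, not from the thread:

```sql
-- Full backup to a single file; WITH INIT overwrites any existing
-- backup sets in that file. Zip and upload the .bak afterwards.
BACKUP DATABASE BigLogTest
TO DISK = 'E:\BigLogTest.bak'
WITH INIT;
```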
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 13, 2012 at 5:17 pm
Will do it as soon as I get back on my PC.
July 14, 2012 at 5:40 am
Attached files in the main post & here on connect
July 14, 2012 at 5:45 am
Jonathan Kehayias (7/11/2012)
Post a screenshot of the file properties from Windows Explorer that shows its physical size on disk. I can hack up the metadata in SQL and make the output of those queries show invalid information with relative ease, and it is not entirely uncommon to have incorrect metadata show up from time to time.
Jonathan, other than comparing sysfiles, sysaltfiles & similar tables/views, can you tell me how to spot incorrect metadata? Recently I've been seeing this a lot on SQL 2000.
July 14, 2012 at 7:00 am
DBCC CHECKDB
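For reference, a minimal invocation of the check suggested above (the database name MyDB is a placeholder):

```sql
-- Full consistency check; WITH NO_INFOMSGS suppresses informational
-- messages so only errors are reported.
DBCC CHECKDB (MyDB) WITH NO_INFOMSGS;
```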
Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
My Blog | Twitter | MVP Profile
Training | Consulting | Become a SQLskills Insider
Troubleshooting SQL Server: A Guide for Accidental DBAs
July 14, 2012 at 7:31 am
yup (7/14/2012)
Attached files in the main post & here on connect
Files not accessible.
Still waiting for the steps to repro. A simple CREATE DATABASE does not work.
Edit: and to be utterly clear, this is the E drive:
If it expanded by autogrow, I really want to hear how you managed to fill a 2 TB log with a 4MB data file. By my rough calculations that would have required somewhere around 400 000 updates of that full 4MB.
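The rough arithmetic behind that estimate, assuming each update rewrites and logs roughly the full 4 MB data file:

```sql
-- 2 TB expressed in MB, divided by 4 MB logged per update.
-- Per-record log overhead means each update actually logs more
-- than 4 MB, pushing the real count below this naive figure,
-- towards the ~400 000 quoted above.
SELECT (2 * 1024 * 1024) / 4 AS naive_update_count; -- = 524288
```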
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 14, 2012 at 8:05 am
And to be even more utterly clear
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 14, 2012 at 5:37 pm
Mind if I email you the files?
July 14, 2012 at 5:37 pm
Jonathan Kehayias (7/14/2012)
DBCC CHECKDB
Is there any other quick way, by reading some specific page/headers?
July 14, 2012 at 6:00 pm
CHECKDB would be faster than trying to check the metadata manually; that is why it exists in the product. If you have corrupt metadata in 2000, it typically means someone manually changed one of the system tables; it's not something that SQL Server did. If you understand the database layout and page structures, I guess you could do it manually, but that isn't something you are going to get explained in a forum thread; there are entire books on SQL Server database internals.
Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
My Blog | Twitter | MVP Profile
Training | Consulting | Become a SQLskills Insider
Troubleshooting SQL Server: A Guide for Accidental DBAs
July 14, 2012 at 7:52 pm
Yep, I've used those, but I was looking for an easier way; in a lot of my cases it's mostly on SQL 2000. I'm seeing a difference between sysaltfiles & sysfiles: not only is there a difference in the space reported, but also in the file count. sysfiles has always been correct in what I've seen, but because the SSMS GUI picks its contents from sysaltfiles, it shows wrong readings. Anyway, this will need a separate thread in itself; for now it's the > 2 TB log file.
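A sketch of the comparison described above, for SQL 2000 system tables; it lists files whose sizes disagree between the per-database sysfiles view and master..sysaltfiles, or that appear in only one of the two:

```sql
-- Compare sysfiles (current database) against master..sysaltfiles
-- (server-wide, keyed by dbid). Sizes are in 8 KB pages.
SELECT f.fileid, f.name,
       f.size AS sysfiles_pages,
       a.size AS sysaltfiles_pages
FROM sysfiles AS f
FULL OUTER JOIN
     (SELECT fileid, size
      FROM master..sysaltfiles
      WHERE dbid = DB_ID()) AS a
  ON a.fileid = f.fileid
WHERE f.size <> a.size
   OR f.fileid IS NULL
   OR a.fileid IS NULL;
```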
July 14, 2012 at 10:05 pm
How did it start?
Initially it came out of a setup where my PSH (PowerShell) program would collect tweets & push them into a table. Auto file growth was disabled on the DB & log; on alerts I would simply grow the files to CurrentSz + 100 MB. One fine day I couldn't expand any more; when I checked, the HDD was out of space because of a log file that was around 4.1 GB.
That was on 2 HDDs (2 + 3 TB) turned into a striped software RAID (Dynamic disks).
How was the log file expanded?
To re-check, I pulled out another 3 TB HDD, reformatted as a single partition, 2.7x TB, GPT basic. I created a DB from T-SQL with a 4 MB data file & a 512 KB log file. From there I manually expanded the log some space at a time. I wouldn't dare create a DB with a 2 TB LDF up front (scared of the time the zero-fill would take).
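The steps described above can be sketched roughly as follows; the database name, file names, and E: paths are placeholders, and the growth target is illustrative:

```sql
-- Create the test database: 4 MB data file, 512 KB log file.
CREATE DATABASE BigLogTest
ON (NAME = BigLogTest_dat,
    FILENAME = 'E:\BigLogTest.mdf',
    SIZE = 4MB)
LOG ON (NAME = BigLogTest_log,
        FILENAME = 'E:\BigLogTest.ldf',
        SIZE = 512KB);

-- Then grow the log manually, some space at a time; log growth is
-- zero-filled, so each step takes time proportional to its size.
ALTER DATABASE BigLogTest
MODIFY FILE (NAME = BigLogTest_log, SIZE = 100GB);
```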
July 15, 2012 at 2:57 am
yup (7/14/2012)
Yep, I've used those, but I was looking for an easier way; in a lot of my cases it's mostly on SQL 2000.
On SQL 2000 CheckDB doesn't run CheckCatalog, so you'll have to run that separately.
CheckDB and CheckCatalog are the easier way.
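On SQL 2000 that means running both commands, along these lines (MyDB is a placeholder):

```sql
-- On SQL 2000, CHECKDB does not include the catalog checks,
-- so run CHECKCATALOG as a separate step.
DBCC CHECKDB (MyDB) WITH NO_INFOMSGS;
DBCC CHECKCATALOG (MyDB);
```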
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 15, 2012 at 5:14 am
Hey, thanks for the inside peek. Let me know if you are able to get the attachments.