SQL Server ignoring maximum file size on transaction log

  • Guys, were you able to review this?

    Let me know if you need anything more before I can destroy that DB.

  • Take a backup of it please, zip it and upload to some file share.

    Also please detail step by step how you achieved that.

    Right now my server is running some performance benchmarks and that's more important.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Will do it as soon as I get back on my PC.

  • Jonathan Kehayias (7/11/2012)


    Post a screenshot of the file properties from Windows Explorer that shows its physical size on disk. I can hack up the metadata in SQL and make the output of those queries show invalid information with relative ease, and it is not entirely uncommon to have incorrect metadata show up from time to time.

    Jonathan, other than comparing sysfiles, sysaltfiles and similar tables/views, can you tell me how to spot incorrect metadata? Recently I'm seeing this a lot on SQL 2000.
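
    For reference, this is the kind of comparison I've been doing so far (a rough sketch only; the database name is a placeholder) -- pulling the size SQL Server's metadata reports for each file, in MB, so it can be held up against the physical size Windows Explorer shows:

    -- Rough sketch; 'MyDatabase' is a placeholder name.
    -- size is stored in 8 KB pages, so divide by 128 to get MB.
    SELECT name,
           filename,
           size / 128 AS size_mb
    FROM   master..sysaltfiles
    WHERE  dbid = DB_ID('MyDatabase');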

  • DBCC CHECKDB

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • yup (7/14/2012)


    Attached files in the main post & here on connect

    Files not accessible.

    Still waiting for the steps to repro. A simple CREATE DATABASE does not work.

    Edit: and to be utterly clear, this is the E drive:

    If it expanded by autogrow, I really want to hear how you managed to fill a 2 TB log with a 4 MB data file. By my rough calculations, that would have required somewhere around 400 000 updates of that full 4 MB.
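
    Back of the envelope (purely illustrative; log records carry overhead beyond the data they change, which only pushes the real count down towards the figure above):

    -- 2 TB of log expressed in MB, divided by the 4 MB data file: the number of
    -- times the entire data file would have to be rewritten to produce that much log.
    SELECT (2.0 * 1024 * 1024) / 4 AS full_rewrites_of_the_4mb_file;   -- ~524 288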

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • And to be even more utterly clear

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Mind if I email you the files?

  • Jonathan Kehayias (7/14/2012)


    DBCC CHECKDB

    Is there any other quick way, say by reading some specific page or header?

  • CHECKDB would be faster than trying to check the metadata manually; that is why it exists in the product. If you have corrupt metadata in 2000, it typically means someone manually changed one of the system tables; it's not something that SQL Server did. If you understand the database layout and page structures, I guess you could do it manually, but that isn't something you are going to get explained in a forum thread; there are entire books on SQL Server database internals.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • Yep, I've used those, but was looking for an easier way; in lots of cases with me it's mostly on SQL 2000. I'm seeing a difference between sysaltfiles and sysfiles: not only is there a difference in the space reported, but also in the file count. Sysfiles has always been correct in what I've seen, but because the SSMS GUI picks up its contents from sysaltfiles it shows wrong readings. Anyway, that will need a separate thread in itself; for now it's the > 2 TB log file.
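
    For what it's worth, this is roughly the comparison I run when I suspect the two disagree (a minimal sketch; the database name is a placeholder):

    -- Compare what the database's own sysfiles reports against the row
    -- master..sysaltfiles holds for the same file. Any row returned means
    -- the two copies of the metadata disagree; size is in 8 KB pages.
    -- (Swap the JOIN for a FULL OUTER JOIN to also catch file-count differences.)
    USE MyDatabase;   -- placeholder name
    SELECT f.fileid,
           f.name,
           f.size / 128 AS size_mb_sysfiles,
           a.size / 128 AS size_mb_sysaltfiles
    FROM   sysfiles AS f
    JOIN   master..sysaltfiles AS a
           ON a.dbid = DB_ID() AND a.fileid = f.fileid
    WHERE  f.size <> a.size;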

  • How did it start?

    Initially it came out of a setup where my PowerShell (PSH) program collects tweets and pushes them into a table. Autogrowth was disabled on both the data and log files; on alerts I would simply grow the files to the current size + 100 MB. One fine day I couldn't expand any more, and when I checked, the HDD was out of space because of a log file that was around 4.1 GB.

    Now, this was a case on 2 HDDs (2 + 3 TB) turned into a striped software RAID (dynamic disks).

    How was the log file expanded?

    To re-check, I pulled out another 3 TB HDD and reformatted it as a single ~2.7 TB partition (GPT, basic disk). I created a DB from T-SQL with a 4 MB data file and a 512 KB log file. From there on I manually expand the log some space at a time; I wouldn't dare create a DB with a 2 TB LDF straight away (scared of the time the zero-fill will take).
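
    In T-SQL terms, the repro looks roughly like this (a sketch only; the database name, file names and paths are placeholders, not my actual values):

    -- Create the database with a tiny 4 MB data file and a 512 KB log file.
    CREATE DATABASE TweetLogTest
    ON PRIMARY ( NAME = TweetLogTest_data, FILENAME = 'E:\TweetLogTest.mdf',
                 SIZE = 4MB, FILEGROWTH = 0 )            -- autogrowth disabled
    LOG ON     ( NAME = TweetLogTest_log, FILENAME = 'E:\TweetLogTest_log.ldf',
                 SIZE = 512KB, FILEGROWTH = 0 );

    -- The log is then grown by hand a step at a time. Note that SIZE here is
    -- the new total size of the file, not an increment.
    ALTER DATABASE TweetLogTest
    MODIFY FILE ( NAME = TweetLogTest_log, SIZE = 100MB );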

  • yup (7/14/2012)


    Yep, I've used those, but was looking for an easier way; in lots of cases with me it's mostly on SQL 2000.

    On SQL 2000 CheckDB doesn't run CheckCatalog, so you'll have to run that separately.

    CheckDB and CheckCatalog are the easier way.
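
    On 2000 that means something along these lines (a minimal sketch; the database name is a placeholder):

    -- CHECKDB covers the allocation and structural checks; on SQL 2000 the
    -- catalog (system table) consistency check has to be run separately.
    DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;
    DBCC CHECKCATALOG ('MyDatabase');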

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hey, thanks for the inside peek. Let me know if you are able to get the attachments.
