Are Indexes Actually Changes to the System?

  • Jeff Moden - Thursday, October 4, 2018 9:45 AM

    A part of your gain may actually be because of the way you were maintaining the GUID indexes.  You did use REORGANIZE, right? 😀  Everyone does. 🤢


    Still, I totally agree with your move away from GUIDs especially with performance in mind.  After all, it's not index maintenance that pays the bills. 😉

    At the time, we didn't have a full-time DBA so no index maintenance (or any other type of maintenance) was being performed. It was mostly because of how the tables were related and the types of joins needed. It took a proper design to solve most of the performance issues. After all, no amount of tuning, maintenance or coding will compensate for a poor design.

  • Jeff Moden - Thursday, October 4, 2018 9:40 AM

    If you have a reliable method of determining which specific columns are most responsible for the page splits on a given index, it should be a simple matter to write a bit of dynamic SQL that 1) looks at the current defaults, 2) looks at the average & max widths of the existing data, and 3) drops and recreates the default constraints based on the values captured in step 2.
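    For illustration only, a rough sketch of those three steps might look like this (the table, column, and constraint names are made up, and the "right" default width is very much an "It Depends"):

        DECLARE @AvgLen INT, @SQL NVARCHAR(MAX);

        -- Step 2: capture the average width of the existing data.
        SELECT @AvgLen = AVG(DATALENGTH(SomeCol))
          FROM dbo.SomeTable;

        -- Step 3: drop and recreate the default constraint, padded to the average
        -- width so that most later updates to the column aren't "ExpAnsive".
        SET @SQL = N'ALTER TABLE dbo.SomeTable DROP CONSTRAINT df_SomeTable_SomeCol;'
                 + N'ALTER TABLE dbo.SomeTable ADD CONSTRAINT df_SomeTable_SomeCol'
                 + N' DEFAULT (SPACE(' + CONVERT(NVARCHAR(10), @AvgLen) + N')) FOR SomeCol;';
        EXEC sys.sp_executesql @SQL;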

    That could kind of work on NCIs because they usually only have 1 or 2 expansive columns in their keys, but then you have the likes of INCLUDEs and bloody LOBs that end up in the CIs because they just happen to fit "in row".  I actually have an "extras" section in my presentation that tells you how to move existing LOBs out of row and how to guarantee that they'll never go "in row".

    This makes my brain hurt... On one hand, I've been reading your posts for years and you're a brilliant guy who's fed me my fair share of crow (and maybe a pork chop or two) over the years... on the other hand, this flies in the face of all current, conventional wisdom... Especially since MS has effectively dealt with the "Ascending Key Problem" (not related... but... related). At this point, all I can say is that, because it's you, I'll keep an open mind... My gut says that the "NOT NULL / wide default" solution will be more effective at maintaining high page densities while, at the same time, reducing bad page splits... but... experience also tells me that you're smarter than my gut... Either way, I look forward to seeing more.

    Ah... I was talking mostly about keeping them from splitting and how to properly do index maintenance on them.  In other words, they are, in fact, third only to totally static indexes and truly "Append Only" indexes (neither of which would require any maintenance) and they have fewer page splits than the constant barrage of "good" page splits in an "Append Only" index and they have no "hot spots" if they're unique, but they pretty much suck at everything else.  See my latest post just above this one for more info on that.  Search for "Hell No" to jump to the section of that post that I'm talking about.

    So, as so often happens with you and me (you're a lot smarter on many things than I am), I actually do agree with your gut (and mine).  I wouldn't jump to the bad conclusion of designing tables that use GUIDs as the keys.  They're just the "Best" when it comes to index maintenance and concurrency even though they suck in a whole lot of other areas.

    Once again...Thank you for the clarification. That definitely helps put things into perspective.
    I appreciate the compliment but I'm fairly certain that I spend more time reading your posts and asking myself, "How the hell did he figure that out", than the other way around...

     I actually have an "extras" section in my presentation that tells you how to move existing LOBs out of row and how to guarantee that they'll never go "in row".


    I'm definitely interested in seeing your approach to this...
    One problem that I've had to contend with (unrelated to index maintenance) is overly expensive key lookups on CIs with large text columns.
    As it stands now, my go-to solution is to, whenever possible, put the offending columns into their own two-column tables, with a 1:1 relationship to the table where they'd normally live.
    As an example, consider a "journal table" with a [UserComments VARCHAR(2000) NULL] column... The UserComments column takes up more than 95% of the space on the CI but is only being referenced in 1% of the queries that use that CI. The other 99% of the time, it's just being lugged around, unnecessarily forcing 20X the number of pages to be loaded into memory.

    So, switching from this:
    CREATE TABLE dbo.Journal (
        Journal_ID INT NOT NULL IDENTITY (1, 1),
        Referral_ID INT NULL,
        Schedule_ID INT NULL,
        JournalType_ID SMALLINT NULL,
        UserComments VARCHAR (2000) NULL,
        CONSTRAINT    pk_Journal
            PRIMARY KEY CLUSTERED (Journal_ID)
            WITH (DATA_COMPRESSION = PAGE) ON [PRIMARY]
        ) ON [PRIMARY];
    GO

    to this:
    CREATE TABLE dbo.Journal (
        Journal_ID INT NOT NULL IDENTITY (1, 1),
        Referral_ID INT NULL,
        Schedule_ID INT NULL,
        JournalType_ID SMALLINT NULL,
        CONSTRAINT    pk_Journal
            PRIMARY KEY CLUSTERED (Journal_ID)
            WITH (DATA_COMPRESSION = PAGE) ON [PRIMARY]
        ) ON [PRIMARY];
    GO

    CREATE TABLE dbo.Journal_UserComments (
        Journal_ID INT NOT NULL
            CONSTRAINT fk_JournalUserComments_JournalID FOREIGN KEY REFERENCES dbo.Journal(Journal_ID),
        UserComments VARCHAR (2000) NOT NULL,   -- no need for a default because no comment means no row.
        CONSTRAINT    pk_JournalUserComments
            PRIMARY KEY CLUSTERED (Journal_ID)
            WITH (DATA_COMPRESSION = PAGE) ON [PRIMARY]
        ) ON [PRIMARY];
    GO

    The impact of the change is pretty massive, especially when it comes to things like Key Lookups.
    The down side is that the "powers that be" aren't exactly open to the idea of re-architecting core tables that are being referenced in hundreds of different procedures.
    Clearly not the same issue, but I suspect that there may be enough overlap to really pique my interest.

  • Jason A. Long - Thursday, October 4, 2018 11:51 AM

    ...
    The impact of the change is pretty massive, especially when it comes to things like Key Lookups.
    The down side is that the "powers that be" aren't exactly open to the idea of re-architecting core tables that are being referenced in hundreds of different procedures.
    Clearly not the same issue, but I suspect that there may be enough overlap to really pique my interest.

    To address the issue of code dependencies on the original Journal table, a view called Journal could join both tables. Sometimes we must patiently wait for that "teachable moment", like when a brainstorming meeting is called to address recurring performance issues. Folks who are normally resistant to change are more willing to grasp for new ideas when there is a crisis unfolding.
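    As a hedged sketch of that idea (assuming the original table has been renamed to something like dbo.Journal_Base and the comments live in the sister table from the post above):

        CREATE VIEW dbo.Journal
        AS
        SELECT j.Journal_ID, j.Referral_ID, j.Schedule_ID, j.JournalType_ID, c.UserComments
          FROM dbo.Journal_Base j
          LEFT JOIN dbo.Journal_UserComments c
            ON c.Journal_ID = j.Journal_ID;
        GO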

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Jason A. Long - Thursday, October 4, 2018 11:51 AM

    Once again...Thank you for the clarification. That definitely helps put things into perspective.
    I appreciate the compliment but I'm fairly certain that I spend more time reading your posts and asking myself, "How the hell did he figure that out", than the other way around...

     I actually have an "extras" section in my presentation that tells you how to move existing LOBs out of row and how to guarantee that they'll never go "in row".


    I'm definitely interested in seeing your approach to this...
    One problem that I've had to contend with (unrelated to index maintenance) is overly expensive key lookups on CIs with large text columns.
    As it stands now, my go-to solution is to, whenever possible, put the offending columns into their own two-column tables, with a 1:1 relationship to the table where they'd normally live.
    As an example, consider a "journal table" with a [UserComments VARCHAR(2000) NULL] column... The UserComments column takes up more than 95% of the space on the CI but is only being referenced in 1% of the queries that use that CI. The other 99% of the time, it's just being lugged around, unnecessarily forcing 20X the number of pages to be loaded into memory.

    ...

    The impact of the change is pretty massive, especially when it comes to things like Key Lookups.
    The down side is that the "powers that be" aren't exactly open to the idea of re-architecting core tables that are being referenced in hundreds of different procedures.
    Clearly not the same issue, but I suspect that there may be enough overlap to really pique my interest.

    That's what I refer to as a "Sister Table" and it's one of the methods that I allude to for decreasing "ExpAnsive" updates on the really useful parts of the CI.  You CAN make it look like one table to the GUI by creating a view with the original table name and renaming the underlying table.  The trouble there is that if it needs to be an updateable view, then you'll need to create an "Instead of Trigger" to do so because there are two tables instead of just one.
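    A minimal sketch of such a trigger on the view (table and trigger names are assumed from the earlier posts, and pairing IDENTITY values with their comments is deliberately simplified here):

        CREATE TRIGGER dbo.trg_Journal_Insert ON dbo.Journal
        INSTEAD OF INSERT
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Land the narrow columns in the renamed base table...
            INSERT INTO dbo.Journal_Base (Referral_ID, Schedule_ID, JournalType_ID)
            SELECT Referral_ID, Schedule_ID, JournalType_ID
              FROM inserted;

            -- ...and only create a sister-table row when there's an actual comment.
            -- (Matching the new IDENTITY values back to their comments is the tricky
            -- part; an OUTPUT clause or a natural key would be needed in real code.)
        END;
        GO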

    To be clear, I've not finished my experiments on a couple of live tables that suffer a similar problem as what you've stated but the reclassification of wide VARCHAR(NNNN) to VARCHAR(MAX) and then forcing them to always be "Out of Row" seems to (so far) work a treat.  It's just like having a sister table without the headaches. 

    To do such a thing on a new table, you simply need to do the following after the table has been created...

    EXEC sp_tableoption 'schemaname.tablename', 'large value types out of row', 1;

    That will also cause any NEW data in LOB columns to be out of row on existing populated tables.  However, that has no effect on existing rows in existing tables.  In order to get the existing data to move out of row, you have to update all of the existing data to itself using code similar to the following (which is what I ended up doing).

     UPDATE schemaname.tablename
        SET someLOBcolumn = someLOBcolumn -- the column updates itself
      WHERE someLOBcolumn IS NOT NULL
    ;

    Of course, the normal caveats exist with LOB columns and searching them... 1) they'll take longer... 2) you just want to kill people that search them. 😀  Of course, the latter of those two things (LOB or not) does give one an excuse to enjoy a beer or five after work. 😛

    And, no... I'm NOT suggesting that anything over VARCHAR(50) (for example) be converted to a LOB and shoved out of row.  As with all else, "It Depends" and "Nothing is a Panacea".  But, in the right circumstances, moving WIDE VARCHARs out of row into LOB storage can be both a viable solution and a quick, permanent one.

    There is one more caveat about blobs that many good folks don't understand.  REORGANIZE will NOT rebuild out of row LOBs.  It only shrinks those that end up in row.  Splash one more reason to even think about using REORGANIZE.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Eric M Russell - Thursday, October 4, 2018 12:26 PM

    Jason A. Long - Thursday, October 4, 2018 11:51 AM

    ...
    The impact of the change is pretty massive, especially when it comes to things like Key Lookups.
    The down side is that the "powers that be" aren't exactly open to the idea of re-architecting core tables that are being referenced in hundreds of different procedures.
    Clearly not the same issue, but I suspect that there may be enough overlap to really pique my interest.

    To address the issue of code dependencies on the original Journal table, a view called Journal could join both tables. Sometimes we must patiently wait for that "teachable moment", like when a brainstorming meeting is called to address recurring performance issues. Folks who are normally resistant to change are more willing to grasp for new ideas when there is a crisis unfolding.

    I haven't tried using an indexed view to address the issue but my gut says that the optimizer won't recognize the existence of an indexed view's CI as an alternative to the underlying table's CI when forced to do a lookup... 
    Interesting idea though and certainly worth a try.

  • Eric M Russell - Thursday, October 4, 2018 12:26 PM

    To address the issue of code dependencies on the original Journal table, a view called Journal could join both tables. Sometimes we must patiently wait for that "teachable moment", like when a brainstorming meeting is called to address recurring performance issues. Folks who are normally resistant to change are more willing to grasp for new ideas when there is a crisis unfolding.

    This is one of the many reasons why the "Patient DBA" is usually the "Successful DBA".  😀  Sometimes you have to give people the opportunity to fail in order to make them realize that the methods to prevent failure are actually worth doing.

    You posted while I was typing my last.  It IS a bit of instant gratification to see someone echo the same ideas.

    --Jeff Moden



  • Stepping back to the original question of "Are Indexes Actually Changes to the System?", let's also consider this: I've just run across a 14.5GB index with 16 key columns (8 of which suffer "ExpAnsive" updates) and 4 INCLUDEd columns (one of which is a {gasp!} partially in-row VARCHAR(MAX) "comments" column that also suffers "ExpAnsive" updates).  The index causes massive "bad" page splits (all of which are fully logged) and yet it has NEVER seen a user seek, scan, or lookup (I have a system that keeps track of the last usage even across reboots/restarts).  Add that to all I've previously stated about the unintended consequence of a new index possibly derailing performance in many other areas, and you'll understand my insistence that indexes are not only "changes to the system" but are, in fact, some of the most important changes to the system that there are.

    And even with all that's been said on this thread, we're only scratching the surface of the page splits, logging, performance, etc., that are caused or solved by indexes.  It didn't use to matter much because most databases were toys compared to what we have now, but it really matters now!

    --Jeff Moden



  • That's what I refer to as a "Sister Table" and it's one of the methods that I allude to for decreasing "ExpAnsive" updates on the really useful parts of the CI. 

    "Sister Table" a much catchier name than what I've been using ("1 to 1 table"). A new "Modenism" for the vocabulator. 😀

    You CAN make it look like one table to the GUI by creating a view with the original table name and renaming the underlying table.  The trouble there is that if it needs to be an updateable view, then you'll need to create an "Instead of Trigger" to do so because there are two tables instead of just one.


    Brilliantly simple idea and certainly easy from an implementation perspective. Probably the easiest version to sell to the higher ups...

    To be clear, I've not finished my experiments on a couple of live tables that suffer a similar problem as what you've stated but the reclassification of wide VARCHAR(NNNN) to VARCHAR(MAX) and then forcing them to always be "Out of Row" seems to (so far) work a treat.  It's just like having a sister table without the headaches.  

    To do such a thing on a new table, you simply need to do the following after the table has been created...
    EXEC sp_tableoption 'schemaname.tablename', 'large value types out of row', 1;

    That will also cause any NEW data in LOB columns to be out of row on existing populated tables.  However, that has no effect on existing rows in existing tables.  In order to get the existing data to move out of row, you have to update all of the existing data to itself using code similar to the following (which is what I ended up doing).

     UPDATE schemaname.tablename
        SET someLOBcolumn = someLOBcolumn -- the column updates itself
      WHERE someLOBcolumn IS NOT NULL
    ;


    Now we're getting into the black magic... I will definitely be testing this.


    Of course, the normal caveats exist with LOB columns and searching them... 1) they'll take longer... 2) you just want to kill people that search them. 😀  Of course, the latter of those two things (LOB or not) does give one an excuse to enjoy a beer or five after work. 😛

    How about app developers who don't want to create new tables to accommodate the different pieces of relevant information for the various journal types... so they programmatically string it together and concatenate it to the user's actual comments? And then look surprised when their "comment parsing query" fails code review... As they say, the struggle is real.

    But... While we're talking about it... I wonder if a FULLTEXT index might be a usable workaround... I don't have much experience with them but, after a bit of reading, it looks like it might fit the bill.
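    For anyone wanting to experiment, a minimal sketch of that (assuming the sister table and PK name from the earlier post, that no full-text catalog exists yet, and that the search term is just an example):

        CREATE FULLTEXT CATALOG ftc_Journal;
        GO
        CREATE FULLTEXT INDEX ON dbo.Journal_UserComments (UserComments)
            KEY INDEX pk_JournalUserComments
            ON ftc_Journal;
        GO
        -- Searches then use CONTAINS instead of a leading-wildcard LIKE:
        SELECT Journal_ID
          FROM dbo.Journal_UserComments
         WHERE CONTAINS(UserComments, N'"denied claim"');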

  • Jason A. Long - Thursday, October 4, 2018 2:16 PM

    How about app developers who don't want to create new tables to accommodate the different pieces of relevant information for the various journal types... so they programmatically string it together and concatenate it to the user's actual comments? And then look surprised when their "comment parsing query" fails code review... As they say, the struggle is real.

    I think it's much more fun to build the excuses for the five beer episodes along with some serious pork chop launching drills. 😀

    But... While we're talking about it... I wonder if a FULLTEXT index might be a usable workaround... I don't have much experience with them but, after a bit of reading, it looks like it might fit the bill.

    I don't actually care for SQL Server's version of "FULLTEXT", especially where proximity searches and multi-word-find searches are concerned.  I normally end up building my own as a kind of EAV (which FTS does behind the scenes).  Of course, I also use a trigger to keep it up to date.  It also allows you to "step out of the box" when you come across a task that requires you to realize that you're in a box. 😀  Just my opinion, though.  It's kind of like using HierarchyID... while many tout its wonders, I frequently find things that need to be done that either can't be done or can't be done easily.

    --Jeff Moden



  • Oh... as a bit of a sidebar, if you use REORGANIZE during your index maintenance, stop it.  You're killing your system by removing critical free space from your indexes without fixing page density in the most used "silos" that occur in them.  That means you're also wasting a shedload of extremely valuable memory and disk space, along with backup time, restore time, the time to do the REORGANIZE itself, and the log file explosions to support it... and it can easily take up to 20 times longer to execute than a rebuild.  All for naught.  It's like a bad drug habit... the more you do it, the more you need to do it.  That includes you folks who think you have to do it just because you "only" have the Standard Edition.

    What critical free space is getting removed?  Microsoft's docs say "Compaction is based on the existing fill factor value," and I'm taking this to mean that reorgs split and merge the "leaf" pages based on fill factor (they specifically say this refers to leaf pages), so I have to assume the non-leaf pages are left to their own devices but still shouldn't lose "critical free space", except for inserted leaf references I suppose (but even then, won't they eventually split anyway?).  Have you found more details on this, i.e., is the documentation in error?  Also, what sort of pages or free space are you referring to when using the term "silos"?

  • It does exactly what it says, so it's not a documentation error.  It "compact{s}... based on the existing fill factor value".  The documentation also clearly states that REORGANIZE will NOT create additional free space and it actually recommends the use of REBUILD to do such a thing.  People are just coming to the wrong conclusion, much like they do when they read about ISNUMERIC and think it's actually an ISALLDIGITS function. 😀  What REORGANIZE does is take pages that are less full than the Fill Factor and combine them with other pages to at least meet the Fill Factor.  The trouble is that the space between the Fill Factor and the 100% full line is the most critical space (those pages are closer to splitting than the ones below the Fill Factor), and that's why REBUILDs work so well.  REBUILDs remove free space below the Fill Factor and create free space above the Fill Factor.  REORGANIZE simply fills ("removes") the space above the Fill Factor, and it always happens at the worst possible time, which is when that free space is actually needed.  Because there is a lot less free space available, it actually causes more fragmentation, which starts the cycle over again.
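    To make the distinction concrete (the index, table, and Fill Factor values below are just placeholders), a REBUILD repacks every leaf page to the Fill Factor, re-establishing the specified free space on each page, which REORGANIZE cannot do:

        ALTER INDEX pk_Journal ON dbo.Journal
            REBUILD WITH (FILLFACTOR = 90);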

    I'm at work right now and so don't have access to the charts from the experiment runs I've done to prove this problem.  I'll try to remember to post them this evening.

    And, yes... unless it's a truly static or a true "Append Only" index, things will eventually split anyway, which is the main reason why you end up with both logical and physical fragmentation which are the only reasons to do index maintenance to begin with.  That's what the Fill Factor is supposed to help prevent on mid-index inserts but REORGANIZE removes the critical free space (see the previous charts I posted for a partial taste).  There's also a chart above that shows a "siloed" index.

    --Jeff Moden



  • REBUILDs remove free space below the Fill Factor and create free space above the Fill Factor. REORGANIZE simply fills ("removes") the space above the Fill Factor and it always happens at the worst possible time, which is when that free space is actually needed. Because there is a lot less free space available, it actually causes more fragmentation which starts the cycle over again.

    If REORGANIZE removes the space above the fill factor, then it's not compacting according to the fill factor; it's pretty much compacting DESPITE the fill factor, so in fact the documentation would be incorrect.  Heck, for that matter, reorg would then simply be using an effective 100 percent fill factor, so that'd be a pretty big documentation blunder.

    What sort of code are you using to get the free space of each page of your index?

  • patrickmcginnis59 10839 - Thursday, October 4, 2018 4:18 PM

    REBUILDs remove free space below the Fill Factor and create free space above the Fill Factor. REORGANIZE simply fills ("removes") the space above the Fill Factor and it always happens at the worst possible time, which is when that free space is actually needed. Because there is a lot less free space available, it actually causes more fragmentation which starts the cycle over again.

    If REORGANIZE removes the space above the fill factor, then it's not compacting according to the fill factor; it's pretty much compacting DESPITE the fill factor, so in fact the documentation would be incorrect.  Heck, for that matter, reorg would then simply be using an effective 100 percent fill factor, so that'd be a pretty big documentation blunder.

    What sort of code are you using to get the free space of each page of your index?

    I guess I'm explaining it wrong...  It does compact UP to the Fill Factor.  Depending on the size of the rows, that may exceed the Fill Factor.  The problem is that it's compacting instead of providing additional free space above the Fill Factor.

    The code I'm using is homegrown.  To get the Free Space for each page, I get the header information using DBCC IND.  I'm also using a recursive CTE to put the pages in logical order rather than just using the physical order.
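    For the curious, a rough sketch of the page-ordering part of that approach (NOT my actual code; it uses the documented DBCC IND output columns, assumes illustrative database/table names, and omits both file-boundary handling and the free-space capture):

        CREATE TABLE #Pages (
            PageFID SMALLINT, PagePID INT, IAMFID SMALLINT, IAMPID INT,
            ObjectID INT, IndexID INT, PartitionNumber INT, PartitionID BIGINT,
            iam_chain_type VARCHAR(50), PageType TINYINT, IndexLevel TINYINT,
            NextPageFID SMALLINT, NextPagePID INT, PrevPageFID SMALLINT, PrevPagePID INT
        );
        INSERT INTO #Pages
        EXEC ('DBCC IND (''MyDb'', ''dbo.MyTable'', 1)');   -- index_id 1 = the CI

        -- Recursive CTE: walk the leaf-level page chain in logical order.
        WITH Ordered AS (
            SELECT PagePID, NextPagePID, 1 AS LogicalOrder
              FROM #Pages
             WHERE IndexLevel = 0 AND PrevPagePID = 0        -- head of the chain
            UNION ALL
            SELECT p.PagePID, p.NextPagePID, o.LogicalOrder + 1
              FROM #Pages p
              JOIN Ordered o ON p.PagePID = o.NextPagePID
        )
        SELECT PagePID, LogicalOrder
          FROM Ordered
        OPTION (MAXRECURSION 0);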

    --Jeff Moden



  • Jeff Moden - Thursday, October 4, 2018 4:42 PM

    I guess I'm explaining it wrong...  It does compact UP to the Fill Factor.  Depending on the size of the rows, that may exceed the Fill Factor.  The problem is that it's compacting instead of providing additional free space above the Fill Factor.

    The code I'm using is homegrown.  To get the Free Space for each page, I get the header information using DBCC IND.  I'm also using a recursive CTE to put the pages in logical order rather than just using the physical order.

    I gotcha now, thanks for clarifying!

  • Aaron N. Cutshall - Thursday, October 4, 2018 7:21 AM

    Jeff Moden - Thursday, October 4, 2018 7:03 AM

    Here's a chart of a GUID keyed clustered index (123 bytes wide) that has gone 4.5 weeks with the addition of 10,000 rows per day without any page splits and has just gotten to the point of needing what I call a "Low Threshold Rebuild".

    Of course, "ExpAnsive" updates throw all of that on the ground.

    Jeff,

    I have to question your statement about GUIDs used in a clustered index. As GUID generation is near random (or random-like), how can that be an effective clustered index? In my past experience, replacing GUID values with consecutive values (such as an IDENTITY column) allow for far greater INSERT performance and excellent SEEK performance due to the consecutive nature. Can you perhaps explain that?

    Aaron

    GUID is the best for the MPP world because the randomness helps evenly distribute the data across N nodes (or N databases). When you have a system designed around using resources from more than one computer, if all the data is stuck on 1 node versus the other, say, 50 nodes, then 1 computer is doing all of the work instead of distributing the workload to the other 50 computers.

    In the SMP world like with SQL Server, not so much. I once clustered on a GUID index with a table that had like a billion rows. Inserts jumped from like 10 minutes to like 2 hours until optimization on the table/index happened. Basically, what you saw on your end too.

Viewing 15 posts - 31 through 45 (of 71 total)
