June 10, 2015 at 5:26 pm
What are the advantages of AlwaysOn over mirroring? I know about high availability, more replicas, and the fact that backups can be taken on a secondary as well.
If we have more than 100 databases on a SQL Server, how should we suggest setting up AlwaysOn for those? For example, in the same availability group, different groups, etc., and why?
I'd appreciate any answers that help me understand these features.
June 11, 2015 at 4:16 am
AlwaysOn offers the ability to have multiple read-only replicas; failover is now managed by the WSFC subsystem and can incorporate a TCP/IP listener. Aside from this, the base architecture is not too dissimilar to mirroring.
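As a rough illustration of the listener mentioned above (a sketch only; the group name, listener name, IP address, subnet mask and port are all hypothetical placeholders, not from this thread), adding a TCP/IP listener to an existing availability group looks something like:

```sql
-- Hypothetical example: add a TCP/IP listener to an existing
-- availability group so clients connect through one virtual name.
-- [MyAG], 'MyAGListener', the IP and the subnet are placeholders.
ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER 'MyAGListener'
(
    WITH IP ((N'10.0.0.50', N'255.255.255.0')),
    PORT = 1433
);
```

Clients then connect to the listener name rather than a specific replica, and the WSFC cluster moves the listener's IP on failover.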
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" π
June 11, 2015 at 6:37 am
snuthalapati (6/10/2015)
What are the advantages of AlwaysOn over mirroring? I know about high availability, more replicas, and the fact that backups can be taken on a secondary as well. If we have more than 100 databases on a SQL Server, how should we suggest setting up AlwaysOn for those? For example, in the same availability group, different groups, etc., and why?
I appreciate your answers will help me understanding these features.
You can consider AO as glorified mirroring from a low-level functionality perspective.
The single most important difference for most is that AO is Enterprise Edition only.
Most people (especially those new to AO) do not know about the significant issue caused by making a secondary readable: 14-byte version-store pointers get added to data structures on the PRIMARY.
100+ databases, if they are busy at all, could easily overwhelm even moderate hardware (especially the network) with log capture/shipping/replay for either form of "mirroring".
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 11, 2015 at 7:29 am
TheSQLGuru (6/11/2015)
You can consider AO as glorified mirroring from a low-level functionality perspective.
Mirroring on steroids, I tend to call it 🙂
TheSQLGuru (6/11/2015)
The single most important difference for most is that AO is Enterprise Edition only.
Yes, although with SQL 2016 this is changing
TheSQLGuru (6/11/2015)
Most people (especially those new to AO) do not know about the massive issue with making a secondary readable in causing 14-byte version store pointers to be added to PRIMARY data structures.
It's actually not as bad as that; there are different combinations to observe. The article linked below details the following:
http://www.sqlservercentral.com/articles/stairway/107542/
Perry Whittle: Stairway to HA Level 6
Impact of Row Versioning
When you enable a database as a readable secondary in an AlwaysOn Availability Group, row versioning is automatically implemented and applies a 14-byte overhead to each row that is modified. In fact, all isolation levels are transparently mapped to the snapshot isolation level to avoid redo-thread blocking. Without this, report workloads could possibly interfere with the redo thread process.
The addition of the row version data on the Primary depends on the snapshot isolation or read-committed snapshot isolation (RCSI) level setting on the primary database. If row versioning is also explicitly implemented on the Primary database in the group there will be an overhead on the Primary replica too.
This row overhead can be viewed by querying the "max_record_size_in_bytes" column in the DMV "sys.dm_db_index_physical_stats". The table below describes the behaviour of versioning on a readable secondary database under different settings for disk-based tables.
From the Books Online link: http://msdn.microsoft.com/en-GB/library/ff878253.aspx
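Since the behaviour described in the article depends on which versioning settings are active on the primary, a quick way to check them (a sketch; 'MyDatabase' is a placeholder name) is to query sys.databases:

```sql
-- Check snapshot isolation and RCSI status for a database.
-- 'MyDatabase' is a placeholder, not a name from this thread.
SELECT name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'MyDatabase';
```

If both columns show the features off on the primary, you are in the row of the chart being debated below.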
TheSQLGuru (6/11/2015)
100+ databases, if they are busy at all, could easily overwhelm even moderate hardware (especially network), for log reader/shipping/replay for either form of "mirroring".
This is the main area of concern: the default network configuration applied during AO group setup can be a huge bottleneck. When mirroring a large number of databases you'd ideally push your mirror traffic down separate pipes, but you need to know how to design, deploy and support these configurations.
June 11, 2015 at 7:53 am
Interesting... the line in that chart for the combination of Readable Secondary: Yes and Snapshot or RCSI: No in the current documentation at that link actually says:
"No row versions, but 14-byte overhead"
One of those versions of the documentation must be incorrect, it seems.
Like Kevin, I also was under the impression that the additional storage applied to the primary regardless of whether snapshot or RCSI was enabled.
That article also seems to confirm that by distinguishing between the storage overhead and whether or not versioning data is actually generated on an instance:
"The row version data is not generated by the primary databases. Instead, the secondary databases generate the row versions. However, row versioning increases data storage in both the primary and secondary databases."
Something is amiss (or I'm confused, which often happens pre-caffeine). 🙂
June 11, 2015 at 8:39 am
Jacob Wilkins (6/11/2015)
No row versions, but 14-byte overhead
Where are you seeing this, which link?
It's very easy to test: set up a group with a readable secondary and no RCSI on the primary, then query the "max_record_size_in_bytes" column in the DMV "sys.dm_db_index_physical_stats".
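The test query described above might look roughly like this (a sketch; the database and table names are placeholders, and note that max_record_size_in_bytes is only populated in SAMPLED or DETAILED mode):

```sql
-- Inspect the maximum record size for a table on the primary,
-- to look for the 14-byte readable-secondary overhead.
-- 'MyDatabase' and 'dbo.MyTable' are placeholder names.
SELECT index_id,
       alloc_unit_type_desc,
       max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(
         DB_ID(N'MyDatabase'),
         OBJECT_ID(N'dbo.MyTable'),
         NULL,        -- all indexes
         NULL,        -- all partitions
         'DETAILED'); -- required for max_record_size_in_bytes
```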
June 11, 2015 at 9:01 am
It's the msdn link in your snippet: https://msdn.microsoft.com/en-GB/library/ff878253.aspx#bkmk_CapacityPlanning
This also seems to be confirmed by Paul Randal here: http://sqlperformance.com/2015/03/sql-indexes/unexpected-fragmentation, in the section on fragmentation from readable secondaries.
He says that the 14 bytes is added to the row on the primary, but it's empty and not used for anything there unless snapshot or RCSI is used. That seems to be what that chart's distinguishing between, namely whether the bytes are added on the one hand, and whether they are used for versioning on the other.
From Randal:
As soon as a readable secondary replica of a database is created, any update to a record in the primary replica causes the record to have an empty 14-byte tag added, so that the 14 bytes is properly accounted for in all the log records. The tag isn't used for anything (unless snapshot isolation is enabled on the primary replica itself), but the fact that it's created causes the record to expand, and if the page is already full then…
I'll see if I can get some time over lunch today to check on a server that has a readable secondary but no snapshot or RCSI on the primary. Ultimately that's the real test, whatever documentation and demigods may say 🙂
Cheers!
June 11, 2015 at 10:00 am
Hmm, I'm sure I tested this during my deployment. OK, at least that's cleared that up; thanks for the link.
June 11, 2015 at 10:10 am
To follow up, I was able to run a quick test. I created a new table in a database in a lab AG (SQL Server 2012) that did not have a readable secondary. The database had both snapshot isolation and RCSI turned off.
That table had a single column, a CHAR(100). Without the readable secondary, I inserted a row, updated it repeatedly, and not surprisingly nothing changed.
The length shown in sys.dm_db_index_physical_stats and DBCC PAGE with the detailed output (printopt 3) was always 107, and the record type was PRIMARY_RECORD with an attribute of NULL_BITMAP. So far so good, the column was nullable, so all this was expected.
After altering the secondary to be readable and running an update against that row, DBCC PAGE showed an additional attribute, VERSIONING_INFO, and included a section for version information. The length, as shown by both DBCC PAGE and sys.dm_db_index_physical_stats, was now 121.
To Paul Randal's point, however, the version pointer shown for that row in DBCC PAGE was always NULL, no matter how many times I updated it. I then enabled RCSI, ran an update, and now that space for the versioning tag was being used, as the version pointer was populated.
It seems the current documentation and Paul's article correctly describe the behavior I saw in my test.
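For anyone wanting to repeat the test above, a hedged sketch of the steps (table and database names are placeholders; this must run in a database that belongs to an availability group, and the expected sizes are the ones reported in this thread, 107 before and 121 after the secondary is made readable):

```sql
-- Placeholder repro of the test described above.
CREATE TABLE dbo.VersionTest (c1 CHAR(100) NULL);
INSERT INTO dbo.VersionTest (c1) VALUES ('test row');
UPDATE dbo.VersionTest SET c1 = 'updated';

-- With no readable secondary the thread reports 107 bytes
-- (100 data bytes + row header + NULL bitmap); after the
-- secondary is made readable and the row is updated again,
-- it reports 121 (the extra 14-byte versioning tag).
SELECT max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.VersionTest'),
         NULL, NULL, 'DETAILED');

-- To inspect the row itself, locate the page and dump it:
-- DBCC IND ('MyDatabase', 'dbo.VersionTest', -1);
-- DBCC TRACEON (3604);
-- DBCC PAGE ('MyDatabase', <file_id>, <page_id>, 3);
```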
Cheers!
June 11, 2015 at 10:22 am
A secondary cannot possibly have a 14-byte pointer for reading data (NULL or populated) without the primary first having that alteration made to the row. The primary MUST reflect the master state of the data AND data structure at all times. An empty 14-byte hole is still 14 additional bytes on the primary row, and adding it is a DML operation, so there is dirty-page and tlog activity (which must itself be replayed over to the secondary). Let's not forget the size increase will very often lead to page splits (more dirty pages/tlog activity/locking/latching), because pretty much everyone out there has the default fill factor of 0 on all of their indexes. :crazy:
June 11, 2015 at 10:27 am
TheSQLGuru (6/11/2015)
A secondary cannot possibly have a 14-byte pointer for reading data (NULL or populated) without the primary first having that alteration made to the row. The primary MUST reflect the master state of the data AND data structure at all times. An empty 14-byte hole is still 14 additional bytes on the primary row, and adding it is a DML operation, so there is dirty-page and tlog activity (which must itself be replayed over to the secondary). Let's not forget the size increase will very often lead to page splits (more DML/dirty pages/tlog activity/locking/latching), because pretty much everyone out there has the default fill factor of 0 on all of their indexes. :crazy:
Indeed. The point of the test was to illustrate exactly that requirement on the primary 🙂
June 12, 2015 at 4:31 am
To add further: despite this overhead, on 100+ databases the mirror-traffic volume would still be my biggest worry. Unless you separate the mirror traffic, you could be heading for trouble.
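Separating that traffic can be sketched roughly as below (a hypothetical example, not a recipe from this thread: the endpoint name matches the default created by the AG wizard, but the IP and port are placeholders for a dedicated NIC, and dropping/recreating the endpoint will briefly interrupt replica traffic):

```sql
-- Hypothetical sketch: bind the database mirroring endpoint (used
-- by both mirroring and AlwaysOn) to a dedicated IP so replica
-- traffic travels down its own pipe. IP and port are placeholders.
DROP ENDPOINT [Hadr_endpoint];  -- assumes the default endpoint exists
CREATE ENDPOINT [Hadr_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (10.0.1.10))
    FOR DATABASE_MIRRORING (ROLE = ALL);
```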