All of this brings you back to the *root* of the original question.  In what 
situations does it make sense to increase the replication frequency?

And I think we have an answer here:  

First, we acknowledge that a lower frequency does not reduce the number of 
changes that need to be replicated, unless there are short-lived temporary 
changes that can be eliminated from replication entirely (such as a group 
created and then quickly deleted before it ever replicates).

Second, we acknowledge that the total replication traffic is determined 
linearly by the number and type of changes, plus some per-cycle overhead for 
establishing and tearing down the connection.

So higher-frequency replication increases bandwidth consumption in precisely 
three ways: increased overhead from establishing and tearing down connections 
more often, decreased compression efficiency when fewer objects are replicated 
per cycle, and increased payload when short-lived objects get replicated that 
would otherwise never have left their originating DC.  The question is: how 
significant are these three increases?
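To make the tradeoff concrete, here is a rough back-of-the-envelope model.  
This is only a sketch: the per-change and per-cycle byte counts are 
placeholders (roughly matching the figures I measure below), the daily change 
count is invented, and the compression penalty is assumed to be negligible 
pending Measurement #2.

# Rough model of daily replication traffic as a function of the
# replication interval.  All constants are assumptions/placeholders.

def daily_traffic_bytes(interval_minutes,
                        changes_per_day=200,        # invented example workload
                        bytes_per_change=11_000,    # placeholder per-object cost
                        overhead_per_cycle=7_000,   # placeholder empty-cycle cost
                        compression_penalty=1.0):   # assumed ~1.0 (see Measurement #2)
    cycles_per_day = 24 * 60 / interval_minutes
    payload = changes_per_day * bytes_per_change * compression_penalty
    overhead = cycles_per_day * overhead_per_cycle
    return payload + overhead

for minutes in (15, 60, 180):
    print(f"{minutes:4d}-minute interval: ~{daily_traffic_bytes(minutes) / 1024:,.0f} KB/day")

The point of the model is that the payload term stays the same no matter what 
the interval is; only the overhead term and the compression penalty move with 
frequency.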

There are two ways I've found to measure the bandwidth consumed by 
replication: either go to perfmon and watch the NTDS DRA counters, or build an 
isolated DC whose sole function is to let you measure this.  If you give a DC 
at some remote site only the DC role, and you use Wireshark to capture traffic 
to and from the other DCs, that should be an easy and reliable way to measure 
the actual replication traffic.
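For the perfmon route, one way to log those counters over a window of time is 
to sample them with typeperf and sum the results.  A minimal sketch, assuming 
the DC exposes the standard NTDS counters "DRA Inbound Bytes Total/sec" and 
"DRA Outbound Bytes Total/sec" (the exact counter names are an assumption 
here; verify them in perfmon first):

# Sample the NTDS DRA byte counters via typeperf and report rough totals.
# The counter names below are assumptions -- confirm them in perfmon first.
import csv
import io
import subprocess

COUNTERS = [
    r"\NTDS\DRA Inbound Bytes Total/sec",
    r"\NTDS\DRA Outbound Bytes Total/sec",
]
INTERVAL_SECONDS = 15    # seconds between samples
SAMPLE_COUNT = 240       # 240 samples x 15 s = one hour of observation

# typeperf emits CSV: one header row, then one row per sample.
result = subprocess.run(
    ["typeperf", *COUNTERS, "-si", str(INTERVAL_SECONDS), "-sc", str(SAMPLE_COUNT)],
    capture_output=True, text=True, check=True)

rows = [r for r in csv.reader(io.StringIO(result.stdout)) if r]
header = rows[0]
samples = [r for r in rows[1:] if len(r) == len(header)]

# The counters report bytes/sec, so multiply each sample by the interval
# to approximate the number of bytes actually moved.
for col, name in enumerate(header[1:], start=1):
    total = sum(float(r[col]) * INTERVAL_SECONDS
                for r in samples if r[col].strip())
    print(f"{name}: ~{total / 1024:.1f} KB over the capture window")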

I'm in the process of building a DC at a remote site right now.  I should be 
able to measure the bandwidth with that machine soon, to serve as a 
cross-reference and sanity check on the NTDS DRA counters.

Measurement #1:
Using perfmon, I've measured about 7 KB of traffic for a replication cycle in 
which no changes occurred, and about 11 KB to replicate the creation or 
deletion of a user account.  This is on par with the object sizes listed at 
http://technet.microsoft.com/en-us/library/bb742457.aspx#ECAA

Pending more information, I'm going to have to say that 7 KB every 15 minutes 
is nothing.  So from this standpoint, the data supports pretty much always 
increasing the frequency to the maximum.
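To put a number on that, the arithmetic below is just the 7 KB empty-cycle 
figure from above multiplied out over a day.

# Back-of-the-envelope cost of the empty-cycle overhead measured above.
overhead_bytes = 7 * 1024            # ~7 KB per no-change replication cycle
interval_minutes = 15
cycles_per_day = 24 * 60 / interval_minutes                       # 96 cycles
daily_kb = cycles_per_day * overhead_bytes / 1024                 # ~672 KB/day
avg_bits_per_sec = overhead_bytes * 8 / (interval_minutes * 60)   # ~64 bps

print(f"~{daily_kb:.0f} KB/day, ~{avg_bits_per_sec:.0f} bits/sec averaged")

Roughly 672 KB per day, or about 64 bits per second averaged over the link, 
which is noise even on a slow WAN connection.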

Measurement #2:
I'm going to go out on a limb and conjecture that compressing one object at a 
time versus many objects at once does not significantly affect the overall 
compression ratio.  I'll be looking to test this conjecture by measurement, 
but I'm confident enough right now to declare the difference insignificant, 
and again conclude that increasing the replication frequency is generally a 
good idea.
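A cheap way to get a first read on this without touching AD at all is to 
compare the compression ratio of one serialized object against a batch of 
them, using a general-purpose compressor.  To be clear about the assumptions: 
zlib here is only a stand-in for whatever compression AD uses between sites, 
and the "objects" are fabricated test blobs, not real directory data.

# Compare the compression ratio of a single fake "object" vs. a batch.
# zlib is a stand-in for AD's intersite compression, and the payload is
# made-up test data -- this is an analogue, not a measurement of AD itself.
import random
import zlib

random.seed(0)

def fake_object(i):
    # A blob with some repetitive attribute-like structure plus random bytes.
    attrs = f"cn=user{i};sAMAccountName=user{i};description=test account;" * 5
    noise = bytes(random.getrandbits(8) for _ in range(200))
    return attrs.encode() + noise

def ratio(blob):
    return len(zlib.compress(blob)) / len(blob)

single = fake_object(0)
batch = b"".join(fake_object(i) for i in range(50))

print(f"one object : {ratio(single):.2f} compressed/original")
print(f"50 objects : {ratio(batch):.2f} compressed/original")

Whatever this toy comparison shows, the real answer still has to come from 
measuring actual replication traffic, which is what the isolated-DC setup 
above is for.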

Measurement #3:
The number of temporary objects that would otherwise never need to be 
replicated is organization-specific, determined by the behavior of your 
sysadmins, so I can't make any generalization or draw a conclusion on this 
point.  I can say that for the organizations where I've worked so far, this 
traffic would be negligible, but it could be significant elsewhere.
