Not sure if it matters, but the connection mesh was mentioned in earlier
DRBD 9 documentation as "not fully implemented". To work around the issue,
you can declare the mesh in long form; this might give you more predictable
results and is worth a try. Just don't forget to open all the relevant
ports on your firewall.
# mesh 1-2
connection {
    host 6787-dblapro-edss port (this host's port #);
    host 6788-dblapro-edss port (this host's port #);
}
# mesh 1-3
connection {
    host 6787-dblapro-edss port (this host's port #);
    host 6789-dblapro-edss port (this host's port #);
}
# mesh 2-3
connection {
    host 6788-dblapro-edss port (this host's port #);
    host 6789-dblapro-edss port (this host's port #);
}
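In case it helps, here is a fuller sketch of what the long-form declaration might look like with the node sections filled in. The node-ids, the 192.0.2.x addresses, and port 7788 are hypothetical placeholders, and the disk/device/meta-disk options are omitted; check drbd.conf(5) for your exact DRBD 9 version before relying on it:

```
resource storage {
    # hypothetical addresses/ports -- substitute your real ones
    on 6787-dblapro-edss { node-id 0; address 192.0.2.1:7788; }
    on 6788-dblapro-edss { node-id 1; address 192.0.2.2:7788; }
    on 6789-dblapro-edss { node-id 2; address 192.0.2.3:7788; }

    connection {                 # mesh 1-2
        host 6787-dblapro-edss;  # uses the address from its "on" section
        host 6788-dblapro-edss;
    }
    connection {                 # mesh 1-3
        host 6787-dblapro-edss;
        host 6789-dblapro-edss;
    }
    connection {                 # mesh 2-3
        host 6788-dblapro-edss;
        host 6789-dblapro-edss;
    }
}
```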
On 3/10/2016 1:59 AM, Eugene Istomin wrote:
Hello guys,
please help us to understand strange verify behaviour described below.
It feels as if verify is only being emulated instead of really verifying the data.
Thanks!
---
Best regards,
Eugene Istomin
On Friday, 19 February 2016 12:52:53 EET Eugene Istomin wrote:
Hello,
any comments? =)
The verify test results seem very strange to us.
---
Best regards,
Eugene Istomin
On Wednesday, 17 February 2016 18:34:27 EET Eugene Istomin wrote:
Hello,
we have done some tests with DRBD 9.0.1.
Some preliminary results:
#### 1. Verify depends on "connection-mesh" position. ####
connection-mesh {
    hosts 6787-dblapro-edss 6788-dblapro-edss 6789-dblapro-edss;
}
If the first node is switched off, verify on the second one fails:
6788-dblapro-edss# drbdadm verify all
storage: State change failed: (-15) Need a connection to start verify
or resync
Command 'drbdsetup verify storage 0 0' terminated with exit code 11
storage role:Secondary
disk:UpToDate
6787-dblapro-edss connection:Connecting
6789-dblapro-edss role:Secondary
peer-disk:UpToDate
After "drbdadm up all" on the first node (6787-dblapro-edss) and "drbdadm
down all" on the second (6788-dblapro-edss), verify seems to work correctly
(apart from node 2 unavailability problems, but that's OK):
storage role:Secondary
disk:UpToDate
6787-dblapro-edss role:Secondary
replication:VerifyS peer-disk:UpToDate done:100.00
6788-dblapro-edss connection:Connecting
So, the host position in "connection-mesh" matters (and maybe not just for verify).
#### 2. Strange verify behaviour ####
The same three nodes, starting with a fully OK status on all of them:
#storage role:Primary
disk:UpToDate
6788-dblapro-edss role:Secondary
peer-disk:UpToDate
6789-dblapro-edss role:Secondary
peer-disk:UpToDate
Then, the magic:
1. On the first (Primary) node:
# mount /dev/drbd/by-res/storage/0 /media/storage
# dd if=/dev/urandom of=/media/storage/1 bs=1M count=100
-rw-r--r-- 1 root root 104857600 Feb 17 18:17 /media/storage/1
2. Down the second node and mount the backing disk directly (we use
external metadata):
# drbdadm down all
# mount /dev/disk/by-label/storage /media/storage/
# dd if=/dev/urandom of=/media/storage/2 bs=1M count=100
# umount /media/storage/
# drbdadm up all
# drbdadm status
All UpToDate. That's OK; no changes from the DRBD metadata point of view.
3. On the second node, run verify and wait until it is done:
# ... storage role:Secondary
disk:UpToDate
6787-dblapro-edss role:Primary
replication:VerifyS peer-disk:UpToDate done:100.00
6789-dblapro-edss role:Secondary
replication:VerifyS peer-disk:UpToDate done:100.00
4. Mount the Secondary node's disk (expecting to see only the "./1" file):
# drbdadm down all
# mount /dev/disk/by-label/storage /media/storage/
# ls -l /media/storage/
-rw-r--r-- 1 root root 104857600 Feb 17 18:21 1
-rw-r--r-- 1 root root 104857600 Feb 17 18:21 2
What does verify really do?
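Independent of what verify reports, the divergence can be confirmed out-of-band by checksumming the data on each node while the resource is down; identical checksums mean identical data. A minimal sketch, where two throwaway files stand in for the files on the two mounted backing disks:

```shell
# Compare a payload file across two "nodes" by checksum.
# (Throwaway zero-filled files simulate the two backing disks here.)
nodeA=$(mktemp) ; nodeB=$(mktemp)
dd if=/dev/zero of="$nodeA" bs=1024 count=4 2>/dev/null
dd if=/dev/zero of="$nodeB" bs=1024 count=4 2>/dev/null
a=$(md5sum "$nodeA" | cut -d' ' -f1)
b=$(md5sum "$nodeB" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "in sync" || echo "diverged"
rm -f "$nodeA" "$nodeB"
```

Running the same comparison against /media/storage/1 and /media/storage/2 on the real disks would show whether the blocks actually differ, whatever verify claims.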
#### 3. No information about speeds, timing & bandwidth ####
How long will replication last?
What is the current bandwidth?
The questions apply to initial sync, resync & verify.
BTW, the current progress display (for ex. done:80.94) is sometimes faulty.
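Until DRBD reports speed and ETA directly, the done: field can be sampled and scripted around (drbdsetup status --verbose --statistics may also expose more counters in DRBD 9, though that is worth checking against your version). A minimal parsing sketch, using a captured status line in place of live output:

```shell
# Extract the done: percentage from a (sample) drbdadm status line;
# sampling it twice over a known interval gives a rough rate/ETA.
sample='  replication:VerifyS peer-disk:UpToDate done:80.94'
pct=$(printf '%s\n' "$sample" | grep -o 'done:[0-9.]*' | cut -d: -f2)
echo "$pct"
```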
Thanks for DRBD9 mesh topology! =)
---
Best regards,
Eugene Istomin
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user