a) It turns out the counter (8847740/11287100)M counts down (data remaining), not
up. Duh, never noticed.

b) Ran a plain rsync across eth0 (public, through switches/routers) and eth1 (NIC
to NIC):
eth0: sent 585260755954 bytes  received 10367 bytes  116690412.98 bytes/sec
eth1: sent 585260755954 bytes  received 10367 bytes  122580535.41 bytes/sec
Both links sustain roughly 110-120 MB/s, so my LSI RAID card is behaving and DRBD
is slowing the initialization down somehow.
Found chapter 15 and will try some of its suggestions, but ideas are welcome.
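For anyone else digging: the knobs that govern the resync throttle in drbd-8.4
live in the disk section. A minimal sketch of what I plan to experiment with
(r0 is a placeholder resource name; the values are illustrative, not
recommendations):

    resource r0 {
      disk {
        c-plan-ahead  20;    # >0 enables the dynamic sync-rate controller
        c-fill-target 1M;    # in-flight resync data the controller aims to keep
        c-min-rate    10M;   # floor while application IO competes
        c-max-rate    500M;  # hard ceiling for resync traffic
        # alternatively: c-plan-ahead 0; resync-rate 110M;  (fixed rate, no controller)
      }
    }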

c) For grins:
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by mockbuild@Build64R6, 2014-08-17 19:26:04
 0: cs:SyncTarget ro:Secondary/Secondary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:3601728 dw:3601408 dr:0 al:0 bm:0 lo:4 pe:11 ua:3 ap:0 ep:1 wo:f oos:109374215324
        [>....................] sync'ed:  0.1% (106810756/106814272)M
        finish: 731:23:05 speed: 41,532 (31,868) want: 41,000 K/sec

100 TB in 731 hours would be ~30 days. Can I expect large-delta data replication
to go equally slowly using DRBD?
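The finish estimate is at least self-consistent: oos and the speed figure are
both in KiB, so dividing one by the other reproduces the 731 hours. A quick
shell check, with the numbers taken from the status output above:

    $ echo $(( 109374215324 / 41532 / 3600 ))   # KiB out-of-sync / (KiB/s) / 3600 = hours
    731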

-Henk



________________________________________
From: [email protected] [[email protected]] 
on behalf of Meij, Henk [[email protected]]
Sent: Thursday, October 23, 2014 9:57 AM
To: Philipp Reisner; [email protected]
Subject: Re: [DRBD-user] drbd storage size

Thanks for the write-up, y'all. I'll have to think about #3; not sure I grasp it
fully.

Last night I started a 12 TB test and kicked off the first initialization for
observation (node0 is Primary).
I have node0:eth1 wired directly into node1:eth1 with a 10-foot CAT 6 cable
(MTU=9000).
Ping from node1 to node0:
PING 10.10.52.232 (10.10.52.232) 8970(8998) bytes of data.
8978 bytes from 10.10.52.232: icmp_seq=1 ttl=64 time=0.316 ms
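That output comes from a jumbo-frame probe along these lines (a sketch, assuming
Linux ping; -M do forbids fragmentation so the probe fails loudly if jumbo
frames do not pass end to end, and 8970 bytes of payload plus 28 bytes of
ICMP/IP header gives the 8998 on the wire):

    ping -c 3 -M do -s 8970 10.10.52.232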

This morning's progress report from node1 (drbd v8.4.5):

        [===>................] sync'ed: 21.7% (8847740/11287100)M
        finish: 62:50:50 speed: 40,032 (39,008) want: 68,840 K/sec

which confuses me: 8.8M out of 11.3M is ~78% synced, no? I will let this test
finish before I do a dd attempt.
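(In hindsight, per (a) above, the first number is what is left to sync, counting
down, so the done-fraction is the complement. A quick check that lines up with
the 21.7% shown, modulo rounding:

    $ echo "scale=1; (11287100 - 8847740) * 100 / 11287100" | bc
    21.6
)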

iostat shows 99%+ CPU idle and near-zero %iowait; iotop confirms very little
competing IO (<5 K/s). A typical sample:

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s    wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sdb1              0.00   231.00    0.00  156.33     0.00  79194.67   506.58     0.46    2.94   1.49  23.37

Something is throttling this IO: ~40 MB/s is about half of what I was hoping
for. Will dig some more.
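One knob to try while it runs: the "want:" figure in /proc/drbd is the rate the
controller is currently asking for, and it can be changed on the fly. A sketch,
assuming resource name r0 (values untested):

    # pin a fixed resync rate, bypassing the dynamic controller (8.4 syntax)
    drbdadm disk-options --c-plan-ahead=0 --resync-rate=110M r0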

-Henk

________________________________________
From: [email protected] [[email protected]] 
on behalf of Philipp Reisner [[email protected]]
Sent: Thursday, October 23, 2014 9:17 AM
To: [email protected]
Subject: Re: [DRBD-user] drbd storage size

On Thursday, 23 October 2014 at 08:55:03, Digimer wrote:
> On 23/10/14 04:00 AM, Philipp Reisner wrote:
> > 2a) Initialize both backend devices to a known state.
> >
> >      I.e. dd if=/dev/zero of=/dev/sdb1 bs=$((1024*1024)) oflag=direct
>
> Question;
>
>    What I've done in the past to speed up initial sync is to create the
> DRBD device, pause-sync, then do your 'dd if=/dev/zero ...' trick to
> /dev/drbd0. This effectively drives the resync speed to the max possible
> and ensures full sync across both nodes. Is this a sane approach?
>

Yes, sure, that is a way to do it. (I have the impression that is something
from the drbd-8.3 world.)

I do not know off the top of my head if that will be faster than the
built-in background resync in drbd-8.4.

Best,
 Phil
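
For anyone wanting to try the pause-sync/dd trick described above, the sequence
would look roughly like this (a sketch; r0 and /dev/drbd0 are placeholders, and
because writes to the DRBD device replicate to the peer, the zeroed blocks come
out identical on both nodes):

    drbdadm pause-sync r0                                         # hold the background resync
    dd if=/dev/zero of=/dev/drbd0 bs=$((1024*1024)) oflag=direct  # zeros replicate to the peer
    drbdadm resume-sync r0                                        # resync whatever remains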

_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
