t could cause this behaviour, please
let me know.
greetings from germany
volker
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
one finishes, the load would increase even higher than
5-6 with the second dd command being uninterruptible.
Interestingly, dd _always_ reports speeds of 200-350MB/s, which is obviously
not the case.
Any more ideas?
greetings
volker
(that is, megabytes) of throughput. The only possible limit here
would be the syncer rate of 25MB/s, but the network link is only
saturated during a resync.
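For reference, a syncer rate like the 25MB/s mentioned above is set in the 8.3-style drbd.conf syncer section. A minimal sketch (the resource name is a placeholder, not taken from this thread):

```
resource r0 {
    syncer {
        rate 25M;    # cap resync bandwidth at 25 MByte/s
    }
}
```

The rate only limits background resync traffic, not normal replication writes, which matches the observation that the link is only saturated during a resync.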
Any more ideas with this info?
best regards
volker
01:01:30 load 1.2
01:02:00 load 0.8
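A timeline like the one above can be logged with a small loop. This is a sketch assuming a Linux host (it reads `/proc/loadavg`); the interval and sample count are arbitrary:

```shell
# Print "HH:MM:SS load <1-min average>" a few times, matching the
# format of the timeline above.
for i in 1 2 3; do
    printf '%s load %s\n' "$(date +%H:%M:%S)" "$(cut -d' ' -f1 /proc/loadavg)"
    sleep 1
done
```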
regards
volker
Hi,
>> What surprises me the most is the delay between the load rising and the
>> finished dd. a short timeline to make myself clear:
>
> To rid yourself of surprises, either pass the oflag=direct to dd or
> follow it by a call to `sync`.
We're getting there. Calling 'sync' right after dd (no of
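The two suggested variants can be sketched like this (file name and sizes are placeholders, not taken from the thread):

```shell
# Variant 1: O_DIRECT bypasses the page cache, so dd reports something
# close to real device speed. Some filesystems (e.g. tmpfs) do not
# support O_DIRECT, hence the fallback message.
dd if=/dev/zero of=./ddtest bs=1M count=16 oflag=direct 2>/dev/null \
    || echo "O_DIRECT not supported on this filesystem"

# Variant 2: normal cached write, then force dirty pages to disk.
# Without the sync, dd finishes as soon as the data is in the page
# cache, which explains both the inflated speed report and the load
# rising only after dd returns.
dd if=/dev/zero of=./ddtest bs=1M count=16 2>/dev/null
sync

rm -f ./ddtest
```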
Hi,
>> Now, how do i "debug" sync? :-)
>
> your writes are being delayed. What "bug" are you trying to find?
Well, without drbd in use the writes on the very same
- sdb-device
- volume_group
- lvm_volume
are not being delayed. And currently I'm looking at drbd as a possible
cause. It must not n
00% 392MB 39.2MB/s 00:10
> - what latencies are you seeing i.e. ping times when idle, ping times
> during dd
idle and while dd'ing:
64 bytes from fallback.content.domain.de.de (80.237.137.130): icmp_seq=3
ttl=64 time=0.093 ms
- volker
already using proto B and throughput is definitely not the problem
here. If I can 'create' the problem locally without a secondary
attached, how can the network link be of relevance?
- volker
Hi,
>> http://public.blafoo.org/drbd/top-primary.txt
>> http://public.blafoo.org/drbd/top-secondary.txt
>
> ok ... still drbd 8.3.8 ... what does "iostat -dx" say during your test?
sadly, yes. elrepo is not an option here because it's unofficial.
iostat is quite interesting starting at line 5 at
n some bugs.
Even though the update on a test-host went flawlessly:
Is there anything in particular I need to watch for before/after the
update of the packages?
Any notes on compatibility between 8.3.8-1 and 8.3.12-1 I should be
aware of?
Once the host is live again, i will report if that did
not being able to handle the amount
of I/O requests generated by the whole environment.
I'm not sure where to go from here. If we find a solution, I'll let you
know... :-)
- volker
my test-folder. That yields "too many links" as an error.
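For what it's worth, "too many links" is the kernel's EMLINK error string; on ext3 it typically surfaces when a directory reaches its hard-link limit (around 32000 subdirectories, since each subdirectory links back to its parent). A quick check of the errno mapping, generic and not specific to this setup:

```python
import errno
import os

# EMLINK is the errno behind "Too many links"; ext3 raises it when a
# directory's link count (one link per subdirectory's ".." entry) hits
# the filesystem's cap.
print(errno.EMLINK, os.strerror(errno.EMLINK))
```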
Don't get me wrong, I'm not saying this is DRBD's fault. But since it is
involved and everything above DRBD seems(!) to be ok, I think it's ok to
ask here.
Thanks for reading :-)
- volker
Cluster Resource is configured with the ocf:linbit:drbd resource agent.
Any suggestions to solve that riddle?
Regards
Volker Pense
---
Volker Pense
Sagittarius-IT
Pfungstädter Str. 22
64297 Darmstadt
Tel.: 0151-25661357
Fax: 06151-7877381
Mail: vpe...@sagittarius-it.de
Web: w
The Infiniband SDP handbook also does not require doing more to transform TCP
code to SDP.
Now to our questions:
* Are we completely useless idiots? Right now it feels like it.
* Have we done something obviously wrong?
If not
* Where can we dig in deeper?
* Who can direct us t
ACTIVE/ACTIVE yet?
Please give us some hints to satisfy our curiosity.
Best Regards
Volker
--
inqbus it-consulting +49 ( 341 ) 5643800
Dr. Volker Jaenisch http://www.inqbus.de
Herloßsohnstr. 12 04155 Leipzig
server pair B should beat server pair A by at least a factor of 3.
In case of a sync:
* Pair A delivers a sync rate of 20 Mb/sec.
* Pair B delivers a sync rate of 1 MB/sec
Any ideas?
Any help appreciated
Volker
using the same CISCO switch and 10G-Backbone has a good rate.
Next test I will do is to reimplement drbd8 in the proxmox-kernel and do
the comparison drbd8/9 on the same hardware.
Cheers
Volker
case of the bonding of the three lines, this command would
be "ifdown bond0". Now you understand why we have the DSL line at all
:-). For setups that are not geographically separated we use additional serial
lines to achieve our guiding principle.
Looking for
Servus!
On 24.02.2017 at 15:53, Lars Ellenberg wrote:
> On Fri, Feb 24, 2017 at 03:08:04PM +0100, Dr. Volker Jaenisch wrote:
>> If both 10Gbit links fail then the bond0 aka the worker connection fails
>> and DRBD goes - as expected - into split brain. But that is not the problem
Hi Igor!
On 24.02.2017 at 23:56, Igor Cicimov wrote:
> Hi Volker,
>
>
> resource vm-100-disk-1 {
>     template-file "/var/lib/drbd.d/drbdmanage_global_common.conf";
>
>     net {
>         allow-two-primaries yes;
>
> Dual-primar
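For comparison, a dual-primary net section in a plain (non-drbdmanage) DRBD resource file usually looks like the sketch below. The resource name and protocol are assumptions, and dual-primary additionally requires proper fencing and cluster-manager integration to be safe:

```
resource vm-100-disk-1 {
    net {
        allow-two-primaries yes;
        protocol C;    # dual-primary requires synchronous replication
    }
}
```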
estigate further and come back to you.
Thanks a lot for your input leading me to this network problem!
Cheers,
Volker
Does not work
4) Works
5) Works
I will open another thread for this last issue.
Many thanks for all of you. Sorry for stealing your time
Cheers,
Volker
Just for completeness:
The main problem is still that drbd does not recover once the
connection is restored:
> But this is not what we like
> I may have forgotten to file an issue there :-/
Should we address this topic on the pacemaker mailing list? I still
think it would be an important improvement.
Cheers,
Volker
rbd ┊ hydra4 ┊ LVM ┊ vg_drbd ┊ 14.70 TiB ┊ 18.19 TiB ┊ False ┊ Ok ┊
Any pointers appreciated
Vol
Dear Gábor!
Thank you so much!
This was exactly what I was looking for:
linstor resource toggle-disk hydra1 vm-118-disk-1 -s pool_drbd
Works like a charm.
Cheers,
Volker
On 03.12.20 07:32, Gábor Hernádi wrote:
> Hi,
>
> > But all the volumes are now and remain in diskless sta
are using an
Infiniband Network with round-trip times well below the rotational latency of our
spinning disks.]
Cheers,
Volker
elp appreciated
Volker
y. Only that
the label then does not match the logical "context" any more.
What linbit lacks is simply the abstraction and distinction of "display
label" and "internal unique identifier".
Cheers,
Volker
Dear Andreas!
> @Volker the community will be very happy if you provide a consistent and
> reliable patch.
>
Thank you for your confidence in me.
But, sorry, I program in Python, Cython and VUE.JS, and not in Java. I
fixed some things in drbdmanage (which was the Python predecessor o