Hi:
I am testing ceph-mon split brain. I have read the code, and if I
understand it right, it won't split-brain. But I think
there is still another problem. My ceph version is 0.94.10. And here
are my test details:
3 ceph-mons, their ranks are 0, 1, 2 respectively. I stop the rank 1
mon
of the leader by quorum.
>
> Maybe with 2 mon daemons and closing the communication between each of them,
> every mon daemon will believe that it can be a leader, because every daemon will
> have a quorum of 1 with no other vote.
>
> Just saying :)
>
>
> On Jul 4, 2017 12
a view num.
In the election phase:
they send the view num and rank num.
When receiving an election message, a mon compares the view num
(higher wins the election) and the rank num (lower wins).
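(To make sure I read that comparison the same way, here is a minimal
sketch of it in Python; the function name is made up, this is not the
actual ceph Elector code.)

    def prefers(candidate, incumbent):
        # Each entry is (view_num, rank): a higher view num wins first,
        # and on a tie the lower rank wins, as described above.
        cand_view, cand_rank = candidate
        inc_view, inc_rank = incumbent
        if cand_view != inc_view:
            return cand_view > inc_view   # higher view num wins
        return cand_rank < inc_rank       # lower rank wins

    # Example: a mon with a newer view beats one with an older view...
    assert prefers((5, 1), (4, 0))
    # ...and on equal views the lower rank (rank 0) wins.
    assert not prefers((5, 1), (5, 0))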
On Tue, Jul 4, 2017 at 9:25 PM, Joao Eduardo Luis wrote:
> On 07/04/2017 06:57 AM, Z Will wr
current leader, then it will decide
whether to stand by for a while and retry later, or to start a leader
election, based on the information obtained from the probing phase.
Do you think this will be OK?
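(Roughly what I have in mind, as a Python sketch; the function and
field names are made up, not existing code.)

    def after_probe(reachable_ranks, my_rank, leader_rank, mon_count):
        # reachable_ranks: set of mon ranks that answered the probe
        quorum_size = mon_count // 2 + 1
        if leader_rank in reachable_ranks:
            return "stand_by"              # leader is alive, just wait
        if len(reachable_ranks | {my_rank}) >= quorum_size:
            return "start_election"        # leader unreachable, but a majority is
        return "retry_probe_later"         # too few peers seen, back off and retry

    # e.g. 3 mons, leader rank 0 unreachable, rank 2 answered:
    # after_probe({2}, my_rank=1, leader_rank=0, mon_count=3) -> "start_election"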
On Wed, Jul 5, 2017 at 6:26 PM, Joao Eduardo Luis wrote:
> On 07/05/2017 08:01 AM, Z Will wrote:
from
other mons when needed, to increase performance.
The other logic is the same as before.
What do you think of it?
On Thu, Jul 6, 2017 at 10:31 PM, Sage Weil wrote:
> On Thu, 6 Jul 2017, Z Will wrote:
>> Hi Joao:
>>
>> Thanks for the thorough analysis. My initial concern
and it only needs a small code change. Any
suggestions for this? Am I missing any considerations?
On Wed, Jul 5, 2017 at 6:26 PM, Joao Eduardo Luis wrote:
> On 07/05/2017 08:01 AM, Z Will wrote:
>>
>> Hi Joao:
>> I think this is all because we choose the monitor with
For a large cluster, there will be a lot of changes at any time, which
means the pressure on the mon will be high at times, because all changes
go through the leader. So for this, the local storage for the mon
should be good enough; I think this may be a consideration.
On Tue, Jul 11, 2017 at 11:29 AM,
I think if you want to delete through gc,
increase this:
OPTION(rgw_gc_processor_max_time, OPT_INT, 3600) // total run time
for a single gc processor work
and decrease this:
OPTION(rgw_gc_processor_period, OPT_INT, 3600) // gc processor cycle time
Or, I wonder if there is some option to bypass the gc
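(For example, something along these lines in ceph.conf under the rgw
client section; the section name and values are illustrative, not tested
recommendations.)

    [client.radosgw.gateway]
        rgw gc processor max time = 3600   # seconds one gc pass may run
        rgw gc processor period = 600      # seconds between gc passes
        rgw gc obj min wait = 300          # seconds before an object is gc-eligible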
Hi all:
I have tried to install a VM using rbd as the disk, following the
steps from the ceph docs, but met some problems. The package environment
is the following:
CentOS Linux release 7.2.1511 (Core)
libvirt-2.0.0-10.el7_3.9.x86_64
libvirt-python-2.0.0-2.el7.x86_64
virt-manager-common-1.4.0-2.el7.noarch
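(For reference, a typical rbd disk stanza in the libvirt domain XML
looks roughly like the following; the pool/image names, mon host and
secret UUID are placeholders from the ceph docs, not my exact setup.)

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='REPLACE-WITH-SECRET-UUID'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>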
support - I did so on
> latest Debian (stretch)
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Z
> Will
> Sent: Friday, August 25, 2017 8:49 AM
> To: Ceph-User
> Subject: [ceph-users] libvirt + rbd questions
>
>
Hi:
I have tried to use nginx + fastcgi + radosgw, and benchmarked
it with cosbench. I tuned the nginx and CentOS configuration, but
couldn't get the desired performance. I met the following problem.
1. I tried to tune the CentOS net.ipv4... parameters, and I got a
lot of CLOSE_WAIT sockets
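(For anyone reproducing this, the CLOSE_WAIT sockets can be inspected
with something like the following; these are generic commands, not my
benchmark scripts.)

    # list sockets stuck in CLOSE_WAIT and the owning process
    ss -tanp state close-wait
    # just count them
    ss -tan state close-wait | wc -l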
Hi:
I want to know the best radosgw performance in practice now. What
is the best write IOPS? Say I have 10 concurrent PUTs, and the files are
of different sizes. For small files, like <100k, I hope the response
time is at the millisecond level. We now have a ceph cluster with 45 hosts
and 540 osds. What sho
Hi:
Recently, I tried to configure radosgw with nginx as the frontend and
benchmark it with cosbench. And I found a strange thing. My os-related
configurations are:
net.core.netdev_max_backlog = 1000
Hi:
Very sorry for the last email, it was an accident. Recently, I
tried to configure radosgw (0.94.7) with nginx as the frontend and benchmark
it with cosbench. And I found a strange thing. My os-related
configurations are:
net.core.netdev_max_backlog = 1000
net.core.somaxconn = 1024
In rgw_main.cc,
Hi:
I used nginx + fastcgi + radosgw, and configured radosgw with "rgw
print continue = true". In RFC 2616, it says an origin server that
sends a 100 (Continue) response MUST ultimately send a final status
code, once the request body is received and processed, unless it
terminates the transport connection prematurely.
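(If the frontend cannot handle the 100-continue exchange itself, the
workaround suggested in the ceph docs is to turn it off on the radosgw
side; the section name below is illustrative.)

    [client.radosgw.gateway]
        rgw print continue = false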
Hello gurus:
My name is Will. I have just started studying ceph and have a lot of
interest in it. We are using ceph 0.94.10, and I am trying to tune the
performance of ceph to satisfy our requirements. We are using it as an
object store now. Even though I have tried some different
configurations, I sti
Hi all:
I have some questions about the durability of ceph. I am trying
to measure the durability of ceph. I know it should be related to
host and disk failure probability, failure detection time, when
recovery is triggered, and the recovery time. I use it with multiple
replication, say k
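(A back-of-the-envelope sketch in Python of the kind of estimate I mean;
the failure model, rates and recovery time are assumed, illustrative
numbers only, not measurements.)

    import math

    afr = 0.04           # assumed annual failure rate of one disk
    recovery_hours = 6   # assumed time to re-replicate a failed disk
    replicas = 3         # replication factor
    disks = 540          # OSDs in the cluster
    hours_per_year = 24 * 365

    # P(one given disk also dies inside the recovery window)
    p_fail_in_window = afr * recovery_hours / hours_per_year

    # Expected disk failures per year across the cluster.
    failures_per_year = disks * afr

    # Very crude: data for an object is lost only if the other
    # (replicas - 1) disks holding its copies die before recovery ends.
    p_loss_given_failure = p_fail_in_window ** (replicas - 1)
    annual_loss_prob = failures_per_year * p_loss_given_failure

    print("approx annual loss probability: %.2e (~%.1f nines)"
          % (annual_loss_prob, -math.log10(annual_loss_prob)))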
Hi Patrick:
I want to ask a very tiny question. How many 9s do you claim for your
storage durability? And how is it calculated? Based on the data you
provided, have you found some failure model to refine the storage
durability estimate?
On Thu, Jun 15, 2017 at 12:09 AM, David Turner wrote:
> I understa
You can try this binary cosbench release package:
https://github.com/intel-cloud/cosbench/releases/download/v0.4.2.c4/0.4.2.c4.zip
On Fri, Jun 16, 2017 at 3:08 PM, fridifree wrote:
> Thank you for your comment.
>
> I feel that this tool is very buggy.
> Do you have an up-to-date guide for this tool