On Thu, 4 Apr 2019 at 13:32, Dan van der Ster wrote:
>
> There are several more fixes queued up for v12.2.12:
>
> 16b7cc1bf9 osd/OSDMap: add log for better debugging
> 3d2945dd6e osd/OSDMap: calc_pg_upmaps - restrict optimization to
> origin pools only
> ab2dbc2089 osd/OSDMap: drop local pool filt
On Mon, 8 Apr 2019 at 05:01, Matt Benjamin wrote:
>
> Hi Christian,
>
> Dynamic bucket-index sharding for multi-site setups is being worked
> on, and will land in the N release cycle.
>
What about removing orphaned shards on the master? Are the existing
tools able to work with that?
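If I remember the subcommand names correctly, recent Luminous/Mimic point
releases added a stale-instance cleanup along these lines (a sketch only,
and as far as I know it is not intended to be run in multisite setups):

radosgw-admin reshard stale-instances list
radosgw-admin reshard stale-instances rm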
On the secon
Hi Anthony,
Thanks for answering.
>> Which SSD model and firmware are you using? Which HBA?
Well, from what I can see it's basically all of our SSDs, which
unfortunately vary a bit.
But in the example I posted, the particular disk was:
SSD SATA 6.0 Gb/s/0/100/1/0/0.8.0 /dev/s
Which OS are you using?
With CentOS we find that the heap is not always automatically
released. (You can check the heap freelist with `ceph tell osd.0 heap
stats`).
As a workaround we run this hourly:
ceph tell mon.* heap release
ceph tell osd.* heap release
ceph tell mds.* heap release
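For example, as a minimal sketch of that hourly job (the file name and shell
are illustrative):

#!/bin/sh
# /etc/cron.hourly/ceph-heap-release -- ask each daemon type to return
# freed heap pages to the OS
ceph tell 'mon.*' heap release
ceph tell 'osd.*' heap release
ceph tell 'mds.*' heap release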
-- Dan
Hi all,
I have been testing the cloud sync module in radosgw. The Ceph version is 13.2.5,
git commit id cbff874f9007f1869bfd3821b7e33b2a6ffd4988.
When syncing to an AWS S3 endpoint I get an HTTP 400 error, so I switched to the
http:// protocol and used the tcpick tool to dump some of the messages, like this:
PUT /wuxi01 HTT
Possibly the client doesn't like the server returning SecType = "none";
maybe try SecType = "sys"?
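For reference, a minimal sketch of an nfs-ganesha export block with that
change (Export_ID, Path and Pseudo are placeholders; FSAL_CEPH is assumed):

EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    SecType = "sys";
    FSAL {
        Name = CEPH;
    }
}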
Leon L. Robinson
> On 6 Apr 2019, at 12:06,
> wrote:
>
> Hi all,
>
> I have recently setup a Ceph cluster and on request using CephFS (MDS
> version: ceph version 13.2.5 (cbff874f9007f1869b
Hi there,
I'm new to Ceph and just got my first cluster running.
Now I'd like to know whether the performance we are getting is in line with
expectations. Is there a website with benchmark results somewhere that I could
use to compare against our hardware and our results?
These are the results:
rados bench single threaded:
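(Run along these lines; the pool name, duration and block size are placeholders:)

rados bench -p testpool 60 write -t 1 -b 4M --no-cleanup
rados bench -p testpool 60 seq -t 1
rados -p testpool cleanup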
The log appears to be missing all the librbd log messages. The process
seems to stop at attempting to open the image from the remote cluster:
2019-04-05 12:07:29.992323 7f0f3bfff700 20
rbd::mirror::image_replayer::OpenImageRequest: 0x7f0f28018a20
send_open_image
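One way to capture the missing librbd messages is to raise the debug levels
for the rbd-mirror daemon, e.g. in its ceph.conf (a sketch; the section name
and levels are illustrative):

[client]
    debug rbd = 20
    debug rbd mirror = 20
    debug journaler = 20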
Assuming you are using the default
Hi Jason,
On the prod side we have a cluster named ceph, and on the DR side we renamed it to cephdr.
Accordingly, we renamed ceph.conf to cephdr.conf on the DR side.
This setup used to work, but one day we tried to promote the DR side to verify the
replication, and since then it's been a nightmare.
The resync didn't work.
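For reference, the resync we tried was along these lines, run against the DR
cluster whose copy should be discarded and re-synced (pool and image names
are placeholders):

rbd --cluster cephdr mirror image resync <pool>/<image>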
On Mon, Apr 8, 2019 at 9:47 AM Vikas Rana wrote:
>
> Hi Jason,
>
> On Prod side, we have cluster ceph and on DR side we renamed to cephdr
>
> Accordingly, we renamed the ceph.conf to cephdr.conf on DR side.
>
> This setup used to work and one day we tried to promote the DR to verify the
> replica
It's definitely ceph-mgr that is struggling here. It uses 100% of a CPU for
several tens of seconds and reports the following in its log a few times before
anything gets displayed:
Traceback (most recent call last):
File "/usr/local/share/ceph/mgr/dashboard/services/exception.py", line 88, i
Hi Wes,
I just filed a bug ticket in the Ceph tracker about this:
http://tracker.ceph.com/issues/39140
Will work on a solution ASAP.
Thanks,
Ricardo Dias
On 08/04/19 15:41, Wes Cilldhaire wrote:
> It's definitely ceph-mgr that is struggling here. It uses 100% of a cpu for
> for several tens o
Thank you
- On 9 Apr, 2019, at 12:50 AM, Ricardo Dias rd...@suse.com wrote:
> Hi Wes,
>
> I just filed a bug ticket in the Ceph tracker about this:
>
> http://tracker.ceph.com/issues/39140
>
> Will work on a solution ASAP.
>
> Thanks,
> Ricardo Dias
>
> On 08/04/19 15:41, Wes Cilldhaire
Hey everyone,
The CFP for DevConf US [1] ends today! I have submitted proposals for us to have
a Ceph Foundation booth, BOF space and two presentations of my own, which
you can find on our CFP coordination pad [2]. I'll post an update here if our
booth is accepted, along with a call for help.
If you're planning on attending a
On Mon, Apr 08, 2019 at 06:38:59PM +0800, 黄明友 wrote:
>
> hi,all
>
>I had test the cloud sync module in radosgw. ceph verion is
>13.2.5 , git commit id is
>cbff874f9007f1869bfd3821b7e33b2a6ffd4988;
Reading src/rgw/rgw_rest_client.cc
shows that it only generates v2 signatures.
Hi Yuri,
both issues from Round 2 relate to unsupported expansion of the main device.
In fact it doesn't work and silently bypasses the operation in your case.
Please try with a different device...
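Assuming this is the ceph-bluestore-tool path, the operation in question
looks roughly like this (the OSD path is a placeholder and the OSD must be
stopped first):

ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0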
Also I've just submitted a PR for mimic to indicate the bypass, will
backport to Luminous once mim
One of the difficulties with the osd_memory_target work is that we can't
tune based on the RSS memory usage of the process. Ultimately it's up to
the kernel to decide to reclaim memory and especially with transparent
huge pages it's tough to judge what the kernel is going to do even if
memory h
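For reference, the target being discussed is the per-OSD setting, e.g. in
ceph.conf (the 4 GiB value is only illustrative):

[osd]
    osd memory target = 4294967296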
Hello Simon,
Another idea is to increase choose_total_tries.
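A sketch of how that tunable can be changed by editing the CRUSH map (the
value 100 is illustrative; the default is 50):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# in crush.txt, raise: tunable choose_total_tries 100
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new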
Hth
Mehmet
On 7 March 2019 at 09:56:17 CET, Martin Verges wrote:
>Hello,
>
>try restarting every osd if possible.
>Upgrade to a recent ceph version.
>
>--
>Martin Verges
>Managing director
>
>Mobile: +49 174 9335695
>E-Mail: martin.ver.
We have two separate RGW clusters running Luminous (12.2.8) that have started
seeing an increase in PGs going active+clean+inconsistent, with the cause being
an omap_digest mismatch. Both clusters are using FileStore and the
inconsistent PGs are happening on the .rgw.buckets.index pool.
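The mismatches can be inspected (and, if appropriate, repaired) with the
usual scrub tooling, roughly (the pgid is a placeholder):

rados list-inconsistent-obj <pgid> --format=json-pretty
ceph pg repair <pgid>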
On Mon, Apr 8, 2019 at 3:19 PM Bryan Stillwell wrote:
>
> We have two separate RGW clusters running Luminous (12.2.8) that have started
> seeing an increase in PGs going active+clean+inconsistent with the reason
> being caused by an omap_digest mismatch. Both clusters are using FileStore
> and
> On Apr 8, 2019, at 4:38 PM, Gregory Farnum wrote:
>
> On Mon, Apr 8, 2019 at 3:19 PM Bryan Stillwell wrote:
>>
>> There doesn't appear to be any correlation between the OSDs which would
>> point to a hardware issue, and since it's happening on two different
>> clusters I'm wondering if the
Hi @all,
I'm using the Ceph rados gateway installed via ceph-ansible with the Nautilus
version. The radosgw instances are behind an haproxy which adds these headers
(checked via tcpdump):
X-Forwarded-Proto: http
X-Forwarded-For: 10.111.222.55
where 10.111.222.55 is the IP address of the client. The rad
Refer "rgw log http headers" under
http://docs.ceph.com/docs/nautilus/radosgw/config-ref/
Or even better in the code https://github.com/ceph/ceph/pull/7639
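A minimal ceph.conf sketch of that option (the section name and header list
are illustrative):

[client.rgw.gateway1]
    rgw log http headers = "http_x_forwarded_for,http_x_forwarded_proto"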
Thanks,
-Pavan.
On 4/8/19, 8:32 PM, "ceph-users on behalf of Francois Lafont"
wrote:
Hi @all,
I'm using Ceph rados gatewa