Memory usage was high even when backfills was set to "1".
On Mon, Sep 23, 2019 at 8:54 PM Robert LeBlanc wrote:
> On Fri, Sep 20, 2019 at 5:41 AM Amudhan P wrote:
> > I have already set "mon osd memory target" to 1 GB and I have set
> max-backfill from 1 to 8.
>
> Reducing the number of backfills
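For reference, a minimal sketch of how these two settings are usually applied on Nautilus via the centralized config database (the values shown are illustrative, not taken from this thread):
# cap the per-OSD memory target at roughly 1 GiB
ceph config set osd osd_memory_target 1073741824
# keep concurrent backfills per OSD at the minimum
ceph config set osd osd_max_backfills 1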
As the signature shows, please send an email to ceph-users-le...@ceph.io
to unsubscribe.
hou guanghua wrote:
Hi Alberto,
Did you try the "--addv" option? Here's an example:
monmaptool --addv
[v2::,v1::]
Cheers,
Ricardo Dias
On 23/09/19 16:07, Corona, Alberto wrote:
> Hi folks,
>
> While practicing some disaster recovery I noticed that it currently seems
> impossible to add both a v1 and v2
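For reference, the --addv syntax takes a monitor name, a bracketed v2/v1 address pair and the monmap file; the name, addresses and path below are placeholders, not values from this thread:
# placeholder monitor name, addresses and monmap path
monmaptool --addv a [v2:192.168.0.10:3300,v1:192.168.0.10:6789] /tmp/monmap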
Hi All,
I have a question about "orphaned" objects in the default.rgw.buckets.data pool.
A few days ago I ran "radosgw-admin orphans find ..."
[dc-1 root@mon-1 tmp]$ radosgw-admin orphans list-jobs
[
"orphans-find-1"
]
Today I checked the result. I listed the orphaned objects with this command:
$# for i in
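A hedged sketch of such a listing loop, assuming the scan results were written to the default.rgw.log pool and that the leaked shards follow the usual orphan.scan.<job-id>.leaked.* naming (both the pool and the object-name pattern are assumptions, not confirmed by this thread):
# pool name and object-name pattern are assumptions
for i in $(rados -p default.rgw.log ls | grep 'orphan.scan.orphans-find-1.leaked'); do
    rados -p default.rgw.log listomapkeys "$i"
done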
Hi,
ceph health reports
1 MDSs report slow metadata IOs
1 MDSs report slow requests
This is the complete output of ceph -s:
root@ld3955:~# ceph -s
cluster:
id: 6b1b5117-6e08-4843-93d6-2da3cf8a6bae
health: HEALTH_ERR
1 MDSs report slow metadata IOs
1 MDSs repor
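When an MDS reports slow metadata IOs and slow requests, a common first step is to ask the MDS what it is stuck on via its admin socket; the daemon name below is a placeholder:
# run on the MDS host; mds.ld3955 is a placeholder daemon name
ceph daemon mds.ld3955 dump_ops_in_flight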
On 24/09/2019 10:25, Marc Roos wrote:
> The intent of this change is to increase iops on bluestore; it was
> implemented in 14.2.4, but it is a general bluestore issue, not specific to Nautilus.
I am confused. Isn't it the case that an increase in iops on bluestore
= an increase in overall io
Hi,
you need to fix the non-active PGs first. They are probably also the
reason for the blocked requests.
Regards,
Burkhard
On 9/24/19 1:30 PM, Thomas wrote:
Hi,
ceph health reports
1 MDSs report slow metadata IOs
1 MDSs report slow requests
This is the complete output of ceph -s:
root@
Can you please advise how to fix this (manually)?
My cluster has not been healthy for 14 days now.
Am 24.09.2019 um 13:35 schrieb Burkhard Linke:
> Hi,
>
>
> you need to fix the non-active PGs first. They are probably also the
> reason for the blocked requests.
>
>
> Regards,
>
> Burkhard
>
>
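As a starting point for the inactive/peering PGs, these are the usual diagnostic commands (the PG id is a placeholder):
ceph health detail            # lists the affected PGs
ceph pg dump_stuck inactive   # shows stuck PGs and their acting OSDs
ceph pg 1.2f query            # placeholder PG id; check the recovery_state section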
Hi everyone,
I'm configuring an iSCSI gateway on Ceph Mimic (13.2.6) using the Ceph manual:
https://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/
But I am stuck on this problem. The manual says:
"Set the client’s CHAP username to myiscsiusername and password to
myiscsipassword:
> /iscsi-target...at:rh
On Tue, Sep 24, 2019 at 12:27 AM Amudhan P wrote:
>
> Memory usage was high even when backfills was set to "1".
Memory usage will not decrease by adding more backfills. EC is very
CPU and RAM intensive during recovery as it has to rebuild the shards.
I don't know if reducing stripe size or object
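Two things worth checking here, as a sketch (the daemon id and profile name are placeholders):
# on the OSD host: the memory target the daemon is actually running with
ceph daemon osd.0 config get osd_memory_target
# the EC profile (k/m, stripe_unit) used by the affected pool
ceph osd erasure-code-profile get default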
https://www.thegeekdiary.com/centos-rhel-67-why-the-files-in-tmp-directory-gets-deleted-periodically/
Am 24.09.19 um 14:53 schrieb Lenz Grimmer:
> On 9/24/19 1:37 PM, Miha Verlic wrote:
>
>> I've got a slightly different problem. After a few days of running fine,
>> the dashboard stops working because i
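If the dashboard breakage really is caused by the periodic /tmp cleanup described in that link, one hedged workaround on a systemd-based system is to exclude the relevant path from systemd-tmpfiles; the path below is purely hypothetical:
# hypothetical path; adjust to whatever the dashboard actually keeps under /tmp
echo 'x /tmp/dashboard-*' > /etc/tmpfiles.d/ceph-dashboard.conf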
On Tue, Sep 24, 2019 at 4:56 AM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Can you please advise how to fix this (manually)?
> My cluster is not getting healthy since 14 days now.
> >> Reduced data availability: 33 pgs inactive, 32 pgs peering
> >> Degraded data red
My radosgw-admin orphans find generated 64+ shards, and it shows a lot of
_shadow_, _multipart, and other undefined object types.
I am waiting for someone to clarify what to do with the output.
Regards
From: P. O.
Sent: Tuesday, 24 September 2019 11:26
To: ceph-users@ceph.io
Subject:
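Before acting on any entry in that output, one cautious step is to inspect the individual rados object first; the object name below is a placeholder standing in for a listed entry:
# placeholder object name from the orphan listing
rados -p default.rgw.buckets.data stat 'bucketid__shadow_exampleobject'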
On Tue, Sep 17, 2019 at 8:03 AM Sasha Litvak
wrote:
>
> * I am bothered by the quality of the releases of a very complex system that
> can bring down a whole house and keep it down for a while. While I wish the
> QA would be perfect, I wonder if it would be practical to release new
> packages to
On 09/24/2019 01:08 PM, Gesiel Galvão Bernardes wrote:
> Hi everyone,
>
> I'm configuring an iSCSI gateway on Ceph Mimic (13.2.6) using the Ceph manual:
>
> https://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/
>
> But I am stuck on this problem. The manual says:
> "Set the client’s CHAP username to m
Hi Thomas,
How does your crush map/tree look?
If your crush failure domain is by host, then your 96x 8T disks will only be as
useful as your 1.6T disks, because the smallest failure domain is your limiting
factor.
So you can either redistribute your disks to be 16x8T+32x1.6T per host, or you
could g
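To see how the hierarchy and capacities actually line up, the usual commands are:
ceph osd crush tree   # shows the failure-domain hierarchy
ceph osd df tree      # shows per-host/per-OSD size, weight and utilization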
Regarding a testing/cutting-edge repo: the non-LTS versions of Ceph were
removed because very few people ever used or tested them. The majority of
people who would use the testing repo would be people needing a bug fix ASAP.
Very few people would actually use it regularly, and its effec
Den tis 24 sep. 2019 kl 23:35 skrev David Turner :
>
> At work I haven't had a problem with which version of Ceph is being
> installed because we always have local mirrors of the repo that we only
> update with the upstream repos when we're ready to test a new version in
> our QA environments long
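As one sketch of keeping nodes on a known-good release between QA cycles, apt pinning can hold the Ceph packages at a fixed version (the version string below is illustrative; yum versionlock is the rough equivalent on EL systems):
# write an apt preference file; version string is illustrative
cat > /etc/apt/preferences.d/ceph.pref <<'EOF'
Package: ceph*
Pin: version 14.2.2*
Pin-Priority: 1001
EOF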