Hello,
Yes, the cache tiers are replicated with size 3.
JC
Hi,
if you shared your drivegroup config we might be able to help identify
your issue. ;-)
The last example in [1] shows the "wal_devices" filter for splitting
wal and db.
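For what it's worth, a minimal sketch of such a spec along those lines, assuming an Octopus-era cephadm; the service_id, the host_pattern and the model string are placeholders to adapt:

cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: osd_dedicated_wal_db
placement:
  host_pattern: '*'
data_devices:
  rotational: 1        # HDDs take the data
db_devices:
  rotational: 0        # SSDs take the DB
wal_devices:
  model: NVME-QQQQ-987 # placeholder model filter for the WAL devices
EOF
ceph orch apply osd -i osd_spec.yml --dry-run   # preview first, if your release supports --dry-run
ceph orch apply osd -i osd_spec.yml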
Regards,
Eugen
[1] https://docs.ceph.com/docs/master/cephadm/drivegroups/#dedicated-wal-db
Quoting Tony Liu:
Hi,
taking a quick look at the script I noticed that you're trying to dd
from the old to the new device and then additionally run
'ceph-bluestore-tool bluefs-bdev-migrate', which seems redundant to me;
I believe 'ceph-bluestore-tool bluefs-bdev-migrate' is supposed to
migrate the data itself.
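For reference, a hedged sketch of the migrate call on its own, with the OSD stopped first (OSD id 0, the unit name and the target LV are placeholders; cephadm deployments use ceph-<fsid>@osd.0 style units instead):

systemctl stop ceph-osd@0                        # OSD must be down before touching bluefs
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-0 \
    --devs-source /var/lib/ceph/osd/ceph-0/block.db \
    --dev-target /dev/ceph-db/db-osd0            # new DB device or LV
systemctl start ceph-osd@0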
Hi,
I accidentally destroyed the wrong OSD in my cluster. It is now marked
as "destroyed" but the HDD is still there and the data was not touched, AFAICT.
I was able to activate it again using ceph-volume lvm activate and I can
mark the OSD as "in", but its status is not changing from "destroyed".
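For context, a sketch of the sequence described above, with a hypothetical OSD id 12 and its fsid as placeholders:

ceph-volume lvm activate 12 <osd-fsid>   # bring the existing LVs back up
ceph osd in osd.12                       # the OSD can be marked "in" again ...
ceph osd tree | grep osd.12              # ... but the tree still shows it as "destroyed"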
Hi friends,
Since the deployment of our Ceph cluster we've been plagued by slow
metadata errors.
Namely, the cluster goes into HEALTH_WARN with a message similar to this
one:
2 MDSs report slow metadata IOs
1 MDSs report slow requests
1 slow ops, oldest one blocked for 32 sec, daemons [osd.22,osd.4] have
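If it helps, a couple of commands to see which requests are actually stuck; the daemon names come from the health message above, and 'ceph daemon' has to be run on the host where that OSD lives:

ceph health detail
ceph daemon osd.22 dump_ops_in_flight   # current in-flight requests on osd.22
ceph daemon osd.22 dump_blocked_ops     # only the ops that are currently blocked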
Hi,
there have been several threads about this topic [1], most likely it's
the metadata operation during the cleanup that saturates your disks.
The recommended settings seem to be:
[osd]
osd op queue = wpq
osd op queue cut off = high
This helped us a lot; the number of slow requests has dec
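For anyone on a release with the centralized config store, a sketch of the equivalent of those ceph.conf settings (note that osd_op_queue only takes effect after an OSD restart):

ceph config set osd osd_op_queue wpq
ceph config set osd osd_op_queue_cut_off high
# then restart the OSDs one host/failure domain at a time,
# e.g. systemctl restart ceph-osd.target on each node in turn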
Hi Eugen,
On Mon, 2020-08-24 at 14:26 +, Eugen Block wrote:
> Hi,
>
> there have been several threads about this topic [1], most likely
> it's
> the metadata operation during the cleanup that saturates your disks.
>
> The recommended settings seem to be:
>
> [osd]
> osd op queue = wpq
> o
Hi everyone, a bucket was over quota (default quota of 300k objects per
bucket), so I enabled the object quota for this bucket and set a quota of 600k
objects.
We are on Luminous (12.2.12) and dynamic resharding is disabled, so I manually did
the resharding from 3 to 6 shards.
Since then, radosgw-ad
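For reference, hedged sketches of the quota and manual reshard steps described above (BUCKET is a placeholder; verify the flags against your Luminous radosgw-admin):

radosgw-admin quota set --quota-scope=bucket --bucket=BUCKET --max-objects=600000
radosgw-admin quota enable --quota-scope=bucket --bucket=BUCKET
radosgw-admin bucket reshard --bucket=BUCKET --num-shards=6
radosgw-admin bucket stats --bucket=BUCKET     # check num_shards afterwards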
Hi everyone,
I have a serious problem: my entire Ceph cluster is currently no longer
able to provide service. As of yesterday I added 10 OSDs in total, 2 per
node; the rebalance started and took some IO but seemed to be doing its work.
This morning the cluster was still processing the re
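Not knowing the cluster, a few generic first commands that usually narrow down what is blocking I/O after adding OSDs:

ceph -s                  # overall health, blocked requests, recovery progress
ceph health detail       # which OSDs/PGs are reported as the culprits
ceph osd df tree         # look for nearfull/backfillfull OSDs after the rebalance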
Hi,
I am trying to add an OSD host whose disks are not clean.
I added the host:
ceph orch host add storage-3 10.6.50.82
The host is added, but the disks on that host are not listed.
I assume that's because they are not clean?
I zapped all disks:
ceph orch device zap storage-3 /dev/sdb
..
All disks are clea
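For completeness, a sketch of the zap/refresh cycle being described; host and device names are the ones from the message, and the --force flag and --all-available-devices behaviour should be checked against your cephadm version:

ceph orch device zap storage-3 /dev/sdb --force   # wipe the device so it is considered "available"
ceph orch device ls storage-3 --refresh           # force a new inventory of that host
ceph orch apply osd --all-available-devices       # let cephadm create OSDs on the clean disks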
Your options while staying on Xenial are only to Nautilus.
In the below chart, X is provided by the Ceph repos, U denotes from Ubuntu
repos.
rel      jewel   luminous   mimic   nautilus   octopus
trusty   X       X
xenial   XU      X          X       X
bionic           U          X       X          X
focal                                          XU
Octopus is only supported on bionic and focal.
Xenial
Thanks for your advice,
adding one OSD on another host does the trick. But I didn't understand why 2 OSDs
are enough? I would have guessed 3 or 5, since I have a replicated size of 3 and 5,
and EC with a size of 5.
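A couple of commands that show why the extra OSD/host mattered, by exposing the pool sizes and the CRUSH rules involved (a generic sketch, not specific to this cluster):

ceph osd pool ls detail        # replicated size / EC profile per pool
ceph osd crush rule dump       # failure domain each rule chooses (host, osd, ...)
ceph pg dump_stuck undersized  # PGs that cannot currently place all their copies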
While continuing my saga with the rgw orphans and dozens of terabytes of wasted
space, I have used the rgw-orphan-list tool. After about 45 minutes the tool
crashed (((
# time rgw-orphan-list .rgw.buckets
Pool is ".rgw.buckets".
Note: output files produced will be tagged with the current tim
Thanks Eugen for pointing it out.
I reread this link.
https://ceph.readthedocs.io/en/latest/rados/configuration/bluestore-config-ref/
It seems that, for a mix of HDD and SSD, I don't need to create
a WAL device, just the primary on HDD and the DB on SSD, and the WAL will be
using the DB device because it's faster. I
Hi,
you could try to use ceph-volume lvm create --data DEV --db DEV and
inspect the output to learn what is being done.
I am not sure about the right syntax right now, but you should find related
information via a search ...
HTH
Mehmet
On 23 August 2020 at 05:52:29 MESZ, Tony Liu wrote:
>Hi,
>
>I
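Since the exact syntax was left open above, a hedged sketch of what I believe the invocation looks like (device paths are placeholders; double-check with 'ceph-volume lvm create --help'):

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
# or, with a pre-created LV for the DB:
ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db-vg/db-osd-sdb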
On 25/08/2020 6:07 am, Tony Liu wrote:
I don't need to create
a WAL device, just the primary on HDD and the DB on SSD, and the WAL will be
using the DB device because it's faster. Is that correct?
Yes.
But be aware that the DB sizes are limited to 3GB, 30GB and 300GB.
Anything less than those sizes will have
On 2020-08-24 20:35, Mathijs Smit wrote:
> Hi everyone,
>
> I have a serious problem: my entire Ceph cluster is currently no longer
> able to provide service. As of yesterday I added 10 OSDs in total, 2 per
> node; the rebalance started and took some IO but seemed to be doing its work.
>
> > I don't need to create
> > a WAL device, just the primary on HDD and the DB on SSD, and the WAL will be
> > using the DB device because it's faster. Is that correct?
>
> Yes.
>
>
> But be aware that the DB sizes are limited to 3GB, 30GB and 300GB.
> Anything less than those sizes will have a lot of unutilised
> -----Original Message-----
> From: Anthony D'Atri
> Sent: Monday, August 24, 2020 7:30 PM
> To: Tony Liu
> Subject: Re: [ceph-users] Re: Add OSD with primary on HDD, WAL and DB on
> SSD
>
> Why such small HDDs? Kinda not worth the drive bays and power, instead
> of the complexity of putting W