Hi,
I've got a Ceph cluster with this status:
health: HEALTH_WARN
3 large omap objects
After looking into it I see that the issue comes from objects in the
'.rgw.gc' pool.
Investigating it I found that the gc.* objects have a lot of OMAP keys:
for OBJ in $(rados -p .rgw.gc ls);
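(The loop is cut off in this preview. A minimal sketch of one way to count the OMAP keys per gc object, assuming that is what the loop did, using the pool named above:)

    for OBJ in $(rados -p .rgw.gc ls); do
        echo "$OBJ: $(rados -p .rgw.gc listomapkeys "$OBJ" | wc -l)"
    done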
Hello Casey
Thank you for your reply.
To close this subject, one last question.
Do you know if it is possible to rotate the key defined by
"rgw_crypt_default_encryption_key="?
Best Regards
Francois Scheurer
From: Casey Bodley
Sent: Tuesday, May 28
On 24.05.19 at 14:43, Paul Emmerich wrote:
> * SSD model? Lots of cheap SSDs simply can't handle more than that
The customer currently has 12 Micron 5100 1.92TB (Micron_5100_MTFDDAK1)
SSDs and will get a batch of Micron 5200 in the next few days.
We have identified the performance settings in the BIOS as a major factor.
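(As an aside on the SSD question: the quick check often suggested on this list for whether an SSD can sustain Ceph-style journal/WAL writes is a single-threaded O_DSYNC 4k write test with fio. The device path and runtime below are placeholders, and the test writes to the raw device, so only run it on a disk with no data:)

    # single-job, queue-depth-1 sync writes; destroys data on /dev/sdX
    fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based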
Hi everyone,
We are currently using Ceph as the backend for our OpenStack block storage. For
backups of these disks we thought about also using Ceph (just with HDDs instead
of SSDs). As we will have some volumes that will be backed up daily and that
will probably not change too often, I searched
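(One approach that keeps daily backups cheap for rarely changing volumes is snapshot-based incremental export. A rough sketch, with made-up pool, image and snapshot names, and the backup cluster addressed via --cluster:)

    # take today's snapshot on the primary cluster
    rbd snap create volumes/volume-1234@backup-2019-05-29
    # ship only the delta since yesterday's snapshot into the backup cluster
    rbd export-diff --from-snap backup-2019-05-28 \
        volumes/volume-1234@backup-2019-05-29 - | \
        rbd --cluster backup import-diff - backup-pool/volume-1234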
On 5/29/19 5:40 AM, Konstantin Shalygin wrote:
> block.db should be 30GB or 300GB - anything between is pointless. There
> is described why:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033286.html
Following some discussions we had at the past Cephalocon, I beg to differ
on this.
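(For context, the arithmetic behind the quoted 30GB/300GB figures, as I understand it: with the BlueStore RocksDB defaults of the time, max_bytes_for_level_base = 256MB and a level multiplier of 10, the levels come out to roughly L1 = 256MB, L2 = 2.56GB, L3 = 25.6GB, L4 = 256GB. A level is only kept on the fast DB device if it fits there entirely, so the useful partition sizes step from about 3GB to 30GB to 300GB, and a 60GB partition behaves much like a 30GB one.)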
On Tue, May 28, 2019 at 11:50:01AM -0700, Gregory Farnum wrote:
You're the second report I've seen of this, and while it's confusing,
you should be able to resolve it by restarting your active manager
daemon.
Maybe this is related? http://tracker.ceph.com/issues/40011
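(For reference, a sketch of how the active mgr can be restarted; the jq parsing and the systemd unit name are illustrative:)

    # fail over to a standby mgr
    ceph mgr fail $(ceph mgr dump | jq -r '.active_name')
    # or restart the daemon on the host that runs it
    systemctl restart ceph-mgr@$(hostname -s)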
On Sun, May 26, 2
Hello Robert,
We have identified the performance settings in the BIOS as a major factor
>
Could you share your insights on what options you changed to increase
performance, and could you provide numbers for it?
Many thanks in advance
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail
It would be interesting to learn the types of improvements and the BIOS changes
that helped you.
Thanks
> From: "Martin Verges"
> To: "Robert Sander"
> Cc: "ceph-users"
> Sent: Wednesday, 29 May, 2019 10:19:09
> Subject: Re: [ceph-users] performance in a small cluster
> Hello Robert,
>> We h
Hi,
On 29.05.19 at 11:19, Martin Verges wrote:
>
> We have identified the performance settings in the BIOS as a major
> factor
>
> Could you share your insights on what options you changed to increase
> performance, and could you provide numbers for it?
Most default performance settings nowadays
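(The preview is cut off above. As an OS-side illustration of the kind of power-saving defaults this thread is about — the BIOS options themselves vary by vendor — one can check and switch the CPU frequency governor; the tools and profile named below are examples:)

    # show the current governor (often "powersave" out of the box)
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    # switch to the performance governor (linux-cpupower / kernel-tools)
    cpupower frequency-set -g performance
    # or apply a tuned profile geared toward low latency
    tuned-adm profile latency-performance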
Hi,
It doesn't look like SIGHUP causes the OSDs to trigger a config reload from
files. Is there any other way I can do that, without restarting?
I prefer having most of my config in files, but it's annoying that I need
to cause the cluster to go into HEALTH_WARN in order to reload them.
Thanks for re
On 5/29/19 11:41 AM, Johan Thomsen wrote:
> Hi,
>
> It doesn't look like SIGHUP causes the OSDs to trigger a config reload from
> files. Is there any other way I can do that, without restarting?
>
No, there isn't. I suggest you look into the new config store, which has been
in Ceph since the Mimic release.
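(A few of the relevant commands, as a sketch — the option name is just an example:)

    # import an existing ceph.conf into the centralized config store
    ceph config assimilate-conf -i /etc/ceph/ceph.conf
    # change an option for all OSDs without touching files or restarting
    ceph config set osd osd_max_backfills 2
    # inspect what is currently set
    ceph config dump
    # runtime injection also still works for file-based setups
    ceph tell osd.* injectargs '--osd_max_backfills 2'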
Hi,
is anyone running an active-passive nfs-ganesha cluster with a CephFS backend and
using the rados_kv recovery backend? My setup runs fine, but takeover is giving
me a headache. On takeover I see the following messages in Ganesha's log file:
29/05/2019 15:38:21 : epoch 5cee88c4 : cephgw-e2-1 :
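(For readers following along, a rough sketch of the ganesha.conf fragment this refers to — pool and namespace names are placeholders, and parameter names should be checked against your Ganesha version:)

    NFSv4 {
        RecoveryBackend = rados_kv;
    }
    RADOS_KV {
        # RADOS pool holding the client recovery records
        pool = "nfs-ganesha";
        namespace = "ganesha-recovery";
    }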
Thank you for a lot of detailed and useful information :)
I'm tempted to ask a related question on SSD endurance...
If 60GB is the sweet spot for each DB/WAL partition, and the SSD has
spare capacity, for example, I'd budgeted 266GB per DB/WAL.
Would it then be better to make a 60GB "sweet spot"
Hi everyone,
This is the last week to submit for the Ceph Day Netherlands CFP
ending June 3rd:
https://ceph.com/cephdays/netherlands-2019/
https://zfrmz.com/E3ouYm0NiPF1b3NLBjJk
--
Mike Perez (thingee)
On Thu, May 23, 2019 at 10:12 AM Mike Perez wrote:
>
> Hi everyone,
>
> We will be having Ce
On further thought, I'm now thinking this is telling me which rank is stopped
(2), not that two ranks are stopped. I guess I am still curious about why this
information is retained here, and whether rank 2 can be made active again. If so,
would this be cleaned up out of "stopped"?
The state diagram here:
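(On the second question: if the filesystem should again have three active ranks (0, 1 and 2), raising max_mds is the usual way, assuming a standby MDS is available; the filesystem name below is a placeholder:)

    ceph fs set cephfs max_mds 3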
Can anyone help with this? Why can't I optimize this cluster? The PG counts
and data distribution are way off.
I enabled the balancer plugin and even tried to manually invoke it, but it
won't make any changes. Looking at ceph osd df, the distribution is not even at all.
Thoughts?
root@hostadm
I had this with the balancer active and "crush-compat":
MIN/MAX VAR: 0.43/1.59 STDDEV: 10.81
By increasing the pg_num of some pools (from 8 to 64) and deleting empty
pools, I went to this:
MIN/MAX VAR: 0.59/1.28 STDDEV: 6.83
(I do not want to switch to upmap yet.)
-Original Message-
Fr
Hi Tarek,
what's the output of "ceph balancer status"?
In case you are using "upmap" mode, you must make sure to have a
min-client-compat-level of at least Luminous:
http://docs.ceph.com/docs/mimic/rados/operations/upmap/
Of course, please be aware that your clients must be recent enough (especi
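(For reference, the corresponding commands, roughly — check ceph features first so no old clients get locked out:)

    ceph features                                   # confirm no pre-luminous clients
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status
    ceph osd df                                     # watch the distribution converge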
Hi Oliver,
Thank you for the response, I did ensure that min-client-compat-level is
indeed Luminous (see below). I have no kernel-mapped RBD clients. "ceph
versions" reports Mimic. Also below is the output of ceph balancer status.
One thing to note, I did enable the balancer after I already filled
Hi Tarek,
On 29.05.19 at 18:49, Tarek Zegar wrote:
> Hi Oliver,
>
> Thank you for the response, I did ensure that min-client-compat-level is
> indeed Luminous (see below). I have no kernel-mapped RBD clients. "ceph
> versions" reports Mimic. Also below is the output of ceph balancer status. One
Hi Oliver
Here is the output of the active mgr log after I toggled the balancer off/on;
I grep'd out only "balancer" as it was far too verbose (see below). When I
look at ceph osd df I see it optimized :)
I would like to understand two things, however: why is "prepared 0/10
changes" zero if it actual
On Wed, May 29, 2019 at 9:36 AM Robert Sander
wrote:
> On 24.05.19 at 14:43, Paul Emmerich wrote:
> > * SSD model? Lots of cheap SSDs simply can't handle more than that
>
> The customer currently has 12 Micron 5100 1.92TB (Micron_5100_MTFDDAK1)
> SSDs and will get a batch of Micron 5200 in the n
On Wed, May 29, 2019 at 11:37 AM Robert Sander
wrote:
> Hi,
>
> On 29.05.19 at 11:19, Martin Verges wrote:
> >
> > We have identified the performance settings in the BIOS as a major
> > factor
> >
> > Could you share your insights on what options you changed to increase
> > performance and
Hi Tarek,
that's good news, glad my hunch was correct :-).
On 29.05.19 at 19:31, Tarek Zegar wrote:
> Hi Oliver
>
> Here is the output of the active mgr log after I toggled the balancer off/on; I
> grep'd out only "balancer" as it was far too verbose (see below). When I look
> at ceph osd df I
These OSDs are far too small at only 10GiB for the balancer to try and
do any work. It's not uncommon for metadata like OSDMaps to exceed
that size in error states and in any real deployment a single PG will
be at least that large.
There are probably parameters you can tweak to try and make it work
On Wed, 2019-05-29 at 13:49 +, Stolte, Felix wrote:
> Hi,
>
> is anyone running an active-passive nfs-ganesha cluster with cephfs backend
> and using the rados_kv recovery backend? My setup runs fine, but takeover is
> giving me a headache. On takeover I see the following messages in ganesha
Hi Wido,
When you run `radosgw-admin gc list`, I assume you are *not* using the
"--include-all" flag, right? If you're not using that flag, then
everything listed should be expired and be ready for clean-up. If after
running `radosgw-admin gc process` the same entries appear in
`radosgw-admin gc l
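(In command form, roughly:)

    radosgw-admin gc list                  # only expired entries, ready for clean-up
    radosgw-admin gc list --include-all    # also entries that have not expired yet
    radosgw-admin gc process               # trigger garbage collection now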
Good afternoon,
I’m about to expand my cluster from 380 to 480 OSDs (5 nodes with 20 disks per
node) and am trying to determine the best way to go about this task.
I deployed the cluster with ceph-ansible and everything worked well, so I'd
like to add the new nodes with ceph-ansible as well.
T
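(The preview is cut off above. One common pattern when adding this many OSDs at once, sketched with illustrative values — the ceph-ansible run itself is only indicated as a comment:)

    # throttle recovery/backfill so client I/O is not starved
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
    # optionally hold off data movement until every new OSD exists
    ceph osd set norebalance
    # ... add the new hosts to the inventory and run the ceph-ansible playbook ...
    ceph osd unset norebalance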
I want to deny deletes on one of my buckets. I tried to run "s3cmd
setpolicy" with two configs (JSON files). I do not get any error code,
and when I do a getpolicy I see the same JSON. However, I am still able to
delete objects present in the bucket. Please let me
know wh
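(For comparison, a minimal deny-delete policy of the kind RGW's bucket policy support accepts, with a placeholder bucket name, applied via s3cmd; this is only a sketch, not a diagnosis of the problem above:)

    # deny-delete.json
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:DeleteObject"],
        "Resource": ["arn:aws:s3:::mybucket/*"]
      }]
    }

    s3cmd setpolicy deny-delete.json s3://mybucket
    s3cmd info s3://mybucket               # shows the policy that is in effect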