Hi,
In trying to understand RGW pool usage, I've noticed that the
*default.rgw.meta* pool has a large number of objects in it. Suspiciously,
it has about twice as many objects as my *default.rgw.buckets.index* pool.
As I delete and add buckets, the number of objects in both pools decreases
and incre
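For reference, a minimal sketch of how one might see which namespaces those
objects live in (assuming a Luminous-era rados CLI where "ls --all" prints the
namespace in front of each object name):
# Count objects per namespace; default.rgw.meta keeps user and bucket
# metadata in separate namespaces.
rados -p default.rgw.meta ls --all | awk -F'\t' '{print $1}' | sort | uniq -c
# Raw object count of the index pool, for comparison.
rados -p default.rgw.buckets.index ls | wc -l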
Hi Gregory,
Thanks for your answer.
I had to add another step emit to your suggestion to make it work:
step take default
step chooseleaf indep 4 type host
step emit
step take default
step chooseleaf indep 4 type host
step emit
However, now the same OSD is chosen twice for every PG:
# crushtool
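For context, a hedged sketch of the kind of crushtool run that exposes the
duplicate OSDs (the file names and the rule id are placeholders):
# Compile the edited rules and show the mappings the rule produces.
crushtool -c crushmap.txt -o crushmap.bin
crushtool -i crushmap.bin --test --rule 1 --num-rep 8 \
    --show-mappings --show-bad-mappings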
Hi Nico,
What Ceph version are you running? There were changes in recovery priorities
merged into jewel 10.2.7+ and luminous which should cover exactly this case.
Regards,
Bartek
> Message written by Nico Schottelius on
> 03.02.2018 at 12:55:
>
>
> Good morning,
>
> after
On Mon, Feb 5, 2018 at 12:45 PM, Thomas Bennett wrote:
> Hi,
>
> In trying to understand RGW pool usage, I've noticed that the
> default.rgw.meta pool has a large number of objects in it. Suspiciously,
> it has about twice as many objects as my default.rgw.buckets.index pool.
>
> As I delete and add
Hi Orit,
Thanks for the reply, much appreciated.
> You cannot see the omap size using rados ls but need to use rados omap
> commands.
> You can use this script to calculate the bucket index size:
> https://github.com/mkogan1/ceph-utils/blob/master/
> scripts/get_omap_kv_size.sh
Great. I had not e
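For anyone following along, the same idea as a rough shell sketch (pool name
taken from this thread; the linked script is the more complete version):
# rados ls alone does not show omap size, so count omap keys per index
# shard object instead.
for obj in $(rados -p default.rgw.buckets.index ls); do
    keys=$(rados -p default.rgw.buckets.index listomapkeys "$obj" | wc -l)
    echo "$keys $obj"
done | sort -rn | head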
Hi,
I noticed a severe inverse correlation between IOPS and throughput.
For example:
running rados bench write with t=32 shows an average of 1426 IOPS
and a bandwidth of 5.5 MB/s,
while running it with the default (t=16) the average is 49 IOPS and the
bandwidth is 200 MB/s.
Is this expected behavior?
How do I
Good data point on not trimming when non active+clean PGs are present. So
am I reading this correctly? It grew to 32GB? Did it end up growing beyond
that, and what was the max? Also, is only ~18 PGs per OSD a reasonable amount
of PGs per OSD? I would think about quadruple that would be ideal. Is this an
ar
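As a side note, the per-OSD PG count being discussed can be read straight off
the cluster, e.g.:
# The PGS column shows how many PGs each OSD currently holds; compare it
# against the rough target of (pg_num * size) / number of OSDs.
ceph osd df tree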
Thanks a lot to everyone who shared their thoughts and experience on this topic! It seems
that Frédéric's input is exactly what I've been looking for. Thanks, Frédéric!
Jason Dillaman wrote on 02/02/18 19:24:
Concur that it's technically feasible by restricting access to
"rbd_id.", "rbd_header..",
"rbd_object_map.
Hi all,
In the release notes of 12.2.2 the following is stated:
> Standby ceph-mgr daemons now redirect requests to the active
> messenger, easing configuration for tools & users accessing the web
> dashboard, restful API, or other ceph-mgr module services.
However, it doesn't seem to be the cas
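For what it's worth, a quick way to check whether a standby redirects or not
(host names are placeholders; 8003 assumes the restful module's default port):
# -I fetches only the headers; a redirecting standby should answer with a
# 3xx status and a Location header pointing at the active mgr.
curl -k -I https://mgr-standby.example.com:8003/
curl -k -I https://mgr-active.example.com:8003/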
On Mon, Feb 5, 2018 at 5:06 PM, Hans van den Bogert
wrote:
> Hi all,
>
> In the release notes of 12.2.2 the following is stated:
>
> > Standby ceph-mgr daemons now redirect requests to the active
> > messenger, easing configuration for tools & users accessing the web
> > dashboard, restful API, or
The tests are pretty clearly using different op sizes there. I believe the
default is 16*4MB, but the first one is using 32*4KB. So obviously the
curves are very different!
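To make the two runs comparable, the op size can be pinned explicitly, e.g.
(the pool name is a placeholder):
# Same 4 KB op size for both runs, so only the concurrency differs.
rados bench -p testpool 60 write -t 16 -b 4096
rados bench -p testpool 60 write -t 32 -b 4096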
On Mon, Feb 5, 2018 at 6:47 AM Steven Vacaroaia wrote:
> Hi,
>
> I noticed a severe inverse correlation between IOPS and thr
On Mon, Feb 5, 2018 at 3:23 AM Caspar Smit wrote:
> Hi Gregory,
>
> Thanks for your answer.
>
> I had to add another step emit to your suggestion to make it work:
>
> step take default
> step chooseleaf indep 4 type host
> step emit
> step take default
> step chooseleaf indep 4 type host
> step e
On Mon, 5 Feb 2018, Gregory Farnum wrote:
> On Mon, Feb 5, 2018 at 3:23 AM Caspar Smit wrote:
>
> > Hi Gregory,
> >
> > Thanks for your answer.
> >
> > I had to add another step emit to your suggestion to make it work:
> >
> > step take default
> > step chooseleaf indep 4 type host
> > step emit
Hi Patrick,
Thanks for the info. Looking at the fuse options in the man page, I should
be able to pass "-o uid=$(id -u)" at the end of the ceph-fuse command.
However, when I do, it returns an "unknown option" error from fuse and
then segfaults. Any pointers would be greatly appreciated. This is the result
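For clarity, roughly the invocation being described (the monitor address,
client name and mount point are placeholders):
# Mount CephFS via FUSE, asking fuse to present files as the current user.
ceph-fuse -n client.admin -m mon1.example.com:6789 /mnt/cephfs \
    -o uid=$(id -u),gid=$(id -g)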
Dear Nick & Wido,
Many thanks for your helpful advice; our cluster has returned to HEALTH_OK.
One caveat is that a small number of pgs remained at "activating".
By increasing mon_max_pg_per_osd from 500 to 1000, these few pgs
activated, allowing the cluster to rebalance fully.
i.e. this was n
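For anyone hitting the same limit, a sketch of raising it at runtime (the
value matches the message above; depending on the release the option is
consulted by the mons and/or the OSDs, so set it on both to be safe):
ceph tell mon.* injectargs '--mon_max_pg_per_osd 1000'
ceph tell osd.* injectargs '--mon_max_pg_per_osd 1000'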
Hi Jakub,
On 05/02/2018 at 12:26, Jakub Jaszewski wrote:
Hi Frederic,
Many thanks for your contribution to the topic!
I've just set logging level 20 for filestore via
ceph tell osd.0 config set debug_filestore 20
but so far I have found nothing matching the keyword 'split'
in /var/log/ceph/ceph-osd.0.
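As a follow-on, a small sketch for checking the relevant thresholds and
grepping the log (osd.0 follows the message above; the option names assume a
FileStore-based OSD and the log path assumes the packaged default):
# Split/merge thresholds as seen by the running OSD, via its admin socket.
ceph daemon osd.0 config show | grep -E 'filestore_(split_multiple|merge_threshold)'
# With debug_filestore raised, split activity should show up in the log.
grep -i split /var/log/ceph/ceph-osd.0.log | tail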
Hello,
I'm not a "storage guy", so please excuse me if I'm missing or
overlooking something obvious.
My question is essentially "what kind of performance am I to expect
with this setup?". We have bought servers, disks and networking for our
future ceph cluster and are now in our "testing phase" an
On 02/05/2018 04:54 PM, Wes Dillingham wrote:
Good data point on not trimming when non active+clean PGs are present.
So am I reading this correctly? It grew to 32GB? Did it end up growing
beyond that, and what was the max? Also, is only ~18 PGs per OSD a reasonable
amount of PGs per OSD? I would think
Hi ceph list,
we have a hyperconverged ceph cluster with kvm on 8 nodes, running ceph
hammer 0.94.10. The cluster is now 3 years old and we are planning a new
cluster for a high-iops project. We use replicated pools 3/2 and do not
have the best latency on our switch backend.
ping -s 8192 10.10.10.40
8200
Hi All,
I might really be bad at searching, but I can't seem to find the ceph
health status through the new(ish) restful api. Is that right? I know
how I could retrieve it through a Python script; however, I'm trying to
keep our monitoring application as layer-cake-free as possible -- as
such a res
I'm trying to set up radosgw on a brand new cluster, but I'm running into an
issue where it's not listening on the default port (7480).
Here's my install script:
ceph-deploy new $NODE
ceph-deploy install --release luminous $NODE
ceph-deploy install --release luminous --rgw $NODE
ceph-d
Hi,
First of all, just in case: it looks like your script does not deploy any OSDs,
as you go straight from MON to RGW.
Then, RGW does listen by default on 7480, and what you see on 6789 is the MON
listening.
Investigation:
- Make sure your ceph-radosgw process is running first.
- If not running,
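Expanding on the investigation list above, a minimal sketch of those checks
(the unit name assumes a standard package install; $NODE as in the install script):
# Is the gateway process up at all?
systemctl status ceph-radosgw@rgw.$NODE
# Is anything listening on the default civetweb port?
ss -tlnp | grep 7480
# If it is not running, the RGW log under /var/log/ceph/ usually says why
# (the exact file name depends on the rgw client id).
ls /var/log/ceph/ | grep -i rgw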
Hello,
> I'm not a "storage guy", so please excuse me if I'm missing or
> overlooking something obvious.
>
> My question is essentially "what kind of performance am I to expect
> with this setup?". We have bought servers, disks and networking for our
> future ceph cluster and are now in our "testi
Thanks, JC,
You’re right I didn’t deploy any OSDs at that point. I didn’t think that would
be a problem since the last `ceph-deploy` command completed without error and
its log ended with:
The Ceph Object Gateway (RGW) is now running on host storage-test01 and default
port 7480
Maybe that’s a
Hi
see inline
JC
> On Feb 5, 2018, at 18:14, Piers Haken wrote:
>
> Thanks, JC,
>
> You’re right I didn’t deploy any OSDs at that point. I didn’t think that
> would be a problem since the last `ceph-deploy` command completed without
> error and its log ended with:
>
> The Ceph Object Gat
Hello,
On Mon, 5 Feb 2018 22:04:00 +0100 Tobias Kropf wrote:
> Hi ceph list,
>
> we have a hyperconverged ceph cluster with kvm on 8 nodes, running ceph
> hammer 0.94.10.
Do I smell Proxmox?
> The cluster is now 3 years old and we are planning a new
> cluster for a high-iops project. We use replicat
Hi,
We have a 5-node Ceph cluster. Four of them are OSD servers. One is the
monitor, manager and RGW. At first, we used the default logrotate setting, so
all ceph processes were restarted every day, but the RGW and manager went
down basically every week. To prevent this, we set logrotate to run monthly.
An
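For context, the stock rotation is meant to make daemons reopen their logs
with SIGHUP rather than restart them; a hedged sketch of that idea (check the
packaged /etc/logrotate.d/ceph on your install for the real postrotate):
# Ceph daemons reopen their log files on SIGHUP, so rotation does not
# require a restart.
pkill -1 -x radosgw
pkill -1 -x ceph-mgr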
/offtopic
When and where did you get those?
I wonder if they're available again; I had zero luck getting any last year.
I have been seeing the P3700 in Russia since December 2017 with real
quantities in stock, not just a "price but out of stock" listing.
https://market.yandex.ru/catalog/55316/list?text=intel%20p3
On Tue, Jan 30, 2018 at 10:32:04AM +0100, Ingo Reimann wrote:
> What could be the problem,and how may I solve that?
For anybody else tracking this, the logs & debugging info are filed at
http://tracker.ceph.com/issues/22928
--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
Hello,
We are seeing slow requests while the recovery process is going on.
I am trying to slow down the recovery process. I set osd_recovery_max_active
and osd_recovery_sleep as below:
--
ceph tell osd.* injectargs '--osd_recovery_max_active 1'
ceph tell osd.* injectargs '--osd_recovery_sleep
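For completeness, a typical set of throttling knobs along those lines (the
values are illustrative, not recommendations):
# Fewer concurrent recovery ops, a pause between them, and fewer
# concurrent backfills per OSD.
ceph tell osd.* injectargs '--osd_recovery_max_active 1'
ceph tell osd.* injectargs '--osd_recovery_sleep 0.1'
ceph tell osd.* injectargs '--osd_max_backfills 1'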
On Tue, 6 Feb 2018 13:01:12 +0530 Karun Josy wrote:
> Hello,
>
> We are seeing slow requests while the recovery process is going on.
>
> I am trying to slow down the recovery process. I set osd_recovery_max_active
> and osd_recovery_sleep as below:
> --
> ceph tell osd.* injectargs '--osd_re
Hi Christian,
Thank you for your help.
Ceph version is 12.2.2. So is this value bad? Do you have any suggestions?
So to reduce the max chunk, I assume I can choose something like
7 << 20, i.e. 7340032?
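A sketch of how that value would be applied, assuming osd_recovery_max_chunk
(default 8 << 20 bytes) is the option in question:
# Cap the amount of data moved per recovery op at 7 MiB.
ceph tell osd.* injectargs '--osd_recovery_max_chunk 7340032'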
Karun Josy
On Tue, Feb 6, 2018 at 1:15 PM, Christian Balzer wrote:
> On Tue, 6 Feb 2018