All,
I am working on getting RADOSGW to work with LDAP and everything appears to be
configured, but I suspect that there are certain attributes that need to
exist for the user to work.
If I create a user using "radosgw-admin user create", I am able to use that
access/secret key successfully, bu
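(For reference, a minimal sketch of the Jewel-era LDAP settings for RGW; the server, DNs and user names below are placeholders, not taken from this message:)
# ceph.conf, [client.rgw.<name>] section -- hypothetical values
rgw_s3_auth_use_ldap = true
rgw_ldap_uri = ldaps://ldap.example.com:636
rgw_ldap_binddn = "uid=rgw,cn=users,dc=example,dc=com"
rgw_ldap_secret = /etc/ceph/ldap_bind_password
rgw_ldap_searchdn = "cn=users,dc=example,dc=com"
rgw_ldap_dnattr = uid
# the LDAP user then presents a token built from its LDAP credentials:
export RGW_ACCESS_KEY_ID=ldapuser
export RGW_SECRET_ACCESS_KEY=ldappassword
radosgw-token --encode --ttype=ldap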
>>Does "rbd op threads = N" solve bottleneck? IMHO it is possible to make this
>>value automated by QEMU from num-queues. If now not.
http://tracker.ceph.com/issues/15034
https://github.com/ceph/ceph/pull/8459
it's forced to 1 for now
- Original Message -
From: "Dzianis Kahanovich"
To: "ader
Hi,
My Ceph cluster includes 5 OSDs. 3 OSDs are installed on the host 'strony-tc'
and 2 on the host 'strony-pc'. Recently, both hosts were rebooted due to
power cycles. After all of the disks were mounted again, the ceph-osd daemons are in the
'down' status. I tried the cmd "sudo start ceph-osd id=x" to
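(Generic sketch for this situation, with an example OSD id; the exact start command depends on whether the host uses Upstart or systemd:)
ceph osd tree                      # confirms which OSD ids are down and on which host
sudo start ceph-osd id=2           # Upstart (e.g. Ubuntu 14.04)
sudo systemctl start ceph-osd@2    # systemd (e.g. CentOS 7, Ubuntu 16.04)
tail -n 50 /var/log/ceph/ceph-osd.2.log   # if the daemon goes down again right away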
Hi All...
Just dropping a small email to share our experience of how to recover a
PG from a CephFS metadata pool.
I am sharing this because the general
understanding of how to recover a PG (see [1]) relies on identifying
incorrect objects by comparing checksum
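(For context, the usual checksum-comparison approach referred to above looks roughly like the sketch below; the PG id, OSD ids and paths are hypothetical, and the OSDs must be stopped first:)
ceph pg map 1.2f                     # example PG id; lists the acting OSDs
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 --pgid 1.2f --op list
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 --pgid 1.2f '<object>' get-bytes /tmp/obj.osd3
md5sum /tmp/obj.osd*                 # compare the copies exported from each replica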
Hello,
On Mon, 12 Sep 2016 19:28:50 -0500 shiva rkreddy wrote:
> By saying "old clients" did you mean, (a) Client VMs running old Operating
> System (b) Client VMs/Volumes that are in-use for a long time and across
> ceph releases ? Was there any tuning done to fix it?
>
I'm pretty sure he mea
By saying "old clients" did you mean, (a) Client VMs running old Operating
System (b) Client VMs/Volumes that are in-use for a long time and across
ceph releases ? Was there any tuning done to fix it?
Thanks,
On Mon, Sep 12, 2016 at 3:05 PM, Wido den Hollander wrote:
>
> > On 12 September 201
On Mon, Sep 12, 2016 at 11:00 AM, Ilya Moldovan wrote:
> Thanks, John
>
> But why does listing files in a directory with about a million files take
> about 30 minutes?
Unless you've enabled experimental features, you're trying to read in
a single 1-million-inode directory object — and it sounds like
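(An assumption on my part about the experimental feature meant here: in Jewel, directory fragmentation had to be switched on explicitly, roughly as below; verify the exact flag names against the Jewel docs before using them.)
# ceph.conf, [mds] section -- experimental in Jewel
mds bal frag = true
# plus the filesystem flag (my recollection of the Jewel syntax; it may also ask for --yes-i-really-mean-it):
ceph fs set <fs_name> allow_dirfrags true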
> On 12 September 2016 at 16:14, Василий Ангапов wrote:
>
>
> Hello, colleagues!
>
> I have a Ceph Jewel cluster of 10 nodes (CentOS 7, kernel 4.7.0), 290
> OSDs total with journals on SSDs. The network is 2x10Gb public and 2x10Gb
> cluster.
> I constantly see periodic slow requests being follow
> On 12 September 2016 at 18:47, "WRIGHT, JON R (JON R)"
> wrote:
>
>
> Since upgrading to Jewel from Hammer, we've started to see HEALTH_WARN
> because of 'blocked requests > 32 sec'. It seems to be related to writes.
>
> Has anyone else seen this? Or can anyone suggest what the problem mi
Hi Tom, a few things you can check into. Some of these depend on how many
OSDs you're trying to run on a single chassis.
# raise the PID limit, otherwise you may run out of the ability to spawn new threads
kernel.pid_max=4194303
# increase the memory kept free for sudden bursts, like during benchmarking
vm.min_free_kbytes
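(A sketch of making those sysctls persistent; the min_free_kbytes value below is an example and should be sized to the host's RAM, it is not from the original message:)
# /etc/sysctl.d/90-ceph-tuning.conf
kernel.pid_max = 4194303
vm.min_free_kbytes = 524288    # example value only
# apply without rebooting
sudo sysctl -p /etc/sysctl.d/90-ceph-tuning.conf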
If somebody hits this issue, it can be resolved by creating a subuser as follows:
radosgw-admin subuser create --uid=s3User --subuser="s3User:swiftUser"
--access=full
Thanks & Regards,
Naga Venkata
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of B,
Naga Venkata
Sent: Friday,
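(In case it helps, a Swift secret usually also has to be generated for that subuser; a sketch reusing the same user names:)
radosgw-admin key create --subuser=s3User:swiftUser --key-type=swift --gen-secret
radosgw-admin user info --uid=s3User    # the swift_keys section should now list the new secret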
Are you running 'ls' or are you doing something like: 'getfattr -d -m
ceph.dir.* /path/to/your/ceph/mount'?
—Lincoln
> On Sep 12, 2016, at 1:00 PM, Ilya Moldovan wrote:
>
> Thanks, John
>
> But why does listing files in a directory with about a million files take
> about 30 minutes?
>
> Ilya Mo
Are the parameters you configured for Keystone in ceph.conf correct? Can you
provide your radosgw configuration from ceph.conf?
Please also include radosgw.log from after the radosgw service restart and during a swift list.
Thanks & Regards,
Naga Venkata
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
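(For comparison, a minimal Jewel-era Keystone block for radosgw tends to look roughly like this; the URL, token and roles are placeholders, not values from this thread:)
[client.rgw.<name>]
rgw_keystone_url = http://keystone.example.com:35357
rgw_keystone_admin_token = <admin-token>
rgw_keystone_accepted_roles = admin, Member, _member_
rgw_keystone_token_cache_size = 500
rgw_s3_auth_use_keystone = true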
Thanks, John
But why does listing files in a directory with about a million files take
about 30 minutes?
Ilya Moldovan
2016-09-08 20:59 GMT+03:00, Ilya Moldovan :
> Hello!
>
> How does CephFS calculate the directory size? As far as I know there are two
> implementations:
>
> 1. Recursive directory traversal lik
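(Side note, offered as a generic illustration: the recursive statistics CephFS maintains can be read from a directory's virtual xattrs; the mount path below is hypothetical.)
getfattr -d -m 'ceph.dir.*' /mnt/cephfs/some/dir
# typically reports ceph.dir.entries, ceph.dir.files, ceph.dir.subdirs,
# ceph.dir.rbytes, ceph.dir.rfiles, ceph.dir.rsubdirs and ceph.dir.rentries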
Since upgrading to Jewel from Hammer, we've started to see HEALTH_WARN
because of 'blocked requests > 32 sec'. It seems to be related to writes.
Has anyone else seen this? Or can anyone suggest what the problem might be?
Thanks!
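(A generic way to narrow such reports down, not taken from the thread: ask the cluster which OSDs are involved, then dump their in-flight ops on the host running that OSD; 12 is an example id.)
ceph health detail                      # lists the OSDs with blocked requests
ceph daemon osd.12 dump_ops_in_flight   # run on the node hosting that OSD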
Trying to understand why some OSDs (6 out of 21) went down in my cluster while
running a CBT radosbench benchmark. From the logs below, is this a networking
problem between systems, or is it some kind of FileStore problem?
Looking at one crashed OSD log, I see the following crash error:
2016-0
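(One quick way to tell those two apart, as a generic suggestion with the usual default log path and an example OSD id: grep the crashed OSD's log for heartbeat failures versus FileStore/assert messages.)
grep -i 'heartbeat_check: no reply' /var/log/ceph/ceph-osd.6.log        # suggests a network problem
grep -iE 'FileStore|FAILED assert|abort' /var/log/ceph/ceph-osd.6.log   # suggests a local store problem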
Hey Alexey,
sorry - it seems that the log file does not contain the debug messages
which I got at the command line.
Here it is:
- http://slexy.org/view/s20A6m2Tfr
Mehmet
On 2016-09-12 15:48, Alexey Sheplyakov wrote:
Hi,
This is the actual logfile for osd.10
> - http://slexy.org/view/s21l
Hello Alexey,
this time I did not get any error with the command you gave:
ceph-osd -d --flush-journal --debug_filestore 20/20 --debug_journal
20/20 -i 10
for osd.10
- http://slexy.org/view/s21dWEKymn
*but* I tried another OSD (12) and indeed got an error :*(
for osd.12
- http://slexy.or
Hello, colleagues!
I have a Ceph Jewel cluster of 10 nodes (CentOS 7, kernel 4.7.0), 290
OSDs total with journals on SSDs. The network is 2x10Gb public and 2x10Gb
cluster.
I constantly see periodic slow requests followed by a "wrongly
marked me down" record in ceph.log, like this:
root@ed-ds-c171
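(Generic debugging sketch for this pattern, not from the original post: dump the recent slow ops on an affected OSD to see where they spent their time, and check the heartbeat settings; 123 is an example OSD id.)
ceph daemon osd.123 dump_historic_ops      # recently completed slow ops with per-stage timings
ceph daemon osd.123 config show | grep -E 'osd_heartbeat_grace|osd_op_thread'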
Hi,
> This is the actual logfile for osd.10
> - http://slexy.org/view/s21lhpkLGQ
Unfortunately this log does not contain any new data -- for some reason the
log levels haven't changed (see line 36369).
Could you please try the following command:
ceph-osd -d --flush-journal --debug_filestore 20/2
After adding more OSDs and with a big backfill running, 2 of my OSDs
keep stopping.
We also recently upgraded from 0.94.7 to 0.94.9, but I do not know if
that is related.
The log says:
0> 2016-09-12 10:31:08.288858 7f8749125880 -1 osd/PGLog.cc: In
function 'static void PGLog::read_
Hello,
Here is a summary of the troubleshooting story for my radosgw.
After some manipulations of the zone definition, we got stuck in a situation
where we cannot update zones and zonegroups anymore.
This situation has affected bucket manipulation too:
> radosgw-admin bucket list
> 2016-09-06 09:
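(For anyone hitting something similar, the usual Jewel multisite sanity checks are along these lines; treat them as a generic sketch with placeholder zone names rather than the fix used here:)
radosgw-admin period get                 # shows the current period and its epoch
radosgw-admin zonegroup get --rgw-zonegroup=default
radosgw-admin zone get --rgw-zone=default
radosgw-admin period update --commit     # commit pending zone/zonegroup changes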