Hi,
I'd like to know if any of you have some kind of formula for setting
the right number of shards for a bucket.
We currently have a bucket with 30M objects and expect it to grow
to 50M.
At the moment we have 64 shards configured, but I was told that
this is far too few.
Any hints /
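A minimal sizing sketch, assuming the commonly cited guideline of roughly
100,000 objects per bucket index shard (that guideline is an assumption
here, not something this thread confirms):

    # Rough sketch: estimate bucket index shards from the expected
    # object count. OBJECTS_PER_SHARD is an assumed guideline; check
    # the documentation for your Ceph release before applying.
    import math

    OBJECTS_PER_SHARD = 100000  # assumed guideline, not from this thread

    def suggested_shards(expected_objects):
        # Round up so no shard exceeds the per-shard target.
        return max(1, int(math.ceil(float(expected_objects) / OBJECTS_PER_SHARD)))

    print(suggested_shards(50000000))  # -> 500 for a 50M-object bucket

With those numbers, a bucket headed for 50M objects would want on the
order of 500 shards rather than 64.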
Hi,
I need some help to fix a broken cluster. I think we broke the cluster,
but I want to know your opinion and if you see a possibility to recover it.
Let me explain what happened.
We have a cluster (Version 0.94.9) in two datacenters (A and B). In each
datacenter there are 12 nodes with 60 OSDs each. In A we have 3 mon
On Fri, Oct 14, 2016 at 7:27 AM, Manuel Lausch wrote:
> Hi,
>
> I need some help to fix a broken cluster. I think we broke the cluster, but
> I want to know your opinion and if you see a possibility to recover it.
>
> Let me explain what happened.
>
> We have a cluster (Version 0.94.9) in two datac
Hi all,
after encountering a warning about one of my OSDs running out of space, I tried
to get a better understanding of how data distribution works.
I'm running a Hammer Ceph cluster, v0.94.7.
I did some tests with crushtool, trying to figure out how to achieve even data
distribution across OSDs.
Let's take
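A minimal sketch of such a crushtool test, assuming the CRUSH map was
exported beforehand with "ceph osd getcrushmap -o crushmap"; the replica
count and the line filtering are placeholders:

    # Sketch: run crushtool's placement simulation on an exported CRUSH
    # map and print the per-device utilization lines it reports.
    import subprocess

    out = subprocess.check_output(
        ["crushtool", "-i", "crushmap", "--test",
         "--num-rep", "3", "--show-utilization"],
        stderr=subprocess.STDOUT)  # output stream varies by release
    for line in out.decode().splitlines():
        # Lines mentioning individual devices show how many simulated
        # placements each OSD received; a wide spread means imbalance.
        if "device" in line:
            print(line)

Comparing the per-device counts against each OSD's weight is one way to
quantify the imbalance before touching the live map.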
Unfortunately, it was all in the unlink operation. It looks as if it took nearly
20 hours to remove the dir; the round trip is a killer there. What can be done to
reduce RTT to the MDS? Does the client really have to sequentially delete
directories, or can it have internal batching or parallelization?
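A minimal sketch of what client-side parallelization could look like over
a mounted CephFS path; the mount point and worker count are placeholders,
and note the caveat about parallel deletes on 10.2.3 raised later in this
thread:

    # Sketch: unlink files in parallel to overlap MDS round trips, then
    # remove directories bottom-up once they are empty.
    import os
    from concurrent.futures import ThreadPoolExecutor

    def parallel_rmtree(root, workers=16):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for dirpath, dirnames, filenames in os.walk(root, topdown=False):
                # Files within one directory can be unlinked concurrently.
                list(pool.map(os.unlink,
                              [os.path.join(dirpath, f) for f in filenames]))
                # Subdirectories were already emptied by the bottom-up walk.
                for d in dirnames:
                    os.rmdir(os.path.join(dirpath, d))
        os.rmdir(root)

    # parallel_rmtree("/mnt/cephfs/some/dir")  # hypothetical mount point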
Hi All,
Recently upgraded from Kilo->Mitaka on my OpenStack deployment, and now the
radosgw nodes (Jewel) are unable to validate Keystone tokens.
Initially I thought it was because radosgw relies on admin_token
(which is a bad idea, but ...) and that's now deprecated. I
verified the token was still
On Fri, Oct 14, 2016 at 11:41 AM, Heller, Chris wrote:
> Unfortunately, it was all in the unlink operation. It looks as if it took nearly
> 20 hours to remove the dir; the round trip is a killer there. What can be done to
> reduce RTT to the MDS? Does the client really have to sequentially delete
> dir
Ok. Since I’m running through the Hadoop/Ceph API, there is no syscall boundary,
so there is a simple place to improve the throughput here. Good to know, I’ll
work on a patch…
On 10/14/16, 3:58 PM, "Gregory Farnum" wrote:
On Fri, Oct 14, 2016 at 11:41 AM, Heller, Chris wrote:
> Unfortu
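For reference, a minimal sketch of driving deletes through the userspace
libcephfs layer (roughly where the Hadoop bindings sit), assuming the
Jewel-era python-cephfs bindings; the paths are hypothetical and the exact
API should be checked against the installed version:

    # Sketch: unlink entries through libcephfs directly, with no kernel
    # syscall per operation. API names assumed from the python-cephfs
    # bindings; verify against your installed version.
    import cephfs

    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()
    fs.unlink('/some/file')  # hypothetical paths
    fs.rmdir('/some/dir')
    fs.unmount()
    fs.shutdown()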
On Fri, Oct 14, 2016 at 1:11 PM, Heller, Chris wrote:
> Ok. Since I’m running through the Hadoop/Ceph API, there is no syscall
> boundary, so there is a simple place to improve the throughput here. Good to
> know, I’ll work on a patch…
Ah yeah, if you're in whatever they call the recursive tree
On Thu, Oct 13, 2016 at 5:19 PM, Stillwell, Bryan J wrote:
> On 10/13/16, 2:32 PM, "Alfredo Deza" wrote:
>
>> On Thu, Oct 13, 2016 at 11:33 AM, Stillwell, Bryan J wrote:
>>> I have a basement cluster that is partially built with Odroid-C2 boards and
>>> when I attempted to upgrade to the 10.
On 10/14/16, 2:29 PM, "Alfredo Deza" wrote:
> On Thu, Oct 13, 2016 at 5:19 PM, Stillwell, Bryan J wrote:
>> On 10/13/16, 2:32 PM, "Alfredo Deza" wrote:
>>
>>> On Thu, Oct 13, 2016 at 11:33 AM, Stillwell, Bryan J wrote:
>>>> I have a basement cluster that is partially built with Odroid-C2
Just a thought, but since a directory tree is a first-class item in CephFS,
could the wire protocol be extended with a “recursive delete” operation,
specifically for cases like this?
On 10/14/16, 4:16 PM, "Gregory Farnum" wrote:
On Fri, Oct 14, 2016 at 1:11 PM, Heller, Chris wrote:
>
If you are running 10.2.3 on your cluster, then I would strongly recommend
NOT deleting files in parallel, as you might hit
http://tracker.ceph.com/issues/17177
-Mykola
From: Heller, Chris
Sent: Saturday, 15 October 2016 03:36
To: Gregory Farnum
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-
On Fri, Oct 14, 2016 at 6:26 PM, wrote:
> If you are running 10.2.3 on your cluster, then I would strongly recommend
> NOT deleting files in parallel, as you might hit
> http://tracker.ceph.com/issues/17177
I don't think these have anything to do with each other. What gave you
the idea simultane
I was doing parallel deletes until there were >1M objects in the
stray directory. Then deletes fail with a ‘no space left’ error. If one
deep-scrubs the PGs containing the corresponding metadata, they turn out to be
inconsistent. In the worst case one would get virtually empty folders that have size
On Fri, Oct 14, 2016 at 7:45 PM, wrote:
> I was doing parallel deletes until there were >1M objects in
> the stray directory. Then deletes fail with a ‘no space left’ error. If one
> deep-scrubs the PGs containing the corresponding metadata, they turn out to be
> inconsistent. In the worst case one w
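As an aside, a minimal sketch for watching the stray count from the MDS
admin socket while deleting; "mds.a" is a placeholder, and the counter
location is an assumption based on the usual perf-dump layout:

    # Sketch: read the MDS stray-entry count from its admin socket.
    # "mds.a" is a placeholder; "mds_cache"/"num_strays" is the usual
    # perf-dump location, but verify on your release.
    import json
    import subprocess

    out = subprocess.check_output(["ceph", "daemon", "mds.a", "perf", "dump"])
    perf = json.loads(out.decode())
    print(perf["mds_cache"]["num_strays"])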