> On 6 February 2016 at 0:08, Tyler Bishop wrote:
>
>
> I have ceph pulling down from eu. What *origin* should I set up rsync to
> automatically pull from?
>
> download.ceph.com is consistently broken.
>
download.ceph.com should be your best guess, since that is closest.
The US however s
Hi,
Great! So that would be se.ceph.com?
There is a ceph-mirrors list for mirror admins, so let me know when you are
ready to set up so I can add you there.
Wido
> On 6 February 2016 at 8:22, Josef Johansson wrote:
>
>
> Hi Wido,
>
> We're planning on hosting here in Sweden.
>
> I can let
Hi Wido,
We're planning on hosting here in Sweden.
I can let you know when we're ready.
Regards
Josef
On Sat, 30 Jan 2016 15:15 Wido den Hollander wrote:
> Hi,
>
> My PR was merged with a script to mirror Ceph properly:
> https://github.com/ceph/ceph/tree/master/mirroring
>
> Currently ther
Hi,
On 06.02.2016 at 07:15, Yan, Zheng wrote:
>> On Feb 6, 2016, at 13:41, Michael Metz-Martini | SpeedPartner GmbH
>> wrote:
>> On 04.02.2016 at 15:38, Yan, Zheng wrote:
On Feb 4, 2016, at 17:00, Michael Metz-Martini | SpeedPartner GmbH
wrote:
On 04.02.2016 at 09:43, Y
> On Feb 6, 2016, at 13:41, Michael Metz-Martini | SpeedPartner GmbH
> wrote:
>
> Hi,
>
> sorry for the delay - production system, unfortunately ;-(
>
> On 04.02.2016 at 15:38, Yan, Zheng wrote:
>>> On Feb 4, 2016, at 17:00, Michael Metz-Martini | SpeedPartner GmbH
>>> wrote:
>>> On 04.02
Hi,
sorry for the delay - production system, unfortunately ;-(
On 04.02.2016 at 15:38, Yan, Zheng wrote:
>> On Feb 4, 2016, at 17:00, Michael Metz-Martini | SpeedPartner GmbH
>> wrote:
>> On 04.02.2016 at 09:43, Yan, Zheng wrote:
>>> On Thu, Feb 4, 2016 at 4:36 PM, Michael Metz-Martini | Spe
I believe this is referring to combining the previously separate queues
into a single queue (PrioritizedQueue, soon to be replaced by
WeightedPriorityQueue) in Ceph. That way client IO and recovery IO can be
better prioritized in the Ceph code. This all happens before the disk queue.
Robert LeBlanc
Sent from a
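For anyone looking for the knobs attached to that unified queue: the relative
weighting of client vs. recovery traffic still comes from the existing
priority options, roughly as in the ceph.conf sketch below (the option names
are real; the values are only illustrative, so check the defaults for your
release):

    [osd]
    # client ops get a much higher priority than background recovery
    osd client op priority = 63
    osd recovery op priority = 10
    # and limit how much recovery/backfill runs concurrently per OSD
    osd max backfills = 1
    osd recovery max active = 3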
On Fri, Feb 05, 2016 at 12:43:52PM -0700, Austin Johnson wrote:
> All,
>
> I'm running a small infernalis cluster.
>
> I think I've either a) found a bug, or b) need to be retrained on how to
> use a keyboard. ;)
>
> For some reason I cannot get the radosgw daemons (using upstart) to accept a
> co
Cullen,
We operate a cluster with 4 nodes, each with 2x E5-2630, 64 GB RAM and 10x 4 TB
spinners. We recently replaced the 2x M550 journals with a single P3700 NVMe
drive per server and didn't see the performance gains we were hoping for.
After making the changes below we're now seeing significantly better
I saw the following in the release notes for Infernalis, and I'm wondering
where I can find more information about it?
* There is now a unified queue (and thus prioritization) of client IO,
recovery, scrubbing, and snapshot trimming.
I've tried checking the docs for more details, but didn't have
I have ceph pulling down from eu. What *origin* should I set up rsync to
automatically pull from?
download.ceph.com is consistently broken.
- Original Message -
From: "Tyler Bishop"
To: "Wido den Hollander"
Cc: "ceph-users"
Sent: Friday, February 5, 2016 5:59:20 PM
Subject: Re: [ceph
Not sure how many folks use the CFQ scheduler to take advantage of Ceph IO
priorities, but there's a CFQ change that probably needs to be evaluated for
Ceph purposes.
http://lkml.iu.edu/hypermail/linux/kernel/1602.0/00820.html
This might be a better question for the dev list.
Warren Wang
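For context: the Ceph-side options that only take effect when the disks sit
behind the CFQ scheduler are, to the best of my knowledge, the disk-thread
ioprio settings used to deprioritize scrub and similar background work, e.g.:

    [osd]
    # only honoured when the underlying block device uses the CFQ scheduler
    osd disk thread ioprio class = idle
    osd disk thread ioprio priority = 7

A behavior change in CFQ is worth re-testing against those.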
We would be happy to mirror the project.
http://mirror.beyondhosting.net
- Original Message -
From: "Wido den Hollander"
To: "ceph-users"
Sent: Saturday, January 30, 2016 9:14:59 AM
Subject: [ceph-users] Ceph mirrors wanted!
Hi,
My PR was merged with a script to mirror Ceph properly:
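For anyone who wants a quick manual pull in the meantime, something along
these lines should work, assuming the upstream you picked (download.ceph.com
or a regional mirror such as eu.ceph.com) exposes the usual 'ceph' rsync
module and that /var/www/html/ceph is your local web root (both are
assumptions, adjust to taste):

    # one-off sync; put the same line in cron to keep the mirror fresh
    rsync -avrt --delete eu.ceph.com::ceph /var/www/html/ceph/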
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Sage Weil
> Sent: 05 February 2016 18:45
> To: Samuel Just
> Cc: Jason Dillaman ; Nick Fisk ;
> ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org
> Subject: Re: cls_r
All,
I'm running a small infernalis cluster.
I think I've either a) found a bug, or b) need to be retrained on how to
use a keyboard. ;)
For some reason I cannot get the radosgw daemons (using upstart) to accept a
config change through the "ceph-deploy config push" method.
If I start radosgw thro
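A minimal sketch of the push-and-restart sequence being discussed, where
rgw-host is a hypothetical node name; note that ceph-deploy only copies the
file, so the daemon still has to be restarted before it re-reads it:

    # push the updated ceph.conf from the admin node, overwriting the gateway's copy
    ceph-deploy --overwrite-conf config push rgw-host
    # then restart the radosgw daemon on rgw-host so it re-reads ceph.conf;
    # the exact upstart job name depends on how the gateway was deployed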
On Fri, 5 Feb 2016, Samuel Just wrote:
> On Fri, Feb 5, 2016 at 7:53 AM, Jason Dillaman wrote:
> > #1 and #2 are awkward for existing pools since we would need a tool to
> > inject dummy omap values within existing images. Can the cache tier
> > force-promote it from the EC pool to the cache wh
It seems like the cache tier should force promote when it gets an op
the backing pool doesn't support. I think using the cache-pin
mechanism would make sense.
-Sam
On Fri, Feb 5, 2016 at 7:53 AM, Jason Dillaman wrote:
> #1 and #2 are awkward for existing pools since we would need a tool to injec
On Fri, Feb 5, 2016 at 3:48 PM, Kenneth Waegeman
wrote:
> Hi,
>
> In my attempt to retry, I ran 'ceph mds newfs' because removing the fs was
> not working (because the mdss could not be stopped).
> With the new fs, I could again start syncing. After 10-15min it all crashed
> again. The log now sho
#1 and #2 are awkward for existing pools since we would need a tool to inject
dummy omap values within existing images. Can the cache tier force-promote it
from the EC pool to the cache when an unsupported op is encountered? There is
logic like that in jewel/master for handling the proxied wri
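For readers following along, the configuration in question is the usual one
where a replicated cache tier fronts an erasure-coded pool so RBD can sit on
EC storage; a sketch with hypothetical pool names:

    ceph osd pool create ecpool 128 128 erasure
    ceph osd pool create cachepool 128 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool
    # RBD images created in ecpool have their ops proxied/promoted via cachepool
    rbd create --pool ecpool --size 10240 testimage

The open question is what should happen when an op the EC pool cannot service
hits an object that is not currently promoted into cachepool.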
Hi,
In my attempt to retry, I ran 'ceph mds newfs' because removing the fs
was not working (because the mdss could not be stopped).
With the new fs, I could again start syncing. After 10-15min it all
crashed again. The log now shows some other stacktrace.
-9> 2016-02-05 15:26:29.015197
On Wed, 27 Jan 2016, Nick Fisk wrote:
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Jason Dillaman
> > Sent: 27 January 2016 14:25
> > To: Nick Fisk
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Possible Cache T
Burkhard Linke writes:
>
> The default weight is the size of the OSD in terabytes. Did you use
> a very small OSD partition for test purposes, e.g. 20 GB? In that
> case the weight is rounded and results in an effective weight of
> 0.0. As a result the OSD will not be used
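If that turns out to be the cause, the weight is easy to confirm and
override; a sketch, with osd.0 as a placeholder ID:

    # the WEIGHT column shows the CRUSH weight; ~0 means the OSD receives no data
    ceph osd tree
    # give the tiny test OSD a non-zero CRUSH weight, e.g. 1.0
    ceph osd crush reweight osd.0 1.0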
Hi Zoltan, thanks for the answer.
Because replacing hdfs:// with ceph:// and using CephFS doesn't work out of
the box for all Hadoop components (at least in my tests); for example I had
issues with HBase, then with YARN, Hue, etc. (I'm using the Cloudera
distribution but I also tried with separate
On Fri, Feb 5, 2016 at 6:39 AM, Stephen Lord wrote:
>
> I looked at this system this morning, and it actually finished what it was
> doing. The erasure coded pool still contains all the data and the cache
> pool has about a million zero sized objects:
>
>
> GLOBAL:
> SIZE AVAIL R
I looked at this system this morning, and it actually finished what it was
doing. The erasure coded pool still contains all the data and the cache
pool has about a million zero sized objects:
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED     OBJECTS
    15090G     9001G     608
On Fri, Feb 5, 2016 at 9:36 AM, Kenneth Waegeman
wrote:
>
>
> On 04/02/16 16:17, Gregory Farnum wrote:
>>
>> On Thu, Feb 4, 2016 at 1:42 AM, Kenneth Waegeman
>> wrote:
>>>
>>> Hi,
>>>
>>> Hi, we are running ceph 9.2.0.
>>> Overnight, our ceph state went to 'mds mds03 is laggy' . When I checked
>>
Thanks for this thread. We just did the same mistake (rmfailed) on our
hammer cluster which broke it similarly. The addfailed patch worked
for us too.
-- Dan
On Fri, Jan 15, 2016 at 6:30 AM, Mike Carlson wrote:
> Hey ceph-users,
>
> I wanted to follow up, Zheng's patch did the trick. We re-added
Hi - as per the OSD calculation: number of OSDs * 100 / pool size (replicas)
=> 96 * 100 / 3 = 3200 => 4096.
So 4096 is the correct pg_num.
In this case the PG count is correct, as per the recommendation.
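Spelled out, with the usual target of about 100 PGs per OSD and rounding up
to the next power of two:

    pg_num = next_power_of_two( (num_osds * 100) / replica_size )
           = next_power_of_two( (96 * 100) / 3 )
           = next_power_of_two( 3200 )
           = 4096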
On Thu, Feb 4, 2016 at 2:14 AM, Ferhat Ozkasgarli wrote:
> As the message states, you must increase the placement group number for
On 04/02/16 16:17, Gregory Farnum wrote:
On Thu, Feb 4, 2016 at 1:42 AM, Kenneth Waegeman
wrote:
Hi,
Hi, we are running ceph 9.2.0.
Overnight, our ceph state went to 'mds mds03 is laggy' . When I checked the
logs, I saw this mds crashed with a stacktrace. I checked the other mdss,
and I saw