On 04/01/2013 05:35 PM, Mark Nelson wrote:
On 03/31/2013 06:37 PM, Matthieu Patou wrote:
Hi,
I was doing some testing with iozone and found that the performance of an
exported rbd volume was 1/3 of the performance of the hard drives.
I was expecting a performance penalty, but not such a big one.
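For anyone chasing numbers like the above, it can help to separate raw RADOS throughput from the RBD/QEMU layers. A rough sketch using the librados Python bindings; the pool name, object size and ceph.conf path are placeholders:

#!/usr/bin/env python
# Crude write-throughput sanity check against a pool, bypassing RBD/QEMU.
# Assumes a readable /etc/ceph/ceph.conf and client keyring; the 'rbd' pool
# name and the 4 MB object size are placeholders.
import time
import rados

OBJ_SIZE = 4 * 1024 * 1024   # 4 MB per object
NUM_OBJS = 64                # ~256 MB written in total
payload = b'\0' * OBJ_SIZE

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')   # any test pool works

start = time.time()
for i in range(NUM_OBJS):
    ioctx.write_full('bench_obj_%d' % i, payload)
elapsed = time.time() - start
print('%.1f MB/s (serialized writes)' % (NUM_OBJS * OBJ_SIZE / elapsed / 1024 / 1024))

# clean up the test objects
for i in range(NUM_OBJS):
    ioctx.remove_object('bench_obj_%d' % i)
ioctx.close()
cluster.shutdown()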
Object expiration is on the roadmap but hasn't been scheduled for a
specific release yet:
http://tracker.ceph.com/issues/4099
It's possible it may land in the Dumpling release (due in August). We
should hopefully be able to confirm this in the next few weeks.
Neil
On Mon, Apr 1, 2013 at 8:30 PM
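For reference, Swift object expiration is driven by the X-Delete-After (or X-Delete-At) header, so once radosgw supports it, usage should look roughly like the sketch below. The endpoint, token and object names are placeholders, not something tested against radosgw:

# Sketch: upload an object that expires after 24 hours via the Swift API.
# The radosgw endpoint, auth token, container and object names are placeholders.
import requests

endpoint = 'http://radosgw.example.com/swift/v1'
token = 'AUTH_tk_example'            # obtained from the usual Swift auth step

resp = requests.put(
    '%s/mycontainer/myobject' % endpoint,
    data=b'hello world',
    headers={
        'X-Auth-Token': token,
        'X-Delete-After': '86400',   # seconds until the object is removed
    },
)
resp.raise_for_status()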
On Fri, Mar 29, 2013 at 01:46:16PM -0700, Josh Durgin wrote:
> The issue was that the qemu rbd driver was blocking the main qemu
> thread when flush was called, since it was using a synchronous flush.
> Fixing this involves patches to librbd to add an asynchronous flush,
> and a patch to qemu to us
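The pattern behind that fix, sketched very loosely (this is not qemu or librbd code, just an illustration of why an asynchronous flush keeps the main loop responsive): the flush is issued in the background and the caller only reacts to its completion callback.

# Conceptual sketch only: a blocking flush on the main loop stalls everything,
# an asynchronous flush with a completion callback does not.
# slow_flush() is made up for illustration.
import threading
import time

def slow_flush():
    time.sleep(2)          # stands in for a flush that has to hit the OSDs

def aio_flush(on_complete):
    # issue the flush in the background and invoke the callback when done
    def worker():
        slow_flush()
        on_complete()
    threading.Thread(target=worker).start()

# the main loop keeps servicing I/O while the flush is in flight
done = threading.Event()
aio_flush(done.set)
while not done.is_set():
    print('main loop still responsive...')
    time.sleep(0.5)
print('flush completed')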
Hi all.
I was reviewing the s3 and swift api support matrices:
http://ceph.com/docs/master/radosgw/s3/
http://ceph.com/docs/master/radosgw/swift
and I noticed that there is no support for the s3 'lifecycle' or swift
'delete-after' capabilities. I was curious to know if these are on the road
map, or if
Another sprint and another release! This is the last development release
before v0.61 Cuttlefish, which is due out in 4 weeks (around May 1). The
next few weeks will be focused on making sure everything we've built over
the last few months is rock solid and ready for you. We will have an -rc
On 03/31/2013 06:37 PM, Matthieu Patou wrote:
Hi,
I was doing some testing with iozone and found that the performance of an
exported rbd volume was 1/3 of the performance of the hard drives.
I was expecting a performance penalty, but not such a big one.
I suspect something is not correct in t
Anyone on this one?
On 03/31/2013 04:37 PM, Matthieu Patou wrote:
Hi,
I was doing some testing with iozone and found that the performance of an
exported rbd volume was 1/3 of the performance of the hard drives.
I was expecting a performance penalty, but not such a big one.
I suspect somet
It looks like you are using a domain name instead of an IP address. Try it
with the IP address. Are the permissions (chmod) on the keyring correct? Once we
resolve this, let me know how we can improve the docs here:
http://ceph.com/docs/master/cephfs/fstab/
On Sat, Mar 30, 2013 at 10:55 AM, Marco Aro
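A small helper for the two checks suggested above, i.e. whether the monitor name resolves to an IP you can put in fstab and whether the secret/keyring file has sane permissions; the hostname and path are placeholders:

# Quick checks for the two suggestions above: resolve the monitor name to an
# IP for fstab, and verify the secret/keyring file mode. Paths are placeholders.
import os
import socket
import stat

mon_host = 'mon1.example.com'
secret_file = '/etc/ceph/admin.secret'

print('monitor IP for fstab: %s' % socket.gethostbyname(mon_host))

st = os.stat(secret_file)
mode = stat.S_IMODE(st.st_mode)
print('%s mode: %o' % (secret_file, mode))
if mode & 0o077:
    print('warning: secret file is readable by group/other; chmod 600 it')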
On Mon, Apr 1, 2013 at 2:16 PM, Sam Lang wrote:
> On Mon, Apr 1, 2013 at 5:59 AM, Papaspyrou, Alexander
> wrote:
>> Folks,
>>
>> we are trying to set up a ceph cluster with about 40 or so OSDs on our
>> hosting provider's infrastructure. Our rollout works with Opscode Chef, and
>> I'm driving my p
On Mon, Apr 1, 2013 at 5:59 AM, Papaspyrou, Alexander
wrote:
> Folks,
>
> we are trying to set up a ceph cluster with about 40 or so OSDs on our
> hosting provider's infrastructure. Our rollout works with Opscode Chef, and
> I'm driving my people to automate away everything they can.
>
> I've worke
Pal,
Are you still seeing this problem? It looks like you have a bad
crushmap. Can you post that to the list if you changed it?
-slang [developer @ http://inktank.com | http://ceph.com]
On Wed, Mar 20, 2013 at 11:41 AM, "Gergely Pál - night[w]"
wrote:
> Hello!
>
> I've deployed a test ceph cl
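For posting the crushmap, it can be pulled out of the cluster and decompiled to text roughly like this; a sketch that just wraps the standard ceph and crushtool commands, with placeholder paths:

# Sketch: dump and decompile the current crushmap so it can be posted as text.
# Wraps the standard CLI tools; the output paths are placeholders.
import subprocess

subprocess.check_call(['ceph', 'osd', 'getcrushmap', '-o', '/tmp/crushmap.bin'])
subprocess.check_call(['crushtool', '-d', '/tmp/crushmap.bin',
                       '-o', '/tmp/crushmap.txt'])
print(open('/tmp/crushmap.txt').read())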
On 04/01/2013 06:07 AM, Papaspyrou, Alexander wrote:
Folks,
we are in the process of setting up a ceph cluster with about 40 OSDs
spread over 25 or so machines within our hosting provider's infrastructure.
Unfortunately, we have certain limitations from the provider side that
we cannot really o
Folks,
we are in the process of setting up a ceph cluster with about 40 OSDs spread over 25 or so machines within our hosting provider's infrastructure.
Unfortunately, we have certain limitations from the provider side that we cannot really overcome:
1- We only have one public network, no
Folks,
we are trying to set up a ceph cluster with about 40 or so OSDs on our hosting provider's infrastructure. Our rollout works with Opscode Chef, and I'm driving my people to automate away everything they can.
I've worked my way through the documentation, but a few questions were left ope
In addition, I was able to extract some logs from the last time the
active/peering problem happened.
http://pastebin.com/BakFREFP
It ends with me restarting it.
On Mon, Apr 1, 2013 at 10:23 AM, Erdem Agaoglu wrote:
> Hi all,
>
> We are currently in the process of enlarging our bobtail cluster by
>
Hi all,
We are currently in the process of enlarging our bobtail cluster by adding
OSDs. We have 12 disks per machine and we are creating one OSD per disk,
adding them one by one as recommended. The only thing we don't do is start
with a small weight and increase it slowly. Weights are all 1.
I
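For completeness, the "small weight, increase slowly" approach usually boils down to something like the sketch below: bump the CRUSH weight in steps and wait for the cluster to settle in between. The OSD id, step sizes and the HEALTH_OK polling are assumptions, not a tested script:

# Sketch: ramp a new OSD's CRUSH weight up in steps, waiting for the cluster
# to report HEALTH_OK between steps. The OSD id and weights are placeholders.
import subprocess
import time

osd = 'osd.12'
for weight in (0.2, 0.4, 0.6, 0.8, 1.0):
    subprocess.check_call(['ceph', 'osd', 'crush', 'reweight', osd, str(weight)])
    time.sleep(10)       # give peering a moment to start
    while 'HEALTH_OK' not in subprocess.check_output(['ceph', 'health']).decode():
        time.sleep(60)   # let recovery/backfill finish before the next step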