Hi,
I'm testing ceph 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a) under
load.
My configuration has 2 mds, 3 mon and 16 osd; mon and mds are on separate
servers, and the osds are distributed across 8 servers.
Three servers, each running several processes, read and write via libcephfs.
Restart of active mds leads to infini
> From: Gruher, Joseph R
>> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>>On Fri, Sep 13, 2013 at 5:06 PM, Gruher, Joseph R
>> wrote:
>>
>>> root@cephtest01:~# ssh cephtest02 wget -q -O-
>>> 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' |
>>> apt-key add -
>>>
>>> gpg:
On 11.09.2013 20:05, Prasanna Gholap wrote:
According to the link about AWS, rbd.ko isn't included yet in the Linux AWS
kernel. I'll try to build the kernel manually and proceed with rbd.
Thanks for your help.
If your requirement is a modern Linux (not exclusively Ubuntu), you can use
Fedora (AMIs are built with unmod
What's the output of "ceph -s", and have you tried running the MDS
with any logging enabled that we can check out?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
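
For reference, MDS debug logging of the kind Greg is asking about is usually
raised via a ceph.conf fragment like the one below; the levels shown are
common suggestions for diagnosing MDS problems, not values taken from this
thread:

```
[mds]
    ; verbose MDS state logging (20 is the most detailed level)
    debug mds = 20
    ; log messenger traffic at a low level to see client/MDS exchanges
    debug ms = 1
```

The same settings can typically be applied to a running daemon with
`ceph mds tell <id> injectargs ...` instead of a restart.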
On Sun, Sep 15, 2013 at 8:24 AM, Serge Slipchenko
wrote:
> Hi,
>
> I'm testing ceph 0.67.3 (408cd61584c72c0d97b774
Hi all:
I have a 30G rbd block device as a virtual machine disk, with Ubuntu 12.04
already installed. About 1G of space is used.
When I want to deploy a VM, I do an "rbd cp". Then the problem comes: it
copies 30G of data instead of 1G, and this takes a lot of time.
Any ideas? I just want to make VM deployment faster.
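
One common way to avoid full copies when deploying many VMs from a single
image is RBD layering (snapshot plus copy-on-write clone) instead of
`rbd cp`. A sketch, assuming a pool named `rbd` and an image named
`ubuntu-12.04` (both names are placeholders here), and noting that layering
requires format-2 images:

```
# The base image must be format 2 for layering:
# rbd import --image-format 2 ubuntu-12.04.img rbd/ubuntu-12.04

# Snapshot the golden image and protect the snapshot so it can be cloned
rbd snap create rbd/ubuntu-12.04@gold
rbd snap protect rbd/ubuntu-12.04@gold

# Each new VM gets a copy-on-write clone; this is near-instant and only
# stores blocks the VM writes later
rbd clone rbd/ubuntu-12.04@gold rbd/vm-01
```

Unlike `rbd cp`, which reads and writes every block of the source image,
a clone only references the protected snapshot until blocks are modified.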
First-time list poster here, and I'm pretty stumped on this one. My
problem hasn't really been discussed on the list before, so I'm hoping
that I can get this figured out since it's stopping me from learning
more about ceph. I've tried this with the journal on the same disk and
on a separate SSD, b
On Mon, Sep 16, 2013 at 09:20:29AM +0800, 王根意 wrote:
> Hi all:
>
> I have a 30G rbd block device as virtual machine disk, Aleady installed
> ubuntu 12.04. About 1G space used.
>
> When I want to deploy vm, I made a "rbd cp". Then problem came, it copy 30G
> data instead of 1G. And this action tak
Hi Ceph Users
We set up a radosgw per the Ceph documentation. While everything works fine,
we found out that different access_keys share the same bucket namespace.
So when access_key A creates a bucket "test", access_key B cannot create a
bucket with the name "test".
Is it possible to separate the accounts so that