Looks to me like it may have failed to initialise RADOS. Do you have librados
installed and configured (ceph.conf, etc.)?
librgw_create ->
-> RGWLib::init
-> RGWStoreManager::get_storage
-> RGWStoreManager::init_storage_provider
-> RGWRados::initialize
-> init_rados
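To narrow this down, it may help to exercise the same initialization path from
the command line; a quick check, assuming the default /etc/ceph/ceph.conf and
keyring locations:

  # should list pools if librados can read ceph.conf and authenticate
  rados lspools
  # same connectivity check via cluster status
  ceph -s

If these fail as well, the problem is in the client configuration rather than
in librgw itself.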
Hi, everyone.
I'm a newbie to Ceph and am running some tests to understand its behavior.
The following situation really confused me:
I first killed an OSD, which made the size of the acting set of some PGs become
1. Then I set min_size from 1 to 2, after which I restarted the killed OSD.
Sorry, I forgot to mention that the PGs assigned to the killed OSD were still
writable after I raised min_size from 1 to 2 but before I restarted the killed
OSD.
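For reference, min_size is a per-pool setting and PG state can be queried
directly; a minimal sketch (the pool name "rbd" and the pgid are placeholders):

  ceph osd pool get rbd min_size
  ceph osd pool set rbd min_size 2
  # inspect peering/acting-set details for one PG after the change
  ceph pg <pgid> query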
Forwarded message:
From: "xxhdx1985126"
Date: 2016-10-09 18:08:45
To: "ceph-us...@ceph.com"
Subject: PG go "incomplete"
Thanks for your reply. I don't know which configuration or step causes the
RADOS initialization to fail.
/usr/lib64/
librgw.so.2.0.0
librados.so.2.0.0
/etc/ceph/ceph.conf:
[global]
mon_host = 192.168.77.61
> On Oct 9, 2016, at 4:33 PM, Brad Hubbard wrote:
>
> Looks to me like it may have failed to initialise RADOS.
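If cephx authentication is enabled on the cluster, mon_host alone is usually
not enough for a client; a sketch of a fuller client-side ceph.conf (the
keyring path is an assumption, not taken from this thread):

  [global]
  mon_host = 192.168.77.61
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx
  keyring = /etc/ceph/ceph.client.admin.keyring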
Hi,
Yesterday morning I added two more OSD nodes and changed the CRUSH map failure
domain from disk to node. It looked to me like everything went OK, apart from
some missing disks that I can re-add later, but the cluster status hasn't
changed since then. Here is the output of ceph -w:
cluster 395fb046-0062-4252
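For context, a disk-to-node failure-domain change is usually done by editing a
decompiled CRUSH map and injecting it back; a sketch of the round trip (file
names are arbitrary):

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # edit crush.txt: e.g. change "step chooseleaf firstn 0 type osd" to "type host"
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new

A change like this remaps most PGs, so a long recovery afterwards is expected.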
Hi all Ceph users,
I am interested in learning about Ceph bugs.
Some of the bugs I have studied point to a particular Ceph version or tag for
their buggy part.
However, I found that some bugs list Q/A as the source.
For instance, http://tracker.ceph.com/issues/9301:
when I checked out the giant and firefly versions, I saw
On Mon, Oct 10, 2016 at 9:21 AM, agung Laksono wrote:
>
> Hi all Ceph users,
>
> I am interested in learning about Ceph bugs.
>
> Some of the bugs I have studied point to a particular Ceph version or tag
> for their buggy part.
> However, I found that some bugs list Q/A as the source.
> For instance, http://tracker.ceph.com/issues/9301
Thank you for the answer, Brad.
So what I have to do is:
- find the commit version
- check out that commit
- see the log (git log -p -1)
- finally, check out the older one.
Am I correct?
On Mon, Oct 10, 2016 at 6:28 AM, Brad Hubbard wrote:
> On Mon, Oct 10, 2016 at 9:21 AM, agung Laksono wrote:
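A sketch of that workflow against a ceph.git checkout (the hash is a
placeholder for the commit referenced by the tracker issue):

  git clone https://github.com/ceph/ceph.git && cd ceph
  # find the fixing commit, e.g. by searching commit messages
  git log --oneline --grep=9301
  # inspect the fix itself
  git log -p -1 <commit>
  # check out the state just before the fix to see the buggy code
  git checkout <commit>^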
Hi guys,
Is there a kind of orchestration tool to ease the installation and
deployment of Ceph?
Something comparable to Mirantis for OpenStack.
Thanks
On Mon, Oct 10, 2016 at 9:51 AM, agung Laksono wrote:
> Thank you for the answer, Brad.
>
> So what I have to do is:
> - find the commit version
> - check out that commit
> - see the log (git log -p -1)
> - finally, check out the older one.
>
> Am I correct?
Yes, that's correct.
Yes, you're right.
Sometimes it isn't tested just a single step back;
it may also be necessary to test versions several commits
earlier.
Thanks for helping!
On Mon, Oct 10, 2016 at 7:18 AM, Brad Hubbard wrote:
>
>
> On Mon, Oct 10, 2016 at 9:51 AM, agung Laksono
> wrote:
> > Thank you for the answer, Brad.
On Mon, Oct 10, 2016 at 9:52 AM, AJ NOURI wrote:
> Hi guys,
> Is there a kind of orchestration tool to ease the installation and
> deployment of Ceph?
There is of course ceph-deploy and also https://github.com/ceph/ceph-ansible
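A minimal ceph-deploy run looks roughly like this (host names and the disk are
placeholders; check ceph-deploy --help for your version):

  ceph-deploy new mon1
  ceph-deploy install mon1 osd1 osd2
  ceph-deploy mon create-initial
  ceph-deploy osd create osd1:sdb osd2:sdb
  ceph-deploy admin mon1 osd1 osd2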
>
> Something comparable to Mirantis for OpenStack.
>
> Thanks
>
On Sat, Oct 8, 2016 at 2:05 AM, Kjetil Jørgensen wrote:
> Hi
>
> On Fri, Oct 7, 2016 at 6:31 AM, Yan, Zheng wrote:
>>
>> On Fri, Oct 7, 2016 at 8:20 AM, Kjetil Jørgensen
>> wrote:
>> > And - I just saw another recent thread -
>> > http://tracker.ceph.com/issues/17177 - can be an explanation of m
I've enabled RBD mirroring on my test clusters and it seems to be working well.
My question is: can we store the RBD mirror journal on a different pool?
Currently when I do something like rados ls -p sas I see
rbd_data.a67d02eb141f2.0bd1
rbd_data.a67d02eb141f2.0b73
rbd_
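For what it's worth, the journaling feature appears to accept a separate
journal data pool; a sketch, assuming an image sas/test and a pool named
journals (verify the options against rbd help on your version):

  rbd feature enable sas/test journaling --journal-pool journals
  # or at creation time
  rbd create sas/test2 --size 1024 --image-feature exclusive-lock,journaling --journal-pool journals
  # confirm where the journal lives
  rbd journal info --pool sas --image test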
On Sun, Oct 9, 2016 at 9:58 PM, yiming xie wrote:
> Thanks for your reply. I don't know which configuration or step causes the
> RADOS initialization to fail.
> /usr/lib64/
> librgw.so.2.0.0
> librados.so.2.0.0
>
> /etc/ceph/ceph.conf:
> [global]
> mon_host = 192.168.77.61
What document are you following?
ceph -v
ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
nfs-ganesha:2.3 stable
1. Install Ceph following docs.ceph.com on node1.
2. Install librgw2-devel.x86_64 on node2.
3. Install nfs-ganesha on node2:
cmake -DUSE_FSAL_RGW=ON ../src/
make
make install
4. vi /etc/ceph/ceph.conf
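For step 4, nfs-ganesha also needs an RGW export defined; a sketch of a
minimal ganesha.conf for FSAL_RGW (the user and keys are placeholders; option
names should be checked against the nfs-ganesha 2.3 docs):

  RGW {
      ceph_conf = "/etc/ceph/ceph.conf";
  }
  EXPORT {
      Export_ID = 1;
      Path = "/";
      Pseudo = "/rgw";
      Access_Type = RW;
      FSAL {
          Name = RGW;
          User_Id = "testuser";
          Access_Key_Id = "<access key>";
          Secret_Access_Key = "<secret key>";
      }
  }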
On Mon, Oct 10, 2016 at 4:37 PM, yiming xie wrote:
> ceph -v
> ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
> nfs-ganesha:2.3 stable
>
> 1. Install Ceph following docs.ceph.com on node1.
> 2. Install librgw2-devel.x86_64 on node2.
> 3. Install nfs-ganesha on node2:
> cmake -DUSE_FSAL_RGW=ON ../src/