Hi Sébastien,
Thanks for the reply!
As per the configuration in the link, I am now able to access multiple
Ceph pools through Cinder.
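For reference, the multi-backend part of my cinder.conf now looks roughly like this (pool names, backend names and the secret UUID are placeholders for my setup):
[DEFAULT]
enabled_backends = rbd-volumes,rbd-fast

[rbd-volumes]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <secret-uuid>
volume_backend_name = RBD_VOLUMES

[rbd-fast]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = fast-volumes
rbd_user = cinder
rbd_secret_uuid = <secret-uuid>
volume_backend_name = RBD_FAST
Each backend is then mapped to a volume type with "cinder type-create" and "cinder type-key <type> set volume_backend_name=...", so a volume can be directed at a specific pool.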
But I have one question: as per the link
http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-nova we need to
provide the pool name for the parameter "*libvi
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
On 27/02/14 09:29, Vikrant Verma wrote:
> But i have one question - As per the link
> http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-nova
> we need to provide the pool name for the parameter
> "*libvirt_images_rbd_pool*" in nova.conf of
Hi Michael,
On Tue, Feb 25, 2014 at 10:01:31PM +, Michael wrote:
> Just wondering if there was a reason for no packages for Ubuntu Saucy in
> http://ceph.com/packages/ceph-extras/debian/dists/. Could do with
> upgrading to fix a few bugs but would hate to have to drop Ceph from
> being hand
Thanks Tim, I'll give the raring packages a try.
Found a tracker for Saucy packages; it looks like the person they were
assigned to hasn't checked in for a fair while, so they might have just
been overlooked: http://tracker.ceph.com/issues/6726.
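In case anyone else is stuck on saucy, the apt line I'm going to try is just the raring one (assuming the usual layout of the repo at that URL):
deb http://ceph.com/packages/ceph-extras/debian raring main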
-Michael
On 27/02/2014 13:33, Tim Bishop wrote:
Hi
Hi Larry and Yehuda,
Happy to report that my RADOS Gateway is now working fine.
I had made four mistakes:
1) the hostname was used instead of the {fqdn}
2) the radosgw keyring was generated with only "r" caps instead of "rw" (see the commands below)
3) the same keyring needs to be copied to all cluster nodes
4) radosgw had to be started manually
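For the record, the keyring commands that finally worked for me were roughly these (the key name and path follow the standard docs; adjust to your setup):
ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rw' /etc/ceph/ceph.client.radosgw.keyring
ceph auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
The same ceph.client.radosgw.keyring then gets copied to the other nodes, as noted in point 3.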
Thanks for the support,
Srinivas
On Thu, Feb 27, 2014 at 7:53 AM, Erik Tank wrote:
> On http://ceph.com/docs/master/radosgw/adminops/ a "configurable 'admin'
> resource entry point" is described. I have the URI, however, I can not find
> information on how to get/set the 'admin' credentials.
>
> Any help is greatly appreciated
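As far as I understand it, the 'admin' credentials are just an ordinary radosgw user that has been granted admin caps; a rough sketch (the uid and the cap list are only examples):
radosgw-admin user create --uid=admin --display-name="Admin user"
radosgw-admin caps add --uid=admin --caps="users=*;buckets=*;metadata=*;usage=*;zone=*"
The access and secret key printed by "user create" are then used to sign requests against the admin entry point.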
On http://ceph.com/docs/master/radosgw/adminops/ a "configurable 'admin'
resource entry point" is described. I have the URI; however, I cannot find
information on how to get or set the 'admin' credentials.
Any help is greatly appreciated,
Erik Tank
et...@liquidweb.com
I recently added a 3rd node to my cluster, and increased the pool size to 3.
Latency was initially so bad that OSDs were being kicked out for being
unresponsive. I checked the list, and changed
osd max backfills = 1
osd recovery op priority = 1
That's helped. OSDs aren't so slow that they ge
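In case it's useful, both settings can also be injected at runtime without restarting the OSDs, something like:
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-op-priority 1'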
Hi,
I was handed a Ceph cluster that had just lost quorum due to 2/3 mons
(b,c) running out of disk space (using up 15GB each). We were trying to
rescue this cluster without service downtime. As such, we freed up some
space to keep mon b running a while longer, which succeeded; quorum was
restored (a,b
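(For reference, the usual way to shrink a mon's leveldb store, assuming a release recent enough to support it, is to compact it, either on the fly or at startup:)
ceph tell mon.b compact
# or, in ceph.conf on that mon, then restart it:
mon compact on start = true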
On Thu, Feb 27, 2014 at 4:25 PM, Marc wrote:
> Hi,
>
> I was handed a Ceph cluster that had just lost quorum due to 2/3 mons
> (b,c) running out of disk space (using up 15GB each). We were trying to
> rescue this cluster without service downtime. As such we freed up some
> space to keep mon b runn
Hi,
thanks for the reply. I updated one of the new mons, and after a
reasonably long init phase (inconsistent state) I am now seeing these:
2014-02-28 01:05:12.344648 7fe9d05cb700 0 cephx: verify_reply coudln't
decrypt with error: error decoding block for decryption
2014-02-28 01:05:12.345599 7f
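(The two usual suspects for this particular decrypt error are a mismatched mon keyring and clock skew; a quick sanity check, paths assume the default layout:)
ceph-authtool -l /var/lib/ceph/mon/ceph-<id>/keyring   # same mon. key on every monitor?
ntpq -p                                                # clocks in sync?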
Hi all,
In the last release I proposed a KeyValueStore prototype (background at
http://sebastien-han.fr/blog/2013/12/02/ceph-performance-interesting-things-going-on).
It contains some performance results and problems. Now I'd like to
refresh our thoughts on KeyValueStore.
KeyValueStore is pursuing FileSto
I'm looking for the debug messages in Client.cc, which uses ldout
(library debugging). I increased the client debug level for all
daemons (i.e. under [global] in ceph.conf) and verified that it got
set:
$ ceph --admin-daemon /var/run/ceph/ceph-mon.issdm-3.asok config show
| grep client
"client": "
Thanks for the report!
Results seem to be encouraging. (Is it the leveldb keystore?)
Thanks to fio-rbd, it'll be easier to do random IO benchmarks now!
(I'm waiting to see if rocksdb will improve things in the future.)
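A minimal fio job file for the rbd engine looks roughly like this (pool and image names are only examples, and the image has to exist beforehand):
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio_test
rw=randwrite
bs=4k
iodepth=32
runtime=60
time_based

[rbd-randwrite-4k]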
Regards,
Alexandre
- Original Message -
From: "Haomai Wang"
To: ceph-us