Re: [ceph-users] Core dump while getting a volume real size with a python script

2015-10-29 Thread Giuseppe Civitella
... and this is the core dump output while executing the "rbd diff" command: http://paste.openstack.org/show/477604/ Regards, Giuseppe

[ceph-users] Core dump while getting a volume real size with a python script

2015-10-28 Thread Giuseppe Civitella
Hi all, I'm trying to get the real disk usage of a Cinder volume by converting these bash commands to python: http://cephnotes.ksperis.com/blog/2013/08/28/rbd-image-real-size I wrote a small test function which has already worked in many cases, but it stops with a core dump while trying to calculate t
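
The linked post essentially sums the extent lengths that "rbd diff" reports for an image. A minimal sketch of that bash approach, with a placeholder pool/image name:

    # Sum the extent sizes reported by "rbd diff" to estimate how much space
    # the image actually occupies (rbd/myimage is an example name).
    rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'

A Python port can simply shell out to the same command and sum the second column; whether that sidesteps the crash reported here is a separate question.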

Re: [ceph-users] pgs stuck unclean on a new pool despite the pool size reconfiguration

2015-10-02 Thread Giuseppe Civitella
try to unset it for that pool and see what happens, or create a new pool without hashpspool enabled from the start. Just a guess. Warren
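
The hashpspool flag Warren mentions can be checked in the pool listing and, on releases that allow it, cleared per pool. A rough sketch, using the bench2 pool from the thread:

    # See whether the flag is set on the pool.
    ceph osd dump | grep bench2
    # Clear it on that pool; newer releases may ask for an extra
    # confirmation switch before changing this flag.
    ceph osd pool set bench2 hashpspool false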

[ceph-users] pgs stuck unclean on a new pool despite the pool size reconfiguration

2015-10-02 Thread Giuseppe Civitella
Hi all, I have a Firefly cluster which has been upgraded from Emperor. It has 2 OSD hosts and 3 monitors. The cluster has the default values for pool size and min_size. Once upgraded to Firefly, I created a new pool called bench2: ceph osd pool create bench2 128 128 and set its si
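
For context, the commands involved look roughly like this; the size/min_size values are assumptions that fit a two-OSD-host cluster, and the last two commands just help locate where the unclean PGs are mapped:

    # Create the pool and set its replication explicitly
    # (128 placement groups, as in the thread).
    ceph osd pool create bench2 128 128
    ceph osd pool set bench2 size 2
    ceph osd pool set bench2 min_size 1
    # Inspect the PGs that stay unclean and the CRUSH layout they map onto.
    ceph pg dump_stuck unclean
    ceph osd tree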

Re: [ceph-users] Binding a pool to certain OSDs

2015-04-15 Thread Giuseppe Civitella
...ny PGs. Saverio 2015-04-14 18:52 GMT+02:00 Giuseppe Civitella: > Hi Saverio, I first made a test on my test staging lab where I have only 4 OSDs. On my mon servers (which run other services) I have 16GB RAM,

Re: [ceph-users] Binding a pool to certain OSDs

2015-04-14 Thread Giuseppe Civitella
Remember that every time you create a new pool you add PGs into the system. Saverio 2015-04-14 17:58 GMT+02:00 Giuseppe Civitella: > Hi all, I've been following this tutorial to build my setup:

Re: [ceph-users] Binding a pool to certain OSDs

2015-04-14 Thread Giuseppe Civitella
...Regards, Giuseppe 2015-04-13 18:26 GMT+02:00 Giuseppe Civitella: > Hi all, I've got a Ceph cluster which serves volumes to a Cinder installation. It runs Emperor. I'd like to be able to replace some of the disks with OPAL disks and create a new pool which us

[ceph-users] Binding a pool to certain OSDs

2015-04-13 Thread Giuseppe Civitella
Hi all, I've got a Ceph cluster which serves volumes to a Cinder installation. It runs Emperor. I'd like to be able to replace some of the disks with OPAL disks and create a new pool which uses exclusively the latter kind of disk. I'd like to have a "traditional" pool and a "secure" one coexisting
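
The usual way to do this is a separate CRUSH hierarchy for the OPAL-backed OSDs plus a rule that only draws from it, then pointing the new pool at that rule. A rough sketch with made-up bucket, OSD and pool names:

    # Build a second CRUSH root and move the OPAL-backed OSDs under it
    # (the bucket names and osd.4 are examples).
    ceph osd crush add-bucket secure-root root
    ceph osd crush add-bucket secure-host1 host
    ceph osd crush move secure-host1 root=secure-root
    ceph osd crush set osd.4 1.0 host=secure-host1
    # Create a rule restricted to that root and bind a new pool to it
    # (1 is an example rule id; check "ceph osd crush rule dump").
    ceph osd crush rule create-simple secure-rule secure-root host
    ceph osd pool create secure 128 128
    ceph osd pool set secure crush_ruleset 1

On Emperor/Firefly the pool property is still called crush_ruleset rather than crush_rule.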

[ceph-users] Rbd image's data deletion

2015-03-03 Thread Giuseppe Civitella
Hi all, what happens to the data contained in an rbd image when the image itself gets deleted? Is the data just unlinked, or is it destroyed in a way that makes it unreadable? Thanks, Giuseppe
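
One thing that is easy to verify: the RADOS objects backing an image share its block-name prefix, so they can be listed before and after an "rbd rm" to watch them go away (the image name and the rb.0.1234 prefix below are examples). As far as I know the objects are removed rather than overwritten in place, so this is not a secure wipe of the underlying disk blocks:

    # Find the image's object prefix, then list the matching objects
    # before and after deleting the image.
    rbd info rbd/myimage | grep block_name_prefix
    rados -p rbd ls | grep rb.0.1234
    rbd rm rbd/myimage
    rados -p rbd ls | grep rb.0.1234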

[ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-14 Thread Giuseppe Civitella
Hi all, I'm working on a lab setup where Ceph serves rbd images as iSCSI datastores to VMware via a LIO box. Has anyone already done something similar and is willing to share some knowledge? Any production deployments? What about LIO's HA and LUN performance? Thanks, Giuseppe
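
For reference, a common pattern is to map the image with the kernel rbd client on the LIO box and export the resulting block device through targetcli; a rough outline with example image, backstore and IQN names (ACLs and portals omitted):

    # Map the image on the gateway; /dev/rbd0 assumes it is the first mapping.
    rbd map rbd/vmware-ds1
    # Export it as an iSCSI LUN via LIO.
    targetcli /backstores/block create name=vmware-ds1 dev=/dev/rbd0
    targetcli /iscsi create iqn.2015-01.com.example:ceph-gw
    targetcli /iscsi/iqn.2015-01.com.example:ceph-gw/tpg1/luns create /backstores/block/vmware-ds1
    targetcli saveconfig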

[ceph-users] Ceph-deploy install and pinning on Ubuntu 14.04

2014-12-20 Thread Giuseppe Civitella
Hi all, I'm using ceph-deploy on Ubuntu 14.04. When I do a ceph-deploy install I see packages getting installed from the Ubuntu repositories instead of Ceph's own. Am I missing something? Do I need to set up some pinning on the repositories? Thanks
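
One way to make the ceph.com packages win over the Ubuntu archive is an apt preferences entry that pins the repository origin above the default priority, for example in /etc/apt/preferences.d/ceph (the path and the origin string are assumptions; check "apt-cache policy ceph" for the exact origin of your repo):

    Package: *
    Pin: origin "ceph.com"
    Pin-Priority: 1001

After adding it, "apt-cache policy ceph" should show the ceph.com candidate first, and the ceph-deploy install run should pick it up.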

Re: [ceph-users] active+degraded on an empty new cluster

2014-12-10 Thread Giuseppe Civitella
On Tue, Dec 9, 2014 at 9:45 AM, Gregory Farnum wrote: >> It looks like your OSDs all have weight zero for some reason. I'd fix that. :) -Greg >> On Tue, Dec 9, 2014 at 6:24 AM Giuseppe Civitella

Re: [ceph-users] active+degraded on an empty new cluster

2014-12-09 Thread Giuseppe Civitella
backfill": [], "last_backfill_started": "0\/\/0\/\/-1", "backfill_info": { "begin": "0\/\/0\/\/-1", "end": "0\/\/0\/\/-1", "objects": []}, "

[ceph-users] active+degraded on an empty new cluster

2014-12-09 Thread Giuseppe Civitella
Hi all, last week I installed a new ceph cluster on 3 VMs running Ubuntu 14.04 with the default kernel. There is a ceph monitor and two OSD hosts. Here are some details: ceph -s cluster c46d5b02-dab1-40bf-8a3d-f8e4a77b79da health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean monmap e1
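
Per Gregory Farnum's reply above, the likely cause was OSDs sitting at a CRUSH weight of zero. Checking and fixing that looks roughly like this (the osd ids and the weight of 1.0 are examples; the weight is commonly set to the disk size in TB):

    # A WEIGHT column of 0 in the tree means CRUSH will never place data there.
    ceph osd tree
    # Give each OSD a non-zero CRUSH weight.
    ceph osd crush reweight osd.0 1.0
    ceph osd crush reweight osd.1 1.0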