... and this is the core dump output while executing the "rbd diff" command:
http://paste.openstack.org/show/477604/
Regards,
Giuseppe
Hi all,
I'm trying to get the real disk usage of a Cinder volume by converting these
bash commands to Python:
http://cephnotes.ksperis.com/blog/2013/08/28/rbd-image-real-size
I wrote a small test function which has already worked in many cases, but it
stops with a core dump while trying to calculate t...
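For reference, here is a minimal sketch of the calculation I'm porting; it
shells out to the rbd CLI rather than using the librbd bindings, assumes the
build supports "rbd diff --format json", and the pool/image names are just
placeholders:

import json
import subprocess

def real_disk_usage(pool, image):
    # Ask rbd for the allocated extents of the image and sum their
    # lengths; this mirrors the awk one-liner from the blog post above.
    out = subprocess.check_output(
        ["rbd", "diff", "--format", "json", "%s/%s" % (pool, image)])
    return sum(extent["length"] for extent in json.loads(out))

print("%.2f MB" % (real_disk_usage("rbd", "myimage") / (1024.0 * 1024)))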
> ... try to unset it
> for that pool and see what happens, or create a new pool without hashpspool
> enabled from the start. Just a guess.
>
> Warren
>
> From: Giuseppe Civitella <giuseppe.civite...@gmail.com>
> Date: Friday, October 2, 2015 at 10:05 AM
> To: ceph-users ma...
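Thanks Warren. For anyone following along, I believe unsetting the flag on
the existing pool would look something like this (not verified on Firefly,
so treat it as a guess too):

ceph osd pool set bench2 hashpspool false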
Hi all,
I have a Firefly cluster which has been upgraded from Emperor.
It has 2 OSD hosts and 3 monitors.
The cluster has the default values for the pools' size and min_size.
Once upgraded to Firefly, I created a new pool called bench2:
ceph osd pool create bench2 128 128
and set its si...
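The truncated line was presumably setting the replica count, i.e. something
like the following, where the value 2 is only a guess:

ceph osd pool set bench2 size 2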
> ...ny PGs.
>
> Saverio
>
> 2015-04-14 18:52 GMT+02:00 Giuseppe Civitella <
> giuseppe.civite...@gmail.com>:
> > Hi Saverio,
> >
> > I first made a test in my staging lab where I have only 4 OSDs.
> > On my mon servers (which run other services) I have 16GB of RAM, ...
> Remember that every time you create a new pool you add PGs to the
> system.
>
> Saverio
>
>
> 2015-04-14 17:58 GMT+02:00 Giuseppe Civitella <
> giuseppe.civite...@gmail.com>:
> > Hi all,
> >
> > I've been following this tutorial to build my setup: ...
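Since PG counts keep coming up in this thread, the usual rule of thumb from
the docs is roughly 100 PGs per OSD, divided by the replica count and rounded
up to a power of two. With my 4 OSDs and size 2 that gives (4 * 100) / 2 =
200, rounded up to 256 PGs across all pools, which is why each new 128-PG
pool eats the budget so quickly.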
Regards,
Giuseppe
Hi all,
I've got a Ceph cluster which serves volumes to a Cinder installation. It
runs Emperor.
I'd like to replace some of the disks with OPAL disks and create a new pool
which uses exclusively the latter kind of disk, so that a "traditional" pool
and a "secure" one can coexist in the same cluster.
Hi all,
what happens to the data contained in an rbd image when the image itself
gets deleted?
Is the data just unlinked, or is it destroyed in a way that makes it
unreadable?
Thanks
Giuseppe
Hi all,
I'm working on a lab setup where Ceph serves rbd images as iSCSI datastores
to VMware via a LIO box. Has anyone already done something similar and is
willing to share some knowledge? Any production deployments? What about
LIO's HA and LUN performance?
Thanks
Giuseppe
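To make the question concrete, this is roughly the shape of what I'm
testing: map the image with krbd, then export the block device through LIO.
The pool, image and IQN below are placeholders, and the exact targetcli
paths may vary between versions:

rbd map iscsi-pool/vmware-ds1
targetcli /backstores/block create name=vmware-ds1 dev=/dev/rbd/iscsi-pool/vmware-ds1
targetcli /iscsi create iqn.2015-01.com.example:vmware-ds1
targetcli /iscsi/iqn.2015-01.com.example:vmware-ds1/tpg1/luns create /backstores/block/vmware-ds1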
Hi all,
I'm using ceph-deploy on Ubuntu 14.04. When I do a ceph-deploy install I
see packages getting installed from the Ubuntu repositories instead of
Ceph's. Am I missing something? Do I need to do some pinning on the
repositories?
Thanks
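What I'm thinking of trying is an apt preferences entry so that the ceph.com
packages always win; the origin string below is my guess and would need
checking against the actual repository:

# /etc/apt/preferences.d/ceph.pref
Package: *
Pin: origin ceph.com
Pin-Priority: 1001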
> On Tue, Dec 9, 2014 at 9:45 AM, Gregory Farnum wrote:
>
>> It looks like your OSDs all have weight zero for some reason. I'd fix
>> that. :)
>> -Greg
>>
>> On Tue, Dec 9, 2014 at 6:24 AM Giuseppe Civitella <
>> giuseppe.civit...
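If anyone finds this later: the zero weights show up in "ceph osd tree", and
my understanding is they can be corrected per OSD with something like the
following, where 0.5 is a placeholder weight (usually the disk size in TB):

ceph osd tree
ceph osd crush reweight osd.0 0.5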
backfill": [],
"last_backfill_started": "0\/\/0\/\/-1",
"backfill_info": { "begin": "0\/\/0\/\/-1",
"end": "0\/\/0\/\/-1",
"objects": []},
"
Hi all,
last week I installed a new Ceph cluster on 3 VMs running Ubuntu 14.04 with
the default kernel.
There is a Ceph monitor and two OSD hosts. Here are some details:
ceph -s
    cluster c46d5b02-dab1-40bf-8a3d-f8e4a77b79da
     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
     monmap e1 ...
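In case others hit the same warning: with only two OSD hosts and the default
pool size of 3, CRUSH cannot place a third replica on a distinct host, so the
PGs stay degraded. The usual fix is to drop the replica count on the affected
pools, along these lines (pool names would need adjusting to your setup):

ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1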