Stas,
as you said: "Each server has 15G flash for ceph journal and 12*2Tb
SATA disk for"
What is this 15G flash and is it used for all 12 SATA drives?
On Thu, Oct 15, 2015 at 1:05 PM, John Spray wrote:
> On Thu, Oct 15, 2015 at 8:46 PM, Butkeev Stas wrote:
>> Thank you for your comment. I know
>
> /CEPH_JOURNAL/osd/ceph-3:
> total 1024000
> -rw-r--r-- 1 root root 1048576000 Oct 15 19:03 journal
> ...
> --
> Best Regards,
> Stanislav Butkeev
>
>
> 15.10.2015, 23:26, "Max Yehorov" :
>>
I was not able to trigger eviction using the percentage settings. I
ran the hot pool into a "cluster is full" state and eviction never
started. As an alternative, a threshold on the number of objects did
trigger eviction; unfortunately, it stalled all writes to the hot
pool until the eviction was complete.
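For context, these are the cache-tier thresholds in play (the pool
name "hot-pool" and the values are placeholders, not recommendations):

```shell
# Relative thresholds: flushing/eviction should begin once the cache
# pool reaches this fraction of its capacity.
ceph osd pool set hot-pool cache_target_full_ratio 0.8
ceph osd pool set hot-pool cache_target_dirty_ratio 0.4

# Absolute thresholds: flush/evict once the pool holds this many
# objects or bytes, regardless of the percentage settings.
ceph osd pool set hot-pool target_max_objects 1000000
ceph osd pool set hot-pool target_max_bytes 1099511627776
```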
On Th
I am trying to add a filestore OSD node to my cluster and got this
during ceph-deploy activate.
The message still appears when "ceph-disk activate" is run as root. Is
this functionality broken in 9.1.0 or is it something misconfigured on
my box? And /var/lib/ceph is chown'ed to ceph:ceph.
[WARNING
I am trying to pass deep-flatten during clone creation and got this:
rbd clone --image-feature deep-flatten d0@s0 d1
rbd: image format can only be set when creating or importing an image
On Fri, Oct 23, 2015 at 6:27 AM, Jason Dillaman wrote:
>> After reading and understanding your mail, I moved
#L3449
On Fri, Oct 23, 2015 at 1:53 PM, Max Yehorov wrote:
> I am trying to pass deep-flatten during clone creation and got this:
>
> rbd clone --image-feature deep-flatten d0@s0 d1
>
> rbd: image format can only be set when creating or importing an image
>
> On Fri, Oct 23, 2
Hi,
If anyone has some insight or comments on the question:
Q) Flatten with IO activity
For example I have a clone chain:
IMAGE(PARENT)
image1(-)
image2(image1@snap0)
image2 is mapped, mounted and has some IO activity.
How safe is it to flatten image2 while it has ongoing IO?
thanks.
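For reference, a minimal sketch of the chain above (pool and image
names taken from the example; this assumes a running cluster):

```shell
# Build the clone chain: image2 is a clone of image1@snap0.
rbd snap create image1@snap0
rbd snap protect image1@snap0
rbd clone image1@snap0 image2

# Flatten copies all parent blocks into image2 and then detaches it
# from image1@snap0.
rbd flatten image2
```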
___
re: python library
you can make some mon calls using this:
##--
import rados
from ceph_argparse import json_command

rados_inst = rados.Rados(conffile='/etc/ceph/ceph.conf')
rados_inst.connect()  # connect() returns None; pass the Rados object itself
cmd = {'prefix': 'pg dump', 'dumpcontents': ['summary', ], 'format': 'json'}
retcode, jsonret, errstr = json_command(rados_inst, argdict=cmd)
##--
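On success jsonret is a JSON string; a minimal sketch of decoding it
(the sample payload below is hypothetical, not real "pg dump" output):

```python
import json

# Hypothetical stand-in for the jsonret returned by json_command.
jsonret = '{"pg_summary": {"num_pgs": 128}}'

summary = json.loads(jsonret)
print(summary["pg_summary"]["num_pgs"])  # → 128
```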
The "list watchers" command was definitely missing.
On Wed, Mar 8, 2017 at 4:16 PM, Josh Durgin wrote:
> On 03/08/2017 02:15 PM, Kent Borg wrote:
>>
>> On 03/08/2017 05:08 PM, John Spray wrote:
>>>
>>> Specifically?
>>> I'm not saying you're wrong, but I am curious which bits in particular