Hello Michael,
1. Perhaps I'm misunderstanding, but can Ceph present a SCSI interface? I
don't understand how that would help with reducing the size of the rbd.
4. Heh. Tell me about it [3]. But based on that experience, it *seemed* like
I could read ok on the different nodes where the rbd was
1. How about enabling trim/discard support in virtio-SCSI and using fstrim?
That might work for you.
4. Well, you can mount them rw in multiple VMs with predictably bad results,
so I don't see any reason why you couldn't specify ro as a mount option and do
ok.
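If discard does work end to end, the reclaimed space shows up in fstrim's verbose output. A minimal sketch of checking that from a script, assuming the guest disk is exposed with discard enabled (in libvirt that is the `discard='unmap'` attribute on the disk's `<driver>` element); the exact wording of `fstrim -v` output varies between util-linux versions, so the regex here is an assumption:

```python
import re
import subprocess

def trimmed_bytes(fstrim_output):
    """Extract the byte count from `fstrim -v` output.

    Newer util-linux prints e.g. "/mnt: 4.0 GiB (4294967296 bytes) trimmed";
    older versions print "/mnt: 4294967296 bytes were trimmed". The regex
    accepts both, but treat it as illustrative, not authoritative.
    """
    m = re.search(r"(\d+)\s+bytes", fstrim_output)
    return int(m.group(1)) if m else None

def run_fstrim(mountpoint):
    # Needs root, and discard support down the whole stack
    # (filesystem -> virtio-scsi with discard=unmap -> rbd).
    out = subprocess.check_output(["fstrim", "-v", mountpoint], text=True)
    return trimmed_bytes(out)
```

If `trimmed_bytes` keeps returning 0, discard is probably being dropped somewhere in the stack rather than reaching the rbd.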
Hello,
Are there any current Perl modules for Ceph? I found a thread [1] from
2011 with a version of Ceph::RADOS, but it only has functions to deal with
pools, and the ->list_pools function causes a segfault.
I'm interested in controlling Ceph via script / application and I was
wondering [hopi
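Absent a maintained Perl binding, one common workaround is to drive the CLI and parse its JSON output; the same pattern works from Perl with JSON::XS and backticks. A hedged sketch in Python (the JSON shape shown for `ceph osd lspools --format json` is from memory, so verify it against your version):

```python
import json
import subprocess

def parse_pools(json_text):
    # `ceph osd lspools --format json` returns (I believe) a list of
    # {"poolnum": ..., "poolname": ...} objects.
    return [p["poolname"] for p in json.loads(json_text)]

def list_pools_via_cli():
    """List pool names by shelling out to the ceph CLI.

    Avoids linking against librados entirely; any language that can
    spawn a process and parse JSON can use the same trick.
    """
    out = subprocess.check_output(
        ["ceph", "osd", "lspools", "--format", "json"], text=True
    )
    return parse_pools(out)
```

It is not pretty, but it sidesteps the segfaulting-XS-binding problem until someone writes proper bindings.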
So I have tried to enable usage logging on a new production Ceph RadosGW
cluster, but nothing seems to show up.
I have added the following to the [client.radosgw.] section:
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
rgw usage max shards = 32
rg
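For reference, once the usage log is enabled the entries are read back with `radosgw-admin usage show` (optionally with `--uid=` and date bounds); note the log is only flushed on the tick interval / threshold, so with the settings above fresh traffic can take up to ~30 s to appear. A small sketch that totals bytes from that command's JSON output; the nesting assumed here (entries -> buckets -> categories) is from memory and should be checked against a real cluster:

```python
import json

def total_bytes(usage_json_text):
    """Sum bytes sent/received across all entries of
    `radosgw-admin usage show` JSON output.

    The structure walked here is illustrative; inspect the actual
    output on your version before relying on it.
    """
    doc = json.loads(usage_json_text)
    sent = received = 0
    for entry in doc.get("entries", []):
        for bucket in entry.get("buckets", []):
            for cat in bucket.get("categories", []):
                sent += cat.get("bytes_sent", 0)
                received += cat.get("bytes_received", 0)
    return {"bytes_sent": sent, "bytes_received": received}
```

If `usage show` returns an empty `entries` list even after the flush interval, the enable option likely is not being picked up by the running radosgw (wrong section name, or the daemon was not restarted).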
On Sun, 20 Oct 2013, Ugis wrote:
> >> output follows:
> >> #pvs -o pe_start /dev/rbd1p1
> >> 1st PE
> >> 4.00m
> >> # cat /sys/block/rbd1/queue/minimum_io_size
> >> 4194304
> >> # cat /sys/block/rbd1/queue/optimal_io_size
> >> 4194304
> >
> > Well, the parameters are being set at least. Mike
On Mon, Oct 21, 2013 at 9:50 AM, 鹏 wrote:
>
> hi all,
> today my ceph cluster has something wrong! One of my MDSs is laggy!
>
> # ceph -s
>   health HEALTH_WARN 1 is laggy
>
> I restarted it:
>
> # service ceph -a restart mds.1
>
> It is ok at first! but a few M
hi all,
today my ceph cluster has something wrong! One of my MDSs is laggy!
# ceph -s
  health HEALTH_WARN 1 is laggy
I restarted it:
# service ceph -a restart mds.1
It is ok at first! but a few minutes later:
#ceph -s
health HEALT
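Restarting a laggy MDS over and over usually just masks the underlying problem; before restarting, it helps to script a check for which daemon the warning actually names. A small sketch that scans `ceph -s` / `ceph health` text for the laggy warning (the message format matches the output quoted above, but the exact wording varies across Ceph releases, so treat the regex as an assumption):

```python
import re

def laggy_mds(health_text):
    """Return the identifier flagged as laggy in a HEALTH_WARN line,
    or None if nothing is laggy.

    Matches lines like "health HEALTH_WARN 1 is laggy" or
    "health HEALTH_WARN mds.1 is laggy"; wording differs between
    releases, so this is illustrative only.
    """
    m = re.search(r"HEALTH_WARN\s+(?:mds\.)?(\S+)\s+is\s+laggy", health_text)
    return m.group(1) if m else None
```

A cron job around something like this can at least tell you whether it is always the same MDS going laggy, which points at that host rather than at the cluster.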
Moved to ceph-devel, and opened http://tracker.ceph.com/issues/6598
Have you tried to reproduce this on dumpling or later?
Thanks!
sage
On Sat, 19 Oct 2013, Andrey Korolyov wrote:
> Hello,
>
> I was able to reproduce the following on top of current cuttlefish:
>
> - create pool,
> - delete i
On 10/20/2013 08:18 AM, Ugis wrote:
output follows:
#pvs -o pe_start /dev/rbd1p1
1st PE
4.00m
# cat /sys/block/rbd1/queue/minimum_io_size
4194304
# cat /sys/block/rbd1/queue/optimal_io_size
4194304
Well, the parameters are being set at least. Mike, is it possible that
having minimum_io
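The question in this thread boils down to whether LVM's data area starts on a boundary of the rbd's 4 MiB object size. With the values shown above (pe_start = 4.00m, optimal_io_size = 4194304 bytes) it does, ignoring any additional offset from the partition's own start on rbd1. A quick sketch of the arithmetic:

```python
def is_aligned(pe_start_bytes, optimal_io_size):
    """An LV's data area is aligned when the first physical extent
    starts on a multiple of the device's optimal I/O size."""
    return pe_start_bytes % optimal_io_size == 0

# Values from the pvs / sysfs output quoted above:
pe_start = 4 * 1024 * 1024   # "1st PE 4.00m" -> 4 MiB
opt_io = 4194304             # /sys/block/rbd1/queue/optimal_io_size
```

So the pe_start LVM picked is already a clean multiple of the rbd object size; any misalignment would have to come from elsewhere in the stack.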