need to delete that rbd anyway.
Ugis
2015-06-06 8:53 GMT+03:00 Ugis :
> Hi,
>
> I had a recent problem with a flapping hdd and as a result I need to delete
> the broken rbd.
> The problem is that all operations towards this rbd get stuck. I cannot even
> delete the rbd - it sits at 6% done and I found th
Any way that eventually helps to delete that rbd will do.
Ugis
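A rough sketch of the checks that usually precede retrying a stuck "rbd rm" (the image name "myimage" and pool "rbd" below are placeholders, not taken from this thread):

  ceph health detail                      # blocked/slow requests on the flapping disk's OSD typically show up here
  ceph osd tree                           # is the affected OSD still marked up/in?
  rbd info rbd/myimage                    # note the image format and block_name_prefix
  rados -p rbd listwatchers myimage.rbd   # format 1 header object; for format 2 images use rbd_header.<id>
  rbd rm rbd/myimage                      # retry once the bad OSD is out/recovered and no watchers remain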
Thanks a lot, that helped.
Should have cleaned leftovers before :)
Ugis
2013/12/3 Gregory Farnum :
> CRUSH is failing to map all the PGs to the right number of OSDs.
> You've got a completely empty host which has ~1/3 of the cluster's
> total weight, and that is probably why —
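Greg's explanation is cut off above; as a generic illustration, the situation he describes is usually visible in the CRUSH tree, and one common remedy is to remove the empty bucket (or bring its OSDs back) so CRUSH can map the PGs again. The host name below is only an example:

  ceph osd tree                 # look for a host bucket that carries weight but has no OSDs under it
  ceph osd crush remove ceph8   # example name; only succeeds for an empty bucket, and drops its weight from the map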
pgs.
state on remapped pgs like:
{ "state": "active+remapped",
"epoch": 9420,
"up": [
9],
"acting": [
9,
5],
Any help/hints on how to get those stuck pgs to an 'up' state on 2 osds?
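For reference, a generic sketch of how a PG stuck like this is usually inspected (the PG id below is made up):

  ceph pg dump_stuck unclean    # PGs that have not been active+clean for a while
  ceph pg 2.7f query            # "2.7f" is a placeholder id; check the up/acting sets and recovery_state
  ceph osd tree                 # confirm every bucket that carries weight actually has OSDs up and in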
Ugis
2013/11/22 Ugis :
> Updat
!
Zabbix is free & open source.
http://www.zabbix.com/download.php
Good luck!
Ugis
2013/11/21 John Kinsella :
> As an OSD is just a partition, you could use any of the monitoring packages
> out there? (I like opsview…)
>
> We use the check-ceph-status nagios plugin[1] to monitor
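The plugin reference above is cut off in this archive; as a generic illustration (not that plugin), checks of this kind typically just wrap "ceph health" and map the result to monitoring exit codes. The script below is an invented example and assumes the ceph CLI and a client keyring are available on the monitoring host:

#!/bin/sh
# check_ceph_health.sh - toy Nagios/Zabbix-style check, illustrative only
STATUS=$(ceph health 2>/dev/null)
case "$STATUS" in
  HEALTH_OK*)   echo "OK - $STATUS";       exit 0 ;;
  HEALTH_WARN*) echo "WARNING - $STATUS";  exit 1 ;;
  *)            echo "CRITICAL - $STATUS"; exit 2 ;;
esac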
"pos": 3}]},
{ "id": -7,
"name": "ceph8",
"type_id": 1,
"type_name": "host",
"weight": 0,
"alg": "straw",
"hash":
5]},
"empty": 0,
"dne": 0,
"incomplete": 0,
"last_epoch_started": 9159},
"recovery_state": [
{ "name": "Started\/Primary\/Active",
"enter_time": "2013-11-21 16
iting_on_backfill": 0,
"backfill_pos": "0\/\/0\/\/-1",
"backfill_info": { "begin": "0\/\/0\/\/-1",
"end": "0\/\/0\/\/-1",
"objects": []},
    mirror_log_fault_policy = "allocate"
    mirror_image_fault_policy = "remove"
    use_mlockall = 0
    monitoring = 1
    polling_interval = 15
}
I hope something can still be done, or I will have to move several TB
off the LVM :)
Anyway, it does not feel like the cause of the problem is clear. Maybe I
Mike, is it possible that
> having minimum_io_size set to 4m is causing some read amplification
> in LVM, translating a small read into a complete fetch of the PE (or
> something along those lines)?
>
> Ugis, if your cluster is on the small side, it might be interesting to see
> Ugis, please provide the output of:
>
> RBD_DEVICE=
> pvs -o pe_start $RBD_DEVICE
> cat /sys/block/$RBD_DEVICE/queue/minimum_io_size
> cat /sys/block/$RBD_DEVICE/queue/optimal_io_size
>
> The 'pvs' command will tell you where LVM aligned the start of the
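For illustration, the same commands filled in for a hypothetical /dev/rbd0 (note that pvs wants the full device path, while the sysfs queue files are keyed by the bare device name):

  pvs -o pe_start /dev/rbd0                   # offset at which LVM placed the first physical extent
  cat /sys/block/rbd0/queue/minimum_io_size   # I/O size hints krbd exposes to the block layer
  cat /sys/block/rbd0/queue/optimal_io_size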
acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc pebs bts nopl
pni dtes64 monitor ds_cpl cid cx16 xtpr lahf_lm
bogomips        : 6400.15
clflush size    : 64
cache_alignment : 128
address sizes   : 36 bits physical, 48 bits virtual
power management:
matures faster than btrfs.
Ugis
2013/9/11 Mark Nelson :
> On 09/11/2013 08:58 AM, Ugis wrote:
>>
>> Hi,
>>
>> I wonder: is ocfs2 suitable for hosting OSD data?
>> In ceph documentation only XFS, ext4 and btrfs are discussed, but
>> looking at ocfs2 feature lis
also work for a single node.
Just wondering whether an OSD would work on ocfs2 and what its performance
characteristics would be.
Any thoughts/experience?
BR,
Ugis Racko
ph via FC.
http://linux-iscsi.org/wiki/Target
Ugis
2013/7/5 Gregory Farnum :
> On Thu, Jul 4, 2013 at 7:09 PM, huangjun wrote:
>> hi, all
>> I have some questions about ceph.
>> 1) Can I get, from the command line, the list of OSDs that hold the
>> objects that a file consists of?
>
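The reply is cut off in this archive; as a rough sketch of one way to answer 1), a single RADOS object can be mapped to its PG and OSD set from the command line, and for a file the object names are derived from its inode number (pool and object names below are placeholders):

  ceph osd map rbd myobject     # prints the PG and the up/acting OSD sets for "myobject" in pool "rbd"
  # for a CephFS file the data objects are named <inode-in-hex>.<stripe-index>,
  # e.g. 10000000000.00000000, and live in the file system's data pool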
Thanks! Rethinking the same first example, I think it is doable even as
shown there. Nothing prevents mapping OSDs to host-like entities, whatever
they are called.
2013/6/20 Gregory Farnum :
> On Thursday, June 20, 2013, Edward Huyer wrote:
>>
>> > Hi,
>> >
>> > I am thinking how to make ceph with 2 p
make 2 pools work on the same hardware?
Ugis
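A minimal sketch of the idea, using only the CLI and made-up names: give each pool its own CRUSH rule, so the two pools share the same OSDs but can place data according to different rules (or against different roots if they must not share disks):

  ceph osd crush rule create-simple pool-a-rule default host
  ceph osd crush rule create-simple pool-b-rule default host
  ceph osd crush rule dump                   # note the rule ids that were assigned
  ceph osd pool set pool-a crush_ruleset 3   # 3 and 4 are example ids; use the ones from the dump
  ceph osd pool set pool-b crush_ruleset 4   # on newer ceph releases the pool option is "crush_rule"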
will always be a place
for advanced know-how tuning; this would just be for easy, estimated
calculations to get started.
Both things seem to naturally land at http://wiki.ceph.com/ and be hosted
there as the current central ceph knowledge base, right? :)
What do you think on ch