Re: [ceph-users] mounting a VM rbd image as a /dev/rbd0 device

2016-08-25 Thread Oleksandr Natalenko
You'd: 1) inspect /dev/rbd0 with fdisk -l to get the partition offsets; 2) mount the desired partition with the -o offset= option. On Thursday, 25 August 2016 at 17:31:52 EEST, Deneau, Tom wrote: > If I have an rbd image that is being used by a VM and I want to mount it > as a read-only /dev/rbd0 kernel device…
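A minimal sketch of those two steps; the pool/image names, mount point and the 2048-sector offset are only placeholders (multiply the start sector reported by fdisk by the 512-byte sector size):
===
# Map the image read-only so the running VM is not disturbed
rbd map --read-only mypool/vm-image        # shows up as /dev/rbd0

# List partitions; note the "Start" sector of the one you want
fdisk -l /dev/rbd0

# Mount that partition read-only through a loop device with an offset
# (example: partition starts at sector 2048)
mount -o ro,loop,offset=$((2048 * 512)) /dev/rbd0 /mnt/inspect
===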

[ceph-users] ceph-release RPM has broken URL

2016-06-22 Thread Oleksandr Natalenko
Hello. ceph-release-1-1.el7.noarch.rpm [1] is considered to be broken now because it contains the wrong baseurl: === baseurl=http://ceph.com/rpm-hammer/rhel7/$basearch === That leads to a 404 for yum trying to use it. I believe "rhel7" should be replaced by "el7", and ceph-release-1-2.el7.noarch…
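A one-off workaround on an affected host, assuming the repo file dropped by ceph-release is /etc/yum.repos.d/ceph.repo, would be roughly:
===
# Point the hammer repo at el7 instead of the broken rhel7 path
sed -i 's|rpm-hammer/rhel7|rpm-hammer/el7|' /etc/yum.repos.d/ceph.repo
yum clean metadata
===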

Re: [ceph-users] jewel upgrade : MON unable to start

2016-05-02 Thread Oleksandr Natalenko
Why do you upgrade OSDs first if it is necessary to upgrade mons before everything else? On May 2, 2016 5:31:43 PM GMT+03:00, SCHAER Frederic wrote: > Hi, > I'm < sort of > following the upgrade instructions on CentOS 7.2. > I upgraded 3 OSD nodes without too many issues, even if I would rewrite…
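The documented order for a Hammer-to-Jewel upgrade is mons first, then OSDs, then MDS/RGW. A rough per-node sketch on CentOS 7 (systemd targets as shipped with Jewel; upgrade one node at a time and wait for a clean state in between):
===
# 1) Monitor nodes first, one at a time
yum -y update ceph
systemctl restart ceph-mon.target
ceph -s                      # wait for quorum / HEALTH_OK before the next mon

# 2) Then OSD nodes, again one at a time
yum -y update ceph
systemctl restart ceph-osd.target

# 3) Finally MDS and RGW daemons
===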

Re: [ceph-users] Status of CephFS

2016-04-13 Thread Oleksandr Natalenko
Any direct experience with CephFS? I haven't tried anything newer than Hammer, but in Hammer CephFS is unable to apply back-pressure to very active clients. For example, rsyncing lots of files to a Ceph mount could result in MDS log overflow and OSD slow requests, especially if the MDS log is located on SSD and…
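For reference, the Hammer-era journal knobs involved can be inspected on the active MDS host through the admin socket (substitute your MDS name for <id>):
===
# How many journal segments the MDS keeps before trimming
ceph daemon mds.<id> config get mds_log_max_segments
ceph daemon mds.<id> config get mds_log_max_expiring
===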

Re: [ceph-users] Status of CephFS

2016-04-13 Thread Oleksandr Natalenko
13.04.2016 11:31, Vincenzo Pii wrote: The setup would include five nodes, two monitors and three OSDs, so data would be redundant (we would add the MDS for CephFS, of course). You need an odd number of mons. In your case I would set up mons on all 5 nodes, or at least on 3 of them.
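The reason is quorum: with 3 mons you can lose 1, with 5 you can lose 2, while an even count (2 or 4) tolerates no more failures than the next lower odd count. Quorum membership can be checked with the stock CLI:
===
ceph mon stat                                   # mon count and current quorum
ceph quorum_status --format json-pretty         # quorum_names, leader, etc.
===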

Re: [ceph-users] about PG_Number

2015-11-13 Thread Oleksandr Natalenko
"Learning Ceph" book gives us the following formula: PGs = OSDs × 100 / (replicas × pools) Saying, you have 10 OSDs and 5 pools with replica 2, you get: PGs = 10 × 100 / (2 × 5) = 100 PGs (per pool) It is also advised to round PGs count up to nearest power of 2. In this case, to 128. In typ

Re: [ceph-users] Permanent MDS restarting under load

2015-11-10 Thread Oleksandr Natalenko
10.11.2015 22:38, Gregory Farnum wrote: Which requests are they? Are these MDS operations or OSD ones? Those requests appeared in the ceph -w output and are as follows: https://gist.github.com/5045336f6fb7d532138f Is it correct that there are OSD operations blocked? osd.3 is one of the data poo…
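To see what exactly is blocked on that OSD, the usual commands look roughly like this (the daemon call has to run on the host that carries osd.3):
===
ceph health detail                        # lists the OSDs with blocked requests
ceph daemon osd.3 dump_ops_in_flight      # on the osd.3 host: the stuck ops
===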

[ceph-users] Permanent MDS restarting under load

2015-11-10 Thread Oleksandr Natalenko
Hello. We have CephFS deployed over a Ceph cluster (0.94.5). We experience constant MDS restarts under a high-IOPS workload (e.g. rsyncing lots of small mailboxes from another storage to CephFS using the ceph-fuse client). First, cluster health goes to the HEALTH_WARN state with the following disclaimer…
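The basic status commands used while watching this, nothing beyond the stock CLI:
===
ceph -s          # overall health and the HEALTH_WARN detail
ceph mds stat    # which MDS rank is active / replaying
ceph -w          # live stream of the slow-request / MDS restart messages
===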

Re: [ceph-users] pgs per OSD

2015-11-05 Thread Oleksandr Natalenko
(128*2+256*2+256*14+256*5)/15 =~ 375. On Thursday, November 05, 2015 10:21:00 PM Deneau, Tom wrote: > I have the following 4 pools: > > pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash > rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool > stripe_width 0 poo…
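Spelled out, that average (which appears to be pg_num times replica size summed over the four pools, divided by the 15 OSDs):
===
echo $(( (128*2 + 256*2 + 256*14 + 256*5) / 15 ))   # -> 375
===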