Re: [ceph-users] Ceph random read IOPS

2017-06-26 Thread Christian Wuerdig
Well, preferring faster clock CPUs for SSD scenarios has been floated several times over the last few months on this list. And realistic or not, Nick's and Kostas' setups are similar enough (testing a single disk) that it's a distinct possibility. Anyway, as mentioned, measuring the performance counter
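For reference, a minimal sketch of pulling an OSD's performance counters via the admin socket, as discussed above (assumes osd.0 runs locally and uses the default socket path):

    ceph daemon osd.0 perf dump | less
    # or address the socket file directly (default path is an assumption):
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump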

Re: [ceph-users] cannot open /dev/xvdb: Input/output error

2017-06-26 Thread Mykola Golub
On Sun, Jun 25, 2017 at 11:28:37PM +0200, Massimiliano Cuttini wrote: > > Il 25/06/2017 21:52, Mykola Golub ha scritto: > >On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote: > >>I can see the error even if I easily run list-mapped: > >> > >># rbd-nbd list-mapped > >>/dev

[ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Stéphane Klein
Hi, I have this OSD:
root@ceph-storage-rbx-1:~# ceph osd tree
ID WEIGHT   TYPE NAME                    UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.70432 root default
-2 10.85216     host ceph-storage-rbx-1
 0  3.61739         osd.0                     up      1.0              1.0
 2  3.61739         os

Re: [ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Ashley Merrick
Hello, Will need to see a full export of your crush map rules. Depends what the failure domain is set to. ,Ash Sent from my iPhone On 26 Jun 2017, at 4:11 PM, Stéphane Klein <cont...@stephane-klein.info> wrote: Hi, I have this OSD: root@ceph-storage-rbx-1:~# ceph osd tree ID WEIGHT
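A minimal sketch of producing the requested export (file names are placeholders):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt   # decompile to readable text
    cat crushmap.txt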

[ceph-users] Snapshot removed, cluster thrashed...

2017-06-26 Thread Marco Gaiarin
I've hit some strange things in my Ceph cluster, and I'm asking for some feedback here. Some cluster info: 3 nodes, 12 OSDs (4 per node, symmetrical), size=3. Proxmox based, still on hammer, so used for RBD only. The cluster was built using some spare servers, and there's a node that is 'underpowered

Re: [ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Stéphane Klein
2017-06-26 11:15 GMT+02:00 Ashley Merrick : > Will need to see a full export of your crush map rules. > This is my crush map rules: # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1 tunable choosel

Re: [ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Ashley Merrick
You're going across hosts, so each replica will be on a different host. ,Ashley Sent from my iPhone On 26 Jun 2017, at 4:39 PM, Stéphane Klein <cont...@stephane-klein.info> wrote: 2017-06-26 11:15 GMT+02:00 Ashley Merrick <ash...@amerrick.co.uk>: Will need to see a full expo

Re: [ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Stéphane Klein
2017-06-26 11:48 GMT+02:00 Ashley Merrick : > Your going across host’s so each replication will be on a different host. > Thanks :) ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ideas on the UI/UX improvement of ceph-mgr: Cluster Status Dashboard

2017-06-26 Thread Massimiliano Cuttini
Hi Saumay, I think you should take into account tracking SMART on every SSD found. If it has SMART capabilities, then track its tests (or commit tests) and display the values on the dashboard (or a separate graph). This allows admins to forecast which OSD will die next. Preventing is better than Res
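For context, the kind of data being proposed here is what smartmontools already exposes per device; a hedged example (the device name is a placeholder):

    smartctl -H /dev/sdX    # overall health verdict
    smartctl -A /dev/sdX    # vendor attribute table (wear level, reallocated sectors, ...)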

[ceph-users] Primary Affinity / EC Pool

2017-06-26 Thread Ashley Merrick
Have some 8TB drives I am looking to remove from the cluster long term, however in the meantime I would like to make use of Primary Affinity to decrease the reads going to these drives. I have a replication and an erasure code pool, I understand when setting the primary affinity to 0 no PG's will have their Primary PG s
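A hedged sketch of what setting primary affinity looks like (the OSD id is a placeholder; pre-Luminous clusters may also need mon_osd_allow_primary_affinity enabled):

    ceph osd primary-affinity osd.12 0    # stop choosing osd.12 as primary where possible
    ceph osd tree                         # PRIMARY-AFFINITY column shows the new value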

Re: [ceph-users] Snapshot removed, cluster thrashed...

2017-06-26 Thread Peter Maloney
On 06/26/17 11:36, Marco Gaiarin wrote: > ... > Three question: > > a) while a 'snapshot remove' action put system on load? > > b) as for options like: > > osd scrub during recovery = false > osd recovery op priority = 1 > osd recovery max active = 5 > osd max backfill
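As a point of reference, a hedged ceph.conf sketch of the options quoted above, plus runtime injection (values are illustrative, not recommendations):

    [osd]
        osd scrub during recovery = false
        osd recovery op priority = 1
        osd recovery max active = 5
        osd max backfills = 1

    # apply without restarting (hammer/jewel-era syntax):
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1 --osd-max-backfills 1'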

Re: [ceph-users] v12.1.0 Luminous RC released

2017-06-26 Thread Ashley Merrick
With the EC overwrite support, if currently running behind a cache tier on Jewel, will the overwrite still be of benefit through the cache tier and remove the need to promote the full block to make any edits? Or are we better off totally removing the cache tier once fully upgraded? ,Ashley Sent from my
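For reference, a hedged sketch of turning on EC overwrites once on Luminous (the pool name is a placeholder; the feature expects bluestore OSDs):

    ceph osd pool set my-ec-pool allow_ec_overwrites true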

Re: [ceph-users] Sparse file info in filestore not propagated to other OSDs

2017-06-26 Thread Piotr Dalek
On 17-06-21 03:24 PM, Sage Weil wrote: > On Wed, 21 Jun 2017, Piotr Dałek wrote: >> On 17-06-14 03:44 PM, Sage Weil wrote: >>> On Wed, 14 Jun 2017, Paweł Sadowski wrote: [snip] Is it safe to enable "filestore seek hole", are there any tests that verifies that everything related

Re: [ceph-users] Snapshot removed, cluster thrashed...

2017-06-26 Thread Lindsay Mathieson
On 26/06/2017 7:36 PM, Marco Gaiarin wrote: Last week I used the snapshot feature for the first time. I did some tests beforehand on some ''spare'' VMs, doing a snapshot on a powered-off VM (as expected, it was merely instantaneous) and on a powered-on one (clearly, snapshotting the RAM poses some stres

Re: [ceph-users] Object repair not going as planned

2017-06-26 Thread Brady Deetz
Resolved. After all of the involved OSDs had been down for a while, I brought them back up and issued another ceph pg repair. We are clean now. On Sun, Jun 25, 2017 at 11:54 PM, Brady Deetz wrote: > I should have mentioned, I'm running ceph jewel 10.2.7 > > On Sun, Jun 25, 2017 at 11:46 PM, Bra
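A hedged sketch of the repair workflow referred to here (the PG id is a placeholder):

    ceph health detail | grep inconsistent                    # find the affected PG(s)
    rados list-inconsistent-obj 1.2f --format=json-pretty     # jewel 10.2+: inspect what differs
    ceph pg repair 1.2f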

Re: [ceph-users] Ceph random read IOPS

2017-06-26 Thread Willem Jan Withagen
On 26-6-2017 09:01, Christian Wuerdig wrote: > Well, preferring faster clock CPUs for SSD scenarios has been floated > several times over the last few months on this list. And realistic or > not, Nick's and Kostas' setup are similar enough (testing single disk) > that it's a distinct possibility. >

Re: [ceph-users] Snapshot removed, cluster thrashed...

2017-06-26 Thread David Turner
Snapshots are not a free action. Creating them is near enough free, but deleting objects in Ceph is an n^2 operation. Being on Hammer you do not have access to the object map feature on RBDs which drastically reduces the n^2 problem by keeping track of which objects it actually needs to delete
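For readers on Jewel or later, a hedged sketch of enabling the object map on an existing format-2 image (pool/image names are placeholders; object-map requires exclusive-lock):

    rbd feature enable rbd/myimage exclusive-lock object-map fast-diff
    rbd object-map rebuild rbd/myimage    # populate the map for pre-existing data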

Re: [ceph-users] Ideas on the UI/UX improvement of ceph-mgr: Cluster Status Dashboard

2017-06-26 Thread Brady Deetz
+1 on SMART tracking On Mon, Jun 26, 2017 at 5:19 AM, Massimiliano Cuttini wrote: > Hi Saumay, > > i think you should take in account to track SMART on every SSD founded. > If it has SMART capabilities, then track its test (or commit tests) and > display its values on the dashboard (or separate

Re: [ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread David Turner
Just so you're aware of why that's the case, the line step chooseleaf firstn 0 type host in your crush map under the rules section says "host". If you changed that to "osd", then your replicas would be unique per OSD instead of per server. If you had a larger cluster and changed it to "rack" an
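As an illustration of that line in context, a sketch of a typical replicated rule of this era (reconstructed from defaults, not taken from the poster's map):

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host   # "host" -> "osd" would place replicas per OSD instead
            step emit
    }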

Re: [ceph-users] Ceph random read IOPS

2017-06-26 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Willem Jan Withagen > Sent: 26 June 2017 14:35 > To: Christian Wuerdig > Cc: Ceph Users > Subject: Re: [ceph-users] Ceph random read IOPS > > On 26-6-2017 09:01, Christian Wuerdig wrote: > >

Re: [ceph-users] Multi Tenancy in Ceph RBD Cluster

2017-06-26 Thread David Turner
I don't know specifics on Kubernetes or creating multiple keyrings for servers, so I'll leave those for someone else. I will say that if you are kernel mapping your RBDs, then the first tenant to do so will lock the RBD and no other tenant can map it. This is built into Ceph. The original tenant
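A hedged sketch of mapping with a per-tenant identity and checking who currently holds an image (names are placeholders):

    rbd map rbd/tenant1-vol --id tenant1 --keyring /etc/ceph/ceph.client.tenant1.keyring
    rbd status rbd/tenant1-vol       # lists current watchers
    rbd lock list rbd/tenant1-vol    # shows advisory locks, if any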

Re: [ceph-users] Snapshot removed, cluster thrashed...

2017-06-26 Thread Marco Gaiarin
Hello, Lindsay Mathieson! On that day you wrote... > Have you tried restoring a snapshot? I found it unusably slow - as in hours No, not yet; I've never restored a snapshot... -- dott. Marco Gaiarin GNUPG Key ID: 240A3D66 Associazione ``La Nostra Famig

Re: [ceph-users] Snapshot removed, cluster thrashed...

2017-06-26 Thread Jason Dillaman
Restoring a snapshot involves copying the entire image from the snapshot revision to the HEAD revision. The faster approach would be to just create a clone from the snapshot. 2017-06-26 10:59 GMT-04:00 Marco Gaiarin : > Mandi! Lindsay Mathieson > In chel di` si favelave... > >> Have you tried re
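A minimal sketch of the clone approach (pool/image/snapshot names are placeholders):

    rbd snap protect rbd/myimage@snap1
    rbd clone rbd/myimage@snap1 rbd/myimage-restored
    # optionally detach the clone from its parent later:
    rbd flatten rbd/myimage-restored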

Re: [ceph-users] Ceph random read IOPS

2017-06-26 Thread Maged Mokhtar
On 2017-06-26 15:34, Willem Jan Withagen wrote: > On 26-6-2017 09:01, Christian Wuerdig wrote: > >> Well, preferring faster clock CPUs for SSD scenarios has been floated >> several times over the last few months on this list. And realistic or >> not, Nick's and Kostas' setup are similar enough (

Re: [ceph-users] cannot open /dev/xvdb: Input/output error

2017-06-26 Thread Massimiliano Cuttini
On Sun, Jun 25, 2017 at 11:28:37PM +0200, Massimiliano Cuttini wrote: Il 25/06/2017 21:52, Mykola Golub ha scritto: On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote: I can see the error even if I easily run list-mapped: # rbd-nbd list-mapped /dev/nbd0 2017-06-

Re: [ceph-users] cannot open /dev/xvdb: Input/output error

2017-06-26 Thread Mykola Golub
On Mon, Jun 26, 2017 at 07:12:31PM +0200, Massimiliano Cuttini wrote: > >In your case (rbd-nbd) this error is harmless. You can avoid them > >setting in ceph.conf, [client] section something like below: > > > > admin socket = /var/run/ceph/$name.$pid.asok > > > >Also to make every rbd-nbd process
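For reference, the suggested [client] section would look roughly like this (a sketch based on the quoted snippet):

    [client]
        admin socket = /var/run/ceph/$name.$pid.asok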

Re: [ceph-users] Multi Tenancy in Ceph RBD Cluster

2017-06-26 Thread Mayank Kumar
Thanks David, a few more questions: - Is there a way to limit the capability of the keyring which is used to map/unmap/lock to only allow those operations and nothing else using that specific keyring - For a single pool, is there a way to generate multiple keyrings where an RBD cannot be mapped by te

[ceph-users] free space calculation

2017-06-26 Thread Papp Rudolf Péter
Dear cephers, Could someone show me a URL where I can find how Ceph calculates the available space? I've installed a small Ceph (Kraken) environment with bluestore OSDs. The servers contain 2 disks and 1 SSD. The disks' 1st partition is UEFI (~500 MB), the 2nd is RAID (~50 GB), the 3rd is the Ceph disk (450-950 GB). 1

Re: [ceph-users] free space calculation

2017-06-26 Thread David Turner
What is the output of `lsblk`? On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Péter wrote: > Dear cephers, > > Could someone show me an url where can I found how ceph calculate the > available space? > > I've installed a small ceph (Kraken) environment with bluestore OSDs. > The servers contains 2

Re: [ceph-users] free space calculation

2017-06-26 Thread David Turner
The output of `sudo df -h` would also be helpful. Sudo/root is generally required because the OSD folders are only readable by the Ceph user. On Mon, Jun 26, 2017 at 4:37 PM David Turner wrote: > What is the output of `lsblk`? > > On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Péter wrote: > >> D

Re: [ceph-users] free space calculation

2017-06-26 Thread Papp Rudolf Péter
Hi David! lsblk:
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    0 931,5G  0 disk
├─sda1      8:1    0   476M  0 part
├─sda2      8:2    0  46,6G  0 part
│ └─md0     9:0    0  46,5G  0 raid1 /
└─sda3      8:3    0 884,5G  0 part  /var/lib/ceph/osd/ceph-3
sdb         8:16   0 931,5G  0 disk
├

Re: [ceph-users] free space calculation

2017-06-26 Thread Papp Rudolf Péter
sudo df -h:
udev        3,9G     0  3,9G   0% /dev
tmpfs       790M   19M  771M   3% /run
/dev/md0     46G  2,5G   41G   6% /
tmpfs       3,9G     0  3,9G   0% /dev/shm
tmpfs       5,0M     0  5,0M   0% /run/lock
tmpfs       3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/sdb1

Re: [ceph-users] Multi Tenancy in Ceph RBD Cluster

2017-06-26 Thread Jason Dillaman
On Mon, Jun 26, 2017 at 2:55 PM, Mayank Kumar wrote: > Thanks David, few more questions:- > - Is there a way to limit the capability of the keyring which is used to > map/unmap/lock to only allow those operations and nothing else using that > specific keyring Since RBD is basically just a collect
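A hedged sketch of a pool-scoped keyring of the sort being discussed (client name and pool are placeholders; by itself this does not restrict which individual images inside the pool can be mapped):

    ceph auth get-or-create client.tenant1 \
        mon 'allow r' \
        osd 'allow rwx pool=rbd'
    ceph auth get client.tenant1 -o /etc/ceph/ceph.client.tenant1.keyring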

Re: [ceph-users] free space calculation

2017-06-26 Thread David Turner
And the `sudo df -h`? Also a `ceph df` might be helpful to see what's going on. On Mon, Jun 26, 2017 at 4:41 PM Papp Rudolf Péter wrote: > Hi David! > > lsblk: > > NAMEMAJ:MIN RM SIZE RO TYPE MOUNTPOINT > sda 8:00 931,5G 0 disk > ├─sda18:10 476M 0 part > ├─sda28
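For anyone following along, the diagnostics being requested in this thread are roughly:

    lsblk
    sudo df -h
    ceph df        # cluster-wide and per-pool usage
    ceph osd df    # per-OSD size, use and weight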

Re: [ceph-users] qemu-kvm vms start or reboot hang long time while using the rbd mapped image

2017-06-26 Thread Jason Dillaman
May I ask why you are using krbd with QEMU instead of librbd? On Fri, Jun 16, 2017 at 12:18 PM, 码云 wrote: > Hi All, > Recently I met a question and I didn't find anything to explain it. > > Ops process like below: > ceph 10.2.5 jewel, qemu 2.5.0 centos 7.2 x86_64 > create pool rbd_vms 3
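A hedged sketch of attaching the image through librbd instead of a kernel-mapped device (the image and client names are placeholders; QEMU must be built with rbd support):

    qemu-system-x86_64 -m 2048 \
        -drive format=raw,if=virtio,file=rbd:rbd_vms/vm01:id=admin:conf=/etc/ceph/ceph.conf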

Re: [ceph-users] free space calculation

2017-06-26 Thread David Turner
I'm not seeing anything that would indicate a problem. The weights, cluster size, etc. all say that Ceph only sees 30GB per OSD. I don't see what is causing the discrepancy. Anyone else have any ideas? On Mon, Jun 26, 2017, 5:02 PM Papp Rudolf Péter wrote: > sudo df -h: > udev

Re: [ceph-users] Ceph random read IOPS

2017-06-26 Thread Christian Balzer
On Mon, 26 Jun 2017 15:06:46 +0100 Nick Fisk wrote: > > -Original Message- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > > Willem Jan Withagen > > Sent: 26 June 2017 14:35 > > To: Christian Wuerdig > > Cc: Ceph Users > > Subject: Re: [ceph-users] Ceph ran

Re: [ceph-users] free space calculation

2017-06-26 Thread Papp Rudolf Péter
sudo df -h:
udev        3,9G     0  3,9G   0% /dev
tmpfs       790M   19M  771M   3% /run
/dev/md0     46G  2,5G   41G   6% /
tmpfs       3,9G     0  3,9G   0% /dev/shm
tmpfs       5,0M     0  5,0M   0% /run/lock
tmpfs       3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/sdb1

[ceph-users] ceph-mon not starting on Ubuntu 16.04 with Luminous RC

2017-06-26 Thread Wido den Hollander
Hi, Just checking before I start looking into ceph-deploy if the behavior I'm seeing is correct. On a freshly installed Ubuntu 16.04 + Luminous 12.1.0 system I see that my ceph-mon services aren't starting on boot. Deployed Ceph on three machines: alpha, bravo and charlie. Using 'alpha' I've
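A hedged sketch of what usually enables the monitor on boot by hand (hostname 'alpha' as in the post; unit names assume the stock Jewel/Luminous systemd packaging):

    systemctl enable ceph-mon@alpha
    systemctl enable ceph-mon.target ceph.target
    systemctl start ceph-mon@alpha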

Re: [ceph-users] qemu-kvm vms start or reboot hang long time while using the rbd mapped image

2017-06-26 Thread 码云
Hi Jason, In one VDI integration test environment we need to know the best practice. It seems librbd performance is weaker than krbd. qemu 2.5.0 is not linked to librbd unless you manually configure and compile it. By the way, the rbd and libceph kernel module code has been adjusted in lots of places in CentOS 7.3, ar