Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-22 Thread Alexandre DERUMIER
>>Just an update, there seems to be no proper way to pass iothread >>parameter from openstack-nova (not at least in Juno release). So a >>default single iothread per VM is what all we have. So in conclusion a >>nova instance max iops on ceph rbd will be limited to 30-40K. Thanks for the update
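For reference, a minimal sketch of the libvirt domain XML that a management layer such as nova would have to emit to give an RBD disk its own iothread (only the relevant elements are shown; pool, volume and monitor names are placeholders):

  <!-- fragment of a domain definition: one iothread, assigned to the virtio disk -->
  <iothreads>1</iothreads>
  ...
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='writeback' iothread='1'/>
    <source protocol='rbd' name='volumes/volume-0001'>
      <host name='10.0.0.1' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>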

Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-22 Thread Stefan Priebe - Profihost AG
Am 22.06.2015 um 09:08 schrieb Alexandre DERUMIER : >>> Just an update, there seems to be no proper way to pass iothread >>> parameter from openstack-nova (not at least in Juno release). So a >>> default single iothread per VM is what all we have. So in conclusion a >>> nova instance max iops

Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-22 Thread Irek Fasikhov
This is already possible in Proxmox 3.4 (with the latest updates, qemu-kvm 2.2.x), but you have to set iothread: 1 in the VM's conf file. With single drives the performance behaviour is ambiguous. 2015-06-22 10:12 GMT+03:00 Stefan Priebe - Profihost AG <s.pri...@profihost.ag>: > > Am 22.
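A rough sketch of what that looks like in a Proxmox 3.4 style VM configuration (VM ID, storage name and disk size are illustrative):

  # /etc/pve/qemu-server/101.conf
  iothread: 1
  virtio0: ceph-rbd:vm-101-disk-1,cache=writeback,size=32G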

Re: [ceph-users] [SOLVED] rbd performance issue - can't find bottleneck

2015-06-22 Thread Jacek Jarosiewicz
On 06/18/2015 12:23 PM, Mark Nelson wrote: I'm just guessing, but because your read performance is slow as well, you may have multiple issues going on. The Intel 530 being slow at O_DSYNC writes is one of them, but it's possible there is something else too. If I were in your position I think I'd try
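For reference, the commonly used quick check for O_DSYNC write speed on a journal SSD looks roughly like this (the path is a placeholder; point it at a test file on the SSD, not at a device that is in use):

  dd if=/dev/zero of=/mnt/ssd/testfile bs=4k count=10000 oflag=direct,dsync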

[ceph-users] How does CephFS export storage?

2015-06-22 Thread Joakim Hansson
Hi list! I'm doing an internship at a company looking to start using ceph. My question is quite simple: how does cephfs export storage to the client? What protocol is used (NFS, iscsi etc.)? The only thing I've managed to find is this: http://docs.ceph.com/docs/cuttlefish/faq/ which states "Ceph doesn’t

Re: [ceph-users] How does CephFS export storage?

2015-06-22 Thread Timofey Titovets
CephFS is just a filesystem, like ext4, btrfs, etc., but you can export it via NFS or a Samba share. P.S. I tested the kernel NFS implementation and NFS-Ganesha; both had stability problems in my tests (strange deadlocks). 2015-06-22 11:16 GMT+03:00 Joakim Hansson : > Hi list! > I'm doing an internship at a
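A minimal sketch of such a re-export, assuming CephFS is already mounted at /mnt/cephfs on the gateway host (network range and share name are placeholders):

  # /etc/exports (kernel NFS server), then run: exportfs -ra
  /mnt/cephfs  192.168.0.0/24(rw,no_subtree_check,fsid=101)

  # /etc/samba/smb.conf
  [cephfs]
      path = /mnt/cephfs
      read only = no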

Re: [ceph-users] [SOLVED] rbd performance issue - can't find bottleneck

2015-06-22 Thread Alexandre DERUMIER
Hi, >>I have a last question though - is the kernel rbd implementation going >>to be improved? or should we just forget about that and just use librbd? From my tests, I get about the same performance from both with kernel 4.0; with kernel 3.16 I had a bad speed regression. This is with 4k blocks.
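A sketch of how such a 4k comparison can be run with fio, assuming a test image rbd/bench-img exists (pool, image and client names are placeholders; the mapped device node may differ from /dev/rbd0):

  # kernel rbd: map the image and benchmark the block device
  rbd map rbd/bench-img
  fio --name=krbd --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based

  # librbd: fio's rbd engine talks to the cluster directly
  fio --name=librbd --ioengine=rbd --clientname=admin --pool=rbd --rbdname=bench-img \
      --direct=1 --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based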

Re: [ceph-users] How does CephFS export storage?

2015-06-22 Thread Dan van der Ster
Hah! Nice way to invoke Cunningham's Law ;) > how does cephfs export storage to client? Ceph exports storage via its own protocol... the RADOS protocol for object IO and a sort of "CephFS" protocol to overlay filesystem semantics on top of RADOS. Ceph doesn't use NFS or iSCSI itself -- clients "
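In practice the native clients are used like this (monitor address and keyring path are placeholders):

  # kernel CephFS client
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
  # or the FUSE client
  ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs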

Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-22 Thread Alexandre DERUMIER
>>This is already possible in proxmox 3.4 (with the latest updates qemu-kvm >>2.2.x), but you have to set iothread:1 in the conf file. With single drives >>the performance behaviour is ambiguous. Yes and no ;) Currently in proxmox 3.4, iothread:1 generates only 1 iothread for all
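At the QEMU level, one iothread per disk looks roughly like this (only the relevant options are shown; pool and image names are placeholders):

  -object iothread,id=iothread-virtio0 \
  -object iothread,id=iothread-virtio1 \
  -drive file=rbd:rbd/vm-100-disk-1,if=none,id=drive-virtio0,format=raw,cache=writeback \
  -device virtio-blk-pci,drive=drive-virtio0,iothread=iothread-virtio0 \
  -drive file=rbd:rbd/vm-100-disk-2,if=none,id=drive-virtio1,format=raw,cache=writeback \
  -device virtio-blk-pci,drive=drive-virtio1,iothread=iothread-virtio1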

Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-22 Thread Irek Fasikhov
| Proxmox 4.0 will allow enabling/disabling 1 iothread per disk. Alexandre, useful option! Will it be possible to add this in proxmox 3.4, at least in the configuration file, or does it entail a change to the KVM source code? Thanks. 2015-06-22 11:54 GMT+03:00 Alexandre DERUMIER : > >>This is already possible

Re: [ceph-users] EC pool needs hosts equal to k + m?

2015-06-22 Thread Loic Dachary
Hi Nigel, On 22/06/2015 02:52, Nigel Williams wrote: > I recall a post to the mailing list in the last week(s) where someone said that for an EC Pool the failure-domain defaults to having k+m hosts in some versions of Ceph? > > Can anyone recall the post? have I got the requirement correct? Yes
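For illustration, creating an EC pool with k=4, m=2 and a host failure domain on a Hammer-era cluster looks roughly like this (profile name, pool name and PG counts are placeholders); with ruleset-failure-domain=host the cluster then needs at least k+m = 6 hosts to place all chunks:

  ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=host
  ceph osd pool create ecpool 256 256 erasure ec42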

Re: [ceph-users] Explanation for "ceph osd set nodown" and "ceph osd cluster_snap"

2015-06-22 Thread Jan Schermer
Thanks. Nobody else knows anything about “cluster_snap”? It is mentioned in the docs, but that’s all… Jan > On 19 Jun 2015, at 12:49, Carsten Schmitt > wrote: > > Hi Jan, > > On 06/18/2015 12:48 AM, Jan Schermer wrote: >> 1) Flags available in ceph osd set are >> >> pause|noup|nodown|noout
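For reference, the flags themselves are toggled like this:

  ceph osd set nodown      # stop OSDs from being marked down
  ceph osd set noout       # stop down OSDs from being marked out
  ceph osd unset nodown
  ceph osd unset noout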

Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-22 Thread Alexandre DERUMIER
>>In proxmox 3.4 will it be possible to add at least in the configuration file? >>Or it entails a change in the source code KVM? >>Thanks. This small patch on top of qemu-server should be enough (I think it should apply on 3.4 sources without problem) https://git.proxmox.com/?p=qemu-server.gi

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Jan Schermer
I don’t run Ceph on btrfs, but isn’t this related to the btrfs snapshotting feature ceph uses to ensure a consistent journal? Jan > On 19 Jun 2015, at 14:26, Lionel Bouton > wrote: > > On 06/19/15 13:42, Burkhard Linke wrote: >> >> Forget the reply to the list.

Re: [ceph-users] radosgw did not create auth url for swift

2015-06-22 Thread Vickie ch
Dear Venkat, I finally created the user and made sure the subuser was created, so that I can upload files and test on Hammer. But I still need to find out why it is not working with apache and how to make this work on Firefly. I wrote up some simple steps, FYR. Hope it helps! Best wishes, Mika 2015-06-22 14:34
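A sketch of the usual sequence for a Swift-capable user on radosgw (uid, display name, host and secret are placeholders):

  radosgw-admin user create --uid=testuser --display-name="Test User"
  radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
  radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
  # then test the auth endpoint with the swift client
  swift -A http://radosgw.example.com/auth/1.0 -U testuser:swift -K '<swift_secret>' list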

Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-22 Thread Alexandre DERUMIER
>>Oh so it only works for virtio disks? I'm using scsi with the virtio PCI >>controller. It works with virtio-scsi too, but it's not thread safe yet. Also, virtio-scsi disk hot-unplug crashes qemu when an iothread is used. Paolo from qemu said that it should be ready in coming releases (qemu 2.6 - 2.7).
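For comparison, the virtio-scsi variant looks roughly like this, assuming a QEMU build where virtio-scsi-pci exposes the iothread property (names are placeholders):

  -object iothread,id=iothread0 \
  -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
  -drive file=rbd:rbd/vm-disk-1,if=none,id=drive-scsi0,format=raw,cache=writeback \
  -device scsi-hd,bus=scsi0.0,drive=drive-scsi0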

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Krzysztof Nowicki
AFAIK the snapshots are useful when the journal sits inside the OSD filesystem. If the journal is on a separate filesystem/device, OSD BTRFS snapshots can be safely disabled. I have done so on my OSDs as they all use external journals and experienced a reduction in periodic writes, but the

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Lionel Bouton
On 06/22/15 11:27, Jan Schermer wrote: > I don’t run Ceph on btrfs, but isn’t this related to the btrfs > snapshotting feature ceph uses to ensure a consistent journal? It's possible: if I understand correctly the code, the btrfs filestore backend creates a snapshot when syncing the journal. I'm a

[ceph-users] how does cephfs export storage to client?

2015-06-22 Thread Joakim Hansson
Thanks for the answers guys. I actually have a virtual ceph cluster running with cephfs, the question was asked in case my supervisor asks when I demonstrate my PoC :) I did try to re-export RBDs via both NFS and iSCSI (and got it working) but the extra nodes needed for export and to provide a fai

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Lionel Bouton
On 06/19/15 13:23, Erik Logtenberg wrote: > I believe this may be the same issue I reported some time ago, which is > as of yet unsolved. > > https://www.mail-archive.com/ceph-users@lists.ceph.com/msg19770.html > > I used strace to figure out that the OSD's were doing an incredible > amount of getx
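A rough sketch of how such xattr traffic can be counted on a running OSD (interrupt with Ctrl-C to print the per-syscall summary; pidof -s picks one ceph-osd process):

  strace -c -f -e trace=getxattr,fgetxattr,lgetxattr,setxattr,fsetxattr,lsetxattr \
      -p $(pidof -s ceph-osd)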

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Krzysztof Nowicki
Mon, 22.06.2015 at 13:11, Lionel Bouton wrote: > On 06/22/15 11:27, Jan Schermer wrote: > > I don’t run Ceph on btrfs, but isn’t this related to the btrfs > snapshotting feature ceph uses to ensure a consistent journal? > > > It's possible: if I understand correctly the code, the btrf

Re: [ceph-users] Expanding a ceph cluster with ansible

2015-06-22 Thread Sebastien Han
Hi Bryan, It shouldn’t be a problem for ceph-ansible to expand a cluster even if it wasn’t deployed with it. I believe this requires a bit of tweaking on the ceph-ansible side, but it’s not much. Can you elaborate on what went wrong and perhaps how you configured ceph-ansible? As far as I understoo
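A rough sketch of the usual expansion flow (the inventory group and playbook name follow ceph-ansible's sample layout and are assumptions here):

  # add the new hosts to the [osds] group in the inventory, then:
  ansible-playbook -i inventory site.yml --limit osds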

[ceph-users] radosgw socket is not created

2015-06-22 Thread Makkelie, R (ITCDCC) - KLM
I followed the following doc http://docs.ceph.com/docs/master/radosgw/config/ and have set this:

[client.radosgw.gateway]
host = qj6xe
keyring = /etc/ceph/keyring.radosgw.gateway
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log_file = /var/log/ceph/radosgw.log
rgw_enable_us
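Two quick checks that may help here, as a sketch (paths follow the config above):

  # is the socket actually there?
  ls -l /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
  # run the gateway in the foreground with verbose logging to see why it is not created
  radosgw -d -n client.radosgw.gateway --debug-rgw=20 --debug-ms=1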

Re: [ceph-users] radosgw socket is not created

2015-06-22 Thread B, Naga Venkata
Follow this doc http://docs.ceph.com/docs/v0.80.5/radosgw/config/ if you are using firefly. Thanks & Regards, venkat From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Makkelie, R (ITCDCC) - KLM Sent: Monday, June 22, 2015 8:22 PM To: ceph-users@lists.ceph.com Subject: [ce

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Erik Logtenberg
I have the journals on a separate disk too. How do you disable the snapshotting on the OSD? Thanks, Erik. On 22-06-15 12:27, Krzysztof Nowicki wrote: > AFAIK the snapshots are useful when the journal sits inside the OSD > filesystem. In case the journal is on a separate filesystem/device then >

[ceph-users] Anyone using Ganesha with CephFS?

2015-06-22 Thread Lincoln Bryant
Hi Cephers, Is anyone successfully using Ganesha for re-exporting CephFS as NFS? I’ve seen some blog posts about setting it up and the basic functionality seems to be there. Just wondering if anyone in the community is actively using it, and could relate some experiences. —Lincoln
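For anyone trying it, a minimal sketch of a ganesha.conf export block using the Ceph FSAL (export id and pseudo path are placeholders):

  EXPORT {
      Export_ID = 1;
      Path = "/";
      Pseudo = "/cephfs";
      Access_Type = RW;
      Squash = No_Root_Squash;
      FSAL {
          Name = CEPH;
      }
  }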

Re: [ceph-users] New cluster in unhealthy state

2015-06-22 Thread Dave Durkee
Nick, I removed the failed OSDs, yet I am still in the same state.

ceph> status
    cluster b4419183-5320-4701-aae2-eb61e186b443
     health HEALTH_WARN
            32 pgs degraded
            64 pgs stale
            32 pgs stuck degraded
            246 pgs stuck inactive
            64 pgs stuc
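A few commands that usually help narrow down which PGs and OSDs are involved:

  ceph health detail
  ceph osd tree
  ceph pg dump_stuck inactive
  ceph pg dump_stuck stale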

[ceph-users] CEPH-GW replication, disable /admin/log

2015-06-22 Thread Michael Kuriger
Is it possible to disable the replication of /admin/log and other replication logs? It seems that this log replication is occupying a lot of time in my cluster(s). I’d like to replicate only users' data. Thanks! Michael Kuriger Sr. Unix Systems Engineer • mk7...@yp.com

Re: [ceph-users] latest Hammer for Ubuntu precise

2015-06-22 Thread Gabri Mate
As far as I can see the packages are there, but the Packages file wasn't updated (correctly?), which is why we Precise users do not see the updates. I am still wondering whether this is intentional or not. Probably not. :) Hopefully it will be sorted out soon. Mate On 00:14 Mon 22 Jun , Andrei Mikha

Re: [ceph-users] New cluster in unhealthy state

2015-06-22 Thread Dave Durkee
I am seeing the following in the osd log files:

2015-06-22 10:47:53.966056 7f7837cdc700 0 -- 10.0.0.2:6800/2787 >> 10.0.0.2:6802/3018 pipe(0x55ac800 sd=72 :6800 s=0 pgs=0 cs=0 l=0 c=0x4c444c0).accept connect_seq 2 vs existing 1 state standby
2015-06-22 10:47:53.966219 7f7837bdb700 0 -- 10.0.0.

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Lionel Bouton
On 06/22/15 17:21, Erik Logtenberg wrote: > I have the journals on a separate disk too. How do you disable the > snapshotting on the OSD? http://ceph.com/docs/master/rados/configuration/filestore-config-ref/ : filestore btrfs snap = false
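In ceph.conf form that is (followed by restarting the OSDs):

  [osd]
  filestore btrfs snap = false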

Re: [ceph-users] Unexpected disk write activity with btrfs OSDs

2015-06-22 Thread Krzysztof Nowicki
Mon, 22.06.2015 at 20:09, Lionel Bouton <lionel-subscript...@bouton.name> wrote: > On 06/22/15 17:21, Erik Logtenberg wrote: > > I have the journals on a separate disk too. How do you disable the > > snapshotting on the OSD? > http://ceph.com/docs/master/rados/configuration/filestore-

Re: [ceph-users] latest Hammer for Ubuntu precise

2015-06-22 Thread Andrei Mikhailovsky
Thanks Mate, I was under the same impression. Could someone at Inktank please help us with this problem? Is this intentional or has it simply been an error? Thanks Andrei -- Andrei Mikhailovsky Director Arhont Information Security Web: http://www.arhont.com http://www.wi-foo.com Te

[ceph-users] ceph0.72 tgt wmware performance very bad

2015-06-22 Thread maoqi1982
Hi list: my cluster includes 4 servers, 12 OSDs (4 OSDs/server), 1 mon (1 server) and 1Gbps links; the ceph version is 0.72 and the cluster status is ok. The client is vmware vcenter. Using rbd as the tgt backend and exposing a 2TB LUN via iscsi to vmware, the performance is very bad: the bw is just 10kB/s. But when using windows7 as

Re: [ceph-users] ceph0.72 tgt wmware performance very bad

2015-06-22 Thread Timofey Titovets
Which backend do you use in tgt for rbd? 2015-06-23 5:44 GMT+03:00 maoqi1982 : > Hi list: > my cluster includes 4 servers, 12 OSDs (4 OSDs/server), 1 mon (1 server) > and 1Gbps links; the ceph version is 0.72 and the cluster status is ok. > The client is vmware vcenter. > Using rbd as the tgt backend and exposing a 2TB LUN via iscs
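For reference, exporting an RBD image through tgt's rbd backing-store type looks roughly like this, assuming tgt was built with rbd support (target IQN, pool and image names are placeholders):

  tgtadm --lld iscsi --mode target --op new --tid 1 \
         --targetname iqn.2015-06.com.example:rbd-lun
  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
         --bstype rbd --backing-store rbd/vmware-lun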

Re: [ceph-users] ceph0.72 tgt wmware performance very bad

2015-06-22 Thread Nick Fisk
Try turning off the HW Accelerated features in VMware (VAAI). From memory I think it's the accelerated Init one which causes the problem. > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Timofey Titovets > Sent: 23 June 2015 06:36 > To: maoq
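The standard VAAI toggles on an ESXi host look like this (0 disables, 1 re-enables); HardwareAcceleratedInit is the one mentioned above:

  esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
  esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
  esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking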

Re: [ceph-users] New cluster in unhealthy state

2015-06-22 Thread Nick Fisk
Ok, some things to check/confirm:

- Make sure all your networking is ok; we have seen lots of problems related to jumbo frames not being correctly configured across nodes/switches. Test by pinging with large packets between hosts (see the sketch below). This includes separate public/cluster networks.
-
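A quick way to test jumbo frames end to end (the target address is a placeholder; 8972 = 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header, and -M do forbids fragmentation):

  ping -M do -s 8972 -c 4 10.0.0.2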