Re: [ceph-users] Apply for an official mirror at CN

2017-04-05 Thread Wido den Hollander
> On 5 April 2017 at 8:14, SJ Zhu wrote: > > > Wido, ping? > This might take a while! It has to go through a few hops to get fixed. It's on my radar! Wido > On Sat, Apr 1, 2017 at 8:40 PM, SJ Zhu wrote: > > On Sat, Apr 1, 2017 at 8:10 PM, Wido den Hollander wrote: > >> Great! Very

[ceph-users] write to ceph hangs

2017-04-05 Thread Laszlo Budai
Hello, We have an issue when writing to Ceph. From time to time a write operation seems to hang for a few seconds. We've seen https://bugzilla.redhat.com/show_bug.cgi?id=1389503, where it is said that when the qemu process reaches the max open files limit, "the guest OS shou
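
A minimal sketch of raising the per-process open files limit for qemu via libvirt, assuming the max_files setting is available in your libvirt version (the value and paths here are illustrative only):

    # /etc/libvirt/qemu.conf -- raise the fd limit handed to qemu processes
    max_files = 32768

    # restart libvirtd, then confirm the limit of a running guest process
    systemctl restart libvirtd
    grep "open files" /proc/$(pgrep -of qemu)/limits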

Re: [ceph-users] Apply for an official mirror at CN

2017-04-05 Thread Shinobu Kinjo
Adding Patrick, who might be the best person. Regards, On Wed, Apr 5, 2017 at 6:16 PM, Wido den Hollander wrote: > >> On 5 April 2017 at 8:14, SJ Zhu wrote: >> >> >> Wido, ping? >> > > This might take a while! It has to go through a few hops to get fixed. > > It's on my radar! > > Wido >

[ceph-users] bluestore - OSD booting issue continuously

2017-04-05 Thread nokia ceph
Hello, Env:- 11.2.0 bluestore, EC 4+1, RHEL 7.2. We are facing an issue where one OSD keeps booting again and again, which is driving the cluster crazy :( . As you can see, one PG got into an inconsistent state when we tried to repair that particular PG, as its primary OSD went down. After some time we found some tr
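
For reference, the usual way to locate and repair an inconsistent PG looks roughly like this (the PG id is a placeholder; run the repair only once the PG's OSDs are back up):

    ceph health detail                # lists the inconsistent PG, e.g. "pg 4.1f is ... inconsistent"
    ceph pg repair 4.1f               # ask the primary OSD to repair that PG
    ceph osd tree | grep -w down      # check which OSD keeps flapping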

Re: [ceph-users] radosgw leaking objects

2017-04-05 Thread Luis Periquito
To try and make my life easier, do you already have such a script? Also, has the source of the orphans been found, or will they continue to appear after the upgrade to the newer version? thanks, On Mon, Apr 3, 2017 at 4:59 PM, Yehuda Sadeh-Weinraub wrote: > On Mon, Apr 3, 2017 at 1:32 AM, L
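
A hedged sketch of the orphan-search tooling available around this release (pool and job names are placeholders; verify the subcommands exist in your radosgw-admin build):

    radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans-scan1
    radosgw-admin orphans list-jobs
    radosgw-admin orphans finish --job-id=orphans-scan1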

Re: [ceph-users] Client's read affinity

2017-04-05 Thread Wes Dillingham
This is a big development for us. I have not heard of this option either. I am excited to play with this feature and the implications it may have in improving RBD reads in our multi-datacenter RBD pools. Just to clarify the following options: "rbd localize parent reads = true" and "crush location
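
For context, a minimal client-side ceph.conf sketch, assuming the CRUSH bucket names below are replaced with ones from your own map:

    [client]
    rbd localize parent reads = true
    # describe where this client sits in the CRUSH hierarchy
    crush location = "datacenter=dc1 rack=rack3 host=hv-01"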

Re: [ceph-users] radosgw global quotas - how to set in jewel?

2017-04-05 Thread Casey Bodley
A new set of 'radosgw-admin global quota' commands were added for this, which we'll backport to kraken and jewel. You can view the updated documentation here: http://docs.ceph.com/docs/master/radosgw/admin/#reading-writing-global-quotas Thanks again for pointing this out, Casey On 04/03/2017
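
A short usage sketch based on the linked documentation (quota values are examples only, and flag names may differ slightly between releases):

    radosgw-admin global quota set --quota-scope=bucket --max-objects=10000
    radosgw-admin global quota enable --quota-scope=bucket
    radosgw-admin global quota get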

[ceph-users] CDM Today @ 12:30p EDT

2017-04-05 Thread Patrick McGarry
Hey cephers, Just a friendly reminder that the Ceph Developer Monthly call will be at 12:30 EDT / 16:30 UTC. We already have a few things on the docket for discussion, but if you are doing any Ceph feature or backport work, please add your item to the list to be discussed: http://wiki.ceph.com/CD

Re: [ceph-users] Client's read affinity

2017-04-05 Thread Alejandro Comisario
Another thing that I would love to ask and clarify: would this work for OpenStack VMs that use Cinder, instead of VMs that use the direct integration between Nova and Ceph? We use Cinder bootable volumes and normal Cinder-attached volumes on our VMs. thx On Wed, Apr 5, 2017 at 10:36 AM, Wes Dilling

Re: [ceph-users] Client's read affinity

2017-04-05 Thread Jason Dillaman
Yes, it's a general solution for any read-only parent images. This will *not* help localize reads for any portions of your image that have already been copied (via copy-on-write) from the parent image down to the cloned image (i.e. the Cinder volume or Nova disk). On Wed, Apr 5, 2017 at 10:25 AM, Alejandro

Re: [ceph-users] Apply for an official mirror at CN

2017-04-05 Thread Patrick McGarry
Ok, I have added the following to ceph dns: cn IN CNAME mirrors.ustc.edu.cn. Wido, I haven't added this to the website, doc, or anywhere else for informational purposes, but the mechanics should be live and accepting traffic in a bit here. Let me know if you guys need anything else. Than

Re: [ceph-users] Apply for an official mirror at CN

2017-04-05 Thread SJ Zhu
On Wed, Apr 5, 2017 at 10:55 PM, Patrick McGarry wrote: > Ok, I have added the following to ceph dns: > > cn IN CNAME mirrors.ustc.edu.cn. Great, thanks. Besides, I have enabled HTTPS for https://cn.ceph.com just now. -- Regards, Shengjing Zhu __
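
Once the record has propagated, a quick way to verify it (assuming dig and curl are available on the client):

    dig +short CNAME cn.ceph.com     # should print mirrors.ustc.edu.cn.
    curl -I https://cn.ceph.com/     # confirms the HTTPS endpoint answers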

[ceph-users] ceph df space for rgw.buckets.data shows used even when files are deleted

2017-04-05 Thread Deepak Naidu
Folks, Trying to test the S3 object GW. When I try to upload any files the space is shown as used (that's normal behavior), but when the object is deleted it still shows as used (I don't understand this). Below is an example. Currently there are no files in the entire S3 bucket, but it still shows space used. An

[ceph-users] librbd + rbd-nbd

2017-04-05 Thread Prashant Murthy
Hi all, I wanted to ask if anybody is using librbd (user mode lib) with rbd-nbd (kernel module) on their Ceph clients. We're currently using krbd, but that doesn't support some of the features (such as rbd mirroring). So, I wanted to check if anybody has experience running with nbd + librbd on th
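
For context, a minimal rbd-nbd round trip looks roughly like this (pool and image names are placeholders; the journaling feature shown is what rbd mirroring needs):

    rbd create mypool/myimage --size 10240 --image-feature layering,exclusive-lock,journaling
    rbd-nbd map mypool/myimage       # prints the device it attached, e.g. /dev/nbd0
    mkfs.xfs /dev/nbd0 && mount /dev/nbd0 /mnt
    rbd-nbd unmap /dev/nbd0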

Re: [ceph-users] ceph df space for rgw.buckets.data shows used even when files are deleted

2017-04-05 Thread Ben Hines
Ceph's RadosGW uses garbage collection by default. Try running 'radosgw-admin gc list' to list the objects to be garbage collected, or 'radosgw-admin gc process' to trigger them to be deleted. -Ben On Wed, Apr 5, 2017 at 12:15 PM, Deepak Naidu wrote: > Folks, > > > > Trying to test the S3 obje
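
A short usage sketch (the --include-all flag, if your release has it, also lists objects still inside the rgw_gc_obj_min_wait window):

    radosgw-admin gc list --include-all
    radosgw-admin gc process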

Re: [ceph-users] ceph df space for rgw.buckets.data shows used even when files are deleted

2017-04-05 Thread Deepak Naidu
Thanks Ben. Is there a tuning param I can use to speed up the process? "rgw_gc_max_objs": "32", "rgw_gc_obj_min_wait": "7200", "rgw_gc_processor_max_time": "3600", "rgw_gc_processor_period": "3600", -- Deepak From: Ben Hines [mailto:bhi...@gmail.com] Sent: Wednesday, Apri
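
A hedged ceph.conf sketch for more aggressive garbage collection, assuming the section name is adjusted to your own rgw instance and the values are tuned to taste:

    [client.rgw.gateway1]
    rgw gc obj min wait = 300          # seconds before a deleted object becomes eligible for gc
    rgw gc processor period = 600      # how often a gc pass is started
    rgw gc processor max time = 600    # maximum runtime of a single pass
    rgw gc max objs = 97               # number of gc shards; odd/prime values spread load more evenly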

Re: [ceph-users] clock skew

2017-04-05 Thread Dan Mick
> Just to follow-up on this: we have yet experienced a clock skew since we > starting using chrony. Just three days ago, I know, bit still... did you mean "we have not yet..."? > Perhaps you should try it too, and report if it (seems to) work better > for you as well. > > But again, just three
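
For reference, a couple of quick checks when chasing clock skew (ceph time-sync-status assumes a Jewel or newer monitor):

    chronyc tracking                   # local offset versus chrony's selected source
    chronyc sources -v                 # state of each configured time source
    ceph time-sync-status              # per-monitor skew as seen by the cluster
    ceph health detail | grep -i skew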

[ceph-users] performance issues

2017-04-05 Thread PYH
Hi, we have 21 hosts, each with 12 disks (4 TB SATA), no SSD as journal or cache tier, so the total OSD count is 21x12=252. There are three separate hosts for monitor nodes. The network is 10Gbps and replicas are 3. Under this setup, we can get only 3000+ IOPS for random writes for the whole cluster. Test

Re: [ceph-users] performance issues

2017-04-05 Thread PYH
What I meant is, when the total IOPS reaches 3000+, the whole cluster gets very slow. So, any ideas? thanks. On 2017/4/6 9:51, PYH wrote: Hi, we have 21 hosts, each with 12 disks (4 TB SATA), no SSD as journal or cache tier, so the total OSD count is 21x12=252. There are three separate hosts fo

Re: [ceph-users] performance issues

2017-04-05 Thread Christian Balzer
Hello, first and foremost, do yourself and everybody else a favor by thoroughly searching the net and thus the ML archives. This kind of question has come up and been answered countless times. On Thu, 6 Apr 2017 09:59:10 +0800 PYH wrote: > what I meant is, when the total IOPS reach to 3000+, the t
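
A rough back-of-the-envelope estimate for this hardware, assuming ~100 write IOPS per 7.2k SATA drive and filestore journals co-located on the same disks:

    252 OSDs x ~100 IOPS                      = ~25,000 raw write IOPS
      / 3 replicas                            = ~8,400
      / 2 (journal and data on one spindle)   = ~4,200 sustainable client write IOPS

So 3000+ IOPS is already in the ballpark of what SATA spindles without SSD journals can sustain; beyond that point requests simply queue up and the whole cluster feels slow.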

[ceph-users] rbd iscsi gateway question

2017-04-05 Thread Brady Deetz
I apologize if this is a duplicate of something recent, but I'm not finding much. Does the issue still exist where dropping an OSD results in a LUN's I/O hanging? I'm attempting to determine if I have to move off of VMWare in order to safely use Ceph as my VM storage. _

Re: [ceph-users] rbd iscsi gateway question

2017-04-05 Thread Adrian Saul
I am not sure if there is a hard and fast rule you are after, but pretty much anything that would cause ceph transactions to be blocked (flapping OSD, network loss, hung host) has the potential to block RBD IO which would cause your iSCSI LUNs to become unresponsive for that period. For the mo

[ceph-users] 3 monitor down and recovery

2017-04-05 Thread 云平台事业部
Hello, I am simulating the recovery when all 3 of our monitors are down in our test environment. I referred to the Ceph mon troubleshooting document, but encountered the problem that “OSD has the store locked”. I stopped all 3 mons and plan to get the monmap from the OSDs. To get the monmap, the
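
The “OSD has the store locked” error usually just means the ceph-osd daemon is still running and holding its store open. A hedged sketch of the documented “recovery using OSDs” flow, with OSD ids and paths as placeholders:

    systemctl stop ceph-osd@0                     # the tool cannot open a store a running OSD holds locked
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --op update-mon-db --mon-store-path /tmp/mon-store
    # repeat for every OSD on every host, accumulating into the same mon-store
    # path, then rebuild the keyring and install the store on a monitor as
    # described in the troubleshooting-mon documentation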