Re: [ceph-users] XFS attempt to access beyond end of device

2017-03-27 Thread Brad Hubbard
On Tue, Mar 28, 2017 at 4:22 PM, Marcus Furlong wrote: > On 22 March 2017 at 19:36, Brad Hubbard wrote: >> On Wed, Mar 22, 2017 at 5:24 PM, Marcus Furlong wrote: > >>> [435339.965817] [ cut here ] >>> [435339.965874] WARNING: at fs/xfs/xfs_aops.c:1244 >>> xfs_vm_releasepa

Re: [ceph-users] XFS attempt to access beyond end of device

2017-03-27 Thread Marcus Furlong
On 22 March 2017 at 19:36, Brad Hubbard wrote: > On Wed, Mar 22, 2017 at 5:24 PM, Marcus Furlong wrote: >> [435339.965817] [ cut here ] >> [435339.965874] WARNING: at fs/xfs/xfs_aops.c:1244 >> xfs_vm_releasepage+0xcb/0x100 [xfs]() >> [435339.965876] Modules linked in: vfa

Re: [ceph-users] ceph-rest-api's behavior

2017-03-27 Thread Brad Hubbard
I've copied Dan, who may have some thoughts on this and has been involved with this code. On Tue, Mar 28, 2017 at 3:58 PM, Mika c wrote: > Hi Brad, > Thanks for your help. I found that was my problem: I forgot to give the keyring file a name containing the word "keyring". > > And sorry to bother you again. Is it possib

Re: [ceph-users] ceph-rest-api's behavior

2017-03-27 Thread Mika c
Hi Brad, Thanks for your help. I found that was my problem: I forgot to give the keyring file a name containing the word "keyring". And sorry to bother you again: is it possible to create a minimum-privilege client for the api to run? Best wishes, Mika 2017-03-24 19:32 GMT+08:00 Brad Hubbard : > On Fri, Mar 24, 201
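A rough sketch of such a restricted client (the client name, read-only caps and paths are assumptions; the caps actually required depend on which API endpoints get called):

    # create a keyring for a hypothetical read-only client.restapi
    ceph auth get-or-create client.restapi \
        mon 'allow r' osd 'allow r' mds 'allow r' \
        -o /etc/ceph/ceph.client.restapi.keyring
    # run the API as that client
    ceph-rest-api -n client.restapi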

Re: [ceph-users] Ceph OSD network with IPv6 SLAAC networks?

2017-03-27 Thread Richard Hesse
Nix the second question; as I understand it, ceph doesn't work in mixed IPv6 and legacy IPv4 environments. Still, I would like to hear from people running it in SLAAC environments. On Mon, Mar 27, 2017 at 12:49 PM, Richard Hesse wrote: > Has anyone run their Ceph OSD cluster network on IPv6 using

Re: [ceph-users] RBD image perf counters: usage, access

2017-03-27 Thread Masha Atakova
Hi Yang, > Do you mean getting the perf counters via the API? First of all, this counter is only for a particular ImageCtx (connected client); then you can read the counters with the perf dump command from my last mail, I think. Yes, I did mean getting counters via the API. And it looks like I can adapt this admin-daem
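For reference, reading those counters off a client admin socket looks roughly like this (the socket path and client id are examples; the socket only exists while a client/ImageCtx is running with an admin socket configured in its ceph.conf section):

    # dump all perf counters of the running librbd client, including the per-image librbd sections
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump
    # list which counters exist and their types
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf schema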

Re: [ceph-users] New hardware for OSDs

2017-03-27 Thread Christian Balzer
Hello, On Mon, 27 Mar 2017 17:48:38 +0200 Mattia Belluco wrote: > I mistakenly answered to Wido instead of the whole Mailing list ( weird > ml settings I suppose) > > Here it is my message: > > > Thanks for replying so quickly. I commented inline. > > On 03/27/2017 01:34 PM, Wido den Holland

Re: [ceph-users] New hardware for OSDs

2017-03-27 Thread Christian Balzer
Hello, On Mon, 27 Mar 2017 16:09:09 +0100 Nick Fisk wrote: > > -Original Message- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > > Wido den Hollander > > Sent: 27 March 2017 12:35 > > To: ceph-users@lists.ceph.com; Christian Balzer > > Subject: Re: [ceph-

Re: [ceph-users] How to check SMR vs PMR before buying disks?

2017-03-27 Thread Christian Balzer
On Mon, 27 Mar 2017 17:32:53 -0600 Adam Carheden wrote: > What's the biggest PMR disk I can buy, and how do I tell if a disk is PMR? > > I'm well aware that I shouldn't use SMR disks: > http://ceph.com/planet/do-not-use-smr-disks-with-ceph/ > > But newegg and the like don't seem to advertise SMR

[ceph-users] How to check SMR vs PMR before buying disks?

2017-03-27 Thread Adam Carheden
What's the biggest PMR disk I can buy, and how do I tell if a disk is PMR? I'm well aware that I shouldn't use SMR disks: http://ceph.com/planet/do-not-use-smr-disks-with-ceph/ But newegg and the like don't seem to advertise SMR vs PMR and I can't even find it on manufacturer's websites (at least
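Two rough command-line checks, neither of which is authoritative, since drive-managed SMR is usually not reported by the drive at all and the vendor datasheet remains the only reliable source:

    # recent kernels/util-linux flag host-aware and host-managed zoned (SMR) devices
    lsblk -o NAME,SIZE,ZONED
    # note the exact model string and cross-check it against the vendor's datasheet
    smartctl -i /dev/sda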

Re: [ceph-users] libjemalloc.so.1 not used?

2017-03-27 Thread Alexandre DERUMIER
You need to recompile ceph with jemalloc, without having the tcmalloc dev libraries installed. LD_PRELOAD has never worked for jemalloc and ceph. - Original Message - From: "Engelmann Florian" To: "ceph-users" Sent: Monday 27 March 2017 16:54:33 Subject: [ceph-users] libjemalloc.so.1 not used? Hi, we are tes
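Roughly, the build-time switches look like this; flag names differ between the autotools builds (jewel and earlier) and the cmake builds (kraken and later), so treat this as a sketch rather than an exact recipe for every release:

    # autotools (jewel and earlier)
    ./autogen.sh && ./configure --without-tcmalloc --with-jemalloc
    # cmake (kraken and later), from the build directory
    cmake -DALLOCATOR=jemalloc ..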

Re: [ceph-users] osds down after upgrade hammer to jewel

2017-03-27 Thread George Mihaiescu
Make sure the OSD processes on the Jewel node are running. If you didn't change the ownership to user ceph, they won't start. > On Mar 27, 2017, at 11:53, Jaime Ibar wrote: > > Hi all, > > I'm upgrading ceph cluster from Hammer 0.94.9 to jewel 10.2.6. > > The ceph cluster has 3 servers (one
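For reference, the usual fix from the jewel release notes is either to chown the data (which can take a long time on large disks) or to keep the daemons running as root; a sketch:

    # with the OSDs on the upgraded node stopped:
    chown -R ceph:ceph /var/lib/ceph
    # alternative: keep ownership as-is and let the daemons keep running as root, via ceph.conf:
    #   [global]
    #   setuser match path = /var/lib/ceph/$type/$cluster-$id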

[ceph-users] Kraken release and RGW --> "S3 bucket lifecycle API has been added. Note that currently it only supports object expiration."

2017-03-27 Thread ceph . novice
Hi Cephers. Couldn't find any special documentation about the "S3 object expiration" so I assume it should work "AWS S3 like" (?!?) ... BUT ... we have a test cluster based on 11.2.0 - Kraken and I set some object expiration dates via CyberDuck and DragonDisk, but the objects are still there,
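For what it's worth, setting expiration through the plain S3 API looks something like the sketch below (the endpoint and bucket are made up, and RGW applies lifecycle rules on a periodic background cycle, so expired objects do not disappear immediately):

    # write a one-rule lifecycle document and apply it to the bucket
    echo '{ "Rules": [ { "ID": "expire-all", "Prefix": "", "Status": "Enabled", "Expiration": { "Days": 1 } } ] }' > lifecycle.json
    aws --endpoint-url http://rgw.example.com s3api put-bucket-lifecycle-configuration \
        --bucket testbucket --lifecycle-configuration file://lifecycle.json
    # on the RGW side, list and manually kick the lifecycle processing (availability depends on the release)
    radosgw-admin lc list
    radosgw-admin lc process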

Re: [ceph-users] disk timeouts in libvirt/qemu VMs...

2017-03-27 Thread Peter Maloney
I can't guarantee it's the same as my issue, but from that it sounds the same. Jewel 10.2.4, 10.2.5 tested; hypervisors are proxmox qemu-kvm, using librbd; 3 ceph nodes with mon+osd on each. - faster journals, more disks, bcache, rbd_cache, fewer VMs on ceph, iops and bw limits on client side, jumbo
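On the "iops and bw limits on client side" point, with libvirt that can be done per virtual disk without touching ceph at all; a sketch with made-up domain/device names and limits:

    # throttle one attached disk to 500 IOPS and 100 MB/s, applied to the running guest
    virsh blkdeviotune myvm vda --total-iops-sec 500 --total-bytes-sec 104857600 --live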

[ceph-users] Ceph OSD network with IPv6 SLAAC networks?

2017-03-27 Thread Richard Hesse
Has anyone run their Ceph OSD cluster network on IPv6 using SLAAC? I know that ceph supports IPv6, but I'm not sure how it would deal with the address rotation in SLAAC, permanent vs outgoing address, etc. It would be very nice for me, as I wouldn't have to run any kind of DHCP server or use static
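For context, the IPv6-related pieces of ceph.conf are roughly the following (addresses use the documentation prefix and are placeholders; how the daemons cope with SLAAC address rotation and temporary addresses is exactly the open question here):

    [global]
    ms bind ipv6 = true
    public network = 2001:db8:1::/64
    cluster network = 2001:db8:2::/64
    mon host = [2001:db8:1::10]:6789, [2001:db8:1::11]:6789, [2001:db8:1::12]:6789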

[ceph-users] disk timeouts in libvirt/qemu VMs...

2017-03-27 Thread Hall, Eric
In an OpenStack (mitaka) cloud, backed by a ceph cluster (10.2.6 jewel), using libvirt/qemu (1.3.1/2.5) hypervisors on Ubuntu 14.04.5 compute and ceph hosts, we occasionally see hung processes (usually during boot, but otherwise as well), with errors reported in the instance logs as shown below.

Re: [ceph-users] radosgw global quotas - how to set in jewel?

2017-03-27 Thread Graham Allan
I'm following up to myself here, but I'd love to hear if anyone knows how the global quotas can be set in jewel's radosgw. I haven't found anything which has an effect - the documentation says to use: radosgw-admin region-map get > regionmap.json ...edit the json file radosgw-admin region-map s
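As a stop-gap while the global default is unclear, quotas can still be applied per user (or per bucket) directly; a sketch with a made-up uid and limits (option names may vary slightly between releases):

    radosgw-admin quota set --quota-scope=user --uid=someuser --max-objects=1000000 --max-size-kb=1073741824
    radosgw-admin quota enable --quota-scope=user --uid=someuser
    radosgw-admin user info --uid=someuser   # the quota block shows the result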

[ceph-users] osds down after upgrade hammer to jewel

2017-03-27 Thread Jaime Ibar
Hi all, I'm upgrading the ceph cluster from Hammer 0.94.9 to Jewel 10.2.6. The ceph cluster has 3 servers (one mon and one mds each) and another 6 servers with 12 osds each. The mons and mds have been successfully upgraded to the latest jewel release; however, after upgrading the first osd server(

Re: [ceph-users] New hardware for OSDs

2017-03-27 Thread Mattia Belluco
I mistakenly replied to Wido instead of the whole mailing list (weird ml settings, I suppose). Here is my message: Thanks for replying so quickly. I commented inline. On 03/27/2017 01:34 PM, Wido den Hollander wrote: > >> On 27 March 2017 at 13:22, Christian Balzer wrote: >> >> >> >> Hell

Re: [ceph-users] New hardware for OSDs

2017-03-27 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Wido den Hollander > Sent: 27 March 2017 12:35 > To: ceph-users@lists.ceph.com; Christian Balzer > Subject: Re: [ceph-users] New hardware for OSDs > > > > On 27 March 2017 at 13:22, C

[ceph-users] libjemalloc.so.1 not used?

2017-03-27 Thread Engelmann Florian
Hi, we are testing Ceph as block storage (XFS based OSDs) running in a hyper converged setup with KVM as hypervisor. We are using NVMe SSD only (Intel DC P5320) and I would like to use jemalloc on Ubuntu xenial (current kernel 4.4.0-64-generic). I tried to use /etc/default/ceph and uncommented
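A quick way to check which allocator a running OSD actually has mapped, regardless of what LD_PRELOAD was supposed to do, is something along these lines:

    # look for jemalloc/tcmalloc among the shared objects mapped by one ceph-osd process
    grep -e jemalloc -e tcmalloc /proc/$(pidof -s ceph-osd)/maps | awk '{print $6}' | sort -u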

Re: [ceph-users] Questions on rbd-mirror

2017-03-27 Thread Dongsheng Yang
Jason, do you think it's a good idea to introduce an rbd_config object to record some per-pool configuration, such as default_features? That means we could set some configuration differently in different pools. In this way, we could also handle the per-pool settings in rbd-mirror. Thanx Yang O

Re: [ceph-users] Questions on rbd-mirror

2017-03-27 Thread Jason Dillaman
On Mon, Mar 27, 2017 at 4:00 AM, Dongsheng Yang wrote: > Hi Fulvio, > > On 03/24/2017 07:19 PM, Fulvio Galeazzi wrote: > > Hallo, apologies for my (silly) questions, I did try to find some doc on > rbd-mirror but was unable to, apart from a number of pages explaining how to > install it. > > My en

[ceph-users] Re: leveldb takes a lot of space

2017-03-27 Thread Chenyehua
@ Niv Azriel: What is your leveldb version, and has it been fixed now? @ Wido den Hollander: I am also seeing a similar problem: the size of my leveldb is about 17 GB (300+ osds), and there are a lot of sst files (each sst file is 2 MB) in /var/lib/ceph/mon. (a networ
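If the store is mostly stale sst files, compacting the monitor stores usually shrinks them considerably; a sketch (the mon id is an example):

    # trigger an online compaction of one monitor's store
    ceph tell mon.mon01 compact
    # or compact automatically every time the monitor starts, via ceph.conf:
    #   [mon]
    #   mon compact on start = true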

Re: [ceph-users] OSDs cannot match up with fast OSD map changes (epochs) during recovery

2017-03-27 Thread Wido den Hollander
> On 27 March 2017 at 8:41, Muthusamy Muthiah wrote: > > > Hi Wido, > > Yes, slow map updates were happening and the CPU was hitting 100%. So it indeed seems you are CPU bound at that moment. That's a problem when you have a lot of map changes to work through on the OSDs. It's recommended t
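One rough way to see how far behind an OSD is on map epochs is its admin socket status (the osd id below is an example):

    # oldest_map/newest_map show the epoch range this OSD currently holds
    ceph daemon osd.0 status
    # compare newest_map with the cluster's current osdmap epoch
    ceph osd stat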

Re: [ceph-users] object store backup tool recommendations

2017-03-27 Thread Blair Bethwaite
I suppose the other option here, which I initially dismissed because Red Hat are not supporting it, is to have a CephFS dir/tree bound to a cache-tier fronted EC pool. Is anyone having luck with such a setup? On 3 March 2017 at 21:40, Blair Bethwaite wrote: > Hi Marc, > > Whilst I agree CephFS wo
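For anyone wanting to try it, the tiering setup itself is only a few commands; pool names below are placeholders, and sizing the cache plus its flush/evict thresholds is the part that actually needs care:

    ceph osd tier add cephfs_data_ec cephfs_cache
    ceph osd tier cache-mode cephfs_cache writeback
    ceph osd tier set-overlay cephfs_data_ec cephfs_cache
    # cap the cache so it flushes/evicts before filling up (1 TiB here, purely illustrative)
    ceph osd pool set cephfs_cache target_max_bytes 1099511627776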

Re: [ceph-users] object store backup tool recommendations

2017-03-27 Thread Blair Bethwaite
Thanks for the useful reply Robin and sorry for not getting back sooner... > On Fri, Mar 03, 2017 at 18:01:00 +, Robin H. Johnson wrote: > On Fri, Mar 03, 2017 at 10:55:06 +1100, Blair Bethwaite wrote: >> Does anyone have any recommendations for good tools to perform >> file-system/tree backup

Re: [ceph-users] New hardware for OSDs

2017-03-27 Thread Wido den Hollander
> On 27 March 2017 at 13:22, Christian Balzer wrote: > > > > Hello, > > On Mon, 27 Mar 2017 12:27:40 +0200 Mattia Belluco wrote: > > > Hello all, > > we are currently in the process of buying new hardware to expand an > > existing Ceph cluster that already has 1200 osds. > > That's quite s

Re: [ceph-users] leveldb takes a lot of space

2017-03-27 Thread Wido den Hollander
> On 26 March 2017 at 9:44, Niv Azriel wrote: > > > after network issues, the ceph cluster fails. > leveldb grows and takes a lot of space. > ceph mon can't write to leveldb because there is not enough space on > filesystem. > (there are a lot of ldb files in /var/lib/ceph/mon) > It is normal that t

Re: [ceph-users] New hardware for OSDs

2017-03-27 Thread Christian Balzer
Hello, On Mon, 27 Mar 2017 12:27:40 +0200 Mattia Belluco wrote: > Hello all, > we are currently in the process of buying new hardware to expand an > existing Ceph cluster that already has 1200 osds. That's quite sizable. Is the expansion driven by the need for more space (big data?) or to incre

Re: [ceph-users] Recompiling source code - to find exact RPM

2017-03-27 Thread nokia ceph
Hey Brad, Many thanks for the explanation... > ~~~ > WARNING: the following dangerous and experimental features are enabled: > ~~~ > Can I ask why you want to disable this warning? We are using bluestore with kraken and we are aware that this is in tech preview. To hide these warnings we compiled like thi

[ceph-users] New hardware for OSDs

2017-03-27 Thread Mattia Belluco
Hello all, we are currently in the process of buying new hardware to expand an existing Ceph cluster that already has 1200 osds. We are currently using 24 * 4 TB SAS drives per OSD node, with an SSD journal shared among 4 osds. For the upcoming expansion we were thinking of switching to either 6 or 8 TB

[ceph-users] Kraken + Bluestore

2017-03-27 Thread Ashley Merrick
Hi, Does anyone have a cluster of a decent scale running on Kraken and bluestore? How are you finding it? Have you had any big issues arise? Was it running non-bluestore before, and have you noticed any improvement? Read? Write? IOPS? ,Ashley Sent from my iPhone _

Re: [ceph-users] RBD image perf counters: usage, access

2017-03-27 Thread Dongsheng Yang
On 03/27/2017 04:06 PM, Masha Atakova wrote: Hi Yang, Hi Masha, Thank you for your reply. This is very useful indeed that there are many ImageCtx objects for one image. But in my setting, I don't have any particular ceph client connected to ceph (I could, but this is not the point). I'm

Re: [ceph-users] RBD image perf counters: usage, access

2017-03-27 Thread Masha Atakova
Hi Yang, Thank you for your reply. It is indeed very useful to know that there are many ImageCtx objects for one image. But in my setting, I don't have any particular ceph client connected to ceph (I could, but this is not the point). I'm trying to get metrics for a particular image while not perfor

Re: [ceph-users] Questions on rbd-mirror

2017-03-27 Thread Dongsheng Yang
Hi Fulvio, On 03/24/2017 07:19 PM, Fulvio Galeazzi wrote: Hallo, apologies for my (silly) questions, I did try to find some docs on rbd-mirror but was unable to, apart from a number of pages explaining how to install it. My environment is CentOS 7 and Ceph 10.2.5. Can anyone help me understand
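For the basic setup, the pool-level commands look roughly like this, run against both clusters (pool, client and cluster names are examples):

    # enable journaling-based mirroring for a whole pool
    rbd mirror pool enable rbd pool
    # register the peer cluster
    rbd mirror pool peer add rbd client.remote@remote
    # check replication state
    rbd mirror pool status rbd --verbose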

[ceph-users] PG Calculation query

2017-03-27 Thread nokia ceph
Hello, We are facing some performance issues with rados benchmarking on a 5 node cluster with PG num 4096 vs 8192. As per the PG calculation, our specification is:
  Size  OSDs  %Data  Targets (PGs/OSD)  PG count
  5     340   100    100                8192
  5     340   100    50                 4096
With the 8192 PG count we got good performance with 409
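For reference, those two counts fall out of the usual pgcalc rule of thumb, (OSDs x target PGs per OSD x %data) / pool size, rounded up to the next power of two:

    # 100 target PGs per OSD: (340 * 100 * 1.0) / 5 = 6800 -> 8192
    # 50  target PGs per OSD: (340 *  50 * 1.0) / 5 = 3400 -> 4096
    echo $(( 340 * 100 / 5 ))   # 6800
    echo $(( 340 *  50 / 5 ))   # 3400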