[ceph-users] EINVAL: (22) Invalid argument while doing ceph osd crush move

2016-06-13 Thread George Shuklin
Hello. I'm playing with CRUSH and ran into an issue with the ceph osd crush move command. I've added an intermediate 'blabla' bucket type to the map: type 0 osd, type 1 blabla, type 2 host. I've added a few 'blabla' buckets, and the osd tree now looks like this: -12 1.0 host ssd-pp11 1 0.25000
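For reference, a minimal sketch of the commands usually involved when working with a custom bucket type; the bucket name ssd-group1 below is hypothetical, and the custom type must already exist in the CRUSH map before buckets of that type can be created or used as a move location:

    # inspect the current CRUSH map (read-only)
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # create a bucket of the custom 'blabla' type and move a host under it
    # (ssd-group1 is a made-up name for illustration)
    ceph osd crush add-bucket ssd-group1 blabla
    ceph osd crush move ssd-pp11 blabla=ssd-group1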

Re: [ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-13 Thread David
"rbd ls" does work with 4.6 (just tested with 4.6.1-1.el7.elrepo.x86_64). That's against a 10.2.0 cluster with ceph-common-10.2.0-0 What's the error you're getting? Are you using default rbd pool or specifying pool with '-p'? I'd recommend checking your ceph-common package. Thanks, On Fri, Jun 1

Re: [ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-13 Thread Ilya Dryomov
On Fri, Jun 10, 2016 at 9:29 PM, Michael Kuriger wrote: > Hi Everyone, > I’ve been running jewel for a while now, with tunables set to hammer. > However, I want to test the new features but cannot find a fully compatible > Kernel for CentOS 7. I’ve tried a few of the elrepo kernels - elrepo-ke
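For reference, a hedged sketch of checking and pinning the CRUSH tunables profile, which is usually the knob that decides whether an older kernel client can talk to a Jewel cluster; whether a particular elrepo kernel supports the jewel profile is the open question in this thread:

    ceph osd crush show-tunables     # show the profile currently in effect
    ceph osd crush tunables hammer   # stay on the hammer profile for older kernel clients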

Re: [ceph-users] Move RGW bucket index

2016-06-13 Thread Василий Ангапов
Is there any way to move existing non-sharded bucket index to sharded one? Or is there any way (online or offline) to move all objects from non-sharded bucket to sharded one? 2016-06-13 11:38 GMT+03:00 Sean Redmond : > Hi, > > I have a few buckets here with >10M objects in them and the index pool

Re: [ceph-users] Move RGW bucket index

2016-06-13 Thread Sean Redmond
Hi, AFAIK it's not possible to move from a non-sharded bucket index to a sharded one; it must be set at the time a bucket is created. You could use 's3cmd sync' to copy data from one bucket to another bucket, but with 5M objects it's going to take a long time. Thanks On Mon, Jun 13, 2016 at 12:11 PM, Ва
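For new buckets, index sharding is controlled by a radosgw config option; a minimal sketch, where the client section name is whatever your gateway instance is called and 16 shards is just an example value:

    [client.rgw.gateway1]
    rgw override bucket index max shards = 16

It only affects buckets created after the gateway is restarted with the setting; existing buckets keep the index layout they were created with, hence the suggestion to sync into a freshly created (sharded) bucket.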

[ceph-users] Issue installing ceph with ceph-deploy

2016-06-13 Thread Fran Barrera
Hi all, I have a problem installing ceph jewel with ceph-deploy (1.5.33) on ubuntu 14.04.4 (openstack instance). This is my setup: ceph-admin ceph-mon ceph-osd-1 ceph-osd-2 I've followed these steps from the ceph-admin node: I have the user "ceph" created on all nodes and access via ssh key. 1

Re: [ceph-users] Issue installing ceph with ceph-deploy

2016-06-13 Thread George Shuklin
I believe this is the source of the issue (cited line). Purge all ceph packages from this node and remove the 'ceph' user/group, then retry. On 06/13/2016 02:46 PM, Fran Barrera wrote: [ceph-admin][WARNIN] usermod: user ceph is currently used by process 1303
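A rough sketch of that clean-up, assuming the node names from the original post and that nothing else on the node runs as the ceph user:

    # from the admin node: remove packages, data and old keys
    ceph-deploy purge ceph-mon ceph-osd-1 ceph-osd-2
    ceph-deploy purgedata ceph-mon ceph-osd-1 ceph-osd-2
    ceph-deploy forgetkeys

    # on the affected node: stop anything still running as 'ceph', drop the account
    pkill -u ceph
    userdel -r ceph
    groupdel ceph   # may already have been removed together with the user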

[ceph-users] Regarding Bi-directional Async Replication

2016-06-13 Thread Venkata Manojawa Paritala
Hi, Can you please let me know if bi-directional asynchronous replication between 2 Ceph clusters is possible? If yes, can you please guide me on how to do it? Greatly appreciate your quick response. Thanks & Regards, Manoj Paritala
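Assuming Jewel's rbd-mirror is what you are after: it supports two-way (bi-directional) replication when the rbd-mirror daemon runs on both clusters and each cluster is configured as the other's peer. A rough sketch with placeholder pool, image, client and cluster names:

    # on both clusters, enable pool-level mirroring for the 'rbd' pool
    rbd mirror pool enable rbd pool

    # on each cluster, add the other one as a peer (names are examples)
    rbd mirror pool peer add rbd client.admin@remote

    # images must have the journaling feature enabled to be mirrored
    rbd feature enable rbd/myimage exclusive-lock journaling

    # finally, run the rbd-mirror daemon on both sides so replication flows both ways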

Re: [ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-13 Thread Jason Dillaman
Alternatively, if you are using RBD format 2 images, you can run "rados -p listomapvals rbd_directory" to ensure it has a bunch of key/value pairs for your images. There was an issue noted [1] after upgrading to Jewel where the omap values were all missing on several v2 RBD image headers -- resul
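For completeness, the check Jason describes would look roughly like this, assuming format 2 images living in the default 'rbd' pool:

    rados -p rbd listomapvals rbd_directory   # should list key/value pairs for each image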

Re: [ceph-users] Journal partition owner's not change to ceph

2016-06-13 Thread Christian Sarrasin
In case this is useful, the steps to make this work are given here: http://tracker.ceph.com/issues/13833#note-2 (the bug context documents the shortcoming; I believe this happens if you create the journal partition manually). HTH, Christian On 12/06/16 10:18, Anthony D'Atri wrote: The GUID
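For anyone landing here later, the fix in that tracker note boils down to tagging the journal partition with the Ceph journal partition type GUID so the udev rules chown it to ceph:ceph; a hedged sketch, with the device and partition number as placeholders (double-check the GUID against the tracker note):

    sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
    partprobe /dev/sdb    # re-read the partition table so udev reacts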

[ceph-users] Question about object partial writes in RBD

2016-06-13 Thread George Shuklin
Hello. How are objects handled in RBD? If a user writes 16k to an RBD image with a 4MB object size, how much is written on the OSD side? 16k x replication or 4Mb x replication (+journals in both cases)? Thanks.

Re: [ceph-users] Question about object partial writes in RBD

2016-06-13 Thread Wido den Hollander
> Op 13 juni 2016 om 16:07 schreef George Shuklin : > > > Hello. > > How objects are handled in the rbd? If user writes 16k in the RBD image > with 4Mb object size, how much would be written in the OSD? 16k x > replication or 4Mb x replication (+journals for both cases)? > librbd will write
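A rough worked example, assuming a replicated pool with size = 3 and filestore journals: a 16 KiB write that lands inside one 4 MiB RBD object results in roughly 3 x 16 KiB written to the OSD data stores plus 3 x 16 KiB to the journals, on the order of 100 KiB in total, not 3 x 4 MiB. RADOS performs partial-object writes, so only the touched extent (plus metadata) is sent to the OSDs; the 4 MiB object size mainly determines how the image is striped across objects.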

[ceph-users] Issue with Calamari 1.3-7

2016-06-13 Thread Venkata Manojawa Paritala
Hi, I am having issues adding RedHat Ceph (10.2.1) nodes to Calamari 1.3-7. Below are more details. 1. On RHEL 7.2 VMs, configured Ceph (10.2.1) cluster with 3 mons and 19 osds . 2. Configured Calamari 1.3-7 on one node. Installation was done through ICE_SETUP with ISO Image. Diamond packages we

[ceph-users] strange unfounding of PGs

2016-06-13 Thread Csaba Tóth
Hi! I have a very strange problem. On Friday night I upgraded my small ceph cluster from hammer to jewel. Everything went well, but the chowning of the osd datadirs took a long time, so I skipped two OSDs and did the run-as-root trick. Yesterday evening I wanted to fix this, shut down the first OSD a
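For reference, the two usual ways of handling the ownership change during a hammer-to-jewel upgrade; the OSD id and paths below are examples:

    # option 1: fix ownership properly (can take a long time on large OSDs)
    systemctl stop ceph-osd@2
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
    systemctl start ceph-osd@2

    # option 2: the documented interim workaround - let daemons keep running as
    # root for directories still owned by root (in ceph.conf, [global] or [osd])
    setuser match path = /var/lib/ceph/$type/$cluster-$id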

Re: [ceph-users] Issue installing ceph with ceph-deploy

2016-06-13 Thread Tu Holmes
I have seen this. Just stop ceph and kill any ssh processes related to it. I had the same issue, and the fix for me was to enable root login, ssh to the node as root and run the env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install -o

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-06-13 Thread Mathias Buresch
Hey, I opened an issue at tracker.ceph.com -> http://tracker.ceph.com/issues/16266 -Original Message- From: Brad Hubbard To: Mathias Buresch Cc: jsp...@redhat.com , ceph-us...@ceph.com Subject: Re: [ceph-users] Ceph Status - Segmentation Fault Date: Thu, 2 Jun 2016 09:50:20 +1000 Could

Re: [ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-13 Thread Michael Kuriger
I just realized that this issue is probably because I'm running jewel 10.2.1 on the server side, but accessing from a client running hammer 0.94.7 or infernalis 9.2.1. Here is what happens if I run rbd ls from a client on infernalis. I was testing this access since we weren't planning on build

[ceph-users] Clearing Incomplete Clones State

2016-06-13 Thread Lazuardi Nasution
Hi, I have removed cache tiering due to a "missing hit_sets" warning. After removing it, I want to try to add the tiering again with the same cache pool and storage pool, but I can't, even though the cache pool is empty or has been forcibly cleared. Following is some output. How can I deal with this? Is it possible to clear
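For reference, a hedged sketch of the usual teardown and re-add sequence for a writeback tier, with cachepool/storagepool as placeholder names; whether the leftover incomplete_clones state allows the tier to be re-attached is exactly the open question in this thread:

    # teardown
    ceph osd tier cache-mode cachepool forward
    rados -p cachepool cache-flush-evict-all
    ceph osd tier remove-overlay storagepool
    ceph osd tier remove storagepool cachepool

    # re-add
    ceph osd tier add storagepool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay storagepool cachepool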

Re: [ceph-users] PGs Relationship on Cache Tiering

2016-06-13 Thread Oliver Dzombic
Hi, no, it's two independent pools. -- Mit freundlichen Gruessen / Best regards Oliver Dzombic IP-Interactive mailto:i...@ip-interactive.de Anschrift: IP Interactive UG ( haftungsbeschraenkt ) Zum Sonnenberg 1-3 63571 Gelnhausen HRB 93402 beim Amtsgericht Hanau Geschäftsführung: Oliver Dzomb

[ceph-users] PGs Relationship on Cache Tiering

2016-06-13 Thread Lazuardi Nasution
Hi, Is there any relation between PGs on the cache pool and PGs on the storage pool in cache tiering? Is it mandatory for pg(p)_num to be equal between the cache pool and the storage pool? Best regards,

[ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Oliver Dzombic
Hi, i am for sure not really experienced yet with ceph or with cache tier, but to me it seems to behave strange. Setup: pool 3 'ssd_cache' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 190 flags hashpspool,incomplete_clones tier_of 4 cache

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-06-13 Thread Brad Hubbard
On Tue, Jun 14, 2016 at 2:26 AM, Mathias Buresch wrote: > Hey, > > I opened an issue at tracker.ceph.com -> http://tracker.ceph.com/issues/16266 Hi Mathias, Thanks! I've added some information in that bug as I came across this same issue working on something else and saw your bug this mornin

Re: [ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Samuel Just
I'd have to look more closely, but these days promotion is probabilistic and throttled. During each read of those objects, it will tend to promote a few more of them depending on how many promotions are in progress and how hot it thinks a particular object is. The lack of a speed up is a bummer,

Re: [ceph-users] Cache pool with replicated pool don't work properly.

2016-06-13 Thread Hein-Pieter van Braam
Hi, I don't really have a solution but I can confirm I had the same problem trying to deploy my new Jewel cluster. I reinstalled the cluster with Hammer and everything is working as I expect it to (that is; writes hit the backing pool asynchronously) Although other than you I noticed the same pro

Re: [ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Oliver Dzombic
Hi, yeah, for sure you are right ! Sorry... Its Centos 7 ( default kernel 3.10 ) ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269) In the very end i want/need a setup where all is going to the cache tier ( read and write ). Writes shall be, after some time without modificati

Re: [ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Christian Balzer
Hello, On Mon, 13 Jun 2016 16:52:19 -0700 Samuel Just wrote: > I'd have to look more closely, but these days promotion is > probabilistic and throttled. Unconfigurable and exclusively so? >During each read of those objects, it > will tend to promote a few more of them depending on how many >

Re: [ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Christian Balzer
Hello, On Tue, 14 Jun 2016 01:57:49 +0200 Oliver Dzombic wrote: > Hi, > > yeah, for sure you are right ! > > Sorry... > > Its Centos 7 ( default kernel 3.10 ) > ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269) > > > > In the very end i want/need a setup where all is going

Re: [ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Oliver Dzombic
Hi, if there isn't a problem, why are there now 358 objects inside the cache pool after running dd if=file of=/dev/zero multiple times, while every full read of this 1.5 GB file produces around 8 objects inside the cache pool? It's the same file, read again and again. But at no point is it read from the cache.

Re: [ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Christian Balzer
On Tue, 14 Jun 2016 02:38:27 +0200 Oliver Dzombic wrote: > Hi, > > if there isnt a problem, why are now 358 objects inside the cache pool > running multiple times dd if=file of=/dev/zero while every full read of > this 1.5 GB file produces around 8 objects inside the cache pool. > My response (n

Re: [ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Oliver Dzombic
Hi Christian, if I read a 1.5 GB file, which is not changing at all, then I expect the agent to copy it one time from the cold pool to the cache pool. In fact it makes a new copy every time. I can see that by the increasing disk usage of the cache and the increasing object number. And the non ex

Re: [ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Christian Balzer
Hello, On Tue, 14 Jun 2016 02:52:43 +0200 Oliver Dzombic wrote: > Hi Christian, > > if i read a 1,5 GB file, which is not changing at all. > > Then i expect the agent to copy it one time from the cold pool to the > cache pool. > Before Jewel, that is what you would have seen, yes. Did you re

Re: [ceph-users] strange cache tier behaviour with cephfs

2016-06-13 Thread Nick Fisk
osd_tier_promote_max_objects_sec and osd_tier_promote_max_bytes_sec are what you are looking for. I think by default it's set to 5MB/s, which would roughly correlate to why you are only seeing around 8 objects promoted each time. This was done because too many promotions hurt performance,
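A hedged sketch of inspecting and loosening those throttles at runtime; the values below are illustrative rather than recommendations, and injectargs changes do not persist across restarts:

    # check the current value on one OSD (run on the node hosting osd.0)
    ceph daemon osd.0 config get osd_tier_promote_max_bytes_sec

    # raise the throttles cluster-wide at runtime (example values)
    ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 104857600 --osd_tier_promote_max_objects_sec 200'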

Re: [ceph-users] strange unfounding of PGs

2016-06-13 Thread Nick Fisk
Did you enable the sortbitwise flag as per the upgrade instructions, as there is a known bug with it? I don't know why these instructions haven't been amended in light of this bug. http://tracker.ceph.com/issues/16113 > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@
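To check and, if necessary, back out that flag (a sketch, not general advice; only unset it if you are actually hitting the bug in http://tracker.ceph.com/issues/16113):

    ceph osd dump | grep flags     # see whether sortbitwise is currently set
    ceph osd unset sortbitwise     # revert the flag if the unfound-objects bug is biting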

[ceph-users] UnboundLocalError: local variable 'region_name' referenced before assignment

2016-06-13 Thread Parveen Sharma
Hi, I'm getting "UnboundLocalError: local variable 'region_name' referenced before assignment" error while placing an object in my earlier created bucket using my RADOSGW with boto. My package details: $ sudo rpm -qa | grep rados librados2-10.2.1-0.el7.x86_64 libradosstriper1-10.2.1-0.el7.x86_
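In case it helps, a minimal boto-to-RGW sketch with the host and calling format pinned explicitly; the endpoint, port, credentials and bucket name are placeholders, and whether this avoids the region_name traceback depends on where in boto it is being raised:

    import boto
    import boto.s3.connection

    # all connection details below are placeholders for illustration
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        port=7480,
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    bucket = conn.get_bucket('mybucket')          # bucket created earlier
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('hello world')   # place a small test object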

Re: [ceph-users] [Ceph-community] Issue with Calamari 1.3-7

2016-06-13 Thread Venkata Manojawa Paritala
Hi Salih, Yes, we performed "calamari-ctl initialize" & "ceph-deploy calamari connect node1 node2 .." and our cluster status is Healthy (OK). Not sure if this might create a problem, but we configured the ceph cluster first and then calamari. Thanks, Manoj On Mon, Jun 13, 2016 at 10:29 PM,