Hello.
I'm playing with CRUSH and ran into an issue with the 'ceph osd crush move' command.
I've added an intermediate 'blabla' bucket type to the map:
type 0 osd
type 1 blabla
type 2 host
I've added a few 'blabla' buckets, and the osd tree now looks like this:
-12 1.0 host ssd-pp11
1 0.25000
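For context, the kind of commands involved look roughly like this (the 'fast' bucket name is just an example, not necessarily what I used):

  ceph osd crush add-bucket fast blabla
  ceph osd crush move fast host=ssd-pp11
  ceph osd crush set osd.1 0.25000 blabla=fast host=ssd-pp11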
"rbd ls" does work with 4.6 (just tested with 4.6.1-1.el7.elrepo.x86_64).
That's against a 10.2.0 cluster with ceph-common-10.2.0-0
What's the error you're getting? Are you using the default rbd pool or
specifying a pool with '-p'? I'd recommend checking your ceph-common package.
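A quick sanity check on the client would be something like this (pool name here is just the default):

  rpm -q ceph-common kernel
  rbd --version
  rbd -p rbd ls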
Thanks,
On Fri, Jun 10, 2016 at 9:29 PM, Michael Kuriger wrote:
> Hi Everyone,
> I’ve been running jewel for a while now, with tunables set to hammer.
> However, I want to test the new features but cannot find a fully compatible
> Kernel for CentOS 7. I’ve tried a few of the elrepo kernels - elrepo-ke
Is there any way to move an existing non-sharded bucket index to a sharded
one? Or is there any way (online or offline) to move all objects from a
non-sharded bucket to a sharded one?
2016-06-13 11:38 GMT+03:00 Sean Redmond :
> Hi,
>
> I have a few buckets here with >10M objects in them and the index pool
Hi,
AFAIK it's not possible to move from a non-sharded bucket index to a sharded one;
the shard count must be set at the time a bucket is created.
You could use 's3cmd sync' to copy data from one bucket to another bucket,
but with 5M objects it's going to take a long time.
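A rough sketch of that approach (bucket names are placeholders), remembering that the shard count for newly created buckets is controlled on the RGW side before creation:

  # in ceph.conf on the RGW node, before creating the new bucket
  rgw_override_bucket_index_max_shards = 8

  s3cmd sync s3://old-bucket s3://new-bucket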
Thanks
On Mon, Jun 13, 2016 at 12:11 PM, Ва
Hi all,
I have a problem installing ceph jewel with ceph-deploy (1.5.33) on Ubuntu
14.04.4 (an OpenStack instance).
This is my setup:
ceph-admin
ceph-mon
ceph-osd-1
ceph-osd-2
I've followed these steps from the ceph-admin node:
The user "ceph" is created on all nodes, with SSH key access.
1
I believe this is the source of the issues (the line cited below).
Purge all ceph packages from this node and remove the 'ceph' user and group,
then retry.
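Roughly something like this, run from the admin node (node name taken from the log line below):

  ceph-deploy purge ceph-admin
  ceph-deploy purgedata ceph-admin

  # then on the node itself
  sudo userdel -r ceph
  sudo groupdel ceph    # if it still exists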
On 06/13/2016 02:46 PM, Fran Barrera wrote:
[ceph-admin][WARNIN] usermod: user ceph is currently used by process 1303
Hi,
Can you please let me know if bi-directional asynchronous replication
between 2 Ceph clusters is possible? If yes, can you please guide me on how
to do it?
Greatly appreciate your quick response.
Thanks & Regards,
Manoj Paritala
Alternatively, if you are using RBD format 2 images, you can run
"rados -p listomapvals rbd_directory" to ensure it has
a bunch of key/value pairs for your images. There was an issue noted
[1] after upgrading to Jewel where the omap values were all missing on
several v2 RBD image headers -- resul
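For example, against the default pool that check would look roughly like this (the key names shown are illustrative):

  rados -p rbd listomapvals rbd_directory
  # expect id_<image id> / name_<image name> key/value pairs for each format 2 image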
In case this is useful, the steps to make this work are given here:
http://tracker.ceph.com/issues/13833#note-2 (the bug context documents
the shortcoming; I believe this happens if you create the journal
partition manually).
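In short, for a manually created journal partition the fix boils down to tagging it with the Ceph journal type GUID, roughly like this (device and partition number are examples, and this may not be the exact sequence from the note):

  sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdX
  partprobe /dev/sdX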
HTH,
Christian
On 12/06/16 10:18, Anthony D'Atri wrote:
The GUID
Hello.
How are objects handled in RBD? If a user writes 16k to an RBD image
with a 4 MB object size, how much is written on the OSDs? 16k x
replication, or 4 MB x replication (plus journals in both cases)?
Thanks.
> On 13 June 2016 at 16:07, George Shuklin wrote:
>
>
> Hello.
>
> How objects are handled in the rbd? If user writes 16k in the RBD image
> with 4Mb object size, how much would be written in the OSD? 16k x
> replication or 4Mb x replication (+journals for both cases)?
>
librbd will write
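One way to see for yourself how much actually lands in RADOS is to put a small object directly and stat it (pool and object names are just examples):

  dd if=/dev/urandom of=/tmp/chunk bs=16k count=1
  rados -p rbd put testobj /tmp/chunk
  rados -p rbd stat testobj    # reports a size of 16384 bytes, not a full 4 MB object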
Hi,
I am having issues adding RedHat Ceph (10.2.1) nodes to Calamari 1.3-7.
Below are more details.
1. On RHEL 7.2 VMs, configured a Ceph (10.2.1) cluster with 3 mons and 19 OSDs.
2. Configured Calamari 1.3-7 on one node. Installation was done through
ICE_SETUP with ISO Image. Diamond packages we
Hi!
I have such a strange problem. On Friday night I upgraded my small ceph
cluster from hammer to jewel. Everything went well, but the chowning of
the osd data dirs took a long time, so I skipped two OSDs and did the
run-as-root trick. Yesterday evening I wanted to fix this, shut down the
first OSD a
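For reference, the usual two ways to finish that are roughly (OSD id 12 is just an example):

  systemctl stop ceph-osd@12
  chown -R ceph:ceph /var/lib/ceph/osd/ceph-12
  systemctl start ceph-osd@12

or, to keep running those OSDs as root, in ceph.conf:

  [osd.12]
  setuser match path = /var/lib/ceph/$type/$cluster-$id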
I have seen this.
Just stop ceph and kill any ssh processes related to it.
I had the same issue, and the fix for me was to enable root login, ssh to
the node as root, and run: env DEBIAN_FRONTEND=noninteractive
DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends
install -o
Hey,
I opened an issue at tracker.ceph.com -> http://tracker.ceph.com/issues/16266
-----Original Message-----
From: Brad Hubbard
To: Mathias Buresch
Cc: jsp...@redhat.com , ceph-us...@ceph.com
Subject: Re: [ceph-users] Ceph Status - Segmentation Fault
Date: Thu, 2 Jun 2016 09:50:20 +1000
Could
I just realized that this issue is probably because I'm running jewel 10.2.1 on
the server side, but accessing from a client running hammer 0.94.7 or
infernalis 9.2.1.
Here is what happens if I run rbd ls from a client on infernalis. I was
testing this access since we weren’t planning on build
Hi,
I have removed cache tiering due to a "missing hit_sets" warning. After
removing it, I want to add tiering again with the same cache pool and
storage pool, but I can't, even when the cache pool is empty or forced to
clear. Following is some output. How can I deal with this? Is it possible to clear
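For reference, re-attaching a tier normally looks like this (pool names are placeholders):

  ceph osd tier add cold-pool cache-pool
  ceph osd tier cache-mode cache-pool writeback
  ceph osd tier set-overlay cold-pool cache-pool
  ceph osd pool set cache-pool hit_set_type bloom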
Hi,
no, it's two independent pools.
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93402, district court of Hanau
Managing director: Oliver Dzombic
Hi,
Is there any relation between the PGs of the cache pool and the PGs of the
storage pool in cache tiering? Is it mandatory to set pg(p)_num of the cache
pool and the storage pool to be equal?
Best regards,
Hi,
I am certainly not very experienced yet with ceph or with cache tiers,
but to me it seems to behave strangely.
Setup:
pool 3 'ssd_cache' replicated size 2 min_size 1 crush_ruleset 1
object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 190 flags
hashpspool,incomplete_clones tier_of 4 cache
On Tue, Jun 14, 2016 at 2:26 AM, Mathias Buresch
wrote:
> Hey,
>
> I opened an issue at tracker.ceph.com -> http://tracker.ceph.com/issues/16266
Hi Mathias,
Thanks!
I've added some information in that bug as I came across this same
issue working on something else and saw your bug this morning.
I'd have to look more closely, but these days promotion is
probabilistic and throttled. During each read of those objects, it
will tend to promote a few more of them depending on how many
promotions are in progress and how hot it thinks a particular object
is. The lack of a speed up is a bummer,
Hi,
I don't really have a solution but I can confirm I had the same problem
trying to deploy my new Jewel cluster. I reinstalled the cluster with
Hammer and everything is working as I expect it to (that is, writes hit
the backing pool asynchronously).
Although other than you I noticed the same pro
Hi,
Yeah, you are right, for sure!
Sorry...
It's CentOS 7 (default kernel 3.10),
ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269).
In the very end I want/need a setup where everything goes to the cache tier
(read and write).
Writes shall be, after some time without modificati
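For reference, the usual knobs for steering promotion on the cache pool (pool name 'ssd_cache' is from this setup; values are just examples) are:

  ceph osd pool set ssd_cache hit_set_type bloom
  ceph osd pool set ssd_cache hit_set_count 4
  ceph osd pool set ssd_cache hit_set_period 1200
  ceph osd pool set ssd_cache min_read_recency_for_promote 1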
Hello,
On Mon, 13 Jun 2016 16:52:19 -0700 Samuel Just wrote:
> I'd have to look more closely, but these days promotion is
> probabilistic and throttled.
Unconfigurable and exclusively so?
>During each read of those objects, it
> will tend to promote a few more of them depending on how many
>
Hello,
On Tue, 14 Jun 2016 01:57:49 +0200 Oliver Dzombic wrote:
> Hi,
>
> yeah, for sure you are right !
>
> Sorry...
>
> Its Centos 7 ( default kernel 3.10 )
> ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)
>
>
>
> In the very end i want/need a setup where all is going
Hi,
if there isn't a problem, why are there now 358 objects inside the cache pool
after running dd if=file of=/dev/zero multiple times, when every full read of
this 1.5 GB file produces around 8 objects inside the cache pool?
It's the same file, read again and again.
But at no point is it read from the cache.
On Tue, 14 Jun 2016 02:38:27 +0200 Oliver Dzombic wrote:
> Hi,
>
> if there isnt a problem, why are now 358 objects inside the cache pool
> running multiple times dd if=file of=/dev/zero while every full read of
> this 1.5 GB file produces around 8 objects inside the cache pool.
>
My response (n
Hi Christian,
If I read a 1.5 GB file, which is not changing at all,
then I expect the agent to copy it one time from the cold pool to the
cache pool.
In fact it's making a new copy every time.
I can see that by the increasing disk usage of the cache and the increasing
object number.
And the non ex
Hello,
On Tue, 14 Jun 2016 02:52:43 +0200 Oliver Dzombic wrote:
> Hi Christian,
>
> if i read a 1,5 GB file, which is not changing at all.
>
> Then i expect the agent to copy it one time from the cold pool to the
> cache pool.
>
Before Jewel, that is what you would have seen, yes.
Did you re
osd_tier_promote_max_objects_sec
and
osd_tier_promote_max_bytes_sec
are what you are looking for. I think by default it's set to 5 MB/s, which
would roughly correlate with why you are only seeing around 8 objects being
promoted each time. It was done this way because too many promotions hurt
performance,
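If you want to experiment with raising the throttle, something like this should work (values are just examples; persist them in the [osd] section of ceph.conf to survive restarts):

  ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 52428800'
  ceph tell osd.* injectargs '--osd_tier_promote_max_objects_sec 100'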
Did you enable the sortbitwise flag as per the upgrade instructions, as there
is a known bug with it? I don't know why these instructions haven't been
amended in light of this bug.
http://tracker.ceph.com/issues/16113
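For reference, checking the flag and (if appropriate per that bug's guidance) clearing it looks like this:

  ceph osd dump | grep flags
  ceph osd unset sortbitwise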
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@
Hi,
I'm getting "UnboundLocalError: local variable 'region_name' referenced
before assignment" error while placing an object in my earlier created
bucket using my RADOSGW with boto.
My package details:
$ sudo rpm -qa | grep rados
librados2-10.2.1-0.el7.x86_64
libradosstriper1-10.2.1-0.el7.x86_
Hi Salih,
Yes, we performed "calamari-ctl initialize" & "ceph-deploy calamari
connect node1 node2 .." and our cluster status is Healthy (OK).
Not sure if this might create a problem, but we configured the ceph cluster
first and then calamari.
Thanks,
Manoj
On Mon, Jun 13, 2016 at 10:29 PM,