Re: [ceph-users] RadosGW zonegroup id error

2016-09-05 Thread Yoann Moulin
Hello,

>>> I have an issue with the default zonegroup on my cluster (Jewel 10.2.2), I don't
>>> know when this occurred, but I think I ran a wrong command during the
>>> manipulation of zones and regions. Now the ID of my zonegroup is "default"
>>> instead of "4d982760-7853-4174-8c05-cec2ef148
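
For anyone hitting the same mismatch, a rough sketch of how the zonegroup ID can be inspected and corrected on Jewel with radosgw-admin; the JSON edit step and the idea of putting the old UUID back into the "id" field are assumptions rather than a verified recipe, and on a single-site setup without a realm the final period commit may not apply:

    $ radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
    # edit zonegroup.json and restore the original UUID in the "id" field
    $ radosgw-admin zonegroup set --rgw-zonegroup=default < zonegroup.json
    $ radosgw-admin period update --commit    # publish the change (realm/multisite setups)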

Re: [ceph-users] Turn snapshot of a flattened snapshot into regular image

2016-09-05 Thread Eugen Block
I created the same scenario again (IDs have changed), and I executed the info command during the different stages.

base image: de4e1e90-7e81-4518-8558-f9eb1cfd3df8 | Test-SLE12SP1

ceph@node1:~/ceph-deploy> rbd -p images --image de4e1e90-7e81-4518-8558-f9eb1cfd3df8 info
rbd image 'de4e1e90-
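
The parent relationship being checked at each stage shows up directly in the rbd info output; a minimal sketch, assuming a hypothetical clone in a "vms" pool (the base image UUID is the one from the message above):

    $ rbd -p images info de4e1e90-7e81-4518-8558-f9eb1cfd3df8      # the base image
    $ rbd -p images snap ls de4e1e90-7e81-4518-8558-f9eb1cfd3df8   # its snapshots
    $ rbd -p vms info clone-image        # a clone prints a "parent:" line
    $ rbd -p vms flatten clone-image     # after flattening, the parent: line is gone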

Re: [ceph-users] Turn snapshot of a flattened snapshot into regular image

2016-09-05 Thread Alexey Sheplyakov
Eugen,

> It seems as if the nova snapshot creates a full image (flattened), so it doesn't depend on the base image.

As far as I understand, a (nova) snapshot is actually a standalone image (so you can boot it, convert it to a volume, etc.). The snapshot method of the nova libvirt driver invokes the direct
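
In rough terms, that direct path boils down to snapshot + clone + flatten, which is why the result no longer depends on the base image; a hedged sketch of the equivalent manual steps, with made-up pool and image names:

    $ rbd -p vms snap create instance_disk@snap
    $ rbd -p vms snap protect instance_disk@snap     # clones require a protected snapshot
    $ rbd clone vms/instance_disk@snap images/new-image
    $ rbd flatten images/new-image                   # copy the parent data so the image stands alone
    $ rbd -p vms snap unprotect instance_disk@snap
    $ rbd -p vms snap rm instance_disk@snap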

Re: [ceph-users] stubborn/sticky scrub errors

2016-09-05 Thread Ronny Aasen
There are. And I did find the broken object by triggering a manual scrub and grepping the log file. I have had scrubbing disabled since it reduced client performance too much during recovery. Once I found the objects in question, it was just a matter of following the example at http://ceph.com
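
For reference, on Jewel the inconsistent object can usually be located without grepping the OSD log; a sketch, with the PG id as a placeholder:

    $ ceph health detail                                        # lists the PGs flagged inconsistent
    $ rados list-inconsistent-obj <pgid> --format=json-pretty   # names the object and the bad shard
    $ ceph pg repair <pgid>                                     # ask the primary to repair the PG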

Re: [ceph-users] Turn snapshot of a flattened snapshot into regular image

2016-09-05 Thread Eugen Block
Thanks for the links; they actually help a lot in getting a better understanding. So what I'm observing now is how it seems to be designed, but unfortunately it does not explain what I described in the first message. I have tried several different images to reproduce this, but without success.

[ceph-users] osd dies with m_filestore_fail_eio without dmesg error

2016-09-05 Thread Ronny Aasen
Hello

I have an OSD that regularly dies on I/O, especially scrubbing. Normally I would assume a bad disk and replace it, but then I normally see messages in dmesg about the device and its errors. For this OSD there are no errors in dmesg at all after a crash like this. This OSD is a 5 disk softw
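
When dmesg stays silent, it can still be worth checking the layers underneath the filestore before blaming the disk or Ceph; a hedged checklist, with example device names and OSD id:

    $ cat /proc/mdstat                     # software RAID state: degraded? rebuilding?
    $ smartctl -a /dev/sda                 # SMART health of each member disk
    $ xfs_repair -n /dev/md0               # read-only fs check (OSD stopped, fs unmounted, assumes XFS)
    $ grep -i eio /var/log/ceph/ceph-osd.N.log    # the OSD log usually names the failing read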

[ceph-users] Cache-tier's roadmap

2016-09-05 Thread 王文铎
Hello: I use cache-tier in my production system, with SSDs for the cache pool and HDDs for the base pool. Since my scenario is to support OpenStack volumes, which require very strong consistency and availability, I need to care about the future development of the Ceph cache tier. So I have a few q
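
For context, the cache-tier wiring under discussion is the standard writeback setup; a minimal sketch with placeholder pool names:

    $ ceph osd tier add base-pool cache-pool
    $ ceph osd tier cache-mode cache-pool writeback
    $ ceph osd tier set-overlay base-pool cache-pool
    $ ceph osd pool set cache-pool hit_set_type bloom                # Jewel requires a hit set on cache pools
    $ ceph osd pool set cache-pool target_max_bytes 1099511627776    # e.g. cap the tier at 1 TiB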

Re: [ceph-users] Antw: Re: Best practices for extending a ceph cluster with minimal client impact data movement

2016-09-05 Thread Sam Wouters
Hi,

>>> Now, add the OSDs to the cluster, but NOT to the CRUSHMap.
>>>
>>> When all the OSDs are online, inject a new CRUSHMap where you add the new
>>> OSDs to the data placement.
>>>
>>> $ ceph osd setcrushmap -i
>>>
>>> The OSDs will now start to migrate data, but this is throttled by the ma
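
For completeness, the decompile/edit/recompile cycle behind that setcrushmap step looks roughly like this (file names are arbitrary, and the injectargs line is just one way to keep the resulting backfill gentle):

    $ ceph osd getcrushmap -o crushmap.bin
    $ crushtool -d crushmap.bin -o crushmap.txt     # decompile to editable text
    # add the new OSDs/hosts under the right buckets, then recompile:
    $ crushtool -c crushmap.txt -o crushmap.new
    $ ceph osd setcrushmap -i crushmap.new
    $ ceph tell osd.* injectargs '--osd-max-backfills 1'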

Re: [ceph-users] radosgw flush_read_list(): d->client_c->handle_data() returned -5

2016-09-05 Thread Henrik Korkuc
On 16-02-27 06:09, Yehuda Sadeh-Weinraub wrote:
On Wed, Feb 24, 2016 at 5:48 PM, Ben Hines wrote:
Any idea what is going on here? I get these intermittently, especially with very large files. The client is doing RANGE requests on this >51 GB file, incrementally fetching later chunks. 2016-02-
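
The -5 here is EIO (errno 5), reported while radosgw flushes read data back to the client, and the access pattern can be reproduced from the client side with plain HTTP range requests; a sketch against a hypothetical RGW endpoint and object:

    $ curl -s -o /dev/null -D - -H 'Range: bytes=0-67108863' \
          http://rgw.example.com/bucket/bigfile          # first 64 MiB chunk
    $ curl -s -o /dev/null -D - -H 'Range: bytes=67108864-134217727' \
          http://rgw.example.com/bucket/bigfile          # next chunk, and so on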

Re: [ceph-users] radosgw flush_read_list(): d->client_c->handle_data() returned -5

2016-09-05 Thread Henrik Korkuc
On 16-09-05 14:36, Henrik Korkuc wrote:
On 16-02-27 06:09, Yehuda Sadeh-Weinraub wrote:
On Wed, Feb 24, 2016 at 5:48 PM, Ben Hines wrote:
Any idea what is going on here? I get these intermittently, especially with very large files. The client is doing RANGE requests on this >51 GB file, incr

Re: [ceph-users] osd dies with m_filestore_fail_eio without dmesg error

2016-09-05 Thread Brad Hubbard
On Mon, Sep 05, 2016 at 12:54:40PM +0200, Ronny Aasen wrote:
> Hello
>
> I have an OSD that regularly dies on I/O, especially scrubbing.
> Normally I would assume a bad disk and replace it, but then I normally see
> messages in dmesg about the device and its errors. For this OSD
> there are no error
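
A common next step in threads like this is to capture a verbose log from the failing OSD so the exact operation that returns EIO is visible; a sketch, with the OSD id as a placeholder:

    $ ceph tell osd.N injectargs '--debug-filestore 20 --debug-osd 20'
    $ ceph osd deep-scrub N                        # reproduce the crash by scrubbing that OSD's PGs
    $ less /var/log/ceph/ceph-osd.N.log            # look for the EIO just before the assert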