Hello,
>>> I have an issue with the default zonegroup on my cluster (Jewel 10.2.2).
>>> I don't know when this occurred, but I think I ran a wrong command while
>>> manipulating zones and regions. Now the ID of my zonegroup is "default"
>>> instead of "4d982760-7853-4174-8c05-cec2ef148
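(As a rough sketch, not from the original mail: on Jewel the zonegroup state can be inspected with radosgw-admin before attempting any fix; the zonegroup name "default" and the JSON file name are placeholders.)
# list the zonegroups the cluster knows about
$ radosgw-admin zonegroup list
# dump the current zonegroup, including its "id" field, for inspection
$ radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
# after correcting the JSON, it can be loaded back and the period committed
$ radosgw-admin zonegroup set --rgw-zonegroup=default < zonegroup.json
$ radosgw-admin period update --commit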
I recreated the same scenario (the IDs have changed) and ran the info
command at each stage.
base image:
de4e1e90-7e81-4518-8558-f9eb1cfd3df8 | Test-SLE12SP1
ceph@node1:~/ceph-deploy> rbd -p images --image de4e1e90-7e81-4518-8558-f9eb1cfd3df8 info
rbd image 'de4e1e90-
Eugen,
> It seems as if the nova snapshot creates a full image (flattened) so it
> doesn't depend on the base image.
As far as I understand, a (nova) snapshot is actually a standalone image (so
you can boot it, convert it to a volume, etc.).
The snapshot method of the nova libvirt driver invokes the direct
there are.
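(A quick sketch, not from the thread, of how to check whether such an image is a flattened copy or still a copy-on-write clone; the image ID is the one quoted above and the snapshot name "snap" is only an assumption.)
# a clone shows a "parent:" line in its info output, a flattened image does not
$ rbd -p images info de4e1e90-7e81-4518-8558-f9eb1cfd3df8 | grep parent
# list clones hanging off the base image's snapshot
$ rbd -p images children de4e1e90-7e81-4518-8558-f9eb1cfd3df8@snap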
and I did find the broken object by triggering a manual scrub and
grepping the log file. I have had scrubbing disabled since it reduced
client performance too much during recovery.
Once I found the objects in question, it was just a matter of following
the example at http://ceph.com
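(For reference, a minimal sketch of that workflow; the PG id, OSD id and log path are placeholders.)
# deep-scrub the suspect placement group and watch the primary OSD's log
$ ceph pg deep-scrub 2.1f
$ grep -i 'inconsistent\|scrub.*error' /var/log/ceph/ceph-osd.12.log
# once the broken object has been identified, repair the PG
$ ceph pg repair 2.1f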
Thanks for the links, they actually help a lot in getting a better understanding.
So what I'm observing now seems to be working as designed, but
unfortunately it does not explain what I described in the first
message. I have tried several different images to reproduce this, but
without success.
Hello
I have an OSD that regularly dies on I/O, especially during scrubbing.
Normally I would assume a bad disk and replace it, but then I normally
see messages in dmesg about the device and its errors. For this OSD
there are no errors in dmesg at all after a crash like this.
This OSD is a 5-disk softw
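(A few generic checks, not from the mail, that may help when dmesg stays quiet; the OSD id and device name are placeholders.)
# look at the OSD's own log around the time of the crash
$ less /var/log/ceph/ceph-osd.7.log
# check the drive's SMART status, since some failures never reach dmesg
$ smartctl -a /dev/sda
# if the OSD sits on software RAID, check the array state as well
$ cat /proc/mdstat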
Hello:
I use a cache tier in my production system, with SSDs for the cache pool
and HDDs for the base pool.
Since my scenario is to support OpenStack volumes, which requires very
strong consistency and availability, I need to care about the future
development of the Ceph cache tier. So I have a few q
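(For context, a minimal sketch of how such a tier is usually wired up; the pool names "ssd-cache" and "hdd-base" and the target size are placeholders, not from the mail.)
# attach the SSD pool as a writeback cache tier in front of the HDD pool
$ ceph osd tier add hdd-base ssd-cache
$ ceph osd tier cache-mode ssd-cache writeback
$ ceph osd tier set-overlay hdd-base ssd-cache
# basic sizing/eviction knobs on the cache pool
$ ceph osd pool set ssd-cache hit_set_type bloom
$ ceph osd pool set ssd-cache target_max_bytes 1099511627776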
Hi,
>>> Now, add the OSDs to the cluster, but NOT to the CRUSHMap.
>>>
>>> When all the OSDs are online, inject a new CRUSHMap where you add the new
>>> OSDs to the data placement.
>>>
>>> $ ceph osd setcrushmap -i
>>>
>>> The OSDs will now start to migrate data, but this is throttled by the ma
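(For completeness, a sketch of the usual edit cycle around that setcrushmap call, plus the knobs that keep the resulting migration gentle; file names and values are placeholders.)
# dump, decompile, edit, recompile and re-inject the CRUSH map
$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
$ vi crushmap.txt                        # add the new OSDs to the placement rules
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new
# throttle backfill so client I/O is not starved during the migration
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'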
On 16-02-27 06:09, Yehuda Sadeh-Weinraub wrote:
On Wed, Feb 24, 2016 at 5:48 PM, Ben Hines wrote:
Any idea what is going on here? I get these intermittently, especially with
very large files.
The client is doing RANGE requests on this >51 GB file, incrementally
fetching later chunks.
2016-02-
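(To reproduce such a request outside the application, a sketch; the endpoint, bucket and object names are made up, and the request is assumed to be either anonymously readable or signed separately.)
# fetch a 4 MiB chunk from the middle of the large object via a ranged GET
$ curl -v -o /dev/null -H "Range: bytes=1073741824-1077936127" \
      http://rgw.example.com/bucket/large-object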
On 16-09-05 14:36, Henrik Korkuc wrote:
On 16-02-27 06:09, Yehuda Sadeh-Weinraub wrote:
On Wed, Feb 24, 2016 at 5:48 PM, Ben Hines wrote:
Any idea what is going on here? I get these intermittently, especially with
very large files.
The client is doing RANGE requests on this >51 GB file, incr
On Mon, Sep 05, 2016 at 12:54:40PM +0200, Ronny Aasen wrote:
> Hello
>
> I have an OSD that regularly dies on I/O, especially during scrubbing.
> Normally I would assume a bad disk and replace it, but then I normally see
> messages in dmesg about the device and its errors. For this OSD
> there are no error