Hi Jason,
Yes, after I also built object maps for the snapshots, the feature is working as
expected.
Thanks,
Christoph
On Thu, Aug 04, 2016 at 01:52:54PM -0400, Jason Dillaman wrote:
> Can you run "rbd info vm-208-disk-2@initial.20160729-220225"? You most
> likely need to rebuild the object map
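For reference, a minimal sketch of the check-and-rebuild sequence Jason is pointing at (add the usual -p <pool> option if the image does not live in the default pool):

  # an invalid object map shows up in the flags line of rbd info
  rbd info vm-208-disk-2@initial.20160729-220225

  # rebuild the object map for the snapshot and for the image head
  rbd object-map rebuild vm-208-disk-2@initial.20160729-220225
  rbd object-map rebuild vm-208-disk-2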
Hello,
First nugget from my new staging/test cluster.
As mentioned yesterday, it is now running the latest Hammer under Debian Jessie
(with sysvinit) with manually created OSDs.
2 nodes with 32GB RAM, a fast enough CPU (E5-2620 v3), 2x 200GB DC S3610 for
OS and journals, and 4x 1TB 2.5" SATAs for OSDs.
For my amu
> On 4 August 2016 at 18:17, Shain Miley wrote:
>
>
> Hello,
>
> I am thinking about setting up a second Ceph cluster in the near future,
> and I was wondering about the current status of rbd-mirror.
>
I don't have all the answers, but I will give it a try.
> 1) Is it production ready at
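For anyone evaluating it, a rough sketch of the Jewel-era per-pool setup, run against each cluster (the pool, client, and cluster names below are placeholders, not taken from this thread):

  # enable pool-mode mirroring on the pool in both clusters
  rbd mirror pool enable rbd pool

  # register the peer cluster, then check replication health
  rbd mirror pool peer add rbd client.rbd-mirror@remote
  rbd mirror pool status rbd --verbose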
On Thu, Aug 4, 2016 at 10:44 PM, K.C. Wong wrote:
> Thank you, Jason.
>
> While I can't find the culprit for the watcher (the watcher never expired
> and survived a reboot; udev, maybe?), blacklisting the host did allow me
> to remove the device.
It survived a reboot because watch state is persi
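A sketch of the blacklist-then-remove sequence being described; the client address and image name are placeholders, not values from the thread:

  # blacklist the client address that holds the stale watch
  ceph osd blacklist add 192.168.0.42:0/3012034543

  # with the watch invalidated, the image can be removed
  rbd rm rbd/stuck-image

  # clean up the blacklist entry afterwards
  ceph osd blacklist rm 192.168.0.42:0/3012034543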
Hi,
Lately I have been doing a lot of data migration to Ceph using rados with the
--striper option, and sometimes an upload to a Ceph pool gets interrupted,
resulting in a corrupt object that cannot be re-uploaded or removed
using 'rados --striper rm'. Trying that results in an error message
like
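A sketch of how one might inspect and clean up the leftover pieces with the plain (non-striper) rados tool; the pool and object names are placeholders, and the numbered suffixes assume the default libradosstriper naming:

  # the striper splits an object into numbered pieces; list what was left behind
  rados -p mypool ls | grep '^myobject\.'

  # remove each piece individually, bypassing the striper
  for piece in $(rados -p mypool ls | grep '^myobject\.'); do
      rados -p mypool rm "$piece"
  done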
On Fri, Aug 5, 2016 at 3:42 AM, Wido den Hollander wrote:
>
>> On 4 August 2016 at 18:17, Shain Miley wrote:
>>
>>
>> Hello,
>>
>> I am thinking about setting up a second Ceph cluster in the near future,
>> and I was wondering about the current status of rbd-mirror.
>>
>
> I don't have all the
If you had corruption in your backing RBD parent image snapshot, the
clones may or may not be affected depending on whether or not a CoW
was performed within the clone over the corrupted section (while it
was corrupted). Therefore, the safest course of action would be to
check each guest VM to ensu
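A small sketch of how one might enumerate the clones hanging off the affected parent snapshot, so each corresponding guest can be checked; pool, image, and snapshot names are placeholders:

  # list every child cloned from the parent snapshot
  rbd children mypool/parent-image@base-snap

  # then, inside each affected guest, run a read-only filesystem check, e.g.
  fsck -n /dev/vda1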
It works for us. Here's what ours looks like:
rgw frontends = civetweb port=80 num_threads=50
From netstat:
tcp        0      0 0.0.0.0:80      0.0.0.0:*       LISTEN      4010203/radosgw
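For reference, a minimal ceph.conf sketch showing where such a frontend line typically lives; the section and host names below are assumptions, not taken from the post:

  [client.rgw.gateway-1]
      host = gateway-1
      rgw frontends = civetweb port=80 num_threads=50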
Warren Wang
On 7/28/16, 7:20 AM, "ceph-users on behalf of Zoltan Arnold Nagy"
wrote:
>
On Tuesday, August 2, 2016, Ilya Dryomov wrote:
> On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev wrote:
> > On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote:
> >> Alex Gorbachev wrote on 08/01/2016 04:05 PM:
> >>> Hi Ilya,
> >>>
> >>> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov