Hi,
Is there any way or command to find out the creation date/time of a block
snapshot?
Any help would be much appreciated.
Best Regards,
WD
-
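For the archives: Hammer-era clients do not record snapshot creation times,
but newer rbd clients do. A rough sketch, assuming a recent release and a
made-up pool/image name:

    # newer releases print a TIMESTAMP column for each snapshot;
    # on Hammer this column simply is not there
    rbd snap ls rbd/myimage

On Hammer itself I am not aware of a way to recover this after the fact.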
It seems I've hit this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1231630
Is there any way I can recover this cluster? It worked in our test
cluster, but crashed the production one...
I'm currently running Hammer (0.94.3). I created an invalid LRC profile
(a typo in l=: it should have been l=4 but was l=3, so now I don't have
enough distinct ruleset-locality values) and then created a pool with it. Is
there any way to delete this pool? Remember, I can't start the ceph-mon...
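For reference, once the mon can be brought up again (the crash itself is the
real blocker), the cleanup would normally just be the standard commands. A
sketch with made-up names, assuming the pool is 'ecpool' and the profile
'lrcprofile':

    # delete the pool created from the bad profile
    ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it
    # remove the mistyped LRC erasure-code profile
    ceph osd erasure-code-profile rm lrcprofile

None of that helps while the monitor refuses to start, of course.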
On Tue, Oct 13, 2015 at
On Tue, 13 Oct 2015, Haomai Wang wrote:
> resend
>
> On Tue, Oct 13, 2015 at 10:56 AM, Haomai Wang wrote:
> > COOL
> >
> > Interesting that the async messenger consumes more memory than simple; in my
> > mind I always thought async should use less memory. I will take a look at this
Yeah.. I was su
Hi Haomai,
Great! I haven't had a chance to dig in and look at it with valgrind
yet, but if I get a chance after I'm done with newstore fragment testing
and Somnath's write-path work, I'll try to go back and dig in if you
haven't had a chance yet.
Mark
On 10/12/2015 09:56 PM, Haomai Wang wro
Hi ,
We have Ceph RBD with OCFS2 mounted on several servers. We are facing I/O
errors on the other nodes while moving a folder from one node; the data
replicated to the other nodes on the same shared disk fails with the error
below (plain copying does not have any problem). As a workaround, if we
remount the partition the issue gets res
You need to disable RBD caching.
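A minimal sketch of what that looks like in ceph.conf on the RBD client nodes
(OCFS2 expects every node to see writes immediately, so per-client write-back
caching breaks coherency); restart or remap the RBD clients afterwards:

    [client]
    # disable librbd write-back caching for all RBD clients on this host
    rbd cache = false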
Tyler Bishop
Chief Technical Officer
513-299-7108 x10
tyler.bis...@beyondhosting.net
On 10/12/2015 11:12 PM, Gregory Farnum wrote:
On Mon, Oct 12, 2015 at 9:50 AM, Mark Nelson wrote:
Hi Guys,
Given all of the recent data on how different memory allocator
configurations improve SimpleMessenger performance (and the effect of memory
allocators and transparent hugepages on RSS memo
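For anyone who wants to poke at the same thing, a rough sketch of the knobs
involved; library paths and exact values vary by distro, so treat these as
assumptions rather than the test setup used here:

    # run an OSD with jemalloc preloaded instead of the default allocator
    LD_PRELOAD=/usr/lib64/libjemalloc.so.1 ceph-osd -i 0 -f
    # give tcmalloc (gperftools) a larger thread cache before starting the daemon
    export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
    # disable transparent hugepages when comparing RSS
    echo never > /sys/kernel/mm/transparent_hugepage/enabled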
On Mon, 12 Oct 2015, Robert LeBlanc wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> After a weekend, I'm ready to hit this from a different direction.
>
> I replicated the issue with Firefly, so it doesn't seem to be an issue that
> has been introduced or resolved in any nearby version. I
This just started after we removed some old OSD hardware. I feel like it may be
related, but at the same time I'm not sure how. All of my pools have the same
ruleset and everything else is working; uploads do work, but the test of the
multipart process fails.
Any help would be greatly ap
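If it helps to narrow it down, a quick way to exercise just the multipart path
from the client side; bucket and file names here are made up, and s3cmd only
switches to multipart above the chunk size:

    # force a multipart upload of a ~100 MB test object
    dd if=/dev/zero of=/tmp/mp-test bs=1M count=100
    s3cmd put /tmp/mp-test s3://testbucket/mp-test --multipart-chunk-size-mb=15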
This is the first Infernalis release candidate. There have been some
major changes since hammer, and the upgrade process is non-trivial.
Please read carefully.
Getting the release candidate
-----------------------------
The v9.1.0 packages are pushed to the development release repositories::
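A sketch of what pulling the RC usually looks like on a Debian-family host,
assuming the usual download.ceph.com testing layout; the URLs given in the
announcement itself are the authoritative ones:

    # add the development/testing repository and install the RC packages
    echo "deb http://download.ceph.com/debian-testing/ $(lsb_release -sc) main" \
        | sudo tee /etc/apt/sources.list.d/ceph-testing.list
    sudo apt-get update && sudo apt-get install ceph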
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Sage Weil
> Sent: 13 October 2015 22:02
> To: ceph-annou...@ceph.com; ceph-de...@vger.kernel.org; ceph-
> us...@ceph.com; ceph-maintain...@ceph.com
> Subject: [ceph-users] v9.1.0 Infernalis rel
On Tue, 13 Oct 2015, Nick Fisk wrote:
> Do you know if any of the Tiering + EC performance improvements
> currently waiting to merge will make the final release or is it likely
> they will get pushed back to Jewel?
>
> Specifically:-
> https://github.com/ceph/ceph/pull/5486
> https://github.com/
I'm adding a node (4 * WD RED 3TB) to our small cluster to bring it up to
replica 3. Given how much of a headache it has been managing multiple OSDs
(including disk failures) on my other nodes, I've decided to put all 4
disks on the new node in a ZFS RAID 10 config with SSD SLOG & cache with
just one O
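For what it's worth, a sketch of the sort of pool layout being described, with
made-up device names; two mirrored pairs for the data vdevs plus SSD
partitions for SLOG and L2ARC:

    zpool create osdpool \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        log /dev/nvme0n1p1 \
        cache /dev/nvme0n1p2
    # single dataset backing the one OSD's data directory
    zfs create -o mountpoint=/var/lib/ceph/osd/ceph-0 osdpool/osd0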
On 13/10/15 22:01, Sage Weil wrote:
> * *RADOS*:
> * The RADOS cache tier can now proxy write operations to the base
> tier, allowing writes to be handled without forcing migration of
> an object into the cache.
> * The SHEC erasure coding support is no longer flagged as
> experimen
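For anyone who wants to try SHEC after upgrading, a sketch of creating a pool
on it; the profile name, pool name, pg counts, and k/m/c values below are only
illustrative:

    ceph osd erasure-code-profile set shecprofile \
        plugin=shec k=4 m=3 c=2
    ceph osd pool create shecpool 128 128 erasure shecprofile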
On Mon, Oct 12, 2015 at 12:50 AM, Burkhard Linke
wrote:
> Hi,
>
> On 10/08/2015 09:14 PM, John Spray wrote:
>>
>> On Thu, Oct 8, 2015 at 7:23 PM, Gregory Farnum wrote:
>>>
>>> On Thu, Oct 8, 2015 at 6:29 AM, Burkhard Linke
>>> wrote:
Hammer 0.94.3 does not support a 'dump cache' mds co
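On later releases the equivalent goes through the admin socket on the MDS
host; a sketch assuming the MDS id is 'a', and again this is not available on
0.94.3:

    # dump the MDS cache to a file for inspection
    ceph daemon mds.a dump cache /tmp/mds-cache.txt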
On Fri, Oct 9, 2015 at 5:49 PM, Francois Lafont wrote:
> Hi,
>
> Thanks for your answer Greg.
>
> On 09/10/2015 04:11, Gregory Farnum wrote:
>
>> The size of the on-disk file didn't match the OSD's record of the
>> object size, so it rejected it. This works for that kind of gross
>> change, but it
Hi Sage...
I've seen that the RHEL 6 derivatives have been ruled out.
This is a problem in our case since the OS choice on our systems is, to
some extent, imposed by CERN. The experiments' software is certified for SL6,
and the transition to SL7 will take some time.
This is kind of a showstopper special
Hi all...
Thank you for the feedback, and I am sorry for my delay in replying.
1./ Just to recall the problem, I was testing cephfs using fio in two
ceph-fuse clients:
- Client A is in the same data center as all OSDs connected at 1 GbE
- Client B is in a different data center (in anoth
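A sketch of the sort of fio invocation used for this kind of comparison, with
a placeholder mountpoint and sizes rather than the exact job file:

    # sequential writes against the ceph-fuse mountpoint
    fio --name=seqwrite --directory=/mnt/cephfs/fio-test \
        --rw=write --bs=4M --size=1G --numjobs=1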
On Wed, Oct 14, 2015 at 1:03 AM, Sage Weil wrote:
> On Mon, 12 Oct 2015, Robert LeBlanc wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> After a weekend, I'm ready to hit this from a different direction.
>>
>> I replicated the issue with Firefly, so it doesn't seem to be an issue that
>