Snapshots are disabled by default in Jewel as well. Depending on user
feedback about what's most important, we hope to have them ready for Kraken
or the L release (but we'll see).
-Greg
On Friday, March 18, 2016, 施柏安 wrote:
> Hi John,
> Thank you very much for your help, and sorry that I ask
Hi,
On 18/03/2016 20:58, Mark Nelson wrote:
> FWIW, from purely a performance perspective Ceph usually looks pretty
> fantastic on a fresh BTRFS filesystem. In fact it will probably
> continue to look great until you do small random writes to large
> objects (like say to blocks in an RBD volum
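To make that workload concrete, here is a minimal sketch using the Python rados/rbd
bindings that issues small random-offset overwrites into a large RBD image; the pool
name "rbd" and image name "testimg" are assumptions, not anything from this thread.

# Small random writes landing inside large (default 4MB) RADOS objects,
# the pattern described above. Assumes a pre-created image "testimg" in pool "rbd".
import random
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')            # assumed pool
    image = rbd.Image(ioctx, 'testimg')          # assumed pre-created image
    try:
        size = image.size()
        for _ in range(1000):
            offset = random.randrange(0, size - 4096)
            image.write(b'\0' * 4096, offset)    # 4KB overwrite at a random offset
    finally:
        image.close()
        ioctx.close()
finally:
    cluster.shutdown()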
Thanks Sam.
Since I have prepared a script for this, I decided to go ahead with the
checks. (Patience isn't one of my extended attributes.)
I've got a file that searches the full erasure-coded space and works through your
checklist below. I have operated only on one PG so far, the 70.459 on
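For anyone curious, a rough sketch of that kind of per-shard walk (not the actual
script from this thread) might look like the following; it assumes filestore OSDs
whose erasure-coded shard directories live under
/var/lib/ceph/osd/ceph-*/current/<pgid>s<shard>_head/, which may not match your deployment.

# Walk every local shard directory of one EC PG and checksum its files.
# The PG id is taken from the thread; paths are an assumption about filestore layout.
import glob
import hashlib
import os

PGID = '70.459'

for shard_dir in glob.glob('/var/lib/ceph/osd/ceph-*/current/%ss*_head' % PGID):
    for root, _dirs, files in os.walk(shard_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, 'rb') as f:
                digest = hashlib.md5(f.read()).hexdigest()
            print(digest, path)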
Hi,
We have a user with a 50GB quota who now has a single bucket with 20GB
of files. They had previous buckets that were created and removed, but the quota
has not decreased. I understand that garbage collection runs, but it
has been significantly longer than the default intervals, which we have not
overridden. They
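A hedged sketch of the checks worth running here, driven from Python via subprocess:
"radosgw-admin gc list" shows objects still pending deletion, "gc process" forces a
GC pass, and "user stats --sync-stats" recomputes the usage that quota enforcement
sees. The uid below is a placeholder.

# Force a GC pass and resync the user's usage stats.
import json
import subprocess

UID = 'exampleuser'  # placeholder uid

pending = subprocess.check_output(
    ['radosgw-admin', 'gc', 'list', '--include-all'])
print('pending GC entries:', len(json.loads(pending.decode())))

subprocess.check_call(['radosgw-admin', 'gc', 'process'])

stats = subprocess.check_output(
    ['radosgw-admin', 'user', 'stats', '--uid=' + UID, '--sync-stats'])
print(stats.decode())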
Sage,
Your patch seems to have resolved the issue for us. We can't reproduce
the problem with ceph_test_rados or our VM test. I also figured out
that those are all backports that were cherry-picked, so it was showing
the original commit date. There wa
Hi!
Trying to run bonnie++ on cephfs mounted via the kernel driver on a
centos 7.2.1511 machine resulted in:
# bonnie++ -r 128 -u root -d /data/cephtest/bonnie2/
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Readi
I’d rather see this implemented at the hypervisor level, i.e. in QEMU, so
we can have a common layer for all the storage backends.
Although this is less portable...
> On 17 Mar 2016, at 11:00, Nick Fisk wrote:
>
>
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-bo
This tracker ticket happened to go by my eyes today:
http://tracker.ceph.com/issues/12814 . There isn't a lot of detail
there but the headline matches.
-Greg
On Wed, Mar 16, 2016 at 2:02 AM, Nick Fisk wrote:
>
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ce
On Thu, Mar 17, 2016 at 1:41 PM, Gregory Farnum wrote:
> On Thu, Mar 17, 2016 at 3:49 AM, John Spray wrote:
>> Snapshots are disabled by default:
>> http://docs.ceph.com/docs/hammer/cephfs/early-adopters/#most-stable-configuration
>
> Which makes me wonder if we ought to be hiding the .snaps dire
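For context, once snapshots are enabled, taking one is just a mkdir inside the hidden
snapshot directory (".snap" by default, renamable with the client snapdir option) of any
directory on a mounted CephFS. A minimal sketch, with the mount point and directory
names as assumptions:

# Create and list a CephFS snapshot via the hidden .snap directory.
import os

target = '/data/cephtest/mydir'                  # directory to snapshot (assumed)
snap = os.path.join(target, '.snap', 'before-upgrade')

os.mkdir(snap)                                   # creates the snapshot
print(os.listdir(os.path.join(target, '.snap')))
# Removing the same directory deletes the snapshot again:
# os.rmdir(snap)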
Insofar as I've been able to tell, both BTRFS and ZFS provide similar
capabilities to Ceph, and both are sufficiently stable for the
basic Ceph use case (single disk -> single mount point), so the
question becomes this: which actually provides better performance?
Which is the more highly opti
On Thu, 17 Mar 2016, Robert LeBlanc wrote:
> Also, is this ceph_test_rados rewriting objects quickly? I think that
> the issue is with rewriting objects so if we can tailor the
> ceph_test_rados to do that, it might be easier to reproduce.
It's doing lots of overwrites, yeah.
I was able to reprod
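A hedged sketch of a tailored reproducer along those lines, hammering a small set of
objects with rapid full overwrites through the Python librados binding; the pool name,
object count, and sizes are arbitrary assumptions rather than what ceph_test_rados
itself does:

# Repeatedly rewrite a handful of objects as fast as possible.
import os
import random
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')            # assumed pool
    try:
        names = ['overwrite_obj_%d' % i for i in range(16)]
        for _ in range(10000):
            name = random.choice(names)
            ioctx.write_full(name, os.urandom(64 * 1024))   # full object rewrite
    finally:
        ioctx.close()
finally:
    cluster.shutdown()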
On Wed, Mar 16, 2016 at 9:46 AM, Kenneth Waegeman
wrote:
> Hi all,
>
> Quick question: Does CephFS pass the fadvise DONTNEED flag and take it into
> account?
> I want to use the --drop-cache option of rsync 3.1.1 so as not to fill the cache
> when rsyncing to CephFS
It looks like ceph-fuse unfortunatel
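For reference, what rsync's --drop-cache boils down to on the client side is roughly
the following sketch: write, fsync, then posix_fadvise(POSIX_FADV_DONTNEED) to ask the
kernel to drop the cached pages. Whether the CephFS client honours that hint is exactly
the question above; the path below is a placeholder.

# Write a file, flush it, then advise the kernel to drop its cached pages.
import os

path = '/data/cephtest/dontneed-demo'            # placeholder path on CephFS
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
try:
    os.write(fd, b'x' * (4 * 1024 * 1024))
    os.fsync(fd)
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)   # length 0 = whole file
finally:
    os.close(fd)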
Hi
The PHP AWS SDK with some personal modifications can do that.
First, you need a functional PHP AWS SDK working with your radosgw and an
account (access/secret key) with metadata caps.
I use AWS SDK version 2.8.22:
$aws = Aws::factory('config.php');
$this->s3client = $aws->get('s3');
http://docs.aws.amazon.c
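For comparison, a rough equivalent of the same connection in Python with boto; the
endpoint, keys, and bucket listing below are placeholders/assumptions:

# Connect to radosgw over its S3 API and list the account's buckets.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',              # placeholder credentials
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',                      # placeholder radosgw endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

for bucket in conn.get_all_buckets():
    print(bucket.name, bucket.creation_date)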
Hello,
On Wed, 16 Mar 2016 16:22:06 + Stephen Harker wrote:
> On 2016-02-17 11:07, Christian Balzer wrote:
> >
> > On Wed, 17 Feb 2016 10:04:11 +0100 Piotr Wachowicz wrote:
> >
> >> > > Let's consider both cases:
> >> > > Journals on SSDs - for writes, the write operation returns right
> >
I can raise a tracker for this issue, since it looks like an intermittent
issue that is mostly dependent on specific hardware, or it would be better if
you add all the hardware/OS details in tracker.ceph.com yourself. Also, from your
logs it looks like you have a resource-busy problem:
Error: Failed to add partition