Sam/Sage,
I saw that Giant was forked off today. We need the pull request
(https://github.com/ceph/ceph/pull/2440) to be in Giant, so could you please
merge it into Giant when it is ready?
Thanks & Regards
Somnath
-Original Message-
From: Samuel Just [mailto:sam.j...@inktank.com]
Se
Hi,
as a ceph user, it would be wonderful to have this in Giant;
the optracker performance impact is really huge (see my SSD benchmark on the
ceph-users mailing list).
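(As a stopgap until the optimized code is merged, the op tracker can also be
switched off entirely; a minimal sketch, assuming your release already exposes
the osd_enable_op_tracker option:)

# ceph.conf, in the [osd] section
osd enable op tracker = false

# or injected at runtime, without restarting the OSDs
ceph tell osd.* injectargs '--osd_enable_op_tracker=false'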
Regards,
Alexandre Derumier
- Mail original -
De: "Somnath Roy"
À: "Samuel Just"
Cc: "Sage Weil" , ceph-de...@vger.kernel.org,
ce
Sorry, that was the wrong log. There was some issue with the ceph user while
running yum remotely, so I tried to install ceph as the root user. Below is
the log:
[root@ceph-admin ~]# ceph-deploy install ceph-osd
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cl
On Sat, 13 Sep 2014, Alexandre DERUMIER wrote:
> Hi,
> as a ceph user, it would be wonderful to have this in Giant;
> the optracker performance impact is really huge (see my SSD benchmark on the
> ceph-users mailing list).
Definitely. More importantly, it resolves a few crashes we've observed.
It's going throug
Thanks Sage!
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Saturday, September 13, 2014 7:32 AM
To: Alexandre DERUMIER
Cc: Somnath Roy; ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com; Samuel
Just
Subject: Re: [ceph-users] OpTracker optimization
On Sat, 13 Sep
Hello Cephers,
I have created a cache pool, and it looks like the cache tiering agent is not
able to flush/evict data as per the defined policy. However, when I manually
evict/flush data, it migrates data from the cache tier to the storage tier.
Kindly advise if there is something wrong with the policy or anything else.
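(For reference, the manual flush/evict described above can be driven with the
rados tool; a minimal sketch, assuming the cache pool is named cache-pool:)

rados -p cache-pool cache-flush-evict-all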
Hello guys,
I've been trying to map an rbd disk to run some testing and I've noticed that
while I can successfully read from the rbd image mapped to /dev/rbdX, I am
failing to reliably write to it. Sometimes write tests work perfectly well,
especially if I am using large block sizes. But often
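(A minimal sketch of this kind of write test, assuming a hypothetical image
testimg in the default rbd pool that maps to /dev/rbd0:)

rbd create testimg --size 10240 --pool rbd
rbd map testimg --pool rbd
# large-block direct writes -- usually fine
dd if=/dev/zero of=/dev/rbd0 bs=4M count=256 oflag=direct
# small-block direct writes -- the case reported as unreliable
dd if=/dev/zero of=/dev/rbd0 bs=4k count=10000 oflag=direct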
Hi Karan,
Maybe try setting the dirty byte ratio (flush) and the full ratio (eviction)
and see if it makes any difference:
- cache_target_dirty_ratio .1
- cache_target_full_ratio .2
Tune the percentages as desired relative to target_max_bytes and
target_max_objects. The first threshold re
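A minimal sketch of applying those thresholds, assuming the cache pool is
named cache-pool (adjust the pool name and the target_max_* sizing to your
cluster):

ceph osd pool set cache-pool cache_target_dirty_ratio 0.1
ceph osd pool set cache-pool cache_target_full_ratio 0.2
ceph osd pool set cache-pool target_max_bytes 107374182400    # 100 GB
ceph osd pool set cache-pool target_max_objects 1000000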
David Moreau Simard writes:
>
> Hi,
>
> Trying to update my continuous integration environment... same deployment
> method with the following specs:
> - Ubuntu Precise, Kernel 3.2, Emperor (0.72.2) - Yields a successful,
> healthy cluster.
> - Ubuntu Trusty, Kernel 3.13, Firefly (0.80.5) - I have