>
> Because you are not using a cluster-aware filesystem, the respective
> mounts don't know when changes are made to the underlying block device
> (RBD) by the other mount. What you are doing *will* lead to file
> corruption.
>
> You need to use a distributed filesystem such as GFS2 or CephFS.
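For reference, a minimal sketch of what the CephFS route could look like with the kernel client; the monitor address and keyring path are hypothetical, the point is that every client mounts the same CephFS tree instead of mapping one RBD image on several hosts:
# Hypothetical example: CephFS coordinates concurrent access from many clients;
# a plain local filesystem on a shared RBD image does not.
$ sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret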
No, if you use cache tiering, there is no need to use an SSD journal as well.
From: Florent MONTHEL
Date: 2015-01-17 23:43
To: ceph-users
Subject: [ceph-users] Cache pool tiering & SSD journal
Hi list,
With the cache pool tiering (writeback mode) enhancement, should I keep
using a journal on SSD?
C
On Sun, 18 Jan 2015 10:17:50 AM lidc...@redhat.com wrote:
> No, if you use cache tiering, there is no need to use an SSD journal as well.
Really? Are writes as fast as with SSD journals?
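For anyone following along, a rough sketch of how a writeback cache tier is attached; the pool names and PG count below are made up, assuming a hypothetical SSD-backed pool "hot-pool" placed in front of an existing pool "cold-pool":
# Create the SSD-backed cache pool (PG count is illustrative only)
$ ceph osd pool create hot-pool 128 128
# Attach it as a cache tier in front of the existing data pool
$ ceph osd tier add cold-pool hot-pool
$ ceph osd tier cache-mode hot-pool writeback
# Redirect client traffic for cold-pool through the cache tier
$ ceph osd tier set-overlay cold-pool hot-pool
# The tiering agent needs a hit set to track object usage
$ ceph osd pool set hot-pool hit_set_type bloom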
On 01/17/2015 08:17 PM, lidc...@redhat.com wrote:
No, if you use cache tiering, there is no need to use an SSD journal as well.
Cache tiering and SSD journals serve somewhat different purposes.
In Ceph, all of the data for every single write is written to both the
journal and to the data storage.
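To make the distinction concrete, here is a minimal ceph.conf sketch for putting an OSD's journal on a dedicated SSD partition; the partition path is hypothetical and the size is just a common example:
[osd]
# journal size in MB
osd journal size = 10240

[osd.0]
# hypothetical SSD partition dedicated to this OSD's journal
osd journal = /dev/disk/by-partlabel/osd0-journal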
Hi George,
List disks available:
# $ ceph-deploy disk list {node-name [node-name]...}
Add OSD using osd create:
# $ ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
Or you can use the manual steps to prepare and activate disk described
at
http://ceph.com/docs/master/start/quick-c
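A filled-in example of the two commands above, with hypothetical host and device names (data on sdb, journal on an SSD partition sdc1):
$ ceph-deploy disk list osdnode1
$ ceph-deploy osd create osdnode1:sdb:/dev/sdc1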
Hi,
I have upgraded Firefly to Giant on Debian Wheezy and it went without
any problems.
Jiri
On 16/01/2015 06:49, Erik McCormick wrote:
Hello all,
I've got an existing Firefly cluster on Centos 7 which I deployed with
ceph-deploy. In the latest version of ceph-deploy, it refuses to
handl
Hi Jiri,
thanks for the feedback.
My main concern is whether it's better to add each OSD one by one and wait
for the cluster to rebalance every time, or to add them all at once.
Furthermore, an estimate of the rebalancing time would be great!
Regards,
George
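Not an authoritative answer, but a common approach is to throttle recovery and bring new OSDs in at a low CRUSH weight, raising the weight in steps so each rebalance stays small; a sketch with hypothetical OSD ids and weights:
# Limit backfill/recovery load while the cluster rebalances
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
# Bring a new OSD in with a small CRUSH weight, then raise it gradually
$ ceph osd crush reweight osd.12 0.2
# ... wait for HEALTH_OK, then repeat with a higher weight ...
$ ceph osd crush reweight osd.12 1.0
# Watch the rebalance progress
$ ceph -w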
Hi list,
I'm trying to understand the RGW cache consistency model. My Ceph
cluster has multiple RGW instances with HAProxy as the load balancer.
HAProxy chooses one RGW instance to serve each request (round-robin).
The question is if RGW cache was enabled, which is the default
behavior, th
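For context, the cache is controlled per RGW instance in ceph.conf; the section name below is hypothetical and the values shown are the defaults:
[client.radosgw.gw1]
rgw cache enabled = true      # metadata cache, on by default
rgw cache lru size = 10000    # number of cached entries per instance
As far as I understand, each RGW instance keeps its own metadata cache and invalidations are propagated between instances via watch/notify on the control pool, but I'd be glad to have that confirmed.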
When I write a file named "1234%" in the master region, the rgw-agent sends a copy-object
request containing "x-amz-copy-source:nofilter_bucket_1/1234%" to the
replica region, and it fails with a 404 error.
My analysis is that the rgw-agent can't URL-encode
"x-amz-copy-source:nofilter_bucket_1/1234%", but RGW could decode
Hi John,
Good shot!
I've increased osd_max_write_size to 1 GB (still smaller than the OSD journal
size) and the MDS is still running fine after an hour.
Now checking whether the filesystem is still accessible. Will update from time to time.
Thanks again John.
Regards,
Bazli
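For anyone hitting the same thing, the setting lives in the [osd] section; the value below mirrors what was described above (1 GB, expressed in MB) and must stay below the journal size:
[osd]
osd max write size = 1024   # MB; the default is 90, keep it below "osd journal size"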