Hi Milan,
We've done some tests here and our Hadoop can talk to RGW successfully
with this SwiftFS plugin, but we haven't tried Spark yet. One caveat is
the data locality feature: it requires some special configuration of the
Swift proxy-server, so RGW is not able to achieve data locality.
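For anyone who wants to try the same setup, here is a rough sketch of the
core-site.xml properties involved (names as in the hadoop-openstack SwiftFS
driver; the "rgw" service name, host, port and credentials below are
placeholders, not our actual settings):

  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.service.rgw.auth.url</name>
    <value>http://rgw-host:7480/auth/1.0</value>
  </property>
  <property>
    <name>fs.swift.service.rgw.username</name>
    <value>testuser:swift</value>
  </property>
  <property>
    <name>fs.swift.service.rgw.password</name>
    <value>secret</value>
  </property>
  <property>
    <name>fs.swift.service.rgw.public</name>
    <value>true</value>
  </property>

Buckets are then addressed from Hadoop as swift://<container>.rgw/<path>.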
FWIW, there was some discussion in OpenStack Swift, and their performance tests
showed that 255 is not the best boundary on recent XFS. They decided to use a
large xattr boundary size (65535):
https://gist.github.com/smerritt/5e7e650abaa20599ff34
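On the Ceph side, the closest knobs I'm aware of are the FileStore inline-xattr
limits, e.g. something like the following (option names from config_opts.h;
please double-check the exact defaults before tuning anything):

  [osd]
      filestore max inline xattr size xfs = 65536
      filestore max inline xattrs xfs = 10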
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
Hey Patrick,
Looks like the GMT+8 time for the first day is wrong; should it be 10:00 pm -
7:30 am?
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Patrick McGarry
Sent: Tuesday, June 30, 2015 11:28 PM
To: Ceph Devel; Ceph-Use
Hi Loic,
On Mon, Nov 10, 2014 at 6:44 AM, Loic Dachary wrote:
>
>
> On 05/11/2014 13:57, Jan Pekař wrote:
>> Hi,
>>
>> is there any possibility to change erasure coding pool parameters, i.e. the k and m
>> values, on the fly? I want to add more disks to an existing erasure pool and
>> change the redundancy level.
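(As far as I know, k and m are fixed when the pool is created from an
erasure-code profile, so changing the redundancy level generally means creating
a new profile and pool and migrating the data, roughly like the sketch below;
the profile/pool names and numbers are made up:)

  $ ceph osd erasure-code-profile set myprofile k=6 m=2 ruleset-failure-domain=host
  $ ceph osd pool create ecpool-new 128 128 erasure myprofile
  # then copy the objects over (e.g. rados cppool or an application-level copy)
  # and point clients at the new pool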
Hi lists,
I was trying to do some tests on cache tiering. Currently I have an SSD-backed
pool t0 (1440G) and an HDD-backed pool t1 (8T). If I create 10 RBDs on the
tier, it looks like all these volumes share t0 without any per-volume size
limit.
Is it possible to set a per-RBD quota in the cache tier?
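(The sizing knobs I've found so far, like target_max_bytes, apply to the whole
cache pool rather than to individual images, e.g. something like the following,
with roughly 1440G expressed in bytes:

  $ ceph osd pool set t0 target_max_bytes 1546188226560
  $ ceph osd pool set t0 cache_target_full_ratio 0.8

so a per-RBD limit does not seem to be covered by these.)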
Hi list,
I'm trying to understand the RGW cache consistency model. My Ceph
cluster has multiple RGW instances with HAProxy as the load balancer.
HAProxy would choose one RGW instance to serve each request (round-robin).
The question is: if the RGW cache is enabled, which is the default behavior,
then do I need to list all the RGW instances in ceph.conf so that these RGW
instances will automatically do the cache invalidation if necessary?
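(For reference, this is the kind of per-instance configuration I mean; the
section and host names are made up, and the cache options shown should be the
defaults as far as I know:)

  [client.radosgw.gw1]
      host = gw1
      rgw cache enabled = true
      rgw cache lru size = 10000

  [client.radosgw.gw2]
      host = gw2
      rgw cache enabled = true
      rgw cache lru size = 10000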
Sincerely, Yuan
On Mon, Jan 19, 2015 at 10:58 PM, Gregory Farnum wrote:
> On Sun, Jan 18, 2015 at 6:40 PM, ZHOU Yuan wrote:
>> Hi list,
>>
>> I'm trying to understand the RGW cache consistency model. My Ceph
>> cluster has multiple RGW instances with HAProxy as the load balancer.
>> HAProxy would choose one RGW instance to serve each request (round-robin).
>> The question is: if the RGW cache is enabled, which is the default behavior,
>> then do I need to list all the RGW instances in ceph.conf so that these RGW
>> instances will automatically do the cache invalidation if necessary?
>>
>>
>> Sincerely, Yuan
>>
>>
>> On Mon, Jan 19, 2015 at 10:58 PM, Gregory Farnum wrote:
>> > On Sun, Jan 18, 2015 at
Hi Loic, thanks for the education!
I'm also trying to understand the new 'indep' mode. Is this new mode
designed for Ceph EC pools only? It seems that all of the data in a 3-copy
system are equivalent, so this new algorithm should also work there?
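(For anyone else following: my understanding is that 'indep' is what appears in
the CRUSH rules generated for erasure-coded pools, roughly like the sketch
below, while replicated rules use 'step chooseleaf firstn 0 type host'. The
difference seems to be about keeping the surviving positions stable when a
device fails, rather than anything EC-specific in the data itself -- please
correct me if I got that wrong.)

  rule ecpool {
          ruleset 1
          type erasure
          min_size 3
          max_size 10
          step take default
          step chooseleaf indep 0 type host
          step emit
  }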
Sincerely, Yuan
On Mon, Jan 13, 2014 at 7:37 AM, Loic Dachar
Hi,
From the doc it looks like Hadoop 2.x is not supported by the default
cephfs-hadoop driver yet. You may need a newer hadoop-cephfs.jar if you
want to use YARN:
http://docs.ceph.com/docs/master/cephfs/hadoop/
https://github.com/GregBowyer/cephfs-hadoop
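(If I read that doc correctly, the relevant core-site.xml bits look roughly
like this; the monitor address is a placeholder:)

  <property>
    <name>fs.default.name</name>
    <value>ceph://mon-host:6789/</value>
  </property>
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
  <property>
    <name>ceph.conf.file</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>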
Sincerely, Yuan
On Mon, Oct
Hi,
The "directory" there is likely just a simulated hierarchical structure with '/'
in the object names. Do you mind checking the remaining objects in the Ceph
pool .rgw.buckets?
$ rados ls -p .rgw.buckets | grep default.157931.5_hive
If there are still objects coming out, you might try to delete them from the
pool directly; in a similar case here the issue was fixed after that.
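Something along these lines should work; please double-check the grep pattern
before removing anything for real:

  $ rados ls -p .rgw.buckets | grep '^default\.157931\.5_hive' | \
        while IFS= read -r obj; do rados -p .rgw.buckets rm "$obj"; done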
hope this can help.
thanks, -yuan
-Original Message-
From: 张绍文 [mailto:zhangshao...@btte.net]
Sent: Tuesday, November 3, 2015 1:45 PM
To: Zhou, Yuan
Cc: ceph...@lists.ceph.com; ceph-us...@ceph.com
Subject: Re: [Ceph-cn] librados: Objecter returned from getxattrs r=-2
Hi list,
I ran into some issues customizing librbd (linked with jemalloc) to work with
the stock qemu in Ubuntu Trusty.
The stock qemu depends on librbd1 and librados2 (0.80.x). These two libraries
are installed at /usr/lib/x86_64-linux-gnu/lib{rbd,rados}.so, and that path
is included in /etc/ld.so.conf.
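One way to check which copy qemu actually resolves, and to try forcing the
custom build at run time, is roughly the following (/opt/ceph/lib is just an
example location for the jemalloc build):

  $ ldd /usr/bin/qemu-system-x86_64 | grep -E 'librbd|librados'
  $ LD_LIBRARY_PATH=/opt/ceph/lib ldd /usr/bin/qemu-system-x86_64 | grep -E 'librbd|librados'

since LD_LIBRARY_PATH is searched before the ld.so.conf/ld.so.cache paths.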
Hi Hauke,
It's possibly the XFS issue discussed in the previous thread. I also saw
this issue on some JBOD setups running RHEL 7.3.
Sincerely, Yuan
On Tue, Aug 15, 2017 at 7:38 PM, Hauke Homburg
wrote:
> Hello,
>
>
> I found some errors in the cluster with dmesg -T:
>
> attempt to access