Hi,
Can someone please let me know whether there is any documentation for
installing the Havana release of OpenStack along with Ceph.
Thanks,
Lalitha.M
Hi list,
I've noticed that a patch was submitted in October last year to enable
POSIX ACLs in cephfs, but things have gone very quiet on that front
recently.
We're looking to use cephfs in our organisation for resilient storage of
documents, but without ACLs we would have some issues. Are th
That code is already in the test branch of ceph-client; I think it will go
into the 3.14 kernel.
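Once that lands, I'd expect the standard POSIX tools to work on a kernel-client
mount (you may also need a kernel built with CephFS ACL support and an acl mount
option; I'm not sure of the final form). A rough sketch, with all paths and
names below being placeholders:

$ mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
$ setfacl -m u:alice:rw /mnt/cephfs/docs/report.odt
$ getfacl /mnt/cephfs/docs/report.odt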
Regards
Yan, Zheng
On Tue, Jan 21, 2014 at 7:04 PM, Alex Crow wrote:
> Hi list,
>
> I've noticed that a patch was submitted in October last year to enable POSIX
> ACLs in cephfs, but things have gone very quiet on that front recently.
Hi,
I am trying to use hadoop distcp while copying data from HDFS to S3. Hadoop
distcp divides the data into multiple chunks and sends them in parallel
so that faster performance is achieved. However this is failing against
Ceph's S3 interface, indicating a mismatch between the MD5 and the ETag
returned by S3. Howe
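For reference, this is roughly how I compared them by hand (bucket and file
names are made up, and I'm assuming s3cmd is available). My suspicion is that
a multipart upload's ETag is not a plain MD5 of the whole object, which would
explain the mismatch:

$ md5sum bigfile.dat
$ s3cmd info s3://mybucket/bigfile.dat | grep -i md5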
Hi,
Many thanks for this.
Cheers
Alex
Original Message
*Subject:* Re: [ceph-users] CephFS posix ACLs
*Date:* Tue, 21 Jan 2014 21:47:37 +0800
*From:* Yan, Zheng
*To:* Alex Crow
*CC:* ceph-users@lists.ceph.com
That code is already in the test branch of ceph-client; I think it will go
into the 3.14 kernel.
On Jan 19, 2014, Sage Weil wrote:
> On Sat, 18 Jan 2014, Sage Weil wrote:
>> Which also means this will bite anybody who ran emperor, too. I think I
>> need to introduce some pool flag or something indicating whether the dirty
>> stats should be scrubbed or not, set only on new pools?
> Push
Hi,
I noticed in the documentation that each OSD daemon should use 3 ports,
and so when I set up the cluster, I originally opened enough ports to
accommodate this (with a small margin so that restarts could proceed
even if ports aren't released immediately).
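For reference, I opened them with something like the rule below (the 6800:7100
range is what the docs suggested at the time, so adjust it to your release):

$ iptables -A INPUT -p tcp --dport 6800:7100 -j ACCEPT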
However today I just noticed
Hi Guys,
Thanks for the reply, Sage.
Yes, librados seems to be the right direction for our long-term development.
For the short term, I guess we will stick with CephFS for files, Solr for
metadata and Riak for thumbnails.
Ara
On 01/20/2014 08:24 PM, Sage Weil wrote:
> On Mon, 20 Jan 2014, Ara Sadoyan wrote:
Hi all,
Is there somewhere a list of parameters that can('t) be changed using
the parameter injection (ceph tell ..injectargs)?
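To be concrete, I mean the syntax below (osd.0 and the option are just examples
I picked):

$ ceph tell osd.0 injectargs '--osd_max_backfills 2'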
Thanks!!
Kind regards
Kenneth Waegeman
Hello,
I do not know if it will have all the options, but you can use
$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
This will give the settings for osd.0.
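If you only need one setting (osd_max_backfills here is just an example), you
can filter the output:

$ ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_max_backfills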
Regards,
Laurent Barbe
On 21/01/2014 17:45, Kenneth Waegeman wrote:
Hi all,
Is there somewhere a list of parameters that can('t) be changed using
the parameter injection (ceph tell ..injectargs)?
Almost! The primary OSD sends the data out to its replicas at the same time
as it puts it into its own journal.
-Greg
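If you want to see which OSD acts as primary for a particular object (pool and
object name below are made up), something like this prints the PG and the
acting set, with the primary listed first:

$ ceph osd map rbd some-object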
On Monday, January 20, 2014, Tim Zhang wrote:
> Hi guys,
> I wonder how Ceph stores objects. Consider the object write process: IMO, the
> OSD first gets the object data from the client, then the pri
Hi,
I need a little bit of help.
We have a 4-node Ceph cluster and the clients run into trouble if one
node is down (due to maintenance).
After the node is switched on again, ceph health shows (for a little while):
HEALTH_WARN 4 pgs incomplete; 14 pgs peering; 370 pgs stale; 12 pgs
stuck unclean; 36 req
On Tue, Jan 21, 2014 at 2:23 AM, Lalitha Maruthachalam
wrote:
> Can someone please let me know whether there is any documentation for
> installing Havana release of Openstack along with Ceph.
These slides have some information about how this is done in Mirantis
OpenStack 4.0, including some gotchas.
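As a rough idea of the Ceph side of it (pool names, PG counts and caps below
are only placeholders; the rbd-openstack guide on ceph.com is the authoritative
reference):

$ ceph osd pool create volumes 128
$ ceph osd pool create images 128
$ ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes, allow rx pool=images'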
Udo,
I think you might have better luck using "ceph osd set noout" before doing
maintenance, rather than "ceph osd set nodown", since you want the node to
be marked down to avoid having I/O directed at it (but not marked out, to
avoid having recovery/backfill begin).
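Roughly like this (illustrative only):

$ ceph osd set noout        # before taking the node down
# (do the maintenance, bring the node back up)
$ ceph osd unset noout      # once its OSDs have rejoined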
-Aaron
On Tue, Jan 21, 2014 at 10:0
Hi,
I have a cluster that contains 16 OSDs spread over 4 physical
machines. Each machine runs 4 OSD processes.
Among those, one is periodically using 100% of the CPU. If you
aggregate the total CPU time of the process over long periods, you can
clearly see it uses roughly 6x more CPU than any o
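In case it matters, this is roughly how I have been looking at it (the PID is a
placeholder for the busy ceph-osd process):

$ top -b -n1 -H -p <pid>    # per-thread CPU usage of that ceph-osd
$ perf top -p <pid>         # where it spends its time (needs perf installed)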
On Tue, Jan 21, 2014 at 10:38 AM, Dmitry Borodaenko
wrote:
> On Tue, Jan 21, 2014 at 2:23 AM, Lalitha Maruthachalam
> wrote:
>> Can someone please let me know whether there is any documentation for
>> installing Havana release of Openstack along with Ceph.
>
> These slides have some information a
Hi,
I have a cluster of two KVM hosts and three Ceph servers (OSDs + mons).
I've been doing some basic performance tests, and I discovered that an
FTP server running in the guest is slow compared to an FTP server on
the host. The same with a Samba file server.
For example, ncftpget against the guest rep
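For what it's worth, I have been comparing roughly like this (pool name, mount
point and sizes are placeholders), to separate raw cluster throughput from the
guest I/O path:

$ rados bench -p rbd 30 write    # from the host, against the cluster
$ dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=1024 oflag=direct    # inside the guest, on the RBD-backed disk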
On Tue, Jan 21, 2014 at 10:38 AM, Dmitry Borodaenko
wrote:
> On Tue, Jan 21, 2014 at 2:23 AM, Lalitha Maruthachalam
> wrote:
>> Can someone please let me know whether there is any documentation for
>> installing Havana release of Openstack along with Ceph.
> These slides have some information abo
Hi Sage,
I have a similar question: I need 2 replicas (one on each rack) and I would
like to know whether the following rule always places the primary on rack1?

rule data {
        ruleset 0
        type replicated
        min_size 2
        max_size 2
        step take rack1
        step chooseleaf firstn 1 type host
        step emit
        step take rack2
        s
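In case it helps, I have been checking the resulting mappings with something
like this (mymap.txt is just my decompiled CRUSH map, and rule 0 is the ruleset
above):

$ crushtool -c mymap.txt -o mymap.bin
$ crushtool -i mymap.bin --test --rule 0 --num-rep 2 --show-mappings | head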
Has anyone looked at or better - actually tried - Mellanox's libvma to
accelerate Ceph's inter-OSD, client-OSD, or both? Looks like potential for
drop-in latency improvements, or am I missing something...
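I haven't tried it myself yet; I assume it would just be the usual LD_PRELOAD
mechanism, something like the line below (the flags are only an example):

$ LD_PRELOAD=libvma.so ceph-osd -i 0 -c /etc/ceph/ceph.conf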
--
Cheers,
~Blairo
On Wed, 22 Jan 2014 17:54:44 +1100 Blair Bethwaite wrote:
> Has anyone looked at or better - actually tried - Mellanox's libvma to
> accelerate Ceph's inter-OSD, client-OSD, or both? Looks like potential
> for drop-in latency improvements, or am I missing something...
>
That's why I am going to d