Re: [ceph-users] Cluster never reaching clean after osd out

2015-02-24 Thread Stéphane DUGRAVOT
- Original Message - > I have a cluster of 3 hosts, running Debian wheezy and the backports kernel > 3.16.0-0.bpo.4-amd64. > For testing I did > ~# ceph osd out 20 > from a clean state. > Ceph starts rebalancing; watching ceph -w, one sees the number of pgs stuck unclean > go up and then go down

Re: [ceph-users] client-ceph [can not connect from client][connect protocol feature mismatch]

2015-03-06 Thread Stéphane DUGRAVOT
Hi Sonal, You can refer to this doc to identify your problem. Your error code is 4204, so: * 4000: upgrade to kernel 3.9 * 200: CEPH_FEATURE_CRUSH_TUNABLES2 * 4: CEPH_FEATURE_CRUSH_TUNABLES * http://ceph.com/planet/feature-set-mismatch-error-on-ceph-kernel-clien
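The 4204 reads as a hex bitmask, so the decomposition above can be checked mechanically; a small sketch (the bit meanings are simply the ones listed above):

```shell
# decode the missing-feature value 0x4204 bit by bit
missing=$((0x4204))
for entry in "0x4000:upgrade to kernel 3.9" \
             "0x200:CEPH_FEATURE_CRUSH_TUNABLES2" \
             "0x4:CEPH_FEATURE_CRUSH_TUNABLES"; do
  bit=${entry%%:*}
  meaning=${entry#*:}
  if [ $(( missing & bit )) -ne 0 ]; then
    echo "missing $bit: $meaning"   # prints one line per set bit
  fi
done
```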

[ceph-users] OS file Cache, Ceph RBD cache and Network files systems

2015-03-16 Thread Stéphane DUGRAVOT
if so, what are the recommendations concerning the OS cache ? Thanks a lot. Stephane. -- Université de Lorraine Stéphane DUGRAVOT - Direction du numérique - Infrastructure Jabber : stephane.dugra...@univ-lorraine.fr Tél.: +33 3 83 68 20 98

Re: [ceph-users] OSD Force Removal

2015-03-20 Thread Stéphane DUGRAVOT
- Original Message - > Hi all, can anybody tell me how I can force-delete OSDs? The thing is that > one node got corrupted because of an outage, so there is no way to get those > OSDs up and back; is there any way to force the removal from the ceph-deploy node? Hi, Try the manual procedure: * http://ceph.c
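For reference, the manual removal that the (truncated) link points at boils down to a few CLI steps; a sketch, assuming the lost node held osd.20 and osd.21 (hypothetical IDs):

```shell
# remove dead OSDs from CRUSH, auth and the osdmap
# (run from any node with an admin keyring; IDs 20 and 21 are placeholders)
for id in 20 21; do
  ceph osd out "$id"               # mark out; harmless if already out
  ceph osd crush remove "osd.$id"  # drop it from the CRUSH map
  ceph auth del "osd.$id"          # delete its cephx key
  ceph osd rm "$id"                # finally remove it from the osdmap
done
```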

Re: [ceph-users] OSD Force Removal

2015-03-23 Thread Stéphane DUGRAVOT
1 -2 1 host charlie 1 1 osd.1 up 1 Stephane. Thanks Jesus Chavez SYSTEMS ENGINEER-C.SALES jesch...@cisco.com Phone: +52 55 5267 3146 Mobile: +51 1 5538883255 CCIE - 44433 On Mar 20, 2015, at 3:49 AM, Stéphane DUGRAVOT < stephane.dugra...@univ-lorraine.fr >

Re: [ceph-users] error creating image in rbd-erasure-pool

2015-03-24 Thread Stéphane DUGRAVOT
- Original Message - > Hi Markus, > On 24/03/2015 14:47, Markus Goldberg wrote: > > Hi, > > this is ceph version 0.93 > > I can't create an image in an rbd erasure pool: > > > > root@bd-0:~# > > root@bd-0:~# ceph osd pool create bs3.rep 4096 4096 replicated > > pool 'bs3.rep' created > > roo
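In pre-Jewel releases such as 0.93, RBD cannot write directly to an erasure-coded pool; the usual workaround is to put a replicated cache tier in front of it. A sketch under that assumption (pool names, PG counts and image size are placeholders):

```shell
ceph osd pool create bs3.ec 4096 4096 erasure        # EC data pool
ceph osd pool create bs3.cache 1024 1024 replicated  # replicated cache pool
ceph osd tier add bs3.ec bs3.cache                   # attach the tier
ceph osd tier cache-mode bs3.cache writeback
ceph osd tier set-overlay bs3.ec bs3.cache           # client I/O goes via the cache
rbd create --pool bs3.ec --size 10240 myimage        # now succeeds through the tier
```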

Re: [ceph-users] when recovering start

2015-04-08 Thread Stéphane DUGRAVOT
- On Apr 7, 2015, at 14:57, lijian wrote: > Haomai Wang, > the mon_osd_down_out_interval is 300, please refer to my settings, and I used > the > cli 'service ceph stop osd.X' to stop an osd > the pg status changed to remap, backfill and recovering ... immediately > so is something else wrong with

Re: [ceph-users] when recovering start

2015-04-08 Thread Stéphane DUGRAVOT
- On Apr 8, 2015, at 14:21, lijian wrote: > Hi Stephane, > I dumped from an osd daemon You have to apply the mon_osd_down_out_interval value to the monitor, not the osd. What is the value on the mon? Stephane. > Thanks > Jian LI > At 2015-04-08 16:05:04, "Stéphane D
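To see the value actually in effect on a monitor (rather than on an OSD), the mon's admin socket can be queried; a sketch, assuming the mon ID matches the short hostname and default socket paths:

```shell
# on the monitor host; $(hostname -s) is assumed to be the mon ID
ceph daemon mon.$(hostname -s) config get mon_osd_down_out_interval
# change it on all mons without a restart:
ceph tell mon.\* injectargs '--mon_osd_down_out_interval 300'
```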

[ceph-users] cephfs ... show_layout deprecated ?

2015-04-22 Thread Stéphane DUGRAVOT
l. Is there a way to change the pool id? How do I use the layout.* xattrs? Thanks, Stephane. Ceph version: ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff) The mount is a kernel mount: mount -t ceph 1.2.3.4:/data1 /cephfs -- Université de Lorraine Stéphane DUGRAVOT - Dire

Re: [ceph-users] cephfs ... show_layout deprecated ?

2015-04-22 Thread Stéphane DUGRAVOT
- On Apr 22, 2015, at 11:39, Wido den Hollander wrote: > On 04/22/2015 11:22 AM, Stéphane DUGRAVOT wrote: > > Hi all, > > When running the command: > > cephfs /cephfs/ show_layout > > The result is: > > WARNING: This tool is deprecated. Use the layout.
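The deprecated cephfs tool's job is done by virtual xattrs on the mounted filesystem; a sketch with the kernel client, assuming the mount point /cephfs and a data pool named data1 (note that the pool of an already-written file cannot be changed; a directory layout only affects newly created files):

```shell
getfattr -n ceph.dir.layout /cephfs            # show a directory's layout
getfattr -n ceph.file.layout /cephfs/somefile  # show a file's layout
# new files created under /cephfs will be placed in pool data1:
setfattr -n ceph.dir.layout.pool -v data1 /cephfs
```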

[ceph-users] backup RGW in federated gateway

2015-06-29 Thread Stéphane DUGRAVOT
area to the secondary area? I would like the copy to run, for example, once every night. Can I avoid using the agent in "automatic" mode and start the sync on demand? Stephane. -- Université de Lorraine Stéphane DUGRAVOT - Direction du numérique - Infrastructure Jabber : stephane.dug

[ceph-users] ceph at "Universite de Lorraine"

2014-10-10 Thread Stéphane DUGRAVOT
logies are so new to us that we should consider professional support in our project approach (especially at this point). Thanks. Stephane. -- Université de Lorraine Stéphane DUGRAVOT - Direction du numérique - Infrastructure Jabber : stephane.dugra...@univ-lorraine.fr Tél.: +33

[ceph-users] erasure coded pool k=7,m=5

2014-12-23 Thread Stéphane DUGRAVOT
= 5 is acceptable? 2. Is this the only one that guarantees us our premise? 3. And more generally, is there a formula (based on the number of DCs, hosts and OSDs) that allows us to calculate the profile? Thanks. Stephane. -- Université de Lorraine Stéphane DUGRAVOT - Direction du
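On the arithmetic behind such a profile: with k data chunks and m coding chunks, the raw-space overhead is (k+m)/k and the pool survives the loss of any m chunks. A sketch of creating and inspecting a k=7,m=5 profile (profile and pool names are assumptions; ruleset-failure-domain was the parameter name in pre-Jewel releases):

```shell
ceph osd erasure-code-profile set ec75 k=7 m=5 ruleset-failure-domain=host
ceph osd erasure-code-profile get ec75
# overhead = (7+5)/7 ≈ 1.71x raw space; any 5 of the 12 chunks may be lost
ceph osd pool create ecpool 4096 4096 erasure ec75
```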

Re: [ceph-users] backup of radosgw config

2016-11-18 Thread Stéphane DUGRAVOT
- On Nov 3, 2016, at 5:18, Thomas wrote: > Hi guys, Hi Thomas, This is a question I have also asked myself ... Maybe something like: radosgw-admin zonegroup get radosgw-admin zone get And for each user: radosgw-admin metadata get user:uid Anyone? Stephane. > I'm not sure this was ask
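The commands above can be scripted into a simple nightly dump; a sketch, assuming jq is available and the default realm is used (the output file names are arbitrary):

```shell
radosgw-admin zonegroup get > zonegroup.json
radosgw-admin zone get > zone.json
# one JSON file per user; `metadata list user` returns a JSON array of uids
for uid in $(radosgw-admin metadata list user | jq -r '.[]'); do
  radosgw-admin metadata get "user:$uid" > "user-$uid.json"
done
```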

Re: [ceph-users] Configuring Ceph RadosGW with SLA based rados pools

2016-11-18 Thread Stéphane DUGRAVOT
- On Nov 4, 2016, at 21:17, Andrey Ptashnik wrote: > Hello Ceph team! > I'm trying to create different pools in Ceph in order to have different tiers > (some fast, small and expensive, others plain big and cheap), so > certain users will be tied to one pool or another. > - I crea
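One way to tie users to specific pools in that era of RGW is placement targets in the zone configuration; a sketch, where the placement name and pool names are assumptions (the target must also appear in the zonegroup's placement_targets list):

```shell
radosgw-admin zone get > zone.json
# edit zone.json and add under "placement_pools" something like:
#   { "key": "fast-placement",
#     "val": { "index_pool": "default.rgw.fast.index",
#              "data_pool": "default.rgw.fast.data" } }
radosgw-admin zone set < zone.json
# pin a user to the target by editing "default_placement" in its metadata:
radosgw-admin metadata get user:someuser > user.json
radosgw-admin metadata put user:someuser < user.json   # after editing user.json
```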

[ceph-users] maximum number of chunks/files with civetweb ? (status= -2010 http_status=400)

2016-12-21 Thread Stéphane DUGRAVOT
Hi all, On top of our ceph cluster, one application uses the rados gateway/S3. This application does not use the multipart S3 API; instead it splits files into chunks of the desired size (for example 1 MB), since it has to work on top of several types of storage. In every test, the application hangs when uploadin