- Original Message -
> I have a Cluster of 3 hosts, running Debian wheezy and Backports Kernel
> 3.16.0-0.bpo.4-amd64.
> For testing I did a
> ~# ceph osd out 20
> from a clean state.
> Ceph starts rebalancing; watching ceph -w, one sees the number of PGs stuck
> unclean go up and then back down.
Hi Sonal,
You can refer to this doc to identify your problem.
Your error code is 4204, which decomposes as:
* 4000: upgrade to kernel 3.9
* 200: CEPH_FEATURE_CRUSH_TUNABLES2
* 4: CEPH_FEATURE_CRUSH_TUNABLES
* http://ceph.com/planet/feature-set-mismatch-error-on-ceph-kernel-clien
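If upgrading the client kernel is not an option, the usual workaround is to relax the CRUSH tunables on the cluster side. A minimal sketch (read the post above first, since changing tunables can trigger data movement):
# Fall back to legacy CRUSH tunables so older kernel clients can connect.
# Warning: this usually triggers a rebalance.
ceph osd crush tunables legacy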
if so, what are the recommendations concerning the OS cache?
Thanks a lot.
Stephane.
--
Université de Lorraine
Stéphane DUGRAVOT - Direction du numérique - Infrastructure
Jabber : stephane.dugra...@univ-lorraine.fr
Tél.: +33 3 83 68 20 98
- Original Message -
> Hi all, can anybody tell me how I can force-delete OSDs? The thing is that
> one node got corrupted because of an outage, so there is no way to get those
> OSDs up and back. Is there any way to force the removal from the ceph-deploy node?
Hi,
Try the manual procedure:
*
http://ceph.c
-2 1     host charlie
 1 1         osd.1 up 1
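For a dead node the manual removal boils down to this (a sketch, assuming osd.1 is one of the lost OSDs; repeat per OSD, from any admin node):
# Remove the OSD from the CRUSH map so no data maps to it anymore
ceph osd crush remove osd.1
# Delete its cephx key
ceph auth del osd.1
# Finally remove the OSD id itself from the cluster
ceph osd rm 1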
Stephane.
Thanks
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255
CCIE - 44433
On Mar 20, 2015, at 3:49 AM, Stéphane DUGRAVOT <stephane.dugra...@univ-lorraine.fr> wrote:
- Original Message -
> Hi Markus,
> On 24/03/2015 14:47, Markus Goldberg wrote:
> > Hi,
> > this is ceph version 0.93
> > I can't create an image in an rbd-erasure-pool:
> >
> > root@bd-0:~#
> > root@bd-0:~# ceph osd pool create bs3.rep 4096 4096 replicated
> > pool 'bs3.rep' created
> > roo
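For what it's worth, in that era RBD could not write directly to an erasure-coded pool; the usual setup was a replicated cache tier in front of it. A sketch with hypothetical pool names:
# Erasure-coded base pool
ceph osd pool create bs3.ec 4096 4096 erasure
# Replicated cache pool layered on top; clients effectively talk to the tier
ceph osd pool create bs3.cache 4096 4096 replicated
ceph osd tier add bs3.ec bs3.cache
ceph osd tier cache-mode bs3.cache writeback
ceph osd tier set-overlay bs3.ec bs3.cache
# Now creating an image in the base pool should succeed
rbd create --pool bs3.ec --size 10240 image1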
- On Apr 7, 2015, at 14:57, lijian wrote:
> Haomai Wang,
> the mon_osd_down_out_interval is 300, please refer to my settings, and I use
> the CLI 'service ceph stop osd.X' to stop an OSD.
> The PG status changes to remap, backfill and recovering ... immediately,
> so something may be wrong with
- On Apr 8, 2015, at 14:21, lijian wrote:
> Hi Stephane,
> I dumped it from an OSD daemon
You have to apply the mon_osd_down_out_interval value to the monitor, not the OSD.
What is the value on the mon?
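You can check it through the monitor's admin socket, for example (assuming a mon id of "a"; adjust to your deployment):
# Ask the running monitor for its effective value
ceph daemon mon.a config get mon_osd_down_out_interval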
Stephane.
> Thanks
> Jian LI
> At 2015-04-08 16:05:04, "Stéphane D
Is there a way
to change the pool id? How can I use the layout.* xattrs?
Thanks,
Stephane.
Ceph version :
ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
The mount is a kernel mount :
mount -t ceph 1.2.3.4:/data1 /cephfs
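For the record, on a kernel mount the layout is exposed through the ceph.* virtual xattrs. A sketch (hypothetical pool name and paths; the pool of a file can only be changed while the file is still empty):
# Read the layout of an existing file
getfattr -n ceph.file.layout /cephfs/somefile
# Create an empty file and point its data at another pool
touch /cephfs/newfile
setfattr -n ceph.file.layout.pool -v data2 /cephfs/newfile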
--
Université de Lorraine
Stéphane DUGRAVOT - Dire
- On Apr 22, 2015, at 11:39, Wido den Hollander wrote:
> On 04/22/2015 11:22 AM, Stéphane DUGRAVOT wrote:
> > Hi all,
> > When running command :
> > cephfs /cephfs/ show_layout
> > The result is :
> > WARNING: This tool is deprecated. Use the layout.* xattrs
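The replacement for the deprecated tool is reading those xattrs directly, e.g. (a sketch; note the attribute only shows up on directories that have an explicit layout set):
# Rough equivalent of 'cephfs /cephfs/ show_layout' on a kernel mount
getfattr -n ceph.dir.layout /cephfs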
area to secondary area?
I would like the copy to run, for example, once every night. Can I avoid using
the agent in "automatic" mode and instead start the sync on demand?
Stephane.
--
Université de Lorraine
Stéphane DUGRAVOT - Direction du numérique - Infrastructure
Jabber : stephane.dug
logies are so new to us
that we should consider professional support in our project approach
(especially at this point).
Thanks.
Stephane.
--
Université de Lorraine
Stéphane DUGRAVOT - Direction du numérique - Infrastructure
Jabber : stephane.dugra...@univ-lorraine.fr
Tél.: +33
= 5 is acceptable?
2. Is this the only one that guarantees us our premise?
3. More generally, is there a formula (based on the number of DCs, hosts and
OSDs) that lets us calculate the profile? See the sketch below.
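For reference, we would declare such a profile like this, with made-up k/m values (on pre-Luminous releases the key is ruleset-failure-domain, crush-failure-domain later):
# 4 data chunks + 2 coding chunks, no two chunks on the same host
ceph osd erasure-code-profile set dcprofile k=4 m=2 ruleset-failure-domain=host
ceph osd erasure-code-profile get dcprofile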
Thanks.
Stephane.
--
Université de Lorraine
Stéphane DUGRAVOT - Direction du
- On Nov 3, 2016, at 5:18, Thomas wrote:
> Hi guys,
Hi Thomas,
This is a question I also asked myself ...
Maybe something like :
radosgw-admin zonegroup get
radosgw-admin zone get
And for each user :
radosgw-admin metadata get user:uid
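Scripted, the per-user dump could look like this (untested sketch; assumes jq is installed):
# 'metadata list user' prints a JSON array of uids
for uid in $(radosgw-admin metadata list user | jq -r '.[]'); do
    radosgw-admin metadata get "user:$uid"
done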
Anyone ?
Stephane.
> I'm not sure this was ask
- On Nov 4, 2016, at 21:17, Andrey Ptashnik wrote:
> Hello Ceph team!
> I’m trying to create different pools in Ceph in order to have different tiers
> (some are fast, small and expensive and others are plain big and cheap), so
> certain users will be tied to one pool or another.
> - I crea
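If those users are plain rados/RBD clients, tying each one to its pool is typically done with cephx caps rather than at pool creation. A sketch with hypothetical names:
# This key can only read/write objects in the fast pool
ceph auth get-or-create client.fastuser mon 'allow r' osd 'allow rwx pool=fast-pool'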
Hi all,
On top of our Ceph cluster, one application uses the RADOS Gateway / S3.
This application does not use the multipart S3 API; instead it splits files into
chunks of the desired size (for example 1 MB), since it has to work on top of
several types of storage.
In every test, the application hangs when uploading
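For what it's worth, each chunk ends up as an ordinary single S3 PUT against the gateway, which can be reproduced from the CLI with something like (illustrative only; endpoint, bucket and file names are placeholders):
# Upload one 1 MB chunk as a plain PUT, no multipart involved
s3cmd --host=rgw.example.com --host-bucket='%(bucket)s.rgw.example.com' put chunk-0001.bin s3://testbucket/bigfile/chunk-0001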