Hi Dietmar,
I asked the same question not long ago, so this may be relevant to you:
http://www.spinics.net/lists/ceph-users/msg24359.html
Cheers,
Si
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Dietmar Rieder
> Sent: 01 March 2016 1
-Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: 17 December 2015 23:54
> To: John Spray
> Cc: Simon Hallam; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Metadata Server (MDS) Hardware Suggestions
>
> On Thu, Dec 17, 2015 at 2:06 PM, Joh
NVMe SSDs look like the right
avenue, or will standard SATA SSDs suffice?
Thanks in advance for your help!
Simon Hallam
The way I look at it is:
Would you normally put 18*2TB disks in a single RAID5 volume? If the answer is
no, then a replication factor of 2 is not enough.
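(For reference, the replication level is just a per-pool setting; a minimal sketch, assuming a pool named "data" - the pool name is only an example:)
# keep 3 copies of every object, and acknowledge writes once 2 copies exist
ceph osd pool set data size 3
ceph osd pool set data min_size 2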
Cheers,
Simon
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Javier
C.A.
Sent: 02 October 2015 09:58
To: ceph-use
This may help:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-September/004295.html
Cheers,
Simon
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vickie
ch
Sent: 16 September 2015 10:58
To: ceph-users
Subject: [ceph-users] Deploy osd with btrfs not success.
Hi German,
This is what I’m running to redo an OSD as btrfs (not sure if this is the exact
error you’re getting):
# data disk letters for OSDs 12-23 (sda..sdl)
DISK_LETTER=( a b c d e f g h i j k l )
i=0
for OSD_NUM in {12..23}; do
    # stop the OSD, unmount its data directory, and drop its auth key
    sudo /etc/init.d/ceph stop osd.${OSD_NUM}
    sudo umount /var/lib/ceph/osd/ceph-${OSD_NUM}
    sudo ceph auth del osd.${OSD_NUM}
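    # (The rest of the loop was cut off in the archive. What follows is only a
    #  rough sketch of how the remaining steps typically look, not Simon's
    #  original commands; the device letters, partition number and mount
    #  options are guesses.)
    sudo mkfs.btrfs -f /dev/sd${DISK_LETTER[$i]}1
    sudo mount -o noatime /dev/sd${DISK_LETTER[$i]}1 /var/lib/ceph/osd/ceph-${OSD_NUM}
    # rebuild the OSD data directory and key, re-register the key, restart the OSD
    sudo ceph-osd -i ${OSD_NUM} --mkfs --mkkey
    sudo ceph auth add osd.${OSD_NUM} osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-${OSD_NUM}/keyring
    sudo /etc/init.d/ceph start osd.${OSD_NUM}
    i=$((i + 1))
done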
Hi Greg, Zheng,
Is this fixed in a later version of the kernel client? Or would it be wise for
us to start using the fuse client?
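For context, the two mount methods we would be choosing between look roughly like this (the monitor address, secret file and mount point below are just placeholders):
# kernel client
sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# FUSE client
sudo ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs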
Cheers,
Simon
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: 31 August 2015 13:02
> To: Yan, Zheng
> C
0 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@f23-alpha ~]# ceph -v
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
Cheers,
Simon
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: 27 August 2015 14:24
> To: Yan, Zheng
> Cc: Simo
they don't even attempt to
reconnect until I plug the Ethernet cable back into the original MDS?
Cheers,
Simon
> -Original Message-
> From: Yan, Zheng [mailto:z...@redhat.com]
> Sent: 24 August 2015 12:28
> To: Simon Hallam
> Cc: ceph-users@lists.ceph.com; Gregory Fa
difference.
Cheers,
Simon
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: 21 August 2015 12:16
> To: Simon Hallam
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Testing CephFS
>
> On Thu, Aug 20, 2015 at 11:07 AM,
ecause the clients ceph version?
Cheers,
Simon Hallam
Linux Support & Development Officer
This page explains what happens quite well:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/#flapping-osds
"We recommend using both a public (front-end) network and a cluster (back-end)
network so that you can better meet the capacity requirements of object
replication. An
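In ceph.conf that split looks something like this (the subnets here are only placeholders):
[global]
    public network  = 192.168.100.0/24   ; client and monitor traffic
    cluster network = 192.168.200.0/24   ; replication and recovery traffic between OSDs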