Re: [ceph-users] MDS memory sizing

2016-03-01 Thread Simon Hallam
Hi Dietmar, I asked the same question not long ago, so this may be relevant to you: http://www.spinics.net/lists/ceph-users/msg24359.html Cheers, Si > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Dietmar Rieder > Sent: 01 March 2016 1
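For context: in the Ceph releases of this era, MDS memory use was governed by an inode-count cache limit rather than a byte limit, so memory sizing follows from how many inodes you let the MDS cache. A hedged ceph.conf sketch (the value shown is an illustrative assumption, not a recommendation):

```
[mds]
; cache limit is counted in inodes, not bytes;
; MDS resident memory grows roughly in proportion to it
mds cache size = 1000000
```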

Re: [ceph-users] Metadata Server (MDS) Hardware Suggestions

2015-12-22 Thread Simon Hallam
-Original Message- > From: Gregory Farnum [mailto:gfar...@redhat.com] > Sent: 17 December 2015 23:54 > To: John Spray > Cc: Simon Hallam; ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Metadata Server (MDS) Hardware Suggestions > > On Thu, Dec 17, 2015 at 2:06 PM, Joh

[ceph-users] Metadata Server (MDS) Hardware Suggestions

2015-12-17 Thread Simon Hallam
NVMe SSDs look like the right avenue, or will standard SATA SSDs suffice? Thanks in advance for your help! Simon Hallam Please visit our new website at www.pml.ac.uk and follow us on Twitter @PlymouthMarine Winner of the Environment & Conservation category, the Charity Awards 2014. Pl

Re: [ceph-users] Predict performance

2015-10-02 Thread Simon Hallam
The way I look at it is: Would you normally put 18*2TB disks in a single RAID5 volume? If the answer is no, then a replication factor of 2 is not enough. Cheers, Simon From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Javier C.A. Sent: 02 October 2015 09:58 To: ceph-use
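The replication factor Simon refers to is the pool's size attribute. A hedged sketch of inspecting and raising it (the pool name rbd is an assumption; these commands require a running cluster):

```
# inspect the current replication factor of a pool
ceph osd pool get rbd size
# raise it to 3 copies, and keep the pool writeable as long as 2 exist
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
```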

Re: [ceph-users] Deploy osd with btrfs not success.

2015-09-16 Thread Simon Hallam
This may help: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-September/004295.html Cheers, Simon From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vickie ch Sent: 16 September 2015 10:58 To: ceph-users Subject: [ceph-users] Deploy osd with btrfs not success.

Re: [ceph-users] ceph-deploy prepare btrfs osd error

2015-09-07 Thread Simon Hallam
Hi German, This is what I’m running to redo an OSD as btrfs (not sure if this is the exact error you’re getting):
DISK_LETTER=( a b c d e f g h i j k l )
i=0
for OSD_NUM in {12..23}; do
sudo /etc/init.d/ceph stop osd.${OSD_NUM}
sudo umount /var/lib/ceph/osd/ceph-${OSD_NUM}
sudo ceph auth del o
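The preview cuts the script off mid-command and the remainder is not recoverable from this page, but the visible steps match the usual remove-and-recreate OSD procedure of the time. A hedged sketch of that general procedure (OSD id 12 and device /dev/sda are illustrative assumptions; run against a live cluster only with care):

```
# stop and unmount the OSD
sudo /etc/init.d/ceph stop osd.12
sudo umount /var/lib/ceph/osd/ceph-12
# remove its auth key and drop it from the CRUSH map and the cluster
sudo ceph auth del osd.12
sudo ceph osd crush remove osd.12
sudo ceph osd rm 12
# recreate it with a btrfs filesystem
sudo ceph-disk prepare --fs-type btrfs /dev/sda
```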

Re: [ceph-users] Testing CephFS

2015-09-01 Thread Simon Hallam
Hi Greg, Zheng, Is this fixed in a later version of the kernel client? Or would it be wise for us to start using the fuse client? Cheers, Simon > -Original Message- > From: Gregory Farnum [mailto:gfar...@redhat.com] > Sent: 31 August 2015 13:02 > To: Yan, Zheng > C

Re: [ceph-users] Testing CephFS

2015-08-28 Thread Simon Hallam
0 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux [root@f23-alpha ~]# ceph -v ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3) Cheers, Simon > -Original Message- > From: Gregory Farnum [mailto:gfar...@redhat.com] > Sent: 27 August 2015 14:24 > To: Yan, Zheng > Cc: Simo

Re: [ceph-users] Testing CephFS

2015-08-24 Thread Simon Hallam
they don't even attempt to reconnect until I plug the Ethernet cable back into the original MDS? Cheers, Simon > -Original Message- > From: Yan, Zheng [mailto:z...@redhat.com] > Sent: 24 August 2015 12:28 > To: Simon Hallam > Cc: ceph-users@lists.ceph.com; Gregory Fa

Re: [ceph-users] Testing CephFS

2015-08-24 Thread Simon Hallam
difference. Cheers, Simon > -Original Message- > From: Gregory Farnum [mailto:gfar...@redhat.com] > Sent: 21 August 2015 12:16 > To: Simon Hallam > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Testing CephFS > > On Thu, Aug 20, 2015 at 11:07 AM,

[ceph-users] Testing CephFS

2015-08-20 Thread Simon Hallam
ecause the clients ceph version? Cheers, Simon Hallam Linux Support & Development Officer

Re: [ceph-users] Networking question

2015-05-07 Thread Simon Hallam
This page explains what happens quite well: http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/#flapping-osds "We recommend using both a public (front-end) network and a cluster (back-end) network so that you can better meet the capacity requirements of object replication. An
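The split the documentation describes is configured with two address ranges in ceph.conf; a minimal sketch (the subnets are assumptions, substitute your own):

```
[global]
; client and monitor traffic (front-end)
public network = 192.168.0.0/24
; replication and recovery traffic between OSDs (back-end)
cluster network = 10.10.0.0/24
```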