[ceph-users] Infernalis

2016-01-08 Thread HEWLETT, Paul (Paul)
Hi Cephers Just fired up first Infernalis cluster on RHEL7.1. The following: [root@citrus ~]# systemctl status ceph-osd@0.service ceph-osd@0.service - Ceph object storage daemon Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled) Active: active (running) since Fri 2016-01-0
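The `@0` in the unit name is systemd's instance specifier at work; a minimal sketch of how templated units map to OSD ids (the inspection commands in the comments assume a ceph node):

```shell
# systemd templated units: ceph-osd@.service is a single template file; the
# text between "@" and ".service" becomes the instance specifier %i, so one
# unit file serves every OSD id (ceph-osd@0, ceph-osd@1, ...).
unit="ceph-osd@0.service"
instance=${unit#*@}            # strip through the "@"
instance=${instance%.service}  # strip the suffix
echo "osd id: $instance"
# On a ceph node you can inspect the template and an instance:
#   systemctl cat ceph-osd@.service
#   systemctl status ceph-osd@0.service
```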

[ceph-users] lttng and Infernalis

2016-01-04 Thread HEWLETT, Paul (Paul)
Hi Cephers and Happy New Year I am under the impression that ceph was refactored to allow dynamic enabling of lttng in Infernalis. Is there any documentation on how to enable lttng in Infernalis? (I cannot find anything…) Regards Paul


Re: [ceph-users] bug 12200

2016-01-04 Thread HEWLETT, Paul (Paul)
Thanks... On 23/12/2015, 21:33, "Gregory Farnum" wrote: >On Wed, Dec 23, 2015 at 5:20 AM, HEWLETT, Paul (Paul) > wrote: >> Seasons Greetings Cephers.. >> >> Can I assume that http://tracker.ceph.com/issues/12200 is fixed in >> Infernalis? >> >

[ceph-users] bug 12200

2015-12-23 Thread HEWLETT, Paul (Paul)
Seasons Greetings Cephers.. Can I assume that http://tracker.ceph.com/issues/12200 is fixed in Infernalis? Any chance that it can be back ported to Hammer ? (I don’t see it planned) We are hitting this bug more frequently than desired so would be keen to see it fixed in Hammer Regards Paul

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread HEWLETT, Paul (Paul)
adding partitions 1-2 Regards Paul On 16/12/2015, 09:36, "Loic Dachary" mailto:l...@dachary.org>> wrote: Hi Paul, On 16/12/2015 10:26, HEWLETT, Paul (Paul) wrote: When installing Hammer on RHEL7.1 we regularly got the message that partprobe failed to inform the kernel. We are using

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread HEWLETT, Paul (Paul)
When installing Hammer on RHEL7.1 we regularly got the message that partprobe failed to inform the kernel. We are using the ceph-disk command from ansible to prepare the disks. The partprobe failure seems harmless and our OSDs always activated successfully. If the Infernalis version of ceph-dis

Re: [ceph-users] Flapping OSDs, Large meta directories in OSDs

2015-12-01 Thread HEWLETT, Paul (Paul)
I believe that ‘filestore xattr use omap’ is no longer used in Ceph – can anybody confirm this? I could not find any usage in the Ceph source code except that the value is set in some of the test software… Paul From: ceph-users mailto:ceph-users-boun...@lists.ceph.com>> on behalf of Tom Chri

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-20 Thread HEWLETT, Paul (Paul)
Flushing a GPT partition table using dd does not work, as the table is duplicated at the end of the disk as well. Use the sgdisk -Z command. Paul From: ceph-users mailto:ceph-users-boun...@lists.ceph.com>> on behalf of Mykola mailto:mykola.dvor...@gmail.com>> Date: Thursday, 19 November 2015 at
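The reason dd alone is not enough: GPT keeps a backup header in the last sectors of the disk, which a zero-fill of the first megabyte never touches. A small simulation on a plain file (standing in for /dev/sdX) illustrates it:

```shell
# Create a 10 MiB "disk" and plant the GPT signature where the primary
# header (LBA 1, byte offset 512) and the backup header (last sector) live.
disk=fake-disk.img
dd if=/dev/zero of="$disk" bs=1M count=10 2>/dev/null
printf 'EFI PART' | dd of="$disk" bs=1 seek=512 conv=notrunc 2>/dev/null
printf 'EFI PART' | dd of="$disk" bs=1 seek=$((10*1024*1024 - 512)) conv=notrunc 2>/dev/null
# A naive wipe of the first MiB removes the primary header only:
dd if=/dev/zero of="$disk" bs=1M count=1 conv=notrunc 2>/dev/null
tail -c 512 "$disk" | grep -ac 'EFI PART'   # backup signature survives: prints 1
# On a real disk, `sgdisk -Z /dev/sdX` zeroes both the primary and backup GPT.
```

This is why a freshly dd-wiped disk can still confuse GPT-aware tooling like ceph-disk: the kernel or the tools may pick up the surviving backup table.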

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-09 Thread HEWLETT, Paul (Paul)
Hi Jan If I can suggest that you look at: http://engineering.linkedin.com/performance/optimizing-linux-memory-management-low-latency-high-throughput-databases where LinkedIn ended up disabling some of the new kernel features to prevent memory thrashing. Search for Transparent Huge Pages.. RHE
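For reference, the knob in question, as a sketch: the sysfs path below exists on most modern kernels, the writes need root, and whether disabling THP actually helps is workload-dependent.

```shell
# The active THP mode is shown bracketed in the sysfs file,
# e.g. "always [madvise] never".
thp=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$thp" ]; then cat "$thp"; else echo "no THP interface"; fi
# To disable for the current boot (as root), and persist via rc.local or the
# kernel cmdline (transparent_hugepage=never):
#   echo never > /sys/kernel/mm/transparent_hugepage/enabled
#   echo never > /sys/kernel/mm/transparent_hugepage/defrag
```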

Re: [ceph-users] maximum object size

2015-09-09 Thread HEWLETT, Paul (Paul)
issing anything ? > >Thanks & Regards >Somnath > >-Original Message- >From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >HEWLETT, Paul (Paul) >Sent: Tuesday, September 08, 2015 8:55 AM >To: ceph-users@lists.ceph.com >Subject: [

Re: [ceph-users] maximum object size

2015-09-08 Thread HEWLETT, Paul (Paul)
I found the description in the source code. Apparently one sets attributes on the object to force striping. Regards Paul On 08/09/2015 17:39, "Ilya Dryomov" wrote: >On Tue, Sep 8, 2015 at 7:30 PM, HEWLETT, Paul (Paul) > wrote: >> Hi Ilya >> >> Thanks for that

[ceph-users] maximum object size

2015-09-08 Thread HEWLETT, Paul (Paul)
Hi All We have recently encountered a problem on Hammer (0.94.2) whereby we cannot write objects > 2GB in size to the rados backend. (NB not RadosGW, CephFS or RBD) I found the following issue https://wiki.ceph.com/Planning/Blueprints/Firefly/Object_striping_in_librados which seems to address th

Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD with ver 0.94.2

2015-06-30 Thread HEWLETT, Paul (Paul)
We are using Ceph (Hammer) on Centos7 and RHEL7.1 successfully. One secret is to ensure that the disk is cleaned prior to ceph-disk command. Because GPT tables are used one must use the 'sgdisk -Z' command to purge the disk of all partition tables. We usually issue this command in the RedHat kicks

Re: [ceph-users] Ceph on RHEL7.0

2015-06-02 Thread HEWLETT, Paul (Paul)
Hi Ken Are these packages compatible with Giant or Hammer? We are currently running Hammer - can we use the RBD kernel module from RH7.1 and is the elrepo version of cephFS compatible with Hammer? Regards Paul On 01/06/2015 17:57, "Ken Dreyer" wrote: >For the sake of providing more clarity re

Re: [ceph-users] systemd unit files and multiple daemons

2015-04-23 Thread HEWLETT, Paul (Paul)** CTR **
What about running multiple clusters on the same host? There is a separate mail thread about being able to run clusters with different conf files on the same host. Will the new systemd service scripts cope with this? Paul Hewlett Senior Systems Engineer Velocix, Cambridge Alcatel-Lucent t: +44 1

Re: [ceph-users] ceph-deploy : systemd unit files not deployed to a centos7 nodes

2015-04-17 Thread HEWLETT, Paul (Paul)** CTR **
I would be very keen for this to be implemented in Hammer and am willing to help test it... Paul Hewlett Senior Systems Engineer Velocix, Cambridge Alcatel-Lucent t: +44 1223 435893 m: +44 7985327353 From: ceph-users [ceph-users-boun...@lists.ceph.com]

[ceph-users] ceph-disk command raises partx error

2015-04-13 Thread HEWLETT, Paul (Paul)** CTR **
Hi Everyone I am using the ceph-disk command to prepare disks for an OSD. The command is: ceph-disk prepare --zap-disk --cluster $CLUSTERNAME --cluster-uuid $CLUSTERUUID --fs-type xfs /dev/${1} and this consistently raises the following error on RHEL7.1 and Ceph Hammer viz: partx: specified ra

Re: [ceph-users] Cascading Failure of OSDs

2015-04-09 Thread HEWLETT, Paul (Paul)** CTR **
I use the folowing: cat /sys/class/net/em1/statistics/rx_bytes for the em1 interface all other stats are available Paul Hewlett Senior Systems Engineer Velocix, Cambridge Alcatel-Lucent t: +44 1223 435893 m: +44 7985327353 From: ceph-users [ceph-user
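The same pattern generalizes: every counter under that statistics directory is a plain text file, so a tiny helper can read any of them (em1 is just the host's NIC name; the demo below uses a throwaway directory shaped like sysfs):

```shell
# net_stat <iface-sysfs-dir> <counter> -- print one kernel network counter.
# On a real host the directory is /sys/class/net/<iface>, and counters
# include rx_bytes, tx_bytes, rx_packets, rx_errors, and so on.
net_stat() {
  cat "$1/statistics/$2"
}
# Demo against a fake interface directory:
mkdir -p demo-iface/statistics
echo 123456 > demo-iface/statistics/rx_bytes
net_stat demo-iface rx_bytes   # prints 123456
```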

Re: [ceph-users] Giant 0.87 update on CentOs 7

2015-03-23 Thread HEWLETT, Paul (Paul)** CTR **
Hi Steffen We have recently encountered the errors described below. Initially one must set check_obsoletes=1 in the yum priorities.conf file. However subsequent yum updates cause problems. The solution we use is to disable the epel repo by default: yum-config-manager --disable epel and
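A sketch of the two pieces described above (paths and commands assume a RHEL/CentOS 7 host; the config is written to an example file here rather than /etc):

```shell
# The yum priorities plugin only resolves epel-vs-ceph.com conflicts cleanly
# if it also honours obsoletes; that is what check_obsoletes=1 turns on.
cat > priorities.conf.example <<'EOF'
[main]
enabled = 1
check_obsoletes = 1
EOF
cat priorities.conf.example
# On a real host this lives at /etc/yum/pluginconf.d/priorities.conf.
# Keep epel installed but opt-in only:
#   yum-config-manager --disable epel
#   yum install <pkg> --enablerepo=epel
```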

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-10 Thread HEWLETT, Paul (Paul)** CTR **
(jeschave) [jesch...@cisco.com] Sent: 10 March 2015 12:15 To: HEWLETT, Paul (Paul)** CTR ** Cc: Wido den Hollander; ceph-users Subject: Re: [ceph-users] New eu.ceph.com mirror machine So EPEL is not requiered? Jesus Chavez SYSTEMS ENGINEER-C.SALES jesch...@cisco.com<mailto:jesch...@cisco.

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-09 Thread HEWLETT, Paul (Paul)** CTR **
March 2015 13:43 To: HEWLETT, Paul (Paul)** CTR **; ceph-users Subject: Re: [ceph-users] New eu.ceph.com mirror machine On 03/09/2015 02:27 PM, HEWLETT, Paul (Paul)** CTR ** wrote: > When did you make the change? > Yesterday > It worked on Friday albeit with these extra lines in ceph.repo:

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-09 Thread HEWLETT, Paul (Paul)** CTR **
epo to eu.ceph.com ? Paul Hewlett Senior Systems Engineer Velocix, Cambridge Alcatel-Lucent t: +44 1223 435893 m: +44 7985327353 From: Wido den Hollander [w...@42on.com] Sent: 09 March 2015 13:43 To: HEWLETT, Paul (Paul)** CTR **; ceph-users Subject:

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-09 Thread HEWLETT, Paul (Paul)** CTR **
Velocix, Cambridge Alcatel-Lucent t: +44 1223 435893 m: +44 7985327353 From: Wido den Hollander [w...@42on.com] Sent: 09 March 2015 12:15 To: HEWLETT, Paul (Paul)** CTR **; ceph-users Subject: Re: [ceph-users] New eu.ceph.com mirror machine On 03/09/2015 12:54

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-09 Thread HEWLETT, Paul (Paul)** CTR **
Hi Wido Has something broken with this move? The following has worked for me repeatedly over the last 2 months: This a.m. I tried to install ceph using the following repo file: [root@citrus ~]# cat /etc/yum.repos.d/ceph.repo [ceph] name=Ceph packages for $basearch baseurl=http://ceph.com/rpm-
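For comparison, a minimal ceph.repo of the shape used in that era (the baseurl, priority line, and gpgkey URL are assumptions reconstructed from period install docs, not taken from this thread; adjust release and distro to taste):

```shell
# Example repo file, written here as ceph.repo.example; the real location is
# /etc/yum.repos.d/ceph.repo. $basearch is expanded by yum, not the shell.
cat > ceph.repo.example <<'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-hammer/el7/$basearch
enabled=1
gpgcheck=1
priority=1
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
EOF
grep baseurl ceph.repo.example
```

The priority= line only takes effect with the yum priorities plugin installed and enabled.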

Re: [ceph-users] Installation failure

2015-02-16 Thread HEWLETT, Paul (Paul)** CTR **
Thanks for that Travis. Much appreciated. Paul Hewlett Senior Systems Engineer Velocix, Cambridge Alcatel-Lucent t: +44 1223 435893 m: +44 7985327353 From: Travis Rhoden [trho...@gmail.com] Sent: 16 February 2015 15:35 To: HEWLETT, Paul (Paul)** CTR

Re: [ceph-users] Installation failure

2015-02-16 Thread HEWLETT, Paul (Paul)** CTR **
435893 m: +44 7985327353 From: Travis Rhoden [trho...@gmail.com] Sent: 16 February 2015 15:00 To: HEWLETT, Paul (Paul)** CTR ** Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Installation failure Hi Paul, Would you mind sharing/posting the contents of your .repo files

[ceph-users] Installation failure

2015-02-16 Thread HEWLETT, Paul (Paul)** CTR **
Hi all I have been installing ceph giant quite happily for the past 3 months on various systems and use an ansible recipe to do so. The OS is RHEL7. This morning on one of my test systems installation fails with: [root@octopus ~]# yum install ceph ceph-deploy Loaded plugins: langpacks, prioriti