Hi Cephers
Just fired up my first Infernalis cluster on RHEL7.1.
The following:
[root@citrus ~]# systemctl status ceph-osd@0.service
ceph-osd@0.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled)
Active: active (running) since Fri 2016-01-0
Hi Cephers and Happy New Year
I am under the impression that ceph was refactored to allow dynamic enabling of
lttng in Infernalis.
Is there any documentation on how to enable lttng in Infernalis? (I cannot
find anything…)
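Not an answer from the docs, but for anyone else searching: from a reading of the Infernalis source, the tracepoints appear to be gated by runtime config options rather than a build flag. A sketch of what I believe works — the option names below are taken from the source tree, not from official documentation, so treat them as unverified:

```ini
# ceph.conf fragment (sketch -- option names unverified)
[osd]
osd tracing = true
osd objectstore tracing = true

[client]
rados tracing = true
rbd tracing = true
```

You would then capture with an ordinary LTTng session (lttng create, lttng enable-event -u 'librbd:*', lttng start).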
Regards
Paul
___
ceph-users
Thanks...
On 23/12/2015, 21:33, "Gregory Farnum" wrote:
>On Wed, Dec 23, 2015 at 5:20 AM, HEWLETT, Paul (Paul)
> wrote:
>> Seasons Greetings Cephers..
>>
>> Can I assume that http://tracker.ceph.com/issues/12200 is fixed in
>> Infernalis?
>>
>
Seasons Greetings Cephers..
Can I assume that http://tracker.ceph.com/issues/12200 is fixed in Infernalis?
Any chance it can be backported to Hammer? (I don’t see it planned.)
We are hitting this bug more frequently than desired and would be keen to see it
fixed in Hammer.
Regards
Paul
adding partitions 1-2
Regards
Paul
On 16/12/2015, 09:36, "Loic Dachary" <l...@dachary.org> wrote:
Hi Paul,
On 16/12/2015 10:26, HEWLETT, Paul (Paul) wrote:
When installing Hammer on RHEL7.1 we regularly got the message that partprobe
failed to inform the kernel. We are using the ceph-disk command from ansible to
prepare the disks. The partprobe failure seems harmless and our OSDs always
activated successfully.
If the Infernalis version of ceph-dis
I believe that ‘filestore xattr use omap’ is no longer used in Ceph – can
anybody confirm this?
I could not find any usage in the Ceph source code except that the value is set
in some of the test software…
Paul
From: ceph-users <ceph-users-boun...@lists.ceph.com>
on behalf of Tom Chri
Flushing a GPT partition table using dd does not work, as the table is
duplicated at the end of the disk as well.
Use the sgdisk -Z command.
Paul
From: ceph-users <ceph-users-boun...@lists.ceph.com>
on behalf of Mykola <mykola.dvor...@gmail.com>
Date: Thursday, 19 November 2015 at
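To illustrate the point about the duplicated table, here is a small self-contained sketch — no real disk is touched; the temp file and offsets are stand-ins — showing that zeroing only the front of a device leaves the backup GPT signature at the end intact:

```shell
# GPT keeps a backup header in the last LBA, so dd-ing zeros over the
# start of the disk does not remove it -- which is why sgdisk -Z exists.
img=$(mktemp)
truncate -s 10M "$img"
# plant the GPT signature at LBA 1 (offset 512) and in the final sector
printf 'EFI PART' | dd of="$img" bs=1 seek=512 conv=notrunc status=none
printf 'EFI PART' | dd of="$img" bs=1 seek=$((10*1024*1024 - 512)) conv=notrunc status=none
# naive "flush": zero only the first MiB, as a dd-based wipe would
dd if=/dev/zero of="$img" bs=1M count=1 conv=notrunc status=none
hits=$(grep -ac 'EFI PART' "$img")   # the backup copy survives
echo "signatures left after zeroing the front: $hits"
rm -f "$img"
```

On a real device the equivalent fix is sgdisk -Z (--zap-all), which clears both the primary and the backup structures.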
Hi Jan
If I can suggest that you look at:
http://engineering.linkedin.com/performance/optimizing-linux-memory-management-low-latency-high-throughput-databases
where LinkedIn ended up disabling some of the new kernel features to
prevent memory thrashing.
Search for Transparent Huge Pages...
RHE
issing anything ?
>
>Thanks & Regards
>Somnath
>
>-Original Message-
>From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>HEWLETT, Paul (Paul)
>Sent: Tuesday, September 08, 2015 8:55 AM
>To: ceph-users@lists.ceph.com
>Subject: [
I found the description in the source code. Apparently one sets attributes
on the object to force striping.
Regards
Paul
On 08/09/2015 17:39, "Ilya Dryomov" wrote:
>On Tue, Sep 8, 2015 at 7:30 PM, HEWLETT, Paul (Paul)
> wrote:
>> Hi Ilya
>>
>> Thanks for that
Hi All
We have recently encountered a problem on Hammer (0.94.2) whereby we
cannot write objects > 2GB in size to the rados backend.
(NB not RadosGW, CephFS or RBD)
I found the following issue
https://wiki.ceph.com/Planning/Blueprints/Firefly/Object_striping_in_librados
which seems to address th
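Until striping lands, one client-side workaround — a sketch only, with sizes scaled down so it runs anywhere; in practice you would split at something comfortably under 2 GB and upload each piece with rados put — is to chunk the object before writing:

```shell
# Split a large object into fixed-size pieces and upload each one
# separately (e.g. 'rados put <name>.part00 big.obj.part.00' per piece).
workdir=$(mktemp -d) && cd "$workdir"
truncate -s 5M big.obj                 # stand-in for a >2GB object
split -b 1M -d big.obj big.obj.part.   # 1M chunks here; use e.g. 1G for real data
parts=$(ls big.obj.part.* | wc -l)
echo "pieces: $parts"
cd - >/dev/null && rm -rf "$workdir"
```

The striping blueprint above (and the object attributes mentioned earlier in the thread) is the proper fix; this is just a stopgap.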
We are using Ceph (Hammer) on Centos7 and RHEL7.1 successfully.
One secret is to ensure that the disk is cleaned prior to running the ceph-disk
command. Because GPT tables are used, one must use the 'sgdisk -Z' command
to purge the disk of all partition tables. We usually issue this command
in the RedHat kicks
Hi Ken
Are these packages compatible with Giant or Hammer?
We are currently running Hammer - can we use the RBD kernel module from
RH7.1, and is the elrepo version of CephFS compatible with Hammer?
Regards
Paul
On 01/06/2015 17:57, "Ken Dreyer" wrote:
>For the sake of providing more clarity re
What about running multiple clusters on the same host?
There is a separate mail thread about being able to run clusters with different
conf files on the same host.
Will the new systemd service scripts cope with this?
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1
I would be very keen for this to be implemented in Hammer and am willing to
help test it...
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353
From: ceph-users [ceph-users-boun...@lists.ceph.com]
Hi Everyone
I am using the ceph-disk command to prepare disks for an OSD.
The command is:
ceph-disk prepare --zap-disk --cluster $CLUSTERNAME --cluster-uuid $CLUSTERUUID
--fs-type xfs /dev/${1}
and this consistently raises the following error on RHEL7.1 and Ceph Hammer viz:
partx: specified ra
I use the following:
cat /sys/class/net/em1/statistics/rx_bytes
for the em1 interface
all other stats are available
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353
From: ceph-users [ceph-user
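A small sketch of turning those counters into a rate (the interface name is illustrative — em1 in the message above; lo is used here only so the snippet runs anywhere):

```shell
# Sample rx_bytes twice and report the delta as an approximate receive rate.
iface=lo    # loopback exists on any Linux box; replace with em1/eth0 etc.
rx1=$(cat /sys/class/net/$iface/statistics/rx_bytes)
sleep 1
rx2=$(cat /sys/class/net/$iface/statistics/rx_bytes)
echo "rx rate on $iface: $(( rx2 - rx1 )) bytes/s"
```

The same statistics directory also exposes tx_bytes, rx_packets, rx_errors and friends.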
Hi Steffen
We have recently encountered the errors described below. Initially one must set
check_obsoletes=1 in the yum priorities.conf file.
However subsequent yum updates cause problems.
The solution we use is to disable the epel repo by default:
yum-config-manager --disable epel
and
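For reference, the setting mentioned above lives in the yum priorities plugin config — the path below is from memory, so verify it on your box:

```ini
# /etc/yum/pluginconf.d/priorities.conf (sketch)
[main]
enabled = 1
check_obsoletes = 1
```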
(jeschave) [jesch...@cisco.com]
Sent: 10 March 2015 12:15
To: HEWLETT, Paul (Paul)** CTR **
Cc: Wido den Hollander; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine
So EPEL is not requiered?
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
March 2015 13:43
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine
On 03/09/2015 02:27 PM, HEWLETT, Paul (Paul)** CTR ** wrote:
> When did you make the change?
>
Yesterday
> It worked on Friday albeit with these extra lines in ceph.repo:
epo to eu.ceph.com ?
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353
From: Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 13:43
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject:
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353
From: Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 12:15
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine
On 03/09/2015 12:54
Hi Wido
Has something broken with this move? The following has worked for me repeatedly
over the last 2 months:
This a.m. I tried to install ceph using the following repo file:
[root@citrus ~]# cat /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-
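For comparison, a minimal ceph.repo of the same shape — the baseurl below is illustrative for Hammer on el7, not necessarily the exact URL in use above:

```ini
# /etc/yum.repos.d/ceph.repo (sketch)
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-hammer/el7/$basearch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
```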
Thanks for that Travis. Much appreciated.
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353
From: Travis Rhoden [trho...@gmail.com]
Sent: 16 February 2015 15:35
To: HEWLETT, Paul (Paul)** CTR
435893 m: +44 7985327353
From: Travis Rhoden [trho...@gmail.com]
Sent: 16 February 2015 15:00
To: HEWLETT, Paul (Paul)** CTR **
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Installation failure
Hi Paul,
Would you mind sharing/posting the contents of your .repo files
Hi all
I have been installing ceph giant quite happily for the past 3 months on
various systems and use
an ansible recipe to do so. The OS is RHEL7.
This morning on one of my test systems installation fails with:
[root@octopus ~]# yum install ceph ceph-deploy
Loaded plugins: langpacks, prioriti