On Sat, Jul 09, 2016 at 10:43:52AM +, Kevan Rehm wrote:
> Greetings,
>
> I cloned the master branch of ceph at https://github.com/ceph/ceph.git
> onto a Centos 7 machine, then did
>
> ./autogen.sh
> ./configure --enable-xio
> make
BTW, you should be defaulting to cmake if you don't have a sp
Hi,
is the ceph admin socket protocol described anywhere? I want to talk directly
to the socket instead of calling the ceph binary. I searched the doc but didn't
find anything useful.
Thanks,
Stefan
You need to set the option in ceph.conf and restart the OSD, I think. But it
will only take effect on future splits or merges; it won't adjust the current
folder layout.
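For illustration only (the option isn't named above): assuming the thread is
about the filestore subfolder split/merge tuning, a ceph.conf sketch might look
roughly like this, with purely hypothetical values:

[osd]
# Hypothetical example values, not a recommendation. A leaf directory is
# reportedly split once it holds more than roughly
# filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects,
# and changing these only affects future splits/merges, not the existing layout.
filestore merge threshold = 40
filestore split multiple = 8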
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Paul
Hi everyone,
I have a problem with drive and partition names swapping on reboot. My Ceph is
Hammer on CentOS7, Dell R730 6xSSD (2xSSD OS RAID1 PERC, 4xSSD=Journal drives),
18x1.8T SAS for OSDs.
Whenever I reboot, drives randomly seem to change names. This is extremely
dangerous and frustrating w
If you can read C code, there is a collectd plugin that talks directly
to the admin socket:
https://github.com/collectd/collectd/blob/master/src/ceph.c
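For a rough idea of what that plugin does on the wire, here is a minimal Python
sketch. The framing is an assumption (null-terminated JSON command in, 4-byte
big-endian length plus JSON payload back), and the socket path and "perf dump"
command below are only placeholders:

#!/usr/bin/env python
# Minimal sketch: talk to a Ceph admin socket directly over a UNIX socket.
# Assumed framing: send a null-terminated JSON command, then read a 4-byte
# big-endian length followed by that many bytes of JSON.
import json
import socket
import struct

def admin_socket_command(asok_path, command):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(asok_path)
    try:
        # The command is a JSON object, e.g. {"prefix": "perf dump"}.
        sock.sendall(json.dumps(command).encode('utf-8') + b'\0')
        raw_len = sock.recv(4)
        if len(raw_len) < 4:
            raise RuntimeError("short read on admin socket")
        (length,) = struct.unpack('>I', raw_len)
        payload = b''
        while len(payload) < length:
            chunk = sock.recv(length - len(payload))
            if not chunk:
                break
            payload += chunk
        return json.loads(payload.decode('utf-8'))
    finally:
        sock.close()

if __name__ == '__main__':
    # Placeholder path; point it at a real daemon's .asok file.
    print(admin_socket_command('/var/run/ceph/ceph-osd.0.asok',
                               {"prefix": "perf dump"}))

Sending {"prefix": "help"} should list the commands a given daemon accepts.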
On 10/07/16 10:36, Stefan Priebe - Profihost AG wrote:
Hi,
is the ceph admin socket protocol described anywhere? I want to talk directly
to
Hello,
>Those 2 servers are running Ceph?
>If so, be more specific, what's the HW like, CPU, RAM, network, journal
>SSDs?
Yes, I was hesitating between GlusterFS and Ceph, but the latter is much more
scalable and future-proof.
Both have the same configuration, namely E5 2628L (6c/12t @ 1.9GHz
On Sun, Jul 10, 2016 at 9:36 AM, Stefan Priebe - Profihost AG
wrote:
> Hi,
>
> is the ceph admin socket protocol described anywhere? I want to talk directly
> to the socket instead of calling the ceph binary. I searched the doc but
> didn't find anything useful.
There's no binary involved in se
Thanks...
Do you know when splitting or merging will happen? Is it enough that a
directory is read, e.g. through scrub? If possible I would like to initiate
the process.
Regards
Paul
On Sun, Jul 10, 2016 at 10:47 AM, Nick Fisk wrote:
> You need to set the option in the ceph.conf and restart the
On 10.07.2016 at 16:33, Daniel Swarbrick wrote:
> If you can read C code, there is a collectd plugin that talks directly
> to the admin socket:
>
> https://github.com/collectd/collectd/blob/master/src/ceph.c
Thanks, I can read that.
Stefan
>
> On 10/07/16 10:36, Stefan Priebe - Profihost AG wro
On 10.07.2016 at 20:08, John Spray wrote:
> On Sun, Jul 10, 2016 at 9:36 AM, Stefan Priebe - Profihost AG
> wrote:
>> Hi,
>>
>> is the ceph admin socket protocol described anywhere? I want to talk
>> directly to the socket instead of calling the ceph binary. I searched the
>> doc but didn't fi
Hi,
is there a recommended way to connect to the ceph admin socket from non-root,
e.g. from a monitoring system?
In the past the sockets were created with 777 permissions, but now they're 755,
which prevents our monitoring daemon from connecting. I don't want
to set CAP_DAC_OVERRIDE for the monitoring
On Sun, Jul 10, 2016 at 09:32:33PM +0200, Stefan Priebe - Profihost AG wrote:
>
> On 10.07.2016 at 16:33, Daniel Swarbrick wrote:
> > If you can read C code, there is a collectd plugin that talks directly
> > to the admin socket:
> >
> > https://github.com/collectd/collectd/blob/master/src/ceph.
Hello,
On Sun, 10 Jul 2016 12:46:39 + (UTC) William Josefsson wrote:
> Hi everyone,
>
> I have a problem with drive and partition names swapping on reboot. My
> Ceph is Hammer on CentOS7, Dell R730 6xSSD (2xSSD OS RAID1 PERC,
> 4xSSD=Journal drives), 18x1.8T SAS for OSDs.
>
> Whenever I rebo
Hello,
On Sun, 10 Jul 2016 14:33:36 + (GMT) m.da...@bluewin.ch wrote:
> Hello,
>
> >Those 2 servers are running Ceph?
> >If so, be more specific, what's the HW like, CPU, RAM, network, journal
> >SSDs?
>
> Yes, I was hesitating between GlusterFS and Ceph but the latter is much
> more scala
Hello,
This is an interesting topic and I would like to know a solution to this
problem. Does that mean we should never use Dell storage as a Ceph storage
device? I have a similar setup, with 4 Dell iSCSI LUNs attached to an OpenStack
controller and compute node in an active-active configuration.
As they were in
Hi Brendan,
On Friday, July 8, 2016, Brendan Moloney wrote:
> Hi,
>
> We have a smallish Ceph cluster for RBD images. We use snapshotting for
> local incremental backups. I would like to start sending some of these
> snapshots to an external cloud service (likely Amazon) for disaster
> recovery
Hi cephers.
I need your help with some issues.
The Ceph cluster version is Jewel (10.2.1), and the filesystem is btrfs.
I run 1 Mon and 48 OSDs across 4 nodes (each node has 12 OSDs).
I've experienced one of the OSDs killing itself.
It always issued a suicide timeout message.
Detailed logs are below.
=
On Mon, Jul 11, 2016 at 11:48:57AM +0900, 한승진 wrote:
> Hi cephers.
>
> I need your help with some issues.
>
> The Ceph cluster version is Jewel (10.2.1), and the filesystem is btrfs.
>
> I run 1 Mon and 48 OSDs across 4 nodes (each node has 12 OSDs).
>
> I've experienced one of the OSDs killing itself.
On Mon, Jul 11, 2016 at 1:21 PM, Brad Hubbard wrote:
> On Mon, Jul 11, 2016 at 11:48:57AM +0900, 한승진 wrote:
>> Hi cephers.
>>
>> I need your help with some issues.
>>
>> The Ceph cluster version is Jewel (10.2.1), and the filesystem is btrfs.
>>
>> I run 1 Mon and 48 OSDs across 4 nodes (each node has 12 O
Hello, guys
I'm facing poor performance in a Windows 2k12r2 instance running on RBD
(OpenStack cluster). The RBD disk is 17 TB in size. My Ceph cluster consists of:
- 3 monitor nodes (Celeron G530/6GB RAM, DualCore E6500/2GB RAM, Core2Duo
E7500/2GB RAM). Each node has a 1Gbit network to the fron
Hello,
On Mon, 11 Jul 2016 07:35:02 +0300 K K wrote:
>
> Hello, guys
>
> I'm facing poor performance in a Windows 2k12r2 instance running
> on RBD (OpenStack cluster). The RBD disk is 17 TB in size. My Ceph cluster
> consists of:
> - 3 monitor nodes (Celeron G530/6GB RAM, DualCore E6500/2G
> I hope the fastest of these MONs (CPU and storage) has the lowest IP
> number and thus is the leader.
no, the lowest IP has the slowest CPU. But Zabbix didn't show any load at all on the mons.
> Also what Ceph, OS, kernel version?
Ubuntu 16.04, kernel 4.4.0-22
> Two GbE ports, given the "frontend" up ther
Hi Cephers,
I am proposing dropping support for i386, as we don't compile Ceph with
any i386 gitbuilder now [1] and hence don't test i386 builds on
sepia on a regular basis. Also, based on the assumption that people
don't use i386 in production, I think we can drop it from the minimum
hardware d