Re: [ceph-users] SSD randwrite performance

2016-05-26 Thread Christian Balzer
Hello, On Wed, 25 May 2016 11:30:03 +0300 Max A. Krasilnikov wrote: > Hello! > > On Wed, May 25, 2016 at 11:45:29AM +0900, chibi wrote: > > > > Hello, > > > On Tue, 24 May 2016 21:20:49 +0300 Max A. Krasilnikov wrote: > > >> Hello! > >> > >> I have cluster with 5 SSD drives as OSD backed

Re: [ceph-users] Falls cluster then one node switch off

2016-05-26 Thread Никитенко Виталий
Hello! >>mon_osd_down_out_subtree_limit = host Thanks! This really helped me!!! >>So again, not a full duplication of the data, but a significant amount. If, on the host that was left alone, one OSD goes down at that moment, will ALL data still be available? Or only part of the data, the PGs that are marked as 'active
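A minimal sketch of where the setting discussed above would live, assuming it is set in ceph.conf under [mon] (or [global]) on the monitors; with the limit set to "host", the monitors stop automatically marking a whole host's OSDs out when that host goes down:

    [mon]
    mon_osd_down_out_subtree_limit = host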

Re: [ceph-users] SSD randwrite performance

2016-05-26 Thread Max A. Krasilnikov
Hello! On Thu, May 26, 2016 at 04:01:27PM +0900, chibi wrote: > >>> I have cluster with 5 SSD drives as OSD backed by SSD journals, one > >>> per osd. One osd per node. > >>> > >> More details will help identify other potential bottlenecks, such as: > >> CPU/RAM > >> Kernel, OS version. >> >>

Re: [ceph-users] Falls cluster then one node switch off

2016-05-26 Thread Christian Balzer
Hello, On Thu, 26 May 2016 15:42:03 +0700 Никитенко Виталий wrote: > Hello! > >>mon_osd_down_out_subtree_limit = host > Thanks! This really helped me!!! > Glad to help. ^.^ > >>So again, not a full duplication of the data, but a significant amount. > If on the host who was left alone, will step d

Re: [ceph-users] How do I start ceph jewel in CentOS?

2016-05-26 Thread Mikaël Guichard
Hi, Thanks Benjeman, It works. A little bit late, but this example could help somebody else. I use ceph.target to start a rados gateway on CentOS. To activate the service, I just do: cd /etc/systemd/system mkdir ceph.target.wants cd ceph.target.wants ln -s /usr/lib/systemd/system/ceph-radosgw@
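A sketch of the symlink approach described above, assuming a radosgw instance named "rgw.gateway" (the actual instance name is truncated in the excerpt):

    mkdir -p /etc/systemd/system/ceph.target.wants
    cd /etc/systemd/system/ceph.target.wants
    # instance name "rgw.gateway" is an assumption; substitute your own
    ln -s /usr/lib/systemd/system/ceph-radosgw@.service ceph-radosgw@rgw.gateway.service
    systemctl daemon-reload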

[ceph-users] Can't Start / Stop Ceph jewel under Centos 7.2

2016-05-26 Thread Hauke Homburg
Hello, I need to test my Icinga monitoring for Ceph, so I want to shut down the Ceph services on one server. But systemctl start ceph.target doesn't run. How can I stop the services? Thanks for the help Hauke - -- www.w3-creative.de www.westchat.de
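For reference, a hedged sketch of the systemd units Jewel ships on CentOS 7; stopping ceph.target should stop everything on the node, and the per-daemon targets and instances can be stopped individually (the OSD id below is a placeholder):

    systemctl stop ceph.target          # all ceph daemons on this node
    systemctl stop ceph-osd.target      # only the OSDs
    systemctl stop ceph-osd@0.service   # a single OSD; "0" is a placeholder id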

[ceph-users] symlink to journal not created as it should with ceph-deploy prepare. (jewel)

2016-05-26 Thread Stefan Eriksson
Hi, When we deploy new OSDs we see an issue where the journal symlink to the external path we provided with ceph-deploy is not created; instead, ceph-deploy creates a new local journal on the OSD itself. Here is the log: Running ceph: 10.2.1 on CentOS 7. ceph-deploy osd prepare ceph01-osd02:sd
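A quick way to see what ceph-deploy actually created; the expected target in the comment is an assumption (a correctly prepared OSD normally points at the external journal partition by partuuid):

    ls -l /var/lib/ceph/osd/ceph-*/journal
    # expected: journal -> /dev/disk/by-partuuid/<uuid of the external journal partition>
    # reported here instead: a plain local journal inside the OSD directory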

Re: [ceph-users] Jewel ubuntu release is half cooked

2016-05-26 Thread Ernst Pijper
Hi Andrei, Can you share the udev hack that you had to use? Currently, I add "/usr/sbin/ceph-disk activate-all" to /etc/rc.local to activate all OSDs at boot. After the first reboot after upgrading to Jewel, the journal disks are owned by ceph:ceph. Also, links are created in /etc/systemd/sys
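The rc.local workaround mentioned above, written out; the udev hack being asked about is not shown in this thread, so only the rc.local part is sketched:

    # /etc/rc.local -- activate all prepared OSDs at boot (pre-systemd workaround)
    /usr/sbin/ceph-disk activate-all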

[ceph-users] CoreOS Cluster of 7 machines and Ceph

2016-05-26 Thread EnDSgUy EnDSgUy
Hello All, I am looking for some help designing Ceph for a cluster of 7 machines running CoreOS with fleet and Docker. I am still thinking about the best approach. Has anybody done something similar and could advise based on their experience? The primary purpose is to be able to

Re: [ceph-users] jewel 10.2.1 lttng & rbdmap.service

2016-05-26 Thread Max Vernimmen
Hi Ken, Kefu, No, we did not have EPEL-7 enabled. We can add it; it would be the same for us as adding the EfficiOS repo, although slightly more reusable ☺ Am I correct in assuming that lttng is only used for debugging and/or development work? If that is true, is it perhaps possible to build ceph in such

Re: [ceph-users] Blocked ops, OSD consuming memory, hammer

2016-05-26 Thread Robert LeBlanc
I've seen something similar to this when bringing an OSD back into a cluster that has a lot of I/O that is "close" to the max performance of the drives. For Jewel, there is a "mon osd prime pg temp" option [0] which really helped reduce the huge memory usa
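A minimal sketch of enabling the option referenced as [0], assuming it is set in ceph.conf on the monitors:

    [mon]
    mon_osd_prime_pg_temp = true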

Re: [ceph-users] Can't Start / Stop Ceph jewel under Centos 7.2

2016-05-26 Thread Michael Kuriger
Did you update to ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)? This issue should have been resolved with the last update. (It was for us.) Michael Kuriger Sr. Unix Systems Engineer mk7...@yp.com | 818-649-7235 -Original Message- From: ceph-users [mailto:ceph

Re: [ceph-users] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

2016-05-26 Thread Albert.K.Chong (git.usca07.Newegg) 22201
Hi, Can anyone help on this topic? Albert From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Albert.K.Chong (git.usca07.Newegg) 22201 Sent: Wednesday, May 25, 2016 3:04 PM To: ceph-users@lists.ceph.com Subject: [ceph-users] ceph-disk: Error: No cluster conf found in /etc/

Re: [ceph-users] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

2016-05-26 Thread Albert.K.Chong (git.usca07.Newegg) 22201
I read this article more than three times. Every time I retried, I followed the purge/purgedata instructions. I even reinstalled all the VMs twice, and I still get stuck at the same step. -Original Message- From: Christian Balzer [mailto:ch...@gol.com] Sent: Wednesday, May 25, 2016 11:18 PM To:

Re: [ceph-users] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

2016-05-26 Thread Fulvio Galeazzi
Hallo, as I spent the whole afternoon on a similar issue... :-) Run purge (it will also remove the ceph packages; I am assuming you don't care much about the existing stuff), then on all nodes (mon/osd/admin) remove /var/lib/ceph/ with rm -rf, and on the OSD nodes make sure you mount all partitions, then re
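A sketch of that clean-up sequence run from the admin/deploy node; the host names are placeholders, and the rm -rf is run on each node as suggested above:

    ceph-deploy purge node1 node2 node3
    ceph-deploy purgedata node1 node2 node3
    ceph-deploy forgetkeys
    # then, on every mon/osd/admin node:
    rm -rf /var/lib/ceph/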

Re: [ceph-users] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

2016-05-26 Thread Michael Kuriger
Are you using an old ceph.conf with the original FSID from your first attempt (in your deploy directory)? Michael Kuriger Sr. Unix Systems Engineer mk7...@yp.com | 818-649-7235 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Albert.K.
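A hedged way to check for the mismatch being suggested; the deploy-directory path is an assumption:

    grep fsid ~/my-cluster/ceph.conf    # ceph.conf in the ceph-deploy directory
    grep fsid /etc/ceph/ceph.conf       # ceph.conf pushed to the node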

Re: [ceph-users] help removing an rbd image?

2016-05-26 Thread Kevan Rehm
Samuel, Back again. I converted my cluster to use 24 filestore OSDs, and ran the following test three times: rbd -p ssd_replica create --size 100G image1 rbd --pool ssd_replica bench-write --io-size 2M --io-threads 16 --io-total 100G --io-pattern seq image1 rbd -p ssd_replica rm image1 and in
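The test sequence from the message above, laid out one command per line for readability:

    rbd -p ssd_replica create --size 100G image1
    rbd --pool ssd_replica bench-write --io-size 2M --io-threads 16 \
        --io-total 100G --io-pattern seq image1
    rbd -p ssd_replica rm image1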

Re: [ceph-users] help removing an rbd image?

2016-05-26 Thread Samuel Just
IIRC, the rbd_directory isn't supposed to be removed, so that's fine. In summary, it works correctly with filestore, but not with bluestore? In that case, the next step is to file a bug. Ideally, you'd reproduce with only 3 osds with debugging (specified below) on all osds from cluster startup t
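The exact debug levels referred to as "(specified below)" are truncated out of this excerpt; the following is only a guess at the commonly requested set for this kind of report, placed in ceph.conf on the test OSDs before startup:

    [osd]
    debug osd = 20
    debug filestore = 20
    debug bluestore = 20
    debug ms = 1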

Re: [ceph-users] using jemalloc in trusty

2016-05-26 Thread Joshua M. Boniface
This looks to have done it; no indications of tcmalloc in my new packages. Thanks! Joshua M. Boniface Linux System Ærchitect - Boniface Labs Sigmentation fault: core dumped On 24/05/16 08:56 PM, Alexandre DERUMIER wrote: And if we still need to add explicit support, does anyone have any advi

Re: [ceph-users] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

2016-05-26 Thread Albert.K.Chong (git.usca07.Newegg) 22201
This time I really cleaned up everything with purge/purgedata and ensured there were no warning messages. I went over the quick start guide again. It still failed at the same step, but it sounds like it's related to permissions, as below. [node2][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster cep
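A hedged sketch of the ownership check often suggested for Jewel permission errors (Jewel daemons run as the "ceph" user by default); whether it applies to this failure is an assumption:

    ls -ld /var/lib/ceph /var/lib/ceph/osd
    chown -R ceph:ceph /var/lib/ceph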

[ceph-users] Meaning of the "host" parameter in the section [client.radosgw.{instance-name}] in ceph.conf?

2016-05-26 Thread Francois Lafont
Hi, a) My first question is perfectly summarized in the title. ;) Indeed, here is a typical section [client.radosgw.{instance-name}] in the ceph.conf of a radosgw server "rgw-01": -- # The instance-name is "gateway" here. [client.radosgw.gateway
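A hedged example of the kind of section the question refers to; only the section name and the server name "rgw-01" come from the mail, the remaining keys are illustrative:

    [client.radosgw.gateway]
        host = rgw-01
        keyring = /etc/ceph/ceph.client.radosgw.gateway.keyring
        rgw frontends = civetweb port=80
        log file = /var/log/ceph/client.radosgw.gateway.log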

Re: [ceph-users] Error 400 Bad Request when accessing Ceph

2016-05-26 Thread Andrey Ptashnik
Sean, rados.domain.com is a round-robin A-type DNS record that points to two rados gateway nodes, ceph-2.domain.com and ceph-3.domain.com. Below is the ceph config: [global] fsid = 0aacc440-efb2-4be8-8586-157cff765598 mon_initial_members = ceph-1, ceph-2, ceph-3 mon_host = 10.102.133.11,10.102
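An illustration of the round-robin record described above; the output is annotated by hand and the addresses are placeholders:

    $ dig +short rados.domain.com
    10.102.133.x    # ceph-2.domain.com
    10.102.133.y    # ceph-3.domain.com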

[ceph-users] What does the pull request label "cleanup" mean?

2016-05-26 Thread m13913886148
What does the pull request label "cleanup" mean?