[ceph-users] CentOS7 librbd1-devel problem.

2015-02-17 Thread Leszek Master
Hello all. I have to install qemu on one of my ceph nodes to test some things. I added a ceph-giant repository there and connected the node to the ceph cluster. The problem is that I need to build qemu from source with rbd support and there is no librbd1-devel in the ceph repository. Also in the epel I have
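
A minimal sketch of building qemu against librbd once the development headers are in place (package names and the install step are assumptions, not confirmed by the thread):

    # headers may ship as librbd1-devel or inside ceph-devel, depending on the repo
    sudo yum install -y librbd1-devel librados2-devel
    # configure and build qemu with rbd support enabled
    ./configure --enable-rbd
    make -j"$(nproc)"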

[ceph-users] Power failure recovery woes

2015-02-17 Thread Jeff
Hi, We had a nasty power failure yesterday and even with UPS's our small (5 node, 12 OSD) cluster is having problems recovering. We are running ceph 0.87. 3 of our OSD's are down consistently (others stop and are restartable, but our cluster is so slow that almost everything we do times out).

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Udo Lembke
Hi Jeff, is the osd /var/lib/ceph/osd/ceph-2 mounted? If not, does it help if you mount the osd and start it with "service ceph start osd.2"? Udo On 17.02.2015 09:54, Jeff wrote: > Hi, > > We had a nasty power failure yesterday and even with UPS's our small (5 > node, 12 OSD) cluster is havi
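
A minimal sketch of the check Udo suggests (the OSD id comes from the thread; the device is an assumption):

    # is the OSD data directory mounted?
    mount | grep /var/lib/ceph/osd/ceph-2
    # if not, mount it (device and filesystem are assumptions) and start the daemon
    sudo mount /dev/sda4 /var/lib/ceph/osd/ceph-2
    sudo service ceph start osd.2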

Re: [ceph-users] Dedicated disks for monitor and mds?

2015-02-17 Thread Francois Lafont
Hello, On 17/02/2015 05:55, Christian Balzer wrote: >> 1. I have read "10 GB per daemon for the monitor". But is >> I/O disk performance important for a monitor? Is it unreasonable >> to put the working directory of the monitor in the same partition >> of the root filesystem (ie /)? >> > Yes, mo

Re: [ceph-users] Dedicated disks for monitor and mds?

2015-02-17 Thread Francois Lafont
Hi, On 17/02/2015 11:15, John Spray wrote: > The MDS does not use local storage at all -- CephFS metadata is stored in > RADOS (i.e. the MDS stores data via the OSDs). Ah ok. So, consequently, I can put the working directory of the mds (ie /var/lib/ceph/mds/ceph-$id/) absolutely everywhere,
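
A hedged ceph.conf sketch of relocating that working directory (daemon name and path are illustrative only; since the metadata itself lives in RADOS, the directory essentially just holds small local files such as the keyring):

    [mds.a]
        host = mds-host
        ; local working directory for this MDS; CephFS metadata stays in RADOS
        mds data = /srv/ceph/mds.a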

Re: [ceph-users] "store is getting too big" on monitors

2015-02-17 Thread Mohamed Pakkeer
Hi Joao, We followed your instructions to create the store dump: ceph-kvstore-tool /var/lib/ceph/mon/ceph-FOO/store.db list > store.dump. For the above store's location, let's call it $STORE: for m in osdmap pgmap; do for k in first_committed last_committed; do ceph-kvstore-tool $STORE get $m $
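
For reference, a hedged completion of the loop the preview cuts off (the closing lines are an assumption based on the visible part):

    STORE=/var/lib/ceph/mon/ceph-FOO/store.db
    ceph-kvstore-tool $STORE list > store.dump
    for m in osdmap pgmap; do
      for k in first_committed last_committed; do
        ceph-kvstore-tool $STORE get $m $k
      done
    done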

[ceph-users] My PG is UP and Acting, yet it is unclean

2015-02-17 Thread B L
Hi All, I have a group of PGs that are up and acting, yet they are not clean, and they are causing the cluster to be in a warning mode, i.e. not healthy. This is my cluster status: $ ceph -s cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 203 pgs stuck unclean; recovery 6/132 obje
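
A few hedged diagnostic commands for a state like this (generic, not taken from the thread; the PG id is a placeholder):

    # list the stuck PGs and the OSDs they map to
    ceph health detail
    ceph pg dump_stuck unclean
    # query one of the reported PGs for details
    ceph pg 0.1 query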

Re: [ceph-users] Introducing "Learning Ceph" : The First ever Book on Ceph

2015-02-17 Thread Vivek Varghese Cherian
On Fri, Feb 6, 2015 at 4:23 AM, Karan Singh wrote: > Hello Community Members > > I am happy to introduce the first book on Ceph with the title "*Learning > Ceph*". > > Me and many folks from the publishing house together with technical > reviewers spent several months to get this book compiled an

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Jeff
Udo, Yes, the osd is mounted: /dev/sda4 963605972 260295676 703310296 28% /var/lib/ceph/osd/ceph-2 Thanks, Jeff Original Message Subject: Re: [ceph-users] Power failure recovery woes Date: 2015-02-17 04:23 From: Udo Lembke To: Jeff , ceph-users@lists.ceph

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Jeff
Some additional information/questions: Here is the output of "ceph osd tree". Some of the "down" OSD's are actually running, but are still marked "down". For example osd.1: root 30158 8.6 12.7 1542860 781288 ? Ssl 07:47 4:40 /usr/bin/ceph-osd --cluster=ceph -i 0 -f Is there any way
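
A hedged sketch of how one might investigate an OSD that is running but marked down (generic commands, not from the thread; adjust the id):

    # what does the cluster think of the OSD?
    ceph osd tree
    # ask the daemon itself over its admin socket
    sudo ceph daemon osd.1 status
    # restart it and watch whether it comes back up
    sudo service ceph restart osd.1
    ceph -w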

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Michal Kozanecki
Hi Jeff, What type/model of drives are you using as OSDs? Any journals? If so, what model? What does your ceph.conf look like? What sort of load is on the cluster (if it's still "online")? What distro/version? Firewall rules set properly? Michal Kozanecki -Original Message- From: ceph-us

Re: [ceph-users] Power failure recovery woes

2015-02-17 Thread Michal Kozanecki
Oh, one more thing: the OSD partitions/drives, how did they get mounted (mount options)? -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Michal Kozanecki Sent: February-17-15 9:27 AM To: Jeff; ceph-users@lists.ceph.com Subject: Re: [ceph-use

Re: [ceph-users] Ceph Supermicro hardware recommendation

2015-02-17 Thread Mike
On 17.02.2015 04:11, Christian Balzer wrote: > > Hello, > > re-adding the mailing list. > > On Mon, 16 Feb 2015 17:54:01 +0300 Mike wrote: > >> Hello >> >> On 05.02.2015 08:35, Christian Balzer wrote: >>> >>> Hello, >>> > >> LSI 2308 IT >> 2 x SSD Intel DC S3700 400GB >> 2 x SSD Intel

Re: [ceph-users] CentOS7 librbd1-devel problem.

2015-02-17 Thread Ken Dreyer
On 02/17/2015 01:07 AM, Leszek Master wrote: > Hello all. I have to install qemu on one of my ceph nodes to test > some things. I added a ceph-giant repository there and connected it to > the ceph cluster. The problem is that I need to build qemu from source > with rbd support and there is no librbd1-

Re: [ceph-users] Dedicated disks for monitor and mds?

2015-02-17 Thread John Spray
- Original Message - > From: "Francois Lafont" > To: ceph-users@lists.ceph.com > Sent: Monday, February 16, 2015 4:13:40 PM > Subject: [ceph-users] Dedicated disks for monitor and mds? > 1. I have read "10 GB per daemon for the monitor". But is > I/O disk performance important for a moni

Re: [ceph-users] Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison

2015-02-17 Thread Irek Fasikhov
Mark, very very good! 2015-02-17 20:37 GMT+03:00 Mark Nelson : > Hi All, > > I wrote up a short document describing some tests I ran recently to look > at how SSD backed OSD performance has changed across our LTS releases. This > is just looking at RADOS performance and not RBD or RGW. It also d

[ceph-users] CephFS and data locality?

2015-02-17 Thread Jake Kugel
Hi, I'm just starting to look at Ceph and CephFS. I see that Ceph supports dynamic object interfaces to allow some processing of object data on the same node where the data is stored [1]. This might be a naive question, but is there any way to get data locality when using CephFS? For example,
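
Not an answer from the thread, but for context, a hedged way to see where a given RADOS object actually lands (pool name, object name, and OSD id are placeholders; CephFS stripes file data into objects like these):

    # map an object to its PG and the OSDs currently serving it
    ceph osd map cephfs_data 10000000000.00000000
    # locate the host that carries a given OSD
    ceph osd find 3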

Re: [ceph-users] CephFS and data locality?

2015-02-17 Thread Gregory Farnum
On Tue, Feb 17, 2015 at 10:36 AM, Jake Kugel wrote: > Hi, > > I'm just starting to look at Ceph and CephFS. I see that Ceph supports > dynamic object interfaces to allow some processing of object data on the > same node where the data is stored [1]. This might be a naive question, > but is there

[ceph-users] Unexpectedly low number of concurrent backfills

2015-02-17 Thread Florian Haas
Hello everyone, I'm seeing some OSD behavior that I consider unexpected; perhaps someone can shed some insight. Ceph giant (0.87.0), osd max backfills and osd recovery max active both set to 1. Please take a moment to look at the following "ceph health detail" screen dump: HEALTH_WARN 14 pgs ba
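
For reference, a hedged sketch of how those two throttles are typically applied at runtime (values as stated in the post; the OSD id in the check is a placeholder):

    # push the settings to all OSDs without restarting them
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # confirm on one OSD via its admin socket
    sudo ceph daemon osd.0 config show | grep -E 'osd_max_backfills|osd_recovery_max_active'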

[ceph-users] Help needed

2015-02-17 Thread SUNDAY A. OLUTAYO
I am setting up a ceph cluster on Ubuntu 14.04.1 LTS. All went well without error, but the "ceph status" after "ceph-deploy mon create-initial" indicates otherwise. This is the error message: monclient[hunting]: Error: missing keyring, cannot use cephx for authentication librados: client.admin in

Re: [ceph-users] Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison

2015-02-17 Thread Stephen Hindle
I was wondering what the 'CBT' tool is? Google is useless for that acronym... Thanks! Steve On Tue, Feb 17, 2015 at 10:37 AM, Mark Nelson wrote: > Hi All, > > I wrote up a short document describing some tests I ran recently to look at > how SSD backed OSD performance has changed across our LTS

Re: [ceph-users] Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison

2015-02-17 Thread Karan Singh
Thanks Mark, for a superb explanation. This is indeed very useful. Karan Singh Systems Specialist, Storage Platforms CSC - IT Center for Science, Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland mobile: +358 503 812758 tel.

Re: [ceph-users] Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison

2015-02-17 Thread Mark Nelson
Hi Stephen, It's a benchmark automation tool we wrote that builds a ceph cluster and then runs benchmarks against it. It's still pretty rough (no real error checking, no documentation, etc). We have some partners that are interested in using it too and I'd like to make it useful for the co

Re: [ceph-users] Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison

2015-02-17 Thread Stephen Hindle
Awesome! Thanks Much! On Tue, Feb 17, 2015 at 1:28 PM, Mark Nelson wrote: > Hi Stephen, > > It's a benchmark automation tool we wrote that builds a ceph cluster and > then runs benchmarks against it. It's still pretty rough (no real error > checking, no documentation, etc). We have some partn

Re: [ceph-users] Help needed

2015-02-17 Thread Weeks, Jacob (RIS-BCT)
There should be a *.client.admin.keyring file in the directory you were in while you ran ceph-deploy. Try copying that file to /etc/ceph/ Thanks, Jacob From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of SUNDAY A. OLUTAYO Sent: Tuesday, February 17, 2015 3:39 PM To: ceph-

Re: [ceph-users] Introducing "Learning Ceph" : The First ever Book on Ceph

2015-02-17 Thread Federico Lucifredi
To be exact, the platform used throughout is CentOS 6.4... I am reading my copy right now :) Best -F - Original Message - From: "SUNDAY A. OLUTAYO" To: "Andrei Mikhailovsky" Cc: ceph-users@lists.ceph.com Sent: Monday, February 16, 2015 3:28:45 AM Subject: Re: [ceph-users] Introducing

Re: [ceph-users] Help needed

2015-02-17 Thread SUNDAY A. OLUTAYO
I did that but the problem still persists. Thanks, Sunday Olutayo - Original Message - From: "Jacob Weeks (RIS-BCT)" To: "SUNDAY A. OLUTAYO" , ceph-users@lists.ceph.com, ceph-de...@lists.ceph.com, maintain...@lists.ceph.com Sent: Tuesday, February 17, 2015 9:57:11 PM Subject: RE:

Re: [ceph-users] Help needed

2015-02-17 Thread Alan Johnson
Did you set permissions with "sudo chmod +r /etc/ceph/ceph.client.admin.keyring"? Thx Alan From: ceph-users on behalf of SUNDAY A. OLUTAYO Sent: Tuesday, February 17, 2015 4:59 PM To: Jacob Weeks (RIS-BCT) Cc: ceph-de...@lists.ceph.com; ceph-users@lists.ceph.c
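
A minimal sketch of the fix Jacob and Alan describe, run from the ceph-deploy working directory (paths assumed):

    # copy the admin keyring generated by ceph-deploy and make it readable
    sudo cp ceph.client.admin.keyring /etc/ceph/
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring
    ceph status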

Re: [ceph-users] Help needed

2015-02-17 Thread SUNDAY A. OLUTAYO
I appreciate you all. Yes, this fixed it. Thanks, Sunday Olutayo - Original Message - From: "Alan Johnson" To: "SUNDAY A. OLUTAYO" , "Jacob Weeks (RIS-BCT)" Cc: ceph-de...@lists.ceph.com, ceph-users@lists.ceph.com, maintain...@lists.ceph.com Sent: Tuesday, February 17, 2015 10:

Re: [ceph-users] Unexpectedly low number of concurrent backfills

2015-02-17 Thread Gregory Farnum
On Tue, Feb 17, 2015 at 12:09 PM, Florian Haas wrote: > Hello everyone, > > I'm seeing some OSD behavior that I consider unexpected; perhaps > someone can shed some insight. > > Ceph giant (0.87.0), osd max backfills and osd recovery max active > both set to 1. > > Please take a moment to look at

[ceph-users] Ceph Block Device

2015-02-17 Thread Garg, Pankaj
Hi, I have a Ceph cluster and I am trying to create a block device. I execute the following command, and get errors: sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open moddep file '/lib/modul

[ceph-users] Happy New Chinese Year!

2015-02-17 Thread xmdxcxz
hi, everyone: Happy New Chinese Year! — Sent via Mailbox

Re: [ceph-users] Happy New Chinese Year!

2015-02-17 Thread Mark Nelson
Xīnnián kuàilè! (Happy New Year!) Mark On 02/17/2015 06:23 PM, xmdx...@gmail.com wrote: hi, everyone: Happy New Chinese Year! — Sent via Mailbox

Re: [ceph-users] Ceph Block Device

2015-02-17 Thread Brad Hubbard
On 02/18/2015 09:56 AM, Garg, Pankaj wrote: Hi, I have a Ceph cluster and I am trying to create a block device. I execute the following command, and get errors: sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_m

Re: [ceph-users] Ceph Block Device

2015-02-17 Thread Garg, Pankaj
Hi Brad, This is Ubuntu 14.04, running on ARM. /lib/modules/3.18.0-02094-gab62ac9/modules.dep.bin doesn't exist. The "rmmod rbd" command says "rmmod: ERROR: Module rbd is not currently loaded". Running as root doesn't make any difference; I was running as sudo anyway. Thanks Pankaj -Original Mes

Re: [ceph-users] Ceph Block Device

2015-02-17 Thread Brad Hubbard
On 02/18/2015 11:48 AM, Garg, Pankaj wrote: libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open moddep file Try "sudo depmod" and then run your modprobe again. This seems more like an OS issue than a Ceph-specific issue. Cheers, Brad
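
A hedged sketch of that recovery path (depmod is what rebuilds modules.dep; the kernel version string is taken from the thread, and the rbd module may simply not be built for this custom ARM kernel):

    # rebuild the module dependency files for the running kernel
    sudo depmod -a 3.18.0-02094-gab62ac9
    # load the rbd module and retry the mapping
    sudo modprobe rbd
    sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring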

[ceph-users] ceph-giant installation error on centos 6.6

2015-02-17 Thread Wenxiao He
Hello, I need some help as I am getting package dependency errors when trying to install ceph-giant on centos 6.6. See below for repo files and also the yum install output. # lsb_release -a LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4
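
One hedged possibility for dependency errors like this on CentOS 6 (not confirmed as the resolution in this thread): make sure EPEL is enabled and that the ceph repo takes priority.

    sudo yum install -y epel-release yum-plugin-priorities
    # in /etc/yum.repos.d/ceph.repo, give the ceph repo a priority, e.g.:
    #   [ceph]
    #   baseurl=http://ceph.com/rpm-giant/el6/x86_64
    #   priority=1
    sudo yum clean all && sudo yum install -y ceph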

Re: [ceph-users] Unexpectedly low number of concurrent backfills

2015-02-17 Thread Florian Haas
On Tue, Feb 17, 2015 at 11:19 PM, Gregory Farnum wrote: > On Tue, Feb 17, 2015 at 12:09 PM, Florian Haas wrote: >> Hello everyone, >> >> I'm seeing some OSD behavior that I consider unexpected; perhaps >> someone can shed some insight. >> >> Ceph giant (0.87.0), osd max backfills and osd recovery

Re: [ceph-users] Unexpectedly low number of concurrent backfills

2015-02-17 Thread Gregory Farnum
On Tue, Feb 17, 2015 at 9:48 PM, Florian Haas wrote: > On Tue, Feb 17, 2015 at 11:19 PM, Gregory Farnum wrote: >> On Tue, Feb 17, 2015 at 12:09 PM, Florian Haas wrote: >>> Hello everyone, >>> >>> I'm seeing some OSD behavior that I consider unexpected; perhaps >>> someone can shed some insight.

Re: [ceph-users] ceph-giant installation error on centos 6.6

2015-02-17 Thread Brad Hubbard
On 02/18/2015 12:43 PM, Wenxiao He wrote: Hello, I need some help as I am getting package dependency errors when trying to install ceph-giant on centos 6.6. See below for repo files and also the yum install output. ---> Package python-imaging.x86_64 0:1.1.6-19.el6 will be installed --> Fi