Re: [ceph-users] Not able to achieve active+clean state

2014-07-23 Thread Pratik Rupala
Hi, As per Joe and Iban's suggestions, adding one more OSD makes everything fine in my setup, but only if the OSDs are made of directories as shown in the quick installation steps, not of disks. Is there any restriction when using disks as OSDs? Is a virtual disk OK to use for an OSD? Is 10 GB size of vi
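
A minimal sketch of the disk-based OSD flow with ceph-deploy from that era, for comparison with the directory-based quick start; the host name and device path below are placeholders, not taken from the original post:

    # Wipe any old partition table on the disk (destructive; device name is an example).
    ceph-deploy disk zap osd-node1:sdb

    # Prepare the whole disk as an OSD (creates data and journal partitions), then activate it.
    ceph-deploy osd prepare osd-node1:sdb
    ceph-deploy osd activate osd-node1:/dev/sdb1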

[ceph-users] rbd rm. Error: trim_object could not find coid

2014-07-23 Thread Irek Fasikhov
Hi, all. I ran into the following problem: one PG had the status inconsistent. I identified the RBD device it belonged to and deleted it, and now the OSD reports this error: cod 0'0 active+inconsistent snaptrimq=[15~1,89~1]] exit Started/Primary/Active/Recovering 0.025609 1 0.53 -8> 2014-07-23 12:03:13

Re: [ceph-users] Ceph and Infiniband

2014-07-23 Thread Andrei Mikhailovsky
Ricardo, Thought I'd share my testing results. I've been using IPoIB with ceph for quite some time now. I've got QDR osd/mon/client servers serving rbd images to a kvm hypervisor. I've done some performance testing using both rados and guest vm benchmarks while running the last three stable ve

[ceph-users] MON segfaulting when setting a crush ruleset to a pool (firefly 0.80.4)

2014-07-23 Thread Olivier DELHOMME
Hello, I'm running a test cluster (the mons and osds are Debian 7 with the 3.2.57-3+deb7u2 kernel). The client is Debian 7 with a 3.15.4 kernel that I compiled myself. The cluster has 3 monitors and 16 osd servers. I created a pool (periph) and used it a bit and then I decided to create some buckets
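
For context, the kind of sequence involved in pointing a pool at a new CRUSH rule is sketched below; the bucket name, rule name and ruleset id are placeholders, not the exact commands from this report:

    # Create a new root bucket and move a host under it (names are hypothetical).
    ceph osd crush add-bucket ssd root
    ceph osd crush move host1 root=ssd

    # Create a simple rule selecting OSDs under that root, then dump the rules to find its ruleset id.
    ceph osd crush rule create-simple ssd_rule ssd host
    ceph osd crush rule dump

    # Assign the ruleset to the pool (the step this thread is about; the id 1 is an example).
    ceph osd pool set periph crush_ruleset 1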

[ceph-users] ceph features monitored by nagios

2014-07-23 Thread pragya jain
Hi all, I am studying Nagios for monitoring ceph features. Different Nagios plugins monitor ceph cluster health, osd status, monitor status etc. My questions are: * Does Nagios monitor ceph for cluster, pool and each PG for  - CPU utilization - memory utilization - Network Utilization - tot

Re: [ceph-users] Ceph and Infiniband

2014-07-23 Thread Mark Nelson
On 07/23/2014 03:54 AM, Andrei Mikhailovsky wrote: Ricardo, Thought to share my testing results. I've been using IPoIB with ceph for quite some time now. I've got QDR osd/mon/client servers to serve rbd images to kvm hypervisor. I've done some performance testing using both rados and guest vm b

Re: [ceph-users] ceph features monitored by nagios

2014-07-23 Thread Wolfgang Hennerbichler
Nagios can monitor anything you can script. If there isn’t a plugin for it, write it yourself, it’s really not hard. I’d go for icinga by the way, which is more actively maintained than nagios. On Jul 23, 2014, at 3:07 PM, pragya jain wrote: > Hi all, > > I am studying nagios for monitoring

[ceph-users] ceph errors

2014-07-23 Thread Jay Townsend
Hi Everybody, We are having ceph issues where the cluster recovers, then hangs, then crashes; we are also now getting btrfs bcache errors and are stuck on what to do. We have one ceph node up in the cluster, but when we try to bring another one up it goes down again, and we really are lost. Any ideas of w

Re: [ceph-users] ceph features monitored by nagios

2014-07-23 Thread Scottix
We use Zabbix, but the same concept applies to writing your own scripts. We take advantage of the command ceph -s --format=json 2>/dev/null; stderr comes up with some noise sometimes, so we filter that out. On Wed, Jul 23, 2014 at 6:32 AM, Wolfgang Hennerbichler wrote: > Nagios can monitor anythin
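
A minimal sketch of this kind of check script, assuming ceph -s --format=json works from the monitoring host and that the JSON carries the overall health under health/overall_status (the exact layout can vary by release):

    #!/bin/sh
    # Grab cluster status as JSON, discarding the stderr noise mentioned above.
    STATUS=$(ceph -s --format=json 2>/dev/null)

    # Extract the overall health string (HEALTH_OK / HEALTH_WARN / HEALTH_ERR).
    HEALTH=$(echo "$STATUS" | python -c 'import json,sys; print(json.load(sys.stdin)["health"]["overall_status"])')

    # Map it to Nagios/Icinga exit codes: 0=OK, 1=WARNING, 2=CRITICAL.
    case "$HEALTH" in
      HEALTH_OK)   echo "OK - $HEALTH";       exit 0 ;;
      HEALTH_WARN) echo "WARNING - $HEALTH";  exit 1 ;;
      *)           echo "CRITICAL - $HEALTH"; exit 2 ;;
    esac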

[ceph-users] rbd import-diff + erasure coding

2014-07-23 Thread Olivier Bonvalet
Hi, from my tests I can't import a snapshot from a replicated pool (in cluster1) to an erasure-coded pool (in cluster2). Is it a known limitation? A temporary one? Or did I make a mistake somewhere? Cluster1 (aka production) is running Ceph 0.67.9, and cluster2 (aka backup) is runnin

Re: [ceph-users] rbd import-diff + erasure coding

2014-07-23 Thread Olivier Bonvalet
Ok, I just found this message from Gregory Farnum : « You can't use erasure coded pools directly with RBD. They're only suitable for use with RGW or as the base pool for a replicated cache pool, and you need to be very careful/specific with the configuration. I believe this is well-documented, so c
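
To illustrate the supported layout Greg describes (an erasure-coded pool used only as the base tier behind a replicated cache pool), a hedged sketch with example pool names and PG counts, Firefly-era syntax assumed:

    # Erasure-coded base pool plus a replicated pool to serve as its cache.
    ceph osd pool create ecbase 128 128 erasure
    ceph osd pool create cache 128 128

    # Put the replicated pool in front of the EC pool as a writeback cache tier.
    ceph osd tier add ecbase cache
    ceph osd tier cache-mode cache writeback
    ceph osd tier set-overlay ecbase cache

    # Clients such as rbd then address ecbase; I/O is redirected through the cache tier.
    # (A real setup also needs hit_set and cache sizing options, omitted here.)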

Re: [ceph-users] problem in ceph installation

2014-07-23 Thread John Wilkins
The 5-minute quick start was deprecated quite some time ago. Use http://ceph.com/docs/master/start/ On Tue, Jul 22, 2014 at 1:20 AM, Vincenzo Pii wrote: > Ceph packages are already in Ubuntu 14.04 repositories, no need to add > more into any sources.list. > So, undo your changes there and just proc

Re: [ceph-users] MON segfaulting when setting a crush ruleset to a pool (firefly 0.80.4)

2014-07-23 Thread Joao Eduardo Luis
Hey Olivier, On 07/23/2014 02:06 PM, Olivier DELHOMME wrote: Hello, I'm running a test cluster (mon and osd are debian 7 with 3.2.57-3+deb7u2 kernel). The client is a debian 7 with a 3.15.4 kernel that I compiled myself. The cluster has 3 monitors and 16 osd servers. I created a pool (periph)

Re: [ceph-users] question about FileStore read()/write()

2014-07-23 Thread Gregory Farnum
Keep in mind that this coordination is largely happening above the FileStore layer, so you are indeed not seeing any code within the FileStore to support it. :) But operations within the OSD are ordered on a per-PG basis, and while in-progress writes can overlap, a read will be blocked until the wr

[ceph-users] Which OS for fresh install?

2014-07-23 Thread Brian Lovett
I'm evaluating ceph for our new private and public cloud environment. I have a "working" ceph cluster running on CentOS 6.5, but have had a heck of a time figuring out how to get rbd support to connect to CloudStack. Today I found out that the default kernel is too old, and while I could compile

Re: [ceph-users] Which OS for fresh install?

2014-07-23 Thread Tyler Wilson
Brian, Please see http://ceph.com/docs/master/start/os-recommendations/ I would go with anything with a 'C' rating matching the version of Ceph that you will want to install. On Wed, Jul 23, 2014 at 11:12 AM, Brian Lovett wrote: > I'm evaluating ceph for our new private and public cloud enviro

[ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-23 Thread Steve Anthony
Hello, Recently I've started seeing very slow read speeds from the rbd images I have mounted. After some analysis, I suspect the root cause is related to krbd; if I run the rados benchmark, I see read bandwidth in the 400-600MB/s range, however if I attempt to read directly from the block device wi

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-23 Thread Sage Weil
On Wed, 23 Jul 2014, Steve Anthony wrote: > Hello, > > Recently I've started seeing very slow read speeds from the rbd images I > have mounted. After some analysis, I suspect the root cause is related > to krbd; if I run the rados benchmark, I see read bandwith in the > 400-600MB/s range, however

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-23 Thread Steve Anthony
Ah, ok. That makes sense. With one concurrent operation I see numbers more in line with the read speeds I'm seeing from the filesystems on the rbd images. # rados -p bench bench 300 seq --no-cleanup -t 1 Total time run: 300.114589 Total reads made: 2795 Read size: 4194304 Ban
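
For reference, those figures work out to roughly 2795 reads x 4 MiB / 300 s, i.e. about 37 MB/s with a single outstanding read, versus the 400-600 MB/s reported earlier for the default, fully parallel rados bench.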

Re: [ceph-users] Which OS for fresh install?

2014-07-23 Thread Bachelder, Kurt
Using elrepo (http://elrepo.org/tiki/tiki-index.php) by adding it to your yum repositories is much simpler than compiling your own kernel - Once you add the repository: 1.) Install the kernel: yum install <kernel>, where <kernel> can be: kernel-lt (long-term support kernel - http:/

Re: [ceph-users] Which OS for fresh install?

2014-07-23 Thread Bachelder, Kurt
To be clear - this is on CentOS 6.5 :) -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Bachelder, Kurt Sent: Wednesday, July 23, 2014 5:10 PM To: Brian Lovett; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Which OS for fresh install? Usin

Re: [ceph-users] Which OS for fresh install?

2014-07-23 Thread Dimitri Maziuk
On 07/23/2014 04:09 PM, Bachelder, Kurt wrote: > 2.) update your grub.conf to boot to the appropriate image (default=0, or > whatever kernel in the list you want to boot from). Actually, edit /etc/sysconfig/kernel, set DEFAULTKERNEL=kernel-lt before installing it. -- Dimitri Maziuk Programmer/
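
Pulling this sub-thread together, a hedged sketch for CentOS 6.5; the elrepo release RPM version below is a placeholder, so check elrepo.org for the current one:

    # Add the elrepo repository (release RPM version/URL is an example, verify on elrepo.org).
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

    # Per Dimitri's note, make the new kernel flavour the default before installing it.
    sed -i 's/^DEFAULTKERNEL=.*/DEFAULTKERNEL=kernel-lt/' /etc/sysconfig/kernel

    # Install the long-term support kernel from the (disabled-by-default) elrepo-kernel repo.
    yum --enablerepo=elrepo-kernel install kernel-lt

    # Check /boot/grub/grub.conf and its default= entry afterwards, then reboot into the new kernel.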

Re: [ceph-users] ceph features monitored by nagios

2014-07-23 Thread Ricardo Rocha
Hi. On Thu, Jul 24, 2014 at 1:07 AM, pragya jain wrote: > Hi all, > > I am studying Nagios for monitoring ceph features. > > Different Nagios plugins monitor ceph cluster health, osd status, > monitor status etc. We use these: https://github.com/rochaporto/ceph-nagios-plugins and sent patch

[ceph-users] how to deploy standalone radosgw with Firefly 0.80.4 on Debian

2014-07-23 Thread debian Only
Dear all, I gather that Firefly 0.80.4 has a new feature that removes the need to install apache and fastcgi, am I right? *Standalone radosgw (experimental): The radosgw process can now run in a standalone mode without an apache (or similar) web server or fastcgi. This simplifies deployment and can improve perfo
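
As far as I understand the experimental standalone mode in 0.80.x, radosgw embeds a civetweb frontend selected via the rgw frontends option; a hedged sketch follows, with the client section name and port as examples rather than a tested recipe:

    # In /etc/ceph/ceph.conf, under the gateway's client section (section name is an example):
    #   [client.radosgw.gateway]
    #     host = gw-node
    #     keyring = /etc/ceph/ceph.client.radosgw.keyring
    #     rgw frontends = civetweb port=7480
    #
    # Then start the gateway directly, with no apache/fastcgi in front:
    radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway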

[ceph-users] blocked requests question

2014-07-23 Thread
Hello, I am running a ceph cluster (RBD) in a production environment hosting 200 VMs. Under normal circumstances ceph's performance is quite good, but when I delete a snapshot or image the cluster shows a lot of blocked requests (generally more than 1000), and then the whole cluster hav