Re: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

2014-06-03 Thread Haomai Wang
I don't know the actual size of "small io", or which Ceph version you used. But I think it's possible that KeyValueStore has only half the performance of FileStore at small IO sizes. A new config value that lets users tune this will be introduced and may help. All in all, maybe you could tell m

[ceph-users] Number of PGs with multiple pools

2014-06-03 Thread Udo Lembke
Hi all, I know the formula ( num osds * 100 / replica ) for pg_num and pgp_num (rounded up to the next power of 2). But does something change with two (or three) active pools? E.g. we have two pools which should each have a pg_num of 4096. Should I use 4096, or 2048 because there are two pools? best reg
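The sizing rule quoted above can be sketched in a few lines. A minimal illustration, assuming the (num osds * 100 / replica) rule from the post with straightforward power-of-two rounding (whether to divide the result among pools is exactly the open question of this thread):

```python
def pg_count(num_osds: int, replica: int, per_osd: int = 100) -> int:
    """Rough pg_num per the (num_osds * per_osd / replica) rule,
    rounded up to the next power of two."""
    raw = num_osds * per_osd // replica
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# e.g. 120 OSDs, 3 replicas: raw = 4000, next power of two = 4096
print(pg_count(120, 3))
```

The `per_osd` knob is hypothetical, only there to make the 100-PGs-per-OSD assumption explicit.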

[ceph-users] one monitor out of quorum after upgrade

2014-06-03 Thread Steffen Thorhauer
Hi, I'm in the process of upgrading my ceph cluster from emperor to firefly. After upgrading my 3 mons, one is out of quorum. ceph health detail HEALTH_WARN 1 mons down, quorum 0,2 u124-11,u124-13 mon.u124-12 (rank 1) addr 10.37.124.12:6789/0 is down (out of quorum) I have tons of followin
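A hedged checklist of commands for a mon that drops out of quorum after an upgrade (mon id `u124-12` taken from the health output above; the restart line assumes Ubuntu's upstart jobs of that era, adjust to your init system):

```shell
ceph health detail                          # which mon is out of quorum
ceph daemon mon.u124-12 mon_status          # ask the mon via its admin socket
tail -f /var/log/ceph/ceph-mon.u124-12.log  # look for election / store errors
restart ceph-mon id=u124-12                 # upstart syntax on Ubuntu 13.x
```

These need to run on the affected monitor node; they are a troubleshooting sketch, not a guaranteed fix.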

Re: [ceph-users] one monitor out of quorum after upgrade

2014-06-03 Thread Steffen Thorhauer
On 06/03/2014 09:19 AM, Steffen Thorhauer wrote: Hi, I'm in the process of upgrading my ceph cluster from emperor to firefly. After upgrading my 3 mons, one is out of quorum. ceph health detail HEALTH_WARN 1 mons down, quorum 0,2 u124-11,u124-13 mon.u124-12 (rank 1) addr 10.37.124.12:6789/

[ceph-users] OSD server alternatives to choose

2014-06-03 Thread Benjamin Somhegyi
Hi, We are at the end of the process of designing and purchasing storage to provide a Ceph based backend for VM images, VM boot (ephemeral) disks, persistent volumes (and possibly object storage) for our future OpenStack cloud. We considered many options and chose to prefer commodity storage s

[ceph-users] Fwd: Re: Experiences with Ceph at the June'14 issue of USENIX ;login:

2014-06-03 Thread Constantinos Venetsanopoulos
Forwarding to ceph-users since the thread started there, so that we have everything in a single place. Original Message Subject: Re: Experiences with Ceph at the June'14 issue of USENIX ;login: Date: Tue, 03 Jun 2014 12:12:12 +0300 From: Constantinos Venetsanopoulos

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Robert van Leeuwen
> We are at the end of the process of designing and purchasing storage to > provide Ceph based backend for VM images, VM boot (ephemeral) disks, > persistent volumes (and possibly object storage) for our future Openstack > cloud. > We considered many options and we chose to prefer commodity sto

[ceph-users] crush-ruleset parameter erasure vs replicated

2014-06-03 Thread Kenneth Waegeman
Hi, In the documentation about creating pools (http://ceph.com/docs/master/rados/operations/pools/), I saw this: {crush_ruleset=ruleset} Description: For erasure pools only. Set the name of the CRUSH ruleset. It must be an existing ruleset matching the requirements of the underlying era

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Christian Balzer
Hello, you are indeed facing the problem of balancing density (and with that cost, though really dense storage pods get more expensive again) against performance. I would definitely rule out 3) for the reason you give, and 3.extra for the reason Robert gives: if one of those nodes crashes, yo

Re: [ceph-users] crush-ruleset parameter erasure vs replicated

2014-06-03 Thread Loic Dachary
Hi Kenneth, The documentation needs to be updated; I'll do that today. To set the crush ruleset for a pool you can use http://ceph.com/docs/master/rados/operations/pools/#set-pool-values Cheers On 03/06/2014 11:59, Kenneth Waegeman wrote: > Hi, > > In the documentation about creating pool
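The set-pool-values pointer boils down to a single CLI call. A usage sketch with firefly-era syntax (pool names `mypool`/`ecpool`, ruleset id `1`, and PG counts are placeholders):

```shell
# Set the CRUSH ruleset of an existing pool:
ceph osd pool set mypool crush_ruleset 1

# For erasure pools the ruleset normally comes from the
# erasure-code profile at creation time:
ceph osd pool create ecpool 4096 4096 erasure default
```

Note that `crush_ruleset` was later renamed (`crush_rule` in modern releases); the spelling above matches the firefly docs being discussed.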

Re: [ceph-users] mellanox SX1012 ethernet|infiniband switch, does somebody use it for ceph?

2014-06-03 Thread Cedric Lemarchand
On 03/06/2014 05:47, Alexandre DERUMIER wrote: > I just found this: > > http://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf > > Good to see that Ceph is starting to be tested by hardware vendors :) > > The whitepaper includes radosbench and fio results Very

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Benjamin Somhegyi
Hello Robert & Christian, First, thank you for the general considerations; 3 and 3.extra have been ruled out. > A simple way to make 1) and 2) cheaper is to use AMD CPUs, they will do > just fine at half the price with these loads. > If you're that tight on budget, 64GB RAM will do fine, too. >

[ceph-users] Issues related to Ceph (firefly)

2014-06-03 Thread Sherry Shahbazi
Hi guys, There are a couple of issues that I faced: 1) Ceph automatically changes /etc/apt/sources.list.d/ceph.list! No matter what I set (emperor), it would change it to firefly. 2) On one of my hosts, /etc/ceph will not be created, so I have to create /etc/ceph manually and push ceph.conf

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Robert van Leeuwen
> this is a very good point that I totally overlooked. I concentrated more on > the IOPS alignment plus write durability, > and forgot to check the sequential write bandwidth. Again, this totally depends on the expected load. Running lots of VMs usually tends to end up being random IOPS on your

Re: [ceph-users] Issues related to Ceph (firefly)

2014-06-03 Thread jan.zeller
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Sherry Shahbazi Sent: Tuesday, 3 June 2014 13:35 To: ceph-users@lists.ceph.com Subject: [ceph-users] Issues related to Ceph (firefly) Hi guys, There are a couple of issues that I faced: 1) Ceph automatically changes

Re: [ceph-users] crush-ruleset parameter erasure vs replicated

2014-06-03 Thread Loic Dachary
Hi again, Here is the proposed update: https://github.com/ceph/ceph/pull/1909/files . Does it make sense to you? Cheers On 03/06/2014 12:27, Loic Dachary wrote: > Hi Kenneth, > > The documentation needs to be updated; I'll do that today. To set the > crush ruleset for a pool you can use

Re: [ceph-users] Issues related to Ceph (firefly)

2014-06-03 Thread Konrad Gutkowski
Hi, On 03.06.2014 at 13:47, wrote: From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Sherry Shahbazi Sent: Tuesday, 3 June 2014 13:35 To: ceph-users@lists.ceph.com Subject: [ceph-users] Issues related to Ceph (firefly) Hi guys, There are a couple of issues tha

Re: [ceph-users] crush-ruleset parameter erasure vs replicated

2014-06-03 Thread Kenneth Waegeman
- Message from Loic Dachary - Date: Tue, 03 Jun 2014 13:52:54 +0200 From: Loic Dachary Subject: Re: [ceph-users] crush-ruleset parameter erasure vs replicated To: Kenneth Waegeman , ceph-users Hi again, Here is the proposed update https://github.com/ceph/ceph/pu

Re: [ceph-users] recommendations for erasure coded pools and profile question

2014-06-03 Thread Kenneth Waegeman
- Message from Loic Dachary - Date: Fri, 23 May 2014 07:37:22 +0200 From: Loic Dachary Subject: Re: [ceph-users] recommendations for erasure coded pools and profile question To: Kenneth Waegeman , ceph-users Hi Kenneth, In the case of erasure coded pools, the "R
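As background for the erasure-coding recommendations discussed here, the k+m idea (k data chunks plus m coding chunks; any k of the k+m survive a loss) can be shown with a toy XOR parity code. This is only a conceptual sketch for the k=2, m=1 case; real pools use plugins such as jerasure with configurable k and m:

```python
def encode(a: bytes, b: bytes) -> bytes:
    """Coding chunk for k=2 data chunks: bytewise XOR parity (m=1)."""
    return bytes(x ^ y for x, y in zip(a, b))

def recover(known: bytes, parity: bytes) -> bytes:
    """Rebuild the lost data chunk from the surviving one plus parity."""
    return bytes(x ^ y for x, y in zip(known, parity))

a, b = b"hello ", b"world!"      # two equal-sized data chunks
p = encode(a, b)                 # one coding chunk
assert recover(b, p) == a        # chunk 'a' lost: rebuilt from b + p
assert recover(a, p) == b        # chunk 'b' lost: rebuilt from a + p
```

Storage overhead here is (k+m)/k = 1.5x, versus 3x for size=3 replication, which is the usual motivation for erasure coded pools.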

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Christian Balzer
On Tue, 3 Jun 2014 10:46:36 + Benjamin Somhegyi wrote: > Hello Robert & Christian, > > First, thank you for the general considerations, 3 and 3.extra has been > ruled out. > > > > A simple way to make 1) and 2) cheaper is to use AMD CPUs, they will do > > just fine at half the price with t

Re: [ceph-users] recommendations for erasure coded pools and profile question

2014-06-03 Thread Loic Dachary
Hi Kenneth, On 03/06/2014 14:11, Kenneth Waegeman wrote:> > - Message from Loic Dachary - >Date: Fri, 23 May 2014 07:37:22 +0200 >From: Loic Dachary > Subject: Re: [ceph-users] recommendations for erasure coded pools and profile > question > To: Kenneth Waegeman , ceph

[ceph-users] Information

2014-06-03 Thread yalla.gnan.kumar
Hi All, I need a diagram or a pictorial representation of some sort which outlines the relationships among Ceph components like OSDs, Pools, PGs etc. Also let me know if Inktank conducts any training for certification on Ceph. Thanks Kumar This message is for
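Short of a diagram, the relationships can be sketched in code: an object name hashes to a placement group (PG) within a pool, and CRUSH then maps each PG to a set of OSDs. A conceptual Python sketch (the round-robin `pg_to_osds` below is a stand-in for the real CRUSH algorithm, used only to show the two-step mapping):

```python
import zlib

def object_to_pg(obj_name: str, pg_num: int) -> int:
    """Object -> PG: hash of the name modulo the pool's pg_num."""
    return zlib.crc32(obj_name.encode()) % pg_num

def pg_to_osds(pg_id: int, osds: list, size: int = 3) -> list:
    """PG -> OSDs: deterministic stand-in for CRUSH picking
    `size` distinct OSDs for the acting set."""
    return [osds[(pg_id + i) % len(osds)] for i in range(size)]

pg = object_to_pg("my-object", 128)
acting = pg_to_osds(pg, osds=list(range(6)))
print(pg, acting)   # same object always lands on the same PG and OSDs
```

The point of the indirection is that clients can compute placement themselves: no central lookup table, only the cluster map plus these two deterministic functions.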

Re: [ceph-users] v0.67.9 Dumpling released

2014-06-03 Thread James Page
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Hi Sage http://ceph.com/download/ceph-0.67.9.tar.gz appears to be missing - any chance it can be posted so I can SRU it for Ubuntu? Cheers James On 21/05/14 21:02, Sage Weil wrote: > This Dumpling point release fixes several minor bugs. The most

Re: [ceph-users] Information

2014-06-03 Thread Emmanuel Florac
On Tue, 3 Jun 2014 13:05:06 + wrote: > I need a diagram or a pictorial representation of some sort which > outlines the relationship among ceph components like OSD, Pools, PG > etc. Also let me know if Inktank conducts any training for > certification on Ceph. > See this presentation:

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Benjamin Somhegyi
> > This is a very good point that I totally overlooked. I concentrated > > more on the IOPS alignment plus write durability, and forgot to check > > the sequential write bandwidth. The 400GB Intel S3700 is a lot faster > > but double the price (around $950) compared to the 200GB. > Indeed, th

[ceph-users] missing 0.81 from ceph.com/downloads/

2014-06-03 Thread Alfredo Deza
It looks like we missed a step in getting the 0.81 tarballs to ceph.com/downloads/. It just got uploaded. Apologies if you got bit by that! -Alfredo ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-c

Re: [ceph-users] v0.67.9 Dumpling released

2014-06-03 Thread Alfredo Deza
On Tue, Jun 3, 2014 at 9:29 AM, James Page wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > Hi Sage > > http://ceph.com/download/ceph-0.67.9.tar.gz appears to be missing - > any chance it can be posted so I can SRU it for Ubuntu? Good catch. We missed getting this one sent to the do

Re: [ceph-users] v0.67.9 Dumpling released

2014-06-03 Thread Alfredo Deza
On Tue, Jun 3, 2014 at 11:06 AM, Alfredo Deza wrote: > On Tue, Jun 3, 2014 at 9:29 AM, James Page wrote: >> -BEGIN PGP SIGNED MESSAGE- >> Hash: SHA256 >> >> Hi Sage >> >> http://ceph.com/download/ceph-0.67.9.tar.gz appears to be missing - >> any chance it can be posted so I can SRU it for

[ceph-users] Ceph internal operations when writing an object

2014-06-03 Thread Vincenzo Pii
Hi All, I would like to understand which operations Ceph performs internally when a new object is written to the storage using, e.g., librados. Sebastien Han has written something similar to what I need in [1] ("I.1.1. A single write…"), but I would like more detail. So, let's assume
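The flow described in the referenced post can be sketched as a toy simulation. This illustrates the replicated write path only, not Ceph's actual messaging code: the client sends the object to the PG's primary OSD, the primary fans the write out to the replica OSDs, and the client is acknowledged once every copy is persisted:

```python
class OSD:
    """Toy OSD: persists objects in an in-memory dict."""
    def __init__(self, osd_id: int):
        self.osd_id = osd_id
        self.store = {}

    def persist(self, name: str, data: bytes) -> bool:
        self.store[name] = data   # journal + apply, vastly simplified
        return True               # ack back to the primary

def client_write(name: str, data: bytes, acting: list) -> bool:
    """Client ack only after primary AND all replicas persisted."""
    primary, replicas = acting[0], acting[1:]
    acks = [primary.persist(name, data)]
    acks += [r.persist(name, data) for r in replicas]   # fan-out
    return all(acks)

osds = [OSD(i) for i in range(3)]         # size=3 acting set
assert client_write("obj1", b"payload", osds)
assert all(o.store["obj1"] == b"payload" for o in osds)
```

The real path adds journaling vs. backing-filesystem commits (the ack/commit distinction), which is exactly the detail the question above is after.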

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Cedric Lemarchand
Hello, On 03/06/2014 12:14, Christian Balzer wrote: > A simple way to make 1) and 2) cheaper is to use AMD CPUs, they will do > just fine at half the price with these loads. > If you're that tight on budget, 64GB RAM will do fine, too. I am interested in this specific thought; could you ela

Re: [ceph-users] Firefly RPMs broken on CentOS 6

2014-06-03 Thread Brian Rak
So, the fix I used is to modify /etc/yum.repos.d/epel.repo and add 'exclude=*ceph*'. It looks like this: [epel] name=Extra Packages for Enterprise Linux 6 - $basearch #baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-

Re: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

2014-06-03 Thread Sushma R
Haomai, I'm using the latest ceph master branch. ceph_smalliobench is a Ceph internal benchmarking tool similar to rados bench and the performance is more or less similar to that reported by fio. I tried to use fio with rbd ioengine ( http://telekomcloud.github.io/ceph/2014/02/26/ceph-performanc

Re: [ceph-users] Firefly RPMs broken on CentOS 6

2014-06-03 Thread Brian Rak
You need to remove the broken EPEL package (ceph-0.80.1-2.el6.x86_64) and reinstall the 'old' version from the ceph repo. Your machine got upgraded to the broken package, and yum will not automatically fix this (because the broken package has a higher version number than the correct one). On
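Putting the two fixes from this thread together, a hedged sketch of the recovery steps (package version taken from the thread; adjust to your repos):

```shell
# 1) Remove the broken EPEL build:
yum remove ceph-0.80.1-2.el6.x86_64

# 2) Pin EPEL so it cannot pull ceph packages back in - add to the
#    [epel] section of /etc/yum.repos.d/epel.repo:
#      exclude=*ceph*

# 3) Refresh metadata and reinstall from the ceph.com repo:
yum clean all
yum install ceph
```

Without the `exclude=*ceph*` line, the higher-versioned EPEL package would win again on the next update.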

Re: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

2014-06-03 Thread Mark Nelson
On 06/03/2014 01:55 PM, Sushma R wrote: Haomai, I'm using the latest ceph master branch. ceph_smalliobench is a Ceph internal benchmarking tool similar to rados bench and the performance is more or less similar to that reported by fio. I tried to use fio with rbd ioengine (http://telekomcloud.

Re: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

2014-06-03 Thread Danny Al-Gaaf
On 03.06.2014 20:55, Sushma R wrote: > Haomai, > > I'm using the latest ceph master branch. > > ceph_smalliobench is a Ceph internal benchmarking tool similar to rados > bench and the performance is more or less similar to that reported by fio. > > I tried to use fio with rbd ioengine ( > http

Re: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

2014-06-03 Thread Sushma R
ceph version : master (ceph version 0.80-713-g86754cc (86754cc78ca570f19f5a68fb634d613f952a22eb)) fio version : fio-2.1.9-20-g290a gdb backtrace #0 0x76de5249 in AO_fetch_and_add_full (incr=1, p=0x7fff0018) at /usr/include/atomic_ops/sysdeps/gcc/x86.h:68 #1 inc (this=0x7fff0018)

Re: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

2014-06-03 Thread Sushma R
Haomai/Mark, Sorry, there's a correction for the 64K randwrite XFS FileStore latency: it's more or less the same as the LevelDB KeyValueStore, i.e. ~90 msec. In which case, I don't see LevelDB performing any better than FileStore. Thanks, Sushma On Tue, Jun 3, 2014 at 12:29 PM, Mark Nelson wrote: > On

[ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-03 Thread Jason Harley
Howdy — I’ve had a failure on a small, Dumpling (0.67.4) cluster running on Ubuntu 13.10 machines. I had three OSD nodes (running 6 OSDs each), and lost two of them in a beautiful failure. One of these nodes even went so far as to scramble the XFS filesystems of my OSD disks (I’m curious if i

Re: [ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-03 Thread Smart Weblications GmbH - Florian Wiessner
Hi, On 03.06.2014 21:46, Jason Harley wrote: > Howdy — > > I’ve had a failure on a small, Dumpling (0.67.4) cluster running on Ubuntu > 13.10 machines. I had three OSD nodes (running 6 OSDs each), and lost two of > them in a beautiful failure. One of these nodes even went so far as to > sc

Re: [ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-03 Thread Jason Harley
# ceph pg 4.ff3 query > { "state": "active+recovering", > "epoch": 1642, > "up": [ > 7, > 26], > "acting": [ > 7, > 26], > "info": { "pgid": "4.ffe", > "last_update": "339'96", > "last_complete": "339'89", > "log_tail": "0'0", > "last_

Re: [ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-03 Thread Smart Weblications GmbH - Florian Wiessner
Hi, On 03.06.2014 22:04, Jason Harley wrote: > # ceph pg 4.ff3 query >> { "state": "active+recovering", >> "epoch": 1642, >> "up": [ >> 7, >> 26], >> "acting": [ >> 7, >> 26], [...] >> "recovery_state": [ >> { "name": "Started\/Primary\/Active",

Re: [ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-03 Thread Smart Weblications GmbH - Florian Wiessner
Hi, On 03.06.2014 23:24, Jason Harley wrote: > On Jun 3, 2014, at 4:17 PM, Smart Weblications GmbH - Florian Wiessner > <mailto:f.wiess...@smart-weblications.de> > wrote: > >> You could try to recreate the osds and start them. Then I think the recovery >> should proceed. If it does not, you co

Re: [ceph-users] Firefly RPMs broken on CentOS 6

2014-06-03 Thread Pedro Sousa
That's it :) thanks a million :) Regards, Pedro Sousa On Tue, Jun 3, 2014 at 7:57 PM, Brian Rak wrote: > You need to remove the broken epel package (ceph-0.80.1-2.el6.x86_64) and > reinstall the 'old' version from the ceph repo. Your machine got upgraded > to the broken package, and yum wil

Re: [ceph-users] Firefly RPMs broken on CentOS 6

2014-06-03 Thread Pedro Sousa
Hi Brian, I've done that but the issue persists: Dependencies Resolved == Package Arch Version Repository Size

Re: [ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-03 Thread Jason Harley
On Jun 3, 2014, at 5:58 PM, Smart Weblications GmbH - Florian Wiessner wrote: > I think it would be less painful if you had removed and then immediately > recreated the corrupted osd, to avoid 'holes' in the osd ids. It should work > with your configuration anyhow, though. I agree with

Re: [ceph-users] OSD server alternatives to choose

2014-06-03 Thread Christian Balzer
Hello, On Tue, 03 Jun 2014 18:52:00 +0200 Cedric Lemarchand wrote: > Hello, > > On 03/06/2014 12:14, Christian Balzer wrote: > > A simple way to make 1) and 2) cheaper is to use AMD CPUs, they will do > > just fine at half the price with these loads. > > If you're that tight on budget, 64GB

Re: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

2014-06-03 Thread Haomai Wang
Hi Sushma, On Wed, Jun 4, 2014 at 3:44 AM, Sushma R wrote: > Haomai/Mark, > > Sorry, there's a correction for the 64K randwrite XFS FileStore latency. It's > more or less the same as the LevelDB KeyValueStore, i.e. ~90 msec. > In which case, I don't see LevelDB performing any better than FileStore. > > Th

Re: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

2014-06-03 Thread Haomai Wang
The fix pull request is https://github.com/ceph/ceph/pull/1912/files. Can someone help review and merge it? On Wed, Jun 4, 2014 at 3:38 AM, Sushma R wrote: > ceph version : master (ceph version 0.80-713-g86754cc > (86754cc78ca570f19f5a68fb634d613f952a22eb)) > fio version : fio-2.1.9-20-g290a > >

Re: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

2014-06-03 Thread Huang Zhiteng
I have some notes about sharing performance results to mailing lists like ceph-users. Not directly related to the topic, but I think it is worth mentioning. I suggest we provide more supporting material when posting performance data, where possible. It may seem lengthy and boring but it really helps ot

[ceph-users] osd log error

2014-06-03 Thread Cao, Buddy
Hello, one of my osds keeps logging the entry below; do you know what it means? 2014-06-02 19:01:18.222089 7f246ac1d700 0 xfsfilestorebackend(/var/lib/ceph/osd/osd10) set_extsize: FSSETXATTR: (22) Invalid argument Wei Cao

Re: [ceph-users] one monitor out of quorum after upgrade

2014-06-03 Thread Steffen Thorhauer
On 06/03/2014 09:19 AM, Steffen Thorhauer wrote: Hi, I'm in the process of upgrading my ceph cluster from emperor to firefly. After upgrading my 3 mons, one is out of quorum. ceph health detail HEALTH_WARN 1 mons down, quorum 0,2 u124-11,u124-13 mon.u124-12 (rank 1) addr 10.37.124.12:6789/