Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Alex Bligh
On 17 Sep 2013, at 21:47, Jason Villalta wrote: > dd if=ddbenchfile of=/dev/null bs=8K > 8192000000 bytes (8.2 GB) copied, 19.7318 s, 415 MB/s As a general point, this benchmark may not do what you think it does, depending on the version of dd, as writes to /dev/null can be heavily optimised.
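A sketch of two ways to sidestep that optimization, assuming GNU dd with O_DIRECT support and that ddbenchfile already exists:

# echo 3 > /proc/sys/vm/drop_caches    # drop the page cache so reads actually hit the disk
# dd if=ddbenchfile of=/dev/null bs=8K iflag=direct

With iflag=direct the reads bypass the page cache, so the figure reflects the disk rather than RAM or a shortcut in the /dev/null write path.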

Re: [ceph-users] Help with radosGW

2013-09-18 Thread Alexis GÜNST HORN
Hello to all, Thanks for your answers. Well... after an awful night, I found the problem: it was an MTU mistake! No relation with Ceph! So sorry for the noise, and thanks again. Best Regards, Alexis
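For anyone who hits similar symptoms, a quick sketch of confirming an MTU mismatch; the interface name and peer host are placeholders:

# ip link show eth0 | grep mtu
# ping -M do -s 8972 peer-host    # 8972 = 9000 minus 28 bytes of IP/ICMP headers; fails if the path MTU is below 9000

This only matters when jumbo frames are configured somewhere on the path.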

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Jason Villalta
That dd gives me this: dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K 8192000000 bytes (8.2 GB) copied, 31.1807 s, 263 MB/s Which makes sense, because the SSD is running as SATA 2, which should give 3Gbps or ~300MB/s. I am still trying to better understand the speed difference between the

Re: [ceph-users] rbd in centos6.4

2013-09-18 Thread raj kumar
http://rpm.repo.onapp.com/repo/centos/6/x86_64/ On Wed, Sep 18, 2013 at 4:32 AM, Aquino, BenX O wrote: > Hello Ceph Users Group, > > Looking for rbd.ko for Centos6.3_x64 (2.6.32) or Centos6.4_x64 (2.6.38). > > Or point me to a buildable source or an rpm kernel package that has it.
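A quick sketch of checking whether a given kernel already ships the module; standard module paths, not verified against these exact CentOS kernels:

# modinfo rbd
# find /lib/modules/$(uname -r) -name 'rbd.ko*'
# modprobe rbd && lsmod | grep rbd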

[ceph-users] ulimit max user processes (-u) and non-root ceph clients

2013-09-18 Thread Dan Van Der Ster
Hi, We just finished debugging a problem with RBD-backed Glance image creation failures, and thought our workaround would be useful for others. Basically, we found that during an image upload, librbd on the glance api server was consuming many, many processes, eventually hitting the 1024 nproc limit
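For reference, a minimal sketch of this kind of workaround: raising the nproc limit in /etc/security/limits.conf for a hypothetical glance service user:

glance soft nproc 32768
glance hard nproc 32768

Verify with something like: # sudo -u glance bash -c 'ulimit -u'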

[ceph-users] [ANN] ceph-deploy 1.2.5 released!

2013-09-18 Thread Alfredo Deza
Hi all, There is a new release of ceph-deploy, the easy ceph deployment tool. There was a good number of bug fixes in this release and a wealth of improvements. Thanks to all of you who contributed patches and issues, and thanks to Dmitry Borodaenko and Andrew Woodward for extensively testing

[ceph-users] OSD and Journal Files

2013-09-18 Thread Ian_M_Porter
Hi, I read in the ceph documentation that one of the main performance snags in ceph is running the OSDs and journal files on the same disks, and that you should consider at a minimum running the journals on SSDs. Given I am looking to design a 150 TB cluster, I'm c
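For context, the journal location is a per-OSD setting in ceph.conf; a minimal sketch with hypothetical device names and an illustrative size:

[osd.0]
    osd journal = /dev/sdb1     # hypothetical SSD partition
    osd journal size = 10240    # MB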

Re: [ceph-users] Lost rbd image

2013-09-18 Thread Laurent Barbe
Hello Timofey, Do you still see your images with "rbd ls"? Which format (1 or 2) do you use? Laurent Barbe On 18/09/2013 08:54, Timofey wrote: I renamed a few images when the cluster was in a degraded state. Now I can't map one of them; error: rbd: add failed: (6) No such device or address I

Re: [ceph-users] Lost rbd image

2013-09-18 Thread Timofey
I use format 1. Yes, I see the images, but can't map them. > Hello Timofey, > > Do you still see your images with "rbd ls"? > Which format (1 or 2) do you use? > > > Laurent Barbe > > > On 18/09/2013 08:54, Timofey wrote: >> I renamed a few images when the cluster was in a degraded state. Now I can't map

Re: [ceph-users] Lost rbd image

2013-09-18 Thread Laurent Barbe
What is returned by rbd info? Do you see your image in the rbd_directory object? (replace rbd with the correct pool): # rados get -p rbd rbd_directory - | strings Do you have an object called oldname.rbd or newname.rbd? # rados get -p rbd oldname.rbd - | strings # rados get -p rbd newname.rbd - | st

Re: [ceph-users] OSD and Journal Files

2013-09-18 Thread Mike Dawson
Ian, There are two schools of thought here. Some people say, run the journal on a separate partition on the spinner alongside the OSD partition, and don't mess with SSDs for journals. This may be the best practice for an architecture of high-density chassis. The other design is to use SSDs f

Re: [ceph-users] OSD and Journal Files

2013-09-18 Thread Mark Nelson
Excellent overview Mike! Mark On 09/18/2013 10:03 AM, Mike Dawson wrote: Ian, There are two schools of thought here. Some people say, run the journal on a separate partition on the spinner alongside the OSD partition, and don't mess with SSDs for journals. This may be the best practice for an

Re: [ceph-users] OSD and Journal Files

2013-09-18 Thread Ian_M_Porter
Thanks Mike, great info! -Original Message- From: Mike Dawson [mailto:mike.daw...@cloudapt.com] Sent: 18 September 2013 16:04 To: Porter, Ian M; ceph-users@lists.ceph.com Subject: Re: [ceph-users] OSD and Journal Files Ian, There are two schools of thou

Re: [ceph-users] OSD and Journal Files

2013-09-18 Thread Corin Langosch
On 18.09.2013 17:03, Mike Dawson wrote: I think you'll be OK on CPU and RAM. I'm running the latest dumpling here, and with default settings each osd consumes more than 3 GB RAM at peak. So with 48 GB RAM it would not be possible to run the desired 18 osds. I filed a bug report for this here htt

Re: [ceph-users] Lost rbd image

2013-09-18 Thread Laurent Barbe
Which kernel version are you using on the client? Status of the pgs? # uname -a # ceph pg stat Laurent On 18/09/2013 17:45, Timofey wrote: yes, format 1: rbd info cve-backup | grep format format: 1 no, about this image: dmesg | grep rbd [ 294.355188] rbd: loaded rbd (rados block device) [

Re: [ceph-users] rbd in centos6.4

2013-09-18 Thread Aquino, BenX O
Thanks Raj, which of these rpm versions have you used on production machines? Thanks again in advance. Regards, -ben From: raj kumar [mailto:rajkumar600...@gmail.com] Sent: Wednesday, September 18, 2013 6:09 AM To: Aquino, BenX O Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] rbd in cento

Re: [ceph-users] Help with radosGW

2013-09-18 Thread Darren Birkett
Hi Alexis, Great to hear you fixed your problem! Would you care to describe in more detail what the fix was, in case other people experience the same issues as you did? Thanks Darren On 18 September 2013 10:12, Alexis GÜNST HORN wrote: > Hello to all, > Thanks for your answers. > > Well... af

Re: [ceph-users] Lost rbd image

2013-09-18 Thread Timofey Koolin
Now I have tried mounting cve-backup again. It mounted OK this time and I copied all the data out of it. I can't keep using ceph in production now :( It takes very deep experience with ceph to quickly locate the source of an error and quickly repair it. I will keep using it for data without critical availability requirements (for example

Re: [ceph-users] OSD and Journal Files

2013-09-18 Thread Gruher, Joseph R
>-Original Message- >From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users- >boun...@lists.ceph.com] On Behalf Of Mike Dawson > > you need to understand losing an SSD will cause >the loss of ALL of the OSDs which had their journal on the failed SSD. > >First, you probably don't want

Re: [ceph-users] About ceph testing

2013-09-18 Thread Loic Dachary
Hi David, You're welcome to join the next teuthology meeting. It's going to happen Thursday 19th September (i.e. tomorrow from where I stand) at 6pm Paris time (CEST). The location (mumble, irc ...) will be announced at 5:30pm Paris time (CEST) on irc.oftc.net#ceph-devel. Cheers On 1

Re: [ceph-users] OSD and Journal Files

2013-09-18 Thread Mike Dawson
Joseph, With properly architected failure domains and replication in a Ceph cluster, RAID1 has diminishing returns. A well-designed CRUSH map should allow for failures at any level of your hierarchy (OSDs, hosts, racks, rows, etc) while protecting the data with a configurable number of copie
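As a sketch of what failure domains beyond the host level look like, a CRUSH rule that places each replica in a different rack; the rule name and ruleset number are made up, and the CRUSH map must actually declare racks in its hierarchy:

rule replicated_across_racks {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type rack
    step emit
}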

Re: [ceph-users] About ceph testing

2013-09-18 Thread Gregory Farnum
On Tue, Sep 17, 2013 at 10:07 PM, david zhang wrote: > Hi ceph-users, > > Previously I sent one mail to ask for help on ceph unit test and function > test. Thanks to one of your guys, I got replied about unit test. > > Since we are planning to use ceph, but with strict quality bar inside, we > hav

Re: [ceph-users] Index document for radosgw buckets?

2013-09-18 Thread Gregory Farnum
What do you mean by index documents? Objects in each bucket are already kept in an index object; it's how we do listing and things. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Sep 17, 2013 at 11:37 PM, Jeppesen, Nelson wrote: > Is there a way to enable index docume

[ceph-users] ceph-deploy host as admin host

2013-09-18 Thread Warren Wang
Just got done deploying the largest ceph install I've had yet (9 boxes, 179TB), and I used ceph-deploy, but not without much consternation. I have a question before I file a bug report. Is the expectation that the deploy host will never be used as the admin host? I ran into various issues rela

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Jason Villalta
Any other thoughts on this thread, guys? Am I just crazy to want near-native SSD performance on a small SSD cluster? On Wed, Sep 18, 2013 at 8:21 AM, Jason Villalta wrote: > That dd gives me this: > > dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K > 8192000000 bytes (8.2 GB) copied, 3

Re: [ceph-users] OSD and Journal Files

2013-09-18 Thread Warren Wang
FWIW, we ran into this same issue, could not get a good enough SSD-to-spinner ratio, and decided on simply running the journals on each (spinning) drive for hosts that have 24 slots. The problem gets even worse when we're talking about some of the newer boxes. Warren On Wed, Sep 18, 20

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Mike Lowe
Well, in a word, yes. Do you really expect a network-replicated storage system in user space to be comparable to direct-attached SSD storage? For what it's worth, I've got a pile of regular spinning rust; this is what my cluster will do inside a VM with rbd writeback caching on. As you can see, l
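For reference, RBD writeback caching is a client-side setting in ceph.conf; a minimal sketch, where the size values are the documented defaults and are shown only for illustration:

[client]
    rbd cache = true
    rbd cache size = 33554432         # 32 MB of cache
    rbd cache max dirty = 25165824    # bytes that may be dirty before writeback kicks in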

Re: [ceph-users] problem with ceph-deploy hanging

2013-09-18 Thread Gruher, Joseph R
>>-Original Message- >>From: Alfredo Deza [mailto:alfredo.d...@inktank.com] >> >>Again, in this next coming release, you will be able to tell >>ceph-deploy to just install the packages without mangling your repos >>(or installing keys) > Updated to new ceph-deploy release 1.2.6 today but I

[ceph-users] v0.69 released

2013-09-18 Thread Sage Weil
Our v0.69 development release of Ceph is ready! The most notable user-facing new feature is probably improved support for CORS in the radosgw. There has also been a lot of new work going into the tree behind the scenes on the OSD that is laying the groundwork for tiering and cache pools. As

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Jason Villalta
Thanks Mike. High hopes, right ;) I guess we are not doing too badly compared to your numbers then. Just wish the gap was a little closer between native and ceph per OSD. C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s30 -o8 -fsequential -b1024 -BH -LS c:\TestFile.dat sqlio v1.5.SG using system counter

[ceph-users] 10/100 network for Mons?

2013-09-18 Thread Gandalf Corvotempesta
Hi to all. Currently I'm building a test cluster with 3 OSD servers connected with IPoIB for the cluster network and 10GbE for the public network. I have to connect these OSDs to some MON servers located in another rack with no gigabit or 10Gb connection. Could I use some 10/100 network ports? Which ki

Re: [ceph-users] problem with ceph-deploy hanging

2013-09-18 Thread Alfredo Deza
On Wed, Sep 18, 2013 at 3:58 PM, Gruher, Joseph R wrote: >>>-Original Message- >>>From: Alfredo Deza [mailto:alfredo.d...@inktank.com] >>> >>>Again, in this next coming release, you will be able to tell >>>ceph-deploy to just install the packages without mangling your repos >>>(or installi

Re: [ceph-users] ceph-deploy host as admin host

2013-09-18 Thread Alfredo Deza
On Wed, Sep 18, 2013 at 2:56 PM, Warren Wang wrote: > Just got done deploying the largest ceph install I've had yet (9 boxes, > 179TB), , and I used ceph-deploy, but not without much consternation. I > have a question before I file a bug report. > > Is the expectation that the deploy host will ne

Re: [ceph-users] Index document for radosgw buckets?

2013-09-18 Thread Jeppesen, Nelson
It's a feature Amazon added a few years back; it lets you serve a default document. For example, let's say I have http://mybucket.s3.ceph.com/index.html as my website; I can set my bucket's default index to index.html. Then I can browse to http://mybucket.s3.ceph.com and it'll return my webpage.
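Against S3 itself this is configured per bucket; a sketch with s3cmd, assuming a version with website support and reusing the bucket name from the example above:

# s3cmd ws-create --ws-index=index.html --ws-error=error.html s3://mybucket
# s3cmd ws-info s3://mybucket

Whether radosgw honors the same configuration was the open question in this thread.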

Re: [ceph-users] ulimit max user processes (-u) and non-root ceph clients

2013-09-18 Thread Gregory Farnum
On Wed, Sep 18, 2013 at 6:33 AM, Dan Van Der Ster wrote: > Hi, > We just finished debugging a problem with RBD-backed Glance image creation > failures, and thought our workaround would be useful for others. Basically, > we found that during an image upload, librbd on the glance api server was >

[ceph-users] Excessive mon memory usage in cuttlefish 0.61.8

2013-09-18 Thread Andrey Korolyov
Hello, I just restarted one of my mons after a month of uptime, and its memory commit rose to ten times higher than before: 13206 root 10 -10 12.8g 8.8g 107m S 65 14.0 0:53.97 ceph-mon A normal one looks like: 30092 root 10 -10 4411m 790m 46m S 1 1.2 1260:28 ceph-mon The monstore has simul
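One thing worth trying when a mon store and its memory balloon like this is a store compaction; a sketch, assuming this cuttlefish build supports the compact command and that mon.0 uses the default store path:

# ceph tell mon.0 compact
# du -sh /var/lib/ceph/mon/ceph-0/store.db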

Re: [ceph-users] OSDMap problem: osd does not exist.

2013-09-18 Thread Yasuhiro Ohara
Hi, My OSDs are not joining the cluster correctly, because the nonce they assume and the one they receive from the peer are different. It says "wrong node" because the entity_id_t peer_addr (i.e., the combination of the IP address, port number, and the nonce) is different. Now, my questions are: 1. Are th

Re: [ceph-users] Scaling RBD module

2013-09-18 Thread Josh Durgin
On 09/17/2013 03:30 PM, Somnath Roy wrote: Hi, I am running Ceph on a 3-node cluster and each of my server nodes is running 10 OSDs, one for each disk. I have one admin node and all the nodes are connected with 2 x 10G network. One network is for the cluster and the other is configured as the public netwo

Re: [ceph-users] OSDMap problem: osd does not exist.

2013-09-18 Thread Sage Weil
Hey, On Wed, 18 Sep 2013, Yasuhiro Ohara wrote: > > Hi, > > My OSDs are not joining the cluster correctly, > because the nonce they assume and receive from the peer are different. > It says "wrong node" because of the entity_id_t peer_addr (i.e., the > combination of the IP address, port number,

[ceph-users] ceph-deploy not including sudo?

2013-09-18 Thread Gruher, Joseph R
Using latest ceph-deploy: ceph@cephtest01:/my-cluster$ sudo ceph-deploy --version 1.2.6 I get this failure: ceph@cephtest01:/my-cluster$ sudo ceph-deploy install cephtest03 cephtest04 cephtest05 cephtest06 [ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster ceph hosts ce
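A common cause of remote install failures like this is the deploy user lacking passwordless sudo on the targets; a sketch of the usual fix, run once per target host with whatever username you actually deploy as:

# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
# sudo chmod 0440 /etc/sudoers.d/ceph

Whether that is the failure here depends on what the rest of the ceph-deploy log showed.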

Re: [ceph-users] ulimit max user processes (-u) and non-root ceph clients

2013-09-18 Thread Dan Van Der Ster
On Sep 18, 2013, at 11:50 PM, Gregory Farnum wrote: > On Wed, Sep 18, 2013 at 6:33 AM, Dan Van Der Ster > wrote: >> Hi, >> We just finished debugging a problem with RBD-backed Glance image creation >> failures, and thought our workaround would be useful for others. Basically, >> we found tha

Re: [ceph-users] Rugged data distribution on OSDs

2013-09-18 Thread Mihály Árva-Tóth
Hello Greg, 2013/9/17 Gregory Farnum > Well, that all looks good to me. I'd just keep writing and see if the > distribution evens out some. > You could also double or triple the number of PGs you're using in that > pool; it's not atrocious, but it's a little low for 9 OSDs. > Okay, I see; thank y
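For reference, raising the PG count on an existing pool is a two-step sketch; the pool name and target count are examples, and pg_num can be raised but never lowered:

# ceph osd pool set rbd pg_num 512
# ceph osd pool set rbd pgp_num 512

pgp_num should follow pg_num so the new placement groups are actually used for data placement.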