Re: [ceph-users] e release

2013-05-13 Thread Dan van der Ster
On Fri, May 10, 2013 at 8:31 PM, Sage Weil wrote: > So far I've found > a few latin names, but the main problem is that I can't find a single > large list of species with the common names listed. Go here: http://www.marinespecies.org/aphia.php?p=search Search for common name begins with e Taxon r

Re: [ceph-users] monitor upgrade from 0.56.6 to 0.61.1 on squeeze failed!

2013-05-13 Thread Joao Eduardo Luis
On 05/12/2013 03:57 AM, Smart Weblications GmbH - Florian Wiessner wrote: Hi, i upgraded from 0.56.6 to 0.61.1 and tried to restart one monitor: Hello Florian, We are aware and actively working on a fix for this. Ticket: http://tracker.ceph.com/issues/4974 Thanks! -Joao /etc/init.d/

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-13 Thread Greg
On 13/05/2013 07:38, Olivier Bonvalet wrote: On Friday 10 May 2013 at 19:16 +0200, Greg wrote: Hello folks, I'm in the process of testing Ceph and RBD. I have set up a small cluster of hosts, each running a MON and an OSD with both journal and data on the same SSD (OK, this is stupid but
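For readers following this thread, a rough way to benchmark the RADOS layer and the RBD layer separately — pool name, mount point, and runtimes below are placeholders, not taken from the thread:

   # raw RADOS throughput against a test pool
   rados bench -p testpool 30 write --no-cleanup
   rados bench -p testpool 30 seq
   # throughput through a filesystem on a mapped RBD image
   dd if=/dev/zero of=/mnt/rbd-test/file bs=4M count=100 oflag=direct

(--no-cleanup keeps the benchmark objects around so the seq pass has something to read; drop it if your rados binary doesn't know the flag.)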

[ceph-users] Kernel support syncfs for Centos6.3

2013-05-13 Thread Lenon Join
Hi all, I am testing ceph 0.56.6 with CentOS 6.3. I have one server, using RAID 6, divided into 2 partitions (2 OSDs). With CentOS 6.3 (kernel 2.6.32-358), the OSDs on the same server frequently report errors: " . osd.x [WRN] slow request x seconds old .
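A note on the subject line: syncfs(2) needs both kernel support and a glibc wrapper — roughly kernel >= 2.6.39 and glibc >= 2.14 on mainline — and without it ceph-osd falls back to sync(2), which flushes every mounted filesystem and hurts hosts running several OSDs. A rough way to check what a given box provides (the kallsyms symbol name is an assumption, not from this thread):

   uname -r
   ldd --version | head -1             # glibc version; the syncfs() wrapper appeared in 2.14
   grep -w sys_syncfs /proc/kallsyms   # non-empty output suggests the kernel exposes the syscall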

Re: [ceph-users] e release

2013-05-13 Thread Rick Richardson
This might be taking some artistic license, but "Elegant Eledone " has a nice ring to it. On May 13, 2013 3:42 AM, "Dan van der Ster" wrote: > On Fri, May 10, 2013 at 8:31 PM, Sage Weil wrote: > > So far I've found > > a few latin names, but the main problem is that I can't find a single > > la

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-13 Thread Mark Nelson
On 05/13/2013 07:26 AM, Greg wrote: On 13/05/2013 07:38, Olivier Bonvalet wrote: On Friday 10 May 2013 at 19:16 +0200, Greg wrote: Hello folks, I'm in the process of testing Ceph and RBD. I have set up a small cluster of hosts, each running a MON and an OSD with both journal and data on

Re: [ceph-users] e release

2013-05-13 Thread Loic Dachary
On 05/13/2013 03:32 PM, Rick Richardson wrote: > This might be taking some artistic license, but "Elegant Eledone " has a nice > ring to it. +1 :-) > > On May 13, 2013 3:42 AM, "Dan van der Ster" > wrote: > > On Fri, May 10, 2013 at 8:31 PM, Sage Weil

Re: [ceph-users] e release

2013-05-13 Thread Dave Spano
Personally, just naming the release Emperor after the emperor nautilus or Encornet sounds nice. Word-wise, it seems to fit with release names like Argonaut, whereas Elegant Eledone sounds more like an Ubuntu release. Dave Spano - Original Message - From: "Rick Richardson" Cc:

Re: [ceph-users] e release

2013-05-13 Thread Steven Presser
+1 for Encornet On 05/13/2013 10:31 AM, Dave Spano wrote: Personally, just naming the release Emperor after the emperor nautilus or Encornet sounds nice. Word-wise, it seems to fit with release names like Argonaut, whereas Elegant Eledone sounds more like an Ubuntu release. Dave Spano

Re: [ceph-users] e release

2013-05-13 Thread John Wilkins
Here's a link to nautiloids beginning with E: http://en.wikipedia.org/wiki/List_of_nautiloids#E On Mon, May 13, 2013 at 7:37 AM, Steven Presser wrote: > +1 for Encornet > > > On 05/13/2013 10:31 AM, Dave Spano wrote: > > Personally, just naming the release Emperor after the emperor nautilus or > En

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-13 Thread Greg
On 13/05/2013 15:55, Mark Nelson wrote: On 05/13/2013 07:26 AM, Greg wrote: On 13/05/2013 07:38, Olivier Bonvalet wrote: On Friday 10 May 2013 at 19:16 +0200, Greg wrote: Hello folks, I'm in the process of testing Ceph and RBD. I have set up a small cluster of hosts, each running a

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-13 Thread Gandalf Corvotempesta
2013/5/13 Greg : > thanks a lot for pointing this out, it indeed makes a *huge* difference ! >> >> # dd if=/mnt/t/1 of=/dev/zero bs=4M count=100 >> >> 100+0 records in >> 100+0 records out >> 419430400 bytes (419 MB) copied, 5.12768 s, 81.8 MB/s > > (caches dropped before each test of course) What
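The cache dropping mentioned above ("caches dropped before each test") is typically done between runs with:

   sync
   echo 3 > /proc/sys/vm/drop_caches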

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-13 Thread Mark Nelson
On 05/13/2013 10:01 AM, Gandalf Corvotempesta wrote: 2013/5/13 Greg : thanks a lot for pointing this out, it indeed makes a *huge* difference ! # dd if=/mnt/t/1 of=/dev/zero bs=4M count=100 100+0 records in 100+0 records out 419430400 bytes (419 MB) copied, 5.12768 s, 81.8 MB/s (caches drop

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-13 Thread Greg
On 13/05/2013 17:01, Gandalf Corvotempesta wrote: 2013/5/13 Greg : thanks a lot for pointing this out, it indeed makes a *huge* difference! # dd if=/mnt/t/1 of=/dev/zero bs=4M count=100 100+0 records in 100+0 records out 419430400 bytes (419 MB) copied, 5.12768 s, 81.8 MB/s (caches droppe

Re: [ceph-users] CRUSH maps for multiple switches

2013-05-13 Thread Gregory Farnum
On Wednesday, May 8, 2013, Gandalf Corvotempesta wrote: > Let's assume 20 OSD servers and 4x 12-port switches, 2 for the public > network and 2 for the cluster network > > No link between public switches and no link between cluster switches. > > first 10 OSD servers connected to public switch1 and the
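One common way to express "keep replicas behind different switches" is to add a switch- or rack-level bucket to the CRUSH map and choose leaves across it. A rough sketch of the edit cycle — bucket type and file names here are placeholders:

   ceph osd getcrushmap -o crushmap.bin
   crushtool -d crushmap.bin -o crushmap.txt
   # edit crushmap.txt: add e.g. one "rack" bucket per switch and a rule with
   #   step chooseleaf firstn 0 type rack
   crushtool -c crushmap.txt -o crushmap.new
   ceph osd setcrushmap -i crushmap.new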

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-13 Thread Mark Nelson
On 05/13/2013 09:52 AM, Greg wrote: On 13/05/2013 15:55, Mark Nelson wrote: On 05/13/2013 07:26 AM, Greg wrote: On 13/05/2013 07:38, Olivier Bonvalet wrote: On Friday 10 May 2013 at 19:16 +0200, Greg wrote: Hello folks, I'm in the process of testing Ceph and RBD. I have set up a sm

Re: [ceph-users] Number of objects per pool?

2013-05-13 Thread Gregory Farnum
On Wednesday, May 8, 2013, Craig Lewis wrote: > Is there a practical limit to the number of objects I can store in a pool? > Nope! > I'm planning to use RADOS Gateway, and I'm planning to start by adding > about 1M objects to the gateway. Once that initial migration is done and > burns in, I

Re: [ceph-users] ceph 0.56.6 with kernel 2.6.32-358 (centos 6.3)

2013-05-13 Thread Gregory Farnum
On Friday, May 10, 2013, Lenon Join wrote: > Hi all, > > I deployed ceph 0.56.6. > > I have 1 server running the OSD daemon (ext4 format), 1 server running Mon + MDS. > > > > I use RAID 6 with 44TB capacity, divided into 2 partitions *(ext4)*, > each corresponding to 1 OSD. > > Ceph -s: > >health HEALTH_O

Re: [ceph-users] Striped read

2013-05-13 Thread Gregory Farnum
On Friday, May 10, 2013, wrote: > Hi, > > I’d like to know how a file that’s been striped across multiple > objects/object sets (potentially multiple placement groups) is > reconstituted and returned to a client? > > For example, say I have a 100 MB file, foo, that’s
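For context: with the default layout (4 MB objects, stripe count 1), a 100 MB file maps to 25 consecutive objects, and a read at byte offset X falls in object number floor(X / 4 MiB). The client library computes those object names itself and fetches each one from whichever OSDs CRUSH assigns, so no single server reassembles the file. To see where one object lands, something like the following works (pool and object names below are purely illustrative):

   ceph osd map data 10000000abc.00000003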

[ceph-users] shared images

2013-05-13 Thread Harald Rößler
Hi all, is there a description of how a shared image works in detail? Can such an image be used as a shared file system mounted on two virtual machines (KVM)? In my case, one machine writes and the other KVM mounts it read-only. Are the changes visible on the read-only KVM? Thanks With Re

[ceph-users] rbd image clone flattening @ client or cluster level?

2013-05-13 Thread w sun
While planning the use of fast clones from the openstack glance image store to cinder volumes, I am a little concerned about the possible IO performance impact on the cinder volume service node if I have to flatten multiple images down the road. Am I right to assume the copying of the
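For reference on the mechanics being asked about, flattening is requested through the rbd CLI (or the librbd API); a minimal sketch, with placeholder pool/image names:

   rbd flatten volumes/volume-1234
   rbd info volumes/volume-1234     # the 'parent:' line disappears once flattening completes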

Re: [ceph-users] CRUSH maps for multiple switches

2013-05-13 Thread Gregory Farnum
[Please keep conversations on the list.] On Mon, May 13, 2013 at 9:15 AM, Gandalf Corvotempesta wrote: > 2013/5/13 Gregory Farnum : >> What's your goal here? If the switches are completely isolated from each >> other, then Ceph is going to have trouble (it expects a fully connected >> network), so

Re: [ceph-users] RBD snapshot - time and consistent

2013-05-13 Thread Gregory Farnum
On Sat, May 11, 2013 at 1:34 AM, Timofey Koolin wrote: > Does snapshot time depend on image size? It shouldn't. > Does a snapshot create a consistent state of the image at the moment the snapshot starts? > > For example, if I have a file system on it and don't stop IO before starting the snapshot - > is it worse than turn o
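For readers looking for the consistency angle: an RBD snapshot by itself is crash-consistent at best, so quiescing the filesystem first is the usual approach. A minimal sketch, assuming a mounted filesystem on the image (names are placeholders):

   fsfreeze -f /mnt/data
   rbd snap create rbd/myimage@before-change
   fsfreeze -u /mnt/data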

Re: [ceph-users] Maximums for Ceph architectures

2013-05-13 Thread Gregory Farnum
On Sat, May 11, 2013 at 4:47 AM, Igor Laskovy wrote: > Hi all, > > Does anybody know where to learn about Maximums for Ceph architectures? > For example, I'm trying to find out about the maximum size of rbd image and > cephfs file. Additionally want to know maximum size for RADOS Gateway object >

Re: [ceph-users] Kernel support syncfs for Centos6.3

2013-05-13 Thread Gregory Farnum
On Mon, May 13, 2013 at 6:13 AM, Lenon Join wrote: > Hi all, > > I am testing ceph 0.56.6 with CentOS 6.3. > I have one server, using RAID 6, divided into 2 partitions (2 OSDs). > With CentOS 6.3 (kernel 2.6.32-358), the OSDs on the same server frequently > report errors: > > " . osd.x [WRN] slow re

Re: [ceph-users] RBD snapshot - time and consistent

2013-05-13 Thread Leen Besselink
On Mon, May 13, 2013 at 09:39:09AM -0700, Gregory Farnum wrote: > On Sat, May 11, 2013 at 1:34 AM, Timofey Koolin wrote: > > Does snapshot time depend on image size? > > It shouldn't. > > > Does a snapshot create a consistent state of the image at the moment the snapshot starts? > > > > For example if I hav

Re: [ceph-users] Kernel support syncfs for Centos6.3

2013-05-13 Thread Mark Nelson
On 05/13/2013 11:50 AM, Gregory Farnum wrote: On Mon, May 13, 2013 at 6:13 AM, Lenon Join wrote: Hi all, I am testing ceph 0.56.6 with CentOS 6.3. I have one server, using RAID 6, divided into 2 partitions (2 OSDs). With CentOS 6.3 (kernel 2.6.32-358), the OSDs on the same server frequently report errors

Re: [ceph-users] Kernel support syncfs for Centos6.3

2013-05-13 Thread Dan Mick
On May 13, 2013 9:50 AM, "Gregory Farnum" wrote: > > On Mon, May 13, 2013 at 6:13 AM, Lenon Join wrote: > > Hi all, > > > > I am testing ceph 0.56.6 with CentOS 6.3. > > I have one server, using RAID 6, divided into 2 partitions (2 OSDs). > > With CentOS 6.3 (kernel 2.6.32-358), the OSDs on the same

Re: [ceph-users] shared images

2013-05-13 Thread Gregory Farnum
On Mon, May 13, 2013 at 9:10 AM, Harald Rößler wrote: > > Hi all > > is there a description of how a shared image works in detail? Can such > an image be used for a shared file system on two virtual machines > (KVM) to mount. In my case, write on one machine and read only on the > other KV

Re: [ceph-users] shared images

2013-05-13 Thread Harald Rößler
On Mon, 2013-05-13 at 18:55 +0200, Gregory Farnum wrote: > On Mon, May 13, 2013 at 9:10 AM, Harald Rößler wrote: > > > > Hi all > > > > is there a description of how a shared image works in detail? Can such > > an image be used for a shared file system on two virtual machines > > (KVM) to

Re: [ceph-users] shared images

2013-05-13 Thread Gregory Farnum
On Mon, May 13, 2013 at 11:35 AM, Harald Rößler wrote: > On Mon, 2013-05-13 at 18:55 +0200, Gregory Farnum wrote: >> On Mon, May 13, 2013 at 9:10 AM, Harald Rößler >> wrote: >> > >> > Hi Together >> > >> > is there a description of how a shared image works in detail? Can such >> > an image can b

Re: [ceph-users] shared images

2013-05-13 Thread Jens Kristian Søgaard
Hi, Thanks, and sorry, maybe I did not explain clearly what I mean. When I'm mounting an rbd image on two KVM machines, then if I am writing a file on one system the other system does not recognize the change to the file system. I thought there is some magic in librbd which gives the OS the Th

Re: [ceph-users] shared images

2013-05-13 Thread Dan Mick
On 05/13/2013 09:55 AM, Gregory Farnum wrote: On Mon, May 13, 2013 at 9:10 AM, Harald Rößler wrote: Hi all, is there a description of how a shared image works in detail? Can such an image be used for a shared file system on two virtual machines (KVM) to mount. In my case, write on on

Re: [ceph-users] Hardware recommendation / calculation for large cluster

2013-05-13 Thread Tim Mohlmann
Hi, OK, thanks for all the info. Just "fyi", I am a mechanical / electrical marine service engineer. So basically I think in pressure, flow, contents, voltage, (milli)amps, power and torque. So I am just trying to relate it to the same principles. Hence my questions. I am certainly not a noob in

Re: [ceph-users] Help! 61.1 killed my monitors in prod

2013-05-13 Thread Stephen Street
On May 10, 2013, at 3:39 PM, Joao Eduardo Luis wrote: > We would certainly be interested in taking a look at logs from those > monitors, and would appreciate if you could set 'debug mon = 20', 'debug auth > = 10' and 'debug ms = 1', and give them a spin until you hit your issue. > I seeing t
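The debug levels Joao asks for can go into ceph.conf on the monitor hosts, e.g.:

   [mon]
       debug mon = 20
       debug auth = 10
       debug ms = 1

or be set on a running monitor through its admin socket (the socket path and monitor id below are assumptions):

   ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config set debug_mon 20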

Re: [ceph-users] Hardware recommendation / calculation for large cluster

2013-05-13 Thread Leen Besselink
On Mon, May 13, 2013 at 09:30:38PM +0200, Tim Mohlmann wrote: > Hi, > > OK, thanks for all the info. > > Just "fyi", I am a mechanical / electrical marine service engineer. So > basically > I think in pressure, flow, contents, voltage, (milli)amps, power and torque. > So > I am just trying to rel

[ceph-users] Fwd: On Developer Summit topic Ceph stats and monitoring tools

2013-05-13 Thread Leen Besselink
Hi folks, As I didn't get a reply on the developer list at the time, I thought I might try again on the users list. So what do you think, good idea? Bad idea? - Forwarded message from Leen Besselink - Date: Fri, 10 May 2013 00:35:28 +0200 From: Leen Besselink To: ceph-de...@vger.

Re: [ceph-users] rbd image clone flattening @ client or cluster level?

2013-05-13 Thread Josh Durgin
On 05/13/2013 09:17 AM, w sun wrote: While planning the usage of fast clone from openstack glance image store to cinder volume, I am a little concerned about possible IO performance impact to the cinder volume service node if I have to perform flattening of the multiple image down the road. Am I

[ceph-users] RBD Reference Counts for deletion

2013-05-13 Thread Mandell Degerness
I know that there was another report of the bad behavior when deleting an RBD that is currently mounted on a host. My problem is related, but slightly different. We are using openstack and Grizzly Cinder to create a bootable ceph volume. The instance was booted and all was well. The server on w
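A hedged aside: before removing an image, it can help to check whether anything still holds it open by listing watchers on its header object; for a format 1 image the header is named <image>.rbd. The pool and image names below are placeholders, and this assumes your rados tool has the listwatchers subcommand:

   rados -p volumes listwatchers volume-1234.rbd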

Re: [ceph-users] RBD Reference Counts for deletion

2013-05-13 Thread Mandell Degerness
Sorry. I should have mentioned, this is using the bobtail version of ceph. On Mon, May 13, 2013 at 1:13 PM, Mandell Degerness wrote: > I know that there was another report of the bad behavior when deleting > an RBD that is currently mounted on a host. My problem is related, > but slightly diffe

Re: [ceph-users] Help! 61.1 killed my monitors in prod

2013-05-13 Thread Joao Eduardo Luis
On 05/13/2013 08:40 PM, Stephen Street wrote: On May 10, 2013, at 3:39 PM, Joao Eduardo Luis wrote: We would certainly be interested in taking a look at logs from those monitors, and would appreciate if you could set 'debug mon = 20', 'debug auth = 10' and 'debug ms = 1', and give them a spi

Re: [ceph-users] RBD Reference Counts for deletion

2013-05-13 Thread Sage Weil
On Mon, 13 May 2013, Mandell Degerness wrote: > Sorry. I should have mentioned, this is using the bobtail version of ceph. > > On Mon, May 13, 2013 at 1:13 PM, Mandell Degerness > wrote: > > I know that there was another report of the bad behavior when deleting > > an RBD that is currently mount

Re: [ceph-users] Trouble with bobtail->cuttlefish upgrade

2013-05-13 Thread Gregory Farnum
See http://tracker.ceph.com/issues/4974; we're testing the fix out for a packaged release now. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Sat, May 11, 2013 at 12:40 AM, Pawel Stefanski wrote: > hello! > > I'm trying to upgrade my test cluster to cuttlefish, but I'm stu

Re: [ceph-users] OSD crash during script, 0.56.4

2013-05-13 Thread Gregory Farnum
On Tue, May 7, 2013 at 9:44 AM, Travis Rhoden wrote: > Hey folks, > > Saw this crash the other day: > > ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca) > 1: /usr/bin/ceph-osd() [0x788fba] > 2: (()+0xfcb0) [0x7f19d1889cb0] > 3: (gsignal()+0x35) [0x7f19d0248425] > 4: (abort()+0x1

Re: [ceph-users] OSD crash during script, 0.56.4

2013-05-13 Thread Travis Rhoden
I'm afraid I don't. I don't think I looked when it happened, and searching for one just now came up empty. :/ If it happens again, I'll be sure to keep my eye out for one. FWIW, this particular server (1 out of 5) has 8GB *less* RAM than the others (one bad stick, it seems), and this has happen

Re: [ceph-users] Trouble with bobtail->cuttlefish upgrade

2013-05-13 Thread Smart Weblications GmbH - Florian Wiessner
On 13.05.2013 22:47, Gregory Farnum wrote: > See http://tracker.ceph.com/issues/4974; we're testing the fix out for > a packaged release now. I see this has been resolved; when will a new package for debian squeeze be ready? -- Best regards, Florian Wiessner Smart Webli

[ceph-users] distinguish administratively down OSDs

2013-05-13 Thread Travis Rhoden
Hey folks, This is either a feature request, or a request for guidance on handling something that must be common... =) I have a cluster with dozens of OSDs, and one started having read errors (media errors) from the hard disk. Ceph complained, so I took it out of service by marking it down and out.
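For reference, the "down and out" steps mentioned above are typically:

   ceph osd out 12     # stop mapping data to osd.12 (the id is a placeholder)
   ceph osd down 12    # mark it down; it may mark itself up again unless the daemon is stopped
   ceph osd tree       # shows each OSD's up/down state and reweight (0 when out)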

Re: [ceph-users] Trouble with bobtail->cuttlefish upgrade

2013-05-13 Thread Ian Colle
Florian, It's building now, should be out in a few hours. Ian R. Colle Ceph Program Manager Inktank Cell: +1.303.601.7713 Email: i...@inktank.com Delivering the Future of Storage On 5/13/13 3:37 PM, "Smart Weblicati

Re: [ceph-users] Trouble with bobtail->cuttlefish upgrade

2013-05-13 Thread Smart Weblications GmbH - Florian Wiessner
On 13.05.2013 23:49, Ian Colle wrote: > Florian, > > It's building now, should be out in a few hours. Thank you. -- Best regards, Florian Wiessner Smart Weblications GmbH Martinsberger Str. 1 D-95119 Naila fon.: +49 9282 9638 200 fax.: +49 9282 9638 205 24/7: +49 900 144 000

Re: [ceph-users] Help! 61.1 killed my monitors in prod

2013-05-13 Thread Stephen Street
Joao, Thanks for your response. Sorry for the marginal quality of the original e-mail. Better log information is in-line. On May 13, 2013, at 1:19 PM, Joao Eduardo Luis wrote: > On 05/13/2013 08:40 PM, Stephen Street wrote: >> >> On May 10, 2013, at 3:39 PM, Joao Eduardo Luis wrote: >>

[ceph-users] HEALTH_ERR 14 pgs inconsistent; 18 scrub errors

2013-05-13 Thread James Harper
After replacing a failed hard disk, ceph health reports "HEALTH_ERR 14 pgs inconsistent; 18 scrub errors". The disk was a total loss, so I replaced it, ran mkfs etc. and rebuilt the osd, and while it has resynchronised everything, the above still remains. What should I do to resolve this? Thanks Ja
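The usual next step (a generic sketch, not specific to this cluster) is to locate the inconsistent PGs and ask Ceph to repair them, keeping in mind that repair trusts the primary copy, so it is worth checking the OSD logs for which replica is actually bad first:

   ceph health detail | grep inconsistent
   ceph pg repair 2.3f     # the PG id is a placeholder; repeat for each inconsistent PG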

Re: [ceph-users] HEALTH_ERR 14 pgs inconsistent; 18 scrub errors

2013-05-13 Thread Smart Weblications GmbH - Florian Wiessner
On 14.05.2013 01:46, James Harper wrote: > After replacing a failed hard disk, ceph health reports "HEALTH_ERR 14 pgs > inconsistent; 18 scrub errors" > > The disk was a total loss so I replaced it, ran mkfs etc. and rebuilt the osd > and while it has resynchronised everything the above still re

Re: [ceph-users] HEALTH_ERR 14 pgs inconsistent; 18 scrub errors

2013-05-13 Thread Smart Weblications GmbH - Florian Wiessner
On 14.05.2013 02:11, James Harper wrote: >> >> On 14.05.2013 01:46, James Harper wrote: >>> After replacing a failed hard disk, ceph health reports "HEALTH_ERR 14 pgs >> inconsistent; 18 scrub errors" >>> >>> The disk was a total loss so I replaced it, ran mkfs etc. and rebuilt the osd >> and whi

Re: [ceph-users] HEALTH_ERR 14 pgs inconsistent; 18 scrub errors

2013-05-13 Thread James Harper
> > On 14.05.2013 02:11, James Harper wrote: > >> > >> On 14.05.2013 01:46, James Harper wrote: > >>> After replacing a failed hard disk, ceph health reports "HEALTH_ERR 14 > pgs > >> inconsistent; 18 scrub errors" > >>> > >>> The disk was a total loss so I replaced it, ran mkfs etc. and rebuilt

[ceph-users] Unable to get RadosGW working on CentOS 6

2013-05-13 Thread Jeff Bachtel
Environment is CentOS 6.4, Apache, mod_fastcgi (from repoforge, so probably without the 100-continue patches). I'm attempting to install radosgw on the 2nd mon host. My setup consistently fails when running s3test.py from http://wiki.debian.org/OpenStackCephHowto (with appropriate values filled in
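For comparison, the standard radosgw/mod_fastcgi wiring of this era looks roughly like the sketch below; the client name, socket path and keyring path are assumptions, not taken from this setup:

   # /var/www/s3gw.fcgi (FastCGI wrapper script)
   #!/bin/sh
   exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

   # Apache vhost fragment
   FastCgiExternalServer /var/www/s3gw.fcgi -socket /var/run/ceph/radosgw.sock
   RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

   # /etc/ceph/ceph.conf on the gateway host
   [client.radosgw.gateway]
       keyring = /etc/ceph/keyring.radosgw.gateway
       rgw socket path = /var/run/ceph/radosgw.sock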

[ceph-users] ceph monitor crashes

2013-05-13 Thread Mr. NPP
Hello, I'm currently running 0.61, with about 44 OSDs and 4 monitors (one as a spare), across about 6 hosts. I've been running into an issue where, when one ceph host goes down, the entire system becomes unusable. Today we recovered from an SSD crash for an OSD's journal, and it was a lot of
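A note on "4 monitors, one as a spare": monitors form quorum by strict majority, floor(n/2) + 1, so if the spare is in the monmap it still counts toward n. With 3 monitors quorum is floor(3/2) + 1 = 2, and with 4 it is floor(4/2) + 1 = 3 — both configurations tolerate only a single monitor failure.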

[ceph-users] v0.61.2 released

2013-05-13 Thread Sage Weil
This release has only two changes: it disables a debug log by default that consumes disk space on the monitor, and fixes a bug with upgrading bobtail monitor stores with duplicated GV values. We urge all v0.61.1 users to upgrade to avoid filling the monitor data disks. * mon: fix conversion o

Re: [ceph-users] Help! 61.1 killed my monitors in prod

2013-05-13 Thread Stephen Street
Joao, On May 13, 2013, at 3:24 PM, Stephen Street wrote: > > From the logs, it appears that the monitors are struggling to bind to the > network at system start. If I issue an initctl restart ceph-mon-all on all > nodes running monitors, the system starts functioning correctly. > I found the

Re: [ceph-users] Unable to get RadosGW working on CentOS 6

2013-05-13 Thread Yehuda Sadeh
On Mon, May 13, 2013 at 7:01 PM, Jeff Bachtel wrote: > Environment is CentOS 6.4, Apache, mod_fastcgi (from repoforge, so probably > without the continue 100 patches). I'm attempting to install radosgw on the > 2nd mon host. > > My setup consistently fails when running s3test.py from > http://wiki

[ceph-users] Regd: Ceph-deploy

2013-05-13 Thread Sridhar Mahadevan
Hi, I am trying to set up ceph and I am using ceph-deploy, following the steps in the object store quick start guide. When I execute ceph-deploy gatherkeys it throws the following errors: Unable to find /etc/ceph/ceph.client.admin.keyring Unable to find /var/lib/ceph/bootstrap-osd/ceph.keyring Unable
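gatherkeys can only find those keyrings after the monitor(s) have been created and have formed quorum; a sketch of the usual ceph-deploy order (hostnames are placeholders):

   ceph-deploy new mon1
   ceph-deploy install mon1 osd1 osd2
   ceph-deploy mon create mon1
   # wait for the monitors to reach quorum, then:
   ceph-deploy gatherkeys mon1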