Re: [ceph-users] Problems during first install

2014-08-05 Thread Tijn Buijs
Hello Pratik, I'm using virtual disks as OSDs. I prefer virtual disks over directories because this resembles the production environment a bit better. I'm using VirtualBox for virtualisation. The OSDs are dynamic disks, not pre-allocated, but this shouldn't be a problem, right? I don't have the

Re: [ceph-users] librbd tuning?

2014-08-05 Thread Mark Kirkwood
On 05/08/14 03:52, Tregaron Bayly wrote: Does anyone have any insight on how we can tune librbd to perform closer to the level of the rbd kernel module? In our lab we have a four node cluster with 1GbE public network and 10GbE cluster network. A client node connects to the public network with 1

[ceph-users] Openstack Havana root fs resize doesn't work

2014-08-05 Thread Hauke Bruno Wollentin
Hi folks, we use Ceph Dumpling as the storage backend for Openstack Havana. However, our instances are not able to resize their root filesystems. This issue only occurs for the virtual root disk: if we start instances with an attached volume, the virtual volume disk's size is correct. Our infrastructur

Re: [ceph-users] Placement groups forever in "creating" state and don't map to OSD

2014-08-05 Thread Kapil Sharma
Ideally you should see all your OSD hosts in the #buckets section, and those hosts should contain their respective OSDs. But you mentioned you are using an old release of Ceph (0.56, is it?). I am not sure if the OSD crush map was different in that release. Regards, Kapil. On Mon, 2014-08-04 at
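
A quick way to verify this from the command line (generic commands, not quoted from the thread; they also exist on older releases, though output format varies):

    # The CRUSH hierarchy: hosts should show up as buckets containing
    # their OSDs, each with a weight and up/down status.
    ceph osd tree

    # Or decompile the full CRUSH map to inspect the buckets directly:
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt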

Re: [ceph-users] Problems during first install

2014-08-05 Thread Pratik Rupala
Hi Tijn, I created my first Ceph storage cluster in much the same way you have. I had 3 VMs for OSD nodes and 1 VM for the monitor node. Each of the 3 OSD VMs had one 10 GB virtual disk, so I faced almost the same problem you are facing right now. Then changing disk space from 10 GB to 20 GB
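
For anyone reproducing this fix, a VirtualBox dynamic disk can be grown in place (a sketch; the disk path is hypothetical, the size is in MB, and the partition/filesystem inside the guest still has to be grown afterwards):

    # Grow a dynamically allocated VDI from 10 GB to 20 GB (VM powered off).
    VBoxManage modifyhd /path/to/osd1-disk1.vdi --resize 20480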

Re: [ceph-users] librbd tuning?

2014-08-05 Thread Mark Nelson
On 08/04/2014 10:52 AM, Tregaron Bayly wrote: Does anyone have any insight on how we can tune librbd to perform closer to the level of the rbd kernel module? In our lab we have a four node cluster with 1GbE public network and 10GbE cluster network. A client node connects to the public network w

[ceph-users] osd disk location - comment field

2014-08-05 Thread Kenneth Waegeman
Hi, I'm trying to find out the location(mountpoint/device) of an osd through ceph dumps, but I don't seem to find a way. I can of course ssh to it and check the symlink under /var/lib/ceph/ceph-{osd_id}, but I would like to parse it out of ceph commands.. I can already find the host of an

Re: [ceph-users] librbd tuning?

2014-08-05 Thread Mark Nelson
On 08/05/2014 02:48 AM, Mark Kirkwood wrote: On 05/08/14 03:52, Tregaron Bayly wrote: Does anyone have any insight on how we can tune librbd to perform closer to the level of the rbd kernel module? In our lab we have a four node cluster with 1GbE public network and 10GbE cluster network. A cli
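
The usual starting point for this comparison is the librbd client-side cache (a sketch of common ceph.conf knobs from this era; the values are illustrative, not recommendations from this thread):

    [client]
    # librbd writeback cache; historically disabled by default.
    rbd cache = true
    rbd cache size = 67108864                # 64 MB total cache
    rbd cache max dirty = 50331648           # dirty bytes before writeback
    # Stay writethrough until the guest issues its first flush (safety).
    rbd cache writethrough until flush = true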

Re: [ceph-users] Concurrent database with or on top of librados

2014-08-05 Thread Gergely Horváth
Thanks Wido for your comments. On 2014-07-30 15:39, Wido den Hollander wrote: > Nothing specific, librados should give you what you need. Ceph is very > concurrent, so if you write to different objects at the same time those > I/Os will go in parallel. What if I try to write the same object? Two

Re: [ceph-users] Ceph writes stall for long periods with no disk/network activity

2014-08-05 Thread Mariusz Gronczewski
On Mon, 04 Aug 2014 15:32:50 -0500, Mark Nelson wrote: > On 08/04/2014 03:28 PM, Chris Kitzmiller wrote: > > On Aug 1, 2014, at 1:31 PM, Mariusz Gronczewski wrote: > >> I got weird stalling during writes; sometimes I got the same write speed > >> for a few minutes and after some time it starts stalling

Re: [ceph-users] v0.83 released

2014-08-05 Thread Sage Weil
On Tue, 5 Aug 2014, debian Only wrote: > Good news. When will this release be public in the Debian Wheezy pkg list? Thanks > for your good job. I think that, in general, the strategy should be to keep the stable packages (firefly 0.80.x currently) in the stable downstream repos (wheezy). These developmen
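
For reference, the split Sage describes maps to two package sources on a Wheezy box (a sketch; these were the repo layouts of the time):

    # /etc/apt/sources.list.d/ceph.list
    # Stable (firefly 0.80.x):
    deb http://ceph.com/debian-firefly/ wheezy main
    # Development releases such as 0.83 lived in the testing repo instead:
    # deb http://ceph.com/debian-testing/ wheezy main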

Re: [ceph-users] osd disk location - comment field

2014-08-05 Thread Sage Weil
On Tue, 5 Aug 2014, Kenneth Waegeman wrote: > I'm trying to find out the location(mountpoint/device) of an osd through ceph > dumps, but I don't seem to find a way. I can of course ssh to it and check the > symlink under /var/lib/ceph/ceph-{osd_id}, but I would like to parse it out of > ceph comman
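
One monitor-side answer (hedged: "ceph osd metadata" existed around this release, but the exact field names may differ by version) is to dump an OSD's metadata as JSON and parse out the data path:

    # JSON metadata for osd.2: hostname, data path, journal, and so on.
    ceph osd metadata 2

    # For scripting, extract a single field, e.g. the data directory:
    ceph osd metadata 2 | python -c \
        'import json,sys; print(json.load(sys.stdin)["osd_data"])'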

Re: [ceph-users] v0.83 released

2014-08-05 Thread Sage Weil
Oops, fixing the ceph-maintainers address. On Tue, 5 Aug 2014, Sage Weil wrote: > On Tue, 5 Aug 2014, debian Only wrote: > > Good news. When will this release be public in the Debian Wheezy pkg list? Thanks > > for your good job > > I think that, in general, the strategy should be to keep the stable > packa

Re: [ceph-users] Erroneous stats output (ceph df) after increasing PG number

2014-08-05 Thread Konstantinos Tompoulidis
Konstantinos Tompoulidis writes: > > Sage Weil ...> writes: > > > > > On Mon, 4 Aug 2014, Konstantinos Tompoulidis wrote: > > > Hi all, > > > > > > We recently added many OSDs to our production cluster. > > > This brought us to a point where the number of PGs we had assigned to our > > > m

Re: [ceph-users] Concurrent database with or on top of librados

2014-08-05 Thread Wido den Hollander
On 08/05/2014 02:48 PM, Gergely Horváth wrote: Thanks Wido for your comments. On 2014-07-30 15:39, Wido den Hollander wrote: Nothing specific, librados should give you what you need. Ceph is very concurrent, so if you write to different objects at the same time those I/Os will go in parallel.
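
The same-object case can be seen from the shell (a toy sketch; the pool and object names are made up): RADOS serializes writes to a single object at its primary OSD, so concurrent full-object writes land in some order rather than interleaving.

    echo AAAA > a.dat; echo BBBB > b.dat
    # Two clients racing to write the same object:
    rados -p testpool put shared-obj a.dat &
    rados -p testpool put shared-obj b.dat &
    wait
    # The object is now exactly one of the two payloads, never a mix:
    rados -p testpool get shared-obj -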

Re: [ceph-users] Erroneous stats output (ceph df) after increasing PG number

2014-08-05 Thread Sage Weil
On Mon, 4 Aug 2014, Konstantinos Tompoulidis wrote: > Sage Weil writes: > > > > > On Mon, 4 Aug 2014, Konstantinos Tompoulidis wrote: > > > Hi all, > > > > > > We recently added many OSDs to our production cluster. > > > This brought us to a point where the number of PGs we had assigned to our

Re: [ceph-users] Erroneous stats output (ceph df) after increasing PG number

2014-08-05 Thread Sage Weil
On Tue, 5 Aug 2014, Konstantinos Tompoulidis wrote: > We decided to perform a scrub and see the impact now that we have 4x PGs. > It seems that now that the PGs are "smaller", the impact is not that high. > We kept osd-max-scrubs to 1 which is the default setting. > Indeed the output of "ceph df"
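
For completeness, the two-step increase the thread is discussing looks like this (standard commands; the pool name and counts are illustrative):

    # Split the PGs first (creates the new, smaller PGs):
    ceph osd pool set mypool pg_num 4096
    # Then raise pgp_num so data actually rebalances onto them:
    ceph osd pool set mypool pgp_num 4096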

[ceph-users] Vote for Ceph Talks at OpenStack Paris

2014-08-05 Thread Patrick McGarry
Hey cephers, Just wanted to let you know there are a ton of great submissions to the upcoming OpenStack Summit in Paris, and many of them are about Ceph. To make it easy to find the good stuff I have aggregated the Ceph talks for easy perusal and voting. Please head over to the OpenStack site an

Re: [ceph-users] Ceph writes stall for long periods with no disk/network activity

2014-08-05 Thread Mark Nelson
On 08/05/2014 08:42 AM, Mariusz Gronczewski wrote: On Mon, 04 Aug 2014 15:32:50 -0500, Mark Nelson wrote: On 08/04/2014 03:28 PM, Chris Kitzmiller wrote: On Aug 1, 2014, at 1:31 PM, Mariusz Gronczewski wrote: I got weird stalling during writes, sometimes I got same write speed for few minute
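
When chasing stalls like these, the OSD admin socket is the usual first stop (generic commands, assuming default socket locations; run on the node hosting the OSD):

    # The slowest recent ops on osd.0, with per-phase timestamps showing
    # where a stalled write spent its time:
    ceph daemon osd.0 dump_historic_ops
    # Ops currently in flight:
    ceph daemon osd.0 dump_ops_in_flight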

[ceph-users] Install Ceph nodes without network proxy access

2014-08-05 Thread O'Reilly, Dan
I'm trying to bring up a Ceph cluster. In our environment, access to outside repositories is VERY restricted. This causes some BIG problems, namely, going out to the ceph repositories. I opened up a proxy to allow this, but still have problems: [ceph@tm1cldcphal01 wombat-cluster]$ ceph-dep

Re: [ceph-users] Install Ceph nodes without network proxy access

2014-08-05 Thread O'Reilly, Dan
Outstanding! I think that’s just what I need. Thanks very much for the quick response. From: Alfredo Deza [mailto:alfredo.d...@inktank.com] Sent: Tuesday, August 05, 2014 1:28 PM To: O'Reilly, Dan Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Install Ceph nodes without network proxy a

Re: [ceph-users] Install Ceph nodes without network proxy access

2014-08-05 Thread Alfredo Deza
On Tue, Aug 5, 2014 at 3:21 PM, O'Reilly, Dan wrote: > I’m trying to bring up a Ceph cluster. In our environment, access to > outside repositories is VERY restricted. This causes some BIG problems, > namely, going out to the ceph repositories. I opened up a proxy to allow > this, but still h
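
A common workaround in locked-down environments (generic advice, not necessarily what was suggested in this reply) is to point the package tooling on each target node at the proxy:

    # Proxy host/port are placeholders:
    export http_proxy=http://proxy.example.com:3128
    export https_proxy=http://proxy.example.com:3128
    # yum can also be pointed at it persistently in /etc/yum.conf:
    #   proxy=http://proxy.example.com:3128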

Re: [ceph-users] Install Ceph nodes without network proxy access

2014-08-05 Thread O'Reilly, Dan
Unfortunately, it still fails: [tm1cldmonl01][WARNIN] This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. [tm1cldmonl01][INFO ] Running command: sudo rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc [tm1cldm

Re: [ceph-users] Install Ceph nodes without network proxy access

2014-08-05 Thread Alfredo Deza
On Tue, Aug 5, 2014 at 3:59 PM, O'Reilly, Dan wrote: > Unfortunately, it still fails: > > > > tm1cldmonl01][WARNIN] This system is not registered to Red Hat > Subscription Management. You can use subscription-manager to register. > > [tm1cldmonl01][INFO ] Running command: sudo rpm --import > htt

Re: [ceph-users] OSD daemon code in /var/lib/ceph/osd/ceph-2/ "disappears" after creating pool/rbd -

2014-08-05 Thread Craig Lewis
You can manually mount it and start the daemon, run ceph-disk-activate, or just reboot the node. A reboot is the easiest. Most setups use udev rules to mount the disks on boot, instead of writing to /etc/fstab. If you want the details of how that works, take a look at /lib/udev/rules.d/95-ceph-
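
The manual route looks roughly like this (a sketch assuming the standard ceph-disk layout; the device name is hypothetical and the sysvinit syntax matches this era):

    # Mount the OSD's data partition where the daemon expects it:
    mount /dev/sdb1 /var/lib/ceph/osd/ceph-2
    service ceph start osd.2

    # Or let ceph-disk mount and start it in one step:
    ceph-disk activate /dev/sdb1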

Re: [ceph-users] Install Ceph nodes without network proxy access

2014-08-05 Thread O'Reilly, Dan
Nope, still does it. This absolutely SUCKS! It may be a bug, but if I can’t get this working soon, Ceph will be out the door. Where exactly is the code that tries to do the git? From: Alfredo Deza [mailto:alfredo.d...@inktank.com] Sent: Tuesday, August 05, 2014 2:05 PM To: O'Reilly, Dan Cc: c

Re: [ceph-users] Install Ceph nodes without network proxy access

2014-08-05 Thread Alfredo Deza
On Tue, Aug 5, 2014 at 4:34 PM, O'Reilly, Dan wrote: > Nope, still does it. This absolutely SUCKS! > I am sorry to hear that. Would you mind pasting the complete output of ceph-deploy from the start? The `gpgkey` *should* tell ceph-deploy it needs to grab something different. > It may be a

Re: [ceph-users] Install Ceph nodes without network proxy access

2014-08-05 Thread O'Reilly, Dan
OK, I’m getting farther. I changed the ceph-deploy command line to: ceph-deploy install --no-adjust-repos tm1cldmonl01 That keeps me from needing to grab keys. But now I’m getting: [tm1cldmonl01][WARNIN] Public key for leveldb-1.7.0-2.el6.x86_64.rpm is not installed The repo definitions
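
With --no-adjust-repos, each node needs the repo and its GPG key defined by hand, e.g. against an internal mirror (a sketch; the URLs are placeholders):

    # /etc/yum.repos.d/ceph.repo
    [ceph]
    name=Ceph packages
    baseurl=http://mirror.example.com/ceph/rpm-firefly/el6/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=http://mirror.example.com/keys/release.asc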

[ceph-users] what are these files for mon?

2014-08-05 Thread Jimmy Lu
Hello, I’ve been testing cephfs with 1 monitor. My /var partition keeps filling up, so the mon process just dies because of insufficient space. I drilled down into the /var partition; the mon path below is taking most of the space, with *.sst files. I'm just curious what these files are and can th
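
Those *.sst files are the monitor's leveldb store, and it can be compacted rather than deleted (options from this era, hedged):

    # One-off compaction of a running monitor's store:
    ceph tell mon.a compact

    # Or compact automatically at every monitor start (ceph.conf):
    [mon]
    mon compact on start = true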

Re: [ceph-users] Openstack Havana root fs resize doesn't work

2014-08-05 Thread Dinu Vlad
There’s a known issue with Havana’s rbd driver in nova and it has nothing to do with ceph. Unfortunately, it is only fixed in icehouse. See https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1219658 for more details. I can confirm that applying the patch manually works. On 05 Aug 2014, at 1

Re: [ceph-users] Openstack Havana root fs resize doesn't work

2014-08-05 Thread Jeremy Hanmer
This is *not* a case of that bug. That LP bug is referring to an issue with the 'nova resize' command and *not* with an instance resizing its own root filesystem. I can confirm that the latter case works perfectly fine in Havana if you have things configured properly. A few questions: 1) What w
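
On the guest side, growing the root filesystem at boot is normally cloud-init's job, which is one of the configuration points being asked about here (a sketch of the relevant knobs, assuming a cloud-init-enabled image):

    # /etc/cloud/cloud.cfg inside the guest image:
    growpart:
      mode: auto
      devices: ['/']
    resize_rootfs: true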

Re: [ceph-users] librbd tuning?

2014-08-05 Thread Mark Kirkwood
On 05/08/14 23:44, Mark Nelson wrote: On 08/05/2014 02:48 AM, Mark Kirkwood wrote: On 05/08/14 03:52, Tregaron Bayly wrote: Does anyone have any insight on how we can tune librbd to perform closer to the level of the rbd kernel module? In our lab we have a four node cluster with 1GbE public ne

Re: [ceph-users] [Ceph-community] Remote replication

2014-08-05 Thread Craig Lewis
That depends on which features of Ceph you're using. RadosGW supports replication. It's not real time, but it's near real time. Everything in my primary cluster is copied to my secondary within a few minutes. Take a look at http://ceph.com/docs/master/radosgw/federated-config/ . The details o
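
In this era the copying was driven by radosgw-agent against a sync config roughly like the following (keys and endpoints are placeholders; check the field names against the federated-config docs linked above):

    # region-data-sync.conf
    src_access_key: SRC_ACCESS
    src_secret_key: SRC_SECRET
    destination: http://rgw-secondary.example.com:80
    dest_access_key: DST_ACCESS
    dest_secret_key: DST_SECRET
    log_file: /var/log/radosgw/radosgw-sync.log

    # and then: radosgw-agent -c region-data-sync.conf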

Re: [ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-05 Thread Craig Lewis
You really do want power-loss protection on your journal SSDs. Data centers do have power outages, even with all the redundant grid connections, UPSes, and diesel generators. Losing an SSD will lose all of the OSDs that are using it as a journal. If the data center loses power, you're probabl
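
Separately from power-loss protection, a common way to vet a candidate journal SSD (a generic test, not from this mail) is to measure small synchronous writes, since the OSD journal writes with O_DSYNC:

    # WARNING: writes to the raw device and destroys data; use a scratch disk.
    # /dev/sdX is a placeholder.
    dd if=/dev/zero of=/dev/sdX bs=4k count=10000 oflag=direct,dsync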

Re: [ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-05 Thread Mark Kirkwood
It claims to have power loss protection, and reviews appear to back this up (http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review). I can't see a capacitor on the board... so I'm not sure of the mechanism Micron are using on these guys. The thing that requires attention would b

Re: [ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-05 Thread Mark Kirkwood
A better picture here (http://img1.lesnumeriques.com/test/90/9096/crucial_mx100_512gb_pcb_hq.jpg). A row of small caps is clearly visible on the right of the left-hand image... On 06/08/14 12:40, Mark Kirkwood wrote: It claims to have power loss protection, I can't see a capacitor on the board... so

Re: [ceph-users] Install Ceph nodes without network proxy access

2014-08-05 Thread O'Reilly, Dan
Final update: after a good deal of messing about, I did finally get this to work. Many thanks for the help. From: ceph-users [ceph-users-boun...@lists.ceph.com] On Behalf Of O'Reilly, Dan [daniel.orei...@dish.com] Sent: Tuesday, August 05, 2014 3:04 P