Re: [ceph-users] radosgw creating pool with empty name after upgrade from 0.61.7 cuttlefish to 0.67 dumpling

2013-08-17 Thread Yehuda Sadeh
On Sat, Aug 17, 2013 at 6:25 AM, Øystein Lønning Nerhus wrote: > Hi, > > This seems like a bug. > > # ceph df > NAME ID USED %USED OBJECTS > .rgw.root 26 778 0 3 > .rgw 27 1118 0 8 > .r

Re: [ceph-users] ceph 0.67, 0.67.1: ceph_init bug

2013-08-17 Thread Mikaël Cluseau
On 08/18/2013 08:53 AM, Sage Weil wrote: Yep! It's working without any change in the udev rules files ;)

Re: [ceph-users] ceph 0.67, 0.67.1: ceph_init bug

2013-08-17 Thread Mikaël Cluseau
On 08/18/2013 08:53 AM, Sage Weil wrote: Yep! What distro is this? I'm working on Gentoo packaging to get a full stack of ceph and openstack. Overlay here: git clone https://git.isi.nc/cloud/cloud-overlay.git And a small fork of ceph-deploy to add gentoo support: git clone https://git.isi.nc
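For reference, a rough sketch of pulling in the overlay mentioned above on a Gentoo box; the checkout path and the PORTDIR_OVERLAY entry in /etc/portage/make.conf are assumptions, not part of the original post:

  # clone the overlay locally (URL from the post above)
  git clone https://git.isi.nc/cloud/cloud-overlay.git /usr/local/overlay/cloud-overlay
  # make Portage aware of it (the path is an example)
  echo 'PORTDIR_OVERLAY="${PORTDIR_OVERLAY} /usr/local/overlay/cloud-overlay"' >> /etc/portage/make.conf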

Re: [ceph-users] ceph 0.67, 0.67.1: ceph_init bug

2013-08-17 Thread Sage Weil
On Sun, 18 Aug 2013, Mikaël Cluseau wrote: > On 08/18/2013 08:44 AM, Mikaël Cluseau wrote: > > On 08/18/2013 08:39 AM, Mikaël Cluseau wrote: > > > > > > # ceph-disk -v activate-all > > > DEBUG:ceph-disk-python2.7:Scanning /dev/disk/by-parttypeuuid > > > > Maybe /dev/disk/by-parttypeuuid is speci

Re: [ceph-users] ceph 0.67, 0.67.1: ceph_init bug

2013-08-17 Thread Sage Weil
On Sun, 18 Aug 2013, Mikaël Cluseau wrote: > On 08/18/2013 08:35 AM, Sage Weil wrote: > > The ceph-disk activate-all command is looking for partitions that are > > marked with the ceph type uuid. Maybe the journals are missing? What > > does > > > > ceph-disk -v activate /dev/sdc1 > > > > say

Re: [ceph-users] ceph 0.67, 0.67.1: ceph_init bug

2013-08-17 Thread Mikaël Cluseau
On 08/18/2013 08:44 AM, Mikaël Cluseau wrote: On 08/18/2013 08:39 AM, Mikaël Cluseau wrote: # ceph-disk -v activate-all DEBUG:ceph-disk-python2.7:Scanning /dev/disk/by-parttypeuuid Maybe /dev/disk/by-parttypeuuid is specific? # ls -l /dev/disk total 0 drwxr-xr-x 2 root root 1220 Aug 18 07:0
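For readers following along, a minimal way to check whether the GPT type GUIDs and the udev-generated symlinks that ceph-disk scans for are present; the device and partition number are taken from the thread and may differ elsewhere:

  # show the partition type GUID that ceph-disk matches against
  sgdisk --info=1 /dev/sdc
  # see whether udev created the by-parttypeuuid links at all
  ls -l /dev/disk/by-parttypeuuid/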

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-17 Thread Oliver Daudey
Hey Mark, On za, 2013-08-17 at 08:16 -0500, Mark Nelson wrote: > On 08/17/2013 06:13 AM, Oliver Daudey wrote: > > Hey all, > > > > This is a copy of Bug #6040 (http://tracker.ceph.com/issues/6040) I > > created in the tracker. Thought I would pass it through the list as > > well, to get an idea i

Re: [ceph-users] ceph 0.67, 0.67.1: ceph_init bug

2013-08-17 Thread Mikaël Cluseau
On 08/18/2013 08:39 AM, Mikaël Cluseau wrote: # ceph-disk -v activate-all DEBUG:ceph-disk-python2.7:Scanning /dev/disk/by-parttypeuuid Maybe /dev/disk/by-parttypeuuid is specific? # ls -l /dev/disk total 0 drwxr-xr-x 2 root root 1220 Aug 18 07:01 by-id drwxr-xr-x 2 root root 60 Aug 18 07:0

Re: [ceph-users] ceph 0.67, 0.67.1: ceph_init bug

2013-08-17 Thread Mikaël Cluseau
On 08/18/2013 08:35 AM, Sage Weil wrote: The ceph-disk activate-all command is looking for partitions that are marked with the ceph type uuid. Maybe the journals are missing? What does ceph-disk -v activate /dev/sdc1 say? Or ceph-disk -v activate-all Where does the 'journal' symlink in
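A small sketch of answering the question about the 'journal' symlink by mounting the prepared data partition and inspecting it; the mount point is a placeholder and the device name comes from the thread:

  mkdir -p /mnt/osd-probe
  mount /dev/sdc1 /mnt/osd-probe
  # the journal should be a symlink pointing at an existing partition
  ls -l /mnt/osd-probe/journal
  readlink -f /mnt/osd-probe/journal
  umount /mnt/osd-probe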

[ceph-users] ceph-deploy mon create / gatherkeys problems

2013-08-17 Thread Sage Weil
Hi everyone, We're trying to get to the bottom of the problems people have been having with ceph-deploy mon create .. and ceph-deploy gatherkeys. There seem to be a series of common pitfalls that are causing these problems. So far we've been chasing them in emails on this list and in various
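For context, the ceph-deploy sequence this thread is about looks roughly like the following; the hostnames are placeholders, and gatherkeys only succeeds once the monitors have formed a quorum:

  ceph-deploy new mon1 mon2 mon3
  ceph-deploy install mon1 mon2 mon3
  ceph-deploy mon create mon1 mon2 mon3
  ceph-deploy gatherkeys mon1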

Re: [ceph-users] ceph 0.67, 0.67.1: ceph_init bug

2013-08-17 Thread Sage Weil
On Sun, 18 Aug 2013, Mikaël Cluseau wrote: > On 08/18/2013 08:35 AM, Sage Weil wrote: > > The ceph-disk activate-all command is looking for partitions that are > > marked with the ceph type uuid. Maybe the jouranls are missing? What > > does > > > > ceph-disk -v activate /dev/sdc1 > > > > say

Re: [ceph-users] performance questions

2013-08-17 Thread Sage Weil
On Sat, 17 Aug 2013, Jeff Moskow wrote: > Hi, > > When we rebuilt our ceph cluster, we opted to make our rbd storage > replication level 3 rather than the previously configured replication > level 2. > > Things are MUCH slower (5 nodes, 13 osd's) than before even though > most of o

[ceph-users] ceph 0.67, 0.67.1: ceph_init bug

2013-08-17 Thread Mikaël Cluseau
Hi, troubles with ceph_init (after a test reboot) # ceph_init restart osd # ceph_init restart osd.0 /usr/lib/ceph/ceph_init.sh: osd.0 not found (/etc/ceph/ceph.conf defines mon.xxx , /var/lib/ceph defines mon.xxx) 1 # ceph-disk list [...] /dev/sdc : /dev/sdc1 ceph data, prepared, cluster ceph
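A quick sketch of narrowing down why ceph_init cannot find osd.0: compare what the config and /var/lib/ceph define against the prepared-but-not-activated disks reported by ceph-disk. These commands are illustrative, not from the original post:

  # what daemons does this node actually define?
  grep -E '^\[(osd|mon)' /etc/ceph/ceph.conf
  ls /var/lib/ceph/osd/ /var/lib/ceph/mon/
  # a disk that is only 'prepared' will not show up until it is activated
  ceph-disk -v activate-all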

[ceph-users] performance questions

2013-08-17 Thread Jeff Moskow
Hi, When we rebuilt our ceph cluster, we opted to make our rbd storage replication level 3 rather than the previously configured replication level 2. Things are MUCH slower (5 nodes, 13 osd's) than before even though most of our I/O is read. Is this to be expected? What are th
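As background, a minimal sketch of checking (and, if acceptable, changing) the replication level of a pool; the pool name 'rbd' is an example, and a size of 3 means every write must be committed on three OSDs before it is acknowledged:

  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  # only if dropping back to two replicas is acceptable:
  ceph osd pool set rbd size 2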

Re: [ceph-users] Mds lock

2013-08-17 Thread Sage Weil
[moving to ceph-devel] On Fri, 16 Aug 2013, Jun Jun8 Liu wrote: > > Hi all, > > I am doing some research about the mds. > > > > There are so many lock types and states, but I haven't found any document describing them. > > > > Can anybody tell me wha

[ceph-users] v0.67.1 Dumpling released

2013-08-17 Thread Sage Weil
This is a bug fix release for Dumpling that resolves a problem with the librbd python bindings (triggered by OpenStack) and a hang in librbd when caching is disabled. OpenStack users are encouraged to upgrade. No other serious bugs have come up since v0.67 came out earlier this week. Notable
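On Debian/Ubuntu with the ceph.com repository already configured, the upgrade amounts to roughly the following; the package list is an assumption based on the standard packaging, and daemons should be restarted one node at a time:

  apt-get update
  apt-get install ceph ceph-common librados2 librbd1 python-ceph
  service ceph restart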

Re: [ceph-users] large memory leak on scrubbing

2013-08-17 Thread Sage Weil
Hi Dominic, There is a bug fixed a couple of months back that fixes excessive memory consumption during scrub. You can upgrade to the latest 'bobtail' branch. See http://ceph.com/docs/master/install/debian/#development-testing-packages Installing that package should clear this up. sage O
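To verify whether the fix helps, one way to watch OSD memory while a scrub runs; the OSD id is an example:

  # kick off a scrub on one OSD and watch the daemon's resident memory
  ceph osd scrub 0
  watch -n 5 'ps -o pid,rss,vsz,cmd -C ceph-osd'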

[ceph-users] radosgw creating pool with empty name after upgrade from 0.61.7 cuttlefish to 0.67 dumpling

2013-08-17 Thread Øystein Lønning Nerhus
Hi, This seems like a bug. # ceph df NAME ID USED %USED OBJECTS .rgw.root 26 778 0 3 .rgw 27 1118 0 8 .rgw.gc 28 0 0 32 30
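A small sketch of spotting the pool with the empty name that the subject refers to; in 'ceph osd dump' output an empty pool name shows up as two consecutive single quotes:

  rados lspools
  ceph osd dump | grep '^pool'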

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-17 Thread Mark Nelson
On 08/17/2013 06:13 AM, Oliver Daudey wrote: Hey all, This is a copy of Bug #6040 (http://tracker.ceph.com/issues/6040) I created in the tracker. Thought I would pass it through the list as well, to get an idea if anyone else is running into it. It may only show under higher loads. More info

[ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-17 Thread Oliver Daudey
Hey all, This is a copy of Bug #6040 (http://tracker.ceph.com/issues/6040) I created in the tracker. Thought I would pass it through the list as well, to get an idea if anyone else is running into it. It may only show under higher loads. More info about my setup is in the bug-report above. Her
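For anyone comparing 0.61 and 0.67 OSD behaviour, per-daemon perf counters can be pulled from the admin socket; the socket path and OSD id below are examples:

  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
  # the op_latency / op_r_latency / op_w_latency counters are the ones worth comparing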