On Sat, Aug 17, 2013 at 6:25 AM, Øystein Lønning Nerhus wrote:
> Hi,
>
> This seems like a bug.
>
> # ceph df
> NAME        ID   USED   %USED   OBJECTS
> .rgw.root   26   778    0       3
> .rgw        27   1118   0       8
> .rgw.gc     28   0      0       32
On 08/18/2013 08:53 AM, Sage Weil wrote:
Yep!
It's working without any change in the udev rules files ;)
On 08/18/2013 08:53 AM, Sage Weil wrote:
Yep! What distro is this?
I'm working on Gentoo packaging to get a full stack of Ceph and OpenStack.
Overlay here:
git clone https://git.isi.nc/cloud/cloud-overlay.git
And a small fork of ceph-deploy to add gentoo support:
git clone https://git.isi.nc
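For anyone trying such an overlay, a minimal sketch of wiring it into Portage via repos.conf (the overlay name, checkout path, and a Portage new enough to support repos.conf are my assumptions, not from the mail; the overlay URL is the one given above):

# /etc/portage/repos.conf/cloud-overlay.conf -- hypothetical entry
[cloud-overlay]
location = /var/db/repos/cloud-overlay
sync-type = git
sync-uri = https://git.isi.nc/cloud/cloud-overlay.git

After an `emaint sync -r cloud-overlay`, the overlay's ebuilds become visible to emerge.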
On Sun, 18 Aug 2013, Mikaël Cluseau wrote:
> On 08/18/2013 08:44 AM, Mikaël Cluseau wrote:
> > On 08/18/2013 08:39 AM, Mikaël Cluseau wrote:
> > >
> > > # ceph-disk -v activate-all
> > > DEBUG:ceph-disk-python2.7:Scanning /dev/disk/by-parttypeuuid
> >
> > Maybe /dev/disk/by-parttypeuuid is specific?
On Sun, 18 Aug 2013, Mikaël Cluseau wrote:
> On 08/18/2013 08:35 AM, Sage Weil wrote:
> > The ceph-disk activate-all command is looking for partitions that are
> > marked with the ceph type uuid. Maybe the journals are missing? What
> > does
> >
> > ceph-disk -v activate /dev/sdc1
> >
> > say?
On 08/18/2013 08:44 AM, Mikaël Cluseau wrote:
On 08/18/2013 08:39 AM, Mikaël Cluseau wrote:
# ceph-disk -v activate-all
DEBUG:ceph-disk-python2.7:Scanning /dev/disk/by-parttypeuuid
Maybe /dev/disk/by-parttypeuuid is specific?
# ls -l /dev/disk
total 0
drwxr-xr-x 2 root root 1220 Aug 18 07:01 by-id
Hey Mark,
On za, 2013-08-17 at 08:16 -0500, Mark Nelson wrote:
> On 08/17/2013 06:13 AM, Oliver Daudey wrote:
> > Hey all,
> >
> > This is a copy of Bug #6040 (http://tracker.ceph.com/issues/6040) I
> > created in the tracker. Thought I would pass it through the list as
> > well, to get an idea if anyone else is running into it.
On 08/18/2013 08:39 AM, Mikaël Cluseau wrote:
# ceph-disk -v activate-all
DEBUG:ceph-disk-python2.7:Scanning /dev/disk/by-parttypeuuid
Maybe /dev/disk/by-parttypeuuid is specific?
# ls -l /dev/disk
total 0
drwxr-xr-x 2 root root 1220 Aug 18 07:01 by-id
drwxr-xr-x 2 root root 60 Aug 18 07:0
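For context: ceph-disk matches partitions by their GPT partition-type GUID, which is what the by-parttypeuuid symlinks encode. A minimal sketch for checking that GUID by hand with sgdisk (the device and partition number are just examples from the thread):

# print the partition-type GUID of partition 1 on /dev/sdc
sgdisk --info=1 /dev/sdc | grep 'Partition GUID code'
# a ceph OSD data partition should report
#   4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D
# and a ceph journal partition
#   45B0969E-9B03-4F30-B4C6-B4B80CEFF106

If a partition carries neither GUID, udev won't create the by-parttypeuuid link and activate-all won't find it.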
On 08/18/2013 08:35 AM, Sage Weil wrote:
The ceph-disk activate-all command is looking for partitions that are
marked with the ceph type uuid. Maybe the journals are missing? What
does
ceph-disk -v activate /dev/sdc1
say? Or
ceph-disk -v activate-all
Where does the 'journal' symlink in the OSD data directory point to?
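A minimal way to answer that question, assuming the default cluster name and osd.0 (adjust the path for your OSD id):

# inspect where the journal symlink points
ls -l /var/lib/ceph/osd/ceph-0/journal
# resolve it to the actual device node
readlink -f /var/lib/ceph/osd/ceph-0/journal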
Hi everyone,
We're trying to get to the bottom of the problems people have been having
with ceph-deploy mon create .. and ceph-deploy gatherkeys. There seem to
be a series of common pitfalls that are causing these problems. So far
we've been chasing them in emails on this list and in various
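For context, the sequence these reports revolve around looks roughly like this (hostnames are placeholders); one frequently reported pitfall is a mismatch between the names given here and the machines' actual `hostname -s` output:

# define the initial monitor(s) and generate ceph.conf plus initial keys
ceph-deploy new mon1
# create and start the monitor daemon(s)
ceph-deploy mon create mon1
# once the monitors have formed a quorum, fetch the bootstrap keys
ceph-deploy gatherkeys mon1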
On Sun, 18 Aug 2013, Mikaël Cluseau wrote:
> Hi,
>
> I'm having trouble with ceph_init (after a test reboot):
>
> # ceph_init restart osd
> # ceph_init restart osd.0
> /usr/lib/ceph/ceph_init.sh: osd.0 not found (/etc/ceph/ceph.conf defines
> mon.xxx , /var/lib/ceph defines mon.xxx)
> 1 # ceph-disk list
> [...]
On Sat, 17 Aug 2013, Jeff Moskow wrote:
> Hi,
>
> When we rebuilt our ceph cluster, we opted to make our rbd storage
> replication level 3 rather than the previously configured replication
> level 2.
>
> Things are MUCH slower (5 nodes, 13 OSDs) than before, even though
> most of our I/O is read. Is this to be expected?
Hi,
I'm having trouble with ceph_init (after a test reboot):
# ceph_init restart osd
# ceph_init restart osd.0
/usr/lib/ceph/ceph_init.sh: osd.0 not found (/etc/ceph/ceph.conf defines
mon.xxx , /var/lib/ceph defines mon.xxx)
1 # ceph-disk list
[...]
/dev/sdc :
/dev/sdc1 ceph data, prepared, cluster ceph
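"prepared" (rather than "active") means the partition was never activated and mounted under /var/lib/ceph/osd, which is consistent with ceph_init finding only the monitor there. A minimal recovery sketch, using the device from the listing above:

# activate the prepared partition by hand
ceph-disk -v activate /dev/sdc1
# confirm the OSD data directory is now mounted
mount | grep /var/lib/ceph/osd
# and that the OSD has joined the cluster
ceph osd tree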
Hi,
When we rebuilt our ceph cluster, we opted to make our rbd storage
replication level 3 rather than the previously
configured replication level 2.
Things are MUCH slower (5 nodes, 13 OSDs) than before, even though most
of our I/O is read. Is this to be expected?
What are th
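Some slowdown on writes is expected: with size 3, a client write is acknowledged only after a third replica is written, so write latency and backend traffic both grow. Reads are served from the primary OSD and should be less affected. A sketch for inspecting and changing the replica count (the pool name 'rbd' is taken from the mail):

# show the current replica counts for the rbd pool
ceph osd pool get rbd size
ceph osd pool get rbd min_size
# the replica count can be changed at runtime; this triggers re-replication
ceph osd pool set rbd size 3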
[moving to ceph-devel]
On Fri, 16 Aug 2013, Jun Jun8 Liu wrote:
>
> Hi all,
>
> I am doing some research about the MDS.
>
> There are so many lock types and states, but I haven't found any
> documentation describing them.
>
> Is there anybody who can tell me wha
This is a bug fix release for Dumpling that resolves a problem with the
librbd python bindings (triggered by OpenStack) and a hang in librbd when
caching is disabled. OpenStack users are encouraged to upgrade. No other
serious bugs have come up since v0.67 came out earlier this week.
Notable
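The caching mentioned here is the librbd client-side cache, toggled in ceph.conf; a minimal illustrative excerpt (the setting name is real, the placement under [client] is the usual convention, not from the announcement):

# client-side ceph.conf excerpt enabling the librbd cache
[client]
rbd cache = true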
Hi Dominic,
A bug causing excessive memory consumption during scrub was fixed a
couple of months back. You can upgrade to the latest 'bobtail' branch.
See
http://ceph.com/docs/master/install/debian/#development-testing-packages
Installing that package should clear this up.
sage
O
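After installing the updated package, it's worth confirming the daemons actually restarted on the new version; a quick check, assuming an osd.0 exists:

# version of the locally installed binaries
ceph --version
# version reported by a running daemon
ceph tell osd.0 version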
Hi,
This seems like a bug.
# ceph df
NAME        ID   USED   %USED   OBJECTS
.rgw.root   26   778    0       3
.rgw        27   1118   0       8
.rgw.gc     28   0      0       32
30
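When ceph df output looks suspect, the rados layer and the OSD map offer a second view of the same per-pool counters; a simple cross-check:

# per-pool object and usage stats from the rados layer
rados df
# pool ids and names, to match against the ceph df listing
ceph osd dump | grep ^pool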
On 08/17/2013 06:13 AM, Oliver Daudey wrote:
Hey all,
This is a copy of Bug #6040 (http://tracker.ceph.com/issues/6040) I
created in the tracker. Thought I would pass it through the list as
well, to get an idea if anyone else is running into it. It may only
show under higher loads. More info about my setup is in the bug-report above.
Hey all,
This is a copy of Bug #6040 (http://tracker.ceph.com/issues/6040) I
created in the tracker. Thought I would pass it through the list as
well, to get an idea if anyone else is running into it. It may only
show under higher loads. More info about my setup is in the bug-report
above. Her