Many thanks. I did, and resolved it by:
#ceph osd getcrushmap -o /tmp/crush
#crushtool -i /tmp/crush --enable-unsafe-tunables \
    --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 \
    --set-choose-total-tries 50 -o /tmp/crush.new
root@ceph-admin:/etc/ceph# ceph osd setcrushmap -i /tmp/crush.
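(For reference, one way to confirm the tunables actually made it into the new
map is to decompile it; a quick sketch using the files above:)
# decompile the modified map and check the tunable lines at the top
crushtool -d /tmp/crush.new -o /tmp/crush.new.txt
head /tmp/crush.new.txt
# should include lines like:
#   tunable choose_local_tries 0
#   tunable choose_local_fallback_tries 0
#   tunable choose_total_tries 50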
That cluster was not deployed by ceph-deploy; ceph-deploy has never put
entries for the daemons into ceph.conf.
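(For comparison, a ceph.conf written by ceph-deploy around that time carried
only cluster-wide settings, roughly like this sketch; the fsid, hostname and
address are placeholders, not values from this thread:)
[global]
fsid = <cluster uuid>
mon_initial_members = ceph01
mon_host = 10.0.0.1
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# note: no per-daemon [mon.X] or [osd.N] sections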
On 08/06/2013 12:08 PM, Kevin Weiler wrote:
Hi again Ceph devs,
I'm trying to deploy ceph using puppet and I'm hoping to add my osds
non-sequentially. I spoke with dmick on #ceph about this and we both agreed it doesn't seem possible given the documentation.
On Thu, 15 Aug 2013, Nulik Nol wrote:
> Thanks, I didn't know about omap, but it is a good idea. I also found
> that Eleanor Cawthon made a tree balancing project over OSDs. After
> analyzing a bit more, I found that some librados and omap functions
> aren't asynchronous. This is a considerable disadvantage when writing
On Wed, Aug 14, 2013 at 8:46 PM, Jeppesen, Nelson
wrote:
> Sage et al,
>
> This is an exciting release but I must say I'm a bit confused about some of
> the new rgw details.
>
> Questions:
>
> 1) I'd like to understand how regions work. I assume that's how you get
> multi-site, multi-datacenter
Thanks, I didn't know about omap, but it is a good idea. I also found
that Eleanor Cawthon made a tree balancing project over OSDs. After
analyzing a bit more, I found that some librados and omap functions
aren't asynchronous. This is a considerable disadvantage when writing
a service where you ex
They're unclean because CRUSH isn't generating an acting set of
sufficient size so the OSDs/monitors are keeping them remapped in
order to maintain replication guarantees. Look in the docs for the
crush tunables options for a discussion on this.
-Greg
Software Engineer #42 @ http://inktank.com | ht
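(A short sketch of the two usual ways to adjust those tunables; the profile
names are the ones documented around that release, and older kernel clients
may not support them:)
# option 1: apply a predefined tunables profile
ceph osd crush tunables bobtail
# option 2: edit the decompiled map by hand and re-inject it
ceph osd getcrushmap -o /tmp/crush
crushtool -d /tmp/crush -o /tmp/crush.txt
#   ... edit the "tunable ..." lines at the top of /tmp/crush.txt ...
crushtool -c /tmp/crush.txt -o /tmp/crush.new
ceph osd setcrushmap -i /tmp/crush.new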
On 06/08/13 12:08, Kevin Weiler wrote:
Hi again Ceph devs,
I'm trying to deploy ceph using puppet and I'm hoping to add my osds
non-sequentially. I spoke with dmick on #ceph about this and we both
agreed it doesn't seem possible given the documentation. However, I have
an example of a ceph clust
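(For background, the reason non-sequential ids are awkward is that the monitor
always allocates the lowest free id; a sketch:)
# register a new OSD; the monitor prints the id it allocated
ceph osd create $(uuidgen)
# -> prints e.g. "7" if osds 0-6 already exist; there is no way to request a
#    specific number here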
Thanks a lot for your reply.
I know that in [5,3], osd.5 is the primary OSD, since my replica size
is 2. And in my testing cluster, test.txt is the only file.
I just ran mount -t cephfs 192.168.250.15:6789:/ , so does that mean
it uses the pool 'data' by default?
##The acting OSDs however are the OSD num
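(One way to check which pool a CephFS file actually uses; a sketch based on
the tooling of that era, adjust the mount point to your setup:)
# show the file layout; the data_pool field is the pool id its objects go to
cephfs /mnt/mycephfs/test.txt show_layout
# map that pool id back to a pool name
ceph osd lspools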
On Monday, August 5, 2013, Kevin Weiler wrote:
> Thanks for looking Sage,
>
> I came to this conclusion myself as well and this seemed to work. I'm
> trying to replicate a ceph cluster that was made with ceph-deploy
> manually. I noted that these capabilities entries were not in the
> ceph-deploy
On Thu, Aug 15, 2013 at 11:41 AM, Jim Summers wrote:
>
> I ran:
>
> ceph-deploy mon create chost0 chost1
>
> It seemed to be working and then hung at:
>
> [chost0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-chost0/done
> [chost0][INFO ] create a done file to avoid re-doing the mon de
I ran:
ceph-deploy mon create chost0 chost1
It seemed to be working and then hung at:
[chost0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-chost0/done
[chost0][INFO ] create a done file to avoid re-doing the mon deployment
[chost0][INFO ] create the init path if it does not exist
[c
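(When "mon create" hangs like this, it is worth checking on the target host
whether the monitor actually started and what state it is in; a sketch using
the default paths:)
# on chost0: is the mon daemon running at all?
ps aux | grep ceph-mon
# query its state through the admin socket
ceph --admin-daemon /var/run/ceph/ceph-mon.chost0.asok mon_status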
On Thu, Aug 15, 2013 at 11:02 AM, Dewan Shamsul Alam <
dewan.sham...@gmail.com> wrote:
> Hi Bernhard,
>
> I think you didn't notice that ceph-deploy 1.2 has been released and it is
> a python package for now. You need to run the following command to install
> ceph-deploy
>
> pip install ceph-deplo
On Thu, Aug 15, 2013 at 11:10 AM, Jim Summers wrote:
>
> Hello All,
>
> Since the release of dumpling, a couple of things are now not working. I
> can not yum install ceph-deploy and earlier I tried to manually modify the
> ceph.repo file. That did get ceph-deploy installed but it did not work.
On 08/15/2013 05:18 PM, 不坏阿峰 wrote:
mount cephfs to /mnt/mycephfs on debian 7, kernel 3.10
e.g. I have one file:
root@test-debian:/mnt/mycephfs# ls -i test.txt
1099511627776 test.txt
root@test-debian:/mnt/mycephfs# ceph osd map volumes test.txt
So you used the pool volumes here when mounting instead
mount cephfs to /mnt/mycephfs on debian 7, kernel 3.10
e.g. I have one file:
root@test-debian:/mnt/mycephfs# ls -i test.txt
1099511627776 test.txt
root@test-debian:/mnt/mycephfs# ceph osd map volumes test.txt
osdmap e351 pool 'volumes' (3) object 'test.txt' -> pg 3.8b0b6108 (3.8) ->
up [5,3] acting [5,3]
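(Side note: CephFS data objects are named after the file's inode number in hex
plus a stripe index, not after the filename, so the lookup for this file would
look roughly like this, assuming the default 'data' pool:)
# 1099511627776 decimal -> hex gives the object name prefix
printf '%x\n' 1099511627776        # -> 10000000000
# first object of the file
ceph osd map data 10000000000.00000000
# or list its objects directly
rados -p data ls | grep ^10000000000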
Hello All,
Since the release of dumpling, a couple of things are now not working. I
can not yum install ceph-deploy and earlier I tried to manually modify the
ceph.repo file. That did get ceph-deploy installed but it did not work.
So then I switched to the ceph-release that would bring me back t
Hi Bernhard,
I think you didn't notice that ceph-deploy 1.2 has been released and it is
a Python package for now. You need to run one of the following commands to
install ceph-deploy:
pip install ceph-deploy
or
easy_install ceph-deploy
Best Regards,
Dewan
On Thu, Aug 15, 2013 at 8:18 PM, bernhard glo
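(A minimal sketch of that install plus a sanity check; package names assume
Debian/Ubuntu, use yum on RHEL:)
apt-get install python-pip        # or: yum install python-pip
pip install ceph-deploy
ceph-deploy --version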
Hi,
Did anyone manage to use striped rbd volumes with OpenStack Cinder (Grizzly)? I
noticed in the current OpenStack master code that there are options for
striping the new _backup_ volumes, but there's still nothing to do with
striping in the master Cinder rbd driver. Is there a way to set some
Hi all,
I would like to use ceph in our company and had some test setups running.
Now, all of a sudden, ceph-deploy is not in the repos anymore.
This is my sources.list:
...
deb http://archive.ubuntu.com/ubuntu raring main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu raring-
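(If you still want ceph-deploy and ceph from packages rather than pip, the
ceph.com repository of that era could be added next to those Ubuntu lines; a
sketch with the release key URL as documented at the time:)
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
echo deb http://ceph.com/debian-dumpling/ raring main > /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get install ceph-deploy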
On Wed, Aug 14, 2013 at 04:24:55PM -0700, Josh Durgin wrote:
> On 08/14/2013 02:22 PM, Michael Morgan wrote:
> >Hello Everyone,
> >
> > I have a Ceph test cluster doing storage for an OpenStack Grizzly
> > platform
> >(also testing). Upgrading to 0.67 went fine on the Ceph side with the
> >clus
I modified my ceph.repo with the correct url for ceph-extras and that got
it to install.
Thanks
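(For anyone hitting the same thing, this is roughly what a working ceph.repo
section for ceph-deploy looked like at the time; the baseurl depends on your
distro and release, el6 is assumed here:)
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-dumpling/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc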
On Thu, Aug 15, 2013 at 8:17 AM, Alfredo Deza wrote:
>
>
>
> On Wed, Aug 14, 2013 at 4:27 PM, Jim Summers wrote:
>
>> Hello All,
>>
>> I just re-installed the ceph-release package on my RHEL system
On Wed, Aug 14, 2013 at 4:27 PM, Jim Summers wrote:
> Hello All,
>
> I just re-installed the ceph-release package on my RHEL system in an
> effort to get dumpling installed.
>
> After doing that I cannot yum install ceph-deploy. Then I yum installed
> ceph but still no ceph-deploy?
>
> You mig
The separate command (e.g. `ceph-disk -v prepare /dev/sda1`) works because
then the journal is on the same device as the OSD data, so the execution path
to get them to a working state is different.
I suspect that there are leftover partitions in /dev/sdaa that are causing
this to fail; I *think* th
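(If leftover partitions are indeed the problem, wiping the device before
prepare usually clears it up; a sketch; it is destructive, so double-check the
device name, and "myhost" is only a placeholder:)
# wipe the partition table on the target disk (destroys data on /dev/sdaa)
ceph-disk zap /dev/sdaa
# or via ceph-deploy from the admin node
ceph-deploy disk zap myhost:sdaa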
On Thu, Aug 15, 2013 at 7:45 AM, Nico Massenberg <
nico.massenb...@kontrast.de> wrote:
> Hello there,
>
> I am deploying a development system with 3 hosts. I want to deploy a
> monitor on each of those hosts and several osds, 1 per disk.
> In addition I have an admin machine to use ceph-deploy fro
Hello there,
I am deploying a development system with 3 hosts. I want to deploy a monitor on
each of those hosts and several osds, 1 per disk.
In addition I have an admin machine to use ceph-deploy from. So far I have 1
mon on ceph01 and a total of 6 osds on ceph01 and ceph02 in a healthy cluste
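(For that layout, the usual ceph-deploy sequence from the admin machine looks
roughly like this; the third hostname and the disk names are assumptions:)
ceph-deploy new ceph01 ceph02 ceph03
ceph-deploy install ceph01 ceph02 ceph03
ceph-deploy mon create ceph01 ceph02 ceph03
ceph-deploy gatherkeys ceph01
# one OSD per disk, e.g.:
ceph-deploy osd create ceph01:sdb ceph01:sdc ceph02:sdb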
Greg,
Thanks for following up - I hope you had a GREAT vacation.
I eventually deleted and re-added the rbd pool, which fixed the hanging
problem but left me with 114 stuck PGs.
Sam suggested that I permanently remove the down OSDs, and after a few
hours of rebalancing
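(For the record, the documented sequence at the time for permanently removing
an OSD was roughly the following; osd id 12 is only an example:)
ceph osd out 12
# stop the daemon on its host if it is still running, then:
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12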
Giuseppe,
You could install the kernel from wheezy backports - it is currently at 3.9.
http://backports.debian.org/Instructions/
http://packages.debian.org/source/stable-backports/linux
Regards,
Jeff
On 14 August 2013 10:08, Giuseppe 'Gippa' Paterno' wrote:
> Hi Sage,
> > What kernel version
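(A sketch of that backports install; wheezy-backports is served from the
regular Debian mirrors, so the mirror URL may differ for you:)
echo "deb http://http.debian.net/debian wheezy-backports main" > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get -t wheezy-backports install linux-image-amd64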