Hi!
Awesome :)) Thanks for such great work!
Cheers,
Sébastien
On 10.08.2013 02:52, Alfredo Deza wrote:
I am very pleased to announce the release of ceph-deploy to the Python
Package Index.
The OS packages are yet to come; I will make sure to update this thread
when they do.
For now, if you are familiar with Python install tools, you can install
directly from PyPI with pip or easy_install:
pip
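For reference, the usual PyPI install commands would be something like the following (a sketch, assuming a working Python tool chain):

pip install ceph-deploy
# or, with setuptools:
easy_install ceph-deploy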
On 07/08/13 15:14, Jeppesen, Nelson wrote:
Joao,
Have you had a chance to look at my monitor issues? I ran 'ceph-mon -i FOO
-compact' last week but it did not improve disk usage.
Let me know if there's anything else I can dig up. The monitor is still at 0.67-rc2
with the OSDs at 0.61.7.
Hi Nelson,
Can you attach the output of ceph -s?
-Sam
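For reference, a couple of ways the monitor store is usually compacted (a sketch; FOO stands in for the monitor ID, and behaviour varies between releases):

ceph tell mon.FOO compact        # ask the running monitor to compact its leveldb store
# or, in ceph.conf, to compact on every monitor start:
# [mon]
#     mon compact on start = true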
On Fri, Aug 9, 2013 at 11:10 AM, Suresh Sadhu wrote:
How do I repair a laggy storage cluster? I am able to create images on the pools
even if the HEALTH state shows WARN.
sudo ceph
HEALTH_WARN 181 pgs degraded; 676 pgs stuck unclean; recovery 2/107 degraded
(1.869%); mds ceph@ubuntu3 is laggy
Regards
Sadhu
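A sketch of the usual first diagnostics for a warning like this (generic commands, not specific to this cluster):

ceph health detail     # list the degraded/unclean pgs and the laggy mds by name
ceph osd tree          # check whether any osds are down or out
ceph mds stat          # check the state of the mds reported as laggy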
Awesome. Thanks Darryl. Do you want to propose a fix to stgt, or shall I?
On Aug 8, 2013 7:21 PM, "Darryl Bond" wrote:
> Dan,
> I found that the tgt-admin Perl script looks for a local file:
>
> if (-e $backing_store && ! -d $backing_store && $can_alloc == 1) {  # this -e test fails for rbd backing stores, which are not local files
>
> A bit nasty, but I created some
On Fri, Aug 9, 2013 at 1:34 AM, Luc Dumaine wrote:
> Hi,
>
> I was able to use ceph-deploy behind a proxy, by defining the appropriate
> environment variables used by wget.
>
> i.e. on Ubuntu, just add to /etc/environment:
>
> http_proxy=http://host:port
> ftp_proxy=http://host:port
> https_proxy=http://host:port
CEPH-DEPLOY EVALUATION ON CEPH VERSION 0.61.7
ADMINNODE:
root@ubuntuceph900athf1:~# ceph -v
ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
root@ubuntuceph900athf1:~#
SERVERNODE:
root@ubuntuceph700athf1:/etc/ceph# ceph -v
ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
On Fri, Aug 09, 2013 at 03:05:22PM +0100, Andrei Mikhailovsky wrote:
I can confirm that I am having similar issues with Ubuntu VM guests using fio
with bs=4k direct=1 numjobs=4 iodepth=16. Occasionally I see hung tasks,
occasionally the guest VM stops responding without leaving anything in the logs, and
sometimes I see a kernel panic on the console. I typically leave th
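For reference, a fio invocation matching those parameters might look like the following; the target device, I/O pattern and run time are assumptions, since they are not given above:

fio --name=rbdtest --filename=/dev/vdb --rw=randwrite \
    --bs=4k --direct=1 --numjobs=4 --iodepth=16 \
    --ioengine=libaio --runtime=300 --group_reporting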
On CentOS 6.4, Ceph 0.61.7.
I had a ceph cluster of 9 OSDs. Today I destroyed all of the OSDs and
recreated 6 new ones.
Then I found that all the old PGs are stale.
[root@ceph0 ceph]# ceph -s
health HEALTH_WARN 192 pgs stale; 192 pgs stuck inactive; 192 pgs stuck
stale; 192 pgs stuck unclean
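A sketch of how one might inspect the stuck placement groups (the pg ID below is only an example):

ceph pg dump_stuck stale       # list the pgs reported as stuck stale
ceph pg map 0.1f               # show which osds a given pg maps to
ceph osd dump | grep pool      # confirm which pools the stale pgs belong to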
On 08/09/2013 01:51 PM, Suresh Sadhu wrote:
Hi,
To access the storage cluster from a KVM hypervisor, what packages need
to be installed on the KVM host? (Do we need to install qemu and ceph on
the KVM host, for CloudStack-Ceph integration?)
You only need librbd and librados
The Ceph CLI tools and s
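On a RHEL 6.3 host, installing just those client libraries boils down to something like the following (a sketch; the package names assume the ceph.com el6 repository is configured, and qemu additionally needs to be built with rbd support to attach RBD volumes):

yum install librbd1 librados2      # RBD and RADOS client libraries used by qemu/libvirt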
Hi,
To access the storage cluster from a KVM hypervisor, what packages need to be
installed on the KVM host? (Do we need to install qemu and ceph on the KVM host,
for CloudStack-Ceph integration?)
My hypervisor version is RHEL 6.3.
Regards
Sadhu
Hi,
I'm using ceph 0.61.7.
When using ceph-fuse, I couldn't find a way to mount only one pool.
Is there a way to mount a pool, or is it simply not supported?
Kind Regards,
Georg
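As far as I know, ceph-fuse mounts the CephFS filesystem (or a subdirectory of it) rather than an individual pool; a minimal sketch, with the monitor address and paths as placeholders:

ceph-fuse -m 192.168.0.1:6789 /mnt/ceph                # mount the whole filesystem
ceph-fuse -m 192.168.0.1:6789 -r /somedir /mnt/somedir # mount only the subtree /somedir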
Thanks for the suggestion. I had tried stopping each OSD for 30
seconds, then restarting it, waiting 2 minutes and then doing the next
one (all OSDs eventually restarted). I tried this twice.
Hi,
I am configuring a single node for development purposes, but ceph asks
me for a keyring. Here is what I do:
[root@localhost ~]# mkcephfs -c /usr/local/etc/ceph/ceph.conf
--prepare-monmap -d /tmp/foo
preparing monmap in /tmp/foo/monmap
/usr/local/bin/monmaptool --create --clobber --add a 127.0.0.1
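A hedged sketch of one way the keyring is usually provided before running mkcephfs; the paths and capabilities below are the conventional ones, not taken from this setup:

ceph-authtool --create-keyring /etc/ceph/keyring --gen-key -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
# then point mkcephfs at it, e.g.:
# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf -k /etc/ceph/keyring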
Hi Josh,
I just opened
http://tracker.ceph.com/issues/5919
with all the collected information, including the debug log.
Hope it helps,
Oliver.
On 08/08/2013 07:01 PM, Josh Durgin wrote:
On 08/08/2013 05:40 AM, Oliver Francke wrote:
Hi Josh,
I have a session logged with:
debug_ms=1:debug_rbd=20:deb
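The truncated line above looks like per-client librbd debug settings; one common way to capture an equivalent log is a [client] section in ceph.conf on the VM host (a sketch; the log path is an assumption):

[client]
    debug ms = 1
    debug rbd = 20
    log file = /var/log/ceph/client.$pid.log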
On 08/09/2013 10:58 AM, Jeff Moskow wrote:
Hi,
I have a 5 node ceph cluster that is running well (no problems using any of
the rbd images, and that's really all we use).
I have replication set to 3 on all three pools (data, metadata and rbd).
"ceph -s" reports:
health HEALTH_WARN 3 pgs degraded;
Hi,
I was able to use ceph-deploy behind a proxy, by defining the appropriate
environment variables used by wget.
i.e. on Ubuntu, just add to /etc/environment:
http_proxy=http://host:port
ftp_proxy=http://host:port
https_proxy=http://host:port
Regards, Luc.
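A quick way to check that the variables are actually picked up on a node before running ceph-deploy (a sketch; any reachable URL will do):

env | grep -i proxy                                    # confirm the variables are set after re-login
wget -q -O /dev/null http://ceph.com && echo proxy ok  # wget should now go through the proxy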
Hi,
thanks for your answers. It was my fault. I had configured everything at the
beginning of the [DEFAULT] section of glance-api.conf and overlooked the
default settings further down (the default Ubuntu glance-api.conf has a
default RBD Store Options part later in the file).
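For reference, the RBD-related options that typically live in that later RBD Store Options part of glance-api.conf look like the following; the values here are common defaults, not taken from this thread:

default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8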
On 08/08/2013 05:04 PM, Josh Durgin wrote:
So I've had a chance to revisit this since Bécholey Alexandre was kind
enough to let me know how to compile Ceph with the RDMACM library (thank you
again!).
At this stage it compiles and runs, but there appears to be a problem with
calling rshutdown in Pipe, as it seems to just wait forever for the