If you can follow the documentation here:
http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/ and
http://ceph.com/docs/master/rados/troubleshooting/ and provide some
additional information, we may be better able to help you.
For example, "ceph osd tree" would help us understand the st
Hi,
I have deployed the ceph object store using ceph-deploy.
I tried to mount cephfs and I got stuck with this error.
sudo mount.ceph 192.168.35.82:/ /mnt/mycephfs -o name=admin,secret=AQDa5JJRqLxuOxAA77VljIjaAGWR6mGdL12NUQ==
mount error 5 = Input/output error
The output of the command
#
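In case it helps while debugging: mount.ceph can also read the key from a file instead of taking it on the command line, and it is worth confirming that an MDS is actually up and active before mounting, since mount error 5 is a common symptom when it is not. A rough sketch, assuming the admin key has been saved to /etc/ceph/admin.secret (that path is only an example):

  ceph mds stat                              # should report an MDS as up:active
  sudo mount -t ceph 192.168.35.82:6789:/ /mnt/mycephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret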
Have you tried restarting your MDS server?
http://ceph.com/docs/master/rados/operations/operating/#operating-a-cluster
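For an Ubuntu install that uses the upstart jobs, restarting a single MDS instance would look roughly like this ({hostname} is the instance name as shown in "initctl list" or the init logs; substitute your own):

  sudo stop ceph-mds id={hostname}      # stop one MDS instance via upstart
  sudo start ceph-mds id={hostname}     # start it again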
On Fri, May 17, 2013 at 12:16 AM, Sridhar Mahadevan
wrote:
> Hi,
>
> I have deployed the ceph object store using ceph-deploy.
> I tried to mount cephfs and I got stuck with this
Hi,
I did try to restart the MDS server. The logs show the following error:
[187846.234448] init: ceph-mds (ceph/blade2-qq) main process (15077) killed by ABRT signal
[187846.234493] init: ceph-mds (ceph/blade2-qq) main process ended, respawning
[187846.687929] init: ceph-mds (ceph/blade2-qq) main
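Since init keeps respawning the daemon, the reason for the ABRT should be in the MDS's own log rather than in the init messages. Assuming the default log location (the instance name here is taken from the messages above), something like this should show the crash backtrace:

  tail -n 100 /var/log/ceph/ceph-mds.blade2-qq.log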
On 2013-05-16 20:45, Ulrich Schinz wrote:
Hi there,
Today I set up my ceph cluster. It's up and running fine.
Mounting cephfs on my "client" machine works fine as well.
~# mount
172.17.50.71:6789:/ on /samba/ceph type ceph (rw,relatime,name=admin,secret=)
touching a file in that directory and set
Are you running the MDS in a VM?
On Fri, May 17, 2013 at 12:40 AM, Sridhar Mahadevan
wrote:
> Hi,
> I did try to restart the MDS server. The logs show the following error
>
> [187846.234448] init: ceph-mds (ceph/blade2-qq) main process (15077) killed
> by ABRT signal
> [187846.234493] init: ceph-
No, I am not running the MDS in a VM. I have the MDS and mon on a single node.
On Fri, May 17, 2013 at 4:03 PM, John Wilkins wrote:
> Are you running the MDS in a VM?
>
> On Fri, May 17, 2013 at 12:40 AM, Sridhar Mahadevan
> wrote:
> > Hi,
> > I did try to restart the MDS server. The logs show the follo
Hi,
Thanks for your answer. In fact I have several different problems, which
I tried to solve separately:
1) I lost 2 OSDs, and some pools had only 2 replicas, so some data was
lost (see the replica-size example after this list).
2) One monitor refuses the Cuttlefish upgrade, so I only have 4 of 5
monitors running.
3) I have 4 old inconsisten
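Regarding the replica counts in (1): checking and raising a pool's replication size is a one-liner each way; "data" below is only an example pool name:

  ceph osd pool get data size        # show the current number of replicas
  ceph osd pool set data size 3      # keep three copies from now on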
Hi everyone
The image files don't show up in the mount point when using the command
"rbd-fuse -p poolname -c /etc/ceph/ceph.conf /aa",
but other pools display their image files fine with the same command. I also
created images larger and more numerous than in that pool, and they work fine.
How can I track the is
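One way to narrow this down is to check whether the images are actually visible to rbd itself, independent of the FUSE mount ("poolname" and "imagename" below are placeholders):

  rbd ls -p poolname                 # list the images rbd sees in that pool
  rbd info poolname/imagename        # details for one of them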
Thanks Gary,
after you threw me those clues I got further, but it still isn't working.
It seems there are no i386 "python-pushy" deb packages in either of those
ceph repos. I also tried using pip and got pushy installed, but the
ceph-deploy debs still refused to install.
I built another VM
Hi,
I've got a strange issue when I try to initialize an object repository
with ceph-osd --mkfs or when I mount an OSD partition (previously
initialized).
On Archlinux with a kernel >3.8.5, with Ceph v0.61.2 or v0.62 and XFS as
OSD filesystem, each time I run 'ceph-osd -i $i --mkfs' my machine re
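If nothing makes it into the OSD log before the machine resets, running the mkfs step in the foreground with logging to stderr may show how far it gets (watch the console, since anything unsynced is lost on reset); a sketch, with $i being the OSD id as in the command above:

  sudo ceph-osd -i $i --mkfs -d      # -d: stay in the foreground and log to stderr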
It looks like you have the "noout" flag set:
"noout flag(s) set; 1 mons down, quorum 0,1,2,3 a,b,c,e
monmap e7: 5 mons at
{a=10.0.0.1:6789/0,b=10.0.0.2:6789/0,c=10.0.0.5:6789/0,d=10.0.0.6:6789/0,e=10.0.0.3:6789/0},
election epoch 2584, quorum 0,1,2,3 a,b,c,e
osdmap e82502: 50 osds: 48 up, 48
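For reference, the flag is toggled like this; while it is set, OSDs that go down are never marked out, so the cluster will not rebalance away from them:

  ceph osd set noout          # keep down OSDs from being marked out
  ceph osd unset noout        # return to normal behaviour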
Another thing... since your osd.10 is near full, your cluster may be
fairly close to capacity for the purposes of rebalancing. Have a look
at:
http://ceph.com/docs/master/rados/configuration/mon-config-ref/#storage-capacity
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/#no
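A quick way to see which OSDs are near or over the ratios discussed in those links (the defaults are mon osd nearfull ratio = .85 and mon osd full ratio = .95) is:

  ceph health detail | grep -i full      # lists any nearfull/full OSDs with their usage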
Hi Matt -
I see in the message below that you are using python 2.6. Ceph-deploy may
have some syntax that is incompatible with that version of python. On wheezy
we tested with the default python 2.7.3 interpreter. You might try using the
newer interpreter, we will also do so more testing t
Hi All,
I have had an issue recently while working on my ceph clusters. The following
issue seems to be true on bobtail and cuttlefish. I have two production
clusters in two different data centers and a test cluster. We are using ceph
to run virtual machines. I use rbd as block devices for
Hi Matt -
Sorry, I just spotted at the end of your message that you are using python
2.7.3. But the modules are installing into the python2.6 directories. I
don't know why that would be happening, and we'll have to dig into it more.
Python is tripping over incompatible syntax for some reason.
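A quick diagnostic (not a fix) to see which interpreter and module directories are actually in play, and where the deb put its files:

  python --version
  python -c 'import sys; print(sys.path)'    # directories this interpreter searches
  dpkg -L python-pushy | grep python         # where the package actually installed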
On Fri, 17 May 2013, Joe Ryner wrote:
> Hi All,
>
> I have had an issue recently while working on my ceph clusters. The
> following issue seems to be true on bobtail and cuttlefish. I have two
> production clusters in two different data centers and a test cluster. We are
> using ceph to run
On 05/17/2013 03:49 PM, Joe Ryner wrote:
> Hi All,
>
> I have had an issue recently while working on my ceph clusters. The
> following issue seems to be true on bobtail and cuttlefish. I have
> two production clusters in two different data centers and a test
> cluster. We are using ceph to run
Yes, I set the "noout" flag to avoid the automatic rebalancing of osd.25,
which would crash all OSDs on this host (already tried several times).
On Friday, May 17, 2013 at 11:27 -0700, John Wilkins wrote:
> It looks like you have the "noout" flag set:
>
> "noout flag(s) set; 1 mons down, quorum 0,1,
Hi Gary,
After a bit of searching on the list I was able to resolve this with
"aptitude install python-setuptools".
It seems it's a missing dependency of the "ceph-deploy" install on wheezy.
thanks for your help
-Matt
On Sat, May 18, 2013 at 6:54 AM, Gary Lowell wrote:
> Hi Matt -
>
> Sorry, I just spo
Great news. There was a patch committed to master last week that added
python-setuptools to the dependencies, so the issue shouldn't happen with the
next build.
Cheers,
Gary
On May 17, 2013, at 4:47 PM, Matt Chipman wrote:
> Hi Gary,
>
> after a bit of searching on the list I was able to res