Hi,
I am also seeing the same issue here. My guess is that the ceph-deploy package was
somehow left out when the repository was updated. At least to the best of my
understanding, the Packages file
(http://download.ceph.com/debian-hammer/dists/trusty/main/binary-amd64/Packages)
and Contents
(http://download
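A quick way to check this, assuming the index really is served uncompressed at the URL above and that the node already has the hammer repository configured in apt, would be something like:

    wget -qO- http://download.ceph.com/debian-hammer/dists/trusty/main/binary-amd64/Packages | grep -A2 '^Package: ceph-deploy$'
    apt-cache policy ceph-deploy    # shows which repository, if any, still offers the package

If the grep comes back empty, the package really is missing from the index rather than it being a stale apt cache on the client side.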
ember, 2014 11:16 AM
To: Luke Jing Yuan
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] ceph issue: rbd vs. qemu-kvm
Yes--the image was converted back to raw.
Since the image is mapped via rbd I can run fdisk on it and see both the
partition tables and a normal set of files inside of it.
My s
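Roughly the sort of commands involved in that check, for reference (pool, image name and mount point are placeholders, not the exact ones used here):

    rbd map rbd/vm-image            # the image appears as e.g. /dev/rbd0
    fdisk -l /dev/rbd0              # partition table visible through the kernel rbd client
    mount /dev/rbd0p1 /mnt/check    # individual partitions show up as /dev/rbd0pN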
7.8T 81% z4-cluster-w 3 img ceph ceph
Regards,
Luke
-Original Message-
From: Steven Timm [mailto:t...@fnal.gov]
Sent: Friday, 19 September, 2014 5:18 AM
To: Luke Jing Yuan
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph issue: rbd vs. qemu-kvm
Using image type raw
Hi,
From the ones we managed to configure in our lab here, I noticed that using image
format "raw" instead of "qcow2" worked for us.
Regards,
Luke
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steven
Timm
Sent: Thursday, 18 September, 201
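For reference, converting an existing qcow2 image to raw and getting it into rbd can be done along these lines (pool and image names are placeholders; the second form needs qemu-img built with rbd support):

    qemu-img convert -f qcow2 -O raw guest.qcow2 guest.raw
    rbd import guest.raw libvirt-pool/guest

    # or convert straight into the cluster, skipping the intermediate file:
    qemu-img convert -f qcow2 -O raw guest.qcow2 rbd:libvirt-pool/guest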
Hi,
The MDS did finish the replay and is working after that, but we are wondering whether
we should leave mds_wipe_sessions in ceph.conf or remove it.
Regards,
Luke
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Yan, Zheng
Sent: T
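For context, the setting being discussed is just a ceph.conf entry along these lines (section placement as I understand it, so treat this as a sketch rather than gospel):

    [mds]
        mds wipe sessions = true

My understanding is that it is meant as a one-off recovery aid, which is why we are unsure whether it should stay in the file once the MDS is healthy again.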
Hi Greg,
Actually the cluster that my colleague and I are working on is rather new and still
has plenty of space left (less than 7% used). What we noticed just before the
MDS gave us this problem was a temporary network issue in the data center, so
we are not sure whether that could have been the root cau
Regards,
Luke
From: hjcho616 [mailto:hjcho...@yahoo.com]
Sent: Friday, 21 March, 2014 12:09 PM
To: Luke Jing Yuan
Cc: Mohd Bazli Ab Karim; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] MDS crash when client goes to sleep
Nope, just these segfaults.
[149884.709608] ceph-mds[17366]: segfault at 200 ip 00
Did you see any messages in dmesg saying ceph-mds is respawning, or anything like
that?
Regards,
Luke
On Mar 21, 2014, at 11:09 AM, "hjcho616" <hjcho...@yahoo.com> wrote:
On the client, I was no longer able to access the filesystem. It would hang.
Makes sense since the MDS has crashed. I tried r
kind enough to assist?
Thanks in advance.
Regards,
Luke
-Original Message-
From: John Spray [mailto:john.sp...@inktank.com]
Sent: Wednesday, 19 March, 2014 1:33 AM
To: Luke Jing Yuan; ceph-users@lists.ceph.com
Cc: Wong Ming Tat; Mohd Bazli Ab Karim
Subject: Re: [ceph-users] Ceph MDS replaying journ
, 2014 5:13 AM
To: Luke Jing Yuan
Cc: Wong Ming Tat; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph MDS replaying journal
Thanks for sending the logs so quickly.
626 2014-03-18 00:58:01.009623 7fba5cbbe700 10 mds.0.journal EMetaBlob.replay
sessionmap v8632368 -(1|2) == table 7235981
e. However, it
> is less destructive than newfs. It may crash when it completes; look
> for output like this at the beginning before any stack trace to
> indicate success:
> writing journal head
> writing EResetJournal entry
> done
>
> We are looking forward to making
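For anyone searching the archives later: as far as I know, the operation described above (writing an EResetJournal entry) is what later ships as the journal reset in cephfs-journal-tool. A sketch of that later equivalent, taking a backup first:

    cephfs-journal-tool journal export backup.bin   # keep a copy of the journal before touching it
    cephfs-journal-tool journal reset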
Hi,
I am trying to install/upgrade to 1.2.7, but Ubuntu (Precise) is complaining
about an unmet dependency, which seems to be python-pushy 0.5.3, which appears to be
missing. Am I correct to assume so?
Regards,
Luke
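A rough way to confirm the missing dependency (nothing here is specific to any particular setup):

    apt-cache policy python-pushy     # does any configured repository provide it at all?
    apt-get install ceph-deploy       # apt will spell out the unmet dependency

If python-pushy shows no candidate version, then the repository that used to carry it alongside ceph-deploy is probably not configured or not being picked up, if I understand the packaging correctly.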
...@mermaidconsulting.dk]
Sent: Wednesday, 31 July, 2013 6:37 PM
To: Luke Jing Yuan
Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Problem with MON after reboot
Hi,
> This happened to me twice actually. Once before I upgraded, using
> 0.61.4 (I solved it by re
cluster is able
to pick the OSD portion up but not the MON.
Regards,
Luke
-Original Message-
From: Jens Kristian Søgaard [mailto:j...@mermaidconsulting.dk]
Sent: Wednesday, 31 July, 2013 6:27 PM
To: Luke Jing Yuan
Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com
Subject: Re
Dear all,
I am having an issue with the MONs after a reboot. I originally had 3 MONs, but after
rebooting one of them, quorum cannot be established. Digging through the
log of the problematic MON, I found the following messages:
2013-07-31 18:04:18.536331 7f4f41bfd700 0 -- 10.4.132.18:6804/0 >>
10
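For anyone looking at this, the sort of checks that seem relevant (mon id and socket path are the defaults, adjust as needed):

    ceph -s                        # overall cluster status and health
    ceph mon stat                  # which monitors the others think are in quorum
    ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok mon_status   # the problem mon's own view, run on its host

plus checking that the clocks are in sync and that the address in the log matches what the surviving monitors have in their monmap.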
Dear all,
There may be a small chance that the wip ceph-disk fails the journal part
despite having managed to partition the disk properly. In my case I used the same disk
for both data and journal on /dev/cciss/c0d1, though I am not sure if I was
using the latest from the wip branch. In such case, a
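To see whether the partitioning itself succeeded even when the journal step fails, something like this should show it (device path as in my case, adjust for other controllers):

    ceph-disk prepare /dev/cciss/c0d1       # data and journal on the same disk
    parted /dev/cciss/c0d1 print            # a data and a journal partition should both be listed
    ls -l /dev/cciss/                       # partitions show up as c0d1p1, c0d1p2, ...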
Hi,
I am having a bit of a problem with the MON after the node was rebooted. I found the
following error messages repeating, and wonder if someone has seen a similar
issue and what the solution might be:
2013-07-02 09:55:06.455179 7f866f60a700 0 -- 10.4.132.18:0/0 >>
10.4.132.18:6800/0 pipe
Dear all,
I am trying to mount cephfs at 2 different mount points (each should have its own
pool and key). While the first mount works (after using set_layout
to point it at the right pool), the second attempt fails with "mount error 12 =
Cannot allocate memory". Did I miss some steps
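The kind of mount invocations I mean (monitor address, paths and client names are placeholders, and the pool assignment is done separately with set_layout as mentioned):

    mount -t ceph mon1:6789:/proj-a /mnt/proj-a -o name=client-a,secretfile=/etc/ceph/client-a.secret
    mount -t ceph mon1:6789:/proj-b /mnt/proj-b -o name=client-b,secretfile=/etc/ceph/client-b.secret,noshare

One option that looks relevant when the two mounts use different credentials is noshare, which as far as I understand forces a separate client instance instead of reusing the first mount's session.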
Hi,
I have not been able to use ceph-deploy to prepare the OSDs. It seems that every
time I execute this particular command (running the data and journal
on the same disk), I end up with the message:
ceph-disk: Error: Command '['partprobe','/dev/cciss/c0d1']' returned non-zero
exit status 1
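To narrow down whether it is partprobe itself or ceph-disk, running the same call by hand is the obvious first step:

    partprobe /dev/cciss/c0d1 ; echo $?     # reproduce the non-zero exit outside ceph-disk
    parted /dev/cciss/c0d1 print            # check whether the partitions were created despite the error
    dmesg | tail                            # any kernel complaints about re-reading the partition table

That would fit the earlier observation that the partitions themselves get created fine even when the command reports a failure.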
Dear all,
I am trying to deploy a small ceph cluster on some rather old HP servers (with
Ubuntu 12.04.2 LTS). The issue I have is that "ceph-deploy list disk" cannot
recognize the disks, as the devices are somehow listed as /dev/cciss/c0dx instead
of the more familiar /dev/sdx format. Is there work
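A hedged workaround that might be worth trying (host name is a placeholder, and I am not certain ceph-deploy of that vintage accepts cciss paths): skip the listing step and name the device explicitly, e.g.

    ceph-deploy disk list node01                      # may simply not know about /dev/cciss/*
    ceph-deploy osd prepare node01:/dev/cciss/c0d1    # pass the device path directly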