Re: [ceph-users] ceph-deploy - default release

2015-11-04 Thread Luke Jing Yuan
Hi, I am also seeing the same issue here. My guess is that the ceph-deploy package was somehow left out when the repository was updated. At least to my best understanding, the Packages file (http://download.ceph.com/debian-hammer/dists/trusty/main/binary-amd64/Packages) and Contents (http://download…
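
A quick way to check whether ceph-deploy is actually present in the repository index (a sketch using the Packages URL from the post; no output from grep would mean the package is missing):

    # Fetch the repo index and look for the ceph-deploy stanza
    curl -s http://download.ceph.com/debian-hammer/dists/trusty/main/binary-amd64/Packages \
      | grep -A 3 '^Package: ceph-deploy'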

Re: [ceph-users] ceph issue: rbd vs. qemu-kvm

2014-09-18 Thread Luke Jing Yuan
…September, 2014 11:16 AM To: Luke Jing Yuan Cc: ceph-users@lists.ceph.com Subject: RE: [ceph-users] ceph issue: rbd vs. qemu-kvm Yes--the image was converted back to raw. Since the image is mapped via rbd I can run fdisk on it and see both the partition tables and a normal set of files inside of it. My s…
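
For reference, mapping an RBD image and inspecting it as described might look like this (pool and image names are placeholders, not taken from the thread):

    # Map the image through the kernel RBD driver, then inspect it
    rbd map mypool/guest        # prints the device node, e.g. /dev/rbd0
    fdisk -l /dev/rbd0          # shows the image's partition table
    rbd unmap /dev/rbd0         # release the mapping when done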

Re: [ceph-users] ceph issue: rbd vs. qemu-kvm

2014-09-18 Thread Luke Jing Yuan
…7.8T 81% z4-cluster-w 3 img ceph ceph Regards, Luke -Original Message- From: Steven Timm [mailto:t...@fnal.gov] Sent: Friday, 19 September, 2014 5:18 AM To: Luke Jing Yuan Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ceph issue: rbd vs. qemu-kvm Using image type raw…

Re: [ceph-users] ceph issue: rbd vs. qemu-kvm

2014-09-17 Thread Luke Jing Yuan
Hi, from the ones we managed to configure in our lab here, I noticed that using image format "raw" instead of "qcow2" worked for us. Regards, Luke -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steven Timm Sent: Thursday, 18 September, 201…
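
A minimal sketch of the raw-instead-of-qcow2 approach (pool and image names are assumptions):

    # Convert the qcow2 image to raw, then import it into RBD
    qemu-img convert -f qcow2 -O raw guest.qcow2 guest.raw
    rbd import guest.raw mypool/guest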

Re: [ceph-users] Ceph mds laggy and failed assert in function replay mds/journal.cc

2014-04-29 Thread Luke Jing Yuan
Hi, the MDS did finish the replay and is working after that, but we are wondering whether we should leave mds_wipe_sessions in ceph.conf or remove it. Regards, Luke -Original Message- From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Yan, Zheng Sent: T…
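
For context, the mds_wipe_sessions option lives in the [mds] section of ceph.conf, roughly like this (a placement sketch only; whether to keep it set was the open question in the thread):

    [mds]
        # option under discussion; used here to get past replay
        mds wipe sessions = true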

Re: [ceph-users] Ceph mds laggy and failed assert in function replay mds/journal.cc

2014-04-25 Thread Luke Jing Yuan
Hi Greg, actually the cluster that my colleague and I are working on is rather new and still has plenty of space left (less than 7% used). What we noticed just before the MDS gave us this problem was a temporary network issue in the data center, so we are not sure whether that could have been the root cau…

Re: [ceph-users] MDS crash when client goes to sleep

2014-03-20 Thread Luke Jing Yuan
…Regards, Luke From: hjcho616 [mailto:hjcho...@yahoo.com] Sent: Friday, 21 March, 2014 12:09 PM To: Luke Jing Yuan Cc: Mohd Bazli Ab Karim; ceph-users@lists.ceph.com Subject: Re: [ceph-users] MDS crash when client goes to sleep Nope just these segfaults. [149884.709608] ceph-mds[17366]: segfault at 200 ip 00…

Re: [ceph-users] MDS crash when client goes to sleep

2014-03-20 Thread Luke Jing Yuan
Did you see any messages in dmesg saying ceph-mds is respawning or anything like that? Regards, Luke On Mar 21, 2014, at 11:09 AM, "hjcho616" <hjcho...@yahoo.com> wrote: On the client, I was no longer able to access the filesystem. It would hang, which makes sense since the MDS has crashed. I tried r…
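
Checking for that is quick; a sketch:

    # Look for MDS respawn or segfault messages in the kernel log
    dmesg | grep -iE 'ceph-mds|segfault'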

Re: [ceph-users] Ceph MDS replaying journal

2014-03-18 Thread Luke Jing Yuan
…kind enough to assist? Thanks in advance. Regards, Luke -Original Message- From: John Spray [mailto:john.sp...@inktank.com] Sent: Wednesday, 19 March, 2014 1:33 AM To: Luke Jing Yuan; ceph-users@lists.ceph.com Cc: Wong Ming Tat; Mohd Bazli Ab Karim Subject: Re: [ceph-users] Ceph MDS replaying journ…

Re: [ceph-users] Ceph MDS replaying journal

2014-03-17 Thread Luke Jing Yuan
…, 2014 5:13 AM To: Luke Jing Yuan Cc: Wong Ming Tat; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph MDS replaying journal Thanks for sending the logs so quickly. 626 2014-03-18 00:58:01.009623 7fba5cbbe700 10 mds.0.journal EMetaBlob.replay sessionmap v8632368 -(1|2) == table 7235981…

Re: [ceph-users] Ceph MDS replaying journal

2014-03-17 Thread Luke Jing Yuan
…e. However, it is less destructive than newfs. It may crash when it completes; look for output like this at the beginning, before any stack trace, to indicate success: writing journal head / writing EResetJournal entry / done. We are looking forward to making…
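
On later Ceph releases, the equivalent journal reset is exposed through cephfs-journal-tool (a hedged note; this is not necessarily the exact tool used in this 2014 thread):

    # Back up the journal first, then reset it; newer releases may
    # also require a --rank argument to select the MDS rank
    cephfs-journal-tool journal export backup.bin
    cephfs-journal-tool journal reset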

[ceph-users] Missing Dependency for ceph-deploy 1.2.7

2013-10-15 Thread Luke Jing Yuan
Hi, I am trying to install/upgrade to 1.2.7, but Ubuntu (Precise) is complaining about an unmet dependency: python-pushy 0.5.3, which appears to be missing. Am I correct to assume so? Regards, Luke
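
A quick way to confirm this on the Ubuntu side (a sketch):

    # "Candidate: (none)" would confirm python-pushy is absent
    # from the configured repositories
    apt-cache policy python-pushy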

Re: [ceph-users] Problem with MON after reboot

2013-07-31 Thread Luke Jing Yuan
…@mermaidconsulting.dk] Sent: Wednesday, 31 July, 2013 6:37 PM To: Luke Jing Yuan Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Problem with MON after reboot Hi, > This happened to me twice actually. Once before I upgraded, using > 0.61.4 (I solved it by re…

Re: [ceph-users] Problem with MON after reboot

2013-07-31 Thread Luke Jing Yuan
…cluster is able to pick the OSD portion up but not the MON. Regards, Luke -Original Message- From: Jens Kristian Søgaard [mailto:j...@mermaidconsulting.dk] Sent: Wednesday, 31 July, 2013 6:27 PM To: Luke Jing Yuan Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com Subject: Re…

[ceph-users] Problem with MON after reboot

2013-07-31 Thread Luke Jing Yuan
Dear all, I am having an issue with MON after a reboot. I originally had 3 MONs, but after rebooting one of them, quorum cannot be established. Digging through the log of the problematic MON, I found the following messages: 2013-07-31 18:04:18.536331 7f4f41bfd700 0 -- 10.4.132.18:6804/0 >> 10…
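
When quorum is lost, the admin socket still answers; a sketch for inspecting the problematic monitor directly (the socket path and mon id are assumptions, adjust to the node in question):

    # Query the monitor's own view, bypassing the unavailable quorum
    ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status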

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Luke Jing Yuan
Dear all, there may be a small chance that the wip ceph-disk fails the journal part despite managing to partition the disk properly. In my case I use the same disk for both data and journal on /dev/cciss/c0d1, though I am not sure if I was using the latest from the wip branch. In such case, a…
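
For reference, preparing data and journal on the same disk is a single ceph-disk call; a sketch using the device from the post:

    # With no separate journal argument, ceph-disk carves a journal
    # partition out of the same device
    ceph-disk prepare /dev/cciss/c0d1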

[ceph-users] Problem with MON

2013-07-02 Thread Luke Jing Yuan
Hi, I am having a bit of a problem with mon after the node was rebooted. I found the following error messages repeating, and wonder if someone has seen a similar issue and what the solution might be: 2013-07-02 09:55:06.455179 7f866f60a700 0 -- 10.4.132.18:0/0 >> 10.4.132.18:6800/0 pipe…
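
One way to see what the troubled monitor believes the cluster looks like, without needing quorum (a sketch; the mon id "a" is an assumption):

    # Stop the monitor first, then extract and print its monmap
    ceph-mon -i a --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap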

[ceph-users] mount error 12

2013-06-18 Thread Luke Jing Yuan
Dear all, I am trying to mount cephfs on two different mount points (each should have its respective pool and key). While the first mount works (after using set_layout to get it onto the right pool), the second attempt failed with "mount error 12 = Cannot allocate memory". Did I miss some steps…
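
For comparison, the two kernel mounts would look roughly like this (the monitor address, paths, user names, and secret files are all placeholders):

    # Each mount authenticates as its own CephX user
    mount -t ceph 10.0.0.1:6789:/dir-a /mnt/a -o name=usera,secretfile=/etc/ceph/usera.secret
    mount -t ceph 10.0.0.1:6789:/dir-b /mnt/b -o name=userb,secretfile=/etc/ceph/userb.secret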

[ceph-users] Issue with deploying OSD

2013-06-11 Thread Luke Jing Yuan
Hi, I have not been able to use ceph-deploy to prepare the OSDs. It seems that every time I execute this particular command (assuming the data and journal run on the same disk), I end up with the message: ceph-disk: Error: Command '['partprobe','/dev/cciss/c0d1']' returned non-zero exit status 1…
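
A common workaround when partprobe fails on these controllers, hedged as a sketch, is to reread the partition table with partx and retry the prepare step:

    # Tell the kernel about the new partitions, then re-run prepare
    partx -a /dev/cciss/c0d1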

[ceph-users] ceph-deploy list disk issue

2013-06-09 Thread Luke Jing Yuan
Dear all, I am trying to deploy a small ceph cluster on some rather old HP servers (with Ubuntu 12.04.2 LTS). The issue I have is that "ceph-deploy list disk" cannot recognize the disks, as the devices are somehow listed as /dev/cciss/c0dx instead of the more familiar /dev/sdx format. Is there work…
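
If the disk lister cannot cope with the cciss naming, passing the device path explicitly may work; a sketch (the host name, and whether ceph-deploy of that era accepted a cciss path in this colon form, are assumptions):

    # ceph-deploy treats the disk path as relative to /dev
    ceph-deploy disk list node1
    ceph-deploy osd prepare node1:cciss/c0d1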