[ceph-users] Blueprint Submission Open for "Firefly"
Greetings Ceph* denizens,

Our next Ceph Developer Summit (CDS) is swiftly approaching! Having spoken with many of you at various events (Ceph Days, Meetups, etc.) I know there are quite a few great ideas and project stubs floating around out there. The time has come to start putting them down on paper in order to get them on the agenda for the Firefly summit.

If you have an idea for a feature, fix, integration, or whizbang-doohickey™ please do the following:

1) Go here: http://wiki.ceph.com/01Planning/02Blueprints/Firefly
2) Log in / create account (google auth)
3) Click "New Page"
4) Select the "Blueprint" template
5) Fill in the appropriate information (as much as you can), including your name as the current interested party
6) Keep a weather eye out for the schedule (which will be posted soon on the blog and lists) and attend CDS to discuss your idea!

That's it. Don't worry if you aren't going to be the one to build the entire thing. This process has been used in the past as a recruiting tool to drum up interest around a particular feature idea. Some of our most successful feature work has come from community-driven efforts around a blueprint.

If you have any questions feel free to ask me here, via 'commun...@inktank.com', or on irc (scuttlemonkey on irc.oftc.net/ceph). I look forward to some great ideas and another bang-up CDS. Thanks!

Best Regards,

Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
Re: [ceph-users] radosgw-admin object unlink
This sends the objects through the removal process. If the object has a 'tail', then it's going to be removed later by the garbage collector.

On Sat, Oct 12, 2013 at 11:02 PM, Dominik Mostowiec wrote:
> Thanks :-)
>
> This command removes the object from the rgw index (it does not just mark it as removed)?
>
> --
> Regards
> Dominik
>
> 2013/10/13 Yehuda Sadeh:
>> On Sat, Oct 12, 2013 at 4:00 PM, Dominik Mostowiec wrote:
>>> Hi,
>>> How does radosgw-admin object unlink work?
>>>
>>> After:
>>> radosgw-admin object unlink --bucket=testbucket 'test_file_1001.txt'
>>
>> Try:
>>
>> $ radosgw-admin object unlink --bucket=testbucket --object='test_file_1001.txt'
>>
>> Yehuda
>>
>>> The file still exists in the bucket list:
>>> s3 -u list testbucket | grep 'test_file_1001.txt'
>>> test_file_1001.txt  2013-10-11T11:46:54Z  5
>>>
>>> ceph -v
>>> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
>>>
>>> --
>>> Regards
>>> Dominik
>
> --
> Pozdrawiam
> Dominik
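For readers following the thread, a minimal shell sketch of the flow described above, using the bucket and object names from this thread; the gc subcommands are stock radosgw-admin, but treat this as a sketch and try it on a test setup first:

# drop the bucket-index entry for the object
radosgw-admin object unlink --bucket=testbucket --object='test_file_1001.txt'

# any 'tail' data is queued for garbage collection rather than deleted immediately;
# list pending gc entries (including ones whose grace period has not expired yet)
radosgw-admin gc list --include-all

# optionally trigger a garbage-collection pass now instead of waiting for the timer
radosgw-admin gc process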
Re: [ceph-users] radosgw-admin object unlink
hmm, 'tail' - do you mean the file/object content? I thought this command might be a workaround for the 'blind bucket' issue. Am I wrong?

--
Regards
Dominik

2013/10/13 Yehuda Sadeh:
> This sends the objects through the removal process. If the object has
> a 'tail', then it's going to be removed later by the garbage
> collector.
>
> On Sat, Oct 12, 2013 at 11:02 PM, Dominik Mostowiec wrote:
>> Thanks :-)
>>
>> This command removes the object from the rgw index (it does not just mark it as removed)?
>>
>> --
>> Regards
>> Dominik
>>
>> 2013/10/13 Yehuda Sadeh:
>>> On Sat, Oct 12, 2013 at 4:00 PM, Dominik Mostowiec wrote:
>>>> Hi,
>>>> How does radosgw-admin object unlink work?
>>>>
>>>> After:
>>>> radosgw-admin object unlink --bucket=testbucket 'test_file_1001.txt'
>>>
>>> Try:
>>>
>>> $ radosgw-admin object unlink --bucket=testbucket --object='test_file_1001.txt'
>>>
>>> Yehuda
>>>
>>>> The file still exists in the bucket list:
>>>> s3 -u list testbucket | grep 'test_file_1001.txt'
>>>> test_file_1001.txt  2013-10-11T11:46:54Z  5
>>>>
>>>> ceph -v
>>>> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
>>>>
>>>> --
>>>> Regards
>>>> Dominik
>>
>> --
>> Pozdrawiam
>> Dominik

--
Pozdrawiam
Dominik
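The 'tail' here means the extra RADOS objects that hold an RGW object's data beyond its head object, not the bucket-index entry. A hedged way to inspect it, assuming the default .rgw.buckets data pool and a radosgw-admin build that has the 'object stat' subcommand:

# print the object's metadata and manifest; the manifest describes the tail parts, if any
radosgw-admin object stat --bucket=testbucket --object='test_file_1001.txt'

# the head RADOS object (and, depending on upload type, any shadow/tail objects)
# lives in the data pool; a small 5-byte file like this one may have no tail at all
rados -p .rgw.buckets ls | grep test_file_1001.txt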
[ceph-users] Using ceph with hadoop error
hi all:

I configured Ceph with the Hadoop system; when I run the command

# hadoop fs -ls

it returns the following:

Exception in thread "main" java.lang.NoClassDefFoundError: com/ceph/fs/CephFileAlreadyExistsException
        at java.lang.Class.forName0(Native Method)
        ...
Caused by: java.lang.ClassNotFoundException: com.ceph.fs.CephFileAlreadyExistsException
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at ...

What mistake did I make?

thank you!

pengft
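The ClassNotFoundException means Hadoop's JVM cannot find libcephfs.jar, which is where com.ceph.fs.CephFileAlreadyExistsException lives. A minimal sketch of the usual fix follows; HADOOP_HOME and the jar path below are assumptions, so substitute wherever your build actually installed them:

# make the Ceph Java bindings and the Hadoop-CephFS shim visible to Hadoop
cp /usr/local/share/java/libcephfs.jar "$HADOOP_HOME/lib/"   # install path is an assumption
cp hadoop-cephfs.jar "$HADOOP_HOME/lib/"

# or, instead of copying, extend the classpath in conf/hadoop-env.sh
export HADOOP_CLASSPATH=/usr/local/share/java/libcephfs.jar:$HADOOP_CLASSPATH

# the JNI library also has to be resolvable from Hadoop's native library path
ls -l "$HADOOP_HOME"/lib/native/Linux-amd64-64/libcephfs_jni.so

Restart the Hadoop daemons after changing the classpath so they pick it up.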
[ceph-users] osd down after server failure
Hi,

I had a server failure that started with a single disk failure:

Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.023986] sd 4:2:26:0: [sdaa] Unhandled error code
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.023990] sd 4:2:26:0: [sdaa] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.023995] sd 4:2:26:0: [sdaa] CDB: Read(10): 28 00 00 00 00 d0 00 00 10 00
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.024005] end_request: I/O error, dev sdaa, sector 208
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.024744] XFS (sdaa): metadata I/O error: block 0xd0 ("xfs_trans_read_buf") error 5 buf count 8192
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.025879] XFS (sdaa): xfs_imap_to_bp: xfs_trans_read_buf() returned error 5.
Oct 14 03:25:28 s3-10-177-64-6 kernel: [1027260.820288] XFS (sdaa): metadata I/O error: block 0xd0 ("xfs_trans_read_buf") error 5 buf count 8192
Oct 14 03:25:28 s3-10-177-64-6 kernel: [1027260.821194] XFS (sdaa): xfs_imap_to_bp: xfs_trans_read_buf() returned error 5.
Oct 14 03:25:32 s3-10-177-64-6 kernel: [1027264.667851] XFS (sdaa): metadata I/O error: block 0xd0 ("xfs_trans_read_buf") error 5 buf count 8192

This made the server unresponsive. After a server restart, 3 of its 26 OSDs are down. The ceph-osd log, after setting "debug osd = 10" and restarting, shows:

2013-10-14 06:21:23.141936 7fdeb4872700 -1 osd.47 43203 *** Got signal Terminated ***
2013-10-14 06:21:23.142141 7fdeb4872700 -1 osd.47 43203 pausing thread pools
2013-10-14 06:21:23.142146 7fdeb4872700 -1 osd.47 43203 flushing io
2013-10-14 06:21:25.406187 7f02690f9780 0 filestore(/vol0/data/osd.47) mount FIEMAP ioctl is supported and appears to work
2013-10-14 06:21:25.406204 7f02690f9780 0 filestore(/vol0/data/osd.47) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-10-14 06:21:25.406557 7f02690f9780 0 filestore(/vol0/data/osd.47) mount did NOT detect btrfs
2013-10-14 06:21:25.412617 7f02690f9780 0 filestore(/vol0/data/osd.47) mount syncfs(2) syscall fully supported (by glibc and kernel)
2013-10-14 06:21:25.412831 7f02690f9780 0 filestore(/vol0/data/osd.47) mount found snaps <>
2013-10-14 06:21:25.415798 7f02690f9780 0 filestore(/vol0/data/osd.47) mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-10-14 06:21:26.078377 7f02690f9780 2 osd.47 0 mounting /vol0/data/osd.47 /vol0/data/osd.47/journal
2013-10-14 06:21:26.080872 7f02690f9780 0 filestore(/vol0/data/osd.47) mount FIEMAP ioctl is supported and appears to work
2013-10-14 06:21:26.080885 7f02690f9780 0 filestore(/vol0/data/osd.47) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-10-14 06:21:26.081289 7f02690f9780 0 filestore(/vol0/data/osd.47) mount did NOT detect btrfs
2013-10-14 06:21:26.087524 7f02690f9780 0 filestore(/vol0/data/osd.47) mount syncfs(2) syscall fully supported (by glibc and kernel)
2013-10-14 06:21:26.087582 7f02690f9780 0 filestore(/vol0/data/osd.47) mount found snaps <>
2013-10-14 06:21:26.089614 7f02690f9780 0 filestore(/vol0/data/osd.47) mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-10-14 06:21:26.726676 7f02690f9780 2 osd.47 0 boot
2013-10-14 06:21:26.726773 7f02690f9780 10 osd.47 0 read_superblock sb(16773c25-5054-4451-bf9f-efc1f7f21b89 osd.47 63cf7d70-99cb-0ab1-4006-002f e43203 [41261,43203] lci=[43194,43203])
2013-10-14 06:21:26.726862 7f02690f9780 10 osd.47 0 add_map_bl 43203 82622 bytes
2013-10-14 06:21:26.727184 7f02690f9780 10 osd.47 43203 load_pgs
2013-10-14 06:21:26.727643 7f02690f9780 10 osd.47 43203 load_pgs ignoring unrecognized meta
2013-10-14 06:21:26.727681 7f02690f9780 10 osd.47 43203 load_pgs 3.df1_TEMP clearing temp

osd.47 is still down, so I marked it out of the cluster:

47 1 osd.47 down 0

How can I check what is wrong?

ceph -v
ceph version 0.56.6 (95a0bda7f007a33b0dc7adf4b330778fa1e5d70c)

--
Pozdrawiam
Dominik
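A hedged checklist for a case like this; the device (sdaa) and OSD id (47) are taken from the logs above, and the assumption that sdaa backs one of the down OSDs may not hold on your host. Only run xfs_repair against an unmounted filesystem:

# confirm whether the disk itself is failing
dmesg | grep sdaa
smartctl -a /dev/sdaa

# dry-run XFS check on the unmounted OSD disk (drop -n to actually repair)
umount /vol0/data/osd.47        # only if it is still mounted
xfs_repair -n /dev/sdaa

# cluster view of the OSD, then try a manual start and read its log
ceph osd tree | grep osd.47
service ceph start osd.47
tail -n 200 /var/log/ceph/ceph-osd.47.log

If the disk is genuinely dead, the usual path is to leave the OSD out, replace the drive, and re-create the OSD rather than fight the filesystem.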
[ceph-users] 2013-10-14 14:42:23 auto-saved draft (Ceph with Hadoop setup)
hi all,

I followed this mail to configure Ceph with Hadoop
(http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/1809).

1. Install the additional packages libcephfs-java and libcephfs-jni using the commands:
   ./configure --enable-cephfs-java
   make && make install
   cp /src/java/libcephfs.jar /usr/hadoop/lib/
2. Download http://ceph.com/download/hadoop-cephfs.jar and copy it:
   cp hadoop-cephfs.jar /usr/hadoop/lib
3. Symlink the JNI library:
   cd /usr/hadoop/lib/native/Linux-amd64-64
   ln -s /usr/local/lib/libcephfs_jni.so .
4. vim core-site.xml and set:
   fs.default.name=ceph://192.168.22.158:6789/
   fs.ceph.impl=org.apache.hadoop.fs.ceph.CephFileSystem
   ceph.conf.file=/etc/ceph/ceph.conf

And then:

# hadoop fs -ls
ls: cannot access .: No such file or directory

# hadoop dfsadmin -report
report: FileSystem ceph://192.168.22.158:6789 is not a distributed file system
Usage: java DFSAdmin [-report]

thanks

pengft
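In case the flattened settings above are hard to read, here is a minimal shell sketch of the same core-site.xml properties plus a classpath check; the monitor address and paths are the ones from this mail, and everything else assumes a stock Hadoop 1.x layout under /usr/hadoop:

# sketch only: merge these properties into your existing core-site.xml rather than overwriting it
cat > /usr/hadoop/conf/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>ceph://192.168.22.158:6789/</value>
  </property>
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
  <property>
    <name>ceph.conf.file</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>
</configuration>
EOF

# both jars must be on Hadoop's classpath for the ceph:// scheme to resolve
ls /usr/hadoop/lib/libcephfs.jar /usr/hadoop/lib/hadoop-cephfs.jar

# 'hadoop dfsadmin -report' only works against HDFS, so the "is not a
# distributed file system" message is expected with a ceph:// default FS;
# test with an explicit path instead (a bare 'hadoop fs -ls' fails when the
# user's home directory does not exist on CephFS)
hadoop fs -ls /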