Re: [ceph-users] radosgw-admin doesn't list user anymore

2013-10-14 Thread Derek Yarnell
> root@ineri:~# radosgw-admin user info > could not fetch user info: no user info saved Hi Valery, You need to use radosgw-admin metadata list user Thanks, derek -- --- Derek T. Yarnell University of Maryland Institute for Advanced Computer Studies
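
A minimal sketch of the lookup Derek describes, against the 0.67.x radosgw discussed in this thread; the uid "johndoe" is a placeholder:

    # List all user ids known to this radosgw
    radosgw-admin metadata list user

    # Then fetch the details of one of them
    radosgw-admin user info --uid=johndoe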

Re: [ceph-users] osd down after server failure

2013-10-14 Thread Dong Yuan
From your information, the osd log ended with "2013-10-14 06:21:26.727681 7f02690f9780 10 osd.47 43203 load_pgs 3.df1_TEMP clearing temp". That means the osd is loading all PG directories from the disk. If there is any I/O error (disk or xfs error), the process couldn't finish. Suggest resta
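
A rough sketch of the checks implied here before retrying the osd; the osd id matches the log line above, while the log paths are assumptions:

    # Look for disk or xfs errors logged around the time load_pgs hung
    dmesg | grep -iE 'xfs|i/o error'
    tail -n 100 /var/log/ceph/ceph-osd.47.log

    # If the disk and filesystem look healthy, try starting the osd again
    service ceph start osd.47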

Re: [ceph-users] qemu-kvm with rbd mem slow leak

2013-10-14 Thread Josh Durgin
On 10/13/2013 07:43 PM, alan.zhang wrote: CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz *2 MEM: 32GB KVM: qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64 Host: CentOS 6.4, kernel 2.6.32-358.14.1.el6.x86_64 Guest: CentOS 6.4, kernel 2.6.32-279.14.1.el6.x86_64 Ceph: ceph version 0.67.4

Re: [ceph-users] Speed limit on RadosGW?

2013-10-14 Thread Chu Duc Minh
My cluster has 3 MON nodes & 6 DATA nodes, all nodes have 2Gbps connectivity (bonding). Each data node has 14 SATA HDDs (osds), each journal on the same disk as its OSD. Each MON node runs RadosGW too. On Oct 15, 2013 12:34 AM, "Kyle Bader" wrote: > I've personally saturated 1Gbps links on multiple ra

Re: [ceph-users] Using Hadoop With Cephfs

2013-10-14 Thread Noah Watkins
Hi Kai, It doesn't look like there is anything Ceph specific in the Java backtrace you posted. Does your installation work with HDFS? Are there any logs where an error is occurring with the Ceph plugin? Thanks, Noah On Mon, Oct 14, 2013 at 4:34 PM, log1024 wrote: > Hi, > I have a 4-node Ceph clu

[ceph-users] Using Hadoop With Cephfs

2013-10-14 Thread log1024
Hi, I have a 4-node Ceph cluster (2 mon, 1 mds, 2 osd) and a Hadoop node. Currently, I'm trying to replace HDFS with CephFS. I followed the instructions in USING HADOOP WITH CEPHFS. But every time I run bin/start-all.sh to run Hadoop, it failed with: starting namenode, logging to /usr/local/had

Re: [ceph-users] Full OSD with 29% free

2013-10-14 Thread Bryan Stillwell
The filesystem isn't as full now, but the fragmentation is pretty low: [root@den2ceph001 ~]# df /dev/sdc1 Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdc1 486562672 270845628 215717044 56% /var/lib/ceph/osd/ceph-1 [root@den2ceph001 ~]# xfs_db -c frag -r /dev

Re: [ceph-users] kvm live migrate with ceph

2013-10-14 Thread Michael Lowe
I live migrate all the time using the rbd driver in qemu, no problems. Qemu will issue a flush as part of the migration so everything is consistent. It's the right way to use ceph to back VMs. I would strongly recommend against a network file system approach. You may want to look into format
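
A minimal sketch of the migration itself, assuming the guest's disk is already defined in libvirt as an rbd network disk and both hypervisors reach the same Ceph cluster; the domain and host names are placeholders:

    # Live-migrate the guest; qemu flushes outstanding writes as part of the move
    virsh migrate --live myvm qemu+ssh://hypervisor2/system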

Re: [ceph-users] Full OSD with 29% free

2013-10-14 Thread Michael Lowe
How fragmented is that file system? Sent from my iPad > On Oct 14, 2013, at 5:44 PM, Bryan Stillwell > wrote: > > This appears to be more of an XFS issue than a ceph issue, but I've > run into a problem where some of my OSDs failed because the filesystem > was reported as full even though ther

[ceph-users] Full OSD with 29% free

2013-10-14 Thread Bryan Stillwell
This appears to be more of an XFS issue than a ceph issue, but I've run into a problem where some of my OSDs failed because the filesystem was reported as full even though there was 29% free: [root@den2ceph001 ceph-1]# touch blah touch: cannot touch `blah': No space left on device [root@den2ceph00
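
A short sketch of the checks that usually explain an XFS filesystem reporting ENOSPC while df still shows free space; the mount point and device follow the df output later in this thread:

    df -h /var/lib/ceph/osd/ceph-1    # free blocks
    df -i /var/lib/ceph/osd/ceph-1    # free inodes - ENOSPC can also mean inode exhaustion
    xfs_db -c frag -r /dev/sdc1       # fragmentation factor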

[ceph-users] qemu-kvm with rbd mem slow leak

2013-10-14 Thread alan.zhang
CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz *2 MEM: 32GB KVM: qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64 Host: CentOS 6.4, kernel 2.6.32-358.14.1.el6.x86_64 Guest: CentOS 6.4, kernel 2.6.32-279.14.1.el6.x86_64 Ceph: ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)

[ceph-users] kvm live migrate with ceph

2013-10-14 Thread Jon
Hello, I would like to live migrate a VM between two "hypervisors". Is it possible to do this with an rbd disk, or should the vm disks be created as qcow images on a CephFS/NFS share (is it possible to do clvm over rbds? Or GlusterFS over rbds?) and point kvm at the network directory? As I understa
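
For context, a hypothetical example of attaching an rbd image directly to a KVM guest, with no qcow2 file or shared filesystem in between; pool and image names are placeholders:

    qemu-system-x86_64 -m 2048 \
        -drive file=rbd:rbd/vm-disk-1,format=raw,cache=writeback,if=virtio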

[ceph-users] Production locked: OSDs down

2013-10-14 Thread Mikaël Cluseau
Hi, I have a pretty big problem here... my OSDs are marked down (except one?!). I have ceph version 0.61.8 (a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b). I recently had full monitors, so I had to remove them, but it seemed to work. # id weight type name up/down reweight -1 15
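
A rough sketch of the usual first steps when osds are flagged down; osd ids are placeholders:

    ceph -s                                 # overall health and monitor quorum
    ceph osd tree                           # which osds are down, and on which hosts
    service ceph start osd.12               # on that osd's host, try restarting it
    tail -f /var/log/ceph/ceph-osd.12.log   # and watch why it dies, if it does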

[ceph-users] xfs log device and osd journal specifications in ceph.conf

2013-10-14 Thread Snider, Tim
3 questions: 1. I'd like to use xfs devices with a separate log device in a ceph cluster. What's the best way to do this? Is it possible to specify xfs log devices in the [osd.x] sections of ceph.conf? e.g.: [osd.0] host = delta devs = /dev/sdx osd
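
A hedged sketch of the kind of [osd.x] stanza being asked about: the journal can point at its own device, and the mkfs/mount option keys are the usual place to hang an external XFS log. The extra device paths, and whether a given ceph version honors these exact keys, are assumptions:

    [osd.0]
        host = delta
        devs = /dev/sdx
        osd journal = /dev/sdy1
        osd mkfs options xfs = -l logdev=/dev/sdz1
        osd mount options xfs = rw,noatime,logdev=/dev/sdz1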

Re: [ceph-users] Speed limit on RadosGW?

2013-10-14 Thread Kyle Bader
I've personally saturated 1Gbps links on multiple radosgw nodes on a large cluster; if I remember correctly, Yehuda has tested it up into the 7Gbps range with 10Gbps gear. Could you describe your cluster's hardware and connectivity? On Mon, Oct 14, 2013 at 3:34 AM, Chu Duc Minh wrote: > Hi sorry

Re: [ceph-users] radosgw can still get the object even if this object's physical file is removed on OSDs

2013-10-14 Thread Yehuda Sadeh
On Mon, Oct 14, 2013 at 4:04 AM, david zhang wrote: > Hi ceph-users, > > I uploaded an object successfully to radosgw with 3 replicas. And I located > all the physical paths of 3 replicas on different OSDs. > > i.e., one of the 3 physical paths is > /var/lib/ceph/osd/ceph-2/current/3.5_head/DIR_D/d

Re: [ceph-users] osd down after server failure

2013-10-14 Thread Sage Weil
Is osd.47 the one with the bad disk? It should not start. If there are other osds on the same host that aren't started with 'service ceph start', you may have to mention them by name (the old version of the script would stop on the first error instead of continuing). e.g., service ceph start
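
For example, the surviving osds on that host could be started by name while leaving the one with the bad disk alone; the ids here are placeholders:

    service ceph start osd.48
    service ceph start osd.49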

Re: [ceph-users] 2013-10-14 14:42:23 auto-saved draft

2013-10-14 Thread Noah Watkins
Do you have the following in your core-site.xml? <property> <name>fs.ceph.impl</name> <value>org.apache.hadoop.fs.ceph.CephFileSystem</value> </property> On Sun, Oct 13, 2013 at 11:55 PM, 鹏 wrote: > hi all > I follow the mail configure the ceph with hadoop > (http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/180

Re: [ceph-users] using ceph with hadoop

2013-10-14 Thread Noah Watkins
The error below seems to indicate that Hadoop isn't aware of the `ceph://` file system. You'll need to manually add this to your core-site.xml: <property> <name>fs.ceph.impl</name> <value>org.apache.hadoop.fs.ceph.CephFileSystem</value> </property> > report: FileSystem ceph://192.168.22.158:6789 is not a distributed
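
Putting that together, a sketch of the core-site.xml entries for this setup; the monitor address is taken from the error above, and treating it as the default filesystem is an assumption about the intended configuration:

    <property>
      <name>fs.ceph.impl</name>
      <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
    </property>
    <property>
      <name>fs.default.name</name>
      <value>ceph://192.168.22.158:6789/</value>
    </property>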

Re: [ceph-users] Using ceph with hadoop error

2013-10-14 Thread Noah Watkins
On Sun, Oct 13, 2013 at 8:28 PM, 鹏 wrote: > hi all: > Exception in thread "main" java.lang.NoClassDefFoundError: > com/ceph/fs/cephFileAlreadyExisteException > at java.lang.class.forName0(Native Method) This looks like a bug, which I'll fixup today. But it shouldn't be related to
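
Missing jars can produce similar errors, so it is worth confirming the CephFS classes are on Hadoop's classpath at all; a sketch, with jar locations that are assumptions depending on how libcephfs-java and the plugin were installed:

    # Adjust paths to wherever the jars actually landed
    export HADOOP_CLASSPATH=/usr/share/java/libcephfs.jar:/usr/local/hadoop/lib/hadoop-cephfs.jar:$HADOOP_CLASSPATH
    # or copy both jars into $HADOOP_HOME/lib/ before restarting Hadoop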

[ceph-users] radosgw can still get the object even if this object's physical file is removed on OSDs

2013-10-14 Thread david zhang
Hi ceph-users, I uploaded an object successfully to radosgw with 3 replicas. And I located all the physical paths of the 3 replicas on different OSDs. i.e., one of the 3 physical paths is /var/lib/ceph/osd/ceph-2/current/3.5_head/DIR_D/default.4896.65\\u20131014\\u1__head_0646563D__3 Then I manually
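
One way to see what RADOS itself still knows about the object, independent of the files on any single OSD; the pool name and object name here are assumptions modeled on the path above:

    rados -p .rgw.buckets ls | grep default.4896.65   # find the rados object name
    rados -p .rgw.buckets stat default.4896.65_20131014_1
    rados -p .rgw.buckets get default.4896.65_20131014_1 /tmp/check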

Re: [ceph-users] Speed limit on RadosGW?

2013-10-14 Thread Chu Duc Minh
Hi sorry, I missed this mail. > During writes, does the CPU usage on your RadosGW node go way up? No, CPU stays the same & is very low (< 10%). When uploading small files (300KB/file) over RadosGW: - using 1 process: upload bandwidth ~ 3MB/s - using 100 processes: upload bandwidth ~ 15MB/s When upload
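
A rough sketch of the kind of parallel-upload test described above, using s3cmd against radosgw; the bucket name and file list are placeholders:

    # 100 concurrent uploads of small files
    ls smallfiles/ | xargs -P100 -I{} s3cmd put smallfiles/{} s3://testbucket/{}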

[ceph-users] using ceph with hadoop

2013-10-14 Thread
hi all I followed the mail to configure ceph with hadoop (http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/1809). 1. Install additional packages: libcephfs-java libcephfs-jni using the commands: ./configure --enable-cephfs-java make & make install
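
For clarity, the build steps referenced above run sequentially from a ceph source tree (a sketch; any other configure flags are omitted):

    ./configure --enable-cephfs-java
    make
    sudo make install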

Re: [ceph-users] radosgw-admin doesn't list user anymore

2013-10-14 Thread Valery Tschopp
We upgraded from 0.61.8 to 0.67.4. The metadata commands work for the users and the buckets: root@ineri ~$ radosgw-admin metadata list bucket [ "a4mesh", "61a75c04-34a5-11e3-9bea-8f8d15b5cf20", "6e22de72-34a5-11e3-afc4-d3f70b676c52", ... root@ineri ~$ radosgw-admin metadata list u
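
A single record can also be pulled straight from the metadata store; the uid "someuser" is a placeholder:

    radosgw-admin metadata get user:someuser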

Re: [ceph-users] osd down after server failure

2013-10-14 Thread Dominik Mostowiec
Hi I have found something. After the restart, the time was wrong on the server (+2 hours) before ntp fixed it. I restarted these 3 osds - it did not help. Is it possible that ceph banned these osds? Or has starting with the wrong time broken their filestore? -- Regards Dominik 2013/10/14 Dominik Mostowiec : > H
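
A quick sketch of how to check whether clock trouble is still in play, assuming ntpd is the time source mentioned above:

    ntpq -p                              # is ntp actually synced now?
    ceph health detail | grep -i clock   # any remaining clock skew warnings?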