Re: [ceph-users] multi-mds and directory sharding

2014-04-14 Thread Dan Van Der Ster
  Hi, On 14 Apr 2014 at 00:43:01, Yan, Zheng (uker...@gmail.com(mailto:uker...@gmail.com)) wrote: > On Mon, Apr 14, 2014 at 2:54 AM, Qing Zheng wrote: > > Hi - > > > > We are currently evaluating CephFS's metadata scalability > > and performance. One important feature of CephFS is its s

[ceph-users] Create a volume from a img

2014-04-14 Thread 常乐
Hi all, I am trying to boot from a block device in OpenStack. So far I have my Ceph cluster ready, and OpenStack Glance, Keystone, Nova and Cinder all work. What I can do now is create an image and save it in the Ceph pool 'images'. I can also create a volume without specifying an image ID and attach
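
A minimal sketch of the usual OpenStack workflow for this, assuming the image is already registered in Glance and the cinder/nova command-line clients are available (IDs, names and the 10 GB size below are placeholders):

    # find the image ID in Glance
    glance image-list
    # create a bootable volume from that image (backed by the Ceph volumes pool via Cinder)
    cinder create --image-id <IMAGE_ID> --display-name boot-vol 10
    # boot an instance from the volume once it reaches 'available'
    nova boot --flavor m1.small --block-device-mapping vda=<VOLUME_ID>:::0 test-vm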

[ceph-users] Upgrading from Dumpling to Emperor

2014-04-14 Thread Stanislav Yanchev
Hello, I have a question about upgrading from the latest Dumpling version to the latest Emperor version of ceph, and mostly about not hitting bug 6761 because I'm upgrading a ceph cluster in production. Or maybe a workaround like upgrading the OSDs first and then the MONs to Emperor. Regards,
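
For context, the upgrade order the release notes generally describe is monitors first, then OSDs (then MDS/RGW); a rough sketch assuming an Ubuntu/upstart installation, with hostnames and OSD ids as placeholders:

    # on each monitor node, one at a time
    apt-get update && apt-get install -y ceph
    restart ceph-mon id=$(hostname -s)
    # on each OSD node, one at a time, waiting for HEALTH_OK in between
    apt-get install -y ceph
    restart ceph-osd id=2
    ceph -s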

Re: [ceph-users] OSD: GPT Partition for journal on different partition ?

2014-04-14 Thread Christian Balzer
Hello, On Sat, 12 Apr 2014 09:17:05 -0700 Sage Weil wrote: > Hi Florent, > > GPT partitions are required if the udev-based magic is going to work. > If you opt out of that strategy, you need to mount your file systems > using fstab or similar and start the daemons manually. > THAT would have be
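
A sketch of the manual (non-udev) approach Sage describes, assuming a sysvinit-managed cluster, an XFS data partition, a separate journal partition, and osd.0 defined in ceph.conf (device names are placeholders):

    # /etc/fstab
    /dev/sdb1   /var/lib/ceph/osd/ceph-0   xfs   noatime   0 0

    # point the OSD at its journal partition, then start the daemon by hand
    ln -sf /dev/sdc1 /var/lib/ceph/osd/ceph-0/journal
    mount /var/lib/ceph/osd/ceph-0
    service ceph start osd.0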

[ceph-users] federated gateways can sync files with English filename but can't sync files with Chinese filename

2014-04-14 Thread wsnote
Hi, everyone! After several days' attempts, the files can sync between two zones. But now I find another question: only files with English filenames were synced; files with Chinese filenames were not. Has anyone come across the same issue? Thanks! -

[ceph-users] CephS3 and s3fs.

2014-04-14 Thread Ирек Фасихов
Hi, all. Does anyone have experience with s3fs + CephS3? It shows an error when uploading a file: kataklysm@linux-41gj:~> s3fs infas /home/kataklysm/s3/ -o url="http://s3.x-.ru" kataklysm@linux-41gj:~> rsync -av --progress temp/ s3 sending incremental file list rsync: failed to set time

Re: [ceph-users] Federated gateways

2014-04-14 Thread Peter Tiernan
i have the following in ceph.conf: [client.radosgw.gateway] host = cephgw keyring = /etc/ceph/keyring.radosgw.gateway rgw print continue = false rgw region = us rgw region root pool = .us.rgw.root rgw zone = us-master rgw zone root pool = .us-master.rgw.root rgw dns name = cephgw
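
Laid out as it would appear in ceph.conf, the gateway section Peter quotes is (the final line may be cut off by the archive preview):

    [client.radosgw.gateway]
        host = cephgw
        keyring = /etc/ceph/keyring.radosgw.gateway
        rgw print continue = false
        rgw region = us
        rgw region root pool = .us.rgw.root
        rgw zone = us-master
        rgw zone root pool = .us-master.rgw.root
        rgw dns name = cephgw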

Re: [ceph-users] multi-mds and directory sharding

2014-04-14 Thread Yan, Zheng
On Mon, Apr 14, 2014 at 3:16 PM, Dan Van Der Ster wrote: > > Hi, > > On 14 Apr 2014 at 00:43:01, Yan, Zheng > (uker...@gmail.com(mailto:uker...@gmail.com)) wrote: >> On Mon, Apr 14, 2014 at 2:54 AM, Qing Zheng wrote: >> > Hi - >> > >> > We are currently evaluating CephFS's metadata scalability >>

Re: [ceph-users] Federated gateways

2014-04-14 Thread Peter
Here is log output for request to gateway: 2014-04-14 12:39:20.547012 7f1377aa97c0 20 enqueued request req=0x8ca280 2014-04-14 12:39:20.547036 7f1377aa97c0 20 RGWWQ: 2014-04-14 12:39:20.547038 7f1377aa97c0 20 req: 0x8ca280 2014-04-14 12:39:20.547044 7f1377aa97c0 10 allocated request req=0x8a6d30

Re: [ceph-users] Upgrading from Dumpling to Emperor

2014-04-14 Thread Gregory Farnum
That bug was resolved a long time ago; as long as you're using one of the Emperor point releases you'll be fine. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Mon, Apr 14, 2014 at 1:46 AM, Stanislav Yanchev wrote: > Hello, I have a question about upgrading from the latest

Re: [ceph-users] Upgrading from Dumpling to Emperor

2014-04-14 Thread Stanislav Yanchev
Thanks for the information. Regards, Stanislav Yanchev Core System Administrator s.yanc...@maxtelecom.bg www.maxtelecom.bg From: Gregory Farnum [mailto:g...@inktank.com] Sent: Monday, April 14, 2014 4:

[ceph-users] Fwd: RadosGW: bad request

2014-04-14 Thread Gandalf Corvotempesta
-- Forwarded message -- From: Gandalf Corvotempesta Date: 2014-04-09 14:31 GMT+02:00 Subject: Re: [ceph-users] RadosGW: bad request To: Yehuda Sadeh Cc: "ceph-users@lists.ceph.com" 2014-04-07 20:24 GMT+02:00 Yehuda Sadeh : > Try bumping up logs (debug rgw = 20, debug ms = 1). N
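
Yehuda's suggestion amounts to adding the two debug options to the gateway's section of ceph.conf and restarting radosgw; the section name below is taken from the federated-gateway thread above and may differ in this setup:

    [client.radosgw.gateway]
        debug rgw = 20
        debug ms = 1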

Re: [ceph-users] OSD: GPT Partition for journal on different partition ?

2014-04-14 Thread Sage Weil
On Mon, 14 Apr 2014, Christian Balzer wrote: > > Hello, > > On Sat, 12 Apr 2014 09:17:05 -0700 Sage Weil wrote: > > > Hi Florent, > > > > GPT partitions are required if the udev-based magic is going to work. > > If you opt out of that strategy, you need to mount your file systems > > using fsta

[ceph-users] Ceph / OpenStack Integration Volume Statistics

2014-04-14 Thread Dan Ryder (daryder)
Hello, My team is working on Ceph and OpenStack integration, trying to get volume usage statistics as well as I/O and latency for volumes. I've found that through the "virsh" command we should be able to get these stats. However, with the "virsh domblkinfo" command, we are getting a problem - "Bad file
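
For what it's worth, the per-disk I/O counters usually come from domblkstat rather than domblkinfo; a sketch, with the domain name and disk target as placeholders:

    # capacity/allocation of one disk (the call reported as failing here)
    virsh domblkinfo instance-0000001a vda
    # read/write request and byte counters for the same disk
    virsh domblkstat instance-0000001a vda
    # list the domain's disk targets and their backing sources
    virsh domblklist instance-0000001a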

Re: [ceph-users] OSD location reset after restart

2014-04-14 Thread Wido den Hollander
On 04/14/2014 04:27 PM, Kenneth Waegeman wrote: Hi all, I had set the crushmap by generating a crushmap file, compiling it with crushtool and setting it in place with 'ceph osd setcrushmap'. For testing, I grouped the disks by type, using it for different pools. So I have 'root=default-sas' and

[ceph-users] OSD location reset after restart

2014-04-14 Thread Kenneth Waegeman
Hi all, I had set the crushmap by generating a crushmap file, compiling it with crushtool and setting it in place with 'ceph osd setcrushmap'. For testing, I grouped the disks by type, using it for different pools. So I have 'root=default-sas' and 'root=default-ssd', and host cephxxx-sas
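
For reference, the decompile/edit/compile/inject cycle described above, plus the ceph.conf knob that usually matters when OSDs jump back to the default buckets on restart (the config suggestion is not something stated in this message):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt      # decompile, then edit crush.txt
    crushtool -c crush.txt -o crush.new.bin
    ceph osd setcrushmap -i crush.new.bin

    # ceph.conf: stop OSDs from re-registering under the default root at startup
    [osd]
        osd crush update on start = false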

Re: [ceph-users] [Ceph-community] Ceph with cinder-volume integration failure

2014-04-14 Thread Andrew Woodward
Matteo, The question is better asked on the ceph-users list. That aside, you can reference the cinder-volume config from the puppet-cinder module [1] to see what we typically do to get Ceph working with Cinder. The usual cause of your issue is the lack of the 'export CEPH_ARGS="--id ceph"' in your
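
Concretely, that export usually lives in the environment of the cinder-volume service, so the rbd commands run as the intended client rather than client.admin; the client id is whatever matches the keyring deployed on the cinder node:

    # added to the cinder-volume init script's environment
    export CEPH_ARGS="--id ceph"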

Re: [ceph-users] Federated gateways

2014-04-14 Thread Craig Lewis
2014-04-14 12:39:20.556085 7f133f7ee700 10 auth_hdr: GET x-amz-date:Mon, 14 Apr 2014 11:39:01 + / 2014-04-14 12:39:20.556125 7f133f7ee700 15 *calculated digest=TQ5LP8ZeufSqKLumak6Aez4o+Pg=* 2014-04-14 12:39:20.556127 7f133f7ee700 15 *auth_sign=hx94rY3BJn7HQKA6ERaksNMQPRs=* 2014-04-14 1

[ceph-users] Migrate from mkcephfs to ceph-deploy

2014-04-14 Thread Mike Dawson
Hello, I have a production cluster that was deployed with mkcephfs around the Bobtail release. Quite a bit has changed in regards to ceph.conf conventions, ceph-deploy, symlinks to journal partitions, udev magic, and upstart. Is there any path to migrate these OSDs up to the new style setup?

Re: [ceph-users] CephFS MDS manual deployment

2014-04-14 Thread Gregory Farnum
On Thu, Apr 10, 2014 at 7:27 PM, Adam Clark wrote: > Wow, that was quite simple > > mkdir /var/lib/ceph/mds/ceph-0 > ceph auth get-or-create mds.0 mds 'allow' osd 'allow *' mon 'allow *' > > /var/lib/ceph/mds/ceph-0/keyring > ceph-mds --id 0 > > mount -t ceph ceph-mon01:6789:/ /mnt -o name=admin,
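
Unwrapped from the reply quoting, Adam's by-hand MDS bring-up was roughly the following; the secretfile option at the end is a guess at what the archive truncates after 'name=admin,':

    mkdir /var/lib/ceph/mds/ceph-0
    ceph auth get-or-create mds.0 mds 'allow' osd 'allow *' mon 'allow *' \
        > /var/lib/ceph/mds/ceph-0/keyring
    ceph-mds --id 0
    mount -t ceph ceph-mon01:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret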

Re: [ceph-users] atomic + asynchr

2014-04-14 Thread Gregory Farnum
You just need to wait for the ondisk or complete ack in whatever interface you choose. It won't come back until the data is persisted to all extant copies. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Mon, Apr 7, 2014 at 4:08 PM, Steven Paster wrote: > I am using the Cep

Re: [ceph-users] ceph hbase issue

2014-04-14 Thread Gregory Farnum
This looks like some kind of HBase issue to me (which I can't help with; I've never used it), but I guess if I were looking at Ceph I'd check if it was somehow configured such that the needed files are located in different pools (or other separate security domains) that might be set up wrong. -Greg

Re: [ceph-users] ceph hbase issue

2014-04-14 Thread Noah Watkins
This strikes me as a difference in semantics between HDFS and CephFS, and like Greg said it's probably based on HBase assumptions. It'd be really helpful to find out what the exception is. If you are building the Hadoop bindings from scratch, you can instrument `listStatus` in `CephFileSystem.java`

Re: [ceph-users] pg incomplete, won't create

2014-04-14 Thread Craig Lewis
Once the OSDs drained, the PG stayed in state incomplete. When I stopped the out OSDs, the PG went to state down+peering. After marking the down OSDs lost, the PG went to state down+incomplete. After running ceph pg force_create_pg 11.483, the PG went to state creating. It stayed that way
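
For anyone retracing these steps, the sequence maps to commands like the following; the PG id is from this message, the OSD id is a placeholder, and marking OSDs lost is destructive, so it only makes sense once the data is known to be unrecoverable:

    # after stopping the drained OSDs
    ceph osd lost 12 --yes-i-really-mean-it
    ceph pg force_create_pg 11.483
    ceph pg 11.483 query      # check whether it ever leaves 'creating'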

[ceph-users] Radosgw and s3cmd

2014-04-14 Thread Shashank Puntamkar
I am setting up a Ceph cluster with Amazon S3-like capabilities. I have configured the Ceph Object Gateway, radosgw, on Ubuntu 12.04 as described in the Ceph documentation (http://ceph.com/docs/master/radosgw/config/). While I am testing it with the s3cmd tool, I am getting an error, though the command "s3cmd ls"
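
A minimal ~/.s3cfg for pointing s3cmd at a radosgw endpoint looks roughly like this; keys and hostnames are placeholders, and host_base/host_bucket must match the gateway's 'rgw dns name':

    [default]
    access_key = <RGW_ACCESS_KEY>
    secret_key = <RGW_SECRET_KEY>
    host_base = cephgw.example.com
    host_bucket = %(bucket)s.cephgw.example.com
    use_https = False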

Re: [ceph-users] Radosgw and s3cmd

2014-04-14 Thread Yehuda Sadeh
On Mon, Apr 14, 2014 at 10:12 PM, Shashank Puntamkar wrote: > I am setting up a ceph cluster with amazon s3 like capabilities. > I have configured Ceph Object Gateway, radowgw on Ubuntu12.04 as > described in ceph > documentation.(http://ceph.com/docs/master/radosgw/config/). > > While I am testi

Re: [ceph-users] Migrate from mkcephfs to ceph-deploy

2014-04-14 Thread Stanislav Yanchev
Hi Mike, We have just upgraded our ceph cluster yesterday from Bobtail to Emperor. Hopefully till the end of the week we'll deploy new nodes and add SSD journals to the current ones. Then I could share what we hit as a problem. Regards, Stanislav Yanchev Core System Administrator s.yanc...@ma