Re: [ceph-users] RBD Export-Diff With Children Snapshots

2014-06-10 Thread Josh Durgin
On Fri, 6 Jun 2014 17:34:56 -0700 Tyler Wilson wrote: > Hey All, > > Simple question, does 'rbd export-diff' work with children snapshot > aka; > > root:~# rbd children images/03cb46f7-64ab-4f47-bd41-e01ced45f0b4@snap > compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk > compute/54f3b23c-facf-4
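export-diff can be pointed at a clone like any other image, so you can snapshot the child and ship only its changes. A minimal sketch with placeholder image, snapshot, and file names:

    # list clones of the protected snapshot, then export the child's changes
    rbd children images/03cb46f7-64ab-4f47-bd41-e01ced45f0b4@snap
    rbd snap create compute/child_disk@bak2
    rbd export-diff --from-snap bak1 compute/child_disk@bak2 child-bak2.diff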

Re: [ceph-users] Fwd: CEPH Multitenancy and Data Isolation

2014-06-10 Thread Josh Durgin
On 06/10/2014 01:56 AM, Vilobh Meshram wrote: How does CEPH guarantee data isolation for volumes which are not meant to be shared in a Openstack tenant? When used with OpenStack the data isolation is provided by the Openstack level so that all users who are part of same tenant will be able to ac

Re: [ceph-users] radosgw-agent failed to parse

2014-07-07 Thread Josh Durgin
On 07/04/2014 08:36 AM, Peter wrote: i am having issues running radosgw-agent to sync data between two radosgw zones. As far as i can tell both zones are running correctly. My issue is when i run the radosgw-agent command: radosgw-agent -v --src-access-key --src-secret-key --dest-access-key

Re: [ceph-users] Multipart upload on ceph 0.8 doesn't work?

2014-07-07 Thread Josh Durgin
On 07/07/2014 05:41 AM, Patrycja Szabłowska wrote: OK, the mystery is solved. From https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10368.html "During a multi part upload you can't upload parts smaller than 5M" I've tried to upload smaller chunks, like 10KB. I've changed chunk size to
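For reference, a hedged example of keeping each part at or above that 5 MB minimum with s3cmd (assuming s3cmd is already configured against the radosgw endpoint; bucket and file names are placeholders):

    s3cmd put --multipart-chunk-size-mb=5 big-object.bin s3://mybucket/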

Re: [ceph-users] question about librbd io

2014-09-10 Thread Josh Durgin
On 09/09/2014 07:06 AM, yuelongguang wrote: hi, josh.durgin: i want to know how librbd launch io request. use case: inside vm, i use fio to test rbd-disk's io performance. fio's parameters are bs=4k, direct io, qemu cache=none. in this case, if librbd just send what it gets from vm, i mean no ga
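A sketch of the kind of fio job being described, run inside the guest against the rbd-backed disk (device path is a placeholder):

    fio --name=randwrite --filename=/dev/vdb --rw=randwrite --bs=4k \
        --direct=1 --iodepth=16 --numjobs=1 --runtime=60 --time_based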

Re: [ceph-users] command to flush rbd cache?

2015-02-04 Thread Josh Durgin
On 02/05/2015 07:44 AM, Udo Lembke wrote: Hi all, is there any command to flush the rbd cache like the "echo 3 > /proc/sys/vm/drop_caches" for the os cache? librbd exposes it as rbd_invalidate_cache(), and qemu uses it internally, but I don't think you can trigger that via any user-facing qemu

Re: [ceph-users] wider rados namespace support?

2015-02-12 Thread Josh Durgin
On 02/10/2015 07:54 PM, Blair Bethwaite wrote: Just came across this in the docs: "Currently (i.e., firefly), namespaces are only useful for applications written on top of librados. Ceph clients such as block device, object storage and file system do not currently support this feature." Then fou

Re: [ceph-users] FreeBSD on RBD (KVM)

2015-02-18 Thread Josh Durgin
> From: "Logan Barfield" > We've been running some tests to try to determine why our FreeBSD VMs > are performing much worse than our Linux VMs backed by RBD, especially > on writes. > > Our current deployment is: > - 4x KVM Hypervisors (QEMU 2.0.0+dfsg-2ubuntu1.6) > - 2x OSD nodes (8x SSDs each,

Re: [ceph-users] wider rados namespace support?

2015-02-18 Thread Josh Durgin
'allow r class-read pool=foo namespace="" object_prefix rbd_id, allow rwx pool=foo namespace=bar' Cinder or other management layers would still want broader access, but these more restricted keys could be the only ones exposed to QEMU. Josh On 13 February 2015 at 05:57, Josh Durgin
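A sketch of creating such a restricted key with those caps (the client name and the pool/namespace names are placeholders):

    ceph auth get-or-create client.qemu-bar mon 'allow r' \
        osd 'allow r class-read pool=foo namespace="" object_prefix rbd_id, allow rwx pool=foo namespace=bar'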

Re: [ceph-users] v0.80.8 and librbd performance

2015-03-04 Thread Josh Durgin
On 03/03/2015 03:28 PM, Ken Dreyer wrote: On 03/03/2015 04:19 PM, Sage Weil wrote: Hi, This is just a heads up that we've identified a performance regression in v0.80.8 from previous firefly releases. A v0.80.9 is working it's way through QA and should be out in a few days. If you haven't upg

Re: [ceph-users] qemu-kvm and cloned rbd image

2015-03-04 Thread Josh Durgin
On 03/02/2015 04:16 AM, koukou73gr wrote: Hello, Today I thought I'd experiment with snapshots and cloning. So I did: rbd import --image-format=2 vm-proto.raw rbd/vm-proto rbd snap create rbd/vm-proto@s1 rbd snap protect rbd/vm-proto@s1 rbd clone rbd/vm-proto@s1 rbd/server And then proceeded

Re: [ceph-users] qemu-kvm and cloned rbd image

2015-03-04 Thread Josh Durgin
On 03/04/2015 01:36 PM, koukou73gr wrote: On 03/03/2015 05:53 PM, Jason Dillaman wrote: Your procedure appears correct to me. Would you mind re-running your cloned image VM with the following ceph.conf properties: [client] rbd cache off debug rbd = 20 log file = /path/writeable/by/qemu.$pid.lo

Re: [ceph-users] qemu-kvm and cloned rbd image

2015-03-05 Thread Josh Durgin
On 03/05/2015 12:46 AM, koukou73gr wrote: On 03/05/2015 03:40 AM, Josh Durgin wrote: It looks like your libvirt rados user doesn't have access to whatever pool the parent image is in: librbd::AioRequest: write 0x7f1ec6ad6960 rbd_data.24413d1b58ba.0186 1523712~4096 should_com
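A sketch of widening the qemu/libvirt user's caps so it can also read the pool holding the parent image (user and pool names are assumptions):

    ceph auth caps client.libvirt mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rx pool=templates'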

Re: [ceph-users] How to see the content of an EC Pool after recreate the SSD-Cache tier?

2015-03-26 Thread Josh Durgin
On 03/26/2015 10:46 AM, Gregory Farnum wrote: I don't know why you're mucking about manually with the rbd directory; the rbd tool and rados handle cache pools correctly as far as I know. That's true, but the rados tool should be able to manipulate binary data more easily. It should probably be
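For example, listing and fetching objects directly with the rados tool (pool and object names are placeholders):

    rados -p ecpool ls | head
    rados -p ecpool get rbd_data.1234567890ab.0000000000000000 /tmp/obj0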

Re: [ceph-users] Error DATE 1970

2015-04-02 Thread Josh Durgin
On 04/01/2015 02:42 AM, Jimmy Goffaux wrote: English Version : Hello, I found a strange behavior in Ceph. This behavior is visible on Buckets (RGW) and pools (RDB). pools: `` root@:~# qemu-img info rbd:pool/kibana2 image: rbd:pool/kibana2 file format: raw virtual size: 30G (32212254720 bytes)

Re: [ceph-users] live migration fails with image on ceph

2015-04-06 Thread Josh Durgin
Like the last comment on the bug says, the message about block migration (drive mirroring) indicates that nova is telling libvirt to copy the virtual disks, which is not what should happen for ceph or other shared storage. For ceph just plain live migration should be used, not block migration.
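In nova terms the distinction looks like this (instance and host names are placeholders); --block-migrate is what triggers the drive mirroring described above:

    nova live-migration my-instance compute-02        # plain live migration
    # nova live-migration --block-migrate ...         # not for rbd-backed disks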

Re: [ceph-users] Number of ioctx per rados connection

2015-04-08 Thread Josh Durgin
Yes, you can use multiple ioctxs with the same underlying rados connection. There's no hard limit on how many, it depends on your usage if/when a single rados connection becomes a bottleneck. It's safe to use different ioctxs from multiple threads. IoCtxs have some local state like namespace,

Re: [ceph-users] long blocking with writes on rbds

2015-04-08 Thread Josh Durgin
On 04/08/2015 11:40 AM, Jeff Epstein wrote: Hi, thanks for answering. Here are the answers to your questions. Hopefully they will be helpful. On 04/08/2015 12:36 PM, Lionel Bouton wrote: I probably won't be able to help much, but people knowing more will need at least: - your Ceph version, - th

Re: [ceph-users] live migration fails with image on ceph

2015-04-10 Thread Josh Durgin
On 04/08/2015 09:37 PM, Yuming Ma (yumima) wrote: Josh, I think we are using plain live migration and not mirroring block drives as the other test did. Do you have the migration flags or more from the libvirt log? Also which versions of qemu is this? The libvirt log message about qemuMigratio

Re: [ceph-users] v0.80.8 and librbd performance

2015-04-14 Thread Josh Durgin
ps in ceph.conf on the cinder node. This affects delete speed, since rbd tries to delete each object in a volume. Josh From: shiva rkreddy Sent: Apr 14, 2015 5:53 AM To: Josh Durgin Cc: Ken Dreyer; Sage Weil; Ceph Development; ceph-us...@ceph.com Subject: Re: v0.80.8 and librbd performance >
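As an illustration of the kind of client-side librbd option involved (the option name is real, the value illustrative), set in ceph.conf on the cinder node:

    [client]
    rbd concurrent management ops = 20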

Re: [ceph-users] v0.80.8 and librbd performance

2015-04-15 Thread Josh Durgin
e clues about what's going slower. Josh On Tue, Apr 14, 2015 at 12:36 PM, Josh Durgin mailto:jdur...@redhat.com>> wrote: I don't see any commits that would be likely to affect that between 0.80.7 and 0.80.9. Is this after upgrading an existing cluster? Cou

Re: [ceph-users] Synchronous writes - tuning and some thoughts about them?

2015-06-02 Thread Josh Durgin
On 06/01/2015 03:41 AM, Jan Schermer wrote: Thanks, that’s it exactly. But I think that’s really too much work for now, that’s why I really would like to see a quick-win by using the local RBD cache for now - that would suffice for most workloads (not too many people run big databases on CEPH n

Re: [ceph-users] Ceph asok filling nova open files

2015-06-03 Thread Josh Durgin
On 06/03/2015 02:31 PM, Robert LeBlanc wrote: We are experiencing a problem where nova is opening up all kinds of sockets like: nova-comp 20740 nova 1996u unix 0x8811b3116b40 0t0 41081179 /var/run/ceph/ceph-client.volumes.20740.81999792.asok hitting the open file limits rather quickly

Re: [ceph-users] Ceph asok filling nova open files

2015-06-03 Thread Josh Durgin
r? It'll be in 0.94.3. 0.94.2 is close to release already: http://tracker.ceph.com/issues/11492 Josh Thanks, - Robert LeBlanc GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Wed, Jun 3, 2015 at 4:00 PM, Josh Durgin wrote: On 06/03/2015 02:31 PM,

Re: [ceph-users] Synchronous writes - tuning and some thoughts about them?

2015-06-04 Thread Josh Durgin
On 06/03/2015 04:15 AM, Jan Schermer wrote: Thanks for a very helpful answer. So if I understand it correctly then what I want (crash consistency with RPO>0) isn’t possible now in any way. If there is no ordering in RBD cache then ignoring barriers sounds like a very bad idea also. Yes, that'

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Josh Durgin
On 06/08/2015 11:19 AM, Alexandre DERUMIER wrote: Hi, looking at the latest version of QEMU, It's seem that it's was already this behaviour since the add of rbd_cache parsing in rbd.c by josh in 2012 http://git.qemu.org/?p=qemu.git;a=blobdiff;f=block/rbd.c;h=eebc3344620058322bb53ba8376af4a82
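Since qemu's rbd driver applies the qemu-level cache mode itself, the effective setting is whatever the drive is started with, e.g. (image and user names are placeholders):

    qemu-system-x86_64 ... \
        -drive format=raw,file=rbd:rbd/vm-disk:id=libvirt,cache=writeback,if=virtio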

Re: [ceph-users] rbd cache + libvirt

2015-06-12 Thread Josh Durgin
qemu.block/2500 Josh - Original Message - From: "Jason Dillaman" To: "Andrey Korolyov" Cc: "Josh Durgin" , "aderumier" , "ceph-users" Sent: Monday, 8 June 2015 22:29:10 Subject: Re: [ceph-users] rbd cache + libvirt On Mon, Jun 8, 2015 at

Re: [ceph-users] backing Hadoop with Ceph ??

2015-07-17 Thread Josh Durgin
On 07/15/2015 11:48 AM, Shane Gibson wrote: Somnath - thanks for the reply ... :-) Haven't tried anything yet - just starting to gather info/input/direction for this solution. Looking at the S3 API info [2] - there is no mention of support for the "S3a" API extensions - namely "rename" suppor

Re: [ceph-users] Best method to limit snapshot/clone space overhead

2015-07-23 Thread Josh Durgin
On 07/23/2015 06:31 AM, Jan Schermer wrote: Hi all, I am looking for a way to alleviate the overhead of RBD snapshots/clones for some time. In our scenario there are a few “master” volumes that contain production data, and are frequently snapshotted and cloned for dev/qa use. Those snapshots/

Re: [ceph-users] readonly snapshots of live mounted rbd?

2015-08-04 Thread Josh Durgin
On 08/01/2015 07:52 PM, pixelfairy wrote: Id like to look at a read-only copy of running virtual machines for compliance and potentially malware checks that the VMs are unaware of. the first note on http://ceph.com/docs/master/rbd/rbd-snapshot/ warns that the filesystem has to be in a consistent
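A sketch of taking a consistent, read-only view of a running guest's disk (mount point, image, and snapshot names are placeholders):

    fsfreeze --freeze /mnt/data              # inside the guest
    rbd snap create rbd/vm-disk@audit1       # from any ceph client
    fsfreeze --unfreeze /mnt/data            # inside the guest
    rbd map --read-only rbd/vm-disk@audit1   # inspect the snapshot elsewhere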

Re: [ceph-users] Warning regarding LTTng while checking status or restarting service

2015-08-06 Thread Josh Durgin
On 08/06/2015 03:10 AM, Daleep Bais wrote: Hi, Whenever I restart or check the logs for OSD, MON, I get below warning message.. I am running a test cluster of 09 OSD's and 03 MON nodes. [ceph-node1][WARNIN] libust[3549/3549]: Warning: HOME environment variable not set. Disabling LTTng-UST per-

Re: [ceph-users] How can I fetch librbd debug logs?

2013-07-10 Thread Josh Durgin
On 07/06/2013 04:51 AM, Xue, Chendi wrote: Hi, all I wanna fetch debug librbd and debug rbd logs when I am using vm to read / write. Details: I created a volume from ceph and attached it to a vm. So I suppose when I do read/write in the VM, I can get some rbd debug logs in the
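A sketch of the client-side logging settings involved, in ceph.conf on the hypervisor; the log path must be writable by the user qemu runs as (paths are examples):

    [client]
    log file = /var/log/ceph/qemu-rbd.$pid.log
    debug rbd = 20
    debug ms = 1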

Re: [ceph-users] feature set mismatch

2013-07-16 Thread Josh Durgin
On 07/16/2013 06:06 PM, Gaylord Holder wrote: Now whenever I try to map an RBD to a machine, mon0 complains: feature set mismatch, my 2 < server's 2040002, missing 204 missing required protocol features. Your cluster is using newer crush tunables to get better data distribution, but your k
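If upgrading the kernel client isn't an option, the cluster can be switched back to the older tunables instead (a cluster-wide setting that may trigger data movement):

    ceph osd crush tunables legacy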

Re: [ceph-users] feature set mismatch

2013-07-17 Thread Josh Durgin
[please keep replies on the list] On 07/17/2013 04:04 AM, Gaylord Holder wrote: On 07/16/2013 09:22 PM, Josh Durgin wrote: On 07/16/2013 06:06 PM, Gaylord Holder wrote: Now whenever I try to map an RBD to a machine, mon0 complains: feature set mismatch, my 2 < server's 2040002,

Re: [ceph-users] Libvirt, quemu, ceph write cache settings

2013-07-17 Thread Josh Durgin
On 07/17/2013 05:59 AM, Maciej Gałkiewicz wrote: Hello Is there any way to verify that cache is enabled? My machine is running with following parameters: qemu-system-x86_64 -machine accel=kvm:tcg -name instance-0302 -S -machine pc-i440fx-1.5,accel=kvm,usb=off -cpu Westmere,+rdtscp,+avx,+osx

Re: [ceph-users] Libvirt, quemu, ceph write cache settings

2013-07-18 Thread Josh Durgin
On 07/17/2013 11:39 PM, Maciej Gałkiewicz wrote: I have created VM with KVM 1.1.2 and all I had was rbd_cache configured in ceph.conf. Cache option in libvirt set to "none": f81d6108-d8c9-4e06-94ef-02b1943a873d

Re: [ceph-users] Libvirt, quemu, ceph write cache settings

2013-07-18 Thread Josh Durgin
On 07/18/2013 11:32 AM, Maciej Gałkiewicz wrote: On 18 Jul 2013 20:25, "Josh Durgin" mailto:josh.dur...@inktank.com>> wrote: > Setting rbd_cache=true in ceph.conf will make librbd turn on the cache > regardless of qemu. Setting qemu to cache=none tells qemu that it

Re: [ceph-users] Kernel's rbd in 3.10.1

2013-07-25 Thread Josh Durgin
On 07/24/2013 09:37 PM, Mikaël Cluseau wrote: Hi, I have a bug in the 3.10 kernel under debian, be it a self compiled linux-stable from the git (built with make-kpkg) or the sid's package. I'm using format-2 images (ceph version 0.61.6 (59ddece17e36fef69ecf40e239aeffad33c9db35)) to make snapsho

Re: [ceph-users] Mounting RBD or CephFS on Ceph-Node?

2013-07-25 Thread Josh Durgin
On 07/23/2013 06:09 AM, Oliver Schulz wrote: Dear Ceph Experts, I remember reading that at least in the past I wasn't recommended to mount Ceph storage on a Ceph cluster node. Given a recent kernel (3.8/3.9) and sufficient CPU and memory resources on the nodes, would it now be safe to * Mount R

Re: [ceph-users] Openstack glance ceph rbd_store_user authentification problem

2013-08-08 Thread Josh Durgin
On 08/08/2013 06:01 AM, Steffen Thorhauer wrote: Hi, recently I had a problem with openstack glance and ceph. I used the http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-glance documentation and http://docs.openstack.org/developer/glance/configuring.html documentation I'm using ubuntu 1

Re: [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process, [Qemu-devel] [Bug 1207686]

2013-08-08 Thread Josh Durgin
On 08/08/2013 05:40 AM, Oliver Francke wrote: Hi Josh, I have a session logged with: debug_ms=1:debug_rbd=20:debug_objectcacher=30 as you requested from Mike, even if I think, we do have another story here, anyway. Host-kernel is: 3.10.0-rc7, qemu-client 1.6.0-rc2, client-kernel is 3.2.0

Re: [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process, [Qemu-devel] [Bug 1207686]

2013-08-10 Thread Josh Durgin
On 08/09/2013 08:03 AM, Stefan Hajnoczi wrote: On Fri, Aug 09, 2013 at 03:05:22PM +0100, Andrei Mikhailovsky wrote: I can confirm that I am having similar issues with ubuntu vm guests using fio with bs=4k direct=1 numjobs=4 iodepth=16. Occasionally i see hang tasks, occasionally guest vm stops

Re: [ceph-users] rbd map issues: no such file or directory (ENOENT) AND map wrong image

2013-08-12 Thread Josh Durgin
On 08/12/2013 10:19 AM, PJ wrote: Hi All, Before go on the issue description, here is our hardware configurations: - Physical machine * 3: each has quad-core CPU * 2, 64+ GB RAM, HDD * 12 (500GB ~ 1TB per drive; 1 for system, 11 for OSD). ceph OSD are on physical machines. - Each physical machin

Re: [ceph-users] rbd map issues: no such file or directory (ENOENT) AND map wrong image

2013-08-12 Thread Josh Durgin
[re-adding ceph-users so others can benefit from the archives] On 08/12/2013 07:18 PM, PJ wrote: 2013/8/13 Josh Durgin : On 08/12/2013 10:19 AM, PJ wrote: Hi All, Before go on the issue description, here is our hardware configurations: - Physical machine * 3: each has quad-core CPU * 2, 64

Re: [ceph-users] Glance image upload errors after upgrading to Dumpling

2013-08-14 Thread Josh Durgin
On 08/14/2013 02:22 PM, Michael Morgan wrote: Hello Everyone, I have a Ceph test cluster doing storage for an OpenStack Grizzly platform (also testing). Upgrading to 0.67 went fine on the Ceph side with the cluster showing healthy but suddenly I can't upload images into Glance anymore. The upl

Re: [ceph-users] RBD and balanced reads

2013-08-20 Thread Josh Durgin
On 08/19/2013 11:24 AM, Gregory Farnum wrote: On Mon, Aug 19, 2013 at 9:07 AM, Sage Weil wrote: On Mon, 19 Aug 2013, S?bastien Han wrote: Hi guys, While reading a developer doc, I came across the following options: * osd balance reads = true * osd shed reads = true * osd shed reads min laten

Re: [ceph-users] OpenStack Cinder + Ceph, unable to remove unattached volumes, still watchers

2013-08-20 Thread Josh Durgin
On 08/20/2013 11:20 AM, Vincent Hurtevent wrote: I'm not the end user. It's possible that the volume has been detached without unmounting. As the volume is unattached and the initial kvm instance is down, I was expecting the rbd volume is properly unlocked even if the guest unmount hasn't been
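A sketch of locating the lingering watcher on the volume's header object (for a format-2 image the header object is rbd_header.<id>; names and ids below are placeholders):

    rbd info volumes/volume-xyz | grep block_name_prefix
    rados -p volumes listwatchers rbd_header.abcdef123456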

Re: [ceph-users] locking rbd device

2013-08-26 Thread Josh Durgin
On 08/26/2013 12:03 AM, Wolfgang Hennerbichler wrote: hi list, I realize there's a command called "rbd lock" to lock an image. Can libvirt use this to prevent virtual machines from being started simultaneously on different virtualisation containers? wogri Yes - that's the reason for lock co
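A sketch of the advisory locking flow (image name and lock id are arbitrary strings chosen by the caller):

    rbd lock add rbd/vm-disk host1
    rbd lock list rbd/vm-disk        # shows a locker such as client.4127
    rbd lock remove rbd/vm-disk host1 client.4127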

Re: [ceph-users] locking rbd device

2013-08-26 Thread Josh Durgin
On 08/26/2013 01:49 PM, Josh Durgin wrote: On 08/26/2013 12:03 AM, Wolfgang Hennerbichler wrote: hi list, I realize there's a command called "rbd lock" to lock an image. Can libvirt use this to prevent virtual machines from being started simultaneously on different virtualisa

Re: [ceph-users] Real size of rbd image

2013-08-27 Thread Josh Durgin
On 08/27/2013 01:39 PM, Timofey Koolin wrote: Is way to know real size of rbd image and rbd snapshots? rbd ls -l write declared size of image, but I want to know real size. You can sum the sizes of the extents reported by: rbd diff pool/image[@snap] [--format json] That's the difference s
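One way to total those extents from the plain-text output (image name is a placeholder; the result is in bytes):

    rbd diff rbd/myimage | awk '{sum += $2} END {print sum}'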

Re: [ceph-users] Location field empty in Glance when instance to image

2013-08-30 Thread Josh Durgin
On 08/30/2013 03:40 AM, Toni F. [ackstorm] wrote: Sorry, wrong list Anyway i take this oportunity to ask two questions: Somebody knows how i can download a image or snapshot? Cinder has no way to export them, but you can use: rbd export pool/image@snap /path/to/file how the direct url are

Re: [ceph-users] ceph and incremental backups

2013-08-30 Thread Josh Durgin
On 08/30/2013 02:22 PM, Oliver Daudey wrote: Hey Mark, On vr, 2013-08-30 at 13:04 -0500, Mark Chaney wrote: Full disclosure, I have zero experience with openstack and ceph so far. If I am going to use a Ceph RBD cluster to store my kvm instances, how should I be doing backups? 1) I would pref
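A sketch of an incremental backup cycle built on export-diff/import-diff (pool, image, snapshot, and file names are placeholders; the destination image must already exist):

    rbd snap create rbd/vm1@day1
    rbd export-diff rbd/vm1@day1 vm1-day1.diff            # everything up to day1
    rbd import-diff vm1-day1.diff backup/vm1
    rbd snap create rbd/vm1@day2
    rbd export-diff --from-snap day1 rbd/vm1@day2 vm1-day2.diff
    rbd import-diff vm1-day2.diff backup/vm1              # apply only the delta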

Re: [ceph-users] from whom and when will rbd_cache* be read

2013-09-01 Thread Josh Durgin
On 09/01/2013 03:35 AM, Kasper Dieter wrote: Hi, under http://eu.ceph.com/docs/wip-rpm-doc/config-cluster/rbd-config-ref/ I found a good description about RBD cache parameters. You're looking at an old branch there - the current description is a bit more clear that this doesn't affect rbd.ko

Re: [ceph-users] rbd cp copies of sparse files become fully allocated

2013-09-09 Thread Josh Durgin
On 09/09/2013 04:57 AM, Andrey Korolyov wrote: May I also suggest the same for export/import mechanism? Say, if image was created by fallocate we may also want to leave holes upon upload and vice-versa for export. Import and export already omit runs of zeroes. They could detect smaller runs (cu

Re: [ceph-users] blockdev --setro cannot set krbd to readonly

2013-09-09 Thread Josh Durgin
On 09/08/2013 01:14 AM, Da Chun Ng wrote: I mapped an image to a system, and used blockdev to make it readonly. But it failed. [root@ceph0 mnt]# blockdev --setro /dev/rbd2 [root@ceph0 mnt]# blockdev --getro /dev/rbd2 0 It's on Centos6.4 with kernel 3.10.6 . Ceph 0.61.8 . Any idea? For reasons

Re: [ceph-users] status of glance/cinder/nova integration in openstack grizzly

2013-09-10 Thread Josh Durgin
On 09/10/2013 01:50 PM, Darren Birkett wrote: One last question: I presume the fact that the 'volume_image_metadata' field is not populated when cloning a glance image into a cinder volume is a bug? It means that the cinder client doesn't show the volume as bootable, though I'm not sure what oth

Re: [ceph-users] rbd cp copies of sparse files become fully allocated

2013-09-10 Thread Josh Durgin
On 09/10/2013 01:51 AM, Andrey Korolyov wrote: On Tue, Sep 10, 2013 at 3:03 AM, Josh Durgin wrote: On 09/09/2013 04:57 AM, Andrey Korolyov wrote: May I also suggest the same for export/import mechanism? Say, if image was created by fallocate we may also want to leave holes upon upload and

Re: [ceph-users] live migration with rbd/cinder/nova - not supported?

2013-09-12 Thread Josh Durgin
On 09/12/2013 11:33 AM, Darren Birkett wrote: Hi Maciej, That's interesting. The following also seems to suggest that nova has those shared storage dependencies for live migration that I spoke about: http://tracker.ceph.com/issues/5938 That's obsolete for Grizzly. True live migration works f

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Josh Durgin
Also enabling rbd writeback caching will allow requests to be merged, which will help a lot for small sequential I/O. On 09/17/2013 02:03 PM, Gregory Farnum wrote: Try it with oflag=dsync instead? I'm curious what kind of variation these disks will provide. Anyway, you're not going to get the s
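The suggested comparison, concretely (device path and counts are placeholders):

    dd if=/dev/zero of=/dev/vdb bs=8k count=100000 oflag=direct
    dd if=/dev/zero of=/dev/vdb bs=8k count=100000 oflag=dsync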

Re: [ceph-users] Scaling RBD module

2013-09-18 Thread Josh Durgin
On 09/17/2013 03:30 PM, Somnath Roy wrote: Hi, I am running Ceph on a 3 node cluster and each of my server node is running 10 OSDs, one for each disk. I have one admin node and all the nodes are connected with 2 X 10G network. One network is for cluster and other one configured as public netwo

Re: [ceph-users] Scaling RBD module

2013-09-19 Thread Josh Durgin
/emsclient--vg-home on /home type ext4 (rw) Any idea what went wrong here ? Thanks & Regards Somnath -Original Message- From: Josh Durgin [mailto:josh.dur...@inktank.com] Sent: Wednesday, September 18, 2013 6:10 PM To: Somnath Roy Cc: Sage Weil; ceph-de...@vger.kernel.org; Anirban

Re: [ceph-users] Best practices for managing S3 objects store

2013-09-30 Thread Josh Durgin
On 09/29/2013 07:34 PM, Aniket Nanhe wrote: Hi, We have a Ceph cluster set up and are trying to evaluate Ceph for it's S3 compatible object storage. I came across this best practices document for Amazon S3, which goes over how naming keys in a particular way can improve performance of object GET

Re: [ceph-users] authentication trouble

2013-09-30 Thread Josh Durgin
On 09/26/2013 10:11 AM, Jogi Hofmüller wrote: Dear all, I am fairly new to ceph and just in the process of testing it using several virtual machines. Now I tried to create a block device on a client and fumbled with settings for about an hour or two until the command line rbd --id dovecot c

Re: [ceph-users] RBD Snap removal priority

2013-09-30 Thread Josh Durgin
On 09/27/2013 09:25 AM, Travis Rhoden wrote: Hello everyone, I'm running a Cuttlefish cluster that hosts a lot of RBDs. I recently removed a snapshot of a large one (rbd snap rm -- 12TB), and I noticed that all of the clients had markedly decreased performance. Looking at iostat on the OSD nod
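One knob often suggested for softening the client impact of snapshot trimming, assuming it is available in the version in question (the value is illustrative):

    ceph tell 'osd.*' injectargs '--osd-snap-trim-sleep 0.1'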

Re: [ceph-users] Loss of connectivity when using client caching with libvirt

2013-10-02 Thread Josh Durgin
On 10/02/2013 10:45 AM, Oliver Daudey wrote: Hey Robert, On 02-10-13 14:44, Robert van Leeuwen wrote: Hi, I'm running a test setup with Ceph (dumpling) and Openstack (Grizzly) using libvirt to "patch" the ceph disk directly to the qemu instance. I'm using SL6 with the patched qemu packages fr

Re: [ceph-users] Loss of connectivity when using client caching with libvirt

2013-10-02 Thread Josh Durgin
On 10/02/2013 03:16 PM, Blair Bethwaite wrote: Hi Josh, Message: 3 Date: Wed, 02 Oct 2013 10:55:04 -0700 From: Josh Durgin To: Oliver Daudey , ceph-users@lists.ceph.com, robert.vanleeu...@spilgames.com Subject: Re: [ceph-users] Loss of connectivity when using client caching

Re: [ceph-users] Loss of connectivity when using client caching with libvirt

2013-10-02 Thread Josh Durgin
On 10/02/2013 06:26 PM, Blair Bethwaite wrote: Josh, On 3 October 2013 10:36, Josh Durgin wrote: The version base of qemu in precise has the same problem. It only affects writeback caching. You can get qemu 1.5 (which fixes the issue) for precise from ubuntu's cloud archive. Thanks fo

Re: [ceph-users] qemu-kvm with rbd mem slow leak

2013-10-14 Thread Josh Durgin
On 10/13/2013 07:43 PM, alan.zhang wrote: CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz *2 MEM: 32GB KVM: qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64 Host: CentOS 6.4, kernel 2.6.32-358.14.1.el6.x86_64 Guest: CentOS 6.4, kernel 2.6.32-279.14.1.el6.x86_64 Ceph: ceph version 0.67.4

Re: [ceph-users] Is there a way to query RBD usage

2013-10-16 Thread Josh Durgin
On 10/15/2013 08:56 PM, Blair Bethwaite wrote: > Date: Wed, 16 Oct 2013 16:06:49 +1300 > From: Mark Kirkwood mailto:mark.kirkw...@catalyst.net.nz>> > To: Wido den Hollander mailto:w...@42on.com>>, ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Is the

Re: [ceph-users] mounting RBD in linux containers

2013-10-18 Thread Josh Durgin
On 10/18/2013 10:04 AM, Kevin Weiler wrote: The kernel is 3.11.4-201.fc19.x86_64, and the image format is 1. I did, however, try a map with an RBD that was format 2. I got the same error. To rule out any capability drops as the culprit, can you map an rbd image on the same host outside of a con

Re: [ceph-users] poor read performance on rbd+LVM, LVM overload

2013-10-20 Thread Josh Durgin
On 10/20/2013 08:18 AM, Ugis wrote: output follows: #pvs -o pe_start /dev/rbd1p1 1st PE 4.00m # cat /sys/block/rbd1/queue/minimum_io_size 4194304 # cat /sys/block/rbd1/queue/optimal_io_size 4194304 Well, the parameters are being set at least. Mike, is it possible that having minimum_io

Re: [ceph-users] Boot from volume with Dumpling on RDO/CentOS 6 (using backported QEMU 0.12)

2013-10-21 Thread Josh Durgin
On 10/21/2013 09:03 AM, Andrew Richards wrote: Hi Everybody, I'm attempting to get Ceph working for CentOS 6.4 running RDO Havana for Cinder volume storage and boot-from-volume, and I keep bumping into a very unhelpful errors on my nova-compute test node and my cinder controller node. Here is w

Re: [ceph-users] Boot from volume with Dumpling on RDO/CentOS 6 (using backported QEMU 0.12)

2013-10-21 Thread Josh Durgin
cephx like I had to do in Grizzly? No, that's no longer necessary. Josh Thanks, Andy On Oct 21, 2013, at 12:26 PM, Josh Durgin mailto:josh.dur...@inktank.com>> wrote: On 10/21/2013 09:03 AM, Andrew Richards wrote: Hi Everybody, I'm attempting to get Ceph working for Cent

Re: [ceph-users] CloudStack + KVM(Ubuntu 12.04, Libvirt 1.0.2) + Ceph [Seeking Help]

2013-10-21 Thread Josh Durgin
On 10/16/2013 04:25 PM, Kelcey Jamison Damage wrote: Hi, I have gotten so close to have Ceph work in my cloud but I have reached a roadblock. Any help would be greatly appreciated. I receive the following error when trying to get KVM to run a VM with an RBD volume: Libvirtd.log: 2013-10-16 22

Re: [ceph-users] radosgw-agent error

2013-10-30 Thread Josh Durgin
On 10/30/2013 01:54 AM, Mark Kirkwood wrote: On 29/10/13 20:53, lixuehui wrote: Hi,list From the document that a radosgw-agent's right info should like this INFO:radosgw_agent.sync:Starting incremental sync INFO:radosgw_agent.worker:17910 is processing shard number 0 INFO:r

Re: [ceph-users] "rbd map" says "bat option at rw"

2013-11-01 Thread Josh Durgin
On 11/01/2013 03:07 AM, nicolasc wrote: Hi every one, I finally and happily managed to get my Ceph cluster (3 monitors among 8 nodes, each with 9 OSDs) running on version 0.71, but the "rbd map" command shows a weird behaviour. I can list pools, create images and snapshots, alleluia! However, m

Re: [ceph-users] Havana & RBD - a few problems

2013-11-07 Thread Josh Durgin
On 11/08/2013 12:15 AM, Jens-Christian Fischer wrote: Hi all we have installed a Havana OpenStack cluster with RBD as the backing storage for volumes, images and the ephemeral images. The code as delivered in https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L498 f

Re: [ceph-users] Ceph Block Storage QoS

2013-11-07 Thread Josh Durgin
On 11/08/2013 03:50 AM, Wido den Hollander wrote: On 11/07/2013 08:42 PM, Gruher, Joseph R wrote: Is there any plan to implement some kind of QoS in Ceph? Say I want to provide service level assurance to my OpenStack VMs and I might have to throttle bandwidth to some to provide adequate bandwid

Re: [ceph-users] radosgw-agent failed to sync object

2013-11-07 Thread Josh Durgin
On 11/07/2013 09:48 AM, lixuehui wrote: Hi all : After we build a region with two zones distributed in two ceph cluster.Start the agent ,it start works! But what we find in the radosgw-agent stdout is that it failed to sync objects all the time .Paste the info: (env)root@ceph-rgw41:~/myproject#

Re: [ceph-users] Ceph Block Storage QoS

2013-11-07 Thread Josh Durgin
On 11/08/2013 03:13 PM, ja...@peacon.co.uk wrote: On 2013-11-08 03:20, Haomai Wang wrote: On Fri, Nov 8, 2013 at 9:31 AM, Josh Durgin wrote: I just list commands below to help users to understand: cinder qos-create high_read_low_write consumer="front-end" read_iops_sec=1000 writ
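A fuller sketch of that front-end QoS flow (names, ids, and limits are illustrative):

    cinder qos-create high_read_low_write consumer=front-end \
        read_iops_sec=1000 write_iops_sec=200
    cinder type-create limited-io
    cinder qos-associate <qos-spec-id> <volume-type-id>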

Re: [ceph-users] help v.72configure federate gateway failed

2013-11-19 Thread Josh Durgin
Sorry for the delay, I'm still catching up since the openstack conference. Does the system user for the destination zone exist with the same access secret and key in the source zone? If you enable debug rgw = 30 on the destination you can see why the copy_obj from the source zone is failing. Jo

Re: [ceph-users] radosgw-agent AccessDenied 403

2013-11-19 Thread Josh Durgin
On 11/13/2013 09:06 PM, lixuehui wrote: And on the slave zone gateway instence ,the info is like this : 2013-11-14 12:54:24.516840 7f51e7fef700 1 == starting new request req=0xb1e3b0 = 2013-11-14 12:54:24.526640 7f51e7fef700 1 == req done req=0xb1e3b0 http

Re: [ceph-users] Ephemeral RBD with Havana and Dumpling

2013-11-19 Thread Josh Durgin
On 11/14/2013 09:54 AM, Dmitry Borodaenko wrote: On Thu, Nov 14, 2013 at 6:00 AM, Haomai Wang wrote: We are using the nova fork by Josh Durgin https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd - are there more patches that need to be integrated? I hope I can release or push commits

Re: [ceph-users] Librados Error Codes

2013-11-19 Thread Josh Durgin
On 11/19/2013 05:28 AM, Behar Veliqi wrote: Hi, when using the librados c library, the documentation of the different functions just tells that it returns a negative error code on failure, e.g. the rados_read function (http://ceph.com/docs/master/rados/api/librados/#rados_read). Is there anyw

Re: [ceph-users] Size of RBD images

2013-11-20 Thread Josh Durgin
On 11/20/2013 06:53 AM, nicolasc wrote: Thank you Bernhard and Wogri. My old kernel version also explains the format issue. Once again, sorry to have mixed that in the problem. Back to my original inquiries, I hope someone can help me understand why: * it is possible to create an RBD image large

Re: [ceph-users] tracker.ceph.com - public email address visibility?

2013-11-27 Thread Josh Durgin
On 11/27/2013 07:21 AM, James Pearce wrote: I was going to add something to the bug tracker, but it looks to me that contributor email addresses all have public (unauthenticated) visibility? Can this be set in user preferences? Yes, it can be hidden here: http://tracker.ceph.com/my/account ___

Re: [ceph-users] Real size of rbd image

2013-11-27 Thread Josh Durgin
On 11/26/2013 02:22 PM, Stephen Taylor wrote: From ceph-users archive 08/27/2013: On 08/27/2013 01:39 PM, Timofey Koolin wrote: /Is way to know real size of rbd image and rbd snapshots?/ /rbd ls -l write declared size of image, but I want to know real size./ You can sum the sizes of the

Re: [ceph-users] can not get rbd cache perf counter

2013-11-27 Thread Josh Durgin
On 11/27/2013 01:31 AM, Shu, Xinxin wrote: Recently, I want to test performance benefit of rbd cache, i cannot get obvious performance benefit at my setup, then I try to make sure rbd cache is enabled, but I cannot get rbd cache perf counter. In order to identify how to enable rbd cache perf co
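A sketch of exposing the counters: give librbd clients an admin socket in ceph.conf, restart the guest, then dump its perf counters (paths and the client name are examples):

    [client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.asok

    ceph --admin-daemon /var/run/ceph/ceph-client.volumes.12345.asok perf dump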

Re: [ceph-users] [Big Problem?] Why not using Device'UUID in ceph.conf

2013-11-27 Thread Josh Durgin
On 11/26/2013 01:14 AM, Ta Ba Tuan wrote: Hi James, Proplem is why the Ceph not recommend using Device'UUID in Ceph.conf, when, above error can be occur? I think with the newer-style configuration, where your disks have partition ids setup by ceph-disk instead of entries in ceph.conf, it does

Re: [ceph-users] can not get rbd cache perf counter

2013-11-27 Thread Josh Durgin
alled correctly or this rbd admin socket depends on secified qemu package. -Original Message- From: Josh Durgin [mailto:josh.dur...@inktank.com] Sent: Thursday, November 28, 2013 11:01 AM To: Shu, Xinxin; ceph-us...@ceph.com Subject: Re: [ceph-users] can not get rbd cache perf counter On

Re: [ceph-users] Real size of rbd image

2013-12-02 Thread Josh Durgin
atible to do so. Issue created [1]. Like you said, ignoring any extents marked 'zero' is always fine for this size calculation. Josh [1] http://tracker.ceph.com/issues/6926 Again, I appreciate your help. Steve -Original Message- From: Josh Durgin [mailto:josh.dur...@inkt

Re: [ceph-users] Granularity/efficiency of copy-on-write?

2013-12-03 Thread Josh Durgin
On 12/02/2013 03:26 PM, Bill Eldridge wrote: Hi all, We're looking at using Ceph's copy-on-write for a ton of users' replicated cloud image environments, and are wondering how efficient Ceph is for adding user data to base images - is data added in normal 4kB or 64kB sizes, or can you specify bl
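The relevant granularity is the image's object size, chosen via --order when the image or clone is created (2^22 = 4 MiB objects is the default; names and sizes are placeholders):

    rbd create --image-format 2 --order 22 --size 10240 rbd/base-image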

Re: [ceph-users] RBD import slow

2014-09-25 Thread Josh Durgin
On 09/24/2014 04:57 PM, Brian Rak wrote: I've been doing some testing of importing virtual machine images, and I've found that 'rbd import' is at least 2x as slow as 'qemu-img convert'. Is there anything I can do to speed this process up? I'd like to use rbd import because it gives me a little
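For reference, the two import paths being compared (file and image names are placeholders):

    rbd import --image-format 2 vm-image.raw rbd/vm-image
    qemu-img convert -p -O raw vm-image.qcow2 rbd:rbd/vm-image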

Re: [ceph-users] librados crash in nova-compute

2014-10-24 Thread Josh Durgin
On 10/24/2014 08:21 AM, Xu (Simon) Chen wrote: Hey folks, I am trying to enable OpenStack to use RBD as image backend: https://bugs.launchpad.net/nova/+bug/1226351 For some reason, nova-compute segfaults due to librados crash: ./log/SubsystemMap.h: In function 'bool ceph::log::SubsystemMap::sh

Re: [ceph-users] Double-mounting of RBD

2014-12-17 Thread Josh Durgin
On 12/17/2014 03:49 PM, Gregory Farnum wrote: On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley wrote: I have a somewhat interesting scenario. I have an RBD of 17TB formatted using XFS. I would like it accessible from two different hosts, one mapped/mounted read-only, and one mapped/mounted

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-18 Thread Josh Durgin
On 12/18/2014 10:49 AM, Travis Rhoden wrote: One question re: discard support for kRBD -- does it matter which format the RBD is? Format 1 and Format 2 are okay, or just for Format 2? It shouldn't matter which format you use. Josh ___ ceph-users mai
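Discard can then be exercised either continuously or on demand once the mapped device is in use (device and mount point are examples):

    mount -o discard /dev/rbd0 /mnt/data   # online discard
    fstrim -v /mnt/data                    # or periodic trim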

Re: [ceph-users] rbd resize (shrink) taking forever and a day

2015-01-07 Thread Josh Durgin
image for write access (all handled automatically by librbd). Using watch/notify to coordinate multi-client access would get complex and inefficient pretty fast, and in general is best left to cephfs rather than rbd. Josh On Jan 6, 2015 5:35 PM, "Josh Durgin" mailto:josh.dur...@inktank.c

Re: [ceph-users] rbd resize (shrink) taking forever and a day

2015-01-07 Thread Josh Durgin
On Tue, Jan 6, 2015 at 4:19 PM, Josh Durgin wrote: On 01/06/2015 10:24 AM, Robert LeBlanc wrote: Can't this be done in parallel? If the OSD doesn't have an object then it is a noop and should be pretty quick. The number of outstanding operations can be limited to 100 or a 1000 whic

Re: [ceph-users] rbd resize (shrink) taking forever and a day

2015-01-07 Thread Josh Durgin
On 01/06/2015 10:24 AM, Robert LeBlanc wrote: Can't this be done in parallel? If the OSD doesn't have an object then it is a noop and should be pretty quick. The number of outstanding operations can be limited to 100 or a 1000 which would provide a balance between speed and performance impact if

Re: [ceph-users] Ephemeral RBD with Havana and Dumpling

2013-12-06 Thread Josh Durgin
On 12/05/2013 02:37 PM, Dmitry Borodaenko wrote: Josh, On Tue, Nov 19, 2013 at 4:24 PM, Josh Durgin wrote: I hope I can release or push commits to this branch contains live-migration, incorrect filesystem size fix and ceph-snapshort support in a few days. Can't wait to see this patch
