[ceph-users] Node down question

2014-11-10 Thread Jason
d changing this, along with any other related settings, to no avail -- for whatever I do, the delay remains at 20 seconds. Anything else to try? Jason

[ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-03 Thread Jason Harley
Howdy — I’ve had a failure on a small, Dumpling (0.67.4) cluster running on Ubuntu 13.10 machines. I had three OSD nodes (running 6 OSDs each), and lost two of them in a beautiful failure. One of these nodes even went so far as to scramble the XFS filesystems of my OSD disks (I’m curious if i

Re: [ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-03 Thread Jason Harley
1", > "objects": []}, > "peer_backfill_info": { "begin": "0\/\/0\/\/-1", > "end": "0\/\/0\/\/-1", > "objects": []}, > "ba

Re: [ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-03 Thread Jason Harley
On Jun 3, 2014, at 5:58 PM, Smart Weblications GmbH - Florian Wiessner wrote: > I think it would be less painful if you had removed and then immediately > recreated the corrupted osd to avoid 'holes' in the osd ids. It should work > with your configuration anyhow, though. I agree with
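
A minimal sketch of the remove-and-recreate sequence under discussion, assuming osd.2 is the corrupted OSD (the id is a placeholder); 'ceph osd create' hands back the lowest free id, which is what avoids the 'holes':

    ceph osd out osd.2
    ceph osd crush remove osd.2    # drop it from the CRUSH map
    ceph auth del osd.2            # remove its cephx key
    ceph osd rm 2                  # delete the OSD entry itself
    ceph osd create                # returns the lowest free id (2 again)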

Re: [ceph-users] PG Recovery: HEALTH_ERR to HEALTH_OK

2014-06-05 Thread Jason Harley
, Jason Harley wrote: > On Jun 3, 2014, at 5:58 PM, Smart Weblications GmbH - Florian Wiessner > wrote: > >> I think it would be less painful if you had removed and then immediately >> recreated the corrupted osd to avoid 'holes' in the osd ids. It should >

[ceph-users] REST API and uWSGI?

2014-06-16 Thread Jason Harley
Howdy — I’d like to run the ceph REST API behind nginx, and uWSGI and UNIX sockets seems like a smart way to do this. Has anyone attempted to get this setup working? I’ve tried writing a uWSGI wrapper as well as just telling ‘uwsgi’ to call the ‘ceph_rest_api’ module without luck. ./JRH
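
One plausible shape for this setup, assuming a hypothetical wrapper module api.py that exposes the ceph_rest_api WSGI app as 'app' (socket path and names are illustrative, not a tested configuration):

    # /etc/uwsgi/ceph-rest-api.ini
    [uwsgi]
    plugins = python
    socket = /run/uwsgi/ceph-rest-api.sock
    module = api:app

    # nginx server block
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/ceph-rest-api.sock;
    }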

Re: [ceph-users] REST API and uWSGI?

2014-06-17 Thread Jason Harley
On Jun 16, 2014, at 8:52 PM, Wido den Hollander wrote: >> Op 16 jun. 2014 om 19:23 heeft "Jason Harley" het >> volgende geschreven: >> >> Howdy — >> >> I’d like to run the ceph REST API behind nginx, and uWSGI and UNIX sockets >> seems l

[ceph-users] mon: leveldb checksum mismatch

2014-07-03 Thread Jason Harley
Hi list — I’ve got a small dev. cluster: 3 OSD nodes with 6 disks/OSDs each and a single monitor (this, it seems, was my mistake). The monitor node went down hard and it looks like the monitor’s db is in a funny state. Running ‘ceph-mon’ manually with ‘debug_mon 20’ and ‘debug_ms 20’ gave the

Re: [ceph-users] mon: leveldb checksum mismatch

2014-07-03 Thread Jason Harley
Hi Joao, On Jul 3, 2014, at 7:57 PM, Joao Eduardo Luis wrote: > We don't have a way to repair leveldb. Having multiple monitors usually help > with such tricky situations. I know this, but for this small dev cluster I wasn’t thinking about corruption of my mon’s backing store. Silly me :)

Re: [ceph-users] How to create multiple OSD's per host?

2014-08-14 Thread Jason King
and the > ceph-disks don’t persist over a boot cycle. > > > > Is there a document anywhere that anyone knows of that explains a step by > step process for bringing up multiple osd’s per host – 1 hdd with ssd > journal partition per osd? > > Thanks, > > Br
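
For reference, a hedged sketch of the ceph-deploy HOST:DISK:JOURNAL form being asked about (node and device names are placeholders):

    # one OSD on /dev/sdb, its journal on SSD partition /dev/sdd1
    ceph-deploy osd prepare node1:/dev/sdb:/dev/sdd1
    ceph-deploy osd activate node1:/dev/sdb1:/dev/sdd1
    # repeat per HDD, pointing each at its own SSD journal partition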

Re: [ceph-users] Managing OSDs on twin machines

2014-08-18 Thread Jason Harley
Hi Pierre — You can manipulate your CRUSH map to make use of ‘chassis’ in addition to the default ‘host’ type. I’ve done this with FatTwin and FatTwin^2 boxes with great success. For more reading take a look at: http://ceph.com/docs/master/rados/operations/crush-map/ In particular the ‘Move
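
A short sketch of the bucket moves involved (bucket and host names are placeholders):

    ceph osd crush add-bucket chassis01 chassis    # create the chassis bucket
    ceph osd crush move chassis01 root=default     # place it under the root
    ceph osd crush move node1 chassis=chassis01    # re-home each host
    # then point your CRUSH rule at 'step chooseleaf ... type chassis'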

Re: [ceph-users] Difference between "object rm" and "object unlink" ?

2014-08-31 Thread Jason King
As the names suggest, the former removes the object from the store while the latter deletes the bucket index entry only. Check the code for more details. Jason 2014-08-29 19:09 GMT+08:00 zhu qiang : > Hi all, > From the radosgw-admin command: > # radosgw-admin object rm --object=my_test
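
The two commands side by side (bucket and object names are illustrative):

    radosgw-admin object rm --bucket=my_bucket --object=my_test      # removes the object from the store
    radosgw-admin object unlink --bucket=my_bucket --object=my_test  # drops only the bucket index entry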

Re: [ceph-users] About IOPS num

2014-08-31 Thread Jason King
Guess you should multiply 27 by bs=4k? Jason 2014-08-29 15:52 GMT+08:00 lixue...@chinacloud.com.cn < lixue...@chinacloud.com.cn>: > > guys: > There's a ceph cluster working and nodes were connected with 10Gb > cable. We defined fio's bs=4k and the object

Re: [ceph-users] How to replace an node in ceph?

2014-09-04 Thread Jason King
Hi, What's the status of your cluster after the node failure? Jason 2014-09-04 21:33 GMT+08:00 Christian Balzer : > > Hello, > > On Thu, 4 Sep 2014 20:56:31 +0800 Ding Dinghua wrote: > > Aside from what Loic wrote, why not replace the network controller or if >

Re: [ceph-users] Re: mix ceph version with 0.80.5 and 0.85

2014-09-08 Thread Jason King
Check the docs. 2014-09-09 11:02 GMT+08:00 廖建锋 : > Looks like it doesn't work. I noticed that 0.85 added a superblock to the > leveldb osd; the osds which I already have do not have a superblock. > Can anybody tell me how to upgrade the OSDs? > > *From:* ceph-users > *Sent:* 2014-09-09 10:32 >

Re: [ceph-users] Troubleshooting down OSDs: Invalid command: ceph osd start osd.1

2014-09-19 Thread Jason King
Hi, You should try */etc/init.d/ceph* command on the host where the OSD resides. Jason 2014-09-19 16:33 GMT+08:00 Loic Dachary : > Hi, > > The documentation indeed contains an example that does not work. This > should fix it : > https://github.com/dach
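
For example, on the OSD's host (assuming a sysvinit deployment):

    sudo /etc/init.d/ceph start osd.1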

Re: [ceph-users] Question regarding rbd cache

2015-03-03 Thread Jason Dillaman
. The whole object would not be written to the OSDs unless you wrote data to the whole object. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Xu (Simon) Chen" To: ceph-users@lists.ceph.com Sent: Wednesday, February 25,

Re: [ceph-users] qemu-kvm and cloned rbd image

2015-03-03 Thread Jason Dillaman
/projects/rbd/issues? Thanks, -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "koukou73gr" To: ceph-users@lists.ceph.com Sent: Monday, March 2, 2015 7:16:08 AM Subject: [ceph-users] qemu-kvm and cloned rbd image Hello

Re: [ceph-users] import-diff requires snapshot exists?

2015-03-03 Thread Jason Dillaman
** rbd/small and backup/small are now consistent through snap2. import-diff automatically created backup/small@snap2 after importing all changes. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Steve Anthony" To:
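
The workflow implied above, sketched with the image names from this thread:

    rbd snap create rbd/small@snap2
    rbd export-diff --from-snap snap1 rbd/small@snap2 snap1-to-snap2.diff
    rbd import-diff snap1-to-snap2.diff backup/small   # also creates backup/small@snap2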

Re: [ceph-users] Rbd image's data deletion

2015-03-04 Thread Jason Dillaman
An RBD image is split up into (by default 4MB) objects within the OSDs. When you delete an RBD image, all the objects associated with the image are removed from the OSDs. The objects are not securely erased from the OSDs if that is what you are asking. -- Jason Dillaman Red Hat dilla

Re: [ceph-users] rbd: incorrect metadata

2015-04-13 Thread Jason Dillaman
s rbd_directory/rbd_children" to see the data within the files. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Matthew Monaco" To: ceph-users@lists.ceph.com Sent: Sunday, April 12, 2015 10:57:46 PM Subject: [ceph-use
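
Presumably the commands truncated above; a sketch for the default 'rbd' pool:

    rados -p rbd listomapvals rbd_directory
    rados -p rbd listomapvals rbd_children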

Re: [ceph-users] rbd: incorrect metadata

2015-04-13 Thread Jason Dillaman
Yes, when you flatten an image, the snapshots will remain associated to the original parent. This is a side-effect from how librbd handles CoW with clones. There is an open RBD feature request to add support for flattening snapshots as well. -- Jason Dillaman Red Hat dilla

Re: [ceph-users] rbd: incorrect metadata

2015-04-14 Thread Jason Dillaman
ldren object so that librbd no longer thinks any image is a child of another. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Matthew Monaco" To: "Jason Dillaman" Cc: ceph-users@lists.ceph.com Sent: Monday, Apri

Re: [ceph-users] hammer (0.94.1) - "image must support layering(38) Function not implemented" on v2 image

2015-04-20 Thread Jason Dillaman
Can you add "debug rbd = 20" your ceph.conf, re-run the command, and provide a link to the generated librbd log messages? Thanks, -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Nikola Ciprich" To: ceph-users

Re: [ceph-users] hammer (0.94.1) - "image must support layering(38) Function not implemented" on v2 image

2015-04-20 Thread Jason Dillaman
'--image-features' when creating the image? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Nikola Ciprich" To: "Jason Dillaman" Cc: ceph-users@lists.ceph.com Sent: Monday, April 20, 2015 12:41:26 PM

Re: [ceph-users] Use object-map Feature on existing rbd images ?

2015-04-29 Thread Jason Dillaman
into Hammer at some point in the future. Therefore, I would recommend waiting for the full toolset to become available. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Christoph Adomeit" To: ceph-users@lists.ceph.com Sent: Tuesda

Re: [ceph-users] RBD storage pool support in Libvirt not enabled on CentOS

2015-04-30 Thread Jason Dillaman
The issue appears to be tracked with the following BZ for RHEL 7: https://bugzilla.redhat.com/show_bug.cgi?id=1187533 -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Wido den Hollander" To: "Somnath Ro

Re: [ceph-users] wrong diff-export format description

2015-05-07 Thread Jason Dillaman
You are correct -- it is little endian like the other values. I'll open a ticket to correct the document. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Ultral" To: ceph-us...@ceph.com Sent: Thursday, May 7,

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-08 Thread Jason Dillaman
two snapshots and no trim operations released your changes back? If you diff from move2db24-20150428 to HEAD, do you see all your changes? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Ultral" To: "ceph-users&qu

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-12 Thread Jason Dillaman
a few kilobytes of deltas)? Also, would it be possible for you to create a new test image in the same pool, snapshot it, use 'rbd bench-write' to generate some data, and then verify that export-diff is working properly against the new image? -- Jason Dillaman Red Hat dilla..
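
A sketch of that verification sequence, with placeholder pool and image names:

    rbd create mypool/testimg --size 1024
    rbd snap create mypool/testimg@base
    rbd bench-write mypool/testimg --io-size 4096 --io-total 67108864
    rbd export-diff --from-snap base mypool/testimg testimg.diff   # expect roughly 64 MB of deltas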

Re: [ceph-users] RBD images -- parent snapshot missing (help!)

2015-05-13 Thread Jason Dillaman
/master/install/get-packages/#add-ceph-development -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Pavel V. Kaygorodov" To: "Tuomas Juntunen" Cc: "ceph-users" Sent: Tuesday, May 12, 2015 3:55:21 PM Subjec

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-14 Thread Jason Dillaman
e your issues on Giant and was unable to recreate it. I would normally ask for a log dump with 'debug rbd = 20', but given the size of your image, that log will be astronomically large. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message ---

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Jason Dillaman
th/to/my/new/ceph.conf" QEMU parameter where the RBD cache is explicitly disabled [2]. [1] http://git.qemu.org/?p=qemu.git;a=blob;f=block/rbd.c;h=fbe87e035b12aab2e96093922a83a3545738b68f;hb=HEAD#l478 [2] http://ceph.com/docs/master/rbd/qemu-rbd/#usage -- Jason Dillaman Red
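
Roughly what that drive specification looks like (image name and conf path are placeholders):

    qemu-system-x86_64 ... \
      -drive format=raw,file=rbd:rbd/vm-disk:conf=/etc/ceph/ceph-nocache.conf,cache=none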

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Jason Dillaman
the short-term, you can remove the "rbd cache" setting from your ceph.conf so that QEMU controls it (i.e. it cannot get overridden when reading the configuration file) or use a different ceph.conf for a drive which requires different cache settings from the default configuration's settings. Jason

Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-09 Thread Jason Dillaman
actor the current cache mutex into finer-grained locks. Jason

Re: [ceph-users] Best method to limit snapshot/clone space overhead

2015-07-24 Thread Jason Dillaman
cs (or can you gather any statistics) that indicate the percentage of block-size, zeroed extents within the clone images' RADOS objects? If there is a large amount of waste, it might be possible / worthwhile to optimize how RBD handles copy-on-write operations against the clone. -- Jas

Re: [ceph-users] Best method to limit snapshot/clone space overhead

2015-07-27 Thread Jason Dillaman
will locate all associated RADOS objects, download the objects one at a time, and perform a scan for fully zeroed blocks. It's not the most CPU efficient script, but it should get the job done. [1] http://fpaste.org/248755/43803526/ -- Jason Dillaman Red Hat Ceph Storage Engineering dilla

Re: [ceph-users] rbd rename snaps?

2015-08-12 Thread Jason Dillaman
There currently is no mechanism to rename snapshots without hex editing the RBD image header data structure. I created a new Ceph feature request [1] to add this ability in the future. [1] http://tracker.ceph.com/issues/12678 -- Jason Dillaman Red Hat Ceph Storage Engineering dilla

Re: [ceph-users] Rados: Undefined symbol error

2015-08-21 Thread Jason Dillaman
It sounds like you have rados CLI tool from an earlier Ceph release (< Hammer) installed and it is attempting to use the librados shared library from a newer (>= Hammer) version of Ceph. Jason - Original Message - > From: "Aakanksha Pudipeddi-SSI" > To: ceph
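
A quick way to check for that kind of mismatch (the last line assumes an RPM-based system):

    rados --version
    ldd $(which rados) | grep librados    # which librados.so actually gets loaded
    rpm -q ceph-common librados2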

Re: [ceph-users] rbd du

2015-08-24 Thread Jason Dillaman
That rbd CLI command is a new feature that will be included with the upcoming infernalis release. In the meantime, you can use this approach [1] to estimate your RBD image usage. [1] http://ceph.com/planet/real-size-of-a-ceph-rbd-image/ -- Jason Dillaman Red Hat Ceph Storage Engineering
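
The linked approach boils down to counting allocated data objects; a sketch with a hypothetical block_name_prefix:

    rbd info rbd/myimage | grep -E 'size|block_name_prefix'
    rados -p rbd ls | grep rbd_data.102674b0dc51 | wc -l
    # used space <= object count x object size (4 MB by default)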

Re: [ceph-users] How to disable object-map and exclusive features ?

2015-08-31 Thread Jason Dillaman
"thread apply all bt". With the gcore or backtrace method, we would need a listing of all the package versions installed on the machine to recreate a similar debug environment. Thanks, Jason - Original Message - > From: "Christoph Adomeit" > To: ceph-users

[ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
performance closer to native performance with 8K blocks? Thanks in advance. -- *Jason Villalta* Co-founder 800.799.4407x1230 | www.RubixTechnology.com

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
17, 2013 at 10:56 AM, Campbell, Bill < bcampb...@axcess-financial.com> wrote: > Windows default (NTFS) is a 4k block. Are you changing the allocation > unit to 8k as a default for your configuration? > > -- > *From: *"Gregory Farnum"

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
al location. Are your journals on separate disks or on the > same disk as the OSD? What is the replica size of your pool? > > -- > *From: *"Jason Villalta" > *To: *"Bill Campbell" > *Cc: *"Gregory Farnum" , "ceph-user

Re: [ceph-users] Disk partition and replicas

2013-09-17 Thread Jason Villalta
You can deploy an OSD to a folder using ceph-deploy: use ceph-deploy osd prepare host:/path On Sep 17, 2013 1:40 PM, "Jordi Arcas" wrote: > Hi! > I've a remote server with one unit where Ubuntu is installed. I can't > create another partition on the disk to install an OSD because it is mounted. > There

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
of >> clients, and if you don't force those 8k sync IOs (which RBD won't, >> unless the application asks for them by itself using directIO or >> frequent fsync or whatever) your performance will go way up. >> -Greg >> Software Engineer #42 @ http://inktank.com | h

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
directIO or > frequent fsync or whatever) your performance will go way up. > -Greg > Software Engineer #42 @ http://inktank.com | http://ceph.com > > > On Tue, Sep 17, 2013 at 1:47 PM, Jason Villalta > wrote: > > > > Here are the stats with direct io. > >

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
; > RADOS performance from what I've seen is largely going to hinge on replica > size and journal location. Are your journals on separate disks or on the > same disk as the OSD? What is the replica size of your pool? > > -- > *From: *"Jason Vi

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-17 Thread Jason Villalta
say it would make sense to just use SSD for the journal and a spindel disk for data and read. On Tue, Sep 17, 2013 at 5:12 PM, Jason Villalta wrote: > Here are the results: > > dd of=ddbenchfile if=/dev/zero bs=8K count=100 oflag=dsync > 819200 bytes (8.2 GB) copied, 266.87

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Jason Villalta
the speed be the same or would the read speed be a factor of 10 less than the speed of the underlying disk? On Wed, Sep 18, 2013 at 4:27 AM, Alex Bligh wrote: > > On 17 Sep 2013, at 21:47, Jason Villalta wrote: > > > dd if=ddbenchfile of=/dev/null bs=8K > > 819200

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Jason Villalta
Any other thoughts on this thread, guys? Am I just crazy to want near-native SSD performance on a small SSD cluster? On Wed, Sep 18, 2013 at 8:21 AM, Jason Villalta wrote: > That dd gives me this. > > dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K > 819200 bytes (8.

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-18 Thread Jason Villalta
1.1 GB) copied, 6.26289 s, 171 MB/s > dd if=/dev/zero of=1g bs=1M count=1024 oflag=dsync > 1024+0 records in > 1024+0 records out > 1073741824 bytes (1.1 GB) copied, 37.4144 s, 28.7 MB/s > > As you can see, latency is a killer. > > On Sep 18, 2013, at 3:23 PM, Jason Villalta

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-20 Thread Jason Villalta
those to pull from three SSD disks on a local machine at least as fast as one native SSD test. But I don't see that; it's actually slower. On Wed, Sep 18, 2013 at 4:02 PM, Jason Villalta wrote: > Thanks Mike, > High hopes, right ;) > > I guess we are not doing too bad compared to

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-20 Thread Jason Villalta
e, but assuming you want a solid synchronous / non-cached read, you > should probably specify 'iflag=direct'. > > On Friday, September 20, 2013, Jason Villalta wrote: > >> Mike, >> So I do have to ask, where would the extra latency be coming from if all >> my OSDs

Re: [ceph-users] Ceph performance with 8K blocks.

2013-09-20 Thread Jason Villalta
her testing > "dd performance" as opposed to "using dd to test performance") if the > concern is what to expect for your multi-tenant vm block store. > > Personally, I get more bugged out over many-thread random read throughput > or synchronous write latency. > &

Re: [ceph-users] About Ceph SSD and HDD strategy

2013-10-07 Thread Jason Villalta
ach could have the most > >> advantage. > >> > >> Your point of view would definitely help me. > >> > >> Sincerely, > >> Martin > >> > >> -- > >> Martin Catudal > >> Responsable TIC > >> Ressources Me

Re: [ceph-users] About Ceph SSD and HDD strategy

2013-10-07 Thread Jason Villalta
I found this without much effort. http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/ On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta wrote: > I also would be interested in how bcache or flashcache would integrate. > > > On Mon, Oct 7, 2013 at 11:3

Re: [ceph-users] About Ceph SSD and HDD strategy

2013-10-07 Thread Jason Villalta
caching for writes. On Mon, Oct 7, 2013 at 11:43 AM, Jason Villalta wrote: > I found this without much effort. > > http://www.sebastien-han.fr/blog/2012/11/15/make-your-rbd-fly-with-flashcache/ > > > On Mon, Oct 7, 2013 at 11:39 AM, Jason Villalta wrote: > >> I also

Re: [ceph-users] Dumpling ceph.conf looks different

2013-10-09 Thread Jason Villalta
I have noticed this as well when using ceph-deploy to configure ceph. From what I can tell it just creates symlinks from the default osd location at /var/lib/ceph. Same for the journal. If it is on a different device, a symlink is created from the dir. Then it appears the osds are just defined i

Re: [ceph-users] RBD command crash & can't delete volume!

2014-11-07 Thread Jason Dillaman
that issue? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Chu Duc Minh" To: ceph-de...@vger.kernel.org, "ceph-users@lists.ceph.com >> ceph-users@lists.ceph.com" Sent: Friday, November 7, 2014 7:05:5

Re: [ceph-users] RBD - possible to query "used space" of images/clones ?

2014-11-07 Thread Jason Dillaman
In the longer term, there is an in-progress RBD feature request to add a new RBD command to see image disk usage: http://tracker.ceph.com/issues/7746 -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: "Sébastien Han" T

Re: [ceph-users] error adding OSD to crushmap

2015-01-13 Thread Jason King
Hi Luis, Could you show us the output of *ceph osd tree*? Jason 2015-01-12 20:45 GMT+08:00 Luis Periquito : > Hi all, > > I've been trying to add a few new OSDs, and as I manage everything with > puppet, it was manually adding via the CLI. > > At one point it adds t

Re: [ceph-users] Different flavors of storage?

2015-01-23 Thread Jason King
Hi Don, Take a look at CRUSH settings. http://ceph.com/docs/master/rados/operations/crush-map/ Jason 2015-01-22 2:41 GMT+08:00 Don Doerner : > OK, I've set up 'giant' in a single-node cluster, played with a replicated > pool and an EC pool. All goes well so far.

[ceph-users] pg_num not being set to ceph.conf default when creating pool via python librados

2015-01-26 Thread Jason Anderson
g_num pg_num: 8 Has anyone else run into this issue? Am I missing something? I know I could just spawn a subprocess call to the ceph command line utility, but I would like to avoid that in the name of a cleaner python integration. Your assistance is greatly appreciated. Thank you, - Jason

Re: [ceph-users] pg_num not being set to ceph.conf default when creating pool via python librados

2015-01-26 Thread Jason Anderson
, -Jason From: Gregory Farnum [mailto:g...@gregs42.com] Sent: Monday, January 26, 2015 10:09 AM To: Jason Anderson; ceph-users@lists.ceph.com Subject: Re: [ceph-users] pg_num not being set to ceph.conf default when creating pool via python librados Just from memory, I think these values are only used

Re: [ceph-users] pg_num not being set to ceph.conf default when creating pool via python librados

2015-01-26 Thread Jason Anderson
] sections. Greg: Thank you for your help on this, I really appreciate it! -Jason -Original Message- From: Gregory Farnum [mailto:g...@gregs42.com] Sent: Monday, January 26, 2015 1:17 PM To: Jason Anderson Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] pg_num not being set to

[ceph-users] Rbd image performance

2013-12-12 Thread Jason Villalta
Has anyone tried scaling a VM's IO by adding additional disks and striping them in the guest OS? I am curious what effect this would have on IO performance?

Re: [ceph-users] Rbd image performance

2013-12-16 Thread Jason Villalta
Thanks for the info everyone. On Dec 16, 2013 1:23 AM, "Kyle Bader" wrote: > >> Has anyone tried scaling a VMs io by adding additional disks and > >> striping them in the guest os? I am curious what effect this would have > >> on io performance? > > > Why would it? You can also change the stripe

Re: [ceph-users] Ceph Performance

2014-01-09 Thread Jason Villalta
> feel I should be getting significantly more from ceph than what I am able > to. > > Of course, as soon as bcache stops providing benefits (ie data is pushed > out of the SSD cache) then the raw performance drops to a standard SATA > drive of around 120 IOPS. > > Regards > --

Re: [ceph-users] Useful visualizations / metrics

2014-04-12 Thread Jason Villalta
Just looking for some suggestions. Thanks! -- *Jason Villalta* Co-founder 800.799.44

Re: [ceph-users] Useful visualizations / metrics

2014-04-12 Thread Jason Villalta
OSDs/Nodes. I am not sure there is a specific metric in ceph for this but it would be awesome if there was. On Sat, Apr 12, 2014 at 10:37 AM, Greg Poirier wrote: > Curious as to how you define cluster latency. > > > On Sat, Apr 12, 2014 at 7:21 AM, Jason Villalta wrote: > &g

Re: [ceph-users] How to disable object-map and exclusive features ?

2015-09-04 Thread Jason Dillaman
> I have a coredump with the size of 1200M compressed . > > Where shall i put the dump ? > I believe you can use the ceph-post-file utility [1] to upload the core and your current package list to ceph.com. Jason [1] http://ceph.com/docs/master/man/8/ce
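
Something along these lines (the description text is illustrative):

    ceph-post-file -d 'librbd coredump, object-map disable crash' core.gz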

Re: [ceph-users] crash on rbd bench-write

2015-09-04 Thread Jason Dillaman
ench-write with a Ceph Hammer-release client? -- Jason > Hiya. Playing with a small ceph setup from the Quick start documentation. > > Seeing an issue running rbd bench-write. Initial trace is provided > below, let me know if you need other information. FWIW the rados bench > works

Re: [ceph-users] crash on rbd bench-write

2015-09-08 Thread Jason Dillaman
ed, still trying to grok how > things should go together. You would execute bench-write just as you did. I am just saying there is no reason to map the rbd image via the kernel RBD driver (i.e. no need to run 'rbd map' prior to executing the bench-write command). Jason _

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-21 Thread Jason Dillaman
This is usually indicative of the same tracepoint event being included by both a static and dynamic library. See the following thread regarding this issue within Ceph when LTTng-ust was first integrated [1]. Since I don't have any insight into your application, are you somehow linking against

Re: [ceph-users] rbd and exclusive lock feature

2015-09-22 Thread Jason Dillaman
ifying the image while at the same time not crippling other use cases. librbd also supports cooperative exclusive lock transfer, which is used in the case of qemu VM migrations where the image needs to be opened R/W by two clients at the same time. -- Jason Dillaman - Original Mes

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-22 Thread Jason Dillaman
You can run the program under 'gdb' with a breakpoint on the 'abort' function to catch the program's abnormal exit. Assuming you have debug symbols installed, you should hopefully be able to see which probe is being re-registered. -- Jason Dillaman - Orig
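
For example (program name and arguments are placeholders):

    gdb --args ./your_app --your-args
    (gdb) break abort
    (gdb) run
    (gdb) bt    # the backtrace shows which probe registration aborted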

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-22 Thread Jason Dillaman
As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph project only due to the fact that EPEL 7 doesn't provide the required packages [1]. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1235461 -- Jason Dillaman - Original Message - > From: "Paul Man

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-22 Thread Jason Dillaman
> On 22/09/15 17:46, Jason Dillaman wrote: > > As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph > > project only due to the fact that EPEL 7 doesn't provide the required > > packages [1]. > > interesting. so basically our program migh

Re: [ceph-users] rbd and exclusive lock feature

2015-09-22 Thread Jason Dillaman
ourself. The new exclusive-lock feature is managed via 'rbd feature enable/disable' commands and does ensure that only the current lock owner can manipulate the RBD image. It was introduced to support the RBD object map feature (which can track which backing RADOS objects are in-use in order
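
For example (image name is a placeholder; object-map additionally requires exclusive-lock):

    rbd feature enable rbd/myimage exclusive-lock
    rbd feature enable rbd/myimage object-map
    rbd feature disable rbd/myimage object-map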

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-23 Thread Jason Dillaman
It looks like the issue you are experiencing was fixed in the Infernalis/master branches [1]. I've opened a new tracker ticket to backport the fix to Hammer [2]. -- Jason Dillaman [1] https://github.com/sponce/ceph/commit/e4c27d804834b4a8bc495095ccf5103f8ffbcc1e [2]

Re: [ceph-users] rbd map failing for image with exclusive-lock feature

2015-09-24 Thread Jason Dillaman
approach via "rbd lock add/remove" to verify that no other client has the image mounted before attempting to mount it locally. -- Jason Dillaman - Original Message - > From: "Allen Liao" > To: ceph-users@lists.ceph.com > Sent: Wednesday, September 23, 201

Re: [ceph-users] possibility to delete all zeros

2015-10-02 Thread Jason Dillaman
est and your cleanup operation. -- Jason - Original Message - > From: "Stefan Priebe - Profihost AG" > To: ceph-users@lists.ceph.com > Sent: Friday, October 2, 2015 8:16:52 AM > Subject: [ceph-users] possibility to delete all zeros > Hi, > we accidentally

Re: [ceph-users] Annoying libust warning on ceph reload

2015-10-08 Thread Jason Dillaman
isn't enabled. [1] https://github.com/ceph/ceph/pull/6135 -- Jason Dillaman - Original Message - > From: "Ken Dreyer" > To: "Goncalo Borges" > Cc: ceph-users@lists.ceph.com > Sent: Thursday, October 8, 2015 11:58:27 AM > Subject: Re: [ceph-users] A

Re: [ceph-users] how to get cow usage of a clone

2015-10-09 Thread Jason Dillaman
mental, you could install the infernalis-based rbd tools from the Ceph gitbuilder [1] into a sandbox environment and use the tool against your pre-infernalis cluster. [1] http://ceph.com/gitbuilder.cgi -- Jason Dillaman - Original Message - > From: "Corin Langosch" >

Re: [ceph-users] How expensive are 'rbd ls' and 'rbd snap ls' calls?

2015-10-12 Thread Jason Dillaman
o the object, so they will be read via LevelDB or RocksDB (depending on your configuration) within the object's PG's OSD. -- Jason Dillaman - Original Message - > From: "Allen Liao" > To: ceph-users@lists.ceph.com > Sent: Monday, October 12, 2015 2:52

Re: [ceph-users] Ceph journal - isn't it a bit redundant sometimes?

2015-10-14 Thread Jason Dillaman
ite operations by decoupling objects from the underlying filesystem's actual storage path. [1] https://github.com/ceph/ceph/blob/master/doc/rados/configuration/journal-ref.rst -- Jason Dillaman

Re: [ceph-users] Ceph journal - isn't it a bit redundant sometimes?

2015-10-19 Thread Jason Dillaman
uncate, overwrite, etc). -- Jason Dillaman

Re: [ceph-users] How ceph client abort IO

2015-10-20 Thread Jason Dillaman
There is no such interface currently on the librados / OSD side to abort IO operations. Can you provide some background on your use-case for aborting in-flight IOs? -- Jason Dillaman - Original Message - > From: "min fang" > To: ceph-users@lists.ceph.com > Se

Re: [ceph-users] rbd export hangs / does nothing without regular drop_cache

2015-10-20 Thread Jason Dillaman
Can you provide more details on your setup and how you are running the rbd export? If clearing the pagecache, dentries, and inodes solves the issue, it sounds like it's outside of Ceph (unless you are exporting to a CephFS or krbd mount point). -- Jason Dillaman - Original Me

Re: [ceph-users] How ceph client abort IO

2015-10-21 Thread Jason Dillaman
> On Tue, 20 Oct 2015, Jason Dillaman wrote: > > There is no such interface currently on the librados / OSD side to abort > > IO operations. Can you provide some background on your use-case for > > aborting in-flight IOs? > > The internal Objecter has a cancel interf

Re: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4

2015-10-21 Thread Jason Dillaman
] http://tracker.ceph.com/issues/13559 -- Jason Dillaman - Original Message - > From: "Andrei Mikhailovsky" > To: ceph-us...@ceph.com > Sent: Wednesday, October 21, 2015 8:17:39 AM > Subject: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4 > Hello

Re: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4

2015-10-21 Thread Jason Dillaman
command-line properties [1]. If you have "rbd cache = true" in your ceph.conf, it would override "cache=none" in your qemu command-line. [1] https://lists.nongnu.org/archive/html/qemu-devel/2015-06/msg03078.html -- Jason Dillaman

Re: [ceph-users] how to understand deep flatten implementation

2015-10-22 Thread Jason Dillaman
afe to detach a clone from a parent image even if snapshots exist due to the changes to copyup. -- Jason Dillaman - Original Message - > From: "Zhongyan Gu" > To: dilla...@redhat.com > Sent: Thursday, October 22, 2015 5:11:56 AM > Subject: how to understand deep

Re: [ceph-users] how to understand deep flatten implementation

2015-10-23 Thread Jason Dillaman
ter flatten, child > snapshot still has parent snap info? > overlap: 1024 MB Because deep-flatten wasn't enabled on the clone. > Another question is since deep-flatten operations are applied to cloned > image, why we need to create p

Re: [ceph-users] Not possible to remove cache tier with RBDs open?

2015-10-26 Thread Jason Dillaman
would immediately race to re-establish the lost watch/notify connection before you could disassociate the cache tier. -- Jason Dillaman - Original Message - > From: "Robert LeBlanc" > To: ceph-users@lists.ceph.com > Sent: Monday, October 26, 2015 12:22:06 PM > Subject

Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-27 Thread Jason Dillaman
> Hi Jason Dillaman, > Recently I worked on the feature http://tracker.ceph.com/issues/13500 ; when > I read the librbd code, I was confused by the RBD_FLAG_OBJECT_MAP_INVALID > flag. > When I create an rbd with "--image-features = 13", we enable the object-map > featu

Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-28 Thread Jason Dillaman
r its been enabled. -- Jason Dillaman
