[ceph-users] Is it still unsafe to map a RBD device on an OSD server?

2014-06-10 Thread Sebastien Han
Hi all, A couple of years ago, I heard that it wasn’t safe to map a krbd block device on an OSD host. It was more or less like mounting an NFS export on the NFS server: you can potentially end up with deadlocks. Still, I tried again recently and didn’t encounter any problem. What do you think?

Re: [ceph-users] question about feature set mismatch

2014-06-10 Thread Sebastien Han
FYI I encountered the same problem for krbd, removing the ec pool didn’t solve my problem. I’m running 3.13 Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008

Re: [ceph-users] Is it still unsafe to map a RBD device on an OSD server?

2014-06-10 Thread Sebastien Han
John > > > On Tue, Jun 10, 2014 at 9:51 AM, Jean-Charles LOPEZ > wrote: > Hi Sébastien, > > still the case. Depending on what you do, the OSD process will get to a hang > and will suicide. > > Regards > JC > > On Jun 10, 2014, at 09:46, Sebastien Han wrote

Re: [ceph-users] qemu image create failed

2014-07-15 Thread Sebastien Han
Can you connect to your Ceph cluster? You can pass options to the cmd line like this: $ qemu-img create -f rbd rbd:instances/vmdisk01:id=leseb:conf=/etc/ceph/ceph-leseb.conf 2G Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99
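For reference, a quick way to check that the client side can actually reach the cluster with the same user and conf file before blaming qemu-img (a rough sketch; the user name and conf path are the ones from the example above):
    $ ceph -s --id leseb -c /etc/ceph/ceph-leseb.conf            # is the cluster reachable with this user?
    $ rbd ls instances --id leseb -c /etc/ceph/ceph-leseb.conf   # can this user see the target pool?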

Re: [ceph-users] Moving Journal to SSD

2014-08-11 Thread Sebastien Han
Hi Dane, If you deployed with ceph-deploy, you will see that the journal is just a symlink. Take a look at /var/lib/ceph/osd//journal The link should point to the first partition of your hard drive, so there is no filesystem for the journal, just a block device. Roughly you should try: create N pa
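The rough sequence looks like this (a sketch only; osd id 0 and the SSD partition /dev/sdX1 are made up, and the sysvinit service syntax is the one used on these releases):
    $ sudo service ceph stop osd.0                               # stop the OSD
    $ sudo ceph-osd -i 0 --flush-journal                         # flush the existing journal to the data store
    $ sudo ln -sf /dev/sdX1 /var/lib/ceph/osd/ceph-0/journal     # repoint the symlink to the SSD partition
    $ sudo ceph-osd -i 0 --mkjournal                             # create the new journal
    $ sudo service ceph start osd.0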

[ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-28 Thread Sebastien Han
Hey all, It has been a while since the last performance-related thread on the ML :p I’ve been running some experiments to see how much I can get from an SSD on a Ceph cluster. To achieve that I did something pretty simple: * Debian wheezy 7.6 * kernel from debian 3.14-0.bpo.2-amd64 * 1 cluster, 3

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-29 Thread Sebastien Han
your ceph setting) didn’t bring much; now I can reach 3.5K IOPS. By any chance, would it be possible for you to test with a single SSD OSD? On 28 Aug 2014, at 18:11, Sebastien Han wrote: > Hey all, > > It has been a while since the last performance-related thread on the ML :p > I’ve

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-29 Thread Sebastien Han
@Dan: thanks for sharing your config; with all your flags I don’t seem to get more than 3.4K IOPS, and they even seem to slow me down :( This is really weird. Yes, I already tried to run two simultaneous processes, and each of them only got half of the 3.4K. @Kasper: thanks for these results, I believe

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-01 Thread Sebastien Han
Mark, thanks a lot for running this experiment for me. I’m gonna try master soon and will tell you how much I can get. It’s interesting to see that using 2 SSDs brings more performance, even though both SSDs are under-utilized… They should be able to sustain both loads at the same time (journal and osd

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-01 Thread Sebastien Han
On 01 Sep 2014, at 11:13, Sebastien Han wrote: > Mark, thanks a lot for experimenting this for me. > I’m gonna try master soon and will tell you how much I can get. > > It’s interesting to see that using 2 SSDs brings up more performance, even > both SSDs are under-utilized… > Th

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Sebastien Han
lesystem write syncs) > > > > > - Mail original - > > De: "Sebastien Han" > À: "Somnath Roy" > Cc: ceph-users@lists.ceph.com > Envoyé: Mardi 2 Septembre 2014 02:19:16 > Objet: Re: [ceph-users] [Single OSD performance on SSD] Can&

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Sebastien Han
say that we experience the same limitations (ceph level). @Cédric, yes I did and what fio was showing was consistent with the iostat output, same goes for disk utilisation. On 02 Sep 2014, at 12:44, Cédric Lemarchand wrote: > Hi Sebastian, > >> Le 2 sept. 2014 à 10:41, Sebastien

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Sebastien Han
Mail original - > > De: "Sebastien Han" > À: "Cédric Lemarchand" > Cc: "Alexandre DERUMIER" , ceph-users@lists.ceph.com > Envoyé: Mardi 2 Septembre 2014 13:59:13 > Objet: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Sebastien Han
try to bench them with firefly and master. > > Is a debian wheezy gitbuilder repository available ? (I'm a bit lazy to > compile all packages) > > > ----- Mail original - > > De: "Sebastien Han" > À: "Alexandre DERUMIER" > Cc: ceph-user

Re: [ceph-users] docker + coreos + ceph

2014-09-03 Thread Sebastien Han
Well done! Gonna test this :) On 03 Sep 2014, at 11:24, Marco Garcês wrote: > Amazing work, will test it as soon as I can! > Thanks > > > Marco Garcês > #sysadmin > Maputo - Mozambique > [Phone] +258 84 4105579 > [Skype] marcogarces > > > On Wed, Sep 3, 2014 at 3:20 AM, David Moreau Simard

Re: [ceph-users] script for commissioning a node with multiple osds, added to cluster as a whole

2014-09-03 Thread Sebastien Han
Or Ansible: https://github.com/ceph/ceph-ansible On 29 Aug 2014, at 20:24, Olivier DELHOMME wrote: > Hello, > > - Mail original - >> De: "Chad Seys" >> À: ceph-users@lists.ceph.com >> Envoyé: Vendredi 29 Août 2014 18:53:19 >> Objet: [ceph-users] script for commissioning a node with mu

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-03 Thread Sebastien Han
tainly evident on > longer runs or lots of historical data on the drives. The max transaction > time looks pretty good for your test. Something to consider though. > > Warren > > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-08 Thread Sebastien Han
he test parameters you gave > below, but they're worth keeping in mind. > > -- > Warren Wang > Comcast Cloud (OpenStack) > > > From: Cedric Lemarchand > Date: Wednesday, September 3, 2014 at 5:14 PM > To: "ceph-users@lists.ceph.com" > Subject: Re: [cep

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-16 Thread Sebastien Han
>>> fio randwrite : bw=11658KB/s, iops=2914 >>> fio randread : bw=38642KB/s, iops=9660 >>> 0.85 + osd_enable_op_tracker=false >>> --- >>> fio randwrite : bw=11630KB/s, iops=2907

Re: [ceph-users] vdb busy error when attaching to instance

2014-09-16 Thread Sebastien Han
Did you follow this ceph.com/docs/master/rbd/rbd-openstack/ to configure your env? On 12 Sep 2014, at 14:38, m.channappa.nega...@accenture.com wrote: > Hello Team, > > I have configured ceph as a multibackend for openstack. > > I have created 2 pools . > 1. Volumes (replication size =3

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-23 Thread Sebastien Han
users-boun...@lists.ceph.com] On Behalf Of > Sebastien Han > Sent: Tuesday, September 16, 2014 9:33 PM > To: Alexandre DERUMIER > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K > IOPS > > Hi, > > Th

Re: [ceph-users] OSD on LVM volume

2015-02-24 Thread Sebastien Han
A while ago, I managed to get this working but it was really tricky. See my comment here: https://github.com/ceph/ceph-ansible/issues/9#issuecomment-37127128 One use case I had was a system with 2 SSDs for the OS and a couple of OSDs. Both SSDs were in RAID 1 and the system was configured with lv

Re: [ceph-users] Sparse RBD instance snapshots in OpenStack

2015-03-12 Thread Sebastien Han
Several patches aim to solve that by using RBD snapshots instead of QEMU snapshots. Unfortunately I doubt we will have something ready for OpenStack Juno. Hopefully Liberty will be the release that fixes that. Having RAW images is not that bad since booting from that snapshot will do a clone. So

Re: [ceph-users] ceph cluster on docker containers

2015-03-29 Thread Sebastien Han
You can have a look at: https://github.com/ceph/ceph-docker > On 23 Mar 2015, at 17:16, Pavel V. Kaygorodov wrote: > > Hi! > > I'm using ceph cluster, packed to a number of docker containers. > There are two things, which you need to know: > > 1. Ceph OSDs are using FS attributes, which may no

[ceph-users] Ceph recovery network?

2015-04-26 Thread Sebastien Han
Hi list, While reading this http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks, I came across the following sentence: "You can also establish a separate cluster network to handle OSD heartbeat, object replication and recovery traffic” I didn’t know it was possib

Re: [ceph-users] Ceph recovery network?

2015-04-27 Thread Sebastien Han
u specify 'cluster network = ' in your ceph.conf. > It is useful to remember that replication, recovery and backfill > traffic are pretty much the same thing, just at different points in > time. > > On Sun, Apr 26, 2015 at 4:39 PM, Sebastien Han > wrote: >> Hi list
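For reference, the corresponding ceph.conf bits look roughly like this (the subnets are made up):
    [global]
        public network  = 192.168.0.0/24    # client <-> MON/OSD traffic
        cluster network = 192.168.1.0/24    # replication, recovery/backfill and OSD heartbeats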

Re: [ceph-users] Ceph is Full

2015-04-28 Thread Sebastien Han
You can try to push the full ratio a bit further and then delete some objects. > On 28 Apr 2015, at 15:51, Ray Sun wrote: > > More detail about ceph health detail > [root@controller ~]# ceph health detail > HEALTH_ERR 20 pgs backfill_toofull; 20 pgs degraded; 20 pgs stuck unclean; > recovery 74
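On the hammer-era releases discussed in this thread, pushing the ratios temporarily looks roughly like this (the values are only examples, and this just buys room to delete data):
    $ ceph pg set_full_ratio 0.98                                   # raise the cluster full ratio
    $ ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.97'   # let the backfill_toofull PGs move again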

Re: [ceph-users] Ceph is Full

2015-04-29 Thread Sebastien Han
With mon_osd_full_ratio you should restart the monitors, and this shouldn’t be a problem. For the unclean PGs, it looks like something is preventing them from becoming healthy; look at the state of the OSDs responsible for these 2 PGs. > On 29 Apr 2015, at 05:06, Ray Sun wrote: > > mon osd full ratio Chee

Re: [ceph-users] Find out the location of OSD Journal

2015-05-11 Thread Sebastien Han
Under the OSD directory, you can look at where the symlink points. It is generally called ‘journal’ and it should point to a device. > On 06 May 2015, at 06:54, Patrik Plank wrote: > > Hi, > > i cant remember on which drive I install which OSD journal :-|| > Is there any command to show this? > >
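A quick way to check, assuming the usual /var/lib/ceph layout (the osd id below is just an example):
    $ readlink -f /var/lib/ceph/osd/ceph-0/journal    # prints the backing device, e.g. a /dev/sd* partition
    $ ls -l /var/lib/ceph/osd/*/journal               # same check for every OSD on the node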

Re: [ceph-users] rbd unmap command hangs when there is no network connection with mons and osds

2015-05-12 Thread Sebastien Han
Should we put a timeout on the unmap command in the RBD RA in the meantime? > On 08 May 2015, at 15:13, Vandeir Eduardo wrote: > > Wouldn't it be better to have a configuration option named (map|unmap)_timeout? Because we are > talking about the map/unmap of an RBD device, not the mount/unmount of a file > system. >

Re: [ceph-users] Expanding a ceph cluster with ansible

2015-06-22 Thread Sebastien Han
Hi Bryan, It shouldn’t be a problem for ceph-ansible to expand a cluster even if it wasn’t deployed with it. I believe this requires a bit of tweaking in ceph-ansible, but not much. Can you elaborate on what went wrong and perhaps how you configured ceph-ansible? As far as I understoo

Re: [ceph-users] Expanding a ceph cluster with ansible

2015-06-24 Thread Sebastien Han
e. I should probably rework this part a little bit with an easier declaration though... > Thanks, > Bryan > > On 6/22/15, 7:09 AM, "Sebastien Han" wrote: > >> Hi Bryan, >> >> It shouldn’t be a problem for ceph-ansible to expand a cluster even if it >

Re: [ceph-users] Nova with Ceph generate error

2015-07-10 Thread Sebastien Han
Which request generated this trace? Is it nova-compute log? > On 10 Jul 2015, at 07:13, Mario Codeniera wrote: > > Hi, > > It is my first time here. I am just having an issue regarding with my > configuration with the OpenStack which works perfectly for the cinder and the > glance based on K

[ceph-users] RADOS Bench strange behavior

2013-07-09 Thread Sebastien Han
Hi all, While running some benchmarks with the internal rados benchmarker I noticed something really strange. First of all, this is the line I used to run it: $ sudo rados -p 07:59:54_performance bench 300 write -b 4194304 -t 1 --no-cleanup So I want to test an IO with a concurrency of 1. I had a look

Re: [ceph-users] RADOS Bench strange behavior

2013-07-09 Thread Sebastien Han
vance.com – Skype : han.sbastien Address : 10, rue de la Victoire – 75009 Paris Web : www.enovance.com – Twitter : @enovance On Jul 9, 2013, at 1:11 PM, Mark Nelson <mark.nel...@inktank.com> wrote: On 07/09/2013 03:20 AM, Sebastien Han wrote: Hi all, While running some benchmarks with the internal rado

Re: [ceph-users] RADOS Bench strange behavior

2013-07-09 Thread Sebastien Han
44 70 Email : sebastien@enovance.com – Skype : han.sbastien Address : 10, rue de la Victoire – 75009 Paris Web : www.enovance.com – Twitter : @enovance On Jul 9, 2013, at 2:19 PM, Mark Nelson <mark.nel...@inktank.com> wrote: On 07/09/2013 06:47 AM, Sebastien Han wrote: Hi Mark, Yes write back

Re: [ceph-users] Openstack on ceph rbd installation failure

2013-07-23 Thread Sebastien Han
Can you send your ceph.conf too? Is /etc/ceph/ceph.conf present? Is the key of the volumes user present too? Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70 Email : sebastien@enovance.com – Skype : han.sbastienAd

Re: [ceph-users] RBD Mapping

2013-07-23 Thread Sebastien Han
Hi Greg, Just tried the list watchers on an RBD with the QEMU driver and I got: root@ceph:~# rados -p volumes listwatchers rbd_header.789c2ae8944a watcher=client.30882 cookie=1 I also tried with the kernel module but didn't see anything… No IP addresses anywhere… :/, any idea? Nice tip btw :) Sébastie

Re: [ceph-users] RBD Mapping

2013-07-23 Thread Sebastien Han
enovance.com – Skype : han.sbastien Address : 10, rue de la Victoire – 75009 Paris Web : www.enovance.com – Twitter : @enovance On Jul 24, 2013, at 12:08 AM, Gregory Farnum <g...@inktank.com> wrote: On Tue, Jul 23, 2013 at 2:55 PM, Sebastien Han <sebastien@enovance.com> wrote: Hi Greg, Just tried the list

Re: [ceph-users] presentation videos from Ceph Day London?

2013-10-31 Thread Sebastien Han
Nothing has been recorded as far as I know. However I’ve seen some guys from Scality recording sessions with a cam. Scality? Are you there? :) Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Add

Re: [ceph-users] Intel 520/530 SSD for ceph

2013-11-22 Thread Sebastien Han
> I used a blocksize of 350k as my graphes shows me that this is the > average workload we have on the journal. Pretty interesting metric Stefan. Has anyone seen the same behaviour? Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72

Re: [ceph-users] LevelDB Backend For Ceph OSD Preview

2013-11-25 Thread Sebastien Han
Nice job Haomai! Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web : www.enovance.com - Twitter : @enovance On 25 Nov 2013, at 02:50, Haomai Wa

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-25 Thread Sebastien Han
Hi, 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/) This has been in production for more than a year now and was heavily tested beforehand. Performance was not a concern since the frontend servers mainly do reads (90%). Cheers. Sébastien Han Cloud Engineer "Always give 100%

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-25 Thread Sebastien Han
> > On Mon, Nov 25, 2013 at 4:39 AM, Sebastien Han > wrote: > Hi, > > 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/) > > This has been in production for more than a year now and heavily tested > before. > Performance was

Re: [ceph-users] LevelDB Backend For Ceph OSD Preview

2013-11-26 Thread Sebastien Han
m Address : 10, rue de la Victoire - 75009 Paris Web : www.enovance.com - Twitter : @enovance On 25 Nov 2013, at 10:00, Sebastien Han wrote: > Nice job Haomai! > > > Sébastien Han > Cloud Engineer > > "Always give 100%. Unless you're giving blood.”

Re: [ceph-users] how to Testing cinder and glance with CEPH

2013-11-26 Thread Sebastien Han
Hi, Well after restarting the services run: $ cinder create 1 Then you can check both status in Cinder and Ceph: For Cinder run: $ cinder list For Ceph run: $ rbd -p ls If the image is there, you’re good. Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving
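Putting it together (the pool name ‘volumes’ is an assumption, since the name was cut from the excerpt above):
    $ cinder create --display-name test-vol 1     # create a 1GB test volume
    $ cinder list                                 # status should reach "available"
    $ rbd -p volumes ls                           # a volume-<uuid> image should appear in the pool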

Re: [ceph-users] Docker

2013-11-29 Thread Sebastien Han
Hi guys! Some experiment here: http://www.sebastien-han.fr/blog/2013/09/19/how-I-barely-got-my-first-ceph-mon-running-in-docker/ Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, ru

Re: [ceph-users] rbd + openstack nova instance snapshots?

2014-09-30 Thread Sebastien Han
Hi, Unfortunately this is expected. If you take a snapshot you should not expect a clone but a RBD snapshot. Please see this BP: https://blueprints.launchpad.net/nova/+spec/implement-rbd-snapshots-instead-of-qemu-snapshots A major part of the code is ready, however we missed nova-specs feature

Re: [ceph-users] rbd + openstack nova instance snapshots?

2014-10-01 Thread Sebastien Han
On 01 Oct 2014, at 15:26, Jonathan Proulx wrote: > On Wed, Oct 1, 2014 at 2:57 AM, Sebastien Han > wrote: >> Hi, >> >> Unfortunately this is expected. >> If you take a snapshot you should not expect a clone but a RBD snapshot. > > Unfortunate that it doesn&

Re: [ceph-users] RBD on openstack glance+cinder CoW?

2014-10-08 Thread Sebastien Han
Hum I just tried on a devstack and on firefly stable, it works for me. Looking at your config it seems that the glance_api_version=2 is put in the wrong section. Please move it to [DEFAULT] and let me know if it works. On 08 Oct 2014, at 14:28, Nathan Stratton wrote: > On Tue, Oct 7, 2014 at 5
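For reference, the setting belongs in the [DEFAULT] section of cinder.conf rather than in a backend section, roughly:
    [DEFAULT]
    glance_api_version = 2
Restart the cinder services afterwards so the change is picked up.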

Re: [ceph-users] Micro Ceph summit during the OpenStack summit

2014-10-13 Thread Sebastien Han
Hey all, I just saw this thread, I’ve been working on this and was about to share it: https://etherpad.openstack.org/p/kilo-ceph Since the ceph etherpad is down I think we should switch to this one as an alternative. Loic, feel free to work on this one and add more content :). On 13 Oct 2014,

Re: [ceph-users] Performance doesn't scale well on a full ssd cluster.

2014-10-16 Thread Sebastien Han
Mark, please read this: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html On 16 Oct 2014, at 19:19, Mark Wu wrote: > > Thanks for the detailed information. but I am already using fio with rbd > engine. Almost 4 volumes can reach the peak. > > 2014 年 10 月 17 日 上午 1:03于 wud.

Re: [ceph-users] All SSD storage and journals

2014-10-27 Thread Sebastien Han
There were some investigations as well around F2FS (https://www.kernel.org/doc/Documentation/filesystems/f2fs.txt); the last time I tried to install an OSD dir under f2fs it failed. I tried to run the OSD on f2fs, however ceph-osd mkfs got stuck on an xattr test: fremovexattr(10, "user.test@5848273

Re: [ceph-users] Tool or any command to inject metadata/data corruption on rbd

2014-12-04 Thread Sebastien Han
AFAIK there is no tool to do this. You simply rm the object, or dd new content into the object (fill it with zeros). > On 04 Dec 2014, at 13:41, Mallikarjun Biradar > wrote: > > Hi all, > > I would like to know which tool or cli that all users are using to simulate > metadata/data corruption. > This is
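A minimal sketch of the dd approach (the pool and object names are made up):
    $ rados -p rbd get rbd_data.1234.0000000000000000 /tmp/obj     # pull the victim object
    $ dd if=/dev/zero of=/tmp/obj bs=4k count=1 conv=notrunc       # zero the first 4k without truncating
    $ rados -p rbd put rbd_data.1234.0000000000000000 /tmp/obj     # push the corrupted copy back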

Re: [ceph-users] Suitable SSDs for journal

2014-12-04 Thread Sebastien Han
Eneko, I do plan to push a performance initiative section to ceph.com/docs sooner or later so people can submit their own results through GitHub PRs. > On 04 Dec 2014, at 16:09, Eneko Lacunza wrote: > > Thanks, will look back in the list archive. > > On 04/12/14 15:47, Nick Fisk wrot

Re: [ceph-users] Watch for fstrim running on your Ubuntu systems

2014-12-09 Thread Sebastien Han
Good to know. Thanks for sharing! > On 09 Dec 2014, at 10:21, Wido den Hollander wrote: > > Hi, > > Last sunday I got a call early in the morning that a Ceph cluster was > having some issues. Slow requests and OSDs marking each other down. > > Since this is a 100% SSD cluster I was a bit confu

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Sebastien Han
Discard works with virtio-scsi controllers for disks in QEMU. Just use discard=unmap in the disk section (scsi disk). > On 12 Dec 2014, at 13:17, Max Power > wrote: > >> Wido den Hollander hat am 12. Dezember 2014 um 12:53 >> geschrieben: >> It depends. Kernel RBD does not support discard/tri
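A rough libvirt sketch of what that looks like (the pool/image names are made up, and the cephx <auth> and monitor <host> elements are omitted for brevity):
    <controller type='scsi' model='virtio-scsi'/>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <source protocol='rbd' name='volumes/vm-disk'/>
      <target dev='sda' bus='scsi'/>
    </disk>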

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Sebastien Han
Salut, The general recommended ratio (for me at least) is 3 journals per SSD. Using a 200GB Intel DC S3700 is great. If you’re going with a low-perf scenario I don’t think you should bother buying SSDs; just remove them from the picture and do 12 SATA 7.2K 4TB drives. For medium and medium ++ perf using

Re: [ceph-users] Ceph as backend for Swift

2015-01-08 Thread Sebastien Han
You can have a look of what I did here with Christian: * https://github.com/stackforge/swift-ceph-backend * https://github.com/enovance/swiftceph-ansible If you have further question just let us know. > On 08 Jan 2015, at 15:51, Robert LeBlanc wrote: > > Anyone have a reference for documentati

Re: [ceph-users] reset osd perf counters

2015-01-14 Thread Sebastien Han
It was added in 0.90 > On 13 Jan 2015, at 00:11, Gregory Farnum wrote: > > "perf reset" on the admin socket. I'm not sure what version it went in > to; you can check the release logs if it doesn't work on whatever you > have installed. :) > -Greg > > > On Mon, Jan 12, 2015 at 2:26 PM, Shain Mi

Re: [ceph-users] Spark/Mesos on top of Ceph/Btrfs

2015-01-14 Thread Sebastien Han
Hey, What do you want to use from Ceph? RBD? CephFS? It is not really clear: you mentioned ceph/btrfs, which makes me think either of using btrfs for the OSD store or of btrfs on top of an RBD device. Later you mentioned HDFS, does that mean you want to use CephFS? I don’t know much about Mesos, but what

Re: [ceph-users] how do I show active ceph configuration

2015-01-21 Thread Sebastien Han
You can use the admin socket: $ ceph daemon mon. config show or locally ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config show > On 21 Jan 2015, at 19:46, Robert Fantini wrote: > > Hello > > Is there a way to see running / acrive ceph.conf configuration items? > > kind regards > R
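The admin socket can also return a single value, which is handy when the full dump is too noisy (the osd id and option name are just examples):
    $ ceph daemon osd.2 config get osd_journal_size
    $ ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config get osd_journal_size   # same thing, explicit socket path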

Re: [ceph-users] Journals on all SSD cluster

2015-01-21 Thread Sebastien Han
It has been proven that the OSDs can’t take advantage of the SSD, so I’ll probably collocate both the journal and the osd data. Search the ML for [Single OSD performance on SSD] Can't go over 3, 2K IOPS You will see that there is no difference in terms of performance between the following: * 1 SSD f

Re: [ceph-users] mon leveldb loss

2015-01-30 Thread Sebastien Han
Hi Mike, Sorry to hear that, I hope this can help you to recover your RBD images: http://www.sebastien-han.fr/blog/2015/01/29/ceph-recover-a-rbd-image-from-a-dead-cluster/ Since you don’t have your monitors, you can still walk through the OSD data dir and look for the rbd identifiers. Something
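A very rough sketch of that walk, assuming the default filestore layout; the on-disk object file names vary with the rbd image format, so the patterns below are only a starting point:
    $ find /var/lib/ceph/osd/*/current -name '*rbd*header*' 2>/dev/null | head   # format 2 header objects
    $ find /var/lib/ceph/osd/*/current -name '*rb.0.*' 2>/dev/null | head        # format 1 data objects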

Re: [ceph-users] Journal, SSD and OS

2013-12-05 Thread Sebastien Han
Hi guys, I won’t do RAID 1 with SSDs since they both write the same data. Thus, they are more likely to “almost” die at the same time. What I will try to do instead is to use both disks in JBOD mode (or degraded RAID0). Then I will create a tiny root partition for the OS. Then I’ll still have

Re: [ceph-users] Journal, SSD and OS

2013-12-06 Thread Sebastien Han
lly implement some way to monitor SSD write life SMART data - at least it > gives a guide as to device condition compared to its rated life. That can be > done with smartmontools, but it would be nice to have it on the InkTank > dashboard for example. > > > On 2013-12-05 14:26, Seba

Re: [ceph-users] My experience with ceph now documentted

2013-12-17 Thread Sebastien Han
The ceph doc is currently being updated. See https://github.com/ceph/ceph/pull/906 Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web : www.enova

Re: [ceph-users] [Ceph-community] Ceph User Committee elections : call for participation

2013-12-31 Thread Sebastien Han
Hi, I’m not sure I have full visibility into the role, but I will be more than happy to take over. I believe that I can allocate some time for this. Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien

Re: [ceph-users] [Ceph-community] Ceph User Committee elections : call for participation

2014-01-01 Thread Sebastien Han
ic Dachary wrote: > > > On 01/01/2014 02:39, Sebastien Han wrote: >> Hi, >> >> I’m not sure to have the whole visibility of the role but I will be more >> than happy to take over. >> I believe that I can allocate some time for this. > > Your name is ad

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Sebastien Han
Hi Alexandre, Are you going with a 10Gb network? It’s not an issue for IOPS but more for the bandwidth. If so read the following: I personally won’t go with a ratio of 1:6 for the journal. I guess 1:5 (or even 1:4) is preferable. SAS 10K gives you around 140MB/sec for sequential writes. So if y

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Sebastien Han
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even reach 300MB/s. The Intel DC S3700 100G showed around 200MB/sec for us. Actually, I don’t know the price difference between the Crucial and the Intel, but the Intel looks more suitable to me. Especially after Mark’s comment. Séba

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Sebastien Han
@enovance On 15 Jan 2014, at 15:46, Stefan Priebe wrote: > Am 15.01.2014 15:44, schrieb Mark Nelson: >> On 01/15/2014 08:39 AM, Stefan Priebe wrote: >>> >>> Am 15.01.2014 15:34, schrieb Sebastien Han: >>>> Hum the Crucial m500 is pretty slow. The biggest o

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Sebastien Han
e la Victoire - 75009 Paris Web : www.enovance.com - Twitter : @enovance On 15 Jan 2014, at 15:49, Sebastien Han wrote: > Sorry I was only looking at the 4K aligned results. > > > Sébastien Han > Cloud Engineer > > "Always give 100%. Unless you're givi

Re: [ceph-users] Openstack Havana release installation with ceph

2014-01-24 Thread Sebastien Han
Usually you would like to start here: http://ceph.com/docs/master/rbd/rbd-openstack/ Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web : www.eno

Re: [ceph-users] OSD port usage

2014-01-24 Thread Sebastien Han
Greg, Do you have any estimate of how much network the heartbeat messages use? How busy is it? At some point (if the cluster gets big enough), could this degrade the network performance? Would it make sense to have a separate network for this? So in addition to public and storage we will have an

Re: [ceph-users] OSD port usage

2014-01-24 Thread Sebastien Han
I agree but somehow this generates more traffic too. We just need to find a good balance. But I don’t think this will change the scenario where the cluster network is down and OSDs die because of this… Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone:

Re: [ceph-users] OSD port usage

2014-01-24 Thread Sebastien Han
On 24 Jan 2014, at 18:22, Gregory Farnum wrote: > On Friday, January 24, 2014, Sebastien Han wrote: > Greg, > > Do you have any estimation about how heartbeat messages use the network? > How busy is it? > > Not very. It's one very small message per OSD peer per...se

Re: [ceph-users] During copy new rbd image is totally thick

2014-02-03 Thread Sebastien Han
I have the same behaviour here. I believe this is somehow expected since you’re calling “copy”, clone will do the cow. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Vic

Re: [ceph-users] get virtual size and used

2014-02-03 Thread Sebastien Han
Hi, $ rbd diff rbd/toto | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }’ Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire - 75009 Paris Web : www.enovance

Re: [ceph-users] Meetup in Frankfurt, before the Ceph day

2014-02-05 Thread Sebastien Han
Hi Alexandre, We have a meet up in Paris. Please see: http://www.meetup.com/Ceph-in-Paris/events/158942372/ Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victo

Re: [ceph-users] Block Devices and OpenStack

2014-02-17 Thread Sebastien Han
Hi, Can I see your ceph.conf? I suspect that [client.cinder] and [client.glance] sections are missing. Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 10, rue de la Victoire -
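For reference, those sections typically look like this on the OpenStack node (assuming the Ceph users are named cinder and glance, as in the rbd-openstack guide; the keyring paths are the usual defaults):
    [client.glance]
        keyring = /etc/ceph/ceph.client.glance.keyring
    [client.cinder]
        keyring = /etc/ceph/ceph.client.cinder.keyring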

Re: [ceph-users] Block Devices and OpenStack

2014-02-17 Thread Sebastien Han
cephx > auth_service_required = cephx > auth_client_required = cephx > filestore_xattr_use_omap = true > > If I provide admin.keyring file to openstack node (in /etc/ceph) it works > fine and issue is gone . > > Thanks > > Ashish > > > On Mon, Feb 17, 2014 a

Re: [ceph-users] Unable top start instance in openstack

2014-02-20 Thread Sebastien Han
Which distro and packages? libvirt_image_type is broken on cloud archive, please patch with https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien...

Re: [ceph-users] How to Configure Cinder to access multiple pools

2014-02-25 Thread Sebastien Han
Hi, Please have a look at the cinder multi-backend functionality: examples here: http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/ Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien.
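A skeleton of what that cinder.conf multi-backend setup looks like (the backend names, pools and cinder user below are placeholders):
    [DEFAULT]
    enabled_backends = rbd-sata, rbd-ssd

    [rbd-sata]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    volume_backend_name = RBD_SATA

    [rbd-ssd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes-ssd
    rbd_user = cinder
    volume_backend_name = RBD_SSD
Each backend is then exposed to users through a volume type whose volume_backend_name extra spec matches the value set above.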

Re: [ceph-users] storage

2014-02-25 Thread Sebastien Han
Hi, RBD blocks are stored as objects on a filesystem usually under: /var/lib/ceph/osd//current// RBD is just an abstraction layer. Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Addres
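To see the mapping between an image and its backing objects, something like this works (the pool and image names are made up):
    $ rbd info rbd/myimage | grep block_name_prefix    # e.g. rbd_data.<id> for a format 2 image
    $ rados -p rbd ls | grep <that prefix>             # lists the 4MB (default) objects backing the image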

Re: [ceph-users] Size of objects in Ceph

2014-02-25 Thread Sebastien Han
Hi, The value can be set during the image creation. Start with this: http://ceph.com/docs/master/man/8/rbd/#striping Followed by the example section. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.
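For a quick idea of what this looks like on the CLI (image names and sizes are made up; --order sets the object size as a power of two, and fancy striping needs format 2 images):
    $ rbd create mypool/big-objects --size 10240 --order 23        # 8MB objects instead of the default 4MB (order 22)
    $ rbd create mypool/striped --size 10240 --image-format 2 \
          --stripe-unit 65536 --stripe-count 16                    # 64k stripe unit spread across 16 objects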

Re: [ceph-users] qemu-rbd

2014-03-17 Thread Sebastien Han
There is a RBD engine for FIO, have a look at http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bi
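A minimal job file for the fio rbd engine looks roughly like this (the pool, user and image names are placeholders, and the target image has to exist beforehand, e.g. rbd create fio-test --size 2048):
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio-test
    invalidate=0
    rw=randwrite
    bs=4k
    iodepth=32
    direct=1

    [rbd_iodepth32]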

Re: [ceph-users] qemu non-shared storage migration of nova instances?

2014-03-17 Thread Sebastien Han
Hi, I use the following live migration flags: VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST It deletes the libvirt.xml and re-creates it on the other side. Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.”
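For reference, this is set in nova.conf, roughly as below (on Icehouse and newer the option sits in the [libvirt] group; on older releases it lived under [DEFAULT]):
    [libvirt]
    live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST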

Re: [ceph-users] OpenStack + Ceph Integration

2014-04-02 Thread Sebastien Han
The section should be [client.keyring] keyring = Then restart cinder-volume after. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008 Paris Web : www.eno

Re: [ceph-users] OpenStack + Ceph Integration

2014-04-03 Thread Sebastien Han
.200.10.117:6789 > > [mon.c] > host = ceph03 > mon_addr = 10.200.10.118:6789 > > [osd.0] > public_addr = 10.200.10.116 > cluster_addr = 10.200.9.116 > > [osd.1] > public_addr = 10.200.10.117 > cluster_addr = 10.200.9.117 > > [osd.2] > public_addr = 10.

Re: [ceph-users] Openstack Nova not removing RBD volumes after removing of instance

2014-04-03 Thread Sebastien Han
Are you running Havana with josh’s branch? (https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd) Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008 P

Re: [ceph-users] Openstack Nova not removing RBD volumes after removing of instance

2014-04-04 Thread Sebastien Han
- Twitter : @enovance On 04 Apr 2014, at 09:56, Mariusz Gronczewski wrote: > Nope, one from RDO packages http://openstack.redhat.com/Main_Page > > On Thu, 3 Apr 2014 23:22:15 +0200, Sebastien Han > wrote: > >> Are you running Havana with josh’s branch? >> (https://g

Re: [ceph-users] ceph osd creation error ---Please help me

2014-04-08 Thread Sebastien Han
Try ceph auth del osd.1 And then repeat step 6 Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008 Paris Web : www.enovance.com - Twitter : @enovance On 08

Re: [ceph-users] ceph-brag installation

2014-04-22 Thread Sebastien Han
Hey Loïc, The machine was set up a while ago :). The server side is ready; there is just no graphical interface, everything appears as plain text. It’s not necessary to upgrade. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 M

Re: [ceph-users] rdb - huge disk - slow ceph

2014-04-22 Thread Sebastien Han
To speed up the deletion, you can remove the rbd_header (if the image is empty) and then remove it. For example: $ rados -p rbd ls huge.rbd rbd_directory $ rados -p rbd rm huge.rbd $ time rbd rm huge 2013-12-10 09:35:44.168695 7f9c4a87d780 -1 librbd::ImageCtx: error finding header: (2) No s

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Sebastien Han
This is a COW clone, but the BP you pointed to doesn’t match the feature you described. This might explain Greg’s answer. The BP refers to the libvirt_image_type functionality for Nova. What do you get now when you try to create a volume from an image? Sébastien Han Cloud Engineer "Always

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Sebastien Han
ou're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008 Paris Web : www.enovance.com - Twitter : @enovance On 25 Apr 2014, at 16:37, Sebastien Han wrote: > g signature.asc Description: Message signed with OpenPGP

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-28 Thread Sebastien Han
- Twitter : @enovance On 25 Apr 2014, at 18:16, Sebastien Han wrote: > I just tried, I have the same problem, it looks like a regression… > It’s weird because the code didn’t change that much during the Icehouse cycle. > > I just reported the bug here: https://bugs.launchpad.n

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-28 Thread Sebastien Han
ovance.com - Twitter : @enovance On 28 Apr 2014, at 16:10, Maciej Gałkiewicz wrote: > On 28 April 2014 15:58, Sebastien Han wrote: > FYI It’s fixed here: https://review.openstack.org/#/c/90644/1 > > I already have this patch and it didn't help. Have it fixed the problem in > yo

[ceph-users] Fwd: Ceph perfomance issue!

2014-05-06 Thread Sebastien Han
What does ‘very high load’ mean for you? Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address : 11 bis, rue Roquépine - 75008 Paris Web : www.enovance.com - Twitter : @enovance Begin forward
