Hi all,
A couple of years ago, I heard that it wasn’t safe to map a krbd block device on an
OSD host.
It was more or less like mounting an NFS export on the NFS server itself: we could
potentially end up with deadlocks.
At any rate, I tried again recently and didn’t encounter any problems.
What do you think?
FYI I encountered the same problem with krbd; removing the EC pool didn’t solve
my problem.
I’m running kernel 3.13.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008
John
>
>
> On Tue, Jun 10, 2014 at 9:51 AM, Jean-Charles LOPEZ
> wrote:
> Hi Sébastien,
>
> still the case. Depending on what you do, the OSD process will hang
> and commit suicide.
>
> Regards
> JC
>
> On Jun 10, 2014, at 09:46, Sebastien Han wrote:
Can you connect to your Ceph cluster?
You can pass options on the command line like this:
$ qemu-img create -f rbd
rbd:instances/vmdisk01:id=leseb:conf=/etc/ceph/ceph-leseb.conf 2G
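The general pattern is (pool, image, user and conf path are placeholders):
rbd:<pool>/<image>:id=<user>:conf=<path to ceph.conf>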
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99
Hi Dane,
If you deployed with ceph-deploy, you will see that the journal is just a
symlink.
Take a look at /var/lib/ceph/osd/<cluster>-<id>/journal
The link should point to the first partition of your hard drive, so there is no
filesystem for the journal, just a block device.
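A quick check, assuming OSD id 0 and the default cluster name (adjust the path to your OSD):
$ ls -l /var/lib/ceph/osd/ceph-0/journal
The symlink target should be a partition such as /dev/sdb1, not a regular file.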
Roughly you should try:
create N pa
Hey all,
It has been a while since the last performance-related thread on the ML :p
I’ve been running some experiments to see how much I can get from an SSD on a
Ceph cluster.
To achieve that I did something pretty simple:
* Debian wheezy 7.6
* kernel from debian 3.14-0.bpo.2-amd64
* 1 cluster, 3
your ceph setting) didn’t bring much; now I can reach 3.5K IOPS.
By any chance, would it be possible for you to test with a single OSD SSD?
On 28 Aug 2014, at 18:11, Sebastien Han wrote:
> Hey all,
>
> It has been a while since the last thread performance related on the ML :p
> I’ve
@Dan: thanks for sharing your config; with all your flags I don’t seem to get
more than 3.4K IOPS and they even seem to slow me down :( This is really weird.
Yes, I already tried to run two simultaneous processes and only got half of the 3.4K
for each of them.
@Kasper: thanks for these results, I believe
Mark, thanks a lot for experimenting with this for me.
I’m gonna try master soon and will tell you how much I can get.
It’s interesting to see that using 2 SSDs brings more performance, even though both
SSDs are under-utilized…
They should be able to sustain both loads at the same time (journal and osd
On 01 Sep 2014, at 11:13, Sebastien Han wrote:
> Mark, thanks a lot for experimenting this for me.
> I’m gonna try master soon and will tell you how much I can get.
>
> It’s interesting to see that using 2 SSDs brings up more performance, even
> both SSDs are under-utilized…
> Th
lesystem write syncs)
>
>
>
>
> - Original Message -
>
> From: "Sebastien Han"
> To: "Somnath Roy"
> Cc: ceph-users@lists.ceph.com
> Sent: Tuesday, 2 September 2014 02:19:16
> Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS
say that we experience the same limitations (ceph level).
@Cédric: yes I did, and what fio was showing was consistent with the iostat
output; the same goes for disk utilisation.
On 02 Sep 2014, at 12:44, Cédric Lemarchand wrote:
> Hi Sebastian,
>
>> Le 2 sept. 2014 à 10:41, Sebastien
Original Message -
>
> From: "Sebastien Han"
> To: "Cédric Lemarchand"
> Cc: "Alexandre DERUMIER" , ceph-users@lists.ceph.com
> Sent: Tuesday, 2 September 2014 13:59:13
> Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3
try to bench them with firefly and master.
>
> Is a debian wheezy gitbuilder repository available ? (I'm a bit lazy to
> compile all packages)
>
>
> ----- Original Message -
>
> From: "Sebastien Han"
> To: "Alexandre DERUMIER"
> Cc: ceph-user
Well done! Gonna test this :)
On 03 Sep 2014, at 11:24, Marco Garcês wrote:
> Amazing work, will test it as soon as I can!
> Thanks
>
>
> Marco Garcês
> #sysadmin
> Maputo - Mozambique
> [Phone] +258 84 4105579
> [Skype] marcogarces
>
>
> On Wed, Sep 3, 2014 at 3:20 AM, David Moreau Simard
Or Ansible: https://github.com/ceph/ceph-ansible
On 29 Aug 2014, at 20:24, Olivier DELHOMME
wrote:
> Hello,
>
> - Original Message -
>> From: "Chad Seys"
>> To: ceph-users@lists.ceph.com
>> Sent: Friday, 29 August 2014 18:53:19
>> Subject: [ceph-users] script for commissioning a node with mu
tainly evident on
> longer runs or lots of historical data on the drives. The max transaction
> time looks pretty good for your test. Something to consider though.
>
> Warren
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
he test parameters you gave
> below, but they're worth keeping in mind.
>
> --
> Warren Wang
> Comcast Cloud (OpenStack)
>
>
> From: Cedric Lemarchand
> Date: Wednesday, September 3, 2014 at 5:14 PM
> To: "ceph-users@lists.ceph.com"
> Subject: Re: [cep
t;> fio randwrite : bw=11658KB/s, iops=2914
>>>
>>> fio randread : bw=38642KB/s, iops=9660
>>>
>>>
>>>
>>> 0.85 + osd_enable_op_tracker=false
>>> ---
>>> fio randwrite : bw=11630KB/s, iops=2907
>&g
Did you follow this ceph.com/docs/master/rbd/rbd-openstack/ to configure your
env?
On 12 Sep 2014, at 14:38, m.channappa.nega...@accenture.com wrote:
> Hello Team,
>
> I have configured ceph as a multibackend for openstack.
>
> I have created 2 pools .
> 1. Volumes (replication size =3
users-boun...@lists.ceph.com] On Behalf Of
> Sebastien Han
> Sent: Tuesday, September 16, 2014 9:33 PM
> To: Alexandre DERUMIER
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K
> IOPS
>
> Hi,
>
> Th
A while ago, I managed to have this working but this was really tricky.
See my comment here:
https://github.com/ceph/ceph-ansible/issues/9#issuecomment-37127128
One use case I had was a system with 2 SSDs for the OS and a couple of OSDs.
Both SSDs were in RAID 1 and the system was configured with lv
Several patches aim to solve that by using RBD snapshots instead of QEMU
snapshots.
Unfortunately I doubt we will have something ready for OpenStack Juno.
Hopefully Liberty will be the release that fixes that.
Having RAW images is not that bad since booting from that snapshot will do a
clone.
So
You can have a look at: https://github.com/ceph/ceph-docker
> On 23 Mar 2015, at 17:16, Pavel V. Kaygorodov wrote:
>
> Hi!
>
> I'm using ceph cluster, packed to a number of docker containers.
> There are two things, which you need to know:
>
> 1. Ceph OSDs are using FS attributes, which may no
Hi list,
While reading this
http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks,
I came across the following sentence:
"You can also establish a separate cluster network to handle OSD heartbeat,
object replication and recovery traffic”
I didn’t know it was possib
u specify 'cluster network = ' in your ceph.conf.
> It is useful to remember that replication, recovery and backfill
> traffic are pretty much the same thing, just at different points in
> time.
>
> On Sun, Apr 26, 2015 at 4:39 PM, Sebastien Han
> wrote:
>> Hi list
You can try to push the full ratio a bit further and then delete some objects.
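Something like this (the 0.97 value is arbitrary, and remember to lower it back once the cleanup is done):
$ ceph pg set_full_ratio 0.97
Then delete some objects/images and watch ceph health detail until the full OSDs drop back below the ratio.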
> On 28 Apr 2015, at 15:51, Ray Sun wrote:
>
> More detail about ceph health detail
> [root@controller ~]# ceph health detail
> HEALTH_ERR 20 pgs backfill_toofull; 20 pgs degraded; 20 pgs stuck unclean;
> recovery 74
With mon_osd_full_ratio you should restart the monitors, and this shouldn’t be a
problem.
For the unclean PGs, it looks like something is preventing them from becoming healthy;
look at the state of the OSDs responsible for these 2 PGs.
> On 29 Apr 2015, at 05:06, Ray Sun wrote:
>
> mon osd full ratio
Chee
Under the OSD directory, you can look at where the symlink points. It is
generally called ‘journal’ and it should point to a device.
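For example (the OSD id here is just a placeholder):
$ readlink -f /var/lib/ceph/osd/ceph-2/journal
If this prints something like /dev/sdb1, that is the partition holding this OSD’s journal.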
> On 06 May 2015, at 06:54, Patrik Plank wrote:
>
> Hi,
>
> i cant remember on which drive I install which OSD journal :-||
> Is there any command to show this?
>
>
Should we put a timeout on the unmap command in the RBD RA in the meantime?
> On 08 May 2015, at 15:13, Vandeir Eduardo wrote:
>
> Wouldn't be better a configuration named (map|unmap)_timeout? Cause we are
> talking about a map/unmap of a RBD device, not a mount/unmount of a file
> system.
>
Hi Bryan,
It shouldn’t be a problem for ceph-ansible to expand a cluster even if it
wasn’t deployed with it.
I believe this requires a bit of tweaking in ceph-ansible, but it’s not
much.
Can you elaborate on what went wrong and perhaps how you configured
ceph-ansible?
As far as I understoo
e.
I should probably rework this part a little bit with an easier declaration
though...
> Thanks,
> Bryan
>
> On 6/22/15, 7:09 AM, "Sebastien Han" wrote:
>
>> Hi Bryan,
>>
>> It shouldn¹t be a problem for ceph-ansible to expand a cluster even if it
>
Which request generated this trace?
Is it in the nova-compute log?
> On 10 Jul 2015, at 07:13, Mario Codeniera wrote:
>
> Hi,
>
> It is my first time here. I am just having an issue regarding with my
> configuration with the OpenStack which works perfectly for the cinder and the
> glance based on K
Hi all,
While running some benchmarks with the internal rados benchmarker I noticed something really strange. First of all, this is the line I used to run it:
$ sudo rados -p 07:59:54_performance bench 300 write -b 4194304 -t 1 --no-cleanup
So I want to test an IO with a concurrency of 1. I had a look
vance.com – Skype : han.sbastien
Address : 10, rue de la Victoire – 75009 Paris
Web : www.enovance.com – Twitter : @enovance
On Jul 9, 2013, at 1:11 PM, Mark Nelson <mark.nel...@inktank.com> wrote:
On 07/09/2013 03:20 AM, Sebastien Han wrote:
Hi all,
While running some benchmarks with the internal rado
44 70
Email : sebastien@enovance.com – Skype : han.sbastien
Address : 10, rue de la Victoire – 75009 Paris
Web : www.enovance.com – Twitter : @enovance
On Jul 9, 2013, at 2:19 PM, Mark Nelson <mark.nel...@inktank.com> wrote:
On 07/09/2013 06:47 AM, Sebastien Han wrote:
Hi Mark,
Yes write back
Can you send your ceph.conf too?
Is /etc/ceph/ceph.conf present? Is the key of user volume present too?
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70
Email : sebastien@enovance.com – Skype : han.sbastien
Ad
Hi Greg,
Just tried the list watchers, on a rbd with the QEMU driver and I got:
root@ceph:~# rados -p volumes listwatchers rbd_header.789c2ae8944a
watcher=client.30882 cookie=1
I also tried with the kernel module but didn't see anything…
No IP addresses anywhere… :/, any idea?
Nice tip btw :)
Sébastie
enovance.com – Skype : han.sbastien
Address : 10, rue de la Victoire – 75009 Paris
Web : www.enovance.com – Twitter : @enovance
On Jul 24, 2013, at 12:08 AM, Gregory Farnum <g...@inktank.com> wrote:
On Tue, Jul 23, 2013 at 2:55 PM, Sebastien Han <sebastien@enovance.com> wrote:
Hi Greg,
Just tried the list
Nothing has been recorded as far as I know.
However I’ve seen some guys from Scality recording sessions with a cam.
Scality? Are you there? :)
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Add
> I used a blocksize of 350k as my graphes shows me that this is the
> average workload we have on the journal.
Pretty interesting metric Stefan.
Has anyone seen the same behaviour?
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Nice job Haomai!
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On 25 Nov 2013, at 02:50, Haomai Wa
Hi,
1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
This has been in production for more than a year now and was heavily tested before.
High performance was not expected since the frontend servers mainly do reads (90%).
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%
>
> On Mon, Nov 25, 2013 at 4:39 AM, Sebastien Han
> wrote:
> Hi,
>
> 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
>
> This has been in production for more than a year now and heavily tested
> before.
> Performance was
m
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On 25 Nov 2013, at 10:00, Sebastien Han wrote:
> Nice job Haomai!
>
>
> Sébastien Han
> Cloud Engineer
>
> "Always give 100%. Unless you're giving blood.”
Hi,
Well after restarting the services run:
$ cinder create 1
Then you can check both status in Cinder and Ceph:
For Cinder run:
$ cinder list
For Ceph run:
$ rbd -p <pool> ls
If the image is there, you’re good.
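For example, assuming the Cinder pool is called volumes:
$ rbd -p volumes ls
Cinder-created images show up there as volume-<uuid>.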
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving
Hi guys!
Some experiment here:
http://www.sebastien-han.fr/blog/2013/09/19/how-I-barely-got-my-first-ceph-mon-running-in-docker/
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, ru
Hi,
Unfortunately this is expected.
If you take a snapshot you should not expect a clone but an RBD snapshot.
Please see this BP:
https://blueprints.launchpad.net/nova/+spec/implement-rbd-snapshots-instead-of-qemu-snapshots
A major part of the code is ready, however we missed nova-specs feature
On 01 Oct 2014, at 15:26, Jonathan Proulx wrote:
> On Wed, Oct 1, 2014 at 2:57 AM, Sebastien Han
> wrote:
>> Hi,
>>
>> Unfortunately this is expected.
>> If you take a snapshot you should not expect a clone but a RBD snapshot.
>
> Unfortunate that it doesn&
Hum I just tried on a devstack and on firefly stable, it works for me.
Looking at your config it seems that glance_api_version=2 is put in the
wrong section.
Please move it to [DEFAULT] and let me know if it works.
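As a sketch, the relevant bit of cinder.conf should look like this (the rest of the RBD backend options are assumed to already be in place):
[DEFAULT]
glance_api_version = 2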
On 08 Oct 2014, at 14:28, Nathan Stratton wrote:
> On Tue, Oct 7, 2014 at 5
Hey all,
I just saw this thread, I’ve been working on this and was about to share it:
https://etherpad.openstack.org/p/kilo-ceph
Since the ceph etherpad is down I think we should switch to this one as an
alternative.
Loic, feel free to work on this one and add more content :).
On 13 Oct 2014,
Mark, please read this:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html
On 16 Oct 2014, at 19:19, Mark Wu wrote:
>
> Thanks for the detailed information. but I am already using fio with rbd
> engine. Almost 4 volumes can reach the peak.
>
> On 17 Oct 2014, at 1:03 AM, wud.
There were also some investigations around F2FS
(https://www.kernel.org/doc/Documentation/filesystems/f2fs.txt); the last time
I tried to install an OSD dir under f2fs it failed.
I tried to run the OSD on f2fs, however ceph-osd mkfs got stuck on an xattr test:
fremovexattr(10, "user.test@5848273
AFAIK there is no tool to do this.
You simply rm the object or dd new content into the object (fill it with zeros).
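A rough sketch of the dd approach (the OSD id, PG directory and object file name below are placeholders; only do this on a test cluster):
$ dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/current/2.3f_head/<object file> bs=4k count=1 conv=notrunc
conv=notrunc overwrites the first 4k in place without truncating the object.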
> On 04 Dec 2014, at 13:41, Mallikarjun Biradar
> wrote:
>
> Hi all,
>
> I would like to know which tool or cli that all users are using to simulate
> metadata/data corruption.
> This is
Eneko,
I do have a plan to push a performance initiative section to ceph.com/docs
sooner or later, so people can contribute their own results through GitHub PRs.
> On 04 Dec 2014, at 16:09, Eneko Lacunza wrote:
>
> Thanks, will look back in the list archive.
>
> On 04/12/14 15:47, Nick Fisk wrot
Good to know. Thanks for sharing!
> On 09 Dec 2014, at 10:21, Wido den Hollander wrote:
>
> Hi,
>
> Last sunday I got a call early in the morning that a Ceph cluster was
> having some issues. Slow requests and OSDs marking each other down.
>
> Since this is a 100% SSD cluster I was a bit confu
Discard works with virtio-scsi controllers for disks in QEMU.
Just use discard=unmap in the disk section (scsi disk).
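A minimal libvirt sketch (the pool/image name is a placeholder and the cephx auth bits are left out):
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  <source protocol='rbd' name='volumes/vmdisk01'/>
  <target dev='sda' bus='scsi'/>
</disk>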
> On 12 Dec 2014, at 13:17, Max Power
> wrote:
>
>> Wido den Hollander hat am 12. Dezember 2014 um 12:53
>> geschrieben:
>> It depends. Kernel RBD does not support discard/tri
Hi,
The general recommended ratio (for me at least) is 3 journals per SSD. Using a
200GB Intel DC S3700 is great.
If you’re going with a low-perf scenario I don’t think you should bother buying
SSDs, just remove them from the picture and do 12 SATA 7.2K 4TB drives.
For medium and medium ++ perf using
You can have a look of what I did here with Christian:
* https://github.com/stackforge/swift-ceph-backend
* https://github.com/enovance/swiftceph-ansible
If you have further question just let us know.
> On 08 Jan 2015, at 15:51, Robert LeBlanc wrote:
>
> Anyone have a reference for documentati
It was added in 0.90
> On 13 Jan 2015, at 00:11, Gregory Farnum wrote:
>
> "perf reset" on the admin socket. I'm not sure what version it went in
> to; you can check the release logs if it doesn't work on whatever you
> have installed. :)
> -Greg
>
>
> On Mon, Jan 12, 2015 at 2:26 PM, Shain Mi
Hey
What do you want to use from Ceph? RBD? CephFS?
It is not really clear; you mentioned ceph/btrfs, which makes me think of either
using btrfs for the OSD store or btrfs on top of an RBD device.
Later you mentioned HDFS, does that mean you want to use CephFS?
I don’t know much about Mesos, but what
You can use the admin socket:
$ ceph daemon mon.<id> config show
or locally:
$ ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config show
> On 21 Jan 2015, at 19:46, Robert Fantini wrote:
>
> Hello
>
> Is there a way to see running / acrive ceph.conf configuration items?
>
> kind regards
> R
It has been proven that the OSDs can’t take advantage of the SSD, so I’ll
probably collocate both journal and osd data.
Search in the ML for [Single OSD performance on SSD] Can't go over 3, 2K IOPS
You will see that there is no difference in terms of performance between the
following:
* 1 SSD f
Hi Mike,
Sorry to hear that, I hope this can help you to recover your RBD images:
http://www.sebastien-han.fr/blog/2015/01/29/ceph-recover-a-rbd-image-from-a-dead-cluster/
Since you don’t have your monitors, you can still walk through the OSD data dir
and look for the rbd identifiers.
Something
Hi guys,
I won’t do a RAID 1 with SSDs since they both write the same data.
Thus, they are more likely to “almost” die at the same time.
What I will try to do instead is to use both disks in JBOD mode (or degraded
RAID 0).
Then I will create a tiny root partition for the OS.
Then I’ll still have
lly implement some way to monitor SSD write life SMART data - at least it
> gives a guide as to device condition compared to its rated life. That can be
> done with smartmontools, but it would be nice to have it on the InkTank
> dashboard for example.
>
>
> On 2013-12-05 14:26, Seba
The ceph doc is currently being updated. See
https://github.com/ceph/ceph/pull/906
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enova
Hi,
I’m not sure I have full visibility into the role, but I will be more than
happy to take over.
I believe that I can allocate some time for this.
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien
ic Dachary wrote:
>
>
> On 01/01/2014 02:39, Sebastien Han wrote:
>> Hi,
>>
>> I’m not sure to have the whole visibility of the role but I will be more
>> than happy to take over.
>> I believe that I can allocate some time for this.
>
> Your name is ad
Hi Alexandre,
Are you going with a 10Gb network? It’s not an issue for IOPS but more for the
bandwidth. If so read the following:
I personally won’t go with a ratio of 1:6 for the journal. I guess 1:5 (or even
1:4) is preferable.
SAS 10K gives you around 140MB/sec for sequential writes.
So if y
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
Actually, I don’t know the price difference between the Crucial and the Intel,
but the Intel looks more suitable to me. Especially after Mark’s comment.
Séba
@enovance
On 15 Jan 2014, at 15:46, Stefan Priebe wrote:
> Am 15.01.2014 15:44, schrieb Mark Nelson:
>> On 01/15/2014 08:39 AM, Stefan Priebe wrote:
>>>
>>> Am 15.01.2014 15:34, schrieb Sebastien Han:
>>>> Hum the Crucial m500 is pretty slow. The biggest o
e la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On 15 Jan 2014, at 15:49, Sebastien Han wrote:
> Sorry I was only looking at the 4K aligned results.
>
>
> Sébastien Han
> Cloud Engineer
>
> "Always give 100%. Unless you're givi
Usually you would like to start here:
http://ceph.com/docs/master/rbd/rbd-openstack/
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.eno
Greg,
Do you have any estimate of how much traffic the heartbeat messages generate?
How busy is it?
At some point (if the cluster gets big enough), could this degrade network
performance? Would it make sense to have a separate network for this?
So in addition to public and storage we will have an
I agree but somehow this generates more traffic too. We just need to find a
good balance.
But I don’t think this will change the scenario where the cluster network is
down and OSDs die because of this…
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone:
On 24 Jan 2014, at 18:22, Gregory Farnum wrote:
> On Friday, January 24, 2014, Sebastien Han wrote:
> Greg,
>
> Do you have any estimation about how heartbeat messages use the network?
> How busy is it?
>
> Not very. It's one very small message per OSD peer per...se
I have the same behaviour here.
I believe this is somehow expected since you’re calling “copy”; “clone” will do
the COW.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Vic
Hi,
$ rbd diff rbd/toto | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance
Hi Alexandre,
We have a meet up in Paris.
Please see: http://www.meetup.com/Ceph-in-Paris/events/158942372/
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victo
Hi,
Can I see your ceph.conf?
I suspect that [client.cinder] and [client.glance] sections are missing.
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire -
cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
>
> If I provide admin.keyring file to openstack node (in /etc/ceph) it works
> fine and issue is gone .
>
> Thanks
>
> Ashish
>
>
> On Mon, Feb 17, 2014 a
Which distro and packages?
libvirt_image_type is broken on cloud archive, please patch with
https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien...
Hi,
Please have a look at the cinder multi-backend functionality: examples here:
http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien.
Hi,
RBD blocks are stored as objects on a filesystem, usually under:
/var/lib/ceph/osd/<cluster>-<id>/current/<pg>_head/
RBD is just an abstraction layer.
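You can see those objects directly (the pool name is just an example):
$ rados -p rbd ls | head
Each object holds a chunk of the image (4MB by default).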
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Addres
Hi,
The value can be set during the image creation.
Start with this: http://ceph.com/docs/master/man/8/rbd/#striping
Followed by the example section.
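For example, something along these lines (image name, size and striping values are arbitrary):
$ rbd create image01 --size 10240 --order 22 --stripe-unit 65536 --stripe-count 16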
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.
There is an RBD engine for FIO; have a look at
http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
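A minimal job file sketch for the rbd engine (pool, image and client names are placeholders; the image must already exist):
[rbd-randwrite]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=randwrite
bs=4k
iodepth=32
runtime=60
time_based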
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bi
Hi,
I use the following live migration flags:
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
It deletes the libvirt.xml and re-creates it on the other side.
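As a sketch, this is the kind of nova.conf entry I mean (the option lives in the [libvirt] section on recent releases, in [DEFAULT] on older ones):
[libvirt]
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST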
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
The section should be
[client.keyring]
keyring = <path to the keyring>
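Something like this, assuming the Cinder user is called cinder:
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring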
Then restart cinder-volume after.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.eno
.200.10.117:6789
>
> [mon.c]
> host = ceph03
> mon_addr = 10.200.10.118:6789
>
> [osd.0]
> public_addr = 10.200.10.116
> cluster_addr = 10.200.9.116
>
> [osd.1]
> public_addr = 10.200.10.117
> cluster_addr = 10.200.9.117
>
> [osd.2]
> public_addr = 10.
Are you running Havana with josh’s branch?
(https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd)
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 P
- Twitter : @enovance
On 04 Apr 2014, at 09:56, Mariusz Gronczewski
wrote:
> Nope, one from RDO packages http://openstack.redhat.com/Main_Page
>
> On Thu, 3 Apr 2014 23:22:15 +0200, Sebastien Han
> wrote:
>
>> Are you running Havana with josh’s branch?
>> (https://g
Try ceph auth del osd.1
And then repeat step 6
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance
On 08
Hey Loïc,
The machine was set up a while ago :).
The server side is ready; there is just no graphical interface, everything
appears as plain text.
It’s not necessary to upgrade.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
M
To speed up the deletion, you can remove the rbd header object (if the image is empty)
and then remove the image.
For example:
$ rados -p rbd ls
huge.rbd
rbd_directory
$ rados -p rbd rm huge.rbd
$ time rbd rm huge
2013-12-10 09:35:44.168695 7f9c4a87d780 -1 librbd::ImageCtx: error finding
header: (2) No such file or directory
This is a COW clone, but the BP you pointed to doesn’t match the feature you
described. This might explain Greg’s answer.
The BP refers to the libvirt_image_type functionality for Nova.
What do you get now when you try to create a volume from an image?
Sébastien Han
Cloud Engineer
"Always
ou're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance
On 25 Apr 2014, at 16:37, Sebastien Han wrote:
> g
- Twitter : @enovance
On 25 Apr 2014, at 18:16, Sebastien Han wrote:
> I just tried, I have the same problem, it looks like a regression…
> It’s weird because the code didn’t change that much during the Icehouse cycle.
>
> I just reported the bug here: https://bugs.launchpad.n
ovance.com - Twitter : @enovance
On 28 Apr 2014, at 16:10, Maciej Gałkiewicz wrote:
> On 28 April 2014 15:58, Sebastien Han wrote:
> FYI It’s fixed here: https://review.openstack.org/#/c/90644/1
>
> I already have this patch and it didn't help. Have it fixed the problem in
> yo
What does ‘very high load’ mean for you?
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance
Begin forward