There’s a known issue with Havana’s rbd driver in nova and it has nothing to do
with ceph. Unfortunately, it is only fixed in icehouse. See
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1219658 for more details.
I can confirm that applying the patch manually works.
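For anyone hitting the same thing: the rbd-backed ephemeral storage in nova is configured roughly like this under Havana (the pool name, cephx user and secret UUID below are placeholders, adjust to your setup):

libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>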
On 05 Aug 2014, at 1
Hello,
I've been doing some tests on a newly installed ceph cluster:
# ceph osd pool create bench1 2048 2048
# ceph osd pool create bench2 2048 2048
# rbd -p bench1 create test
# rbd -p bench1 bench-write test --io-pattern rand
elapsed: 483 ops: 396579 ops/sec: 820.23 bytes/sec: 2220781.36
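For comparison, the rados-level equivalent on the same pool looks something like this (60-second runs; --no-cleanup leaves the benchmark objects in place so the read pass has something to read, so it's easiest against a throw-away pool):

# rados -p bench1 bench 60 write --no-cleanup
# rados -p bench1 bench 60 seq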
# rad
.2MB/s, minb=67674KB/s, maxb=106493KB/s,
mint=60404msec, maxt=61875msec
Individually (best:worst): HDD 71:73 MB/s, SSD 68:101 MB/s (with only one of the
6 SSDs reaching 101 MB/s).
This is on just one of the osd servers.
Thanks,
Dinu
On Oct 30, 2013, at 6:38 PM, Mark Nelson wrote:
> On 10/30/2013 09
2013, at 8:59 PM, Mark Nelson wrote:
> On 10/30/2013 01:51 PM, Dinu Vlad wrote:
>> Mark,
>>
>> The SSDs are
>> http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/ssd/enterprise-sata-ssd/?sku=ST240FN0021
>> and the HDDs are
>> ht
I don't know of any guide besides the official install docs from
grizzly/havana, but I'm running openstack grizzly on top of rbd storage using
glance & cinder and it makes (almost) no use of /var/lib/nova/instances. Live
migrations also work. The only files there should be "config.xml" and "console.log".
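If it helps, the rbd bits of a grizzly setup like this boil down to roughly the following (pool names and cephx users are placeholders, use whatever you created):

In cinder.conf:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>

In glance-api.conf:
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf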
Any other options or ideas?
Thanks,
Dinu
On Oct 31, 2013, at 6:35 PM, Dinu Vlad wrote:
>
> I tested the osd performance from a single node. For this purpose I deployed
> a new cluster (using ceph-deploy, as before) and on fresh/repartitioned
> drives. I created a single pool,
Is disk sda on server1 empty, or does it already contain a partition?
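If in doubt, ceph-deploy can show and wipe what's on the disk (double-check the device name before zapping, it is destructive):

# ceph-deploy disk list server1
# ceph-deploy disk zap server1:sda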
On Nov 4, 2013, at 5:25 PM, charles L wrote:
>
> Please, can somebody help? I'm getting this error.
>
> ceph@CephAdmin:~$ ceph-deploy osd create server1:sda:/dev/sdj1
> [ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-d
I'd appreciate any suggestion on where to look for the issue. Thanks!
On Oct 31, 2013, at 6:35 PM, Dinu Vlad wrote:
>
> I tested the osd performance from a single node. For this purpose I deployed
> a new cluster (using ceph-deploy, as before) and on fresh/repartitioned
> dr
chassis I have with no expanders and 30 drives with 6 SSDs can push about
> 2100MB/s with a bunch of 9207-8i controllers and XFS (no replication).
>
> Mark
>
> On 11/05/2013 05:15 AM, Dinu Vlad wrote:
>> Ok, so after tweaking the deadline scheduler and the filestore_wbthrottl
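For anyone following along, the scheduler part of that tweak is a one-liner per data disk (sdb below is a placeholder) and doesn't survive a reboot unless made persistent via udev rules or the kernel command line:

# echo deadline > /sys/block/sdb/queue/scheduler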
Hello,
I'm testing the 0.72 release and thought I'd give the zfs support a spin.
While I managed to setup a cluster on top of a number of zfs datasets, the
ceph-osd logs show it's using the "genericfilestorebackend":
2013-11-06 09:27:59.386392 7fdfee0ab7c0 0
genericfilestorebackend(/var/l
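For completeness, the quickest way I know to check which backend an OSD picked up is to grep its log after start-up (default log path, osd.0 as an example):

# grep -i backend /var/log/ceph/ceph-osd.0.log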
osd journals. In our case, the slow SSDs
> showed spikes of 100x higher latency than expected.
>
> What SSDs were you using that were so slow?
>
> Cheers,
> Mike
>
> On 11/6/2013 12:39 PM, Dinu Vlad wrote:
>> I'm using the latest 3.8.0 branch from raring. I
nd pass --with-zfs to
> ./configure.
>
> Once it is built in, ceph-osd will detect whether the underlying fs is zfs
> on its own.
>
> sage
>
>
>
> On Wed, 6 Nov 2013, Dinu Vlad wrote:
>
>> Hello,
>>
>> I'm testing the 0.72 release and
I had great results from the older 530 series too.
In this case however, the SSDs were only used for journals and I don't know if
ceph-osd sends TRIM to the drive in the process of journaling over a block
device. They were also under-subscribed, with just 3 x 10G partitions out of
240 GB raw capacity.
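A sketch of how such a layout can be carved out, assuming a hypothetical journal SSD at /dev/sdb and the 3 x 10G partitions mentioned above; the remaining space is simply left unpartitioned as crude over-provisioning:

# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart journal-0 1MiB 10GiB
# parted -s /dev/sdb mkpart journal-1 10GiB 20GiB
# parted -s /dev/sdb mkpart journal-2 20GiB 30GiB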
uded in the target distro already, or we need to
> bundle it in the Ceph.com repos.
>
> I am currently looking at the possibility of making the OSD back end
> dynamically linked at runtime, which would allow a separately packaged zfs
> back end; that may (or may not!) help.
>
>
Under grizzly we completely disabled image injection via
libvirt_inject_partition = -2 in nova.conf. I'm not sure rbd images can even be
mounted that way - but then again, I don't have experience with havana. We're
using config disks (which break live migrations) and/or the metadata service
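Concretely, the injection part is a single line in nova.conf under grizzly; the second line below shows how config drives get forced on and is only there to illustrate the alternative mentioned above:

libvirt_inject_partition = -2
force_config_drive = always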
I was under the same impression - using a small portion of the SSD via
partitioning (in my case - 30 gigs out of 240) would have the same effect as
activating the HPA explicitly.
Am I wrong?
On Nov 7, 2013, at 8:16 PM, ja...@peacon.co.uk wrote:
> On 2013-11-07 17:47, Gruher, Joseph R wrote:
I have 2 SSDs (same model, smaller capacity) for /, connected directly to the mainboard.
Their sync write performance is also poor: less than 600 IOPS with 4k blocks.
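For reference, numbers like that are typically measured with an O_DIRECT + O_SYNC 4k write test, something along the lines of the fio job below (not necessarily the exact invocation used here; /dev/sdX is a placeholder and the test overwrites it):

# fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based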
On Nov 7, 2013, at 9:44 PM, Kyle Bader wrote:
>> ST240FN0021 connected via a SAS2x36 to a LSI 9207-8i.
>
> The problem might be SATA transp
Out of curiosity - can you live-migrate instances with this setup?
On Nov 12, 2013, at 10:38 PM, Dmitry Borodaenko
wrote:
> And to answer my own question, I was missing a meaningful error
> message: what the ObjectNotFound exception I got from librados didn't
> tell me was that I didn't have
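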
Thank you all for the info. Any chance this may make it into mainline?
Thanks,
Dinu
On Nov 14, 2013, at 4:27 PM, Jens-Christian Fischer
wrote:
>> On Thu, Nov 14, 2013 at 9:12 PM, Jens-Christian Fischer
>> wrote:
>> We have migration working partially - it works through Horizon (to a random
Hello,
I've been working to upgrade the hardware on a semi-production ceph cluster,
following the instructions for OSD removal from
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual.
Basically, I've added the new hosts to the cluster and now I'm removing the
old OSDs.
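For each OSD being retired, the sequence from that page boils down to roughly the following (N is the osd id; wait for the cluster to finish rebalancing after the "out" step before stopping the daemon and pulling the disk):

# ceph osd out N
# stop ceph-osd id=N          (on Ubuntu/upstart)
# ceph osd crush remove osd.N
# ceph auth del osd.N
# ceph osd rm N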
I personally am not experienced enough
> to feel comfortable making that kind of a change.
>
>
> Adeel
>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Dinu Vlad
>> Sent: Monday, December 15,
Hi all,
I was going through the documentation
(http://ceph.com/docs/master/radosgw/federated-config/) with a (future)
replicated swift object store between 2 geographically separated datacenters
(and 2 different Ceph clusters) in mind, and a few things caught my attention.
Considering I'm pl
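For context, the actual sync between two zones is driven by radosgw-agent and a small config file along these lines; endpoints and keys below are placeholders, and the exact key names should be double-checked against the linked page since this is from memory:

src_access_key: {source-access-key}
src_secret_key: {source-secret-key}
destination: https://zone-name.example.com:443
dest_access_key: {destination-access-key}
dest_secret_key: {destination-secret-key}
log_file: /var/log/radosgw/radosgw-sync.log

# radosgw-agent -c /etc/ceph/cluster-data-sync.conf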
I'm trying to figure out a way to configure selective replication of objects
between 2 geographically-separated ceph clusters, via the radosgw-agent.
Ideally that should happen at the bucket level - but as far as I can figure
that seems impossible (running ceph emperor, 0.72.1).
Is there any w
AM, Dinu Vlad wrote:
>> I'm trying to figure out a way to configure selective replication of objects
>> between 2 geographically-separated ceph clusters, via the radosgw-agent.
>> Ideally that should happen at the bucket level - but as far as I can figure
>> tha
I'm running a ceph cluster with 3 mon and 4 osd nodes (32 disks total) and I've
been looking into the possibility of "migrating" the data to 2 new nodes. The
operation would be done by relocating the disks - I'm not getting any new
hard-drives. The cluster is used as a backend for an openstack cloud.
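Roughly, each disk move would look like this, assuming the OSDs were prepared by ceph-deploy (GPT partitions, so udev should re-activate them on the new host); noout just keeps the cluster from rebalancing while a disk is in transit:

# ceph osd set noout
# stop ceph-osd id=N              (on the old host, Ubuntu/upstart)
  ...physically move the disk; udev should detect and start the OSD on the new host...
# ceph osd unset noout
# ceph osd tree                   (verify the OSD shows up under the new host)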
Hello Sage,
Yes, original deployment was done via ceph-deploy - and I am very happy to read
this :)
Thank you!
Dinu
On May 14, 2014, at 4:17 PM, Sage Weil wrote:
> Hi Dinu,
>
> On Wed, 14 May 2014, Dinu Vlad wrote:
>>
>> I'm running a ceph cluster with 3 mon