On 10/07/2013 18:01, Mark Nelson wrote:
Hello again!
Part 2 is now out! We've got a whole slew of results for 4K FIO tests
on RBD:
http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-2-4k-rbd-performance/
Hey mark,
I'm really fond of this way of plotting performance results.
Hi folks,
I am facing the problem that I have to change the IP addresses of the
monitors on the public network.
What needs to be done besides changing ceph.conf?
Best regards
Joachim Tork
On 12 March 2013 21:38, Josh Durgin wrote:
> Yes, it works with true live migration just fine (even with caching). You
> can use "virsh migrate" or even do it through the virt-manager gui.
> Nova is just doing a check that doesn't make sense for volume-backed
> instances with live migration there
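As a concrete example, a live migration of an RBD-backed guest can be driven directly with something like the following (the domain name and destination host here are placeholders, not taken from this thread):

virsh migrate --live myvm qemu+ssh://dest-host/system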
Hi guys,
We want to use our Ceph cluster to create a shared disk file system to host
VM's. Our preference would be to use CephFS but since it is not considered
stable I'm looking into alternatives.
The most appealing alternative seems to be to create an RBD volume, format
it with a cluster file system
On 07/11/2013 09:03 AM, joachim.t...@gad.de wrote:
Hi folks,
I am facing the problem that I have to change the IP addresses of the
monitors on the public network.
What needs to be done besides changing ceph.conf?
ceph.conf is only used by other daemons (those that aren't the monitors) and
clients
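For reference, the monmap-editing route looks roughly like this (a sketch only; the monitor id "a" and the new address are placeholders, the procedure has to be repeated for each monitor, and the service commands depend on your init system):

# dump the current monitor map from the running cluster
ceph mon getmap -o /tmp/monmap
# remove the old entry for mon.a and re-add it with the new address
monmaptool --rm a /tmp/monmap
monmaptool --add a 192.168.1.10:6789 /tmp/monmap
# stop the monitor, inject the edited map, update ceph.conf, then restart
service ceph stop mon.a
ceph-mon -i a --inject-monmap /tmp/monmap
service ceph start mon.a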
On 07/11/2013 02:36 AM, Erwan Velu wrote:
On 10/07/2013 18:01, Mark Nelson wrote:
Hello again!
Part 2 is now out! We've got a whole slew of results for 4K FIO tests
on RBD:
http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-2-4k-rbd-performance/
Hey mark,
I'm really fond of this way of plotting performance results.
Hi,
I'd like the pool_id to be included in the hash used for the PG, to
try to improve the data distribution (I have 10 pools).
I see that there is a flag named FLAG_HASHPSPOOL. Is it possible to
enable it on an existing pool?
Cheers,
Sylvain
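For what it's worth, you can at least see which pools already carry the flag in the OSD map:

# pool lines in the OSD map list their flags, e.g. "flags hashpspool"
ceph osd dump | grep '^pool'

Whether the flag can be flipped on an existing pool (e.g. with something like "ceph osd pool set <pool> hashpspool true") depends on your release, so treat that as an assumption and expect data movement if it is supported.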
On Wed, Jul 10, 2013 at 6:23 PM, ker can wrote:
>
> Now separating out the journal from data disk ...
>
> HDFS write numbers (3 disks/data node)
> Average execution time: 466
> Best execution time : 426
> Worst execution time : 508
>
> ceph write numbers (3 data disks/data node + 3 journal d
Yep, that's right: 3 OSD daemons per node.
On Thu, Jul 11, 2013 at 9:16 AM, Noah Watkins wrote:
> On Wed, Jul 10, 2013 at 6:23 PM, ker can wrote:
> >
> > Now separating out the journal from data disk ...
> >
> > HDFS write numbers (3 disks/data node)
> > Average execution time: 466
> > Best ex
[sorry I didn't manage to reply to the original message; I only just joined
this list.
Sorry if this breaks your threading!]
On 10 Jul 2013 at 16:01 John Shen wrote:
> I was following the tech preview of libvirt/ceph integration in xenserver,
> but ran
> into an issue with ceph auth in setting
And We've now got part 3 out showing 128K FIO results:
http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-3-128k-rbd-performance/
Mark
On 07/10/2013 11:01 AM, Mark Nelson wrote:
Hello again!
Part 2 is now out! We've got a whole slew of results for 4K FIO tests
on RBD:
http://ceph
>> It will never log anything to /var/log/ceph/radosgw.log. I am looking
>> for the debug output which I have seen people post, does anyone have a
>> pointer to what could be going on?
>
> You don't need the 'rgw_enable_ops_log' to have debug logs. The
> log_file param should be enough. Do you ha
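For reference, a minimal logging stanza looks something like this (a sketch; the section name and paths are placeholders for whatever your gateway instance is actually called):

[client.radosgw.gateway]
    # plain log file plus verbose rgw/messenger debugging
    log file = /var/log/ceph/radosgw.log
    debug rgw = 20
    debug ms = 1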
Hello,
We are planning to use Intel 10 GbE Ethernet between the OSD nodes. The host
operating system will be Ubuntu 12.04 x86_64. Are there any recommendations
available for tuning options (e.g. sysctl and ceph)?
Thank you,
Mihaly
On 07/11/2013 10:04 AM, Mihály Árva-Tóth wrote:
Hello,
We are planning to use Intel 10 GbE Ethernet between the OSD nodes. The host
operating system will be Ubuntu 12.04 x86_64. Are there any
recommendations available for tuning options (e.g. sysctl and ceph)?
Thank you,
Mihaly
Hi,
Generally if per
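A common starting point on the sysctl side looks roughly like the following (the values are assumptions to benchmark against your own workload, e.g. dropped into /etc/sysctl.d/10gbe.conf):

# larger socket buffers for 10GbE links
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# absorb packet bursts before the kernel starts dropping them
net.core.netdev_max_backlog = 250000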
2013/7/11 Mark Nelson
> On 07/11/2013 10:04 AM, Mihály Árva-Tóth wrote:
>
>> Hello,
>>
>> We are planning to use Intel 10 GbE Ethernet between the OSD nodes. The host
>> operating system will be Ubuntu 12.04 x86_64. Are there any
>> recommendations available for tuning options (e.g. sysctl and ceph)?
>>
On 07/11/2013 10:27 AM, Mihály Árva-Tóth wrote:
2013/7/11 Mark Nelson <mark.nel...@inktank.com>:
On 07/11/2013 10:04 AM, Mihály Árva-Tóth wrote:
Hello,
We are planning to use Intel 10 GE ethernet between nodes of
OSDs. Host
operation system will be Ubu
Hi Dave, Thank you so much for getting back to me.
the command returns the same errors:
[root@xen02 ~]# virsh pool-create ceph.xml
error: Failed to create pool from ceph.xml
error: Invalid secret: virSecretFree
[root@xen02 ~]#
the secret was precreated for the user admin that I use elsewhere wi
On 11/07/2013 16:56, Mark Nelson wrote:
And We've now got part 3 out showing 128K FIO results:
http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-3-128k-rbd-performance/
Hey Mark,
Since you mention 10GbE at the end of your document, I have the
following questions for you.
On 07/11/2013 11:16 AM, Erwan Velu wrote:
On 11/07/2013 16:56, Mark Nelson wrote:
And We've now got part 3 out showing 128K FIO results:
http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-3-128k-rbd-performance/
Hey Mark,
Hi!
Since you mention 10GbE at the end of your document, I have the following questions for you.
On 11/07/2013 12:08, Tom Verdaat wrote:
Hi guys,
We want to use our Ceph cluster to create a shared disk file system to
host VM's. Our preference would be to use CephFS but since it is not
considered stable I'm looking into alternatives.
The most appealing alternative seems to be to creat
Hi.
So, the problem here is a couple of things.
First: libvirt doesn't handle RBD storage pools without auth. That's my
bad, but I never resolved that bug: http://tracker.ceph.com/issues/3493
For now, make sure cephx is enabled.
Also, the commands you are using don't seem to be right.
It sh
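For comparison, the usual flow looks roughly like this (the monitor hostname, UUID and pool name are placeholders, not taken from your ceph.xml):

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.admin secret</name>
  </usage>
</secret>
EOF
virsh secret-define secret.xml        # note the UUID libvirt prints
virsh secret-set-value REPLACE-WITH-UUID $(ceph auth get-key client.admin)

cat > ceph.xml <<EOF
<pool type='rbd'>
  <name>ceph</name>
  <source>
    <name>rbd</name>
    <host name='mon-host' port='6789'/>
    <auth username='admin' type='ceph'>
      <secret uuid='REPLACE-WITH-UUID'/>
    </auth>
  </source>
</pool>
EOF
virsh pool-define ceph.xml
virsh pool-start ceph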
Wido, Thanks! I tried again with your command syntax but the result is the
same.
[root@xen02 ~]# virsh secret-set-value $(cat uuid) $(cat client.admin.key)
Secret value set
[root@xen02 ~]# xe sr-create type=libvirt name-label=ceph
device-config:xml-filename=ceph.xml
Error code: libvirt
Error para
On 11 Jul 2013, at 19:25, Gilles Mocellin wrote:
> Hello,
>
> Yes, you missed that qemu can use RADOS volumes directly.
> Look here :
> http://ceph.com/docs/master/rbd/qemu-rbd/
>
> Create :
> qemu-img create -f rbd rbd:data/squeeze 10G
>
> Use :
>
> qemu -m 1024 -drive format=raw,file=rbd:data/squeeze
Is there any command (in shell or python API), that can tell me if
ceph is still creating pgs other than actually attempting a
modification of the pg_num or pgp_num of a pool? I would like to
minimize the number of errors I get and not keep trying the commands
until success, if possible.
Right no
Correct me if I'm wrong, I'm new to this, but I think the distinction between
the two methods is that using 'qemu-img create -f rbd' creates an RBD for
either a VM to boot from, or for mounting within a VM. Whereas the OP wants a
single RBD, formatted with a cluster file system, to use as a place for multiple VM image files to reside.
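A rough sketch of that second setup (the image name, size and the choice of OCFS2 are assumptions for illustration; a cluster filesystem also needs its own cluster stack configured on every host):

# one big image, created once and then mapped on every hypervisor
rbd create vmstore --size 1048576      # size in MB, so about 1 TB
rbd map rbd/vmstore                    # appears as /dev/rbd/rbd/vmstore
# format once with a cluster-aware filesystem, then mount it everywhere
mkfs.ocfs2 /dev/rbd/rbd/vmstore
mount /dev/rbd/rbd/vmstore /var/lib/vm-images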
I'm not certain what the correct behavior should be in this case, so
maybe it is not a bug, but here is what is happening:
When an OSD becomes full, a process fails and we unmount the RBD and
attempt to remove the lock associated with the RBD for that process.
The unmount works fine, but removing the lock
You are right, I do want a single RBD, formatted with a cluster file
system, to use as a place for multiple VM image files to reside.
Doing everything straight from volumes would be more effective with regards
to snapshots, using CoW etc. but unfortunately for now OpenStack nova
insists on having
Hi Alex,
We're planning to deploy OpenStack Grizzly using KVM. I agree that running
every VM directly from RBD devices would be preferable, but booting from
volumes is not one of OpenStack's strengths and configuring nova to make
boot from volume the default method that works automatically is not
Tom,
I'm no expert as I didn't set it up, but we are using OpenStack Grizzly with KVM/QEMU and RBD volumes for VM's.
We boot the VMs from the RBD volumes and it all seems to work just fine.
Migration works perfectly, although live (no-break) migration only works from the command-line tools. The
I would love to know the difference between "ceph-deploy new host" and
"ceph-deploy new mon". I would appreciate your help.
Sent from my LG Mobile
"McNamara, Bradley" wrote:
Correct me if I'm wrong, I'm new to this, but I think the distinction between
the two methods is that using 'qemu-img crea
Depending on which hypervisor he's using, it may not be possible to mount the
RBD's natively.
For instance, the elephant in the room... ESXi.
I've pondered several architectures for presentation of Ceph to ESXi which may
be related to this thread.
1) Large RBD's (2TB-512B), re-presented throug
Hi,
Is it possible to turn off Ceph journaling if I switch to XFS?
For using it as a storage layer for Hadoop we're concerned about the
additional requirements for separate SSDs ($$), etc. In our testing we're
seeing a performance hit when using the same disk for both journal + data
... so we're
On Thu, 11 Jul 2013, Mandell Degerness wrote:
> Is there any command (in shell or python API), that can tell me if
> ceph is still creating pgs other than actually attempting a
> modification of the pg_num or pgp_num of a pool? I would like to
> minimize the number of errors I get and not keep try
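One way to poll for that from the shell (a sketch; it simply looks for PGs reported in a "creating" state rather than using a dedicated flag):

# summary of PG states; anything still being created shows up as "creating"
ceph pg stat
ceph -s | grep creating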
Hi Ker,
Unfortunately no. Ceph uses the journal for internal consistency and
atomicity and it can't use the XFS journal for it. On the BTRFS side,
we've been investigating allowing the Ceph journal to be on the same
disk as the OSD and doing a clone() operation to effectively reduce the
jou
Note that you *can* disable the journal if you use btrfs, but your write
latency will tend to be pretty terrible. This is only viable for
bulk-storage use cases where throughput trumps all and latency is not an
issue at all (it may be seconds).
We are planning on eliminating the double-write f
Hi there,
We've been noticing nasty multi-second cluster wide latencies if an OSD
drops out of an active cluster (due to power failure, or even being
stopped cleanly). We've also seen this problem occur when an OSD is
inserted back into the cluster.
Obviously, this has the effect of freezing
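If the worst of it is around OSDs rejoining, one thing people commonly try is throttling recovery and backfill; a hedged ceph.conf sketch (the values are assumptions to tune, and this only reduces recovery load, it does not remove the peering stall itself):

[osd]
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1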
I am on my first exploration of Ceph and I need help understanding these terms:
ceph-deploy new Host, ceph-deploy new MON Host, and ceph-deploy mon create Host.
I would appreciate your help.
Sent from my LG Mobile
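For illustration, a typical first run looks roughly like this (hostnames are placeholders): "ceph-deploy new HOST" only writes the initial ceph.conf and monitor list on the admin node, while "ceph-deploy mon create HOST" actually creates and starts the monitor daemon on that host.

ceph-deploy new mon1                 # generate ceph.conf and the monitor keyring, with mon1 as initial monitor
ceph-deploy install mon1 osd1 osd2   # push the Ceph packages to the nodes
ceph-deploy mon create mon1          # create and start the monitor on mon1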
Hello,
is this calculation for the number of PGs correct?
36 OSDs, Replication Factor 3
36 * 100 / 3 => 1200 PGs
But I then read that it should be a power of 2, so it should be 2048?
Stefan
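Roughly, yes: the commonly cited heuristic is (number of OSDs * 100) / replica count, rounded up to a power of two. Worked through: 36 * 100 / 3 = 1200, and since 2^10 = 1024 < 1200 <= 2^11 = 2048, you would round up to 2048 placement groups.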