On Wed, Nov 5, 2014 at 11:57 PM, Wido den Hollander wrote:
> On 11/05/2014 11:03 PM, Lindsay Mathieson wrote:
>
> >
> > - Geo Replication - that's done via federated gateways? Looks complicated
> :(
> > * The remote slave, it would be read only?
> >
>
> That is only for the RADOS Gateway. Ceph i
On 11/05/2014 11:03 PM, Lindsay Mathieson wrote:
> Morning all ..
>
> I have a simple 3-node, 2-OSD cluster serving VM images (Proxmox). The
> two OSDs are on the two VM hosts. Size is set to 2 for replication on both
> OSDs. SSD journals.
>
>
> - if the Ceph Client (VM guest over RBD)
Morning all ..
I have a simple 3 node 2 osd cluster setup serving VM Images (proxmox). The
two OSD's are on the two VM hosts. Size is set to 2 for replication on both
OSD's. SSD journals.
- if the Ceph Client (VM quest over RBD) is accessing data that is stored on
the local OSD, will it avoi
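For the placement part of the question, the cluster can tell you exactly which OSDs serve a given object. A quick way to inspect this (the pool and object names below are made-up examples):

```shell
# Ask the cluster which placement group and OSDs serve a given object.
# "rbd" is the pool name; RBD image data objects are named like
# rbd_data.<image-id>.<offset>. Adjust both to your setup.
ceph osd map rbd rbd_data.1234.0000000000000000

# The output names the PG and the acting set; the first OSD listed
# (the primary) serves all reads for that object, whether or not it
# happens to be local to the client.
```

With size=2 across two OSDs, primaries are distributed over both hosts, so roughly half of all reads will go over the network regardless of where the client runs.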
On 13.10.2014 16:47, Marcus White wrote:
>
> 1. In what stack is the driver used in that case if QEMU communicates
> directly with librados?
The qemu process communicates directly with the Ceph cluster over the
network. As far as the host kernel is concerned, it is a "normal" userland process.
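Concretely, when QEMU is built with librbd support, the RBD image is referenced as a drive URI and no kernel block device is involved. A minimal sketch (the pool name, image name, and client id are placeholders):

```shell
# Boot a guest whose disk is an RBD image, accessed via librbd/librados
# entirely in userspace -- the host kernel only sees TCP sockets.
# Pool "rbd", image "vm-disk1", and user "admin" are example values.
qemu-system-x86_64 \
  -m 2048 \
  -drive file=rbd:rbd/vm-disk1:id=admin,format=raw,if=virtio
```

libvirt generates an equivalent configuration from a `<disk type='network'>` element, so the same userspace path applies to managed VMs.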
> 2. With QEM
Great!
Some more followups :)
1. In what stack is the driver used in that case if QEMU communicates
directly with librados?
2. With QEMU-librados I would guess the new kernel targets/LIO would
not work? They give better performance and lower CPU usage.
3. Where is the kernel driver used in that case?..
On 10.10.2014 02:19, Marcus White wrote:
>
> For VMs, I am trying to visualize how the RBD device would be exposed.
> Where does the driver live exactly? If its exposed via libvirt and
> QEMU, does the kernel driver run in the host OS, and communicate with
> a backend Ceph cluster? If yes, does li
Thanks:)
If someone can help reg the question below, that would be great!
"
>
> For VMs, I am trying to visualize how the RBD device would be exposed.
> Where does the driver live exactly? If its exposed via libvirt and
> QEMU, does the kernel driver run in the host OS, and communicate with
> a b
On Fri, Oct 10, 2014 at 1:19 AM, Marcus White
wrote:
> FUSE is probably for Ceph file system..
For avoidance of doubt: there are *two* fuse modules in ceph:
* RBD: http://ceph.com/docs/master/man/8/rbd-fuse/
* CephFS: http://ceph.com/docs/master/man/8/ceph-fuse/
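Both mount through FUSE but expose very different things; a quick sketch of each (mount points are hypothetical):

```shell
# rbd-fuse: exposes every RBD image in a pool as a regular file
# under the mount point (/mnt/rbd is a placeholder).
rbd-fuse /mnt/rbd

# ceph-fuse: mounts the CephFS filesystem itself.
ceph-fuse /mnt/cephfs
```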
Cheers,
John
>
>
> Just curious, what kind of applications use RBD? It can't be
> applications which need high-speed SAN storage performance
> characteristics?
>
Most people seem to be using it as storage for OpenStack.
I've heard about people using RBD + Heartbeat to make an HA NFS, while
they wait for CephF
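The RBD-backed NFS idea boils down to mapping an image with the kernel client and exporting a filesystem on top of it. A rough sketch (image name, mount point, and export network are made up; the Heartbeat failover logic itself is not shown):

```shell
# Create and map an RBD image via the kernel driver (krbd).
rbd create nfs-data --size 10240     # 10 GiB; name is a placeholder
rbd map nfs-data                     # typically appears as /dev/rbd0

# Put a filesystem on it and export it over NFS.
mkfs.ext4 /dev/rbd0
mkdir -p /export/nfs-data
mount /dev/rbd0 /export/nfs-data
exportfs -o rw 10.0.0.0/24:/export/nfs-data
```

Heartbeat's job in this setup is to remap the image, remount, and re-export on the standby node when the active one fails.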
Thanks:)
Just curious, what kind of applications use RBD? It can't be
applications which need high-speed SAN storage performance
characteristics?
For VMs, I am trying to visualize how the RBD device would be exposed.
Where does the driver live exactly? If its exposed via libvirt and
QEMU, does the
Comments inline.
On Tue, Oct 7, 2014 at 5:51 PM, Marcus White
wrote:
> Hello,
> Some basic Ceph questions, would appreciate your help:) Sorry about
> the number and detail in advance!
>
> a. Ceph RADOS is strongly consistent and different from usual object,
> does that mean all metadata also, co
Just a bump:)
Is this the right list or should I be posting in devel?
MW
On Tue, Oct 7, 2014 at 5:51 PM, Marcus White wrote:
> Hello,
> Some basic Ceph questions, would appreciate your help:) Sorry about
> the number and detail in advance!
>
> a. Ceph RADOS is strongly consistent and different
Hello,
Some basic Ceph questions, would appreciate your help:) Sorry about
the number and detail in advance!
a. Ceph RADOS is strongly consistent, unlike typical object stores;
does that mean all metadata (container, account, etc.) is also
consistent and everything is updated in the path of