When you run qemu-img, it converts the qcow2 image to raw format as part of
the import into the cluster. When you use rbd import, no conversion takes
place, so the image is imported AS IS (you can validate this by looking at the
size of
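For illustration, a rough sketch of the two paths being contrasted (the file
and pool/image names below are examples only, not from this thread):

    # qemu-img converts qcow2 -> raw on the fly while writing into the cluster
    qemu-img convert -f qcow2 -O raw disk0.qcow2 rbd:rbd/disk0

    # rbd import copies the source byte-for-byte, so convert to raw first
    qemu-img convert -f qcow2 -O raw disk0.qcow2 disk0.raw
    rbd import disk0.raw rbd/disk0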
We use VMware with Ceph; however, we don't use RBD directly (we have an NFS
server that exports RBD volumes as datastores to VMware). We did attempt
iSCSI with RBD to connect to VMware but ran into stability issues (which could
have been the target software we were using), but have found NFS to be
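As a hypothetical sketch of that kind of gateway (the image name, mount point,
and export options are examples only):

    rbd map rbd/vmware-ds1                      # shows up as e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /export/vmware-ds1
    echo '/export/vmware-ds1 *(rw,no_root_squash,sync)' >> /etc/exports
    exportfs -ra

ESXi then mounts the export as a regular NFS datastore.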
to use ESXi back-ended with Ceph storage. When you tested iSCSI, what
were the issues you noticed? What version of Ceph were you running then? What
iSCSI software did you use for the setup?
Regards,
Nikhil Mitra
From: "Campbell, Bill" < bcampb...@axcess-financial.com >
Reply-To:
Hey Stefan,
Are you using your Ceph cluster for virtualization storage? Is dm-writeboost
configured on the OSD nodes themselves?
- Original Message -
From: "Stefan Priebe - Profihost AG"
To: "Mark Nelson" , ceph-users@lists.ceph.com
Sent: Tuesday, August 18, 2015 7:36:10 AM
Subject
The Windows default (NTFS) allocation unit is 4k. Are you changing the allocation
unit to 8k as a default for your configuration?
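For reference, the allocation unit is set at format time, e.g. (the drive letter
is just an example):

    format E: /FS:NTFS /A:8192 /Q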
- Original Message -
From: "Gregory Farnum"
To: "Jason Villalta"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, September 17, 2013 10:40:09 AM
Subject: Re: [ceph-use
compare native performance of the SSD disks at 4K blocks vs Ceph performance with 4K blocks? It just seems there is a huge difference in the results.
On Tue, Sep 17, 2013 at 10:56 AM, Campbell, Bill <bcampb...@axcess-financial.com> wrote:
Windows default (NTFS) is a 4k block. Are you chang
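One rough way to make that comparison apples-to-apples is to run the same fio
workload against both, e.g. (device, pool, and image names are examples only,
and the RBD job needs an fio build with RBD support):

    # raw SSD, 4k random writes -- destroys data on the target device
    fio --name=ssd-4k --filename=/dev/sdX --rw=randwrite --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based

    # same workload against an RBD image
    fio --name=rbd-4k --ioengine=rbd --pool=rbd --rbdname=test-img \
        --clientname=admin --rw=randwrite --bs=4k --iodepth=32 \
        --runtime=60 --time_based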
I can't speak for OpenStack, but OpenNebula uses Libvirt/QEMU/KVM to access an
RBD directly for each virtual instance deployed, live-migration included (as
each RBD is in and of itself a separate block device, not a file system). I would
imagine OpenStack works in a similar fashion.
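As a rough illustration (the monitor host, image name, and secret UUID below are
placeholders), the guest definition points straight at the RBD image with a disk
stanza along these lines:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='...'/>
      </auth>
      <target dev='vdb' bus='virtio'/>
    </disk>

Libvirt/QEMU then talks to the cluster directly, which is what makes
live migration straightforward.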
- Origin
I think the version of libvirt included with RHEL/CentOS supports RBD storage
(but not RBD storage pools), so outside of compiling a newer version I'm not sure
there is anything else to be done aside from waiting for repo additions/newer
versions of the distro.
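For what it's worth, the "pools" part refers to libvirt RBD storage pools, which
newer libvirt defines roughly like this (names and monitor host are placeholders)
and loads with virsh pool-define / pool-start:

    <pool type='rbd'>
      <name>ceph-rbd</name>
      <source>
        <name>rbd</name>
        <host name='mon1.example.com' port='6789'/>
        <auth type='ceph' username='libvirt'>
          <secret uuid='...'/>
        </auth>
      </source>
    </pool>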
Not sure what your scenario is, but this is the ex
The "public" network is where all storage accesses from other systems or
clients will occur. When you map RBD's to other hosts, access object storage
through the RGW, or CephFS access, you will access the data through the
"public" network. The "cluster" network is where all internal replication
From: Campbell, Bill [mailto:bcampb...@axcess-financial.com]
Sent: Friday, October 23, 2015 9:11 AM
To: Jon Heese
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Proper Ceph netw
Yes, that is the TOTAL amount in the cluster.
For example, if you have a replica size of '3', 81489 GB available, and
you write 1 GB of data, then that data is written to the cluster 3 times,
so your total available drops to 81486 GB. It definitely threw me off at
first, but seeing as you can hav
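To put rough numbers on it (same figures as above, replica size 3):

    raw capacity       : 81489 GB
    usable (raw / 3)   : ~27163 GB
    write 1 GB         -> 3 GB of raw space consumed, raw free drops to 81486 GB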
Hello,
I was wondering if there were any plans in the near future for some sort of
Web-based management interface for Ceph clusters?
Bill Campbell
Infrastructure Architect
Axcess Financial Services, Inc.
7755 Montgomery Rd., Suite 400
Cincinnati, OH 45236