Hello Cephers,
I'm using ceph-ansible v3.1.5 to build a new Mimic Ceph cluster for OpenStack.
I want to use Erasure Coding for certain pools (images, cinder backups, cinder
for one additional backend, rgw data...).
The examples in group_vars/all.yml.sample don't show how to specify an erasure
coded pool.
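(For reference, whatever ceph-ansible variables end up being used, such pools boil down to a few plain ceph commands; a minimal sketch of an erasure-coded, RBD-ready pool, where the profile name, pool name and k/m values are made up:)

# hypothetical EC profile and pool; tune k/m and the failure domain to your cluster
ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
ceph osd pool create images-data 128 128 erasure ec-4-2
ceph osd pool set images-data allow_ec_overwrites true   # required for RBD on EC
ceph osd pool application enable images-data rbd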
On Saturday, July 21, 2018, 15:56:31 CEST, Satish Patel wrote:
> I am trying to deploy ceph-ansible with the lvm OSD scenario and am
> reading http://docs.ceph.com/ceph-ansible/master/osds/scenarios.html
>
> I have all-SSD disks and no separate journal; my plan was to keep the
> WAL/DB on the same disk.
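(The reply is not shown in this preview. For what it's worth, with BlueStore the collocated layout described above is simply what you get when no separate DB/WAL device is specified; per disk, the lvm scenario ends up doing roughly the following, with the device name as a placeholder:)

# WAL and RocksDB stay on the same SSD when --block.db/--block.wal are omitted
ceph-volume lvm create --bluestore --data /dev/sdb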
On 2018-07-10 06:26, Konstantin Shalygin wrote:
Has anyone used EC pools with OpenStack in production?
By chance, I found this link:
https://www.reddit.com/r/ceph/comments/72yc9m/ceph_openstack_with_ec/
Yes, this is a good post.
My configuration is:
cinder.conf:
[erasure-rbd-hdd]
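(The configuration is cut off here. The usual way to wire an RBD-backed Cinder backend to an erasure-coded pool is to keep the image headers in a replicated pool and let the client route the data objects to the EC pool; a sketch, with hypothetical client and pool names, in the Ceph config used by that backend:)

[client.cinder]
# data objects go to the EC pool; headers/metadata stay in the replicated pool Cinder points at
rbd default data pool = erasure-rbd-hdd-data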
Hello Cephers!
Having read that EC pools have been supported for writable RBD pools since
Luminous, I decided to use them in a new OpenStack cloud deployment. The gain
in storage is really noticeable, and I want to reduce the storage cost.
So I decided to use ceph-ansible to deploy the Ceph
On 12/05/2014 15:45, Uwe Grohnwaldt wrote:
Hi,
Yes, we use it in production. I can stop/kill the tgt on one server and
XenServer fails over to the second one. We enabled multipathing in XenServer.
In our setup we don't have multiple IP ranges, so we scan/log in to the second
target on XenServer startup w
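(The preview is cut off, but the scan/login step presumably comes down to open-iscsi commands along these lines; the portal address and IQN are placeholders:)

# discover and log in to the second target so both paths are available
iscsiadm -m discovery -t sendtargets -p 192.168.1.2:3260
iscsiadm -m node -T iqn.2014-05.example:rbd-target -p 192.168.1.2:3260 --login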
On 07/05/2014 15:23, Vlad Gorbunov wrote:
It's easy to install tgtd with Ceph support, on Ubuntu 12.04 for example.
Add the ceph-extras repo:
echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-extras.list
Install tgtd with rbd support:
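(The command itself is cut off in the preview; presumably it is just installing the rbd-enabled package from that repo, something like this, with the package name assumed to be tgt:)

sudo apt-get update
sudo apt-get install tgt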
On 07/03/2014 10:50, Indra Pramana wrote:
Hi,
I have a Ceph cluster, currently with 5 OSD servers and around 22 OSDs
with SSD drives, and I have noticed that the I/O speed, especially write
access to the cluster, is degrading over time. When we first started
the cluster, we could get up to 250-300 MB
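(One way to put numbers on that degradation, independent of the client stack, is a RADOS-level benchmark; the pool name below is a placeholder:)

# 60 s of writes, keep the objects so a read pass can follow, then clean up
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
rados -p testpool cleanup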
On 08/01/2014 02:46, Christian Balzer wrote:
It is what it is.
As in, sid (unstable) and testing are named jessie/sid
in /etc/debian_version, including a notebook of mine that has been
"sid" (as in /etc/apt/sources.list) for 10 years.
This naming convention (next_release/sid) has been in place
On 20/12/2013 03:51, Christian Balzer wrote:
Hello Mark,
On Thu, 19 Dec 2013 17:18:01 -0600 Mark Nelson wrote:
On 12/16/2013 02:42 AM, Christian Balzer wrote:
Hello,
Hi Christian!
I'm new to Ceph, not new to replicated storage.
Simple test cluster with 2 identical nodes running Debian Jessie
On 05/12/2013 14:01, Karan Singh wrote:
Hello Everyone,
I am trying to boot from a Ceph volume using the blog post
http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/
and http://docs.openstack.org/user-guide/content/boot_from_volume.html
I need help with this error.
=
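(The error itself is cut off above. For context, the boot-from-volume workflow those two links describe is roughly the following; IDs, names and the flavor are placeholders, and the block-device-mapping syntax is the old nova one from that era:)

# create a bootable volume from a Glance image, then boot an instance from it
cinder create --image-id <image-uuid> --display-name boot-vol 10
nova boot --flavor m1.small --block-device-mapping vda=<volume-uuid>:::0 test-vm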
On 01/12/2013 15:22, German Anders wrote:
[...]
ceph@ceph-deploy01:/mnt/ceph-btrfs-test$ for i in 1 2 3 4; do sudo dd
if=/dev/zero of=./a bs=1M count=1000; done
Hello,
You should really write anything but zeros.
I suspect that nothing is really written to disk, especially on btrfs, a
CoW filesystem.
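(A sketch of the same test with incompressible data, forced to disk; the source file is made up, the destination and loop mirror the original command:)

# generate 1000 MB of incompressible data once, then write it and flush it to disk
dd if=/dev/urandom of=/tmp/random.img bs=1M count=1000
for i in 1 2 3 4; do sudo dd if=/tmp/random.img of=./a bs=1M count=1000 conv=fdatasync; done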
On 22/10/2013 14:38, Damien Churchill wrote:
Yeah, I'd thought of doing it that way; however, it would be nice to
avoid that if possible, since the machines in the cluster will be
running under QEMU using librbd, so it'd be additional overhead having
to re-export the drives using iSCSI.
Hello,
On 17/10/2013 11:06, NEVEU Stephane wrote:
Hi list,
I'm trying to figure out how I can set up 3 defined cluster IPs and 3
other public IPs on my 3-node cluster with ceph-deploy (Ubuntu raring,
stable).
Here are my IPs for the public network: 172.23.5.101, 172.23.5.102,
172.23.5.103
A
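(The rest of the question is cut off. The usual answer is two lines in the [global] section of the ceph.conf that ceph-deploy pushes out; the public subnet below matches the addresses quoted, the cluster subnet is hypothetical:)

public network = 172.23.5.0/24
cluster network = 10.10.10.0/24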
Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"
Hope it helps.
--
Gilles Mocellin
Nuage Libre
On 06/09/2013 17:37, Alfredo Deza wrote:
On Fri, Sep 6, 2013 at 11:17 AM, Gilles Mocellin wrote:
Perhaps it's worth a bug report, or some changes in ceph-deploy:
I've just deployed some test clusters with ceph-deploy on Debian Wheezy.
I had errors with ceph-deploy, when the d
…traffic (with bwm-ng) I see that the cluster
network is now used.
(You can also look at established connections with ss.)
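(Concretely, something like this; the grep pattern assumes the default daemon name:)

# per-interface throughput, to see which network carries replication traffic
bwm-ng
# established OSD connections with their local and peer addresses
ss -tnp | grep ceph-osd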
--
Gilles Mocellin
Nuage Libre
…if it does not find sudo...
Thank you devs for your work!
--
Gilles Mocellin
Nuage Libre
On 03/09/2013 14:56, Joao Eduardo Luis wrote:
On 09/03/2013 02:02 AM, 이주헌 wrote:
Hi all.
I have 1 MDS and 3 OSDs. I installed them via ceph-deploy (dumpling,
version 0.67.2).
At first, it worked perfectly. But after I rebooted one of the OSDs, ceph-mon
launched on port 6800, not 6789.
This has been
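(The rest of the answer is cut off. Purely as a diagnostic sketch, a quick way to compare what the monitor actually bound to with what the monmap expects:)

# which address/port is ceph-mon listening on?
ss -tlnp | grep ceph-mon
# what does the monmap say it should be?
ceph mon dump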
On 06/08/2013 02:57, James Harper wrote:
In the previous email, you are forgetting that RAID1 has a write penalty of 2,
since it is mirroring, and now we are talking about different types of RAID and
nothing really to do with Ceph. One of the main advantages of Ceph is to have
data replicated, so y
On 11/07/2013 12:08, Tom Verdaat wrote:
Hi guys,
We want to use our Ceph cluster to create a shared-disk file system to
host VMs. Our preference would be to use CephFS, but since it is not
considered stable, I'm looking into alternatives.
The most appealing alternative seems to be to creat
On 01/05/2013 18:23, Wyatt Gorman wrote:
Here is my ceph.conf. I just figured out that the second "host =" isn't
necessary, though it is like that in the 5-minute quick start guide...
(Perhaps I'll submit the couple of fixes that I've had to implement so
far.) That fixes the "redefined host" is