Hi All,
I have a four-node Ceph cluster and another three-node setup for OpenStack,
and I have integrated Ceph with OpenStack.
Whenever I try to create storage with Ceph as the storage backend for an
OpenStack VM, the creation process goes on forever in the Horizon dashboard.
It never completes.
Hi,
You can use KeyValueStore by setting "osd objectstore = keyvaluestore-dev".
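A minimal ceph.conf sketch of that setting (the [osd] section placement is my assumption; as far as I know the backend only applies to OSDs created after it is set, not to existing ones):
[osd]
# experimental key/value backend in Firefly
osd objectstore = keyvaluestore-dev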
On Tue, Jun 10, 2014 at 2:36 PM, 飞 wrote:
> Hello, I've read the release notes for Firefly. This release supports
> KeyValueStore and provides better performance,
> but I can't find any documentation on how to use KeyVa
Hello, I've read the release notes for Firefly. This release supports
KeyValueStore and provides better performance,
but I can't find any documentation on how to use KeyValueStore as the backend store.
Can you tell me how? Thank you.
Hi,
I can connect to Ceph and mount an RBD block device,
but an image created with glance image-create ends up on local storage.
# glance image-list
+--+-+-+--+---++
| ID | Name
On Mon, Jun 9, 2014 at 6:42 PM, Mike Dawson wrote:
> Craig,
>
> I've struggled with the same issue for quite a while. If your i/o is similar
> to mine, I believe you are on the right track. For the past month or so, I
> have been running this cronjob:
>
> * * * * * for strPg in `ceph pg dump
Hi,
I am failing to get OpenStack and Ceph working together.
I set things up based on this URL:
http://ceph.com/docs/next/rbd/rbd-openstack/
I can see the state of the Ceph cluster from the OpenStack node (the Ceph client),
but the failure occurs at cinder create.
Ceph Cluster:
CentOS release 6.5
Ceph 0.80.1
OpenStack:
Ubuntu 12
Craig,
I've struggled with the same issue for quite a while. If your i/o is
similar to mine, I believe you are on the right track. For the past
month or so, I have been running this cronjob:
* * * * * for strPg in `ceph pg dump | egrep
'^[0-9]\.[0-9a-f]{1,4}' | sort -k20 | awk '{ print
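Purely as a hypothetical sketch of the idea behind that cronjob (not the actual command), a cron entry that deep-scrubs a few of the PGs with the oldest deep-scrub timestamps once a minute might look like the line below; the "head -n 5" batch size and the use of "ceph pg deep-scrub" are my assumptions:
* * * * * for strPg in $(ceph pg dump 2>/dev/null | egrep '^[0-9]\.[0-9a-f]{1,4}' | sort -k20 | head -n 5 | awk '{ print $1 }'); do ceph pg deep-scrub $strPg; done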
On Mon, Jun 9, 2014 at 3:22 PM, Craig Lewis wrote:
> I've correlated a large deep scrubbing operation to cluster stability
> problems.
>
> My primary cluster does a small amount of deep scrubs all the time, spread
> out over the whole week. It has no stability problems.
>
> My secondary cluster d
I've correlated a large deep scrubbing operation to cluster stability
problems.
My primary cluster does a small amount of deep scrubs all the time, spread
out over the whole week. It has no stability problems.
My secondary cluster doesn't spread them out. It saves them up, and tries
to do all o
Barring a newly-introduced bug (doubtful), that assert basically means
that your computer lied to the ceph monitor about the durability or
ordering of data going to disk, and the store is now inconsistent. If
you don't have data you care about on the cluster, by far your best
option is:
1) Figure o
Miki,
osd crush chooseleaf type is set to 1 by default, which means CRUSH tries to
place each replica on a different node rather than on the same node. You would
need to set it to 0 for a single-node cluster.
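A minimal ceph.conf sketch of that change (assuming the cluster is created fresh so the default CRUSH map picks it up; on an existing cluster the CRUSH rule itself would have to be edited instead):
[global]
# build the default CRUSH map so replicas are separated across OSDs, not hosts
osd crush chooseleaf type = 0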
John
On Sun, Jun 8, 2014 at 10:40 PM, Miki Habryn wrote:
> I set up a single-node, dual-osd
More detail on this. I recently upgraded my Ceph cluster from Emperor to
Firefly. After the upgrade was done, I noticed one of the OSDs not coming
back to life. While troubleshooting, I rebooted the OSD server
and the keyring shifted.
My $ENV.
4x OSD servers (each has 12, 1 f
Thanks Alfredo, happy to see your email.
I was a victim of this problem; hope 1.5.4 will take away my pain :-)
- Karan Singh -
On 09 Jun 2014, at 15:33, Alfredo Deza wrote:
> http://ceph.com/ceph-deploy/docs/changelog.html#id1
Hi,
I am trying to run schedule_suite.sh on our custom Ceph build to leverage the
Inktank suites in our testing. Can someone help me with using this shell script,
so that I can provide my own targets instead of the script picking them from the Ceph lab?
Also, kindly let me know if anyone has set up a lock serv
Hi All,
We've experienced a lot of issues since EPEL started packaging a
0.80.1-2 version that YUM
will see as higher than 0.80.1 and therefore will choose to install
the EPEL one.
That package has some issues from what we have seen and in most cases
will break the installation
process.
There is
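One generic yum-level workaround (my own suggestion, not something stated in this thread) is to stop EPEL from offering its ceph packages so the ceph.com ones win, e.g. by adding an exclude line to the existing [epel] section of /etc/yum.repos.d/epel.repo:
exclude=ceph*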
We have an NFS to RBD gateway with a large number of smaller RBDs. In
our use case we are allowing users to request their own RBD containers
that are then served up via NFS into a mixed cluster of clients. Our
gateway is quite beefy, probably more than it needs to be, 2x8 core
cpus and 96GB ra
Many thanks
2014-06-09 14:04 GMT+02:00 Wido den Hollander :
> On 06/09/2014 02:00 PM, Ignazio Cassano wrote:
>
>> Many thanks...
>> Can I create a format 2 image (with support for linear snapshot) using
>> qemu-img command ?
>>
>
> Yes:
>
> qemu-img create -f raw rbd:rbd/image1:rbd_default_form
On 06/09/2014 02:00 PM, Ignazio Cassano wrote:
> Many thanks...
> Can I create a format 2 image (with support for linear snapshot) using
> qemu-img command ?
Yes:
qemu-img create -f raw rbd:rbd/image1:rbd_default_format=2 10G
'rbd_default_format' is a Ceph setting which is passed down to librbd
d
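As a quick usage note (assuming the image lands in the default 'rbd' pool, as in the example above), the resulting format can be verified with:
rbd info rbd/image1
The output should contain a "format: 2" line for a format 2 image.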
Many thanks...
Can I create a format 2 image (with support for linear snapshot) using
qemu-img command ?
2014-06-09 13:05 GMT+02:00 Ilya Dryomov :
> On Mon, Jun 9, 2014 at 3:01 PM, Ignazio Cassano
> wrote:
> > Hi all,
> > I installed ceph firefly and now I am playing with rbd snapshot.
> > I cr
On Mon, Jun 9, 2014 at 3:01 PM, Ignazio Cassano
wrote:
> Hi all,
> I installed ceph firefly and now I am playing with rbd snapshot.
> I created a pool (libvirt-pool) with two images:
>
> libvirtimage1 (format 1)
> image2 (format 2).
>
> When I try to protect the first image:
>
> rbd --pool libvirt-
Hi all,
I installed ceph firefly and now I am playing with rbd snapshot.
I created a pool (libvirt-pool) with two images:
libvirtimage1 (format 1)
image2 (format 2).
When I try to protect the first image:
rbd --pool libvirt-pool snap protect --image libvirtimage1 --snap
libvirt-snap
it gives me
I solved this by exporting the key with "ceph auth export ..." :D
For the above question, I was using a key in the old format version.
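A minimal sketch of that fix (the client.admin entity and keyring path come from the quoted message below; running the export on a node that can already authenticate and then copying the file to the new host is my assumption):
ceph auth export client.admin -o /etc/ceph/ceph.client.admin.keyring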
On 06/09/2014 05:44 PM, Ta Ba Tuan wrote:
> Hi all,
> I am adding a new ceph-data host, but
> #ceph -s -k /etc/ceph/ceph.client.admin.keyring
> 2014-06-09 17:39:51.686082 7fade4f14700 0 librado
Hi all,
I am adding a new ceph-data host, but
#ceph -s -k /etc/ceph/ceph.client.admin.keyring
2014-06-09 17:39:51.686082 7fade4f14700 0 librados: client.admin
authentication error (1) Operation not permitted
Error connecting to cluster: PermissionError
my ceph.conf:
[global]
auth cluster requ
On Mon, Jun 9, 2014 at 11:48 AM, wrote:
> I was building a small test cluster and noticed a difference with trying
> to rbd map depending on whether the cluster was built using fedora or
> CentOS.
>
> When I used CentOS osds, and tried to rbd map from arch linux or fedora,
> I would get "rbd: add
I was building a small test cluster and noticed a difference when trying
to rbd map, depending on whether the cluster was built using Fedora or
CentOS.
When I used CentOS OSDs and tried to rbd map from Arch Linux or Fedora,
I would get "rbd: add failed: (34) Numerical result out of range". It
se