Hi all,
I had previously set up a cache pool for one of my pools.
But I ran into some problems with the cache pool, so I removed it from
my Ceph cluster.
The DATA pool no longer uses a cache pool, but the "lfor" setting still
appears.
*lfor* seems to be a setting, not a flag.
pool 3 'data_po
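For context, the usual sequence for detaching a cache tier looks roughly like the following (a sketch; the pool names data and data-cache are hypothetical):

ceph osd tier cache-mode data-cache forward    # stop promoting new objects into the cache
rados -p data-cache cache-flush-evict-all      # flush and evict everything still cached
ceph osd tier remove-overlay data              # detach the overlay from the base pool
ceph osd tier remove data data-cache           # remove the tier relationship itself
ceph osd dump | grep "^pool"                   # the base pool may keep showing "lfor" afterwards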
Hi!
We have a small production Ceph cluster based on the Firefly release.
Currently, the client and cluster networks share the same IP range and VLAN.
The same network is also used for the OpenNebula instance that we use to
manage our cloud. This network segment was created some time ago and has
grown with the install
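For reference, splitting client and replication traffic later on is mostly a ceph.conf change (a sketch; the subnets below are made up):

[global]
# client-facing (public) traffic
public network = 192.168.10.0/24
# OSD replication and heartbeat (cluster) traffic
cluster network = 192.168.20.0/24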
Hello,
I want to use CephFS instead of vanilla HDFS.
I have a question regarding data locality.
When I configure the object size (ceph.object.size) as 64 MB, what will happen
with data striping (http://ceph.com/docs/master/architecture/#data-striping)?
Will it still be striped by unit-size o
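For what it's worth, the same layout knobs can also be inspected and set per directory through CephFS xattrs (a sketch; the mount point /mnt/cephfs/hadoop is hypothetical). With stripe_unit equal to object_size and stripe_count 1, each 64 MB chunk of a file lands in a single RADOS object:

setfattr -n ceph.dir.layout.object_size -v 67108864 /mnt/cephfs/hadoop   # 64 MB objects
setfattr -n ceph.dir.layout.stripe_unit -v 67108864 /mnt/cephfs/hadoop   # stripe unit = object size
setfattr -n ceph.dir.layout.stripe_count -v 1 /mnt/cephfs/hadoop         # no striping across objects
getfattr -n ceph.dir.layout /mnt/cephfs/hadoop                           # verify the resulting layout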
Hi, first off: long time reader, first time poster :)..
I have a 4 node ceph cluster (~12TB in total) and an openstack cloud
(juno) running.
Everything we have is SUSE-based, running Ceph 0.80.8.
Now, the cluster works fine:
cluster 54636e1e-aeb2-47a3-8cc6-684685264b63
health HEALTH_OK
Glance needs some additional permissions including write access to the pool
you want to add images to. See the docs at:
http://ceph.com/docs/master/rbd/rbd-openstack/
Cheers,
Erik
On Apr 6, 2015 7:21 AM, wrote:
> Hi, first off: long time reader, first time poster :)..
> I have a 4 node ceph clu
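For reference, the caps those docs recommend for the Glance user look roughly like this (a sketch; it assumes the image pool is simply called images, as elsewhere in this thread):

ceph auth get-or-create client.glance mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'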
Hi eric, thanks for the reply.
As far as I can tell, client.glance already has all the rights
needed on the images pool?
//f
> Glance needs some additional permissions including write access to the
> pool
> you want to add images to. See the docs at:
>
> http://ceph.com/docs/master/rbd/rbd-
Hi,
Can you check if you can import the images directly to the pool?
# rbd -p images --id glance --keyring /etc/ceph/ceph.client.glance.keyring import
If that command goes well, you may have to go over the Glance config file.
On 4/6/15 9:24 PM, florian.rom...@datalounges.com wrote:
Hi eri
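Spelled out with a hypothetical source image and destination name, that test import would be:

rbd -p images --id glance --keyring /etc/ceph/ceph.client.glance.keyring \
    import /tmp/cirros-0.3.4.img glance-write-test   # confirms client.glance can write to the pool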
Hey Cephers,
For four days, August 10-13, Intel and Red Hat are holding a face-to-face
Ceph hackathon in Hillsboro, Oregon! We will be focusing on both
bug fixes and feature progression in the areas listed below.
Any developer who has made a significant contribution to the Ceph code
base is invit
On 04/04/2015 02:49 PM, Don Doerner wrote:
> Key problem resolved by actually installing (as opposed to simply
> configuring) the EPEL repo. And with that, the cluster became viable.
> Thanks all.
Hi Don,
I'm not sure I understand what you did to fix this. Can you share more
information about t
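For anyone who hits the same thing: actually installing EPEL, rather than only adding a repo stanza by hand, is usually a one-liner (a sketch; the package name and URL assume RHEL/CentOS 7):

sudo yum install -y epel-release   # pulls in the repo definition and its GPG key
# or, if epel-release is not available from the configured repos:
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm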
moving this to ceph-user where it needs to be for eyeballs and responses. :)
On Mon, Apr 6, 2015 at 1:34 AM, Paul Evans wrote:
> Hello Ceph Community & thanks to anyone with advice on this interesting
> situation...
>
> The Problem: we hav
Hey,
We keep hearing that running Hypervisors (KVM) on the OSD nodes is a bad
idea. But why exactly is that the case?
In our use case, under normal operation our VMs use relatively low amounts
of CPU resources, and so do the OSD services, so why not combine them? (We use
ceph for openstack volume/im
On Apr 3, 2015, at 12:37 AM, LOPEZ Jean-Charles wrote:
>
> according to your ceph osd tree capture, although the OSD reweight is set to
> 1, the OSD CRUSH weight is set to 0 (2nd column). You need to assign the OSD
> a CRUSH weight so that it can be selected by CRUSH: ceph osd crush reweight
>
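Spelled out, the suggested command looks like this (the OSD id and weight here are hypothetical; the weight is conventionally the disk's size in TiB):

ceph osd crush reweight osd.12 1.81   # give the OSD a non-zero CRUSH weight
ceph osd tree                         # the second column should no longer be 0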
We just had a fairly extensive discussion about this on the thread "running
Qemu / Hypervisor AND Ceph on the same nodes". Check that out in the
archives.
On Fri, Apr 3, 2015 at 6:08 AM, Piotr Wachowicz <
piotr.wachow...@brightcomputing.com> wrote:
>
> Hey,
>
> We keep hearing that running Hypervi
Trying to download the latest Ceph repos to my RHEL 7.1 system. I need to get
the gpg key for this, and this is what happens:
# rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
curl: (7) Failed to connect to 2607:f298:4:147::b05:fe2a: Network is unreachable
error
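If only the IPv6 route is unreachable, one workaround is to fetch the key over IPv4 and import the saved file (a sketch; the temporary path is arbitrary):

curl -4 -o /tmp/release.asc 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'   # force IPv4
sudo rpm --import /tmp/release.asc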
Could somebody please reply to my queries?
Thank you.
Regards,
Pragya Jain
Department of Computer Science, University of Delhi, Delhi, India
On Saturday, 4 April 2015 3:24 PM, pragya jain
wrote:
Hello all!
As the documentation says, "One of the unique features of Ceph is that it
decouples da
Hi Guys,
We ran tests a while back looking at different IO elevators but they are
quite old now:
http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/
On 04/05/2015 08:36 PM, Francois Lafont wrote:
On 04/06/2015 02:54, Lionel Bouton wrote:
I have never tested these pa
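For quick experiments, the elevator can be checked and switched per device at runtime (a sketch; /dev/sdb is hypothetical):

cat /sys/block/sdb/queue/scheduler                   # e.g. "noop [deadline] cfq"
echo cfq | sudo tee /sys/block/sdb/queue/scheduler   # switch only this disk, takes effect immediately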
Like the last comment on the bug says, the message about block migration (drive
mirroring) indicates that nova is telling libvirt to copy the virtual disks,
which is not what should happen for ceph or other shared storage.
For Ceph, just plain live migration should be used, not block migration.
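With the nova CLI of that era the difference is just a flag (a sketch; the instance and host names are hypothetical):

nova live-migration my-instance compute-02                   # plain live migration over shared storage
nova live-migration --block-migrate my-instance compute-02   # block migration copies the disks: avoid with Ceph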
Yes, it's expected. The crush map contains the inputs to the CRUSH hashing
algorithm. Every change made to the crush map causes the hashing algorithm
to behave slightly differently. It is consistent though. If you removed
the new bucket, it would go back to the way it was before you made the
ch
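One way to see that determinism without touching the cluster is to replay mappings offline with crushtool (a sketch; the rule number and replica count are hypothetical):

ceph osd getcrushmap -o crush.bin                # grab the current crush map
crushtool -i crush.bin --test --show-mappings \
    --rule 0 --num-rep 3 --min-x 0 --max-x 9     # same map and same inputs always give the same mappings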
In that case, I'd set the crush weight to the disk's size in TiB, and mark
the osd out:
ceph osd crush reweight osd.
ceph osd out
Then your tree should look like:
-9 *2.72* host ithome
30 *2.72* osd.30 up *0*
An OSD can be UP and OUT, which ca
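Put together for the tree shown above (osd.30 and the 2.72 TiB weight are taken from that tree), a sketch:

ceph osd crush reweight osd.30 2.72   # CRUSH weight = disk size in TiB
ceph osd out 30                       # mark it OUT while it stays UP
ceph osd tree                         # should now show weight 2.72 and reweight 0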
On Mon, Apr 6, 2015 at 2:21 AM, Ta Ba Tuan wrote:
> Hi all,
>
> I had previously set up a cache pool for one of my pools.
> But I ran into some problems with the cache pool, so I removed it from my
> Ceph cluster.
>
> The DATA pool no longer uses a cache pool, but the "lfor" setting still
> app
On Mon, Apr 6, 2015 at 7:48 AM, Patrick McGarry wrote:
> moving this to ceph-user where it needs to be for eyeballs and responses. :)
>
>
> On Mon, Apr 6, 2015 at 1:34 AM, Paul Evans wrote:
>> Hello Ceph Community & thanks to anyone with advice on this interesting
>> situation...
>>
On Mon, Apr 6, 2015 at 4:17 AM, Dmitry Meytin wrote:
> Hello,
>
> I want to use CephFS instead of vanilla HDFS.
>
> I have a question regarding data locality.
>
> When I configure the object size (ceph.object.size) as 64 MB, what will happen
> with data striping
> (http://ceph.com/docs/master/ar
Thanks for the insights, Greg. It would be great if the CRUSH rule for an EC
pool could be dynamically changed… but if that’s not the case, the troubleshooting
doc also offers up the idea of adding more OSDs, and we have another 8 OSDs
(one from each node) we can move into the default root.
Howeve
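For the record, each of those OSDs can be placed under the default root by restating its CRUSH location (a sketch; the OSD id, weight, and host name are hypothetical):

ceph osd crush set osd.40 1.81 root=default host=node5   # re-place osd.40 under root=default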
Hi,
lrwxrwxrwx 1 root root 10 Apr 4 16:27 1618223e-b8c9-4c4a-b5d2-ebff6d64cb12 ->
../../sdb1
looks encouraging. Could you run
sudo sgdisk --info 1 /dev/sdb
to check that it matches? If you're on IRC feel free to ping me (CEST
France/Paris time) and we can have a debug session.
Cheers
On 06/04
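In other words, the check is that the partition's unique GUID matches the symlink name above (a sketch, assuming that listing came from /dev/disk/by-partuuid):

sudo sgdisk --info 1 /dev/sdb | grep 'Partition unique GUID'   # GUID of /dev/sdb1
ls -l /dev/disk/by-partuuid/ | grep -i sdb1                    # symlink name should be the same GUID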
I see that ceph has 'ceph osd perf' that gets the latency of the OSDs.
Is there a similar command that would provide some performance data
about RBDs in use? I'm concerned about our ability to determine which
RBD(s) may be "abusing" our storage at any given time.
What are others doing to locate pe
Mark Nelson wrote:
> We ran tests a while back looking at different IO elevators but they are
> quite old now:
>
> http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/
Switching from deadline to cfq doesn't seem very worthwhile with HDDs.
But in this case, I can't use so
Hi,
I tried to install firefly v0.80.9 on a freshly installed RHEL 6.5 by following
http://ceph.com/docs/master/start/quick-ceph-deploy/#create-a-cluster but it
installed v0.80.5 instead. Is that really what we want by default? Or am I
misreading the instructions somehow?
Cheers
--
Loïc D
I'm not sure exactly what your steps were, but I reinstalled a monitor
yesterday on CentOS 6.5 using ceph-deploy with the /etc/yum.repos.d/ceph.repo
from ceph.com, which I've included below.
Bruce
[root@essperf13 ceph-mon01]# ceph -v
ceph version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
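The repo file itself didn't make it into this excerpt; a typical firefly ceph.repo for el6 looked roughly like this (a sketch based on the ceph.com layout of the time, not necessarily Bruce's exact file):

[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/el6/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc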
I understand. Thank you, Gregory Farnum, for your explanation.
--
Tuantaba
Ha Noi-VietNam
On 07/04/2015 00:54, Gregory Farnum wrote:
On Mon, Apr 6, 2015 at 2:21 AM, Ta Ba Tuan wrote:
Hi all,
I had previously set up a cache pool for one of my pools.
But I ran into some problems with the cache pool, so I remove
On Apr 6, 2015, at 1:49 PM, Craig Lewis wrote:
> In that case, I'd set the crush weight to the disk's size in TiB, and mark
> the osd out:
> ceph osd crush reweight osd.
> ceph osd out
>
> Then your tree should look like:
> -9 2.72 host ithome
> 30 2.72
On Apr 6, 2015, at 7:04 PM, Robert LeBlanc wrote:
> I see that ceph has 'ceph osd perf' that gets the latency of the OSDs.
> Is there a similar command that would provide some performance data
> about RBDs in use? I'm concerned about our ability to determine which
> RBD(s) may be "abusing" our sto