ceph-deploy --release dumpling (or previously ceph-deploy --stable
dumpling) now results in Firefly (0.80.1) being installed. Is this
intentional?
I'm adding another host with more OSDs and guessing it is preferable
to deploy the same version.
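For reference, the invocation in question looks like this (the host name is
hypothetical):

    ceph-deploy install --release dumpling new-osd-host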
Hi,
I am running a few tests exporting volumes with rbd export and am noticing
very poor performance: it takes almost 3 hours to export a 100GB volume, and the
servers are pretty idle during the export.
The cluster itself is much faster than that. How can I increase the
speed of rbd export?
Th
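One way to get a baseline number (a sketch; the pool and image names are
hypothetical) is to stream the export through dd, which reports throughput
when it finishes:

    time rbd export rbd/myvolume - | dd of=/backup/myvolume.img bs=4M

If several volumes need exporting, running a few exports in parallel may also
help keep the otherwise idle servers busy.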
Precisely.
On 08/25/2014 05:26 PM, Somnath Roy wrote:
> Thanks Dan !
> Yes, I saw that in the ceph-disk scripts and it is using ceph-conf utility to
> parse the config option.
> But, while installing with ceph-deploy, the default config file is created by
> ceph-deploy only. So, I need to do the
Thanks Dan !
Yes, I saw that in the ceph-disk scripts and it is using ceph-conf utility to
parse the config option.
But, while installing with ceph-deploy, the default config file is created by
ceph-deploy only. So, I need to do the following while installing I guess.
Correct me if I am wrong.
> Message: 25
> Date: Fri, 15 Aug 2014 15:06:49 +0200
> From: Loic Dachary
> To: Erik Logtenberg , ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Best practice K/M-parameters EC pool
> Message-ID: <53ee05e9.1040...@dachary.org>
> Content-Type: text/plain; charset="iso-8859-1"
> ...
> Here i
The mounting is actually done by "ceph-disk", which can also run from a
udev rule. It gets options from the ceph configuration option "osd
mount options {fstype}", which you can set globally or per-daemon as
with any other ceph option.
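For example (an untested sketch, assuming XFS-formatted OSD data partitions),
something like this in ceph.conf would then be picked up by ceph-disk at mount
time; the same key can also be set per daemon, e.g. under [osd.0]:

    [osd]
    osd mount options xfs = rw,noatime,inode64,discard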
On 08/25/2014 04:11 PM, Somnath Roy wrote:
> Hi,
>
> Ceph-d
Hi,
Ceph-deploy does partition and mount the OSD/journal drive for the user. I
can't find any option for supplying mount options like discard, noatime, etc.,
which are suitable for SSDs, during ceph-deploy.
Is there a way to control it? If not, what could be the workaround?
Thanks & Regards
Somnath
Hi James,
On 26 August 2014 07:17, LaBarre, James (CTR) A6IT
wrote:
>
>
> [ceph@first_cluster ~]$ ceph -s
>
> cluster e0433b49-d64c-4c3e-8ad9-59a47d84142d
>
> health HEALTH_OK
>
> monmap e1: 1 mons at {first_cluster=10.25.164.192:6789/0}, election
> epoch 2, quorum 0 first_cluster
I have built a couple of ceph test clusters, and am attempting to mount the
storage through ceph-fuse on a RHEL 6.4 VM (the clusters are also in VMs). The
first one I built under v0.80, using directories for the ceph OSDs (as per the
Storage Cluster Quick Start at
http://ceph.com/docs/master/s
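For reference, the basic ceph-fuse invocation from the quick start looks like
this (a sketch; the mount point is hypothetical and the monitor address is the
one from the ceph -s output above):

    sudo mkdir -p /mnt/mycephfs
    sudo ceph-fuse -m 10.25.164.192:6789 /mnt/mycephfs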
Hi Greg,
Thanks for helping to take a look. Please find your requested outputs below.
ceph osd tree:
# id  weight  type name       up/down  reweight
-1    0       root default
-2    0         host osd1
0     0           osd.0       up       1
4     0
Hello
I am seeing this message every 900 seconds on the osd servers. My dmesg output
is all filled with:
[256627.683702] libceph: osd3 192.168.168.200:6821 socket closed (con state
OPEN)
[256627.687663] libceph: osd6 192.168.168.200:6841 socket closed (con state
OPEN)
Looking at the ceph-osd
After looking a little closer, now that I have a better understanding of
osd_heartbeat_grace for the monitor, all the OSD failures are coming from one node
in the cluster. Yes, your hunch was correct: that node had stale rules in
iptables. After disabling iptables the OSD "flapping" has stopped.
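For anyone else hitting this, opening the Ceph ports is usually preferable to
disabling iptables entirely (a sketch, assuming the port ranges from the docs:
6789 for monitors, 6800-7300 for OSD daemons):

    iptables -A INPUT -p tcp -m multiport --dports 6789,6800:7300 -j ACCEPT
    service iptables save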
On Mon, Aug 25, 2014 at 10:56 AM, Bruce McFarland
wrote:
> Thank you very much for the help.
>
> I'm moving osd_heartbeat_grace to the global section and trying to figure out
> what's going on between the osd's. Since increasing the osd_heartbeat_grace
> in the [mon] section of ceph.conf on the
Thank you very much for the help.
I'm moving osd_heartbeat_grace to the global section and trying to figure out
what's going on between the osd's. Since increasing the osd_heartbeat_grace in
the [mon] section of ceph.conf on the monitor I still see failures, but now
they are 2 seconds > osd_h
Each daemon only reads conf values from its section (or its
daemon-type section, or the global section). You'll need to either
duplicate the "osd heartbeat grace" value in the [mon] section or put
it in the [global] section instead. This is one of the misleading
values; sorry about that...
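A minimal ceph.conf sketch of the two alternatives described above:

    # either duplicate the value for the monitors...
    [mon]
    osd heartbeat grace = 35

    # ...or set it once for everything:
    [global]
    osd heartbeat grace = 35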
Anyway,
I just added osd_heartbeat_grace to the [mon] section of ceph.conf, restarted
ceph-mon, and now the monitor is reporting a 35 second osd_heartbeat_grace:
[root@ceph-mon01 ceph]# ceph --admin-daemon
/var/run/ceph/ceph-mon.ceph-mon01.asok config show | grep osd_heartbeat_grace
"osd_heartbeat_gra
Thanks Steve. Appreciate your help.
On Aug 25, 2014, at 9:58 AM, Stephen Jahl wrote:
> Hi Jiten,
>
> The Ceph quick-start guide here was pretty helpful to me when I was starting
> with my test cluster: http://ceph.com/docs/master/start/
>
> ceph-deploy is a very easy way to get a test cluste
What's the output of "ceph osd tree"? And the full output of "ceph -s"?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Aug 18, 2014 at 8:07 PM, Ripal Nathuji wrote:
> Hi folks,
>
> I've come across an issue which I found a "fix" for, but I'm not sure
> whether it's co
That's something that has been puzzling me. The monitor's ceph.conf is set to
35, but its runtime config reports 20. I've restarted it after the initial
creation to try to get it to reload the ceph.conf settings, but it stays at
20.
[root@ceph-mon01 ceph]# ceph --admin-daemon
/var/run/ceph/ce
On Sat, Aug 23, 2014 at 11:06 PM, Bruce McFarland
wrote:
> I see osd’s being failed for heartbeat reporting > default
> osd_heartbeat_grace of 20 but the run time config shows that the grace is
> set to 30. Is there another variable for the osd or the mon I need to set
> for the non default osd_he
See inline:
Ceph version:
>>> [root@ceph2 ceph]# ceph -v
>>> ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6)
Initial testing was with 30 OSDs (10 per storage server) with the following HW:
>>> 4TB SATA disks - 1 hdd/osd - 30hdd's/server - 6 ssd's/server - forming a md
>>> raid0 virtual
Hi Jiten,
The Ceph quick-start guide here was pretty helpful to me when I was
starting with my test cluster: http://ceph.com/docs/master/start/
ceph-deploy is a very easy way to get a test cluster up quickly, even with
minimal experience with Ceph.
If you use puppet, the puppet-ceph module has a
Hi folks,
I've come across an issue which I found a "fix" for, but I'm not sure whether
it's correct or if there is some other misconfiguration on my end and this is
merely a symptom. I'd appreciate any insights anyone could provide based on the
information below, and happy to provide more deta
Hi Jens,
There's a bug in Cinder that causes, at least, the size to be reported wrong
by Cinder. If you search a little bit you will find it. I think it's still
not solved.
On 21/08/14 at #4, Jens-Christian Fischer wrote:
I am working with Cinder Multi Backends on an Icehouse installation and ha
Hi all,
After having played for a while with Ceph and its S3 gateway, I've come to the
conclusion that the default behaviour is that a FULL_CONTROL ACL on a bucket does
not give you FULL_CONTROL on its underlying keys. This is an issue for the
usage we want to make of our ceph cluster. So fi
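If FULL_CONTROL really is needed on the keys as well, one workaround sketch
(assuming s3cmd; the bucket and user names are hypothetical) is to grant it
explicitly on the existing keys:

    s3cmd setacl --recursive --acl-grant=full_control:someuser s3://mybucket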
The rbd diff-related commands compare points in time of a single
image. Since children are identical to their parent when they're cloned,
if I created a snapshot right after it was cloned, I could export
the diff between the used child and the parent. Something like:
rbd clone parent@snap child
rb
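Spelled out a bit more (a sketch with hypothetical pool/image names; the
parent snapshot has to be protected before cloning):

    rbd snap create rbd/parent@snap
    rbd snap protect rbd/parent@snap
    rbd clone rbd/parent@snap rbd/child
    rbd snap create rbd/child@start        # snapshot taken right after cloning
    rbd export-diff --from-snap start rbd/child child-since-clone.diff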
Hi Guys,
I have been looking to try out a test Ceph cluster in my lab to see if we can
replace our traditional storage with it. I have heard a lot of good things about
Ceph but need some guidance on how to begin.
I have read some stuff on ceph.com but wanted to get first-hand info and
knowled
[Copying ceph-devel, dropping ceph-users]
Yeah, that looks like a bug. I pushed wip-filejournal that reapplies
Jianpeng's original patch and this one. I'm not certain about the last
suggested fix, though; I'm hoping that this fix explains the strange
behavior Jianpeng and Mark have seen
Off the top of my head, it is recommended to use 3 mons in production. Also,
for the 22 OSDs your number of PGs looks a bit low; you should look at that.
"The performance of the cluster is poor" - this is too vague. What is your
current performance, what benchmarks have you tried, what is your dat
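As a rough reference, the rule of thumb from the docs is (number of OSDs x 100)
/ replica count, rounded up to the next power of two. With 22 OSDs and 3
replicas that is 22 * 100 / 3 ≈ 733, i.e. around 1024 PGs. A sketch of applying
it (the pool name is hypothetical):

    ceph osd pool set rbd pg_num 1024
    ceph osd pool set rbd pgp_num 1024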
Hello,
we have deployed a Ceph cluster with 4 monitors and 22 OSDs. We are using
only RBDs. All VMs on KVM have the monitors specified in the same order.
One of the monitors (the first on the list in the VM disk specification -
ceph35) has more load than the others and the performance of the cluster is
poor. How
Do rbd export and export-diff (and likewise import and import-diff) guarantee
the consistency of the data? That is, if the image is "damaged" during the
transfer, would this be flagged on the other side? Or would it simply leave a
broken image on the destination cluster?
Cheers
- Original Me
On 25 August 2014 10:31, Wido den Hollander wrote:
> On 08/24/2014 08:27 PM, Andrei Mikhailovsky wrote:
>>
>> Hello guys,
>>
>> I am planning to do rbd images off-site backup with rbd export-diff and I
>> was wondering if ceph has checksumming functionality so that I can compare
>> source and dest
On 08/24/2014 08:27 PM, Andrei Mikhailovsky wrote:
Hello guys,
I am planning to do rbd images off-site backup with rbd export-diff and I was
wondering if ceph has checksumming functionality so that I can compare source
and destination files for consistency? If so, how do I retrieve the checksu
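As far as I know rbd itself doesn't expose a checksum, but a simple sketch is
to stream the image through a hashing tool on both clusters and compare the
results (the image name is hypothetical):

    rbd export rbd/myimage - | md5sum
    # run the same command on the destination cluster after the import and compare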
Hello guys,
Is it possible to export an rbd image while preserving the clone structure? So,
if I've got a single rbd image and 10 VM images that were cloned from the
original one, would rbd export preserve this structure on the destination
pool, or would it waste space and create 10 ind
Hello,
On Sat, 23 Aug 2014 20:23:55 + Bruce McFarland wrote:
Firstly, while the runtime changes you injected into the cluster
should have done something (and I hope some Ceph developer comments
on that), you're asking for tuning advice, which really isn't the issue here.
Your cluster should no