FYI - same issue when installing Hammer 0.94.5. I also fixed it by enabling
the CR repo.
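For reference, enabling it on CentOS 7 is roughly the following (a minimal sketch, assuming the stock CentOS-CR.repo file is present and yum-utils is installed):

# enable the CentOS "continuous release" (CR) repository
sudo yum install -y yum-utils
sudo yum-config-manager --enable cr
sudo yum clean all && sudo yum makecache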
-Ben
On Tue, Dec 8, 2015 at 5:13 PM, Goncalo Borges wrote:
> Hi Cephers
>
> This is just to report an issue (and a workaround) regarding dependencies
> in Centos 7.1.1503
>
> Last week, I installed a couple
Hi,
I'm getting blocked requests (>30s) every time an OSD is set to "in" in
our clusters. Once that has happened, the backfills run smoothly.
I currently have no idea where to start debugging. Does anyone have a hint
as to what to examine first in order to narrow this issue down?
TIA
Christian
--
Dipl-Inf. C
Are you seeing "peering" PGs when the blocked requests are happening? That's
what we see regularly when starting OSDs.
I'm not sure this can be solved completely (or whether there are major
improvements in newer Ceph versions), but it can be sped up by
1) making sure you have free (and not dirt
Hi,
It also had to be fixed for the development environment (see
http://tracker.ceph.com/issues/14019).
Cheers
On 09/12/2015 09:37, Ben Hines wrote:
> FYI - same issue when installing Hammer 0.94.5. I also fixed it by enabling
> the CR repo.
>
> -Ben
>
> On Tue, Dec 8, 2015 at 5:13 PM, Goncal
Hi guys,
I am creating a 4-node/16OSD/32TB CephFS from scratch.
According to the Ceph documentation the metadata pool should have a small
number of PGs, since it holds a negligible amount of data compared to the
data pool. This makes me feel it might not be safe.
So I was wondering how to cho
Number of PGs doesn't affect the number of replicas, so don't worry about it.
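For what it's worth, the setup usually looks something like this (pool names and pg_num values are purely illustrative, not a sizing recommendation for your hardware):

# small PG count for the metadata pool, larger for the data pool
ceph osd pool create cephfs_metadata 64
ceph osd pool create cephfs_data 512
# the replica count ("size") is set per pool, independently of pg_num
ceph osd pool set cephfs_metadata size 3
ceph osd pool set cephfs_data size 3
ceph fs new cephfs cephfs_metadata cephfs_data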
Jan
> On 09 Dec 2015, at 13:03, Mykola Dvornik wrote:
>
> Hi guys,
>
> I am creating a 4-node/16OSD/32TB CephFS from scratch.
>
> According to the ceph documentation the metadata pool should have small
> amount of
Hi,
I have a working Ceph cluster with storage nodes running Ubuntu 14.04
and Ceph Hammer 0.94.5.
Now I want to switch to CentOS 7.1 (forget about the reasons for now; I
can explain, but it would be a long story and irrelevant to my question).
I've set the osd noout flag and norebalance,nor
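For anyone following along, flags like these are set and cleared roughly as follows (just an illustration; which flags are appropriate depends on your situation):

ceph osd set noout
ceph osd set norebalance
ceph osd set norecover
# ... reinstall / migrate the node ...
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset noout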
Hi Jan,
Thanks for the reply. I see your point about replicas. However, my
motivation was a bit different.
Consider some given number of objects stored in the metadata
pool.
If I understood Ceph's data placement approach correctly, the number of
objects per PG should decrease with the
On Wed, Dec 9, 2015 at 1:25 PM, Mykola Dvornik wrote:
> Hi Jan,
>
> Thanks for the reply. I see your point about replicas. However my motivation
> was a bit different.
>
> Consider some given amount of objects that are stored in the metadata pool.
> If I understood correctly ceph data placement ap
Good point. Thanks!
A triple failure is essentially what I faced about a month ago. So
now I want to make sure that the new CephFS setup I am deploying at the
moment will handle this kind of thing better.
On Wed, Dec 9, 2015 at 2:41 PM, John Spray wrote:
On Wed, Dec 9, 2015 at 1:25 PM, My
On 09.12.2015 at 11:21, Jan Schermer wrote:
> Are you seeing "peering" PGs when the blocked requests are happening? That's
> what we see regularly when starting OSDs.
Mostly "peering" and "activating".
> I'm not sure this can be solved completely (and whether there are major
> improvements in
To update this: the error looks like it comes from updatedb scanning the Ceph
disks.
When we make sure it doesn't, by putting the Ceph mount points in the exclusion
file, the problem goes away.
Thanks for the help and time.
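For anyone hitting the same thing, the change amounts to adding the OSD mount points to the PRUNEPATHS line in /etc/updatedb.conf, along these lines (the default contents and the mount point path will vary between systems):

# /etc/updatedb.conf: keep updatedb/mlocate off the OSD filesystems
PRUNEPATHS = "/tmp /var/spool /media /var/lib/ceph"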
On 30 Nov 2015, at 09:53, MATHIAS, Bryn (Bryn) wrote:
One thing I noticed in all my testing: since the speed difference between the
SSDs and the spinning rust can be quite high, and since your journal needs to flush
every X bytes (configurable), the impact of this flush can be hard, as IO to
the journal will stop until it's finished (I believe). Someth
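If memory serves, the knobs involved are the filestore sync intervals and the journal write limits; something like the following can be used to inspect and experiment with them (option names are from the Hammer-era docs, and the value shown is only an example, not a recommendation):

# inspect the current values on one OSD via its admin socket
ceph daemon osd.0 config show | grep -E 'filestore_(min|max)_sync_interval|journal_max_write'
# bump the max sync interval at runtime to see whether the stalls change
ceph tell osd.* injectargs '--filestore_max_sync_interval 10'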
Hi All,
Long story short:
I have built Ceph Hammer RPMs. Everything seems to work OK, but running "ceph
--version" does not report the version number. I don't get a version number
returned from "service ceph status", either. I'm concerned that other
components in our system may rely on ceph --ve
This is the problem:
http://tracker.ceph.com/issues/14030
As a workaround, you can pass the first diff in via stdin, e.g.:
cat snap1.diff | rbd merge-diff - snap2.diff combined.diff
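For completeness, the usual sequence to produce those diffs looks something like this (image, snapshot, and file names are just placeholders):

# export the incremental diffs first
rbd export-diff pool/image@snap1 snap1.diff
rbd export-diff --from-snap snap1 pool/image@snap2 snap2.diff
# workaround for the bug above: feed the first diff in on stdin
cat snap1.diff | rbd merge-diff - snap2.diff combined.diff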
Josh
On 12/08/2015 11:11 PM, Josh Durgin wrote:
On 12/08/2015 10:44 PM, Alex Gorbachev wrote:
Hi Josh,
On Mo
Can someone help me?
Help would be highly appreciated ;-)
Last message on OpenStack mailing list:
Dear OpenStack-users,
I just installed my first multi-node OS-setup with Ceph as my storage backend.
After configuring cinder, nova and glance as described in the Ceph-HowTo
(http://docs.ceph.co
Hi Felix,
It would be great if you could try the fix from
https://github.com/dachary/ceph/commit/7395a6a0c5776d4a92728f1abf0e8a87e5d5e4bb
. It's only changing the ceph-disk file so you could just get it from
https://github.com/dachary/ceph/raw/7395a6a0c5776d4a92728f1abf0e8a87e5d5e4bb/src/ceph-d
This has also exploded the puppet-ceph CI. Do we have a workaround? Moving
to Civetweb is in progress, but I would prefer not to disable all of
the RGW integration until that can be merged.
[1]
http://logs.openstack.org/21/255421/1/check/gate-puppet-ceph-puppet-beaker-rspec-dsvm-trusty/e75bc1b/console.h
Great, thanks Josh! Using stdin/stdout merge-diff is working. Thank you
for looking into this.
--
Alex Gorbachev
Storcium
On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin wrote:
> This is the problem:
>
> http://tracker.ceph.com/issues/14030
>
> As a workaround, you can pass the first diff in via
Hi Josh, looks like I celebrated too soon:
On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin wrote:
> This is the problem:
>
> http://tracker.ceph.com/issues/14030
>
> As a workaround, you can pass the first diff in via stdin, e.g.:
>
> cat snap1.diff | rbd merge-diff - snap2.diff combined.diff
one
Hello,
I encountered a strange issue when rebuilding monitors that reuse the same
hostnames but have different IPs.
Steps to reproduce:
- Build monitor using ceph-deploy create mon
- Remove monitor via
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/ (remove
monitor) — I didn’t rea
Hello,
On Wed, 9 Dec 2015 15:57:36 + MATHIAS, Bryn (Bryn) wrote:
> to update this, the error looks like it comes from updatedb scanning the
> ceph disks.
>
> When we make sure it doesn’t, by putting the ceph mount points in the
> exclusion file, the problem goes away.
>
Ah, I didn't even t
To get us around the immediate problem, I copied the deb I needed from a
cache to a private repo - I'm sorry that's not going to help you at all,
but if you need a copy, let me know.
The documentation upstream shows that mod_fastcgi is for older Apache versions
only, and 2.4 onwards can use mod_proxy_f
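On Apache 2.4 that boils down to something like the following (Debian/Ubuntu layout; the module set and the 127.0.0.1:9000 FastCGI endpoint are assumptions, so check the radosgw docs for your release):

# enable the proxy modules
sudo a2enmod proxy proxy_fcgi
# forward requests to a radosgw FastCGI listener
echo 'ProxyPass "/" "fcgi://127.0.0.1:9000/"' | sudo tee /etc/apache2/conf-available/rgw-proxy.conf
sudo a2enconf rgw-proxy
sudo service apache2 reload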
More oddity: retrying several times, the merge-diff sometimes works and
sometimes does not, using the same source files.
On Wed, Dec 9, 2015 at 10:15 PM, Alex Gorbachev
wrote:
> Hi Josh, looks like I celebrated too soon:
>
> On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin wrote:
>
>> This is the pr
I had this problem because CentOS and Debian have different versions
of leveldb (Debian's was newer) and the old version would not read the
new version. I just had to blow away the OSDs and let them backfill.
Going from CentOS to Debian didn't requir
I noticed this a while back and did some tracing. As soon as the PGs
are read in by the OSD (with only a very limited amount of housekeeping done), the
OSD is set to the "in" state so that peering with other OSDs can
happen and the recovery process can begin. Th
Hello,
I seem to vaguely remember a Ceph leveldb package, which might help in
this case, or maybe something from the CentOS equivalent of backports.
Christian
On Wed, 9 Dec 2015 22:18:56 -0700 Robert LeBlanc wrote:
>
> I had this problem bec
You actually have to walk through part of the make process before you
can build the tarball so that the version is added to the source
files.
I believe the steps are:
./autogen.sh
./configure
make dist-[gzip|bzip2|lzip|xz]
Then you can copy the SPE
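Roughly, the whole flow looks like this (a sketch only; the dist target, tarball name, and spec file location depend on the release you are building):

./autogen.sh
./configure
make dist-bzip2                      # bakes the version into the generated tarball
cp ceph-*.tar.bz2 ~/rpmbuild/SOURCES/
rpmbuild -ba ceph.spec               # spec file produced by configure from ceph.spec.in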
Hmm, perhaps there's a secondary bug.
Can you send the output from strace, i.e. strace.log after running:
cat snap1.diff | strace -f -o strace.log rbd merge-diff - snap2.diff
combined.diff
for a case where it fails?
Josh
On 12/09/2015 08:38 PM, Alex Gorbachev wrote:
More oddity: retrying s
Hi all,
I used an rbd command to create a 6TB image, and then created a snapshot of
this image. After that, I kept writing (modifying files and the like) so the
snapshots would be cloned one by one.
At this point, I did the following 2 ops simultaneously:
1. Keep client IO to this image.
2. exc