On 03/28/17 17:28, Brian Andrus wrote:
> Just adding some anecdotal input. It likely won't ultimately be
> helpful, other than as a +1.
>
> Seemingly, we also have the same issue since enabling exclusive-lock
> on images. We experienced these messages at a large scale when making
> a CRUSH map change
Hi,
I'm pleased to announce that my efforts to port to FreeBSD have resulted
in a ceph-devel port commit in the ports tree.
https://www.freshports.org/net/ceph-devel/
I'd like to thank everybody that helped me by answering my questions,
fixing my mistakes, and undoing my Git mess. Especially Sage, K
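For anyone who wants to try it, the usual ports workflow should work (a sketch; it
assumes an up-to-date ports tree, and pkg only once a binary package has been built
for your release):

# build and install from the ports tree
cd /usr/ports/net/ceph-devel && make install clean
# or, if/when a binary package is available
pkg install net/ceph-devel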
We've had a couple of puzzling experiences recently with unfound
objects, and I wonder if anyone can shed some light.
This happened with Hammer 0.94.7 on a cluster with 1,309 OSDs. Our use
case is exclusively RBD in this cluster, so it's naturally replicated.
The rbd pool size is 3, min_size is 2.
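(For reference, the kind of commands involved in chasing unfound objects on Hammer;
a sketch, with a placeholder PG ID.)

# show which PGs report unfound objects
ceph health detail
# inspect one of the affected PGs and list the objects it cannot find
ceph pg 2.5 query
ceph pg 2.5 list_missing
# last resort only, once recovery options are exhausted:
# ceph pg 2.5 mark_unfound_lost revert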
I have a cluster that has been leaking objects in radosgw and I've
upgraded it to 10.2.6.
After that I ran
radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans
which found a bunch of objects, and then ran
radosgw-admin orphans finish --pool=default.rgw.buckets.data --job-id=orph
Hi Steve,
If you can recreate the issue, or if you can remember the object names, it might be
worth running "ceph osd map" on the objects to see where it thinks they map to.
And/or maybe a pg query might show something?
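Something along these lines (pool, object and PG names below are placeholders):

ceph osd map rbd rbd_data.1234567890ab.0000000000000000
ceph pg 2.5 query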
Nick
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behal
Good suggestion, Nick. I actually did that at the time. The "ceph osd map" output
wasn't all that interesting because the OSDs had been outed and their PGs had
been mapped to new OSDs. Everything appeared to be in order with the PGs being
mapped to the right number of new OSDs. The PG mappings looked f
On Thu, Mar 30, 2017 at 7:56 PM, Willem Jan Withagen wrote:
> Hi,
>
> I'm pleased to announce that my efforts to port to FreeBSD have resulted
> in a ceph-devel port commit in the ports tree.
>
> https://www.freshports.org/net/ceph-devel/
>
> I'd like to thank everybody that helped me by answering
Thanks for the reply, Wido! How do you handle routes and routing with IPv6 on the
public and cluster networks? You mentioned that your cluster network is routed, so
the hosts will need routes to reach the other racks, but you can't have more than
one default gateway. Are you running a routing
protocol to
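For example, I could imagine either static per-rack routes on every host, something
like the sketch below (addresses are purely illustrative), or running OSPF/BGP on
the hosts themselves:

# static IPv6 routes towards the other racks' cluster subnets,
# with the rack's top-of-rack switch as next hop
ip -6 route add 2001:db8:aa:2::/64 via 2001:db8:aa:1::1 dev eth1
ip -6 route add 2001:db8:aa:3::/64 via 2001:db8:aa:1::1 dev eth1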
When mixing hard drives of different sizes, what are the advantages
and disadvantages of one big pool vs multiple pools with matching
drives within each pool?
-= Long Story =-
Using a mix of new and existing hardware, I'm going to end up with
10x8T HDD and 42x600G@15krpm HDD. I can distribute driv
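If I go the multi-pool route, I assume it would look roughly like the sketch below:
a separate CRUSH root per drive type, a rule per root, and a pool per rule (all
names and PG counts are made up):

# separate CRUSH roots per drive type; host buckets then get moved under the matching root
ceph osd crush add-bucket hdd-8t root
ceph osd crush add-bucket hdd-600g root
# one placement rule per root
ceph osd crush rule create-simple big-slow hdd-8t host
ceph osd crush rule create-simple small-fast hdd-600g host
# one pool per rule
ceph osd pool create rbd-capacity 1024 1024 replicated big-slow
ceph osd pool create rbd-fast 512 512 replicated small-fast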
One other thing to note with this experience is that we do a LOT of RBD snap
trimming, like hundreds of millions of objects per day added to our snap_trimqs
globally. All of the unfound objects in these cases were found on other OSDs in
the cluster with identical contents, but associated with di
Hello Brad,
Many thanks for the info :)
ENV: Kraken - bluestore - EC 4+1 - 5 node cluster : RHEL7
What is the status of the down+out osd? Only one osd, osd.6, is down and out of
the cluster.
What role did/does it play? Most importantly, is it osd.6? Yes, due to an
underlying I/O error issue we removed th
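To double-check how the cluster sees it now, something like this (a sketch):

ceph osd tree | grep 'osd\.6'
ceph health detail
ceph pg dump_stuck unclean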
That’s interesting; the only time I have experienced unfound objects has also
been related to snapshots, and highly likely snap trimming. I had a number of
OSDs start flapping under the load of snap trimming, and two of them on the same
host died with an assert.
From memory, the unfound objects wer
Hi John, any idea what's wrong? Any info is appreciated.
--
Deepak
-Original Message-
From: Deepak Naidu
Sent: Thursday, March 23, 2017 2:20 PM
To: John Spray
Cc: ceph-users
Subject: RE: [ceph-users] How to mount different ceph FS using ceph-fuse or
kernel cephfs mount
Fixing typo
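For context, this is the kind of mount I'm trying to get working (a sketch; fs name,
monitor address and paths are placeholders, and I believe these are the
Jewel/Kraken-era option names):

# ceph-fuse: select a filesystem by name
ceph-fuse -m mon1:6789 --client_mds_namespace=fs2 /mnt/fs2
# kernel client (4.8+): equivalent mount option
mount -t ceph mon1:6789:/ /mnt/fs2 -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=fs2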
Hi Alexandre,
But can we use aio=native with a librbd volume, or will it simply be ignored by
QEMU? (My understanding is that for networked volumes, like Ceph, aio=native
doesn't make a difference and it can only be used with raw local disks.)
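To make the question concrete, these are the two -drive variants I have in mind
(image and pool names are placeholders):

# raw local image: aio=native applies here, and needs cache=none or cache=directsync
-drive file=/images/vm.raw,format=raw,if=virtio,cache=none,aio=native
# librbd volume: as far as I understand, QEMU uses librbd's own async I/O,
# so an aio= setting here would not change anything
-drive file=rbd:rbd/vm-disk,format=raw,if=virtio,cache=writeback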
Thanks!
Xavi
-----Original Message-----
From:
After some tests I just wanted to post my findings about this. It looks like, for
some reason, POSIX AIO reads (at least using FIO) are not really asynchronous, as
the results I'm getting are quite similar to using the sync engine instead of the
POSIX AIO engine.
The biggest improvement for this has been u
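For reference, the comparison boils down to something like this (a sketch; the
device path and job parameters are illustrative, not the exact job I ran):

# POSIX AIO engine: behaves close to synchronous in these tests
fio --name=posixaio-test --ioengine=posixaio --rw=randread --bs=4k \
    --iodepth=32 --direct=1 --time_based --runtime=60 --filename=/dev/vdb
# Linux native AIO, for comparison
fio --name=libaio-test --ioengine=libaio --rw=randread --bs=4k \
    --iodepth=32 --direct=1 --time_based --runtime=60 --filename=/dev/vdb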
Hi Michal,
Yeah, it looks like there is something wrong with FIO and POSIX AIO, as the reads
don’t seem to be really asynchronous. I don’t know why this is happening (it might
be related to the parameters I’m using), but it is really bothering me.
What is bothering me even more is that in Amazon EBS v
My opinion: go for the two-pool option, and try to use SSDs for journals. In our
tests HDDs and VMs don't really work well together (too many small IOs), but
obviously it depends on what the VMs are running.
Another option would be to have an SSD cache tier in front of the HDD pool. That
would really
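Roughly, the cache-tier variant would look like the following (pool names, PG
counts and the size target are placeholders):

# base pool on HDDs, cache pool on SSDs (assumes CRUSH rules per device type already exist)
ceph osd pool create rbd-hdd 1024 1024
ceph osd pool create rbd-cache 128 128
# attach the SSD pool as a writeback cache tier in front of the HDD pool
ceph osd tier add rbd-hdd rbd-cache
ceph osd tier cache-mode rbd-cache writeback
ceph osd tier set-overlay rbd-hdd rbd-cache
# the cache tier needs hit-set tracking and a size target to flush/evict against
ceph osd pool set rbd-cache hit_set_type bloom
ceph osd pool set rbd-cache target_max_bytes 1099511627776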
Hi Guys,
I encountered an issue installing the ceph package for giant; was there a change
somewhere, or was I using the wrong repo information?
ceph.repo
-
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-giant/rhel7/$basearch
enabled=1
priority=1
gpgcheck=1
Try setting
obsoletes=0
in /etc/yum.conf and see if that doesn't make it happier. The package is clearly
there, and it even shows as available in your log.
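You can also test it for a single run without editing /etc/yum.conf:

yum --setopt=obsoletes=0 install ceph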
-Erik
On Thu, Mar 30, 2017 at 8:55 PM, Vlad Blando wrote:
> Hi Guys,
>
> I encountered some issue with installing ceph package for giant,
Hey Yehuda,
Are there plans to port this fix to Kraken? (Or is there even another Kraken
release planned? :)
thanks!
-Ben
On Wed, Mar 1, 2017 at 11:33 AM, Yehuda Sadeh-Weinraub
wrote:
> This sounds like this bug:
> http://tracker.ceph.com/issues/17076
>
> Will be fixed in 10.2.6. It's tri