Hi Goncalo,
That bug is fixed in 10.2.4. See http://tracker.ceph.com/issues/16066
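(A quick sketch for confirming which version is actually installed and running before and after updating; the upgrade command itself depends on the distro:)

  # version of the locally installed ceph binaries, on each node / client host
  ceph --version
  # ask the running OSD daemons which version they report
  ceph tell osd.* version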
-- Dan
On Tue, Dec 6, 2016 at 5:11 AM, Goncalo Borges wrote:
> Hi John, Greg, Zheng
>
> And now a much more relevant problem. Once again, my environment:
>
> - ceph/cephfs in 10.2.2 but patched for
> o client:
Hi,
I found my problem: it's the cron job. It starts the script every
minute within the given hour, not just once as I wanted it to. So I guess
this simply led to conflicts while searching for the oldest PGs or
scrubbing them. I'm not sure yet what this message exactly means, but
I corrected
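(For reference, the crontab difference in question; the script path is hypothetical:)

  # what was configured: runs every minute during hour 3, i.e. 60 times
  * 3 * * *  /usr/local/bin/scrub-oldest-pgs.sh
  # what was intended: runs once, at 03:00
  0 3 * * *  /usr/local/bin/scrub-oldest-pgs.sh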
Thanks Dan for your critical eye.
Somehow I did not notice that there was already a tracker for it.
Cheers
G.
From: Dan van der Ster [d...@vanderster.com]
Sent: 06 December 2016 19:30
To: Goncalo Borges
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] segf
Hi Sage,
Could you please clarify: do we need to set nodeep-scrub also, or does
this somehow only affect the (shallow) scrub?
(Note that deep scrubs will start when the deep_scrub_interval has
passed, even with noscrub set).
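(For reference, the two flags are separate and can be set and cleared independently; a minimal sketch of the commands:)

  # pause scheduling of new scrubs / deep scrubs cluster-wide
  ceph osd set noscrub
  ceph osd set nodeep-scrub
  # re-enable them afterwards
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub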
Cheers, Dan
On Tue, Nov 15, 2016 at 11:35 PM, Sage Weil wrote:
> Hi
Hi Nick,
thanks for the parameters. Since you were kind enough to share them, I
thought I'd share my results. I tested within a virtual machine with
the KVM rbd driver and used the following command line:
> fio --name=fio-test --randrepeat=0 --invalidate=0 --rw=write --bs=64k
> --direct=1 --time_b
On Tue, 6 Dec 2016, Dan van der Ster wrote:
> Hi Sage,
>
> Could you please clarify: do we need to set nodeep-scrub also, or does
> this somehow only affect the (shallow) scrub?
>
> (Note that deep scrubs will start when the deep_scrub_interval has
> passed, even with noscrub set).
Hmm, I though
We are using Ceph 0.80.9 and we recently recovered from a power outage which
caused some data loss. We had the replica count set to 1. Since then we have
installed another node with the idea that we would change the replica count to 3.
We tried to change one of the pools to replica 3, but it always gets stuck.
It's be
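(For reference, a minimal sketch of the commands involved; <poolname> is a placeholder, and going from size 1 to 3 will trigger a lot of backfill:)

  ceph osd pool set <poolname> size 3
  ceph osd pool set <poolname> min_size 2
  # watch recovery / backfill progress
  ceph -s
  ceph health detail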
Hi Sascha,
Have you got any write-back caching enabled? That time looks very fast, almost
too fast to me. It looks like some of the writes completed in around 70us,
which is almost the same as a single hop of 10G networking, where you would
have at least 2 hops (Client->OSD1->OSD2).
What are you
Hi Nick,
m( of course, you're right. Yes, we have rbd_cache enabled for KVM /
QEMU. That probably also explains the large difference between avg and stdev.
Thanks for the pointer.
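(For reference, a sketch of where that caching typically lives in such a setup; exact values depend on the environment, and the cache mode of the QEMU drive itself, e.g. cache=writeback, matters as well:)

  # ceph.conf on the hypervisor
  [client]
      rbd cache = true
      rbd cache writethrough until flush = true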
Unfortunately I have not yet gotten fio to work with the rbd engine. It
always fails with:
> rbd engine: RBD version: 0.1.9
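(For what it's worth, a minimal sketch of an fio job using the rbd engine; the pool and image names are made up and an existing test image is assumed:)

  fio --name=rbd-write-test --ioengine=rbd --clientname=admin \
      --pool=rbd --rbdname=fio-test \
      --rw=write --bs=64k --iodepth=16 --direct=1 \
      --time_based --runtime=60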
>
> On Dec 5, 2016, at 9:42 PM, Christian Balzer wrote:
>
>
> Hello,
>
> On Tue, 6 Dec 2016 03:37:32 +0100 Christian Theune wrote:
>
>> Hi Christian (heh),
>>
>> thanks for picking this up. :)
>>
>> This has become a rather long post as I added more details and gave
>> our history, but if w
CCing in ceph-users:
That is a pretty old version of fio, and I know a couple of rbd-related
bugs / crashes have been fixed since fio 2.2.8. Can you retry using a
more up-to-date version of fio?
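(A sketch of building a current fio from source with rbd support; it assumes the librbd/librados development packages are installed so configure can detect them:)

  git clone https://github.com/axboe/fio.git
  cd fio
  ./configure     # check that the output lists the Rados Block Device engine as enabled
  make
  ./fio --version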
On Tue, Dec 6, 2016 at 2:40 AM, wrote:
> Hello Jason,
>
> I'm from ZTE corporation, and we are using cep
Hello,
On Tue, 6 Dec 2016 11:14:59 -0600 Reed Dier wrote:
>
> > On Dec 5, 2016, at 9:42 PM, Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > On Tue, 6 Dec 2016 03:37:32 +0100 Christian Theune wrote:
> >
> >> Hi Christian (heh),
> >>
> >> thanks for picking this up. :)
> >>
> >> This ha
Hi:
I want to know the best radosgw performance achievable in practice now. What
is the best write IOPS? Say I have 10 concurrent PUTs of files of
different sizes. For small files, like <100k, I hope the response
time is at the millisecond level. We now have a ceph cluster with 45 hosts
and 540 osds. What sho
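(A rough way to measure PUT latency under concurrency from a client, as a starting point; the bucket and object names are made up, and s3cmd is assumed to already be configured against the radosgw endpoint:)

  # 10 concurrent PUTs of a ~100k object, timing each upload
  for i in $(seq 1 10); do
      ( time s3cmd put ./obj-100k s3://test-bucket/obj-$i ) &
  done
  wait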
Hello,
On Tue, 6 Dec 2016 20:58:52 +0100 Christian Theune wrote:
> Hi,
>
> > On 6 Dec 2016, at 04:42, Christian Balzer wrote:
> > Jewel issues, like the most recent one with scrub sending OSDs to
> > neverland.
>
> Alright. We’re postponing this for now. Is that actually a more widespread
>
On 10.10.2016 at 10:05, Hauke Homburg wrote:
> On 07.10.2016 at 17:37, Gregory Farnum wrote:
>> On Fri, Oct 7, 2016 at 7:15 AM, Hauke Homburg wrote:
>>> Hello,
>>>
>>> I have a Ceph cluster with 5 servers and 40 OSDs. Currently this cluster
>>> has 85GB of free space, and the rsync dir has lots
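(For reference, the usual commands for checking where the space actually is; ceph osd df needs hammer or newer:)

  ceph df         # overall and per-pool usage
  ceph osd df     # per-OSD utilization, to spot near-full OSDs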