> I tested fread on Fedora 28. fread does 8k reads even when the block size is 4M.
So maybe I should be looking at changing my GNU Libc instead of my Ceph.
But I can't confirm that reading 8K regardless of blocksize is normal anywhere.
My test on Debian 9 (about 3 years old) with glibc 2.24 shows fread
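One way I know to confirm this locally is to compare the st_blksize hint a
file reports with the read() sizes the reader actually issues. A rough shell
sketch; the mount point and ./fread_test (a small program that just fread()s
the file in a loop) are placeholders, not anything from this thread:

  # st_blksize for a file on the CephFS mount; glibc stdio normally
  # sizes its fread buffer from this fstat() value.
  stat -c 'st_blksize = %o bytes' /mnt/cephfs/testfile

  # Filesystem-level block sizes, i.e. what statvfs()/statfs() report.
  stat -f /mnt/cephfs

  # Watch the read() sizes the buffered reader actually issues.
  strace -e trace=read ./fread_test /mnt/cephfs/testfile 2>&1 | head -n 20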
> Going back through the logs though it looks like the main reason we do a
> 4MiB block size is so that we have a chance of reporting actual cluster
> sizes to 32-bit systems,
I believe you're talking about a different block size (there are so many of
them).
The 'statvfs' system call (the essence
CRUSH map tunables support within the kernel is documented here [1]
and RBD feature support within the kernel is documented here [2].
[1]
http://docs.ceph.com/docs/master/rados/operations/crush-map/?highlight=tunables#tunables
[2] http://docs.ceph.com/docs/master/rbd/rbd-config-ref/#rbd-features
Hi there,
We are replicating an RBD image from the primary to the DR site using RBD mirroring.
We were using 10.2.10.
We decided to upgrade the DR site to Luminous; the upgrade went fine and
the mirroring status was also good.
We then promoted the DR copy to test a failover. Everything checked out
fine.
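For reference, the promote/demote steps involved look roughly like this
(a sketch only; the pool and image names are placeholders, not our setup):

  # Check mirroring status for the pool on the DR cluster
  rbd mirror pool status <pool> --verbose

  # Promote the DR copy so it becomes writable
  # (--force if the primary site is unreachable)
  rbd mirror image promote <pool>/<image>

  # Demote it again before resyncing back to the primary
  rbd mirror image demote <pool>/<image>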
The
Have you tried adjusting osd_scrub_sleep? This together with
increasing the time for deep scrubs is usually the solution for
performance problems during scrubbing.
(Only increase the time between scrubs if using BlueStore.)
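For example (the values below are only illustrative starting points; persist
whatever you settle on under [osd] in ceph.conf):

  # Throttle scrubbing by sleeping between scrub chunks (seconds)
  ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'

  # Stretch the deep-scrub interval, here to two weeks (seconds)
  ceph tell osd.* injectargs '--osd_deep_scrub_interval 1209600'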
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at
It's unfortunately not as easy as mapping the kernel version to a
specific Ceph release.
The kernel implementation is completely separate from the normal user
space implementation, so you have to check for individual features and
check when they have been implemented.
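A practical starting point (assuming the cluster side is Luminous or newer
for the "ceph features" part):

  # On the client: kernel version and module info (this does not map
  # directly to a Ceph release)
  uname -r
  modinfo ceph | head

  # On the cluster: feature bits and the release each connected
  # client advertises
  ceph features

  # Optionally refuse clients older than Luminous
  ceph osd set-require-min-compat-client luminous

Keep in mind that the release shown there for kernel clients is derived
from feature bits, which is exactly why it doesn't map cleanly onto one
specific Ceph release.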
Paul
--
Paul Emmerich
Hello.
I am deleting files via S3CMD and for the most part have no issue. Every
once in a while though, I get a positive response that a file has been
deleted but when I check back the next day, the file is still there.
I was wondering if there is a way to delete a file from within CEPH? I
don'
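The sort of thing I mean, sketched with placeholder bucket/object names
(I haven't confirmed these are the right tools for this case):

  # List what RGW itself thinks is in the bucket
  radosgw-admin bucket list --bucket=mybucket

  # Remove a single object directly via RGW
  radosgw-admin object rm --bucket=mybucket --object=path/to/file

  # Deleted S3 objects are reclaimed by garbage collection, so check
  # whether the object is simply waiting in the GC queue
  radosgw-admin gc list --include-all
  radosgw-admin gc process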
Thanks Dan!
This seems to be added by the following commit:
https://github.com/ceph/ceph/commit/87be7c70a17492c9e5f06e01722690acec7a2c51
It contains a brief instruction on how to use it.
PS. I agree with the other comments that the cluster needs to be able to
handle deep-scrub during normal ope
On 14/12/2018 13:42, Alexandru Cucu wrote:
Hi,
Unfortunately there is no way of doing this from the Ceph
configuration but you could create some cron jobs to add and remove
the nodeep-scrub flag.
The only problem would be that your cluster status will show
HEALTH_WARN but I think you could set/u
Luminous has:
osd_scrub_begin_week_day
osd_scrub_end_week_day
Maybe these aren't documented. I usually check here for available options:
https://github.com/ceph/ceph/blob/luminous/src/common/options.cc#L2533
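A rough sketch of combining them with the begin/end hour options for a
weekday-only window; the day numbering (1 = Monday) is my assumption, so
verify it, and whether the end values are inclusive, in options.cc:

  # Scrub only Monday-Friday, 07:00-15:00 (assumed day numbering)
  ceph tell osd.* injectargs '--osd_scrub_begin_week_day 1 --osd_scrub_end_week_day 5'
  ceph tell osd.* injectargs '--osd_scrub_begin_hour 7 --osd_scrub_end_hour 15'

Put the same values under [osd] in ceph.conf so they survive restarts.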
-- Dan
On Fri, Dec 14, 2018 at 12:25 PM Caspar Smit wrote:
>
> Hi all,
>
> We have op
Hi,
Unfortunately there is no way of doing this from the Ceph
configuration but you could create some cron jobs to add and remove
the nodeep-scrub flag.
The only problem would be that your cluster status will show
HEALTH_WARN but I think you could set/unset the flags per pool to
avoid this.
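Something along these lines in a crontab on a node with an admin keyring
would do it (times taken from the schedule mentioned in this thread; the
per-pool variant uses a placeholder pool name):

  # Allow deep scrubs 07:00-15:00 Mon-Fri, block them the rest of the time.
  # m  h   dom mon dow  command
  0    7   *   *   1-5  ceph osd unset nodeep-scrub
  0    15  *   *   1-5  ceph osd set nodeep-scrub

  # Per-pool variant, which avoids the cluster-wide HEALTH_WARN:
  0    7   *   *   1-5  ceph osd pool set <poolname> nodeep-scrub 0
  0    15  *   *   1-5  ceph osd pool set <poolname> nodeep-scrub 1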
On Fr
On Fri, Dec 14, 2018 at 12:25, Caspar Smit wrote:
> We have operating hours from 4 pm until 7 am each weekday and 24-hour days
> on the weekend.
> I was wondering if it's possible to allow deep-scrubbing from 7 am until
> 3 pm only on weekdays and prevent any deep-scrubbing on the weekend.
> I'v
On 12/14/18 1:44 AM, Christian Balzer wrote:
> On Thu, 13 Dec 2018 19:44:30 +0100 Ronny Aasen wrote:
>
>> On 13.12.2018 18:19, Alex Gorbachev wrote:
>>> On Thu, Dec 13, 2018 at 10:48 AM Dietmar Rieder
>>> wrote:
Hi Cephers,
one of our OSD nodes is experiencing a Disk controller p
Hi all,
We have operating hours from 4 pm until 7 am each weekday and 24-hour days
on the weekend.
I was wondering if it's possible to allow deep-scrubbing from 7 am until
3 pm only on weekdays and prevent any deep-scrubbing on the weekend.
I've seen the osd scrub begin/end hour settings but th
Hello,
maybe a dumb question, but is there a way to correlate the ceph kernel
module version with a specific Ceph version? For example, can I figure
this out using "modinfo ceph"?
What's the best way to check if a specific client is running at least
Luminous?
Best,
Martin
On Fri, Dec 14, 2018 at 7:50 AM Bryan Henderson wrote:
>
> I've searched the ceph-users archives and found no discussion to speak of
> regarding Cephfs block sizes, and I wonder how much people have thought about it.
>
> The POSIX 'stat' system call reports for each file a block size, which is
> usually