Is there a way to access this
information (e.g. with the MDS admin socket)?
2017-05-19 17:02 GMT+02:00 Andreas Gerstmayr :
> Hi,
>
> is there a way to monitor the progress of the 'ceph daemon mds.0
> scrub_path / recursive repair' command? It returns immediately
> (without any output), but the MDS is scrubbing in the background.
Hi,
is there a way to monitor the progress of the 'ceph daemon mds.0
scrub_path / recursive repair' command? It returns immediately
(without any output), but the MDS is scrubbing in the background.
When I start the same command again, I get a JSON response with
return_code: -16.
What does this return code mean?
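For what it's worth, -16 looks like a plain errno value: 16 is EBUSY,
which would fit a scrub that is already running on that path. A quick
way to double-check the mapping (assuming it really is a standard
errno):

$ python -c 'import errno, os; print(errno.errorcode[16]); print(os.strerror(16))'
EBUSY
Device or resource busy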
2017-03-06 14:08 GMT+01:00 Nick Fisk :
>
> I can happily run 12 disks on a 4 core 3.6Ghz Xeon E3. I've never seen
> average CPU usage over 15-20%. The only time CPU hits 100% is for the ~10
> seconds when the OSD boots up. Running Jewel BTW.
>
> So, I would say that during normal usage you should h
Hi,
what is the current CPU recommendation for storage nodes with multiple
HDDs attached? In the hardware recommendations [1] it says "Therefore,
OSDs should have a reasonable amount of processing power (e.g., dual
core processors).", but I guess this is for servers with a single OSD.
How many cores per OSD should I plan for in that case?
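In case measuring beats guessing: per-OSD CPU usage on an existing node
can be sampled with pidstat from the sysstat package (a rough sketch,
assuming the daemons show up as ceph-osd processes):

$ pidstat -u -p $(pgrep -d, ceph-osd) 5 3   # %CPU per OSD, 3 samples of 5s

Averaging that over a typical workload gives a ballpark cores-per-OSD
figure for sizing new nodes.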
Hi,
Due to a faulty upgrade from Jewel 10.2.0 to Kraken 11.2.0 our test
cluster has been unhealthy for about two weeks and can't recover itself
anymore (unfortunately I skipped the upgrade to 10.2.5 because I
missed the ".z" in "All clusters must first be upgraded to Jewel
10.2.z").
Immediately after
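For the next attempt it might help to confirm which release every
daemon is actually running before and after each step; a quick sketch
(the mon ID is a placeholder):

$ ceph tell osd.* version        # release reported by each OSD
$ ceph daemon mon.<id> version   # on each monitor host, via the admin socket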
Thanks for your response!
2016-12-09 1:27 GMT+01:00 Gregory Farnum :
> On Wed, Dec 7, 2016 at 5:45 PM, Andreas Gerstmayr
> wrote:
>> Hi,
>>
>> does the CephFS kernel module (as of kernel version 4.8.8) support
>> parallel reads of file stripes?
>> When an app
, you may have an increase of performance.
>
> Cheers
>
> Goncalo
>
>
>
>
> On 12/08/2016 12:45 PM, Andreas Gerstmayr wrote:
>>
>> Hi,
>>
>> does the CephFS kernel module (as of kernel version 4.8.8) support
>> parallel reads of file stripes?
Hi,
does the CephFS kernel module (as of kernel version 4.8.8) support
parallel reads of file stripes?
When an application requests a 500MB block from a file (which is
split into multiple objects and stripes on different OSDs) at once,
does the CephFS kernel client request these blocks in parallel?
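As a side note, the actual striping of a given file can be checked via
the CephFS layout xattr; a small sketch (mount point and file name are
placeholders):

$ getfattr -n ceph.file.layout /mnt/cephfs/bigfile
# prints stripe_unit, stripe_count, object_size and the data pool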
Hello,
2 parallel jobs, with one job simulating the journal (sequential
writes, ioengine=libaio, direct=1, sync=1, iodepth=128, bs=1MB) and the
other job simulating the datastore (random writes of 1MB)?
To test against a single HDD?
Yes, something like that, the first fio job would need go again
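A minimal fio invocation along those lines could look like the sketch
below (/dev/sdX is a placeholder for a scratch disk that will be
overwritten; fio runs the two jobs concurrently):

$ fio --filename=/dev/sdX --direct=1 --runtime=60 --time_based --group_reporting \
      --name=journal --rw=write --ioengine=libaio --sync=1 --iodepth=128 --bs=1M \
      --name=datastore --rw=randwrite --ioengine=libaio --iodepth=1 --bs=1M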
2016-11-07 3:05 GMT+01:00 Christian Balzer :
>
> Hello,
>
> On Fri, 4 Nov 2016 17:10:31 +0100 Andreas Gerstmayr wrote:
>
>> Hello,
>>
>> I'd like to understand how replication works.
>> In the paper [1] several replication strategies are described, and
Hello,
I'd like to understand how replication works.
In the paper [1] several replication strategies are described, and
according to a (bit old) mailing list post [2] primary-copy is used.
Therefore the primary OSD waits until the object is persisted and then
updates all replicas in parallel.
Cur
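To see which OSD is the primary (and therefore coordinates the
replication) for a particular object, the osd map command helps; pool
and object name below are only examples:

$ ceph osd map rbd myobject
# shows the PG and the up/acting set, with the primary marked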
> whether that's
> helping you or not; I don't have a good intuitive grasp of what
> readahead will do in that case. I think you may need to adjust the
> readahead config knob in order to make it read all those objects
> together instead of one or two at a time.
> -Greg
>
And there is no readahead, so all
the 10 threads are busy all the time during the benchmark, whereas in
the CephFS scenario it depends on the client readahead setting whether
10 stripes are requested in parallel all the time?
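For reference, the readahead of the CephFS kernel mount can be
inspected and raised; a rough sketch (monitor address, paths and the
64MB rasize value are just assumptions to adapt):

$ cat /sys/class/bdi/ceph-*/read_ahead_kb   # current readahead of the kernel mount
$ mount -t ceph mon1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=67108864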
>
> On Wed, Sep 14, 2016 at 12:51 PM, Henrik Korkuc wrote:
Hello,
I'm currently performing some benchmark tests with our Ceph storage
cluster and trying to find the bottleneck in our system.
I'm writing a random 30GB file with the following command:
$ time fio --name=job1 --rw=write --blocksize=1MB --size=30GB
--randrepeat=0 --end_fsync=1
[...]
WRITE: i
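While such a run is in flight it can help to watch both the cluster and
the disks, e.g. (first command on a client/admin node, second on the
OSD nodes):

$ ceph osd pool stats   # client I/O rates per pool
$ iostat -x 5           # per-disk utilisation and latency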