> Your test dataset is too small and you aren't flushing the cache before 
> exiting dd, so you are largely seeing the time it takes to write to cache, 
> not to disk.
> But that gives the RAID10 system 220 IOPs, still nowhere near the 100,000 
> IOPs of a single SSD.
> I suggest that you google a bit on how to do filesystem benchmarks first, 
> then try it and report back if something is still odd.
 . . .
 Oh, well, yes. I knew that I was "seeing" something that wasn't quite right.
 Your answers grounded me on such issues.
 Thank you, and apologies!
 lbrtchx
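For anyone hitting the same confusion, a minimal sketch of how dd can be made to include the flush in its timing (the file name testfile and the sizes are arbitrary examples):

```shell
# Include the final fdatasync() in dd's timing, so the reported rate
# reflects data that actually reached the disk, not just the page cache:
dd if=/dev/zero of=testfile bs=1M count=64 conv=fdatasync

# Alternatively, bypass the page cache entirely with direct I/O
# (not supported on all filesystems, e.g. tmpfs):
dd if=/dev/zero of=testfile bs=1M count=64 oflag=direct
```

With a dataset several times larger than RAM the cached and flushed figures converge; with a small one they can differ by an order of magnitude, which is exactly the effect described above.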

On 6/17/20, Anders Andersson <pipat...@gmail.com> wrote:
> On Wed, Jun 17, 2020 at 12:15 PM Albretch Mueller <lbrt...@gmail.com>
> wrote:
>>
>>  HDDs have their internal caching mechanism and I have heard that the
>> Linux kernel uses RAM very efficiently, but to my understanding RAM
>> being only 3-4 times faster doesn't make much sense, so I may be doing
>> or understanding something not entirely right.
>
> I suggest that you google a bit on how to do filesystem benchmarks
> first, then try it and report back if something is still odd. There
> are many ways but "dd" is not the way unless you really dig through
> the sync flags and understand what they do. I normally use "fio" but
> it's not very friendly (so it suits me).
>
> However, I just recently put a fast NVMe SSD in an older server with
> (lots) of DDR3 ECC RAM. The RAM bandwidth for one node/CPU is about
> 10-12 GB/s, and the SSD bandwidth is nearing 2 GB/s for most loads.
> That's getting close to your figures!
>
>
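For a concrete starting point with fio, a sketch of a random-write IOPS run follows (the file name, size, and runtime are arbitrary assumptions; the libaio engine requires Linux with libaio installed):

```shell
# Random 4 KiB writes with direct I/O, which measures device IOPS
# rather than page-cache throughput:
fio --name=randwrite --filename=testfile --size=256M \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --runtime=30 --time_based --group_reporting
```

Sequential bandwidth runs (--rw=write --bs=1M) give numbers comparable to the dd-style tests discussed above, while the 4 KiB random-write pattern is what separates an SSD's ~100,000 IOPS from a spinning RAID10's few hundred.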
