Thanks for the suggestions. I re-created the pool, set the record size to 8K,
re-created the file and increased the I/O size from the application. It's
nearly all writes now.
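For anyone reproducing this step, the new recordsize can be confirmed on the dataset before re-creating the file. A minimal sketch; the `tank/testfs` dataset name is a placeholder, not from this thread:

```shell
# Placeholder dataset name -- substitute your own pool/filesystem.
zfs set recordsize=8K tank/testfs
zfs get recordsize tank/testfs   # should now report 8K for this dataset
```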
This message posted from opensolaris.org
Hey, Richard -
I'm confused now.
My understanding was that any files created after the recordsize was set
would use that as the new maximum recordsize, but files already created
would continue to use the old recordsize.
Though I'm now a little hazy on what will happen when the new recordsize
meets an existing file.
What about new blocks written to an existing file?
Perhaps we could make that clearer in the manpage too...
hm.
Mattias Pantzare wrote:
> >
> > If you created them after, then no worries, but if I understand
> > correctly, if the *file* was created with 128K recordsize, then it'll
> > keep that forever...
>
>
> Files have nothing to do with it. The recordsize is a file system
> parameter. It gets a little more complicated be
Nathan Kroenert wrote:
> And something I was told only recently - It makes a difference if you
> created the file *before* you set the recordsize property.
Actually, it has always been true for RAID-0, RAID-5, RAID-6.
If your I/O strides over two sets then you end up doing more I/O,
perhaps twice as much.
And something I was told only recently - It makes a difference if you
created the file *before* you set the recordsize property.
If you created them after, then no worries, but if I understand
correctly, if the *file* was created with 128K recordsize, then it'll
keep that forever...
Assuming I
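Nathan's ordering point can be sketched as follows. The dataset and file names are hypothetical, and the behavior shown in the comments is the understanding stated in this thread (set the property first; files written earlier keep their original record size), not a verified claim:

```shell
# Hypothetical names; the property must be set *before* the file is written.
zfs set recordsize=8K tank/db
dd if=/dev/zero of=/tank/db/datafile bs=1M count=102400   # new file picks up 8K records
# A file written before the 'zfs set' would keep its original (e.g. 128K) records.
```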
Anton B. Rang wrote:
>> Create a pool [ ... ]
>> Write a 100GB file to the filesystem [ ... ]
>> Run I/O against that file, doing 100% random writes with an 8K block size.
>>
>
> Did you set the record size of the filesystem to 8K?
>
>> If not, each 8K write will first read 128K, then write 128K.
> Create a pool [ ... ]
> Write a 100GB file to the filesystem [ ... ]
> Run I/O against that file, doing 100% random writes with an 8K block size.
Did you set the record size of the filesystem to 8K?
If not, each 8K write will first read 128K, then write 128K.
Anton
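Anton's read-modify-write arithmetic can be made concrete. A minimal sketch, assuming an 8K application write landing in a 128K-record file, where each write forces a full-record read plus a full-record write:

```shell
app_io_kb=8       # application write size
record_kb=128     # filesystem record size
# Each 8K write reads one full 128K record and writes one full 128K record back.
amplification=$(( (record_kb * 2) / app_io_kb ))
echo "${amplification}x I/O amplification"   # prints "32x I/O amplification"
```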
I'm running on s10s_u4wos_12b and doing the following test.
Create a pool, striped across 4 physical disks from a storage array.
Write a 100GB file to the filesystem (dd from /dev/zero out to the file).
Run I/O against that file, doing 100% random writes with an 8K block size.
zpool iostat shows
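The setup described above could be scripted roughly as follows; the pool name and the disk device names are placeholders for whatever the storage array actually presents:

```shell
# Placeholder pool/device names -- 4-way dynamic stripe, as described above.
zpool create testpool c1t0d0 c1t1d0 c1t2d0 c1t3d0
dd if=/dev/zero of=/testpool/bigfile bs=1M count=102400   # ~100GB file
# ...then drive 100% random 8K writes against /testpool/bigfile and watch:
zpool iostat testpool 5
```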