On Thu, Jan 17, 2013 at 4:48 PM, Peter Blajev wrote:
> Right on Tim. Thanks. I didn't know that. I'm sure it's documented
> somewhere and I should have read it, so double thanks for explaining it.

When in doubt, always check the man page first:

  man zpool

It's listed in the section on the "iostat" subcommand.
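(As a minimal illustration of the behavior discussed above, assuming a
hypothetical pool named "tank": without an interval argument 'zpool iostat'
reports averages accumulated since boot, which is why the output looks
frozen.)

  # Averages since boot -- the numbers barely move:
  zpool iostat -v tank

  # Sample live activity: one report every 5 seconds, 10 reports total:
  zpool iostat -v tank 5 10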
Right on Tim. Thanks. I didn't know that. I'm sure it's documented
somewhere and I should have read it, so double thanks for explaining it.

--
Peter Blajev
IT Manager, TAAZ Inc.
Office: 858-597-0512 x125

On Thu, Jan 17, 2013 at 4:18 PM, Timothy Coalson wrote:
> On Thu, Jan 17, 2013 at 5:33 PM, Peter Wood wrote:
>> The 'zpool iostat -v' output is uncomfortably static. ...
On Thu, 17 Jan 2013, Bob Friesenhahn wrote:
> For NFS you should disable atime on the NFS client mounts.

This advice was wrong. It needs to be done on the server side.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
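(A minimal sketch of the server-side change Bob describes, assuming a
hypothetical dataset named "tank/gallery":)

  # On the ZFS server, not on the NFS clients:
  zfs set atime=off tank/gallery

  # Confirm the setting:
  zfs get atime tank/gallery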
Great points Jim. I have requested more information on how the gallery share
is being used, and any temporary data will be moved out of there.

About atime, it is set to "on" right now and I've considered turning it off,
but I wasn't sure if this would affect incremental zfs send/receive
('zfs send -i s...').
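(For reference, a minimal sketch of the incremental send/receive form in
question; the pool, dataset, snapshot, and host names are hypothetical:)

  # Full send of the first snapshot to another host:
  zfs send tank/gallery@snap1 | ssh backuphost zfs receive backup/gallery

  # Incremental send of only the blocks changed between snap1 and snap2:
  zfs send -i tank/gallery@snap1 tank/gallery@snap2 | \
      ssh backuphost zfs receive backup/gallery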
On Thu, Jan 17, 2013 at 5:33 PM, Peter Wood wrote:
>
> The 'zpool iostat -v' output is uncomfortably static. The values of
> read/write operations and bandwidth are the same for hours and even days.
> I'd expect at least some variations between morning and night. The load on
> the servers is different...

Without an interval argument, 'zpool iostat' reports averages accumulated
since boot, so the numbers will look static. Pass an interval (and
optionally a count) to see current activity.
On Thu, 17 Jan 2013, Peter Wood wrote:
> Unless there is some other way to test what/where these write operations
> are applied.

You can install Brendan Gregg's DTraceToolkit and use it to find out
who and what is doing all the writing. 1.2GB in an hour is quite a
lot of writing. If this is going continuously, then it may be causing
more fragmentation in conjunction with your snapshots.
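(A minimal sketch of the kind of measurement the DTraceToolkit automates;
this is a plain DTrace one-liner rather than a specific toolkit script. The
toolkit's rwtop script gives a similar top-like view:)

  # Sum bytes requested by write(2) per process; prints on Ctrl-C:
  dtrace -n 'syscall::write:entry { @bytes[execname] = sum(arg2); }'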
I have a script that rotates hourly, daily and monthly snapshots. Each
filesystem has about 40 snapshots (zfsList.png: output of 'zfs list | grep
-v home/'; the home directory datasets are snipped from the output, 4
users in total).

I noticed that the hourly snapshots on the heaviest filesystem...
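(As a hedged sketch for checking how much space the rotation is holding,
assuming a hypothetical pool named "tank"; the options shown are standard
'zfs list' flags:)

  # Snapshots sorted by the space each one holds exclusively:
  zfs list -r -t snapshot -o name,used,referenced -s used tank

  # Per-dataset total held by all of its snapshots:
  zfs list -r -o name,used,usedbysnapshots tank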
On 01/16/2013 10:25 PM, Peter Wood wrote:
>
> Today I started migrating file systems from some old Open Solaris
> servers to these Supermicro boxes and noticed the transfer to one of
> them was going 10x slower than to the other one (like 10GB/hour).

What does "dladm show-link" show? I'm guessing one of the links has
negotiated a lower speed.
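(A minimal sketch of the check being suggested; both are standard dladm
subcommands on illumos/Solaris:)

  # Link state and MTU for all datalinks:
  dladm show-link

  # Negotiated speed and duplex of the physical NICs:
  dladm show-phys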
On Wed, 16 Jan 2013, Peter Wood wrote:
> Running zpool iostat -v (attachment zpool-IOStat.png) shows 1.22K write
> operations on the drives and 661 on the ZIL. Compared to the other server
> (which is under much heavier use than this one), these numbers are
> extremely high.
>
> Any idea how to debug any further?
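(A small sketch for relating those numbers to the pool layout, assuming a
hypothetical pool named "tank":)

  # Show the vdev layout, including the separate "logs" (ZIL) device:
  zpool status tank

  # Watch writes to the log device vs. the data vdevs as they happen:
  zpool iostat -v tank 5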