hi James
here is the output you requested
abdul...@hp_hdx_16:~/Downloads# zpool status -v
pool: hdd
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        hdd         ONLINE       0     0     0
          c7t0d0p3  ONLINE       0     0     0
        cache
On Sat, Mar 6, 2010 at 12:50 PM, Erik Trimble wrote:
> This is true. SSDs and HDs differ little in their ability to handle raw
> throughput. However, we often still see problems in ZFS associated with
> periodic system "pauses" where ZFS effectively monopolizes the HDs to write
out its curren
2010/3/4 Michael Shadle :
> Typically rackmounts are not designed for quiet. He said quietness is
> #2 in his priorities...
I have a Supermicro 743 case, also 4U. The one I used is the "Super
Quiet" variant, which uses fewer & slower PWM fans. It's got 8 hot
swap bays and an additional 3x 5.25" ba
On Sat, Mar 6, 2010 at 3:15 PM, Abdullah Al-Dahlawi wrote:
> abdul...@hp_hdx_16:~/Downloads# zpool iostat -v hdd
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> hdd         1.96G
Hello,
On Mar 5, 2010, at 10:46 AM, Abdullah Al-Dahlawi wrote:
> Greeting All
>
> I have created a pool that consists of a hard disk and an SSD as a cache
>
> zpool create hdd c11t0d0p3
> zpool add hdd cache c8t0d0p0 - cache device
>
> I ran an OLTP benchmark to emulate a DBMS
>
> One I ra
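A minimal sketch of watching the cache device fill while a benchmark like that runs, assuming the pool name "hdd" from the commands above (the 10-second interval is arbitrary):

  # report per-vdev capacity and bandwidth every 10 seconds during the run
  zpool iostat -v hdd 10

The "cache" rows show how much of the SSD has been allocated and how quickly the L2ARC is being written.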
This is purely tactical, to avoid l2arc write penalty on eviction. You seem
to have missed the very next paragraph:
3644 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
3645 *    It does this by periodically scanning buffers from the eviction-end of
3646 *    the MFU a
Hello,
On Mar 6, 2010, at 6:02 PM, Andrey Kuzmin wrote:
> This is purely tactical, to avoid l2arc write penalty on eviction. You seem
> to have missed the very next paragraph:
>
> 3644 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
> 3645 *    It does this by p
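The feed behaviour described in that comment block is controlled by a handful of tunables in the same arc.c (l2arc_write_max, l2arc_write_boost, l2arc_headroom, l2arc_noprefetch). A sketch of how they are typically adjusted via /etc/system; the values are purely illustrative, not recommendations:

  * /etc/system -- illustrative values only
  * allow up to 16MB of L2ARC feed writes per interval
  set zfs:l2arc_write_max = 0x1000000
  * also feed prefetched (streaming) buffers to the L2ARC
  set zfs:l2arc_noprefetch = 0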
we recently started to look at a ZFS based solution as a possible
replacement for our DCE/DFS based campus filesystem (yes, this is
still in production here). The ACL model of the combination
OpenSolaris+ZFS+in-kernel-CIFS+NFSv4 looks like a really
promising setup, something which could place it h
Hi
Okay, it's not what I feared. It is probably caching every bit of data and
metadata you have written so far; why shouldn't it, since you have the space in
the L2 cache, and it can't offer to return it if it's not in the cache. After
the cache is full or near full it will choose more carefully what to kee
On 6-3-2010 18:41, Ralf Utermann wrote:
So from this site: we very much support the idea of adding ignore
and deny values for the aclmode property!
However, reading PSARC/2010/029, it looks like we will get
aclmode=discard for everybody and the property removed.
I hope this is not the end of the
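For reference, a minimal look at the properties being discussed, with a hypothetical dataset name:

  # show the current ACL behaviour of a dataset
  zfs get aclmode,aclinherit tank/export
  # aclmode today accepts discard | groupmask | passthrough
  zfs set aclmode=passthrough tank/export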
Recently I've been benchmarking all kinds of stuff on my systems, and one
question I can't intelligently answer is what blocksize I should use in
these tests.
I assume there is something which monitors present disk activity, that I
could run on my production servers, to give me some statistics of t
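One common way to gather exactly that on Solaris is a DTrace one-liner against the io provider; a sketch (run it on the production box for a representative period, then Ctrl-C to print the histograms):

  # distribution of physical I/O sizes, broken down per device
  dtrace -n 'io:::start { @[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'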
On Mar 5, 2010, at 5:10 PM, James Dickens wrote:
> On Fri, Mar 5, 2010 at 4:48 PM, Tonmaus wrote:
> Hi,
>
> so, what would be a critical test size in your opinion? Are there any other
> side conditions?
>
>
> when your dedup hash table (a table that holds a checksum of every block
> seen on
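A rough way to size that table on an existing pool is zdb; a sketch with a hypothetical pool name, and only order-of-magnitude arithmetic:

  # simulate dedup on a pool that does not have it enabled and print the would-be DDT histogram
  zdb -S tank
  # print DDT statistics for a pool that already has dedup enabled
  zdb -DD tank

Each unique block costs on the order of a few hundred bytes of table entry, so the number of entries times that figure gives a feel for how much RAM (or L2ARC) the table will want.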
On Sat, Mar 6 at 3:15, Abdullah Al-Dahlawi wrote:
hdd         ONLINE       0     0     0
  c7t0d0p3  ONLINE       0     0     0
rpool       ONLINE       0     0     0
  c7t0d0s0  ONLINE       0     0     0
I trimmed your zpool status output a bit.
Are those two the
On Mar 6, 2010, at 1:38 AM, Zhu Han wrote:
> On Sat, Mar 6, 2010 at 12:50 PM, Erik Trimble wrote:
> This is true. SSDs and HDs differ little in their ability to handle raw
> throughput. However, we often still see problems in ZFS associated with
> periodic system "pauses" where ZFS effectively
On Mar 6, 2010, at 1:02 PM, Edward Ned Harvey wrote:
> Recently, I’m benchmarking all kinds of stuff on my systems. And one
> question I can’t intelligently answer is what blocksize I should use in these
> tests.
>
> I assume there is something which monitors present disk activity, that I
> c
On Mar 6, 2010, at 2:42 PM, Eric D. Mudama wrote:
> On Sat, Mar 6 at 3:15, Abdullah Al-Dahlawi wrote:
>>
>> hdd         ONLINE       0     0     0
>>   c7t0d0p3  ONLINE       0     0     0
>>
>> rpool       ONLINE       0     0     0
>>   c7t0d0s0  ONLINE       0     0     0
Hi guys,
On my home server (2009.06) I have 2 HDDs in a mirrored rpool.
I just added a 3rd to the mirror and made all disks bootable (i.e. installgrub
on the mirror disks).
My thought is this: I remove the 3rd mirror disk and offsite it as a backup.
That way if I mess up the rpool, I can get
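A sketch of the offline/online cycle that usually goes with this (device name c1t2d0s0 is hypothetical; an offlined leg stays part of the pool and resilvers when it returns, whereas a detached one loses its pool configuration):

  # take the third mirror leg out of service before pulling the disk
  zpool offline rpool c1t2d0s0
  # ...store the disk offsite; when it comes back, reattach and resilver...
  zpool online rpool c1t2d0s0
  zpool status rpool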
I'm about to try it! My LSI SAS 9211-8i should arrive Monday or Tuesday. I
bought the cable-less version, opting instead to save a few $ and buy Adaptec
2247000-R SAS to SATA cables.
My rig will be based off of fairly new kit, so it should be interesting to see
how 2009.06 deals with it all :
On Mar 6, 2010, at 5:38 PM, tomwaters wrote:
> Hi guys,
> On my home server (2009.06) I have 2 HDDs in a mirrored rpool.
>
> I just added a 3rd to the mirror and made all disks bootable (ie. installgrub
> on the mirror disks).
>
> My thought is this: I remove the 3rd mirror disk and offsite
> From everything I've seen, an SSD wins simply because it's 20-100x the
> size. HBAs almost never have more than 512MB of cache, and even fancy
> SAN boxes generally have 1-2GB max. So, HBAs are subject to being
> overwhelmed with heavy I/O. The SSD ZIL has a much better chance of
> being able to
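For completeness, attaching such an SSD as a dedicated log device is a one-liner; a sketch with hypothetical pool and device names:

  # add a single SSD as a separate intent log
  zpool add tank log c3t0d0
  # or, preferably, a mirrored pair of log devices
  zpool add tank log mirror c3t0d0 c4t0d0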
> You are running into this bug:
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6929751
> Currently, building a pool from files is not fully supported.
I think Cindy and I interpreted the question differently. If you want the
zpool inside a file to stay mounted while the system is r
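The usual recipe for a file-backed test pool (the situation the bug above covers) is roughly the following sketch; paths and sizes are arbitrary:

  # create a backing file and build a throwaway pool on it (testing only)
  mkfile 1g /export/zpool-backing-file
  zpool create testpool /export/zpool-backing-file
  zpool status testpool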
On Sat, Mar 6 at 15:04, Richard Elling wrote:
On Mar 6, 2010, at 2:42 PM, Eric D. Mudama wrote:
On Sat, Mar 6 at 3:15, Abdullah Al-Dahlawi wrote:
hdd         ONLINE       0     0     0
  c7t0d0p3  ONLINE       0     0     0
rpool       ONLINE       0     0     0
  c7t0d
On Mar 6, 2010, at 8:05 PM, Eric D. Mudama wrote:
> On Sat, Mar 6 at 15:04, Richard Elling wrote:
>> On Mar 6, 2010, at 2:42 PM, Eric D. Mudama wrote:
>>> On Sat, Mar 6 at 3:15, Abdullah Al-Dahlawi wrote:
hdd         ONLINE       0     0     0
  c7t0d0p3  ONLINE       0
On Sat, 6 Mar 2010, Ralf Utermann wrote:
> we recently started to look at a ZFS based solution as a possible
> replacement for our DCE/DFS based campus filesystem (yes, this is still
> in production here).
Hey, a fellow DFS shop :)... We finally migrated the last production files
off of DFS last
Hi ALL
I might be a little bit confused!
I will try to ask my question in a simple way ...
Why would a 16GB L2ARC device get filled by running a benchmark that uses a
2GB working set while having a 2GB ARC max?
I know I am missing something here!
Thanks
On Sun, Mar 7, 2010 at 12:05
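One quick way to compare the ARC ceiling with what the L2ARC has actually accumulated is the arcstats kstat; a sketch:

  # ARC ceiling and current ARC size
  kstat -p zfs:0:arcstats:c_max zfs:0:arcstats:size
  # how much the L2ARC holds, and how much RAM its headers consume
  kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_hdr_size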
I have a zpool of five 1.5TB disks in raidz1. They are on c?t?d?p0 devices -
using the full disk, not any slice or partition, because the pool was
created in zfs-fuse in Linux and no partition tables were ever created. (for
the full saga of my move from that to OpenSolaris, anyone who missed out on
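A quick way to confirm what ZFS sees on those whole-disk p0 nodes is to dump the vdev labels; a sketch with a hypothetical device name:

  # print the ZFS vdev labels straight from the p0 device node
  zdb -l /dev/dsk/c8t0d0p0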