On 8/26/06, Mike Gerdts <[EMAIL PROTECTED]> wrote:
FWIW, I saw the same backtrace on build 46 while doing some weird stuff
documented at http://mgerdts.blogspot.com/. At the time I was booted
from cdrom media, importing a pool that I had previously exported.
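For reference, the clean sequence looks like this (a minimal sketch; "tank" is a stand-in pool name, not the pool from the report):

# on the original host, release the pool before moving disks or rebooting
zpool export tank

# from the new environment (e.g. booted off cdrom media), import it again;
# -f forces the import if the export step was skipped or the pool still
# looks active on another host
zpool import -f tank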
I got thinking... how can I outdo the "ME
On 8/26/06, James Dickens <[EMAIL PROTECTED]> wrote:
bash-3.00# pstack core.zpool.import
core 'core.zpool.import' of 644:  zpool import -f pool
ff348adc zfs_prop_get (0, d, ffbfaef0, 400, 0, 0) + 14
ff34ed88 dataset_compare (c3f90, c3f9c, ffbfa2f0, 472a4, 14, ff396000) + 30
ff12f798 qsort
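To dig deeper than pstack allows, the same core can be loaded into mdb (a sketch using standard Solaris mdb commands):

# open the core file in the modular debugger
mdb core.zpool.import
> ::status   # what signal/fault terminated the process
> $C         # C stack backtrace with frame pointers and arguments
> $q         # quit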
Hi
This weekend I upgraded my blade 1500 to b46 from b39. When importing
my pools, one raidz and the other a single slice, with -f (because I
forgot to export the drives as usual), zpool dropped a core file
somewhere after the pools were imported and before the filesystems
were mounted. I ran zfs
> For various reasons, I can't post the zfs list type
Here is one, and it seems in line with expected NetApp(tm)-type
usage, considering the "cluster" size differences.
14 % cat snap_sched
#!/bin/sh
snaps=15
for fs in Videos Movies Music users local
do
i=$snaps
zfs destroy zfs/$fs@...
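The script is cut off there, and the list's address masking ate the snapshot name (hence the dangling zfs/$fs@...). A sketch of the rotation that such a countdown usually drives; the snap$i naming scheme is my assumption, not the poster's:

#!/bin/sh
# keep $snaps rolling snapshots per filesystem in pool "zfs"
snaps=15
for fs in Videos Movies Music users local
do
        # drop the oldest snapshot, if present
        zfs destroy zfs/$fs@snap$snaps 2>/dev/null
        # shift snap1..snap14 up to snap2..snap15
        i=$snaps
        while [ $i -gt 1 ]; do
                prev=`expr $i - 1`
                zfs rename zfs/$fs@snap$prev zfs/$fs@snap$i 2>/dev/null
                i=$prev
        done
        # take the fresh snapshot
        zfs snapshot zfs/$fs@snap1
done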
Good start, I'm now motivated to run the same test on my server. My h/w config
for the test will be:
- E2900 (24 way x 96GB)
- 2 x 2Gbps QLogic cards
- 40 x 64GB EMC LUNs
I'll run the AOL deidentified clickstream database. It'll primarily be a write
test. I intend to use the following scenarios:
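The scenario list is truncated in the archive. On the storage side, one plausible way to hand 40 LUNs like those to ZFS is sketched below; the pool name, device names, and the 8 x 5-LUN raidz grouping are assumptions, not the poster's actual plan:

# eight 5-wide raidz groups, dynamically striped into one pool
# (hypothetical cXtYdZ names; the EMC LUNs would enumerate similarly)
zpool create clickstream \
    raidz c4t0d0  c4t1d0  c4t2d0  c4t3d0  c4t4d0 \
    raidz c4t5d0  c4t6d0  c4t7d0  c4t8d0  c4t9d0 \
    raidz c4t10d0 c4t11d0 c4t12d0 c4t13d0 c4t14d0 \
    raidz c4t15d0 c4t16d0 c4t17d0 c4t18d0 c4t19d0 \
    raidz c4t20d0 c4t21d0 c4t22d0 c4t23d0 c4t24d0 \
    raidz c4t25d0 c4t26d0 c4t27d0 c4t28d0 c4t29d0 \
    raidz c4t30d0 c4t31d0 c4t32d0 c4t33d0 c4t34d0 \
    raidz c4t35d0 c4t36d0 c4t37d0 c4t38d0 c4t39d0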
Daniel,
This is cool. I've convinced my DBA to attempt the same stunt. We
are just starting with the testing, so I'll post results as I get them.
I'd appreciate it if you could share your zpool layout.
--
Just me,
Wire ...
On 8/26/06, Daniel Rock <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] sc
Thanks to all who have responded. I spent two weekends working through
the best practices that Jerome recommended -- it's quite a mouthful.
On 8/17/06, Roch <[EMAIL PROTECTED]> wrote:
My general principles are:
If you can, to improve your 'Availability' metrics,
let ZFS handle on
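The advice is cut off there, but in context it reads as the usual recommendation to let ZFS provide at least one level of the redundancy itself. Made concrete under that assumption, with stand-in pool and device names:

# a mirrored pool: because ZFS owns the redundancy, it can both
# detect and repair silent corruption from either side
zpool create tank mirror c1t0d0 c2t0d0

# confirm pool health and any self-healing activity
zpool status -v tank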